Knitting is a from-scratch concurrency framework for JavaScript built on shared memory. Instead of manually wiring message protocols, you just call functions across threads.
Why I built it
A couple of years ago I was building a multithreaded simulation in Python. It was slow, so I rewrote it in JavaScript, and then ran into workers.
Using postMessage felt like more work than the simulation itself, and it still wasn’t fast.
That left me with a question: why is message passing the default when shared memory already exists?
What’s different
The core idea is simple: make cross-thread calls feel like normal function calls.
call.myTask(args) → returns a Promise, no manual protocol.
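The façade itself is the easy part: a Proxy can turn arbitrary property access into dispatch by task name. A sketch of the pattern (not Knitting's actual implementation; `dispatch` here is a stand-in that does not cross threads):

```javascript
// A Proxy façade: `call.anything(args)` becomes dispatch("anything", args).
function makeCallFacade(dispatch) {
  return new Proxy({}, {
    get: (_, taskName) => (...args) => dispatch(String(taskName), args),
  });
}

// Stand-in dispatcher; a real framework would route this to a worker.
const call = makeCallFacade((name, args) =>
  Promise.resolve(`${name}(${args.join(", ")})`),
);

call.myTask(1, 2).then(console.log); // "myTask(1, 2)"
```

The hard part, of course, is what `dispatch` does underneath.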
Under the hood:
- shared-memory request/response mailboxes
- atomic slot coordination (no locks)
- small payloads inline, larger ones in buffers
- Atomics.wait / Atomics.notify for wakeups
Why it might be interesting
This approach works best when coordination overhead dominates the actual work per task.
In Deno benchmarks:
- ~3.5x faster than postMessage for 1 message
- ~9.5x faster at 25 messages
- ~10.7x faster at 50 messages
There are also Node, Bun, and Tokio comparisons in the docs.
Try it
Docs: https://knittingdocs.netlify.app/
Benchmarks: https://knittingdocs.netlify.app/benchmarks/
GitHub: https://github.com/mimiMonads/knitting
Limitations
- No browser support yet
- No pub/sub model
- Fixed thread pool size
Feedback
I built this mainly for workloads like HTTP / sockets.
I’m especially interested in:
- where this model actually breaks
- whether function-call IPC is a dead end or a useful abstraction