Why can’t they simply say -
Mamba-3 focuses on being faster and more efficient when making predictions, rather than just being fast to train like Mamba-2.
It's a nice opening as it is imo?
That is not a reason for snark.
As other commenters have noted, it’s well written.
Because the blog post is a technical one and the intro contains very common jargon, and the proposed alternative was wrong.
Mamba is an architecture for the middle layers of the network (the trunk) which assumes decoding takes place through an autoregressive sequence (popping out tokens in order). This is the SSM they talk about.
Diffusion is an alternative to the autoregressive approach where decoding takes place through iterative refinement on a batch of tokens (instead of processing one token at a time and locking each one in, looking only at what came before). This can require different architectures for the trunk and the output heads, and modifications to the objective to make the whole thing trainable. Could Mamba-like ideas be useful in diffusion networks? Maybe, but it's a different problem setup.
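To make the contrast concrete, here's a toy sketch of the two decoding regimes. The `step`/`refine` functions are placeholder stand-ins for a model (just arithmetic on token IDs), not anything from Mamba or a real diffusion LM; the point is only the control flow.

```python
def autoregressive_decode(step_fn, prompt, n_new):
    """Emit tokens one at a time; each is locked in once produced."""
    seq = list(prompt)
    for _ in range(n_new):
        seq.append(step_fn(seq))          # next token depends on the full prefix
    return seq

def diffusion_decode(refine_fn, prompt, n_new, n_iters):
    """Start from undecided placeholder slots and refine the whole block jointly."""
    block = [None] * n_new                 # all positions start undecided
    for _ in range(n_iters):
        block = refine_fn(prompt, block)   # every slot can be revised each pass
    return list(prompt) + block

# Trivial stand-ins for a model, just to show the shapes of the two loops:
step = lambda seq: seq[-1] + 1
refine = lambda prompt, block: [prompt[-1] + i + 1 for i in range(len(block))]

print(autoregressive_decode(step, [0], 3))           # [0, 1, 2, 3]
print(diffusion_decode(refine, [0], 3, n_iters=2))   # [0, 1, 2, 3]
```

An SSM trunk slots naturally into the first loop (its state carries the prefix forward); the second loop revisits all positions every iteration, which is why it can need a different trunk and objective.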
Yes, batch=1 inference is mostly memory bandwidth bound, not GPU compute bound. But no provider does batch=1 inference. Everyone groups all the requests into a batch, and the GPU computes them together.
With a fused kernel, that means the GPU streams the tensors from VRAM, and does a bunch of compute on different conversations in the batch, at the same time.
If they increase the amount of compute required per token, that just reduces the maximum batch size a GPU can handle. In practice, yes, this does mean each GPU can serve fewer users. Providers normally aren't leaving GPU cores idle during inference.
You're only saving on fetching read-only parameters, and not even on that if you're using MoE models where each inference in the batch might require a different expert (unless you rearrange batches so that sharing experts becomes more likely, but that's difficult since experts change per-token or even per-layer). Everything else (KV cache, activations) gets multiplied by your batch size. You scale both compute and memory pressure by largely the same amount. Yes, GPUs are great at hiding memory fetch latency, but that also applies to batch=1 inference.
Read-only parameters are also usually the majority of the memory. DeepSeek is 700GB of params. Meanwhile the KV cache is small (DeepSeek is about 7GB at max context) and the ssm/conv1d cache is even smaller: IIRC Qwen 3.5's is 146MB regardless of context size. Not sure how Mamba-3 works, but I suspect read-only parameters are still a significant share of memory bandwidth.
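A back-of-envelope version of that argument, using the figures quoted above (700 GB of weights, and treating the ~7 GB max-context KV cache as per-sequence) and assuming a dense model that streams all weights once per decoding step:

```python
# Rough per-step memory traffic for batched decoding.
# Assumptions (from the comment above, simplified): dense model, all
# 700 GB of weights read once per step and shared across the batch,
# ~7 GB of KV cache read per sequence at max context.

PARAMS_GB = 700.0   # read-only weights, shared across the batch
KV_GB     = 7.0     # per-sequence KV cache at max context

def gb_per_step(batch):
    return PARAMS_GB + KV_GB * batch

def param_share(batch):
    """Fraction of per-step traffic that is the shared, read-only weights."""
    return PARAMS_GB / gb_per_step(batch)

for b in (1, 16, 100, 256):
    print(f"batch={b:4d}  traffic={gb_per_step(b):7.1f} GB  "
          f"weights share={param_share(b):.0%}")
```

On these numbers the shared weights dominate traffic until the batch reaches about 100 sequences, which is why batching amortizes the weight reads so well. With MoE only the activated experts are read per token, so the real picture is murkier.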
I guess the question isn't whether compute is 1:1 with memory, but rather if you run out of compute before you run out of vram adding more users.
Experts are usually chosen on a per-layer basis, not just by token, so I'd think this requires having lots of GPUs to make it worthwhile. You could do it with a single physical GPU by switching expert-layer mixes in a round-robin fashion after the batch for any single expert-layer mix is completed (essentially a refined version of expert offloading). But still, not easy.
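The "rearrange so experts are shared" idea sketches out roughly like this: group the tokens routed to each expert at a given layer, then process one expert's group at a time so only that expert's weights need to be resident. The router here is a stand-in hash, not a real gating network.

```python
from collections import defaultdict

def route(token_id, n_experts):
    return token_id % n_experts        # hypothetical router, real gating is learned

def expert_microbatches(token_ids, n_experts):
    """Bucket this step's tokens by routed expert, one micro-batch per expert."""
    groups = defaultdict(list)
    for t in token_ids:
        groups[route(t, n_experts)].append(t)
    # Process one expert's group at a time (e.g. after paging that expert in),
    # round-robin through the experts that actually received tokens.
    return [(expert, groups[expert]) for expert in sorted(groups)]

print(expert_microbatches([7, 2, 9, 4, 11, 2], n_experts=4))
# [(0, [4]), (1, [9]), (2, [2, 2]), (3, [7, 11])]
```

The catch, as noted above, is that routing changes per token and per layer, so the grouping has to be redone constantly and the per-expert micro-batches stay small.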
https://arxiv.org/pdf/2412.19437
> The minimum deployment unit of the decoding stage consists of 40 nodes with 320 GPUs. The attention part employs TP4 with SP, combined with DP80, while the MoE part uses EP320.
EP320 means expert parallelism across 320 GPUs.
https://arxiv.org/html/2412.19437v1 "the batch size per expert is relatively small (usually within 256 tokens)"
I can see it for engineering: coding with slow AI is painful.
That's like gamers thinking most of Nvidia's revenue comes from gaming GPUs, so Nvidia should prioritize gamers.
Inference is ruled by inference providers, not local. Local inference is a rounding error, and will remain as such unless there is economic incentive otherwise.
Not sure they target local though…
Instead, you can get benefits from both by doing both in parallel. This lets you reduce the size of the O(n^2) attention mechanism, so while it's still quadratic, the constant drops quite a bit while retaining a lot of performance: the linear context mechanism handles the tasks it's well suited for while attention plays to its strengths.
The recent Nemotron 3 Nano and Super models from NVIDIA are hybrid architectures this way, with most of their context layers as Mamba while retaining enough attention to continue to be competitive on the more complex tasks that require the quadratic attention.
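The hybrid layout described above can be sketched schematically. Both layers here are drastically simplified toys (single head, no projections, gating, or learned parameters); the point is only where the O(n) vs O(n^2) cost sits in the stack, and the 1-in-4 attention ratio is an illustrative assumption, not Nemotron's actual layout.

```python
import numpy as np

def ssm_layer(x, decay=0.9):
    """Linear-time recurrence: state is a running decayed sum, O(n) in sequence length."""
    out, state = np.zeros_like(x), np.zeros(x.shape[-1])
    for t in range(x.shape[0]):
        state = decay * state + x[t]
        out[t] = state
    return out

def attention_layer(x):
    """Causal softmax attention over all pairs of positions, O(n^2)."""
    scores = x @ x.T / np.sqrt(x.shape[-1])
    mask = np.tril(np.ones_like(scores))
    scores = np.where(mask > 0, scores, -np.inf)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ x

def hybrid_trunk(x, n_layers=8, attn_every=4):
    """Mostly linear-time layers, with every attn_every-th layer quadratic."""
    for i in range(n_layers):
        x = attention_layer(x) if (i + 1) % attn_every == 0 else ssm_layer(x)
    return x

x = np.random.default_rng(0).normal(size=(5, 4))  # (seq_len, d_model)
print(hybrid_trunk(x).shape)  # (5, 4)
```

With most layers linear in sequence length, only the sparse attention layers pay the quadratic cost, which is where the constant-factor savings come from.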
See https://magazine.sebastianraschka.com/i/168650848/18-nemotro... for some discussion on this architecture
Nemotron 3 Super doesn't perform quite as well on benchmarks as the similarly sized Qwen3.5 122B A10B model, but it goes faster and is cheaper to run.
https://artificialanalysis.ai/?models=gpt-oss-120b%2Cmistral...
Now, you're not exactly comparing apples to apples there, since the training process (the mix of data for pre-training, and the fine-tuning stages of instruction tuning, RLVR, etc.) could have as much or more impact on how well it does as the architecture itself. Nemotron 3 Super does score better than GPT-OSS 120B and Mistral Small 4, both also similarly sized open-weights models.
I know the step isn't fixed; I'm also not sure why that's important. Is that the only reason? There also seems to be a parameterization advantage with the continuous formulation.