M1: Towards Scalable Test-Time Compute with Mamba Reasoning Models
34 points | 4 days ago | 2 comments | arxiv.org
solomatov | 4 days ago
Does anyone know if there have been any attempts to test Mamba at really large scale? To me this model looks like the most promising successor to the transformer architecture. Does anyone know why that might not be the case, or what the alternatives are?
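For context on why Mamba gets pitched as a transformer successor: it replaces attention with a selective state-space recurrence, so generation runs in O(L) time with a fixed-size state instead of attention's O(L^2) pairwise interactions and growing KV cache. A toy, unoptimized numpy sketch of the core scan (shapes and names are illustrative assumptions, not the paper's actual implementation):

```python
import numpy as np

def selective_scan(x, dt, A, B, C):
    """Toy sketch of a Mamba-style selective SSM scan.

    x:  (L, D)  input sequence
    dt: (L, D)  input-dependent step sizes (the "selection" mechanism)
    A:  (D, N)  state transition (kept in log space in real Mamba; plain here)
    B:  (L, N)  input-dependent input projection
    C:  (L, N)  input-dependent readout projection
    Returns y: (L, D). Runs in O(L) time with O(1) state per step.
    """
    L, D = x.shape
    N = A.shape[1]
    h = np.zeros((D, N))   # recurrent state: fixed size regardless of L
    y = np.empty((L, D))
    for t in range(L):
        dA = np.exp(dt[t][:, None] * A)        # discretized transition, (D, N)
        dB = dt[t][:, None] * B[t][None, :]    # discretized input map, (D, N)
        h = dA * h + dB * x[t][:, None]        # state update
        y[t] = (h * C[t][None, :]).sum(axis=1) # readout
    return y
```

Because dt, B, and C are computed from the input, the model can gate what enters the state, which is what distinguishes Mamba from earlier linear-time SSMs; the open question in this thread is whether that fixed-size state holds up at frontier scale.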
tangjurine | 4 days ago
Tencent's 'Hunyuan-T1'–The First Mamba-Powered Ultra-Large Model: https://news.ycombinator.com/item?id=43447254
ed | 4 days ago
Interesting direction for research, but not a model you'd want to use today. The paper looks at a 3B model built on Llama-3.2-3B, modified for Mamba, and they compare it to a distilled version of R1 with 1.5B params.