Almost a year ago, we first shared Mastra here (https://news.ycombinator.com/item?id=43103073). It’s kind of fun looking back, since we were only a few months into building at the time. The HN community responded with a lot of enthusiasm and some helpful feedback.
Today we released the stable Mastra 1.0, so we wanted to come back and talk about what’s changed.
If you’re new to Mastra, it's an open-source TypeScript agent framework that also lets you create multi-agent workflows, run evals, inspect runs in a local studio, and emit observability data.
Since our last post, Mastra has grown to over 300k weekly npm downloads and 19.4k GitHub stars. It’s now Apache 2.0 licensed and runs in prod at companies like Replit, PayPal, and Sanity.
Agent development is changing quickly, so we’ve added a lot since February:
- Native model routing: You can access 600+ models from 40+ providers by specifying a model string (e.g., `openai/gpt-5.2-codex`) with TS autocomplete and fallbacks.
- Guardrails: Low-latency input and output processors for prompt injection detection, PII redaction, and content moderation. The tricky thing here was the low-latency part.
- Scorers: An async eval primitive for grading agent outputs. Users were asking how they should do evals, and we wanted to make scorers easy to attach to Mastra agents, runnable in Mastra studio, with results saved in Mastra storage.
- Plus a few other features, like AI tracing (per-call costing for Langfuse, Braintrust, etc.), memory processors, a `.network()` method that turns any agent into a routing agent, and server adapters to integrate Mastra into an existing Express/Hono server.
(That last one took a bit of time: we went down the ESM/CJS bundling rabbit hole, ran into lots of monorepo issues, and ultimately opted for a more explicit approach.)
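To make the model-routing idea concrete, here's a rough TypeScript sketch. This is not Mastra's actual implementation, just an illustration of the pattern: provider-prefixed model strings get editor autocomplete via template literal types, and fallbacks are an ordered list tried until one call succeeds.

```typescript
// Illustrative sketch only -- names and shapes here are hypothetical,
// not Mastra's real API.

type Provider = "openai" | "anthropic" | "google";
// Template literal types give editors autocomplete on the provider prefix.
type ModelString = `${Provider}/${string}`;

function parseModel(id: ModelString): { provider: Provider; model: string } {
  const slash = id.indexOf("/");
  return {
    provider: id.slice(0, slash) as Provider,
    model: id.slice(slash + 1),
  };
}

// Fallbacks: try each model string in order until one call succeeds.
async function withFallbacks<T>(
  models: ModelString[],
  call: (m: ModelString) => Promise<T>,
): Promise<T> {
  let lastError: unknown;
  for (const m of models) {
    try {
      return await call(m);
    } catch (err) {
      lastError = err; // this model failed; try the next one in the list
    }
  }
  throw lastError;
}
```

The nice property of the string form is that a typo in the provider half fails at compile time, while the model half stays open-ended as new models ship.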
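On the guardrails point, the low-latency trick for input processors like PII redaction is to keep the hot path cheap and synchronous. Here's a minimal sketch of that idea; the patterns and function name are illustrative, not Mastra's guardrail API, and real PII detection needs far more than three regexes.

```typescript
// Hypothetical sketch of a low-latency input processor: a single regex pass
// that redacts obvious PII before the prompt ever reaches the model.

const PII_PATTERNS: Array<[RegExp, string]> = [
  [/\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, "[EMAIL]"],   // email addresses
  [/\b\d{3}-\d{2}-\d{4}\b/g, "[SSN]"],           // US SSN format
  [/\b(?:\d[ -]?){13,16}\b/g, "[CARD]"],         // naive card-number shape
];

function redactPII(input: string): string {
  let out = input;
  for (const [pattern, replacement] of PII_PATTERNS) {
    out = out.replace(pattern, replacement);
  }
  return out;
}
```

Because this is pure string work with no network calls, it adds microseconds rather than an extra model round trip, which is the property you want on every request.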
Anyway, we'd love for you to try Mastra out and let us know what you think. You can get started with `npm create mastra@latest`.
We'll be around and happy to answer any questions!
shudders in Vietnam War flashbacks. Congrats on the launch, guys!!!
For those who want an independent third-party endorsement, here's the Brex CTO talking about Mastra in their AI engineering stack: http://latent.space/p/brex
One thing to consider is that it felt clunky working with workflows and branching logic with non-LLM agents. I have a strong preference for using rules-based logic and heuristics first; that way, if I do need to bring in the big-gun LLM models, I already have the context engineering solved. To me, an agent means anything with agency. After a couple of weeks of frustration, I started using my own custom branching workflows.
One reason to use rules: they're free and 10,000x faster, with an LLM agent as a fallback when the validation rules don't pass. Instead of running an LLM agent to solve a problem every single time, I can have the LLM write the rules once. The whole thing got messy.
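The rules-first pattern described above can be sketched in a few lines of plain TypeScript. The `Rule` shape and `callLLM` parameter here are hypothetical stand-ins for whatever rule format and agent call you actually use; the point is just the control flow: deterministic rules run first, and the model is only invoked for the long tail.

```typescript
// Hypothetical sketch: cheap deterministic rules first, LLM as fallback.
type Rule<T> = {
  match: (input: string) => boolean;
  handle: (input: string) => T;
};

async function classify(
  input: string,
  rules: Rule<string>[],
  callLLM: (input: string) => Promise<string>, // stand-in for an agent call
): Promise<string> {
  for (const rule of rules) {
    if (rule.match(input)) return rule.handle(input); // free and fast
  }
  return callLLM(input); // expensive fallback for inputs no rule covers
}
```

The "LLM writes the rules once" idea then becomes a second loop: when the fallback fires, you can ask the model to also emit a new `Rule` so that class of input is handled for free next time.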
Otherwise, Mastra is best in class for working with TypeScript.
Do you have code snippets you can share about how you wanted to write the rules? Want to understand desired grammar / syntax better.
It's built on top of the Vercel AI Elements/SDK, and that seems to me like it was a good decision.
My mental heuristic is:
Vercel AI SDK = library, low level
Mastra = framework
Then Vercel AI Elements gives you an optional pre-built UI.
However, I read the blog post for the upcoming AI SDK 6.0 release last week, and it seems like it's shifting more towards being a framework as well. What are your thoughts on this? Are these two tools going to align further in the future?
I see each of us as having different architectures. The AI SDK is more low-level, and Mastra is more integrated, with storage powering our studio, evals, memory, workflow suspend/resume, etc.
I wonder: are there any large, general-purpose agent harnesses developed using Mastra? From what I can tell, OpenCode chose not to use it.
A lot of people on here repeat that rolling your own is more powerful than using Langchain or other frameworks and I wonder how Mastra relates to this sentiment.
These days we see things going the other way: teams that started by rolling their own shift over to Mastra so they can focus on the agent instead of maintaining an internal framework.
The Latent Space article swyx linked earlier includes a quote from the Brex CTO talking about how they did that.
We're TypeScript-first and TypeScript-only, so a lot of the teams who use us are full-stack TypeScript devs who want an agent framework that feels TS-native, easy to use, and feature-complete.
say more pls?
But people kept asking us for a multi-agent primitive out of the box, so we shipped `agent.network()`, which is basically a dynamic hierarchy decided at runtime: pass an array of workflows and agents to the routing agent and let it decide what to do, how long to execute for, etc.!
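The runtime-routing idea behind that can be sketched generically. The names below (`Routable`, `route`, `decide`) are illustrative, not Mastra's actual `.network()` API: a routing step reads the available targets' descriptions, an LLM-backed decision function picks one by name, and that target handles the input.

```typescript
// Hypothetical sketch of runtime routing across sub-agents/workflows.
interface Routable {
  name: string;
  description: string; // what the router reads to decide
  run: (input: string) => Promise<string>;
}

// `decide` stands in for an LLM call that reads the descriptions
// and returns the name of the target it picks.
async function route(
  input: string,
  targets: Routable[],
  decide: (input: string, targets: Routable[]) => Promise<string>,
): Promise<string> {
  const chosen = await decide(input, targets);
  // Fall back to the first target if the router names something unknown.
  const target = targets.find((t) => t.name === chosen) ?? targets[0];
  return target.run(input);
}
```

The "dynamic hierarchy" part is that nothing is wired ahead of time: the same array of targets can be handled in a different order, or repeatedly, depending on what the router decides per message.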
Another useful question to ask: since you’re likely using one of three frontier models anyway, do you believe the Claude Agent SDK will increasingly become the workflow and runtime for agentic work? Or, if not Claude itself, will it set the pattern for how the work is executed? If so, why use a wrapper?
For any agent you're shipping to production, though, you probably want a harness that's open source so you can more fully control and customize the experience.
But there are tons of other use cases too; e.g., dev teams at Workday and PayPal have built agentic SREs to triage their alerts.