My team and I are building Tabstack to handle the "web layer" for AI agents. Launch Post: https://tabstack.ai/blog/intro-browsing-infrastructure-ai-ag...
Maintaining the infrastructure stack for web browsing is one of the biggest bottlenecks in building reliable agents. You start with a simple fetch, but quickly end up managing proxies, handling client-side hydration, debugging brittle selectors, and writing custom parsing logic for every site.
Tabstack is an API that abstracts that infrastructure. You send a URL and an intent; we handle the rendering and return clean, structured data for the LLM.
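To make that concrete, a request looks roughly like the sketch below. This is illustrative only: the endpoint path, parameter names, and response shape are placeholders for the example rather than the exact API surface (the docs have the real details).

    import requests

    # Illustrative sketch only: the endpoint path, parameter names, and
    # response shape are placeholders, not the documented Tabstack API.
    API_KEY = "YOUR_API_KEY"

    resp = requests.post(
        "https://api.tabstack.ai/v1/extract",  # hypothetical endpoint for this example
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "url": "https://example.com/pricing",
            "intent": "Extract the plan names and their monthly prices",
        },
        timeout=60,
    )
    resp.raise_for_status()
    print(resp.json())  # clean, LLM-ready structure rather than raw HTML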
How it works under the hood:
- Escalation Logic: We don't spin up a full browser instance for every request (which is slow and expensive). We attempt lightweight fetches first, escalating to full browser automation only when the site requires JS execution/hydration (see the first sketch after this list).
- Token Optimization: Raw HTML is noisy and burns context window tokens. We process the DOM to strip non-content elements and return a markdown-friendly structure that is optimized for LLM consumption (see the second sketch after this list).
- Infrastructure Stability: Scaling headless browsers is notoriously hard (zombie processes, memory leaks, crashing instances). We manage the fleet lifecycle and orchestration so you can run thousands of concurrent requests without maintaining the underlying grid.
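Here's the escalation pattern from the first bullet as a minimal sketch. It isn't our production code; the empty-shell heuristic and the Playwright fallback are stand-ins for what we actually run:

    import requests
    from playwright.sync_api import sync_playwright

    UA = "TabstackBot/1.0"  # illustrative user-agent string for the example

    def fetch_lightweight(url: str) -> str | None:
        """Try a plain HTTP fetch first; it's cheap and usually enough."""
        resp = requests.get(url, headers={"User-Agent": UA}, timeout=15)
        resp.raise_for_status()
        html = resp.text
        # Naive heuristic: a tiny document or an empty JS shell means the
        # real content only appears after client-side hydration.
        if len(html) < 2048 or 'id="root"></div>' in html:
            return None
        return html

    def fetch_with_browser(url: str) -> str:
        """Escalate to a headless browser only when the page needs JS."""
        with sync_playwright() as p:
            browser = p.chromium.launch()
            page = browser.new_page(user_agent=UA)
            page.goto(url, wait_until="networkidle")
            html = page.content()
            browser.close()
        return html

    def fetch(url: str) -> str:
        return fetch_lightweight(url) or fetch_with_browser(url)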
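And the DOM stripping from the second bullet, again simplified; a real pipeline preserves headings, links, and tables as markdown rather than flattening everything to plain text:

    from bs4 import BeautifulSoup

    # Tags that are almost never content an LLM needs to see.
    NOISE_TAGS = ["script", "style", "noscript", "nav", "header",
                  "footer", "aside", "iframe", "svg", "form"]

    def to_llm_friendly_text(html: str) -> str:
        """Strip non-content elements so the page costs fewer context tokens."""
        soup = BeautifulSoup(html, "html.parser")
        for tag in soup(NOISE_TAGS):
            tag.decompose()
        text = soup.get_text(separator="\n", strip=True)
        return "\n".join(line for line in text.splitlines() if line.strip())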
On Ethics: Since we are backed by Mozilla, we are strict about how this interacts with the open web.
- We respect robots.txt rules.
- We identify our User Agent.
- We do not use requests/content to train models.
- Data is ephemeral and discarded after the task.
The linked post goes into more detail on the infrastructure and why we think browsing needs to be a distinct layer in the AI stack.
This is obviously a very new space and we're all learning together. There are plenty of known unknowns (and likely even more unknown unknowns) when it comes to agentic browsing, so we’d genuinely appreciate your feedback, questions, and tips.
Happy to answer questions about the stack, our architecture, or the challenges of building browser infrastructure.
We recognize that the balance between content owners and the users or developers accessing that content is delicate. Because of that, our initial stance is to default to respecting websites as much as possible.
That said, to be clear on our implementation: we currently only respond to explicit blocks directed at the Tabstack user agent. You can read more about how this works here: https://docs.tabstack.ai/trust/controlling-access
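To make that concrete, here's how a site can express such a block and how the check looks using Python's standard robots.txt parser. The "TabstackBot" token is illustrative; the exact user agent we honor is in the docs linked above.

    from urllib.robotparser import RobotFileParser

    # Example robots.txt directives a site might serve. "TabstackBot" is an
    # illustrative token; see the docs above for the exact string to target.
    robots_lines = [
        "User-agent: TabstackBot",
        "Disallow: /",
        "",
        "User-agent: *",
        "Allow: /",
    ]

    parser = RobotFileParser()
    parser.parse(robots_lines)

    print(parser.can_fetch("TabstackBot", "https://example.com/page"))   # False: explicitly blocked
    print(parser.can_fetch("SomeOtherBot", "https://example.com/page"))  # True: everyone else allowed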
I think too often people fall completely on one side of this question or the other. It's really complicated and deserves a lot of nuance. To me it mostly comes down to having a right to exert control over how our data is used, and much of the current situation is shaped by Section 230.
Generally speaking, platforms consider data to be owned by the platform. GDPR and CCPA/CPRA try to be the counter to that, but they're also too crude a tool.
Let’s take an example: Reddit. Let’s say a user is asking for help and I post a solution that I’m proud of. In that act, I’m generally expecting to help the original person who asked the question, and since I’m aware that the post is public, I’m expecting it to help whoever comes next with the same question.
Now (correct me if I'm wrong, but) GDPR considers my public post to be my data. I'm allowed to request that Reddit return it to me or remove it from the website. But then with Reddit's recent API policies, that data is also Reddit's product. They're selling access to it for … whatever purposes they outline in the use policy there. That's pretty far outside what a user is thinking when they post on Reddit. And there's the other side of it as well: was my answer used to train a model that benefits from my writing and converts it into money for a model maker? (To name just one example.)
I think ultimately, platforms have too much control, and users have too little specificity in declaring who should be allowed to use their content and for what purposes.
I'm afraid that Tabstack would be powerful enough to bypass some existing countermeasures against scrapers, and, once allowed in its lightweight mode, be used to scrape data it isn't supposed to have access to. I'd bet that someone will at least try.
Then there is the issue of which actions an agent is allowed to take on behalf of a user. Many sites state in their Terms of Service that all actions must be done directly by a human, or that all submitted content be human-generated and not from a bot. I'd suppose that an AI agent could find and interpret the ToS, but that is error-prone and not the proper level to do it at. Some kind of formal declaration of what is allowed is necessary: robots.txt is such a formal declaration, but very coarsely grained.
There have been several disparate proposals for formats and protocols that are "robots.txt but for AI". I've seen that at least one of them allows different rules for AI agents and machine learning. But these are too disparate, not widely known ... and completely ignored by scrapers anyway, so why bother.
If (for instance) my content changes often and I always want people to see an up-to-date version, the second option is clearly worse for me!
My apprehension is not with AI agents per se; it is with the current, and likely future, implementation: AI vendors selling the search and re-publication of other parties' content. In this relationship, neither option is great: either these providers are hammering your site on behalf of their subscribers' individual queries, or they are scraping and caching it, and reselling potentially stale information about you.
There are technical improvements to web standards that can and should be made that don't favor adtech and exploitative commercial interests over the functionality, freedom, and technically sound operation of the internet.
I also wanted to see how/if it handled semantic data (schema.org and Wikidata ontologies), but the hidden pricing threw me off.
We are simply in an early stage and still finalizing our long-term subscription tiers. Currently, we use a simple credit model: $1 per 10,000 credits. However, every account receives 50,000 credits for free every month ($5 value). We will have a dedicated public pricing page up as soon as our monthly plans are finalized.
Regarding semantic data, our JSON extraction endpoint is designed to extract any data on the page. That said, we would love to know your specific use cases for those ontologies to see if we can further improve our support for them.
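For schema.org specifically, a lot of that data already ships in JSON-LD blocks, so even a generic extraction pass can pick it up. A rough sketch of that idea (generic code, not our endpoint):

    import json
    from bs4 import BeautifulSoup

    def extract_json_ld(html: str) -> list:
        """Collect schema.org JSON-LD blocks (Product, Article, etc.) from a page."""
        blocks = []
        soup = BeautifulSoup(html, "html.parser")
        for tag in soup.find_all("script", type="application/ld+json"):
            try:
                blocks.append(json.loads(tag.string or ""))
            except json.JSONDecodeError:
                continue  # malformed embedded JSON is common in the wild
        return blocks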
there's really no excuse for not spinning up a browser for every request. a Firecracker VM boots in ~50ms nowadays
> We respect robots.txt rules.
you might, but most companies in the market for your service don't want this
For example, something we may consider for the future is balancing when to implement direct API access versus browser rendering. If a website offers the same information via an API, that would almost always be faster and lighter than spinning up a headless browser, regardless of how fast the VM boots. While we don't support that hybrid approach yet, it illustrates why we are optimizing for the best tool for the job rather than just defaulting to a full browser every time.
Regarding robots.txt: We agree. Not all potential customers are going to want a service that respects robots.txt or other content-owner-friendly policies. As I alluded to in another comment, we have a difficult task ahead of us to do our best by both the content owners and the developers trying to access that content.
As part of Mozilla, we have certain values that we work by and will remain true to. If that ultimately means some number of potential customers choose a competitor, that is a trade-off we are comfortable with.