Demo video: https://www.youtube.com/watch?v=2FitSggI7tg.
Right now, we have two main methods to interact with Datafruit:
(1) Automated infrastructure audits: agents periodically scan your environment to find cost optimization opportunities, detect infrastructure drift, and validate your infra against compliance requirements.
(2) Chat interface (available as a web UI and through Slack): ask the agent questions for real-time insights, or assign tasks directly, such as investigating spend anomalies, reviewing security posture, or applying changes to IaC resources.
Working at FAANG companies and various high-growth startups, we realized that infra work requires an enormous amount of context, often more than traditional software engineering. The business decisions, the codebase, and the cloud environment itself all matter for nearly every task an agent is assigned. To maximize the agents' success, we do a fair amount of context engineering. Not hallucinating is super important!
One thing that has worked incredibly well for us is a multi-agent system with specialized sub-agents, each with access to the specific tool calls and documentation for its specialty. Agents choose to “hand off” to each other when they decide another agent is better suited to the task. However, all agents share the same context (https://cognition.ai/blog/dont-build-multi-agents). We’re pretty happy with this approach, and believe it could work in other disciplines that require a lot of specialized expertise.
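To make the handoff pattern concrete, here is a minimal sketch (not Datafruit's actual internals; the call_llm helper, agent names, and tool lists are all illustrative). The key property is that a handoff only swaps the active system prompt and tool set, while the shared history is never thrown away:

  # Minimal sketch of single-context multi-agent handoff (illustrative).
  # All agents append to ONE shared history; a handoff just swaps which
  # system prompt and tool set are active, so no context is lost.
  AGENTS = {
      "networking": {
          "system": "You are a VPC/networking specialist. Hand off if the "
                    "task is about IAM or billing.",
          "tools": ["describe_vpcs", "describe_route_tables"],
      },
      "iam": {
          "system": "You are an IAM specialist. Hand off if the task is "
                    "about networking or billing.",
          "tools": ["list_roles", "simulate_principal_policy"],
      },
  }

  def call_llm(system: str, tools: list[str], history: list[dict]) -> dict:
      """Hypothetical model call: sends the system prompt, tool schemas,
      and the FULL shared history, and returns either {"answer": ...}
      or {"handoff": "<agent>"}."""
      raise NotImplementedError  # stub; wire up an LLM API of choice

  def run(task: str, agent: str = "networking", max_hops: int = 5) -> str:
      history = [{"role": "user", "content": task}]  # shared by all agents
      for _ in range(max_hops):
          spec = AGENTS[agent]
          result = call_llm(spec["system"], spec["tools"], history)
          history.append({"role": "assistant", "content": str(result)})
          if "handoff" in result:        # model picked a better specialist
              agent = result["handoff"]  # swap prompt/tools, keep history
          else:
              return result["answer"]
      raise RuntimeError("too many handoffs")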
Infrastructure is probably the most mission-critical part of any software organization, and needs extremely heavy guardrails to keep it safe. Language models are not yet at the point where they can be trusted to make changes (we’ve talked to a couple of startups where the Claude Code + AWS CLI combo has taken their infra down). Right now, Datafruit receives read-only access to your infrastructure and can only make changes through pull requests to your IaC repositories. The agent also operates in a sandboxed virtual environment, so it couldn’t run cloud CLI commands even if it wanted to!
Where LLMs can add significant value is in reducing the constant operational inefficiencies that eat up cloud spend and delay deadlines—the small-but-urgent ops work. Once Datafruit indexes your environment, you can ask it to do things like:
"Grant @User write access to analytics S3 bucket for 24 hours"
-> Creates a temporary IAM role, sends least-privilege credentials, auto-revokes after 24 hours
"Find where this secret is used so I can rotate it without downtime"
-> Discovers everywhere the secret is referenced, including old cron jobs you might not know about, so you can rotate your keys safely (a rough sketch of this kind of scan follows these examples)
"Why did database costs spike yesterday?"
-> Identifies expensive queries, shows optimization options, implements fixes
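To make the secret-rotation example concrete, here is a rough sketch of what such a scan could look like, as mentioned above. This is a guess at the shape of the workflow, not Datafruit's implementation; the secret name, file types, and kubectl usage are assumptions:

  # Illustrative scan for references to a secret across checked-out repos
  # and a Kubernetes cluster (read-only). Paths and the secret name are
  # hypothetical; a real scan would also cover containers and other stores.
  import json
  import pathlib
  import subprocess

  SECRET_NAME = "PAYMENTS_API_KEY"  # hypothetical secret identifier

  def scan_repos(root: str) -> list[str]:
      """Grep IaC files, app code, and cron definitions for references."""
      hits = []
      for path in pathlib.Path(root).rglob("*"):
          if path.suffix in {".tf", ".yaml", ".yml", ".py", ".sh", ".env"}:
              try:
                  if SECRET_NAME in path.read_text(errors="ignore"):
                      hits.append(str(path))
              except OSError:
                  pass  # unreadable file or directory with a code suffix
      return hits

  def scan_kubernetes() -> list[str]:
      """List k8s secrets whose key names mention the secret (read-only)."""
      out = subprocess.run(
          ["kubectl", "get", "secrets", "-A", "-o", "json"],
          capture_output=True, text=True, check=True,
      )
      return [
          f"{s['metadata']['namespace']}/{s['metadata']['name']}"
          for s in json.loads(out.stdout)["items"]
          if any(SECRET_NAME in key for key in s.get("data", {}))
      ]

  if __name__ == "__main__":
      print("repo references:", scan_repos("."))
      print("k8s secrets:", scan_kubernetes())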
We charge a straightforward subscription for a managed version, but we also offer a bring-your-own-cloud model: all of Datafruit can be deployed on Kubernetes using Helm charts for enterprise customers whose data can’t leave their VPC.
For the time being, we’re installing the product ourselves on customers' clouds; it doesn’t exist in a self-serve form yet. We’ll get there eventually, but in the meantime, if you’re interested, we’d love for you guys to email us at founders@datafruit.dev. We would love to hear your thoughts! If you work with cloud infra, we are especially interested in learning what kinds of work you do that you wish could be offloaded onto an agent.
There is still benefit for non-Infra people, but non-Infra people don't understand system design, so the benefits are limited. Imagine a "mechanic AI". Yes, you could ask it all sorts of mechanic questions, and maybe it could even do some work on the car. But if you wanted to, say, replace the entire engine with a different one, that is a systemic change with farther-reaching implications than an AI will explain, much less perform competently. You need a mechanic to stop you and say: uh, no, please don't change the engine; explain to me what you're trying to do and I'll help you find a better solution. Then you need a real mechanic to manage changing the tires on the moving bus so it doesn't crash into the school. But an AI could make all of that go smoother for the mechanic.
Another thing I'd love to see more of is people asking the AI for advice. Most devs seem to avoid asking Infra people for architectural/design advice, so they put together a system using their limited knowledge, and it turns out to be an inferior design to what an Infra person would have suggested. Hopefully they'll ask AI for advice in the future.
Something we’ve been dealing with is getting the agents not to over-complicate their designs, which they have a tendency to do. But with good prompting they can be very helpful assistants!
Might be good to train multiple "personalities": one's a startup codebro that will tell you the easiest way to do anything; another will only give you the best practice and won't let you cheat yourself. Let the user decide who they want advice from.
Going further: input the business's requirements first, let that help decide? Just today I was on a call where somebody wants to manually deploy a single EC2 instance to run a big service. My first question is, if it goes down and it takes 2+ days to bring it back, is the business okay with that? That'll change my advice.
The personalities approach sounds fun to experiment with. I'm wondering if you could use SAEs to scan for a "startup codebro" feature in language models. Alas, that's not something we'll get to look into until we decide that fine-tuning our own models is the best way to make them better. For now we are betting on in-context learning.
Business requirements are also incredibly valuable. Notion, Slack, and Confluence hold a lot of context, but it can be hard to find. This is something I think the subagent architecture is great for, though.
Even if you manage to prompt an app, you'll still have no idea how the system works.
> Right now, Datafruit receives read-only access to your infrastructure
> "Grant @User write access to analytics S3 bucket for 24 hours" > -> Creates temporary IAM role, sends least-privilege credentials, auto-revokes tomorrow
These statements directly conflict with one another.
So it needs "iam:CreateRole", "iam:AttachRolePolicy", and other similar permissions. Those are not read-only, and they make it effectively an admin in the account.
What safeguards are in place to make sure it doesn't delete other roles, or make production-impacting changes?
How is the auto-revoke handled? Will it require human intervention to merge a PR/apply the Terraform configuration, or will it do it automatically?
Also, auto-revoke can currently be handled by creating a role in Terraform that can be assumed and that expires after a set time. But we’re exploring deeper integrations with identity providers like Okta to handle this better.
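For what it's worth, here is a sketch of that expiring-role idea, written with boto3 rather than Terraform to keep the examples in one language (the account ID, user, bucket, and role names are placeholders). The DateLessThan condition means new role assumptions simply stop working at the deadline even if cleanup is delayed; and note this is exactly the kind of flow that needs iam:CreateRole and friends, per the comment above:

  # Sketch of a time-bounded, least-privilege grant (placeholders throughout).
  import json
  from datetime import datetime, timedelta, timezone

  import boto3

  iam = boto3.client("iam")
  expiry = (datetime.now(timezone.utc) + timedelta(hours=24)
            ).strftime("%Y-%m-%dT%H:%M:%SZ")

  # Trust policy: only the requesting user may assume the role, and only
  # until `expiry`; after that, sts:AssumeRole calls are denied.
  trust = {
      "Version": "2012-10-17",
      "Statement": [{
          "Effect": "Allow",
          "Principal": {"AWS": "arn:aws:iam::123456789012:user/some-user"},
          "Action": "sts:AssumeRole",
          "Condition": {"DateLessThan": {"aws:CurrentTime": expiry}},
      }],
  }

  # Least-privilege inline policy: write access to one bucket only.
  policy = {
      "Version": "2012-10-17",
      "Statement": [{
          "Effect": "Allow",
          "Action": ["s3:PutObject", "s3:GetObject", "s3:ListBucket"],
          "Resource": ["arn:aws:s3:::analytics-bucket",
                       "arn:aws:s3:::analytics-bucket/*"],
      }],
  }

  iam.create_role(RoleName="temp-analytics-write",
                  AssumeRolePolicyDocument=json.dumps(trust))
  iam.put_role_policy(RoleName="temp-analytics-write",
                      PolicyName="analytics-write-24h",
                      PolicyDocument=json.dumps(policy))
  # A separate cleanup job would still delete the role after expiry.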
I consulted for an early stage company that was trying to do this during the GPT-3 era. Despite the founders' stellar reputation and impressive startup pedigree, it was exceedingly difficult to get customers to provide meaningful read access to their AWS infrastructure, let alone the ability to make changes.
And yeah, we are noticing that it’s difficult to convince people to give us access to their infrastructure. I hope that a BYOC model will help with that.
> we’ve talked to a couple of startups where the Claude Code + AWS CLI combo has taken their infra down
Do you care to share what language model(s) you use?
It is workflow automation at the end of the day. I would rather pick a SOAR or AI-SOC product, where automation like this is very common - e.g., BlinkOps or Torq.
We have not spent as much time working in the security space, and I do think that purpose-built solutions are better if you only care about security. We are purposefully trying to stay broad, which might mean that our agents lack depth in specific verticals.
Why does that need an AI? I’m pretty sure many tools for those things exist, and they predate LLMs.
I think the power language models introduce is the ability to integrate app code much more tightly with the infrastructure. They can read YAML, shell scripts, or ad-hoc wiki policies and map them to compliance checks, for example.
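For instance, a rule like "no container runs privileged" buried in a wiki could be compiled into a deterministic check over rendered manifests. A hypothetical sketch (the rule and file layout are assumptions, not anyone's actual product):

  # Hypothetical compliance check derived from an ad-hoc wiki policy:
  # "no container may run privileged". Runs over rendered K8s manifests.
  import sys

  import yaml  # PyYAML

  def violations(manifest_path: str) -> list[str]:
      bad = []
      with open(manifest_path) as f:
          for doc in yaml.safe_load_all(f):  # manifests are often multi-doc
              if (not isinstance(doc, dict)
                      or doc.get("kind") not in {"Pod", "Deployment"}):
                  continue
              spec = doc.get("spec", {})
              # Deployments nest the pod spec under spec.template.spec;
              # bare Pods keep it at spec.
              pod = spec.get("template", {}).get("spec", spec)
              for c in pod.get("containers", []):
                  if c.get("securityContext", {}).get("privileged"):
                      bad.append(f"{doc['metadata']['name']}/{c['name']}")
      return bad

  if __name__ == "__main__":
      found = violations(sys.argv[1])
      print("\n".join(found) or "compliant")
      sys.exit(1 if found else 0)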
BTW, your website is heavy; for a basic set of components it shouldn't be taking 100% CPU.
You need to be very clear about the persona who you're building for, what their pain point is, and why they're willing to spend money to solve it. So far it seems like you took an emerging technology (agentic workflows), applied it to a novel area (DevOps), built a UX around it, and tried to immediately start selling. This is the product trap of a solution in search of a problem.
Are you trying to sell to large companies? The problem that large companies have is cultural/organizational, not tooling. For any change, you need to get about a dozen people to review, understand, wait for people to come back from vacation, ping people because it fell off their desk, sign off, get them to prioritize, answer questions again from the engineer the task was assigned to, wait for another round of reviews and approvals, and maybe finally somebody will get the fix applied in production. DevOps is (or at least originally was) focused on finding and alleviating the bottlenecks; the actual process of finding data or applying changes is not the bottleneck in large companies, so it is not a solution to the pain point that folks in large companies have.

If your value proposition is that large-company executives could replace infrastructure employee salaries with a cheaper agentic workflow, re-read my prior point: if large companies have all this process and approval for human beings making changes, why would they ever let an agentic workflow YOLO the changes without approval? And yes, I know, your agent proposes Terraform PRs for making changes to keep a human in the loop - but now you've slain one of the Hydra's heads and three more have popped up in its place: the customer needs the Terraform PR to be reviewed by a human committee, some of whose members are on vacation, some of whom missed the PR request because they had other priorities and it fell off their desk, etc. etc. Doesn't really sound like you solved anything. The fundamental difference between what you built and something like Claude Code is that Claude Code doesn't need a human committee to review every iteration it executes on an engineer's laptop - only the review of the One Benevolent Laptop User, who is incentivized to get good output from Claude Code and provide human review as quickly as (literally) humanly possible.
Are you trying to sell to small companies that don't have DevOps Engineers? What's the competitive space here? The options usually look something like: (a) pay a premium for a PaaS, or (b) pay the salary for your first DevOps Engineer in the hope that they save more on low-level infra bills than they cost. So you're now proposing (c), some kind of DevOps agentic workflow that is cheaper than a DevOps Engineer's salary but provides similar infra cost savings? So your agentic workflow will actually lift and shift to better/cheaper infra primitives and own day-to-day maintenance, responding to infra issues that your customers - who aren't DevOps Engineers, don't know anything about infra, and are trying to outsource these concerns to you - don't know how to handle?

I would argue that if you really did achieve that, then you should be building an agentic-workflow-maintained PaaS that, by virtue of using agents instead of humans, can undercut traditional PaaS on cost while offering a maybe-better UX somehow. If you're asking your customers to review infra changes that they don't understand, then they need to hire a DevOps Engineer for the expertise to review them, and then you have a much less interesting value proposition.
Right now most of our value, as you said, is in augmenting an infra engineer at a growth-stage company to reduce some of the operational burden they deal with. For the companies we’ve been selling to, the customers are SWEs who have been forced to learn infra as needs arise; overall they are fairly competent and technical. And Claude Code or other agentic coding tools are not always sufficient or safe to use: our customers have told us anecdotally that Claude Code gets stuck in a hallucination loop of nothingness on certain tasks, and that Datafruit was able to solve them.
That being said, we have lost sales because people are content with Claude Code, so this is something we are thinking about.
1. Infra engineers always want to apply changes by themselves, but tooling can always recommend changes
2. What are all the kinds of work that infra engineers would love to do, that *do* add value, but that they haven't built yet because they can't prioritize it?
3. How do you build an agent that:
a. Understands architectural context
b. Offers to set up (i.e. Terraform PR) value-adding infra
c. That the human infra engineer can easily maintain
d. That the human infra engineer will appreciate as being value-adding and not time-wasting or unnecessary-expense?
Maybe the key isn't to provide an agent that will power a PaaS; maybe the key is to give early infra engineers the productivity to build their own in-house PaaS. Then your value-add above Claude Code is clear, because Claude Code is a generic enough tool that it doesn't even make recommendations, whereas a DevOps agent works within an axiomatic framework of improving visibility, reducing costs, improving release velocity, improving security, etc. It could even start (after understanding the architecture, i.e. by hooking up MCP servers and writing an INFRA.md) by making recommendations and then just asking the customer if they like the PRs it is proposing. Does that resonate with you?

I think in the near term, however, the problem we have identified is that while developers at growth-stage companies have been vastly accelerated, the infra engineers have not been. So our tool is almost helping them “catch up” to the new rapid pace of development. This is dangerous due to the complexity and the need for infrastructure to be robust, hence why we are really focused on making it safe to use.
At larger enterprisey companies, AI has not yet been an extreme productivity boost for the developers like it has been for growth stage companies. But I do believe that an enterprise adoption wave is coming.
https://github.com/stakpak/agent/blob/v0.2.39/LICENSE (Apache 2)
YC, you want the founders of these companies to have 10 years working at Ford Motor Company. It's all the reasons I want to write my blog article, "FAANG, please STFU." I wish I could be focused on 100k requests per second, but instead I'm dealing with engineers who have no idea why their ORM is generating terrible queries. Please stop telling them about GraphQL.
"Grant @User write access to analytics S3 bucket for 24 hours" Can the user even have access to this? Do they need write access or can't understand why they are getting errors on read? What happens when they forget in 30 days they asked your LLM for access and now their application does not work because they decided to borrow this S3 bucket instead of asking for one of their own. Yes this happened.
"Find where this secret is used so I can rotate it without downtime" Well, unless you are scanning all our Github repos, Kubernetes secret and containers, you are going to miss the fact this secret was manually loaded into Kubernetes/loaded into flat file in Docker container or stored in some random secret manager none of us are even aware of.
""Why did database costs spike yesterday?" -> Identifies expensive queries, shows optimization options, implements fixes
How? Likely it's because of a bad schema or a lack of understanding of ORMs. The fix is going to be some PR somewhere to a dev who probably doesn't understand what they're reviewing.
Most of our headaches come from the fact that devs almost never give a shit about Ops, their bosses don't give a shit about Ops, and Ops is desperately trying to keep this train, which is on fire, from derailing. We don't need AI YOLOing more stuff into prod; we need AI to tell their bosses what the downtime they're causing is costing our company, so that maybe, just maybe, they will actually care.
We are always trying to learn from our customers' feedback. What we've learned so far is that infra setups are all extremely different, and what works for some companies doesn't work for others. There are also vastly different company cultures around ops: some companies value their ops team a lot; others burden them with way too much work. Our goal is to try to make that burden a little lighter :)
Writing Terraform is not the hard part for this Ops person. If I wanted to use AI, Copilot could easily write it, but I'm pretty fast these days. Devs could of course use it to write Terraform, but then we're back to the problem that they have no idea what they're asking for.
Maybe my larger organization is not your target market; maybe it's places without a dedicated Ops person. But at that point, an AI that can manage Kubernetes/a PaaS for them would be more useful than another Terraform AI bot.
Also, as a daily AI user (claude code / codex subs), I'm not sure I want YOLO AIs anywhere near my infra.
I don't mind letting AIs help with infra, but only with the configs and infra-as-code files, and it will never have any form of access to anything outside its little box. It's significantly faster at writing out the port ranges for an FTP (don't ask) ingress than I am by hand.
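Passive FTP is a good example of why that's tedious: you need the control port plus a whole passive data range. A sketch of what those generated rules might look like as AWS security group ingress (assuming AWS, which the comment doesn't specify; the group ID, CIDR, and passive range 30000-30100 are placeholders that must match the server's config):

  # What "write out the FTP ingress" expands to: control channel on 21
  # plus the passive data range (placeholders; the range must match the
  # FTP server's configured passive ports).
  import boto3

  ec2 = boto3.client("ec2")
  ec2.authorize_security_group_ingress(
      GroupId="sg-0123456789abcdef0",  # placeholder security group
      IpPermissions=[
          {"IpProtocol": "tcp", "FromPort": 21, "ToPort": 21,
           "IpRanges": [{"CidrIp": "203.0.113.0/24",
                         "Description": "FTP control"}]},
          {"IpProtocol": "tcp", "FromPort": 30000, "ToPort": 30100,
           "IpRanges": [{"CidrIp": "203.0.113.0/24",
                         "Description": "FTP passive data"}]},
      ],
  )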
That's because infrastructure is complicated. The AWS console isn't that bad (it's not great, and you should just use Terraform whenever possible because ClickOps is dull, error-prone work); there's just a lot to know in order to deploy infrastructure cost-effectively.
This is more like "we don't want to hire infra engineers who know what they're doing, so here's a tool to make suggestions that a knowledgeable engineer would make, vet, and apply. Just Trust Us."
Or CloudFormation: cleaner, reusable, easy to read/write, fewer LOC than TF :)
I know dang is going to shake his finger at me for this, but come on.
Also:
> AWS emulator
isn't doing you any favors. I, too, have tried LocalStack, and I can tell you firsthand it is not an AWS emulator. That doesn't even get into the fact that AWS is not DevOps. So what's up: is it AWS-only, or does it have GCP emulation too?
That's my whole point about the leading observation: without proper expectation management, how could anyone who spots this Launch HN possibly know if they should spend the time to book a call with you?
You're right that the bar is higher for Launch HNs (I wrote about this here: https://news.ycombinator.com/item?id=39633270) - but it's not uncommon for a startup to have a working product and real customers and yet have a home page that just says "book a call".
For some early-stage startups it makes sense to focus on iterating rapidly based on feedback from a few customers, and to defer building what used to be called the "whole product" (including self-serve features, a complete website, etc.) until later. It's simply about prioritizing higher-risk things and deferring lower-risk things.
I believe this is especially true for enterprise products, since deployment, onboarding, etc. are more complex and require personal interaction (at least in the early stages).
In such cases, a Launch HN still makes sense when the startup is real, the product is real, and there are real customers. But since the product can't be tried out publicly, I tell the founders they need a good demo video, and I usually tell them to add to their text an explanation of why the product isn't publicly available yet, as well as an invitation to contact them if people want to know more or might want to be an early adopter. (You'll notice that both of those things are present in the text above!)
However, I would not have learned from the website that they seem to offer GCP too; I only learned that from the video. That is my complaint: how hard would it have been to include "AWS or GCP" on the page? Or, if nextjs doesn't support arbitrary text for some reason, then including those 3 letters in the textarea here would have gone a long way, too.
I know this is a ton of words, and that I'm barking up the wrong tree by telling you about my sadness over the lack of specificity in a Launch HN. But you asked to understand, and I knew from prior experience that either YC, or the mods, or both, are super sensitive to having any Launch HN criticized, so I knew this conversation was coming. I do hope that I presented my observations in a factual and [mostly] neutral, albeit frustrated, tone (err, except for the "come on" ... I do regret including that part).
But based on your response, I'm guessing I'm just not in the target demographic of what a Launch HN should offer to the folks who read them. I can only say that I'll try harder to keep my feedback to myself, despite almost all of them ending with "We would love to hear your thoughts!" So I guess that's closer to someone saying "how are you?" in the morning.
https://www.uspto.gov/trademarks/search/likelihood-confusion
> Trademarks don’t have to be identical to be confusingly similar. Instead, they could just be similar in sound, appearance, or meaning, or could create a similar commercial impression.