Ask HN: How do you give AI agents access without over-permissioning?
5 points | 6 hours ago | 5 comments
To make AI agents more efficient, we need to build feedback loops with real systems: deployments, logs, configs, environments, dashboards.

But this is where things break down.

Most modern apps don’t have fine-grained permissions.

Concrete example: Vercel. If I want an agent to read logs or inspect env vars, I have to give it a token that also allows it to modify or delete things. There’s no clean read-only or capability-scoped access.

And this isn’t just Vercel. I see the same pattern across cloud dashboards, CI/CD systems, and SaaS APIs that were designed around trusted humans, not autonomous agents.

So the real question:

How are people actually restricting AI agents in production today?

Are you building proxy layers that enforce policy? Wrapping APIs with allowlists? Or just accepting the risk?

It feels like we’re trying to connect autonomous systems to infrastructure that was never designed for them.

Curious how others are handling this in real setups, not theory.
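
To make the "proxy layer / allowlist" option concrete, here is a rough, untested sketch of the kind of thing I have in mind: a tiny read-only proxy that keeps the real token server-side and only forwards GET requests whose paths match an allowlist. The endpoint paths and env var name below are illustrative placeholders, not taken from any platform's docs.

    // Sketch of a read-only allowlist proxy (Node 18+, TypeScript).
    // The agent is pointed at http://localhost:8080 and never sees the real token.
    // Allowed paths are placeholders for whatever read-only endpoints you need.
    import { createServer } from "node:http";

    const UPSTREAM = "https://api.vercel.com";
    const TOKEN = process.env.VERCEL_TOKEN ?? ""; // the powerful token stays here
    const ALLOWED = [/^\/v6\/deployments/, /^\/logs\//];

    createServer(async (req, res) => {
      const url = new URL(req.url ?? "/", UPSTREAM);
      const ok = req.method === "GET" && ALLOWED.some((re) => re.test(url.pathname));
      if (!ok) {
        res.writeHead(403).end("blocked by policy");
        return;
      }
      const upstream = await fetch(url, { headers: { Authorization: `Bearer ${TOKEN}` } });
      res.writeHead(upstream.status, {
        "content-type": upstream.headers.get("content-type") ?? "text/plain",
      });
      res.end(await upstream.text());
    }).listen(8080);

The appeal is that policy lives in one place I control, instead of hoping the platform's token scopes match what the agent actually needs.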

ninan980805
4 hours ago
I'm surprised Vercel doesn't have fine-grained control. Supabase, for example, lets developers configure IAM roles and specify which role has read-only or read-write access to which tables, and each role comes with its own token. That way you can configure exactly the set of permissions an agent should have and hand that token to the agent.
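
Roughly what that looks like from the agent's side with supabase-js, assuming you've set up a read-only role and key for it (the env var and table names here are made up):

    import { createClient } from "@supabase/supabase-js";

    // Hypothetical: AGENT_READONLY_KEY is a key tied to a role that only has
    // SELECT grants (plus RLS policies), so this client can read but never write.
    const supabase = createClient(process.env.SUPABASE_URL!, process.env.AGENT_READONLY_KEY!);

    const { data, error } = await supabase.from("deploy_logs").select("*").limit(50);
    if (error) console.error("read failed:", error.message);
    else console.log(data);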
verdverm
6 hours ago
If you use a cloud like AWS, GCP, or Azure, you give the agent a service account (SA) and grant access with very fine-grained permission controls.

It's less about "modern apps" as a category and more about the specific apps you pick and how your org puts its infra together.

I don't have your problem; I can give my agents all sorts of environments with a spectrum of access vs. restrictions.
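
For example, the agent's SA can get exactly one read-only role on a project. On GCP the binding is just this shape (a sketch: project and SA names are placeholders, and in practice you'd apply it with gcloud or Terraform rather than from code):

    // Sketch of a GCP IAM policy binding: the agent's service account gets
    // Logs Viewer and nothing else. Project/SA names are placeholders.
    const agentBinding = {
      role: "roles/logging.viewer",
      members: ["serviceAccount:agent-bot@my-project.iam.gserviceaccount.com"],
    };
    console.log(JSON.stringify(agentBinding, null, 2));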

NBenkovich
6 hours ago
Agreed on cloud IAM. AWS, GCP, and Azure handle fine-grained access well.

The problem is higher-level platforms and SaaS. Once agents need feedback from deployment, CI, logs, or config tools, permissions often collapse into “full token or nothing”. Vercel is just one example.

That’s the gap I’m pointing at.

verdverm
5 hours ago
Maybe the problem is your SaaS choices

I don't have problems with permissions in any of the things you listed. I do mainly k8s-based infra.
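
On k8s it's plain RBAC, e.g. a namespaced read-only Role the agent's ServiceAccount gets bound to (a sketch written as an object instead of YAML; names are placeholders):

    // Sketch: read-only Role for an agent's ServiceAccount in one namespace.
    const agentReadOnlyRole = {
      apiVersion: "rbac.authorization.k8s.io/v1",
      kind: "Role",
      metadata: { name: "agent-readonly", namespace: "my-app" },
      rules: [{
        apiGroups: ["", "apps"],
        resources: ["pods", "pods/log", "configmaps", "deployments"],
        verbs: ["get", "list", "watch"],
      }],
    };
    console.log(JSON.stringify(agentReadOnlyRole, null, 2));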

vitramir
4 hours ago
Terraform Cloud, Argo CD, Vercel and Supabase (a modern stack for micro apps), Sentry (which doesn't have per-project permissions), SendGrid, etc.

What does your stack look like beyond Kubernetes and AWS? It’s hard to imagine everything there supports truly fine-grained permissions.

verdverm
3 hours ago
Actually, almost everything stays within the private cloud (healthcare industry).

GCP (main), AWS/Azure (b/c customers), Jenkins/Argo

TF/Helm are our IaC and run from containers; no HashiCorp services.

Cloud SQL. Why are you sending your db queries to a SaaS?

LGTM (Loki, Grafana, Tempo, Mimir) for observability

The vendors we do have are WIF'd (i.e. code & secops scanning)

WIF (workload identity federation) is the key. Mature vendors are supporting it, and amazingly the hyperscalers support each other's WIFs for cross-cloud, so we can give a GCP SA AWS perms and vice versa.
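
Concretely, the cross-cloud piece on the AWS side is just a role trust policy that accepts a specific GCP service account's Google-issued OIDC identity (a sketch; the numeric SA ID is a placeholder):

    // Sketch of an AWS IAM role trust policy letting one GCP service account
    // assume the role via sts:AssumeRoleWithWebIdentity, so no long-lived keys.
    const trustPolicy = {
      Version: "2012-10-17",
      Statement: [{
        Effect: "Allow",
        Principal: { Federated: "accounts.google.com" },
        Action: "sts:AssumeRoleWithWebIdentity",
        Condition: { StringEquals: { "accounts.google.com:sub": "111111111111111111111" } },
      }],
    };
    console.log(JSON.stringify(trustPolicy, null, 2));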

vitramir
5 hours ago
There's also a related issue: many services use per-project API tokens. When agents need access to multiple projects, you have to pass several tokens at once, which often leads to confusion and erratic behavior, including severe hallucinations.
NBenkovich
5 hours ago
Yeah, totally. Per-project tokens make it worse. Once you hand an agent multiple tokens, there’s no clean way to say “use this one vs that one”.
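
The least-bad pattern I've tried so far is to keep the token map in the tool layer and let the agent name the project, never the token. A sketch (env var names and the API URL are made up):

    // Sketch: the tool layer resolves project -> token, so the agent never
    // sees or chooses raw credentials. Names and the endpoint are placeholders.
    const tokensByProject: Record<string, string | undefined> = {
      frontend: process.env.FRONTEND_TOKEN,
      billing: process.env.BILLING_TOKEN,
    };

    async function getLogs(project: string): Promise<string> {
      const token = tokensByProject[project];
      if (!token) throw new Error(`no token configured for project: ${project}`);
      const res = await fetch(`https://api.example.com/projects/${project}/logs`, {
        headers: { Authorization: `Bearer ${token}` },
      });
      return res.text();
    }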
imidov
6 hours ago
> There’s no clean read-only or capability-scoped access.

Always found that to be a no-brainer backend feature; somehow most platforms miss it.
NBenkovich
5 hours ago
Yeah, agreed. Read-only and capability-scoped access feels like a no-brainer.

Most platforms were built assuming a human behind the UI. Once you introduce AI agents, the missing permission layers start to show.

fsflover
6 hours ago
Qubes OS lets you isolate any workflow with hardware-assisted virtualization.
NBenkovich
6 hours ago
How can it help here? Could you share more details, please?
fsflover
5 hours ago
On Qubes, all software runs in virtual machines isolated by strong virtualization. Anything you do in one dedicated VM has no effect on the others, so unrelated data is not accessible to the AI agents.
NBenkovich
5 hours ago
It's great, but how does it help with an agent's permissions for cloud services that don't offer fine-grained tokens?