It allows you to define DNS-based rules that a local daemon resolves to IPs, which are then pushed to the eBPF filter to allow that traffic. By doing it this way, we can still allow DNS-defined rules while preventing the workload from contacting arbitrary IPs.
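To make that concrete, here's a minimal sketch of the daemon-side push, assuming Go and the cilium/ebpf library. The pinned map path, the key layout (IPv4 as a big-endian u32), and the `allowHost` helper are all illustrative assumptions, not the project's actual code:

```go
package main

import (
	"encoding/binary"
	"net"

	"github.com/cilium/ebpf"
)

// allowHost resolves an allowed hostname and pushes each IPv4 address
// into the allowlist map the eBPF filter consults per-packet.
func allowHost(allowMap *ebpf.Map, host string) error {
	ips, err := net.LookupIP(host)
	if err != nil {
		return err
	}
	for _, ip := range ips {
		v4 := ip.To4()
		if v4 == nil {
			continue // IPv6 would need its own map in this sketch
		}
		// Byte order must match whatever the BPF program expects;
		// big-endian (network order) is an assumption here.
		key := binary.BigEndian.Uint32(v4)
		var allowed uint8 = 1
		if err := allowMap.Put(key, allowed); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	// Assumed pin location; the real daemon manages this itself.
	m, err := ebpf.LoadPinnedMap("/sys/fs/bpf/egress_allow_v4", nil)
	if err != nil {
		panic(err)
	}
	defer m.Close()
	if err := allowHost(m, "registry.npmjs.org"); err != nil {
		panic(err)
	}
}
```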
There's also no network performance penalty, since it's just DNS lookups and eBPF filters referencing memory.
It also means you don't have to bake rules into the base image, where the agent could potentially tamper with them to remove rules (unless you deny it root, maybe).
It automatically manages the lifecycle of eBPF filters on cgroups and interfaces, so it works well for both containers and micro VMs (like Firecracker).
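For the container case, the attach/detach part of that lifecycle looks roughly like this sketch (again Go with cilium/ebpf; the object file name, program name, and cgroup path are made up). For micro VM tap interfaces you'd attach via TC/XDP instead, but the idea is the same:

```go
package main

import (
	"github.com/cilium/ebpf"
	"github.com/cilium/ebpf/link"
)

// attach loads a compiled egress filter and attaches it to a
// container's cgroup egress hook.
func attach(cgroupPath string) (link.Link, error) {
	coll, err := ebpf.LoadCollection("egress_filter.o")
	if err != nil {
		return nil, err
	}
	l, err := link.AttachCgroup(link.CgroupOptions{
		Path:    cgroupPath,
		Attach:  ebpf.AttachCGroupInetEgress,
		Program: coll.Programs["egress_filter"],
	})
	if err != nil {
		coll.Close()
		return nil, err
	}
	return l, nil
}

func main() {
	l, err := attach("/sys/fs/cgroup/my-container")
	if err != nil {
		panic(err)
	}
	// Closing the link detaches the filter; a daemon would do this
	// automatically when the container goes away.
	defer l.Close()
}
```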
You implement a control plane, just like Envoy xDS, through which you can manage the rules of each cgroup/interface. You can even manage DNS through the control plane to dynamically resolve records (which is helpful, as a normal DNS server doesn't know which interface/cgroup a request is coming from).
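The wire shapes below are hypothetical, just to illustrate the xDS-like flow: an attachment (a cgroup or interface) identifies itself on sync, and the server answers with the rules that apply to it. None of these type names come from the project:

```go
// Hypothetical wire types, loosely modeled on xDS.
type Attachment struct {
	ID     string            // e.g. a cgroup path or interface name
	Labels map[string]string // lets the server select rule sets
}

type Rule struct {
	Domain string // DNS-based rule, resolved by the local daemon
	Port   uint16
}

type Sync struct {
	Attachment Attachment // who is asking
}

type SyncAck struct {
	Rules []Rule // full rule set pushed on connect
}
```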
We specifically use this to allow our agents to only contact S3, pip, apt, and npm.
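Reusing the hypothetical `Rule` type above, that allowlist might look something like this (these are the well-known endpoints for each service; the real config format is likely different):

```go
rules := []Rule{
	{Domain: "s3.amazonaws.com", Port: 443},       // S3
	{Domain: "pypi.org", Port: 443},               // pip index
	{Domain: "files.pythonhosted.org", Port: 443}, // pip packages
	{Domain: "deb.debian.org", Port: 80},          // apt
	{Domain: "registry.npmjs.org", Port: 443},     // npm
}
```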
For the former you don't; it's just DNS. The local DNS server respects TTLs and is no more expensive than a normal DNS lookup. It just proxies the query, takes the resolved IPs, and pushes them into the eBPF map.
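A bare-bones version of that proxy, sketched with the miekg/dns library (the upstream resolver address and the map-push hook are assumptions on my part):

```go
package main

import (
	"log"

	"github.com/miekg/dns"
)

func handle(w dns.ResponseWriter, req *dns.Msg) {
	// Forward the query to an upstream resolver (address assumed).
	resp, err := dns.Exchange(req, "1.1.1.1:53")
	if err != nil {
		dns.HandleFailed(w, req)
		return
	}
	for _, rr := range resp.Answer {
		if a, ok := rr.(*dns.A); ok {
			// Push a.A into the eBPF allow map (see earlier sketch),
			// and schedule eviction after rr.Header().Ttl seconds so
			// the entry honors the record's TTL.
			_ = a
		}
	}
	w.WriteMsg(resp)
}

func main() {
	dns.HandleFunc(".", handle)
	log.Fatal((&dns.Server{Addr: ":53", Net: "udp"}).ListenAndServe())
}
```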
For the latter, the default expectation is that you push the rules to the "Attachment", typically in the "SyncAck". If you need to make updates, you push down deltas (add/remove rule).
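Continuing the hypothetical types from above, a delta update could be as simple as this; the `RuleDelta` name and `apply` helper are illustrative only:

```go
// Hypothetical delta pushed after the initial SyncAck, so the control
// plane never has to re-send the whole rule set.
type RuleDelta struct {
	Add    []Rule
	Remove []Rule
}

// apply updates an attachment's rule set in place.
func apply(current map[Rule]struct{}, d RuleDelta) {
	for _, r := range d.Remove {
		delete(current, r)
		// The daemon would also evict any IPs it had resolved for
		// r.Domain from the eBPF allow map.
	}
	for _, r := range d.Add {
		current[r] = struct{}{}
	}
}
```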
You _can_ do dynamic DNS resolution, and there you'll be paying either 1x or ~2x the DNS lookup cost, depending on whether your control plane already knows the IPs.
Like Envoy xDS, but for eBPF filters.
_Which would make the title make much more sense!_

I thought about putting xDS in, but I worried it might be confusing for people who don't know the xDS specifics of Envoy. But now I'm second-guessing it lol.