Ask HN: May an agent accept a license to produce a build?
18 points | 2 hours ago | 7 comments
For example, Android builds generally require accepting the SDK licenses via `sdkmanager --licenses`.

Suppose I get a preconfigured VPS with Claude Code and ask it to make an Android port of an app I have built. It will almost always download the sdkmanager automatically and accept the license.
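
Concretely, once it has fetched the command-line tools, the unattended sequence looks roughly like this (a sketch; package versions are illustrative):

    # blanket-accept every pending SDK license, no human in the loop
    yes | sdkmanager --licenses

    # then pull packages and build
    sdkmanager "platform-tools" "platforms;android-34" "build-tools;34.0.0"
    ./gradlew assembleDebug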

That flow appears many times in its training data (which is its own interesting wrinkle).

Regardless of what is in the license, I was a bit surprised to see it happen, and I'm sure I won't be the last person this happens to, nor will the Android SDK license be the only one accepted this way.

What is the legal status of an agreement accepted in such a manner - and perhaps more importantly - what ought to be the legal status considering that any position you take will be exploited by bad faith ~~actors~~ agents?

friendzis
53 minutes ago
[-]
Interesting question, actually. Those calling for full and immediate assumption of liability by the principal are either missing something or implying an interesting relationship.

The closest analogy we have, I guess, is power of attorney. If a principal signs off on a power of attorney to, e.g., take out a loan or mortgage to buy something on the principal's behalf, that does not extend to taking on any extra liabilities. Any extra liabilities signed off by the agent would either be rendered null or transferred to the agent in any court of law. There is an extent to which agency is assigned.

The core questions here are agency and liability boundaries. Are there any agency boundaries on the agent? If so, what is their extent? There are many future edge cases where these questions will arise. Licenses and patents are just the tip of the iceberg.

reply
jrockway
1 hour ago
[-]
A similar question is what happens if you get up to go to the bathroom, some software on your machine updates and requires you to accept the new ToS, and your cat jumps up on the keyboard and selects "accept". Are you still bound by those terms? Of course. If licenses are valid in any way (the argument is they get you out of the copyright infringement caused by copying the software from disk to memory) then it's your job to go find the license to software you use and make sure you agree to it; the little popup is just a nice way to make you aware of the terms.
reply
JimDabell
49 minutes ago
[-]
> the argument is they get you out of the copyright infringement caused by copying the software from disk to memory

This is not copyright infringement in the USA:

> …it is not an infringement for the owner of a copy of a computer program to make or authorize the making of another copy or adaptation of that computer program provided… that such a new copy or adaptation is created as an essential step in the utilization of the computer program in conjunction with a machine and that it is used in no other manner

https://www.law.cornell.edu/uscode/text/17/117

reply
Hizonner
59 minutes ago
[-]
Actually, no, because you didn't intentionally accept the terms, and you had no reason to expect that your cat would jump on there in exactly that way.

On the other hand, if you take a laser and intentionally induce the cat to push the key, then you are bound.

> If licenses are valid in any way (the argument is they get you out of the copyright infringement caused by copying the software from disk to memory) then it's your job to go find the license to software you use and make sure you agree to it; the little popup is just a nice way to make you aware of the terms.

The way you set up the scenario, the user has no reason to even know that they're using this new version with this new license. An update has happened without their knowledge. So far as they know, they're running the old software under the old license.

You could make an equally good argument that whoever wrote the software installed software on the user's computer without the user's permission. If it's the user's fault that a cat might jump on the keyboard, why isn't it equally the software provider's fault?

... but the reality is that, taking your description at face value, nobody has done anything. The user had no expectation or intention of installing the software or accepting the new license, and the software provider had no expectation or intention of installing it without user permission, and they were both actually fairly reasonable in their expectations. Unfortunately shit happens.

reply
simpaticoder
49 minutes ago
[-]
The real question is what a judge would accept. I can't imagine any judge accepting "my cat did it".
reply
Hizonner
40 minutes ago
[-]
... only because you'd have no evidence of it. From a legal point of view, the question is what would come down if the judge were (somehow) convinced that it actually happened that way; or rather, what a "perfect" judge would decide if so convinced.

Probably a real judge would want to say something like "Why are all of you bozos in my courtroom wasting public money with some two-bit shrinkwrap bullshit? I was good at surfing. I could have gone pro. I hate my life..."

reply
andai
1 hour ago
[-]
Can a non-human entity accept an agreement? I know there are things like mountains and rivers which have been granted legal personhood. But obviously they have humans who act on their behalf.

The general question of the personhood of artificial intelligence aside, perhaps the personhood could be granted as an extension of yourself, like power of attorney? (Or we could change the law so it works that way.)

It all sounds a bit weird but I think we're going to have to make up our minds on the subject very soon. (Or perhaps the folks over at Davos have already done us the favour of making up theirs ;)

reply
didgeoridoo
1 hour ago
[-]
The whole point of “agency” is that there is a principal (you) behind the agent that owns all responsibility. The agent ACTS for you, it does not absorb any liability for those acts — that flows straight back to the principal.
reply
esperent
1 hour ago
[-]
Just because the AI companies have decided to use the word "agent" doesn't mean it's legally an agent. It's just a word they chose. Maybe it'll also be found legally to be an agent, but it's likely that'll vary depending on the jurisdiction and will take at least a few years and lots of lawyer bills to iron out.
reply
butvacuum
41 minutes ago
[-]
The answer is a hard "No" for anything touching ITAR, per several major companies' lawyers (internal legal counsel, not an official public stance; i.e. they do as they say).
reply
chrisjj
1 hour ago
[-]
You asked. You're liable.
reply
tomasphan
1 hour ago
[-]
You are legally responsible for the actions of your agents. It's in the name: agent = acting on someone's behalf.

Your English is very interesting by the way. You have some obvious grammatical errors in your text yet beautiful use of formal register.

reply
blibble
1 hour ago
[-]
in terms of "AI": agent is a marketing term, it has no legal meaning

it's a piece of non-deterministic software running on someone's computer

who is responsible for its actions? hardly clear cut

reply
Hizonner
1 hour ago
[-]
The person who chose to run it (and tell it what to do) is responsible for its actions. If you don't want to be responsible for something nondeterministic software does, then don't let nondeterministic software do that thing.
reply
friendzis
47 minutes ago
[-]
Hypothetical scenario:

You buy a piece of software, marketed to photographers for photo editing. Nowhere in the manuals or license agreements does it specify anything else. Yet the software also secretly joins a botnet and participates in coordinated attacks.

Question: are you on the hook for cyber-crimes?

reply
NegativeK
24 minutes ago
[-]
Would a general person in your situation know that it's doing criminal things? If not, then you're not on the hook - the person who wrote the secret code is.

You can't sit back and go "lalalala" to some tool (AI, photo software, whatever) doing illegal things when you know about it. But you also aren't on the hook for someone else's secret actions that are unreasonable for you to know about.

IANAL.

reply
Hizonner
39 minutes ago
[-]
You didn't have a reasonable expectation that it would, or even might, do that.

I guess you could say that you didn't have a reasonable expectation that a bot could accept a license, but you're on a lot shakier ground there...

reply
beepbooptheory
1 hour ago
[-]
Wouldn't you want to be in control of your dependencies in this case anyway? Like, why would you ever want it to auto-download sdkmanager? Doesn't this seem like a bad idea?
reply
storystarling
1 hour ago
[-]
I think the architectural mistake is letting the agent execute the environment setup instead of just defining it. If you constrain the agent to generating a Dockerfile, you get a deterministic artifact a human can review before any license is actually accepted. It also saves a significant amount of tokens since you don't need to feed the verbose shell output back into the context loop.
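
Roughly, the artifact it emits could look something like this (a sketch; the base image, tools URL, and package list are placeholders, not a verified recipe):

    # Hypothetical Dockerfile the agent generates instead of running setup itself.
    # Base image, download URL version, and packages below are placeholders.
    FROM eclipse-temurin:17-jdk
    ENV ANDROID_HOME=/opt/android-sdk
    RUN apt-get update && apt-get install -y --no-install-recommends curl unzip \
     && curl -fsSL -o /tmp/tools.zip \
        "https://dl.google.com/android/repository/commandlinetools-linux-<version>_latest.zip" \
     && unzip -q /tmp/tools.zip -d "$ANDROID_HOME/cmdline-tools" \
     # the license acceptance is now one explicit, diffable line
     && yes | "$ANDROID_HOME/cmdline-tools/cmdline-tools/bin/sdkmanager" \
          --sdk_root="$ANDROID_HOME" --licenses \
     && "$ANDROID_HOME/cmdline-tools/cmdline-tools/bin/sdkmanager" \
          --sdk_root="$ANDROID_HOME" "platform-tools" "platforms;android-34"

The point being that someone reviews and commits that file, so the acceptance is an auditable decision rather than something that scrolls past in an agent session.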
reply
embedding-shape
1 hour ago
[-]
For something you'll throw away next week, where you just need something simple and run everything isolated? Why not?
reply