What should matter is intent: the human who gives the orders.
I'd like to hear more nuance with regards to this line of reasoning. Can you conceive of a model that contains highly non-trivial representations of IP owned by others than yourself? Can you conceive that you might "order" the model to "produce" that IP? What happens then?
Try this both for "open source code" as the IP, and "the novel I wrote", and "latest Hollywood movie". The model does not have to be a real model currently available. It's just a thought experiment.
Try also to elaborate on the sliding scale between "an AI model" and "a compression system".
> The US Copyright Office confirmed this in January 2025, and the Supreme Court declined to disturb it in March 2026 when it turned away the Thaler appeal. Works predominantly generated by AI without meaningful human authorship are not eligible for copyright protection, and that rule is now settled at the highest judicial level available.
Misstates the law. Denial of certiorari can happen for many reasons unrelated to the merits and does not settle the issue nationwide.

> When the Supreme Court declined to hear the Thaler appeal in March 2026, it did not endorse the lower court's reasoning or settle the question nationally. Cert denial means the Court chose not to hear the case, nothing more. What it does mean is that the DC Circuit's ruling stands, the Copyright Office's position is intact, and no court has yet gone the other way.
Your quoted text is no longer in TFA.
These sorts of simplistic loopholes rarely work. Imagine if you could get copyright for the linux kernel by just rearranging it and renaming a few variables.
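To make that concrete, here's a toy sketch (hypothetical code, using only Python's stdlib `ast` module) of why renaming alone doesn't work: clone detectors canonicalize identifiers before comparing, so a copy that differs only in variable names collapses back onto the original:

```python
import ast

def normalize(source: str) -> str:
    """Canonicalize identifier names so two snippets that differ only by
    renaming produce identical dumps (toy sketch of how clone/plagiarism
    detectors defeat the 'rename a few variables' trick)."""
    tree = ast.parse(source)
    mapping = {}  # original name -> canonical name, in order of first use

    class Renamer(ast.NodeTransformer):
        def visit_Name(self, node):
            if node.id not in mapping:
                mapping[node.id] = f"v{len(mapping)}"
            node.id = mapping[node.id]
            return node

    Renamer().visit(tree)
    return ast.dump(tree)  # structural fingerprint, names erased

original = "total = price * count\nprint(total)"
renamed  = "subtotal = cost * qty\nprint(subtotal)"
assert normalize(original) == normalize(renamed)  # rename detected as copy
```

Real detectors go much further (normalizing statement order, constants, control flow), but even this trivial normalization is enough to defeat the rename loophole.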
While it's not code related, the copyright office's opinion is a good read, and I don't see any reason to believe its opinion differs for works of text vs. works of physical art: https://www.copyright.gov/ai/Copyright-and-Artificial-Intell...
How is this defined? Is my code review "meaningful"? Are my amendments and edits to the generated code "human authorship"?
> Specifying an objective to the model is not enough. Directing how the work is constructed is what counts.
Unless you are running a local model, your prompts are almost certainly logged by your inference provider, and would only be a subpoena away?
So it’s not correct to say “because SCOTUS denied cert, Thaler is now binding national copyright law.”
Practically speaking, it is binding on the US Copyright office (one of the parties in the case) in CADC. And that’s important. But copyright litigation happens all across the country, while this ruling only directly constrains the relatively small number of cases within CADC.
There are some kinds of cases where the Court has "original jurisdiction," meaning they must hear them, but those are very rare.
Now different circuits can take a different view of the same issue. This is a common reason why the Supreme Court will grant cert: to resolve a circuit split. Appeals court judges know this and have at times (allegedly) intentionally split to force an issue to the Supreme Court.
Even without the issue being settled, appeals courts will generally look at how other circuits have ruled and be guided by their reasoning. The fact that the Supreme Court declined to grant cert actually carries weight.
> The Supreme Court declining to take up an issue is taking a position.
No, it is not.

> "The denial of a writ of certiorari imports no expression of opinion upon the merits of the case, as the bar has been told many times."

United States v. Carver, 260 U.S. 482, 490 (1923).

Moreover, SCOTUS does not decide issues; they decide cases.
> "We are acutely aware, however, that we sit to decide concrete cases, and not abstract propositions of law."

Upjohn Co. v. United States, 449 U.S. 383, 386 (1981).

When I'm feeding AI my code as input and it ends up producing new code which adheres to my architecture, my coding style, and my detailed technical requirements, the copyright over the output should be mine, since the code looks exactly like what I would have produced by hand; there is no creative input from the AI. It's just a code completion tool to save time.
I understand if someone leaves an LLM running as an agent for multiple days and it produces a whole bunch of code, then it's a very different process.
I'm concerned about the copyright 'washing' this enables though, especially in OSS, and I think the right thing for OSS devs to do is to try to publish resulting code with the strongest copyleft licensing that they are comfortable with - https://jackson.dev/post/moral-ai-licensing/
Dowling v. United States, 473 U.S. 207 (1985): The Supreme Court ruled that the unauthorized sale of phonorecords of copyrighted musical compositions does not constitute "stolen, converted or taken by fraud" goods under the National Stolen Property Act
It's perfectly reasonable to say it's okay for humans to do something but not okay for a computer program to do the same thing. We don't have to equate AI to humans, that's a choice and usually a bad one.
It would not be reasonable to allow machines to do that at unlimited scale without restrictions.
(Hopefully the fossil fuels industry won't draw inspiration from the legal arguments made by AI companies...)
Is there any line past which it becomes unreasonable?
> It would not be reasonable to allow machines to do that at unlimited scale without restrictions.
If the machines were a replacement for a damaged respiratory system in a human, would it be reasonable?
What about if the machine were being used by a human to do something else that was important?
Where is the line where it becomes reasonable?
Now, if you'll excuse me, I need to catch a metal shuttle that chucks itself through the air on wings.
The relevant extension of your analogy is should birds be required to obey FAA rules? Or should plane factories be protected as nesting sites?
The mental calisthenics required to justify this stuff must be exhausting.
It's only exhausting if you think copyright ever reasonably settled the matter of ownership of knowledge, and want to morally justify an incoherent set of outcomes that you personally favor. In practice it's primarily been a tool for the powerful party in any dispute to hammer others for disrupting their business model. I think that's pretty much the only way attempting to apply ownership semantics to knowledge or information can end up.
Knowledge consists of, roughly speaking, thoughts.
(a "justified true belief" - per https://plato.stanford.edu/entries/knowledge-analysis/ - is a kind of thought)
The "thinking" part of a "thinking being" - that also consists of thoughts.
If your knowledge is someone's property, you are someone's property.
A society where all knowledge is proprietary, is a society of ubiquitous slavery.
Maybe multi-layered, maybe fractional, maybe with a smiley-face drawn on top.
Doesn't matter.
I mean, I don't think I could find a better description for following the derivatives of the error in reproducing a set of works than creating a "derivative work".
I agree. However, the reverse is also likely true, i.e., it cannot currently be denied that learning in humans is different from learning in artificial neural networks from the point of view of production of works that mix ideas/memes from several works processed/read. Surely, as the article says, copyright law talks exclusively about humans, not machines, not animals.
Edit: Or perhaps, put more pseudo-legally: the created works infringe on the copyrights of the original human creators.
The above does not follow from, imply or conclude anything about learning in artificial neural networks and humans being similar or dissimilar.
Copy/pasting at scale, yes
Code gets turned into tokens and then it learns the next most likely token.
The issue that I see most people talk about is the scale at which it is learnt.
A human will learn from other people's code, but not from every person's code.
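For those unfamiliar, the "learn the next most likely token" objective can be sketched with a toy bigram counter (purely illustrative; real LLMs learn distributed representations over huge corpora, not a lookup table):

```python
from collections import Counter, defaultdict

def train_bigram(corpus: str):
    """Count next-token frequencies: the simplest possible version of
    'learn the next most likely token' (toy sketch, not a real LLM)."""
    tokens = corpus.split()
    counts = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        counts[cur][nxt] += 1
    return counts

def predict(counts, token: str) -> str:
    """Greedily return the most frequent continuation seen in training."""
    return counts[token].most_common(1)[0][0]

model = train_bigram("for i in range ( 10 ) : print ( i )")
assert predict(model, "range") == "("  # learned from the single example
```

The scale question is exactly this, writ large: the mechanism is the same whether the corpus is one file or every public repository on GitHub.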
Copyright law is very clear that if a machine does it, the original copyright on the input is kept. This is why your distributed binaries are still copyrighted, because the machine transformed, very significantly, the source code into binary which maintains the copyright throughout.
It would be inconsistent for the courts to suddenly decide that "actually, this specific type of machine transformation is innovative."
I know this is generally really bad for the AI industry, so they just ignore it until a court tells them they can't anymore. And they might get away with it as I don't have faith that the courts will be consistent.
And the specifics of autoregressive pretraining is that it is lossy compression. Good luck finding which copyrighted materials have made it into the final weights.
Yup, it absolutely does. In fact, that's why you are still violating copyright law by using bittorrent even though each of the users is only giving out a small slice or shred of the original content.
The US grants a defense called "fair use" that can apply in a case like shredding, but that doesn't mean or imply that a copyright is void simply because of a fair use claim.
> And the specifics of autoregressive pretraining is that it is lossy compression.
That doesn't matter. Why would it? If I take a FLAC recording and change it to an MP3, the fact that it was a lossy transform doesn't suddenly give me the legal right to distribute the MP3.
> Good luck finding which copyrighted materials have made it into the final weights.
That's what the NYT v. OpenAI lawsuit is all about. And for earlier models they could, in fact, pull out full NYT articles which proved they made it into the final weights.
Further, the NYT is currently in discovery which means OpenAI must open up to the NYT what goes into their weights. A move that, if OpenAI loses, other litigants can also use because there's a real good shot that OpenAI also included their works in the dataset.
Well, it's not the first time the law has contradicted the laws of nature (for the entertainment of future generations). BitTorrent is not a relevant example, because the system is designed to restore the work in its fullness.
> in fact, pull out full NYT articles
That's when they used their knowledge of the exact text they wanted to "retrieve" to get the text? It wouldn't be so efficient with a random number generator, but it's doable.
You can restore shredded documents with enough time and effort. And if you did that and started making photocopies, even if they are incomplete, you would run afoul of copyright law.
BitTorrent is a relevant example because it shows that shredding doesn't destroy copyright.
Remember, copyright is about the right to copy something. Simply shredding or destroying a thing isn't applicable to copyright. Nor is giving that thing away. What's applicable is when you start to actually copy the thing.
EDIT: I don't say that neural networks can't rote learn extensive passages (it's an effect of data duplication). I'm saying that they are not designed to do that and it's possible to prevent that (as demonstrated by the latest models).
The way I arrive at that is imagine you add just 1 pixel of static to a video, that'd still be a copyright violation. Now imagine you slowly keep adding those random pixels. Eventually you get to the point where the whole video is just static, but at some point it wasn't.
Now, would any media company or court sue over that? Probably not. But I believe that still falls under copy right (but maybe fair use?).
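The sliding scale in that thought experiment is easy to make concrete: corrupt an ever-larger number of "pixels" and similarity to the original degrades continuously, with no obvious line where the copy stops being a copy (toy sketch, hypothetical names):

```python
import random

def corrupt(frames, k, rng):
    """Replace k randomly chosen 'pixels' with static (toy sketch of the
    sliding scale from exact copy to pure noise)."""
    out = list(frames)
    for i in rng.sample(range(len(out)), k):
        out[i] = rng.randrange(256)
    return out

def similarity(a, b):
    """Fraction of positions that still match the original."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

rng = random.Random(0)
video = [i % 256 for i in range(10_000)]
assert similarity(video, corrupt(video, 100, rng)) > 0.98     # near-copy
assert similarity(video, corrupt(video, 10_000, rng)) < 0.02  # static
```

Every intermediate `k` is a point on the scale; the law has to draw a line somewhere on a curve that has no natural break.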
The issue with neural networks is they aren't people. Even when you point your LLM at a website and say "summarize this", the output of that summarization would be owned by the website itself, by nature of it being a machine-transformed work.
Remember, it's not just rote recitation which violates the law; any transformation counts as well. The fact that AI companies are preventing it doesn't really solve the problem that they are in fact transforming multiple copyrighted works into their responses.
What would violate copyright is if you took that rendered page, turned it into a jpeg, and then hosted that jpeg from your own servers. That's the copying that would run afoul of copyright law.
I have seen LLMs do all sorts of crap which was clearly reproduction of training material.
This is also why people are most impressed with how much better it is at reproducing boilerplate rather than, say, imaginative new ideas.
If the LLM generates output that a court decides is sufficiently derivative, and especially (but not necessarily) if the LLM was trained on the source material being infringed, then whoever redistributes the derivative output is going to be liable for copyright infringement.
Creation of the LLM itself is transformative, but LLM output which infringes is not.
The case Community for Creative Non-Violence v. Reid (https://en.wikipedia.org/wiki/Community_for_Creative_Non-Vio...) solidified the Supreme Court's position that commissioning a work and directing an author does not grant authorship to the commissioner of the work; it grants authorship to the person actually doing the work.
The author can grant authorship and copyright to the commissioner with a contract, but the monkey picture (and others) have solidified that only humans can be granted copyright. Since LLMs aren't human they can't hold copyright, and if the LLM doesn't have legal copyright then they don't have legal rights to assign copyright to you.
Code is protected by copyright as a literary work. The method is not protected by copyright, that would be the domain of patents. What's protected are the words.
If you say "Claude, build me a website about X" then you do not have any creative control over the literary work Claude is producing. You just told a machine to write it for you. Nor, like a compiler, is it derivative of any other work that you wrote.
If, on the other hand, you are working jointly with Claude to make specific changes to the code on a line-by-line basis, then you will have no problem claiming copyright over the code. Claude in this case is acting as a tool, but there's still a human making decisions about the code.
In the case where you wrote a bunch of markdown and then told Claude to generate the corresponding code but didn't have any involvement in writing the code itself, you could perhaps claim that the code is a derivative work of the markdown, a court would have to handle that case-by-case basis and evaluate how much control you exerted over the work.
No, a copyright application can be filed with a corporation listed as the author. Watch for the copyright notice at the end of the next major movie you see.
In any case, the corporation did not create the product, people created it and their contractual relationship with the corporation defined how the ownership of that work was managed. So, I don't find it too unusual that this element of personhood is available to corporations.
Under at least the EU AI Act, any work done by AI is not granted copyright. But that does not mean copyright does not apply; it means the amount of work credited to the AI is set at 0% (a simplification). A human working off another's work, unless it's a perfect copy, will have "credit" for changes that are judged creative/transformative, meaning a human plagiarizing something can still claim some degree of authorship. An AI won't.
In a sense, the copyright status of the final work is a sort of "sum with dilution" where each work involved adds to claims, but the AI's output is set at 0; the prompt or further rework by a human is not.
As for employer, details vary but generally "work for hire" rules and contracts do reassignment of material rights (in EU and some other places you can not reassign moral rights which are a different thing).
I think what this means is that the employee may not be the copyright owner for multiple reasons, which are possibly applicable simultaneously. It does not imply that the employer owns copyright over the work that is in public domain, which would be a contradiction.
I honestly don't understand why the attitude that underlies this is so prevalent.
When I write code, what I write and how I write it is informed by having read countless source code files over my education and my career. Just as I ingest all that experience to fine-tune how my later code is written, so does the LLM from the code it's seen.
The immediate retort to that is that the LLM is looking at code that wasn't its to read. But I don't think that's a valid objection. Pretty much by definition, everything I've learned from has a copyright on it, and other than my own code on my own time, that copyright is owned by someone else. Much of the code that's built up my understanding has been protected by NDA, or even defense-department classifications: it wasn't mine in any way. But it still informs how I do all my future coding.
By analogy: I'm also an artist, especially since my retirement. My approach to photography was influenced by Ansel Adams, and countless other artists whose works I've seen displayed in museums, or in publications and online. My current approach to painting was inspired by Bob Ross and others, and the teachers who have helped me develop. I've taken pieces of what I've seen in all their work, and all of that comes out in my photos and paintings, to varying degrees.
I've taken ideas from others in code and in art, and produced something (hopefully!) different by combining those bits with my own perspective. I don't think anyone has a claim on my product because of this relationship.
Likewise, I know that many of my successors have learned from my code (heck, I led teams, wrote one book about software development!). And I hope that someday my artwork has developed to the point where there's something in it that's worth someone else's attention to assimilate. I've never for a minute - even decades before the advent of LLMs - hoped or even imagined that my work would remain locked up with me, and that the ideas would follow me to the grave.
As they say, we are all standing on the shoulders of giants. None of us would be able to achieve the tiniest fraction of what we have, without assimilating what has come before us. Through many layers of inheritance it's constantly being incorporated in subsequent works.
In a few decades at best, I'll be dead. It probably won't be very long after that when people even forget my name. But the idea that something I've done - my work in developing software systems, or in my photography and painting - will continue to have ripples through time, inspires me and gives me hope that I'll have some tiny shred of immortality beyond my personal demise.
I live in the UK, and most US law is based upon English common law, it's not some immutable code given to us from above. It's based upon assumptions and capabilities of the entities participating in the system at the time the law was codified. It can and should change to make more sense if those assumptions and capabilities shift massively.
If they have only the rights that their human creators have, then access to them cannot be sold, in the exact same way that I cannot sell you a database that I have collected filled with copyrighted material. The "humans do training too" argument only holds if you imbue LLMs with similar rights to humans.
I am allowed to sell myself (in a very limited capacity) to others for them to exploit my training, even if that training was on protected material, which is a privilege humans should have, but machines should not.
However, because it is an issue with the (at least historical) goals of copyright law, the common pattern that is evolving is that AI is not granted copyright of any work it generates, making it a bit of a poison pill for some of the more egregious ideas of corporate abuse. Not sure if the weights will be considered copyrightable either.
The nature of the source material matters though. Training a model on open source software seems perfectly fair - it has explicitly been released to the public, and learning from the code has never been a contested use.
IMO the questions around coding models should be seen as less about LLMs and more as a subset of the conversation about large companies driving immense profits from the work of volunteers on open-source projects, i.e. it's more about open source than AI.
I can't imagine it's really justifiable to say that training off data is the same as "stealing", when the same claim, that learned information a person could retain and reproduce constitutes copyright infringement, is the subject of many dystopian narratives, like the one where, once your brain is uploaded to the cloud, you have to pay royalties on every media product you remember.
When it picks out a rare bit of code, it will simply be copying that code, illegally, and presenting it without attribution or any licenses, which is in fact breaking the law, but AI companies are too important for the law to apply to them.
There's been instances where models have spat out comments in code that mention original authors, etc., effectively outing itself as a copyright thief.
There's nothing anyone can do about it, but the suspicion is that the big companies have taken everyone's code on GitHub, without consent, and trained on it.
And now they are spitting out big chunks of copyrighted code and presenting it as somehow transformed, even though all they've actually done is change a few variable names.
It is copyright theft, but because programmers are little people, not Disney, we don't have any recourse.
It's pretty likely that I've done the same thing. I mean, I've written enough CRUD functions in my life, for example, that in all likelihood I'm regurgitating stuff that's a copy, for all practical purposes, of stuff I've done before as work-for-hire for my employer. I'm not stealing intentionally or consciously, but it seems quite likely that it's happening. And that's probably true for many of you, at least that have been in the industry for a while.
I asked agent X what is the source of training data it generated code from, it couldn’t say. Then I asked why the code implementation is exactly the same as the output of agent Y. It said they were trained on the same ‘high-quality library’, and still couldn’t say which one.
So I guess that’s fine because everyone is doing it.
https://www.npr.org/2025/09/05/g-s1-87367/anthropic-authors-...
When I write fizzbuzz do I owe royalties to the inventor of fizzbuzz? Is my brain copyright thieving because I can write out the song lyrics from memory?
It turns out that's false. We know that genes are patentable; remember back during the Human Genome Project, when there was such a rush to patent them? So genes are IP. (This seems bizarre to me, since they're patenting something that was found just sitting there, but this is what the system says right now.)
Well, two other humans (aka mom and dad) did create me, based on those patentable genes (and most likely including some genes that were, in fact, patented).
I'm not sure what to conclude from all of that, but I do think that it invalidates your argument.
Few people ever actually read open source code, but I'd like to think on the rare occasions they do, they share a connection with the author. I know when I read somebody else's code, for me to understand it I have to be thinking about the problem the same way they were when they wrote it. I feel empathy with them and can sometimes picture the struggle, backtracking, and eureka moments they went through to come up with their solution.
Somehow I don't get the same warm fuzzy feelings about a machine powered by investor money ingesting my work automatically, in milliseconds, and coldly compressing it down to a few nudges on a few weights out of trillions of parameters. All so the machine can produce outputs on-demand for lazy users who will never know of me or appreciate my little contribution, and ultimately for the financial benefit of some billionaires who see me as an obsolete waste of space.
I guess I'm just irrational that way.
And so does well-crafted bespoke software.
The engineers who built the foundation for the industrial expansion of our forefathers went through the exact same thing we're going through now. They looked at what existed and used it to inform their efforts. This is what LLMs do.
I'm not attempting to moralize here, just comment on the parallels. Do I agree that a craftsman's work is consumed by the juggernauts and no second thought is given? No. I think it's a shame. But I also think the output will never match the artisans that practice now. By the very nature of the machines we employ, we cannot match the skill or thought that goes into bespoke code.
If I spend 2 hours designing the domain model, 1 hour slopping out a rough implementation, and 5 hours polishing it with a combo of handwritten and vibed refactorings, I will get a better result than if I spent 8 hours writing everything by hand.
So my point is not that vibe software is lower quality, as my experience has shown the opposite. It is simply that the spirit of sharing my work was done with the idea that I was sharing it with others who toiled in the same craft, not sharing for consumption by machine. Not that I ever contributed anything very important to the open source world, that anybody depended on. Just personal projects I thought were neat or educational.
In hindsight I would probably still have open sourced what I did, because I think it's valuable to have on record that I competently programmed stuff before AI even existed, like pre-atomic steel. But I don't know if I will open source any personal code going forward.
====
To put it more succinctly: if somebody "ripped off" my open source code in 2018, I wasn't mad about that. Even if they didn't bother to attribute me, well, at least they saw my stuff, had a human brain cell light up appreciating it, and thought it was worth stealing. I'm flattered. But with LLMs my work can be reappropriated without a single human ever directly knowing or caring about it.
You are presumably human. We have granted humans specific exemptions in copyright law. We have not granted that to LLMs. Why are we so eager to?
We gave humans a temporary monopoly on certain uses, under rules little understood by laymen even when their livelihood depends on them.
Are you telling me that I can use the thing, but I can't use it if I process it through an LLM? It gets slippery, fast.
If I write a story, I can put it online. That doesn't mean it's ok to take that story and publish it in an anthology.
There's also a TON of irony here. What an about face it is, for the community at large* to switch from "information wants to be free, we support copyleft and FOSS" to leaning so heavily on an incredibly conservative reading of IP law.
It doesn't need to. Laws are for humans.
Laws don't give rights to chainsaws. Or lawnmowers. Or kitchen knives, hammers, screwdrivers, and spades.
You can't use any of those to commit a crime and then claim that the law specifically did not exclude those tools.
Why are you seemingly in favour of carving out an exemption for LLMs?
Laws are for humans.
Arguing that the law did not specifically address "intentionally killing a person by tickling them till they died" means that you found a loophole which can be used to kill people is...
well, it's in the "not even wrong" category...
If we take the point of view that LLMs are tools (I agree), then people need to be absolutely certain that these tools don't contain (compressed) representations of copyrighted works.
People seem not to want to do that. And they argue that the LLMs have "learned" or "been inspired" by the copyrighted works, which is OK for humans.
This is the problem. People can't even agree on which of two mutually exclusive defenses to appeal to! Are LLMs tools which we have to ensure aren't used to reproduce copyrighted work without permission, or are they entities that can be granted exemptions like humans can? It can't be both!
> There's also a TON of irony here. What an about face it is, for the community at large* to switch from "information wants to be free, we support copyleft and FOSS" to leaning so heavily on an incredibly conservative reading of IP law.
True. While IP-owning companies like Microsoft now say "it's online, so we can use it".
It's bizarre.
I'll tell you what: I'll drop my conservative stance in defense of FOSS when Windows and the latest Hollywood movie are "fair use" for consumption by whatever LLM I cook up.
Since this is a new language, and not documented on the web nor on Github, Claude's ability is not based off of stolen IP. At best it's trained on other language concepts, just like we can train ourselves on code on GitHub.
Maybe a good reason to create a new programming language?
Note: IANAL. The above is just from my current understanding.
I don't think there's even a valid argument for any other ownership model, or at least none that I can think of.
The primary issue being that it's all built on stolen data in the first place.
In order to have a sane conversation about this we have to all agree not to lie.
Compilation and translation happen in a generic manner and do not rely on a mountain of other IP. A compiler is really just a transformative tool that happens to do something useful; someone constructed it to be a very precise translation, to the point that any mistakes in it are called bugs, and we fix them to ensure the process stays deterministic. Translators try hard to 'get it right' too: to affect the intentions of the original author as little as possible.
When you use a model loaded up with noise or that you have trained exclusively on code that you actually wrote I think a strong case could be made that you own the copyright on that work product. But when you train that model on other people's work, especially without their consent or use a model that has been trained in that way you lose your right to call the output of that model yours.
You did not write it, and the transformative process requires terabytes of other people's IP and only a little bit by you.
As soon as you can prove that your contribution substantially outweighs the amount of IP contributed in total you would have a much stronger case.
I think I may have misunderstood your original comment above. It seems intending to say:
No, that human owns the copyright on the prompt, not necessarily on the work product. The human may partially have copyright over the work product as well, "how much" being dependent on how much new creative expression from the human was involved vs that from others.
Both the compiler (in absence of inclusion of copyrighted libraries) and the LLM are considered to not add creative work and thus do not change copyright status of the works they transform.
You can consider the training set of the LLM or other AI model to be 3rd-party libraries, and the level of copyright from them applying to the final output to be how much can be directly considered derivative, just as reading copyrighted code and being inspired by it does not pass that copyright to your work unless the result is obviously derivative.
I like this comparison -- training set as '3rd party libraries'. Except, of course, that the authors behind the training set may not have actually granted permission to use, whereas the 3rd party libraries usually have some permission by way of license.
Adding two subtle points:
>> Indeed a developer owns copyright over the source code and on the compiled binaries, because there is no expansion happening here but just a translation from one format into another ... does not rely on a mountain of other IP
... and, the license agreement of the compiler and libraries used / linked to practically always explicitly waive copyrights over the said non-mountain of IP.
>> As soon as you can prove that your contribution substantially outweighs the amount of IP contributed in total you would have a much stronger case.
... a much stronger case that you have a partial copyright over the work, which is now likely a derivative work. You still may not have a case that you own the copyright exclusively (or as the original article says, that your employer does).
If the compiled binaries (output) were produced by running the input (source code) over every program written, then sure.
But that's not what's happening with compilers, is it? The output of a prompt is dependent on copyrighted work of others every single time it is run.
The output of a compiler is not dependent on the copyrighted output of every other program.
However:
1. The "every"ies in your comment are not to be taken literally either. :-)
>> If the compiled binaries (output) were produced by running the input (source code) over every program written, then sure.
2. More importantly, the above seems cyclically dependent on whether output from generative AI is deemed to be in public domain or not, which I consider is an open-ended issue as of now. It is not so 'sure' as yet. :-)
See:
https://technophilosoph.com/en/2025/02/07/ai-prompts-and-out...
If you have a more recent citation referring to case law that states the opposite then that would be great but afaik this article reflects the current state of affairs.
The human using the tool creates a prompt, there is then an automatic transformation of the prompt into code. Such automatic transformation is generally accepted as not to create a new work (after all, anybody else inputting the same prompt would have a reasonable expectation of generating the same output modulo some noise due to versioning and possibly other local context).
Claude Code, and AI-generated code in general, does not at present create a new work. But the prompt, the part which you input, may be sufficiently creative to warrant copyright protection.
Every developer I've seen use these tools has engaged in a meaningful contribution: specific directions across multiple prompts, often (though not always) editing the code afterwards, manually running the code and prompting for changes, etc.
Until the courts, legislators, or the copyright office define something otherwise, I’m highly confident of my assertion. (Mostly because of the insane number of hours I’ve spent with counsel on this. And, as a disclaimer, since I am biased: I worked on Copilot and Google’s various AI assisted coding products as an SVP and VP.)
The fact that meaningful contribution has not been defined is a strong signal that things are not nearly as clear cut as you make them out to be. Until there is a ruling that clearly establishes that the person who wrote the prompt owns the copyright on the code, I think it is misleading to suggest that this is already the case. Your lawyers are not the lawyers of the parties that will end up hurt if it turns out not to be so.
For contrast: we have a very clear idea on what things are copyrighted and in general these things do not rest on a foundation of IP appropriated from others outside of the license terms. The fact that the infringement is fine grained and effectively harms the rights of 1000s or more individuals doesn't change the heart of the matter, whoever wrote the code: it wasn't you.
Given your bias I'm not surprised that this would be your argument though, effectively you have created a copyright laundromat using code that you were nominally the steward of and not the owner but whether it stands long term or not is not up to your lawyers.
You warrant you wrote the code yourself, then it is found your code infringes on code owned by other entities. Now you have a tough choice: admit you lied about writing your code yourself tainting all of the code you claim you wrote since these tools became available or stand and take the infringement penalty which could be very substantial.
Judges and courts don't like playing silly games like this.
I've sued two parties for copyright infringement and won and a third settled out of court for a substantial sum. You don't tell a judge you don't need to prove you wrote the code, that's an automatic loss. Then there are such things as expert witnesses who will interview you and check how much you know about the code you claim you wrote.
This doesn't really make sense; in no way can an "expert" interview definitively assert someone wrote a piece of code or not, especially if the person has access to the code beforehand.
I believe the standard can be as low as "more likely than not".
The humans at the bottom who were crushed should blame the boulder, which happened to be moving.
If you only get copyright for the prompt you make, but not the output, then it's like being responsible only for the prompt, but not the output.
Ie he's only responsible for pushing the boulder up the hill. The fact that it rolled down from the hill and crushed someone's house "isn't his fault" (he doesn't get copyright on it).
>The Office concludes that, given current generally available technology, prompts alone do not provide sufficient human control to make users of an AI system the authors of the output. Prompts essentially function as instructions that convey unprotectible ideas. While highly detailed prompts could contain the user’s desired expressive elements, at present they do not control how the AI system processes them in generating the output.
If you're not the author then why would you have to be liable for it?
If you do not understand this, make sure that you always operate within a framework of people who do, because this sort of misunderstanding can cause you a world of grief.
Because you are the person shipping it, and as such regular liability applies. If I'm not the author of a book, and make a lot of copies and distribute those I'm liable for the content of that book, regardless of whether or not I hold the copyright to it. Conversely, if the original author sues because they feel their work infringes then that too is a liability that stems from the distribution.
And 'distribution' is a pretty wide term, not unlike 'interstate commerce', lots of things that you might not consider to be distribution can be classified as such in court.
Different laws do not come in packages, they apply individually, and sometimes they apply collectively but it isn't a menu where you can pick the combination that you think makes the most sense.
Technically when you select "copy image" instead of "copy image url" and paste that to a friend you're often committing copyright infringement. Do I think this is reasonable? Absolutely not. The same goes for this - the author should hold liability, so make the person who ends up causing the work to exist the damn author.
But nooo, we can't have that. Instead we need to have these convoluted exceptions that don't at all work how the real world works, so that lawyers can have even more work.
Besides, if we go by "the law" then we already have a court case where training an AI model is protected by fair use. But obviously that isn't satisfying enough for people, so they keep talking about how it's stealing (refer to my first sentence).
Also, this situation is going to get funny when some country decides that AI generated content does get copyright protection.
You are completely misunderstanding GP's distinction between ownership and liability.
In short, if you use someone else's car to kill someone, you are still liable for killing that person even though you don't own the car.
Do you disagree with that statement?
> Besides, if we go by "the law" then we already have a court case where training an AI model is protected by fair use.
Yes, but training an AI is a completely different thing than distributing the work product generated by that AI.
Note that I don't agree with all aspects of copyright law either, but I'll be happy to play by the rules as set today simply because I can't afford to be wrong and held liable for infringement. For instance I strongly believe that the length of copyright is a problem (and don't get me started on patents, especially on software). I also believe that only the original author should have copyright, not the company they worked for, their heirs (see Ravel for a really nasty case) or anybody else. I believe they should not be transferable at all.
But because I'm a nobody and not wealthy enough to challenge the likes of Disney in court I play by the rules.
As for 'this situation is going to get funny when some country decides that AI generated content does get copyright protection':
Copyright is one of the most harmonized legislative constructs in the world. Almost every country has adopted it, often without meaningful change. In practice US courts are obviously a very important driver behind changes in copyright law. But in general these changes tend to lean towards more protection for copyright owners, not less. So far the Trump admin has not touched copyright law in their usual heavy handed manner. I'm not sure if this is by design or by accident but maybe there are lines that even they can not easily cross without massive consequences.
Some parties in the AI/copyright debate are talking out of both sides of their mouth. For instance, Microsoft is heavily relying on being able to infringe copyright at will, but at the same time they are jealously guarding their own code. Such hypocrisy is going to be the main wedge that those in favor of strong copyright use to reduce the chances that AI work product deserves copyright. After all, if the output is original and not transformative, then Microsoft could (and should!) train their AI on their own confidential code. But they're not doing that; maybe they know something you and I do not...
Same point goes to if an animal takes a picture.
Also, when it comes to code, the case is even more damning, because the vast majority of the code which LLMs are trained on was not only copyrighted but subject to an MIT license (at best), and even the MIT license, the most permissive license in existence, still says clearly:
"Permission is hereby granted, free of charge, to any person obtaining a copy of this software"
The word 'person' is used very intentionally here.
I think there should be several kinds of AI taxes which should be distributed to all copyright holders. There should be a tax to go to writers (and book authors), a tax to go to open source developers and a tax for the general population to distribute as UBI to account for small-form content like comments and photography...
People invested a lot of time building their entire careers around the assumption of copyright protection; so for it to be violated on such a scale would be a massive betrayal.
Copyrights already preclude short phrases for the same reason -- there are only so many ways in which short phrases could be produced. The moment a work becomes larger (large enough; AFAIK, the threshold is not precisely defined), the reasoning you applied fails to apply.
The Google-Oracle lawsuit did not decide whether APIs (when large in number) are copyrightable or not.
I can totally see this applying here as well.
Now this doesn't resolve the issue of AIs being trained on copyrighted works it had no rights to. The counterargument is that this is a derivative or transformative work but I don't believe that's settled law at all.
[1]: https://en.wikipedia.org/wiki/Monkey_selfie_copyright_disput...
It doesn't seem like bad faith to think that copyright is stronger than the courts end up thinking, just being mistaken.
https://en.wikipedia.org/wiki/San_Francisco_Canyon_Company
LLMs are just code stealers; they will gladly generate Carmack's inverse square root for you, original comments included.
As a developer, the fact that my source code passed through a compiler - an automated tool - doesn't give the author of the compiler any claim on my executable code.
As an artist, the fact that I used, e.g., Rebelle to paint a digital painting, or that I used Lightroom (including generative AI to fill, or other ML/AI tools to de-noise and sharpen my image) in editing a photograph, doesn't give EscapeMotion, Adobe, or Topaz, any claims to my product.
Why, then, would there be any chance that use of a tool like Claude, a tool that's super-advanced to be sure, but at the end of the day operates by way of mathematical algorithms, would confer any claims to Anthropic?
> If a court later found the codebase was predominantly AI-authored and therefore not copyrightable
Is figuring out the appropriate prompts to use in directing Claude qualitatively different than using a (much) higher-level abstraction in coding? That is, there was never any talk, as we climbed the abstraction ladder from machine code to assembly to Fortran or C to 4GLs to Rust etc., that the assembler/compiler/IDE builder would have any ownership claim on the produced executable. In what sense can Anthropic et al. assert that their tool, which just transforms our directives to some lower-level representation, creates ownership of that lower-level representation?
Sure the courts could mint a communist society with a few weird decisions about property rights, but this being the US do you really suppose that's likely?
There's really no legal question of any kind that models aren't people and therefore cannot own property (and also cannot enter into legal contract as would be required to reassign the intellectual property they don't and can't own)
That's why the intern signs an employment contract that reassigns their rights to their employer!!
Zarya of the Dawn already settled it for Midjourney output: human-written elements were protected, AI-generated images were not. The character design didn't get copyright even though the human picked, prompted, and curated. Code isn't different. Prompting Claude to produce a function is closer to prompting Midjourney to produce a frame than to writing the function yourself.
The reason it feels different to engineers is that we're used to thinking of the compiler as the analogy. But a compiler is deterministic — same input, same output. An LLM isn't. That's the line the Copyright Office is drawing, and image cases got there first.
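That contrast can be sketched in a toy example (illustrative only; `compile_like` and `llm_like` are made-up stand-ins, not real APIs): a compiler-like pure function maps the same input to the same output on every run, while sampled generation at nonzero temperature need not.

```python
import random

def compile_like(source: str) -> str:
    # Pure transformation: the output depends only on the input,
    # so repeated runs always agree.
    return source.upper()

def llm_like(prompt: str, temperature: float) -> str:
    # Toy stand-in for sampled generation: at temperature > 0 the
    # "model" picks among candidate continuations at random.
    variants = [prompt + "!", prompt + "?", prompt + "."]
    if temperature == 0:
        return variants[0]          # greedy: always the top choice
    return random.choice(variants)  # sampled: run-to-run variation

# Same input, same output, every time.
assert compile_like("x = 1") == compile_like("x = 1")

# Fifty sampled runs over the same prompt almost surely disagree.
outputs = {llm_like("hello", temperature=1.0) for _ in range(50)}
```

Whether the law ends up treating agentic tools as the first kind of function or the second is exactly the open question here.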
They also mention in the same document that were LLMs to more closely approximate deterministic tools, they would be open to reevaluating. That is, requesting X gets X, without substantial wiggle room.
I don't think that last part has been tested with an extremely large set of prompts and human-generated input to create a more deterministic output. Even outside of code, where you see large prompts, creative-writing LLM tools like NovelAI or Sudowrite can have pages and pages of spec for the LLM, sometimes close to 50% of the size of the final output.
Then there's testing, review etc, human processes confirming that the output meets spec, updating it where needed intelligently.
There are also foreign courts, with similar rules about human intention, that have found in favor of prompts only, where it could be demonstrated that multiple rounds of prompts were used to refine the image.
I wouldn't call this settled at all, to be honest. And a lot of this doesn't require exposure: you don't need to own up to LLM use in a lot of settings, and proving LLM use is so difficult that it's easy to jump up the ladder from LLM (100%) to LLM (50%) and ultimately claim ownership.
The people who will get busted for this are basically just the super lazy: leaving ChatGPT responses in, failing to pay an editor, failing to modify images for anything more than layouts.
It does not have to be substantial transformation.
Temperature-0 determinism is subject to active research. NVIDIA has tried but failed so far; DeepSeek V4 seems to have done it. I hope judges won't be swayed by this and AI-generated code will be classified as uncopyrightable, just like images are.
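For what it's worth, the two halves of that claim are easy to illustrate (a toy sketch; `greedy_pick` is a made-up name, not any real inference API): temperature-0 decoding collapses to argmax over the logits, so the remaining nondeterminism comes from the logits themselves, which can wobble because floating-point addition is not associative and parallel reductions may sum the same terms in different orders across runs.

```python
def greedy_pick(logits):
    # Temperature-0 decoding is just argmax: given identical logits,
    # the chosen token is fully determined.
    return max(range(len(logits)), key=lambda i: logits[i])

assert greedy_pick([0.1, 2.0, 1.5]) == 1

# But the logits themselves may differ between runs: float addition
# is not associative, so summing the same terms in a different order
# (as parallel GPU reductions do) can change the result.
left = (0.1 + 0.2) + 0.3
right = 0.1 + (0.2 + 0.3)
print(left, right)  # 0.6000000000000001 0.6
```

If two such sums land on opposite sides of a near-tie between the top two logits, the "deterministic" greedy decode flips, which is roughly the problem the research mentioned above is chasing.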
It'd be a form of plagiarism, just with different consequences to the most common form.
Copyright Office requires you to disclose AI involvement and disclaim the AI-generated parts. Zarya of the Dawn is the example — applicant filed for the whole graphic novel, got partial registration on the human-written text, refused on the Midjourney images. The reproducibility of the prompt isn't really the test. The test is whether a human made the expressive choices.
LLMs are amazing of course, and we use them heavily ourselves, but not for modifying text that is to be posted to HN. Doing so leaves imprints on the language that readers are increasingly becoming allergic to, and we want HN to be a place for human conversation.
Not really. Copyright registration is pretty much automatic. The Copyright Office does not check for duplicates. Patent registration involves actual examination for patentability. Issued patents are presumed valid (less so than they used to be), but issued copyrights are not. You have to litigate.
The US does not have "sweat of the brow" copyrights. It's the "spark" that creates the originality, not the work. Which is why you can't copyright a telephone directory (Feist vs. Rural Telephone) or a copy of an uncopyrighted image (Bridgeman vs. Corel) or a scan of a 3D object (Meshwerks vs. Toyota). Or the contents of a database as a collective work. Note that some EU countries do allow database copyright.
Interestingly, a corporation can be an author for copyright purposes. The movie industry pushed for that. We may in time see AI corporate personhood for IP purposes.
I think that the gold rush approach happening right now around me (my company EMs forcing me to work with claude as fast as possible) show really short-sight of all the management people.
First - I lose my understanding of the code base by relying too much on claude code.
Second - we drop all the good coding practices (like XP, code review etc.) because claude is reviewing claude's code.
Third - we just take a big smelly dump on the teamwork - it's easier and cheaper to let one developer drive the whole change from backend to frontend, despite there being (or having been) two different teams - one for FE, one for BE.
Fourth - code commenting was passé, as the code is documentation itself... Unless... there is a problem with the context (which there is). So when people were writing the code, not understanding the over-engineered code was their own fault. But now we take a step back for our beloved claude because it has a small context... It's unfair treatment.
I could go on and on. And all those cultural changes are because of money. So I dub this "goldrush", open my popcorn and see what happens next.
Agree with your other points, but IMO this one has always been better. You often need to design the backend and frontend to work with each other, and that requires a lot more coordination when it's separate teams.
Claude code itself is a trade secret, and it is not open source, so its own copyrightability is moot till you get your hands on a copy of it with clean hands.
Recipes cannot be copyrighted because they are not expressions of human creativity. Software written by AIs is also not an expression of human creativity, so the balance is tilted in favor of AI-generated output not being copyrightable.
The Supreme Court or legislation could change this, and I'd guess there will be a movement to go in that direction, but till something like that succeeded it's not so.
Trade secrets aren't very well protected, though.
You can sue the person who leaked/stole your secret, but if others keep sharing it once it is leaked you can do nothing to them.
I mean I'm not the biggest fan of AI on the planet by any means (which I think my post history would prove, lol), but isn't prompt design and steering the AI "human creativity"? In one of my AI-assisted projects I spent like a week in unending threads of posts trying to make the AI do stuff the way I wanted, testing the output, finding a bazillion of bugs and "basic bitch" solutions, asking for more robust this and edge case that. It felt like I wrote a novel. How is that not creativity (Crayon-eater or Picasso, creativity is creativity)?
This particular AI-ism really encapsulates what annoys me about some AI-isms. I don't mind the delves and the em-dashes that just give away the AI source of what otherwise might be good text. But these structural pieces just feel fundamentally not for the reader. Part of it is blatant pick-me language for the human feedback ("hey look you wanted plain language I did that") and part of it feels like it's just helping the future token stream (thinking-like tokens polluting the actual text).
The not-this-but-that, the sycophancy, the symbolizing-vague-significance, they all have this flavor of serving a process that's no longer there as I now need to read it. It gives a similar sickening feeling to the one I get seeing something designed by committee.
This comes up in a few places as a kind of vindictive battle. One example is Oracle suing Google for too closely mimicking their API in Android. Here is an example:
    private static void rangeCheck(int arrayLen, int fromIndex, int toIndex) {
        if (fromIndex > toIndex)
            throw new IllegalArgumentException("fromIndex(" + fromIndex +
                ") > toIndex(" + toIndex + ")");
        if (fromIndex < 0)
            throw new ArrayIndexOutOfBoundsException(fromIndex);
        if (toIndex > arrayLen)
            throw new ArrayIndexOutOfBoundsException(toIndex);
    }

And it was deemed fair use by the Supreme Court. Other times high-frequency hedge funds sued exiting employees, sometimes successfully. In America, anyone can sue you for any reason, so sure, you'll have Ellison take up a feud with Page and Brin all the way to the Supreme Court.
In 99.9% of instances none of this matters. Sure, there's the technical letter of the law, but in practice, and especially now, none of it matters.
You'd be surprised! Among non-software management types, they often think of the code as extremely valuable IP and a trade secret. I'm a CTO and I've made comments before to non/less technical peers about how the code (generally speaking) isn't that big of a secret, and I routinely get shocked expressions. In one case the company almost passed on a big contract because it required disclosure of the source code (with an NDA). When I told them that was a silly reason and explained why, they got it, but the old way of thinking still permeates and is a hard habit to break.
Edit: Fixed errant copy pasta error. Glad that wasn't a password :-)
I work in M&A. Nearly every lawyer, accountant, investor, and software business owner thinks their code is uniquely valuable and a trade secret. I find it hilarious and try to be as diplomatic as possible about why it's not. They will also willingly give their client list to a potential acquirer but get super cagey the moment a third-party provider asks for their code to be scanned.
This argument easily gets shut down when I ask why Twitch, a $1B business, didn't crater to its competition when its full codebase was leaked.
So these two things are squarely at odds with each other... meaning, I don't know any PE acquirers who are actively terminating deals because the target acquisition's code was generated by an LLM, even if the lawyers try to get a rep about it in the purchase agreement.
For the record, I have yet to have an M&A lawyer explain to me unilaterally that AI-generated code is an infringement... hence the question "who owns the code Claude Code writes" is still open.
Assuming it ever does...first, GPL is hardly enforced and second, I feel like there is going to be enough money (e.g. Anthropic's own code it uses for the harness) that pushes back against it being problematic. We'll see.
Every open source license is built on the premise that code is copyrightable.
It is based on the premise that if proprietary licenses are valid, then the open source licenses are valid as well.
So what is held as true is only the implication stated above, not the truth value of the claim that either kind of license is valid.
If proprietary licenses are not valid, then it does not matter that the open source licenses are not valid either.
The open source licenses are intended as defenses against the people who would otherwise attempt to claim ownership of that code and apply a proprietary license to the code, i.e. exactly what now Anthropic and the like have done, together with their corporate customers.
Of course, if it is accepted that the code generated by an AI coding assistant is not copyrightable, then using it would not really be a violation of the original open source licenses. The problem is that even if this principle is the one accepted legally, at least for now, both Anthropic and their corporate customers appear to assume that they own the copyright for this code that should have been either non-copyrightable or governed by the original licenses of the code used for training.
“ Copyright <YEAR> <COPYRIGHT HOLDER>
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.”
The copyright assertion is the very first line of the MIT license, and the right to copy the code is granted. Clearly a reasonable person would affirm that that license (and all similar licenses) are based on a premise that code can be copyrighted.
> It is based on the premise that if the proprietary licenses are valid, then also the open source licenses are valid.
>If the proprietary licenses are not valid, then it does not matter that also the open source licenses are not valid.
That’s not true. Imagine a world where proprietary licenses are made invalid.
In such a world a company could take open source code compile it and distribute it (or build a SaaS) without the source code.
Even if you only focus on licenses that don’t prohibit this, most of those licenses require attribution.
So even in a world where propriety licenses were invalid the majority of open source licenses would still have a purpose.
You’re attempting to split hairs to argue on a very subtle technicality, but you’re not even technically right.
You, right now, are talking about convergence.
If there is no artwork, there can be no copyright. If every character of the code to write is basically predetermined by the APIs you need to call, there is no artwork and no copyright.
Build a novel new API, and you'll be protected though.
Then why does reverse engineered code need to be a clean room implementation?
Ask any emulator developer or the developers of ReactOS
I think this is an unusual opinion.
Code may not be copyrightable in as small chunks as you put there, but in terms of larger pieces I think companies and individuals very often labour under the belief that code is intellectual property under copyright law.
If code isn't copyrightable, from where comes the GPL?
And why does anyone care if (for instance) some Microsoft code might have accidentally ended up in ReactOS, causing that project to need to go into a locked-down review mode for months or years? For that matter why do employers assert that they own the copyright in contracts?
I think it's the opposite - almost everyone thinks their code is copyrightable, outside of APIs and interop stuff, or things so simple as to be trivial.
After all, is this not what happens with compilers as well? LLM agents are just quite advanced compilers that don't require the specification to be as detailed as with traditional compilers.
If you provided a human contractor with the specifications for the code you want, the courts have repeatedly made clear you have not provided the creative input from a copyright perspective, and the contractor needs to explicitly assign those rights to you if want to own the copyright on the code.
- Specifiers, who make the specification for the system
- Programmers, who write C code
- Machine encoders, that take that C code and write machine code for a CPU
Would it be that the copyright would then belong to programmers, if no other explicit assignments would be made?
---
Thinking about it, probably yes: copyright of the spec belongs to the specifiers, copyright of the C code to the programmers, and copyright of the machine code to the machine encoders. Or would it depend on the amount of optimization the machine encoders do, i.e. whether it is creative or not? And how does this relate to the copyrightability of C compiler output, where optimizations can sometimes surprise the developer?
Compilers are different in that the resulting binaries are not separately copyrighted. They are the same object to the Copyright Office because one produces the other, in the same way that converting an image to a PDF is still the same copyright.
LLMs don’t do that. The stuff coming in may not be copyrighted, and may not be copyrightable. The stuff that comes out is not a rote series of transformations, there are decisions being made. In common use, running a prompt 10 times might yield 10 meaningfully different results.
I’m dubious the outcome will be “any level of prompting is enough creativity”.
If I make the LLM generate code that follows my own code architecture and style, that should be enough creative input
> Works predominantly generated by AI without meaningful human authorship are not eligible for copyright protection
Note the word "predominantly", and the discussion that follows in the article about what the courts and the copyright office said.
Nor does it give a single answer.
Mere prompting is still not enough for copyright, and the problem is unsolved on how much contribution a human needs to make to the generated code.
In the case for generated images copyright has been assigned only to the human-modified parts.
Even worse, it will be slightly different in other nations.
The only one that accepts copyright for the unchanged output of a prompt is China.
There are far more characters protected by copyright than trademark.
Plus what if Anna Karenina was GPL?
AI to review - shallow minutia and bikeshedding
AI to edit - wrote duplicated functions that already existed
AI to test - special casing and disabling code to pass the narrow tests it wrote
AI report - "Everything looks good, ship it!"
How much code do you need to change in order for it to be original? One line? 10%? More than 50%?
That's arbitrary and quite unproductive convo to be honest.
Yeah but that’s what the legal system ostensibly does. Splitting fine hairs over whether a derived work is “transformative” is something lawyers and judges have been arguing and deciding for centuries. Just because it’s hard to define a bright red line, doesn’t mean the decision is arbitrary. Courts will mull over whether a dotted quarter note on the fourth bar of a melody constitutes an independent work all day long. It seems absurd, but deciding blurry lines are what courts are built to handle.
That makes no sense because what if you refactor your code ad infinitum using AI? You spin up a working implementation, then read through the code, catalog the changes like interface, docs, code quality and patterns and delegate to the AI to write what you would.
It's 100% AI code and it's 100% human code. That distinction is what's counterproductive.
As the article says in the Tl;DR at the top the code may be contaminated by open source licenses
> Agentic coding tools like Claude Code, Cursor, and Codex generate code that may be uncopyrightable, owned by your employer, or contaminated by open source licenses you cannot see
That's not how copyright works. The modified version is derivative. You can't just take the Linux kernel, make some changes, and slap a new license on it.
There’s a very accessible summary of the United States rules here:
1/ Was the pork in my sausage reared on a farm that meets agricultural standards?
2/ Was the food handled safely by the kitchen that cooked my food?
3/ Does the owner of the diner pay kitchen wages in accordance with labor law?
By contrast, I have no idea what went into the models I use, what system prompts have prejudiced it, and whose IP has been exploited in pursuit of my answer.
That’s being charitable, really. In practice, the open secret of the AI industry is that the vast majority of training data is, for want of a better word (though it is likely the most precise description), stolen data.
I'm already glad some companies have the guts to open their models because proving it for open models is probably a lot easier than for a model behind a service.
Can someone put a rough estimate on the potential revenue loss (direct and incidental) from AI training, with a breakdown by industry?
If the data is proprietary (eg Meta’s stash of FB comments) then I am satisfied to be told it’s private and I can’t see it. If, however, the works were public then give me a URL if it’s live or a cached copy if it isn’t.
Or is it still IP even if it is not copyrightable? That would feel weird: if it's in the public domain, then it's not IP, is it?
If you generate the same code with AI, now it does not have a copyright. If it depends on an MIT library, then the MIT library has a copyright and you have to honour the licence. But the code you produced does not have a copyright (because it was generated by an AI). And therefore nobody "owns" it. My question is: can your employer prevent you from distributing something they don't own?
CC0 came about in part because of this ambiguity. To deal with it, part of CC0 basically says - even if there would still be restrictions to this if it were only in the public domain, I renounce those theoretical rights.
Outside the underdeveloped legal framework, I believe knowledge and truth are like life, and human society still has some philosophical growing to do here.
The answer is probably "Nobody"!
Ah, here we go, courtesy of google-ml: '"Human Resources" by Adrian Tchaikovsky, published on Reactor[...] https://reactormag.com/human-resources-adrian-tchaikovsky/ '
And yet that was the state of software at every company I worked at before FAANG, and even a good amount there...
> The second commit message versus the first is the difference between a defensible authorship claim and a clean “Claude wrote this” record.
That makes no sense to me, as the commit message is probably LLM generated as well (and even easier to generate, since it doesn't have to compile or pass automated tests).
Inadvertent copyleft license violations: probably 0 lawsuits
Competitor copied your software, you could not defend your rights in court because it was made with AI: probably also 0
Users of agentic AI for software development: >10 million
The thinking here seems pretty clear to me.
I mean, if the code is not copyrightable, that does not mean anything; it's just public domain code, except that Meta will use good old security by obscurity to protect it. If some Meta programmer vibe codes, say, VVVVVV, and Terry Cavanagh recognizes it on his Facebook feed, sues Meta, and wins, all that will happen is that Meta will take down the copy of VVVVVV, fire and sue the engineer who vibe coded it, and call it a day.
Anthropic "solved" this by intermingling the texts extracted from pirated books (illegal) with texts extracted from the physical books they bought and destroyed (legal), so no one can clearly say if the copyrighted material it spits out came from a legal source or not. Everyone rejoiced.
For example:
No Generative AI Training Use
For avoidance of doubt, Author reserves the rights, and grants no rights to, reproduce and/or otherwise use the Work in any manner for purposes of training artificial intelligence or machine learning technologies to generate text, text to speech, voice, or audio including without limitation, technologies that are capable of generating works in the same style or genre as the Work, unless individual or entity obtains Author’s specific and express permission to do so. Nor does any individual or entity have the right to sublicense others to reproduce and/or otherwise use the Work in any manner for the purposes of training artificial intelligence or machine learning technologies to generate text, text to speech, voice, or audio without Author’s specific and express permission.
They're only legal if training is fair use, and even then I don't think it's immediately clear what the legal status would be of verbatim regurgitation of code under copyright, or of code protected by patents.
AFAIK I (as a human developer) can't assume that I can go and copy code out of a text book, and then assume copyright and charge for a license to it?
The judge seems to have said it was legal because they "transformed" the books in the process (destroying them after digitizing).
> Ultimately, Judge William Alsup ruled that this destructive scanning operation qualified as fair use—but only because Anthropic had legally purchased the books first, destroyed each print copy after scanning, and kept the digital files internally rather than distributing them. The judge compared the process to “conserv[ing] space” through format conversion and found it transformative. - https://arstechnica.com/ai/2025/06/anthropic-destroyed-milli...
LLMs don't make decisions. Their output is completely determined by an algorithm using the human prompt, fixed weights, and a random seed. No different than the many effects humans use in image or audio editors. Nobody ever questioned whether art made using only those effects on a blank canvas was subject to copyright.
The fact that it inferred those basis functions from studying copyrighted works doesn't seem relevant. Nor does the fact that the "Fourier sums" sometimes coincide with larger fragments of works that are copyrighted. How weird would it be if that didn't happen?
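The determinism claim above can be illustrated with a toy sketch. This is not a real LLM; `sample_next_token` is a hypothetical stand-in for a forward pass, chosen only to show that when the prompt, the weights, and the random seed are all fixed, the output is fully determined, with no decision left to the machine:

```python
import random

def sample_next_token(weights, tokens, rng):
    # Toy stand-in for an LLM forward pass: the "logits" are a pure
    # function of the fixed weights and the tokens so far.
    score = sum(weights[t % len(weights)] for t in tokens)
    # The only nondeterminism is the RNG, which is fully fixed by its seed.
    return (int(score) + rng.randrange(100)) % 50000

def generate(prompt_tokens, weights, seed, n=5):
    rng = random.Random(seed)  # seed pins down every "random" choice
    out = list(prompt_tokens)
    for _ in range(n):
        out.append(sample_next_token(weights, out, rng))
    return out[len(prompt_tokens):]

weights = [0.25, 1.5, 3.0, 0.125]  # arbitrary fixed "model weights"
a = generate([101, 7, 42], weights, seed=1234)
b = generate([101, 7, 42], weights, seed=1234)
assert a == b  # identical prompt + weights + seed -> identical output
```

Same prompt, same weights, same seed: byte-identical output every time, which is the sense in which the output is "completely determined by an algorithm" rather than by any decision the model makes.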
If I painstakingly recreate A New Hope frame by frame, pixel by pixel, that's infringement. Even if I technically used 0 content from the original.
In any case, if the copyright mafia insists on butting heads with AI, they'll find that the fight doesn't quite play out the way it has in the past.
Is there any citation for this "legal consensus"? I was not aware there was any evidence backed stances on this topic as of yet
CC does not need LGPL code. There's more than enough BSD and Apache code to go around.
And they can generate synthetic data that is better than LGPL for their training.
It's also a problem that does not seem feasible to meaningfully enforce.
It's easy to generate CC code and lie and say you didn't. It would be hard to prove that you did, especially if you took even slight precautions to make it difficult to detect.
However, even if the BSD/Apache/MIT licensed code can be incorporated freely in your application, you still have no right to remove the copyright notices from it and/or to claim that you own the copyright for it.
Therefore, unless the AI model has been trained only on non-copyrighted public-domain code, incorporating the generated code in your application means that you have removed the copyright notices from it, which is not allowed by the original licenses.
There is absolutely no doubt that using an AI coding assistant works around copyright law, but it is still equivalent to copying and pasting fragments of copyrighted works into your source code.
I consider that copyright should not be applicable to program sources, at least not in its current form, so reusing parts from other programs should be fair use, but only if human programmers would be allowed to do the same.
I can't speak for all licenses, but I'm familiar with at least one BSD license. That's almost the entire point of it...
You cannot take their literal code and call it your own. You can derive code from it and call it your own. That's what LLMs primarily do.
If some GPL-licensed group were to sue some commercial software project that they do not have the source code for, what would even give it away? Say they throw $1 million at a lawyer who can at least get it to the discovery phase somehow, and the source code is provided. It looks to be shit, but maybe an expert witness would come along and say "that looks inspired by the open source project". Where does it go from there? The model is a black box, but maybe you've got a superhero lawyer who manages to rope in Anthropic or OpenAI, and you can see how it produced the code given those prompts. What now? Are there any expert witnesses who both could and would say that it was "bulk copy-pasting code"? And if there were, what jury is going to go for that theory of the crime? Copying and pasting, but the code doesn't match, except in short little strings that any code might match. This isn't a slam dunk, and it's not going to proceed very far unless it's another Google-vs-Oracle shitfest.
LLMs really change nothing about this.
The logging point is sharper than it might appear. In a copyright dispute over AI-assisted code, interaction logs could cut both ways. A plaintiff trying to establish human authorship would want the logs to show substantial architectural redirection, multiple rejections of Claude output, and documented reasoning for structural decisions. A defendant challenging that authorship claim would subpoena the same logs to show verbatim acceptance of output without modification.
The practical implication, I guess, is that developers who want to preserve a copyright claim over AI-assisted code should treat their prompt history as a legal document from the start. All over the world, the logs are the evidence. Whether they help or hurt depends entirely on what they show.
Except if it happens to regurgitate a significant excerpt of some existing work, then the authors of that can assert their copyright; i.e. claim that it infringes.
If I use my own computer, pay for my own subscription, and build my open source projects, then the code belongs to me.
If I use my company's computer, they pay for my subscription and we work on the company's projects then the code belongs to the company.
At any step of the way, if some copyleft or other exotic open source license is violated, who pays for discovery? Is it someone in Russia who created a popular OSS library and is now owed? How will it be enforced?
Twice in my career the owners of a company have wanted to sue competitors for stealing their "product" after poaching our staff.
Each time, the lawyers came in and basically told us that suing them for copyright is suicide, will inevitably be nearly impossible to prove, and money would be better spent in many other areas.
In fact, we ended up suing them (and they settled) for stealing our copyrighted clinical content, which they copied so blatantly they left our own typos and customer support phone number in it.
Go ahead, try to sue over your copyrighted code; ten years and $100M later you will end up like Google v. Oracle. What if the code is even 5% different? What about elements dictated by external constraints: hardware, industry standards, common programming practices? These aren't copyrightable.
Then you have merger doctrine, how many ways can we really represent the same basic functions?
Same goes for the copyleft argument: "code resembling copyleft" is incredibly vague; it would need to be the code verbatim, not merely resembling it. Then there's the history of copyleft: there have been many abuses of copyleft and only ~10 notable lawsuits. Now, because AI wrote it (which makes it _even harder_ to enforce), we will see a sudden outburst of copyleft cases? I doubt it.
Ultimately anyone can sue you for any reason, nothing is stopping anyone right now from suing you claiming AI stole their copyleft code.
https://www.vice.com/en/article/musicians-algorithmically-ge...
"Who owns the text microsoft word helped you write?"
Claude code is a software tool not a legal entity.
Anything else would be completely ridiculous given current laws in most countries.
It would be as ridiculous as blaming the car in a car accident where you drove over someone.
Learning from licensed material is generally accepted in humans, you may learn from something and then create something else and the new thing is not considered legally problematic with the exception of patents i guess.
Whether the same thing holds true for electronic systems is where people disagree, if you look at the problem space in its essence. I land on the side that they are the same thing (humans and electronic systems learning); some seem to think it is a different thing.
And?
>It would be as ridiculous as blaming the car in a car accident where you drove over someone.
No more ridiculous than you posting something you know nothing about.
Just because you don't get the copyright doesn't mean claude does. The fact that claude is not a legal entity has no bearing on whether or not you are entitled to a copyright for a work you did not create.
But something that is overlooked is that the world is bigger than the US and it's an absolute zoo out there in terms of copyright laws in different countries. Anything you think you might understand about this topic goes out of the window if you have international customers or provide software services outside the US. Or are not actually based there to begin with. And there are treaties between countries to consider as well.
Courts tend to try to be consistent with previous rulings, interpretations, etc. When it comes to copyright, there are a few centuries of such rulings. The commonly held opinions among developers that aren't lawyers are that AI is somehow different. And of course since the law hasn't actually changed, the simple legal question then becomes "How?". And the answer to that seems to involve a lot of different notions.
For example, "AIs are not people, and therefore any content produced by them isn't covered by copyright to begin with" is one of the notions brought up in the article. A lawyer might have some legal nits to pick with that one but it seems to broadly be the common interpretation. So AI's don't violate copyright by doing what they do. In the same way you can't charge a Xerox machine with copyright infringements. Or Xerox. But you could go after a person using one.
And another notion is that any content distributed by a human can be infringing on somebody else's copyright and that party can try to argue their case in a court and ask for compensation. Note that that sentence doesn't involve the word AI in any way. How the infringing party creates/copies the content is actually irrelevant. Either it infringes or it doesn't. You could be using AI, a Tibetan Monk copying things by hand, trained monkeys hitting the keyboard randomly, a photo copier, or whatever. It does not really matter from a legal point of view. All that matters is that you somehow obtained a copy of an apparently copyrighted work. AI is just yet another way to create copies and not in any way special here.
There are of course lots of legal fine points to make to how models are trained, how training data is handled, etc. But if you break each of those down it boils down to "this large blob of random numbers doesn't really resemble the shape or form of some copyrighted thing" and "Anthropic used dodgy means to get their hands on copies of copyrighted work". I actually received a letter inviting me to claim some money back from them recently, like many other copyright holders.
Who coded the code Claude Code code?
If computer generated code is not copyrightable, ownership cannot be reassigned either.
If vibe coded work is not copyrightable, it cannot be reassigned to the employer and become copyright protected.
And I'm worried that once that has been sufficiently normalized, laws and interpretations of them will adapt to whatever best suits those users. Which will mean copyrightwashing of FOSS. My only hope then is that surely if free software can be copyright-washed by the big guys, then so can the little guy copyright-wash the big guys' blockbuster movies or whatever, which might lead to some sort of reckoning.
There's obviously a huge issue with the legitimacy and ownership of training data being fed to LLMs. That seems like an issue between the owners of that IP and the people training the models and selling them as services more than the people using the tool. Isn't this just another flavor of SCO trying to extort money out of companies using Linux?
But AI might in fact do the exact opposite and reverse the privatization trend that the West has been going through for the last 400 years. All of our copyright laws rely on the idea that there is a human consciousness behind the copyright. The more AI has input, the less we can claim ownership. If AI returns everything to the commons, then it results in a much more egalitarian world.
Hilariously, many people, especially artists, see the return of the commons as an assault against them. They’re so captured by copyright that they assume any infringement on their copyright is inherently fascist. It’s ridiculous. Copyright is a corporation’s number one weapon when it comes to creating a moat and keeping the masses out.
The original intent of copyright, in fact, was an incentive to return an idea to the commons. Experts used to hide their discoveries in order to keep them for themselves. Copyright provided an opportunity to release this knowledge and still profit. There were even several cases where it was established that those who claimed copyright could retain copyright even if the idea had been previously discovered. This created a huge incentive: release the knowledge or risk having your process copyrighted by the opposition. But that system worked because copyright could only exist for so long (14 years, doubled if they filed again.)
Now copyright is a lifelong sentence at almost 100 years. The entire purpose of it has been undermined. Corporations own all your childhood and by the time you can profit off of it, it’s outdated.
A world where the mainstream is primarily a commons seems to me like an egalitarian world. I’d like to live in that world.
It’ll happen by evolution. Just complex systems trending the way they trend.
Part of the problem with generated works is that producing them is low effort, like copying something. It’s not an activity that demands special protection the way original authorship does. I believe this is a large part of the reasoning.
First, its creation is (claimed to be) extremely useful for society, but in order to be created it requires ignoring copyright for pretty much everything ever written. Something we kinda shrugged under the table.
Then, it introduces an extreme jump down in creation effort - so if the focus is protection of effortful creation, nothing with AI use qualifies. But of course, you'd want society to benefit from effortlessness in general, spending more effort than needed in a task is the opposite of efficiency.
Anything else is just bullshit equivocation.
Or were you planning to reproduce the (say) Ford Motor Company's trademarked symbol in wood? If so, you're right back in the stinkin' swamp.
This is like a machine you ask for timber and you get timber but you didn’t need to provide any wood
-Claude
(Of course, there's no way to be certain of this, but it's what our software thinks, and the overall pattern is pretty convincing.)
See https://news.ycombinator.com/newsguidelines.html#generated and https://news.ycombinator.com/item?id=47340079
There are so many submissions where most of the discussion is about whether the content has any human effort behind, or the LLM was just a purely assistive role like translating. It's really devaluing hn, IMO. Not sure how much an AI flag would help, or introduce new issues, given how difficult the problem is, though.
Even steering it with prompts isn't enough. The guy couldn't copyright the image he made with AI; code is no different.
Maybe prompts written by humans are copyrightable.
Can't wait for the Billionaires to entrench in court they can steal everything for these machines and claim it as their own and maybe even reach for anything that it helps produce. Fuck that