So: once it's not "hard" any more, does IP even make sense at all? Why grant monopoly rights to something that required little to no investment in the first place? Even with vestigial IP law - let's say, patents - they just become an input parameter: the AI works around them like any other constraint.
Our foreparents fought for the right to implement work-alikes of corporate software packages, even if the so-called owners did not like it. We're ready to throw it all away, and let intellectual property owners gain so much more control.
The implications will not end up being anti-large-corporation or pro-sharing. If you can prevent someone from re-implementing a spec, or building a client that speaks your API, or building a work-alike, it will be the large corporations that exercise this power, as usual.
Nor should we be treating AI models themselves as protected IP. They're built on everyone else's data. Throw away this whole class of law, it's irrelevant in this new world.
It would be interesting to see a court ruling that the output of LLMs trained on copyleft code is licensed under the GPL ... and all other viral licenses simultaneously.
It is quantum legality: using copyrighted input is legal or illegal depending on the observer.
So now consider two questions:
1. You actually didn't use an LLM, but they believe & claim you did. Who has the burden of proof to show that you actually own the copyright, and how do they do so?
2. They write new code that you feel is based on yours. They claim they washed it through an LLM, but you don't believe so. Who has the burden of proof here and how do they do so?
The occasional piece of software might be a trade secret, but a person downloading a preexisting leak isn't affected by those laws.
I think 18 U.S.C. § 1832 (a) (3) might answer your question? https://www.law.cornell.edu/uscode/text/18/1832
Well we could try fixing the forever part. Copyright is out of control. I’d like to see a world with much less power given to IP. Sometimes I even say I want it eradicated entirely. But realistically we should start by cutting things back. Maybe give software an especially short copyright period.
There are always going to be downsides and edge cases when granting any party a monopoly over anything. At least if it's limited to two decades, any unintended consequences, philosophical objections, etc. are hopefully kept within reason.
Meanwhile, there are cases where copyright of more than 2 years is overkill.
I don't know what it should look like, but it seems like some sort of mechanism for variable-length IP duration is needed.
I could understand for medical devices maybe but even then it seems like the software is a tiny part of the overall cost of a given design. A competitor could already do a clean room reimplementation in that case.
But I guess it wouldn't be all that bad if there were a carefully crafted extension for government certified software that was explicitly tied to the length of the certification process.
If we remove IP laws, we should remove all private property laws!
What is the difference between an "agent" and a "compiler"?
For that matter, what is the difference between "I got an agent to provide a high level description" and a decompiler?
What is the difference between ["decompiling" a binary, editing the resulting source, recompiling, and redistributing] and [analyzing the behavior of a binary, feeding that description into an LLM, generating source code that replicates that behavior, editing that, recompiling and redistributing]?
Takeaway: we are now in a world where software tools can climb up and down the abstraction stack willy nilly and independently of human effort. Legal tools that attempt to track the "provenance" of "source code" were already shaky but are now crumbling entirely.
I am for keeping the licenses in place, as long as there is any copyright at all on software. If we get rid of that, then we can get rid of copyleft licenses and all others too. But of course businesses and greedy people want to have their cake and eat it too. They want copyleft to disappear, but _their_ software, oh no, no one may copy that! Double standards at their best.
(the paradox of copyleft is that it does tend to push free software advocates in a direction of copyright maximalism)
(side bar: the phrase "anti-<whatever> luddites" is way, way overused, especially here. Let's get more creative, people!)
There are also some environmentalist concerns which the term luddite again fits perfectly. You just have to generalize, transferring laterally from economic wellbeing to environmental wellbeing.
So I don't think GP qualified as an ad hominem dismissal but rather an accurate description of the situation. Take what's being discussed (restrictions on specifications and interoperability), project it backwards in history, and imagine what an alternate present day would look like. I think it would be pretty bad.
Pffft no. Most of us think that AI is being used as a political trick - like firing unionized workers "to replace them with AI" and then hiring new un-unionized workers to replace them, 2 weeks later. Replace the AI with an empty cardboard box labeled "AI" in black marker, and nothing changes.
See also: using AI to launder pirated material, for big businesses.
There isn’t much of a middle ground anymore.
Although I think the chance of that happening is effectively zero.
Our "foreparents" weren't competing with corporations with unlimited access to generative AI trained on their work. The times, they're-a-changin'.
You're rehashing the argument made in one of the articles which this piece criticizes and directly addresses, while ignoring the entirety of what was written before the conclusion that you quoted.
If anyone finds themselves agreeing with the comment I'm responding to, please, do yourself a favor and read the linked article.
I would do no justice to it by reiterating its points here.
It seems like the answer is to adjust IP owner rights very carefully, if that's possible. It sounds very hard, though.
The point the author was making was that the intent of GPL is to shift the balance of power from wealthy corporations to the commons, and that the spirit is to make contributing to the commons an activity where you feel safe in knowing that your contributions won't be exploited.
The corporations today have the resources to purchase AI compute to produce AI-laundered work, which wouldn't be possible without the commons the AI got its training data from, and give nothing back to the commons.
This state of things disincentivizes contributing to the FOSS ecosystem, as your work will be taken advantage of while the commons gets nothing.
The share-alike clause of the GPL was the price that was set for benefiting from the commons.
Using LLMs trained on GPL code to "reimplement" it creates a legal (but not a moral!) workaround to circumvent the GPL and avoid paying the price for participation.
This means that the current iteration of GPL isn't doing its intended job.
The GPL has had to grow and evolve. Internet services using GPL code to provide access to software without, technically, distributing it was a similar legal (but not moral) workaround, which was addressed with an update (the AGPL).
The author argues that we have reached another such point. They don't argue what exactly needs to be updated, or how.
They bring up a suggestion to make copyrightable the input to the LLM which is sufficient to create a piece of software, because in the current legal landscape, creating the prompt is deemed equivalent to creating the output.
You can't have your cake and eat it too.
A vibe-coded API implementation created by an LLM trained on open source, GPL licensed code can only be considered one of two things:
— Derivative work, and therefore, subject to the requirement to be shared under the GPL license (something the legal system disagrees with)
— An original work of the person who entered the prompt into the LLM, which is a transformative fair use of the training set (the current position of the legal system).
In the latter case, the input to the LLM (which must include a reference to the API) is effectively deemed to be equivalent to the output.
The vibe-coded app, the reasoning goes, isn't a photocopy of the training data, but a rendition of the prompt (even though the transformativeness came entirely from the machine and not the "author").
Personally, I don't see a difference between making a photocopy by scanning and printing, and by "reimplementing" API by vibe coding. A photocopy looks different under a microscope too, and is clearly distinguishable from the original. It can be made better by turning the contrast up, and by shuffling the colors around. It can be printed on glossy paper.
But the courts see it differently.
Consequently, the legal system currently decided that writing the prompt is where all the originality and creative value is.
Consequently, de facto, the API is the only part of an open source program that can be protected by copyright.
The author argues that perhaps it should be — to start a conversation.
As for who the beneficiaries of a change like that are — that, too, is not clear-cut.
The entities that benefit the most from LLM use are the corporations which can afford the compute.
It isn't that cheap.
What has changed since the first days of GPL is precisely this: the cost of implementing an API has gone down asymmetrically.
The importance of having an open-source compiler was that it put corporations and contributors to the commons on equal footing when it came to implementation.
It would take an engineer the same amount of time to implement an API whether they do it for their employer or themselves. And whether they write a piece of code for work or for an open-source project, the expenses are the same.
Without an open compiler, that's not possible. An engineer with access to the compiler at work would have an infinite advantage over one who doesn't have it at home.
The LLM-driven AI today takes the same spot. It's become the tool that software engineers can and do use to produce work.
And the LLMs are neither open nor cheap. Both creating them as well as using them at scale is a privilege that only wealthy corporations can afford.
So we're back to the days before the GNU C compiler toolchain was written: the tools aren't free, and the corporations have effectively unlimited access to them compared to enthusiasts.
Consequently, locking down the implementation of public APIs will asymmetrically hurt the corporations more than it does the commons.
This asymmetry is at the core of GPL: being forced to share something for free doesn't at all hurt the developer who's doing it willingly in the first place.
Finally, looking back at the old days ignores the reality. Back in the day, the proprietary software established the APIs, and the commons grew by reimplementing them to produce viable substitutes.
The commons did not even have its own APIs worth talking about in the early 1990s. But the commons grew way, way past that point since then.
And the value of the open source software is currently not in the fact that you can hot-swap UNIX components with open source equivalents, but in the entire interoperable ecosystem existing.
The APIs of open source programs are where the design of this enormous ecosystem is encoded.
We can talk about possible negative outcomes from pricing it.
Meanwhile, the outcome already happening is that a large corporation like Microsoft can throw a billion dollars of compute at "creating" MSLinux and refabricating the entire FOSS ecosystem under a proprietary license, enacting the Embrace, Extend, Extinguish strategy they never quite abandoned.
It simply didn't make sense for a large corporation to do that earlier, because it's very hard to compete with free labor of open source contributors on cost. It would not be a justifiable expenditure.
What the GPL accomplished in the past was ensuring that Embracing the commons led to Extending it without Extinguishing, via a Midas-touch clause. Once you embrace open source, you are it.
The author of the article asks us to think about how GPL needs to be modified so that today, embracing and extending open-source solutions wouldn't lead to commons being extinguished.
Which is exactly what happened in the case of the formerly-GPL library in question.
If you want to build a new world without this, we can't do it while we are supporting the very companies that are creating the problem. The more power you give them, the stronger they get and the weaker we become.
I think the focus needs to shift completely off of for-profit companies. Although I'm not sure how that is going to happen... lol
[citation needed]
Where does your confidence come from?
GPL itself was precisely the "intellectual property nonsense" adding which made FOSS (free as in freedom) software possible.
The copyright law was awfully broken in the 1980s too. Adding "nonsense" then was the only solution that proved viable.
Historically, nothing but adding "more IP nonsense" has ever worked.
>The real solution is to force AI companies to open up their models to all.
Sure. Pray tell how you would do that without some "intellectual property nonsense".
We don't exactly get to hold Sam Altman at gunpoint to dictate our terms.
>We need free as in freedom LLMs that we can run locally on our own computers
Oh, on that note.
LLMs take a fuckton of compute to train and to even run.
Even if all models were open, we're not at the point where it would create an equal playing field.
My home computer and my dev machine at work have the same specs. But I don't have a compute farm to run a ChatGPT on.
From the fact that copyright infringement is trivial and done at massive scales by pretty much everyone on a daily basis without people even realizing it. You infringe copyright every time you download a picture off of a website. You infringe copyright every time you share it with a friend. Everybody does stuff like this every single day. Nobody cares. It is natural.
> GPL itself was precisely the "intellectual property nonsense"
Yes. In response to copyright protection being extended towards software. It's a legal hack, nothing more. The ideal situation would have been to have no copyright to begin with. The corporation can copy your code but you can copy theirs too. Fair.
> Pray tell how you would do that without some "intellectual property nonsense".
Intellectual property is irrelevant to AI companies.
Intellectual property is built on top of a fundamental delusion: the idea that you can publish information and simultaneously control what people do with it. It's quite simply delusional to believe you can control what people do with information once it's out there and circulating. The tyranny required to implement this amounts to a totalitarian dictatorship.
If you want to control information, then your only hope is to not publish it. Like cryptographic keys, the ideal situation is the one where only a single copy of the information exists in the entire universe.
AI companies are not publishing any information. They are keeping their models secret, under lock and key. They need exactly zero intellectual property protection. In fact such protections have negative value to them since it restricts the training of their models.
> We don't exactly get to hold Sam Altman at gunpoint to dictate our terms.
Sure you do. The whole point of government is to do just that. Literally pass some kind of law that forces the corporations to publish the model weights. And if the government refuses to do it, people can always rise up.
> Even if all models were open, we're not at the point where it would create an equal playing field.
Hopefully we will be, in the future.
Respectfully, you have no idea what you are talking about here.
If "more freedom" is your goal, then this rewrite is inherently in that direction. It didn't "close" the old library down. The LGPL version remains under its license, for anyone to use and redistribute exactly as it always has. There is just now also an alternative that one can exercise different rights with. And that doesn't even get into the fact that "increased freedom" was never a condition of being allowed to clone a system from its interfaces in the first place. It might have been a fig leaf, but some major events in the legal landscape of all this came from closed reimplementations. Sony v. Connectix is arguably the defining case for dealing with cloning from public interfaces and behavior as it applies to emulators of all kinds, and Connectix Virtual Game Station was very much NOT an open source or free product.
But to go a step further, the larger idea of AI-assisted rewrites being "good", even if the human developers may have seen the original code, seems to broadly increase freedoms overall. Imagine how much faster WINE development can go now that everyone who has seen any Microsoft source code can just direct Claude to implement an API. Retro gaming and the emulation scene are sure to see a boost from people pointing AIs at any tests in source leaks and letting them go to town. No, our "foreparents" weren't competing with corporations with unlimited access to AI trained on their work; they were competing with corporations with unlimited access to the real hardware and schematics and specifications. The playing field has always been un-level, which was why fighting for the right to re-implement what you can see with your own eyes and measure with your own instruments was so important. And with the right AI tools, scrappy and small teams of developers can compete on that playing field in a way that previous developers could only dream of.
So no, I agree with the comment that you're responding to. The incredible mad dash to suddenly find strong IP rights very, very important now that it's the open source community's turn to see their work commoditized and used in ways they don't approve of is off-putting and, in my opinion, a dangerous road to tread that will hand back years of hard-fought battles in an attempt to stop the tides. In the end it will leave all of us in a weaker position while solidifying the hold large corporations have on IP in ways we will regret in the years to come.
Pretty sure no one (but me, anyway) saw this overt theft of IP, by ignoring IP law through redefinition, coming. Admittedly, I couldn't have articulated for you how capital would take skill, transfer it, and commoditize it in the form of pay-to-play data centers, but give me a break, I was a teenager/twenty-something at the time.