An interesting aspect of this, especially their blog post (https://malus.sh/blog.html ), is that it acknowledges a strain in our legal system I've been observing for decades, but don't think the legal system or people in general have dealt with, which is that generally costs matter.
A favorite example of mine is speed limits. There is a difference between "putting up a sign that says 55 mph and walking away", "putting up a sign that says 55 mph and occasionally enforcing it with expensive humans when they get around to it", and "putting up a sign that says 55 mph and rigidly enforcing it to the exact mph through a robot". Nominally, the law is "don't go faster than 55 mph". Realistically, those are three completely different policies in every way that matters.
We are all making a continual and ongoing grave error thinking that taking what were previously de jure policies that were de facto quite different in the real world, and thoughtlessly "upgrading" the de jure policies directly into de facto policies without realizing that that is in fact a huge change in policy. One that nobody voted for, one that no regulator even really thought about, one that we are just thoughtlessly putting into place because "well, the law is, 55 mph" without realizing that, no, in fact that never was the law before. That's what the law said, not what it was. In the past those could never really be the same thing. Now, more and more, they can.
This is a big change!
Cost of enforcement matters. The exact same nominal law that is very costly to enforce has completely different costs and benefits then that same law becoming all but free to rigidly enforce.
And without very many people consciously realizing it, we have centuries of laws that were written with the subconscious realization that enforcement is difficult and expensive, and that the discretion of that enforcement is part of the power of the government. Blindly translating those centuries of laws into rigid, free enforcement is a terrible idea for everyone.
Yet we still have almost no recognition that that is an issue. This could, perhaps surprisingly, be one of the first places we directly grapple with this in a legal case someday soon, that the legality of something may be at least partially influenced by the expense of the operation.
We should welcome more precise law enforcement. Imperfect enforcement is too easy for law enforcement officers to turn into selective enforcement. By choosing who to go after, law enforcement gets the unearned power to change the law however they want, enforcing unwritten rules of their choosing. Having law enforcement make the laws is bad.
The big caveat, though, is that when enforcement becomes more accurate, the rules and penalties need to change. As you point out, a rigidly enforced law is very different from one that is less rigorously enforced. You are right that there is very little recognition of this. The law is difficult to change by design, but it may soon have to change faster than it has in the past, and it's not clear how or if that can happen. Historically, it seems like the only way rapid governmental change happens is by violent revolution, and I would rather not live in a time of violent revolution...
The problem with precise law enforcement is that the legal system is incredibly complex. There's a tagline that ‘everybody's a criminal’; I don't know if that's necessarily true but I do definitely believe that a large number of ‘innocent’ people are criminals (by the letter of the law) without their knowledge. Because we usually only bother to prosecute crimes if some obvious harm has been done this doesn't cause a lot of damage in practice (though it can be abused), but if you start enforcing the letter of every law precisely it suddenly becomes the obligation of every citizen to know every law — in a de facto way, rather than just the de jure way we currently have as a consequence of ‘ignorance of the law is no excuse’. So an increase of precision in law enforcement must be preceded by a drastic simplification of the law itself — not a bad thing by any means, but also not an easy (or, perhaps, possible) task.
The reason speed limits make such a great example for these arguments is because they're a preemptive law. Technically, nobody is directly harmed by speeding. We outlaw speeding on the belief that it statistically leads to and/or is correlated with other harms. Contrast this to a law against assault or theft: in those kinds of cases, the law makes the direct harm itself illegal.
Increasing the precision of enforcement makes a lot more sense for direct-harm laws. You won't find anyone seriously arguing that full 100% enforcement of murder laws is a bad idea. It's the preemptive laws, which were often lazily enforced, especially when no real harm resulted from the action, where this all gets complicated. Maybe this is the distinction to focus on.
This unwritten distinction exists only to allow targeted enforcement in service of harassment and oppression. There is no upside (even if getting away with speeding feels good). We should strive to enforce all laws 100% of the time as that is the only fair option.
If a law being enforced 100% of the time causes problems then rethink the law (i.e. raise the speed limit, or design the road slower).
> If a law being enforced 100% of the time causes problems then rethink the law (i.e. raise the speed limit, or design the road slower).
Isn't this the point of the whole conversation we are having here?
Laws on copyright were not created for current AI usage on open source project replication.
They need to change, because if they are perfectly enforced by the letter, they result in actions that are clearly against the intent of the law itself.
The underlying problem is that the world changes too fast for the laws so be fair immediately
If speed limits were automated rigidly enforced 100% of the time, it would be impossible to drive.
>only to allow targeted enforcement in service of harassment and oppression
That's absurd hyperbole. A competent policeman will recognise the difference between me driving 90 km/h on a 80 km/h road because I didn't notice the sign. And me driving 120 km/h out of complete disregard for human life. Should I get a fine for driving 90? Yea, probably. Is it a first time offence? Was anyone else on the road? Did the sign get knocked down? Is it day or night? Have I done this 15 times before? Is my wife in labour in the passenger seat? None of those are excuses, but could be grounds for a warning instead.
Precise law enforcement would motivate political will to proactively law change to be more precise and appropriate, or tuned, to the public sentiment.
Imprecise law enforcement enables political office holders to arbitrarily leverage the law to arrest people they label as a political enemy, e.g. Aaron Swartz.
If everyone that ever shared publications outside the legal subscriber base was precisely arrested, charged, and punished, I dont think the punishment amd current legal terrain regarding the charges leveraged against him would have lasted.
"Law is about justice" is one of those things a good professor gets every 1L to raise their hands in agreement to before spending the next semester proving why that's 100% not the case.
The existing laws are rarely well specified enough for precise enforcement, often on purpose.
You cannot have precise enforcement with imprecise laws. It’s as simple as that.
The HN favorite in this respect is “fair use” under copyright. It isn’t well specified enough for “precise enforcement”. How do you suggest we approach that one?
Imperfect enforcement is a feature as often as it is a bug. You can't make "antisocial behavior" in general illegal but you can make certain behaviors (loitering, public intoxication) illegal and selectively enforce against only those who are behaving in an antisocial manner. Of course the other edge of this sword is using this discretion to blanket discriminate against racial or class groups.
Dean Ball made this exact point on the Ezra Klein show a few days ago. I always thought laws would get more just with perfect enforcement -- the people passing mandatory sentencing laws for minor drug offenses would think twice if their own children, and not just minorities and unfavourable groups, were subject to the same consequences (instead of rehab or community service).
But if I've learned anything in 20 years of software eng, it's that migration plans matter. The perfect system is irrelevant if you can't figure out how to transition to it. AI is dangling a beautiful future in front of us, but the transition looks... Very challenging
> I always thought laws would get more just with perfect enforcement
As Edward Snowden once argued in an AMA on Reddit, a zero crime rate is undesirable for democratic society because it very likely implies that it's impossible to evade law enforcement. The latter, however, means that people won't be able to do much if the laws ever become tyrannic, e.g. due to a change in power. In other words, in a well-functioning democratic society it must always be possible (in principle) to commit a crime and get away.
> Dean Ball made this exact point on the Ezra Klein show a few days ago. I always thought laws would get more just with perfect enforcement -- the people passing mandatory sentencing laws for minor drug offenses would think twice if their own children, and not just minorities and unfavourable groups, were subject to the same consequences (instead of rehab or community service).
The problem with perfect enforcement is it requires the same kind of forethought as waterfall development. You rigidly design the specification (law) at the start, then persist with it without deviation from the original plan (at least for a long time). In your example, the lawmakers may still pass the law because they don't think of their kids as drug users, and are distracted by some outrage in some other area.
They do, but letting mob rule decide criminal sanction is beyond fucked. See: Any discussion thread of literally any criminal being sentenced, receiving parole, or better yet, committing any crime after being released for serving a different one.
This is of course assuming that politicians aren't largely duplicitious and actually believe in a word they say. I grew up in Indonesia, and the number of politicians who were extremely anti-porn getting caught watching porn in parliament is frankly staggering, yet alone the ones who are pro death penalty for drugs caught as being part of massive drug smuggling rings.
You raise an interesting point: One question that I think about developing countries: Most of them have higher perception of corruption compared to highly developed (OECD) nations. How do countries realistically reduce corruption? Korea went from an incredibly poor country in 1960 to a wealthy country in 2010. I am sure they dramatically reduced corruption over this time period... but how? Another example, in the 1960s/1970s, Hongkong dramatically increased the pay for civil servants (including police officers) to reduce corruption. (It worked, mostly.)
I live in a developing country. What I find is that the corruption is generally easier to navigate here that it was in the USA. The corruption in the USA is much more entrenched, in the form of regulatory capture. At the local level this can look like a local ordinance where “only a contractor with xy and z (only one of which is needed for the job) can bid, favoring a specific contractor. Here you just figure out compliance with the person in charge.
Corruption is eliminated by properly aligning incentives. Capitalism is also all about properly aligning incentives. Moving to a more capitalism-heavy system usually causes countries to get much richer.
Eastern Europe went through a similar transition. Before the iron curtain fell, the eastern bloc operated on favors more than it operated on money. This definitely isn't the case any more.
How many times have we seen politicians advocate for laws against something, then do a 180 when one of their kids does it? Even if you had that system, I don't think it would work the way you say. People are dumb and politicians are no exception.
Many governments around the world have entities to which you can write a letter, and those entities are frequently obligated to respond to that letter within a specific time frame. Those laws have been written with the understanding that most people don't know how to write letters, and those who do, will not write them unless absolutely necessary.
This allows the regulators to be slow and operate by shuffling around inefficient paper forms, instead of keeping things in an efficient ticket tracking system.
LLMs make it much, much easier to write letters, even if you don't speak the language and can only communicate at the level of a sixth-grader. Imagine what happens when the worst kind of "can I talk to your supervisor" Karen gets access to a sycophantic LLM, which tells her that she's "absolutely right, this is absolutely unacceptable behavior, I will help you write a letter to your regulator, who should help you out in this situation."
Privacy protection has the exact same issue. Wiretapping laws were created at the time there was literally a detective listening to a private phone conversation as it was happening. Now we record almost everything online, and processing it is trivial and essentially free. The safeguards are the same but the scale of privacy invasion is many orders of magnitude different.
> Realistically, those are three completely different policies in every way that matters.
I think that the failure to distinguish them is due to a really childish outlook on law and government that is encouraged by people who are simple-minded (because it is easy and moralistic) and by people who are in control of law and government (because it extends their control to social enforcement.)
I don't think any discussion about government, law, or democracy is worth anything without an analysis of government that actually looks at it - through seeing where decisions are made, how those decisions are disseminated, what obligations the people who receive those decisions have to follow them and what latitude they have to change them, and ultimately how they are carried out: the endpoint of government is the application of threats, physical restraint, pain, or death in order to prevent people from doing something they wish to do or force them to do something they do not wish to do, and the means to discover where those methods should be applied. The police officer, the federal agent, the private individual given indemnity from police officers and federal agencies under particular circumstances, the networked cameras pointed into the streets are government. Government has a physical, material existence, a reach.
Democracy is simpler to explain under that premise. It's the degree to which the people that this system controls control the decisions that this system carries out. The degree to which the people who control the system are indemnified from its effects is the degree of authoritarianism. Rule by the ungoverned.
It's also why the biggest sign of political childishness for me are these sort of simple ideas of "international law." International law is a bunch of understandings between nations that any one of them can back out of or simply ignore at any time for any reason, if they are willing to accept the calculated risk of consequences from the nations on the other side of the agreement. It's like national law in quality, but absolutely unlike it in quantity. Even Costa Rica has a far better chance of ignoring, without any long-term cost, the mighty US trying to enforce some treaty regulation than you as an individual have to ignore the police department.
Laws were constructed under this reality. If we hypothetically programmed those laws into unstoppable Terminator-like robots and told them to enforce them without question it would just be a completely different circumstance. If those unstoppable robots had already existed with absolute enforcement, we would have constructed the laws with more precision and absolute limitations. We wouldn't have been able to avoid it, because after a law was set the consequences would have almost instantly become apparent.
With no fuzziness, there's no selective enforcement, but also no discretion (what people call selective enforcement they agree with.) If enforcement has blanket access and reach, there's also no need to make an example or deter. Laws were explicitly formulated around these purposes, especially the penalties set. If every crime was caught current penalties would be draconian, because they implicitly assume that everyone who got caught doing one thing got away with three others.
which, as I recall it, suggested that the copyright law effectively considered that it was good that there was a way around copyright (with reverse engineering and clean-room implementation), and also good that the way around copyright required some investment in its own right, rather than being free, easy, and automatic.
I think Samuelson and Scotchmer thought that, as you say, costs matter, and that the legal system was recognizing this, but in a kind of indirect way, not overtly.
> Cost of enforcement matters. The exact same nominal law that is very costly to enforce has completely different costs and benefits then that same law becoming all but free to rigidly enforce.
Hey, I really like this framing. This is a topic that I've thought about from a different perspective.
We have all kinds of 18th and 19th century legal precedents about search, subpoenas, plain sight, surveillance in public spaces, etc... that really took for granted that police effort was limited and that enforcement would be imperfect.
But they break down when you read all the license plates, or you can subpoena anyone's email, or... whatever.
Making the laws rigid and having perfect enforcement has a cost-- but just the baseline cost to privacy and the squashing of innocent transgression is a cost.
(A counterpoint: a lot of selective law enforcement came down to whether you were unpopular or unprivileged in some way... cheaper and automated enforcement may take some of these effects away and make things more fair. Discretion in enforcement can lead to both more and less just outcomes).
This is my problem with Americans and their "but the constitution" arguments.
The U.S. constitution has been written in an age before phones, automatic and semi-automatic rifles (at least in common use), nuclear weapons, high-bandwidth communications networks that operate at lightning speed, mass media, unbreakable encryption and CCTV cameras.
The problem is that "all sides" agree that if the constitution was written today, surprise, surprise, it'd totally agree with them; the gun control people are sure that the 2nd wouldn't cover military weapons, the gun lovers are sure that it would mandate tanks for everyone.
But since having 300 million people have a detailed, nuanced discussion about anything is impossible, everyone works at the edges.
"The future of software is not open. It is not closed. It is liberated, freed from the constraints of licenses written for a world in which reproduction required effort, maintained by a generation of developers who believed that sharing code was its own reward and have been comprehensively proven right about the sharing and wrong about the reward."
This applies to open-source but also very well to proprietary software too ;) Reversing your competitors' software has never been easier!
I think this distinction also gets at some issue with things like privacy and facial recognition.
There’s the old approach of hanging a wanted poster and asking people to “call us if you see this guy”. Then there’s the new approach matching faces in a comprehensive database and camera networks.
The later is just the perfect, efficient implementation of the former. But it’s… different somehow.
This has also been a common theme in recent decades with respect to privacy.
In the US, the police do not generally need a warrant to tail you as you go around town, but it is phenomenally expensive and difficult to do so. Cellphone location records, despite largely providing the same information, do require warrants because it provides extremely cheap, scalable tracking of anyone. In other words, we allow the government to acquire certain information through difficult means in hopes that it forces them to be very selective about how they use it. When the costs changed, what was allowed also had to change.
I think of this in reverse. It's legal for the government to track mail - who sent a message, and who it's going to. They have access to the "outside of the envelope". But it's not legal for them to read the message inside.
And this same principle allows them to build massive friend/connection networks of everyone electronically. The government knows every single person you've communicated with and how often you communicate with them.
The answer to this is just changing the law as enforcement becomes different, instead of leaning on the rule of a few people to determine what the appropriate level of enforcement is.
To do this, though, you're going to have to get rid of veto points! A bit hard in our disastrously constitutional system.
Seconded, thirded, fourthed. I spend a lot of time thinking about how laws, in practice, are not actually intended to be perfectly enforced, and not even in the usual selective-enforcement way, just in the pragmatic sense.
Absolutely! We're not all making that error, I've been venting about it for years.
"Costs matter" is one way to say it, probably a lot easier to digest and more popular than the "Quantity has a quality all it's own" quote I've been using, which is generally attributed to Stalin which is a little bit of a problem.
But it's absolutely true! Flock ALPRs are equivalent to a police officer with binoculars and a post-it for a wanted vehicle's make, model, and license plate, except we can put hundreds of them on the major intersections throughout a city 24/7 for $20k instead of multiplying the police budget by 20x.
A warrant to gather gigabytes of data from an ISP or email provider is equivalent to a literal wiretap and tape recorder on a suspect's phone line, except the former costs pennies to implement and the later requires a human to actually move wires and then listen for the duration.
Speed cameras are another excellent example.
Technology that changes the cost of enforcement changes the character of the law. I don't think that no one realizes this. I think many in office, many implementing the changes, and many supporting or voting for those groups are acutely aware and greedy for the increased authoritarian control but blind to the human rights harms they're causing.
> We are all making a continual and ongoing grave error
> Blindly translating those centuries of laws into rigid, free enforcement is a terrible idea for everyone.
I understand your point that changing the enforcement changes how the law is "felt" even though on the paper the law has not changed. And I think it makes sense to review and potentially revise the laws when enforcement methods change. But in the specific case of the 55 mph limit, would the consequences really be grave and terrible if the enforcement was enforced by a robot, but the law remained the same?
OK, but that would be a consequence of the specific enforcement method, not a consequence the law becoming de facto stricter due to stricter enforcement.
For one thing, the speed limit is intentionally set 5-10mph too low, specifically to make it easier to prove guilt when someone breaks the "real" speed limit.
Anyway. I come from the UK where we've had camera based enforcement for aeons. This of course actually results in people speeding and braking down to the limit as they approach the camera (which is of course announced loudly by their sat nav). The driving quality is frankly worse because of this, not better, and it certainly doesn't reduce incidence of speeding.
Of course the inevitable car tracker (or average speed cameras) resolve this pretty well.
The issue with strictly enforcing the speed limit on roads is that sometimes, people must speed. They must break the law. Wife giving birth, rushing a wounded person to the ER, speeding to avoid a collision, etc.
If we wanted to strictly enforce speed limits, we would put governors on engines. However, doing that would cause a lot of harm to normal people. That's why we don't do it.
Stop and think about what it means to be human. We use judgement and decide when we must break the laws. And that is OK and indeed... expected.
> sometimes, people must speed. They must break the law. Wife giving birth, rushing a wounded person to the ER, speeding to avoid a collision
I would argue that only the last one is a valid reason because it's the only one where it's clear that not speeding leads to direct worse consequences.
Speed limits don't exist just to annoy people. Speeding increases the risk of accident and especially the consequences of an accident.
I don't trust people to drive well in a stressful situation, so why would it be a good idea to let them increase the risk by speeding.
The worst part is that it's not even all that likely that the time saved by speeding ends up mattering.
The “wife giving birth” exception for speeding is always so amusing to me.
In the U.S., the average distance from a hospital is 10 miles (in a rural area). Assuming 55 mph speed limits, that means most people are 11 minutes from a hospital. Realistically, “speeding” in this scenario probably means something like 80 mph, so you cut your travel time to 7.5 minutes.
In other words, you just significantly increased your chances of killing your about to be born kid, your wife, yourself, and innocent bystanders just to potentially arrive at a hospital 210 seconds sooner.
Edit: the rushing someone to an ER scenario is possibly more ridiculous, since you can’t teleport yourself, and if the 3.5 minutes in the above scenario would make a difference, then driving someone to the ER is a significantly worse option than starting first aid while waiting for EMTs to arrive.
E(accident due to going faster) vs E(worse outcome due to waiting)
Your argument only makes sense if the only possible bad thing is a car accident -- to make my point clearer, would you take a 1% chance of losing 100$ to avoid a 50% chance of losing 10$?
Depends how much money you have, but it can be a perfectly rational decision.
No, that's not the reason why people speed. True emergencies are a rounding error.
The real reason is that speed limits are generally lower than the safe speed of traffic, and enforcement begins at about 10mph over the stated limits.
People know they can get away with it.
If limits were raised 15% and strictly enforced, it would probably be better for society. Getting a ticket for a valid emergency would be easy to have reversed.
The answer is not a governor but a speed camera, they have them all over in Brazil and they send you a ticket if you speed through them. Put an exception in the law for emergencies, provide an appeal process, and voila.
Not exactly the same but at least in Spain, the cost of constructing a new building subject to all the regulations makes them completely unafforfable for low salaries.
(There are other problems, I know, but the regulations are crazy).
An interesting read, however I'd like to know how to stop websites from screwing around with my scrollbars. In this case it's hidden entirely. Why is this even a thing websites are allowed to do - to change and remove browser UI elements? It makes no sense even, because I have no idea where I am on the page, or how long it is, without scrolling to the bottom to check. God I miss 2005.
It took me a minute to recognize this as satire (thank you HN comments). However it does actually make sense - maybe this could be a way for OSS devs to get paid.
What if we did build a clean room as a service but the proceeds from that didn't go to the "Malus.sh" corporation, but to the owners / maintainers of the OSS being implemented. Maybe all OSS repos should switch to AGPL or some viral license with link to pay-me-to-implement.com. Companies that want to use that package go get their own custom implementation that is under a license strictly for that company and the OSS maintainer gets paid.
I wonder what the MVP for such a thing would look like.
edit: If anyone wants to brainstorm about this with me drop me a note (email in profile)
I am only 50% certain that your idea is expanding on the satire, if not: project owners can provide dual licensing. I'm sorry if you are serious and didn't understand you.
LOL. Same here. But the footer disclaimer and testimonials gave it away immediately:
> "We had 847 AGPL dependencies blocking our acquisition. MalusCorp liberated them all in 3 weeks. The due diligence team found zero license issues. We closed at $2.3B." - Marcus Wellington III, Former CTO, Definitely Real Corp (Acquired)
> This service is provided "as is" without warranty. MalusCorp is not responsible for any legal consequences, moral implications, or late-night guilt spirals resulting from use of our services.
"I used to feel guilty about not attributing open source maintainers. Then I remembered that guilt doesn't show up on quarterly reports. Thank you, MalusCorp."
◆
Chad Stockholder
Engineering Director, Profit First LLC
Certain views of OSS and its relation to commercial software always seemed to be fraught with highly voluntarist and moralizing attitudes and an intellectual naivete.
Don't believe in hell but I were I hope they'd be a special place for them.
It's like... revert patent troll? I'm not even sure I get it but the wording "liberation from open source license obligations." just wants to make me puke. I also doubt it's legit but I'm not a lawyer. I hope somebody at the FSF or Apache foundation or ... whomever who is though will clarify.
"Our proprietary AI systems have never seen" how can they prove that? Independent audit? Whom? How often?
This is satire, but the very notion of open source license obligations is meaningless in context. FLOSS licenses do not require you to publish your purely internal changes to the code; any publication happens by your choice, and given that AI can now supposedly engineer a clean-room reimplementation of any published program whatsoever, publishing your software with a proprietary copyright isn't going to exactly save you either.
No, no, some open source licenses require you to publish internal changes. Eg some are explicitly written that you have to publish even when you 'only' use the changes on your own servers. (Not having to publish that was seen as a loophole for cloud companies to exploit.)
Those clauses exclude those licenses from some very important definitions of free/open-source software. For example they would fail the Desert Island Test for the Debian Free Software Guidelines.
The Debian project guidelines are not the ultimate arbiter of what is and isn't free software, they are just some of many useful guidelines to consider. Another useful guideline is that the user shall have freedom.
You are either talking about a license nobody is using (at least I've never heard of it) or misconstruing what the AGPL obligates you to do.
I am going to assume it's the latter.
If you in your house take an AGPL program, host it for yourself, and use it yourself, nothing in the AGPL obligates you to publish the source changes.
In fact, even if you take AGPL software and put it behind a paywall and modify it, the only people who the license mandates you to provide the source code for are the people paying.
The AGPL is basically the GPL with the definition of "user" broadened to include people interacting with the software over the network.
And the GPL, again, only requires you to provide the source code, upon request, to users. If you only distribute GPL software behind a paywall, you personally only need to give the source to people paying.
Although in both these cases, nothing stops the person receiving that source code from publishing it under its own terms.
The point he's making is that who is going to actually enforce that? If I take something that has that license and make changes to it, who is going to know? That's the underlying premise here.
"given that AI can now supposedly engineer a clean-room reimplementation of any published program whatsoever"
I'm missing something there, that's precisely what I'm arguing again. How can it do a clean-room reimplementation when the open source code is most likely in the training data? That only works if you would train on everything BUT the implementation you want. It's definitely feasible but wouldn't that be prohibitively expensive for most, if not all, projects?
If I hired a human to write a clone of GNU grep to be released under a MIT license, and he wrote one that was performed exactly the same as GNU grep, it would be impossible for me to prove that the guy I hired didn't look at the GNU code.
But we'd be able to look at his clone code and see it's different, with different algorithms, etc. We could do a compare and see if there are any parts that were copied. It's certainly possible to clone GNU grep without copying any code and I don't think it would fail any copyright claims just because the GNU grep code is in the wild.
If that was the case, the moment any code is written under the GPL, it could never be reimplemented with a different license.
So instead of a human cloner, I use AI. Sure, the AI has access to the GPL code - every intelligence on the planet does. But does that mean that it's impossible to reimplement an idea? I don't think so.
What you argue is a non-sequitur and regardless of case law really makes no sense when the spirit of the action is to replicate something. Reasonable people would say that replicating and disseminating code with the express purpose of avoiding copyright is a violation of copyright and why it exists in the first place.
Just because something is trivial enough to copy does not mean it was trivial to conceive of and codify. Mens rea really does matter when we are talking about defrauding intellectual property holders and stealing their opportunity.
"Reasonable people would say that replicating and disseminating code with the express purpose of avoiding copyright is a violation of copyright and why it exists in the first place."
But then how can the FSF reimplement AT&T utilities? The FSF didn't invent grep. They wrote a new version of it from scratch under a different license.
Am I right in thinking that is not even "clean room" in the way people usually think of it, e.g. Compaq?
The "clean room" aspect for that came in the way that the people writing the new implementation had no knowledge of the original source material, they were just given a specification to implement (see also Oracle v. Google).
If you're feeding an LLM GPL'd code and it "creates" something "new" from it, that's not "clean room", right?
At the end of the day the supposed reimplementation that the LLM generates isn't copyrightable either so maybe this is all moot.
> If you're feeding an LLM GPL'd code and it "creates" something "new" from it, that's not "clean room", right?
I didn’t RTFA but I suppose that by clean room here they mean you feed the code to ”one” LLM and tell it to write a specification. Then you give the specification to ”another” LLM and tell it to implement the specification.
Satire is too dangerous to be presented outside of its community. This honestly should've been left within FOSDEM.
It's great within the context of people who understand it, enlightening even. Sparks conversations and debates. But outside of it ignorance wields it like a bludgeon and dangerous to everyone around them. Look at all the satirical media around fascism, if you knew to criticize you could laugh, but for fascists it's a call to arms.
Try to take the stance of someone who doesn't really know too much about open source other than it's a nuisance to use, this is a great idea! I wanted to use this tool that corporate said we couldn't touch, but now I can!
Maybe I’m missing something but big corps do this, right? I legitimately expect folks like Musk and Zuckerberg to say these things. I get why that’s exactly the reason it’s satire but it’s a little too close to the truth for me to chuckle about it.
If people lack sense of humor or satire, even if pathologically, well, too bad for them. Why should the rest be denied of that satire? It's not harming anyone at all.
Unfortunately it's not too bad for them, it's too bad for everyone they're around. They aren't the ones that lose out when we start dismantling open source communities.
I didn't see it was satire (having only skimmed the site) until scrolling through the comments and seeing this fake review being quoted. That's when I went "surely not", checked the site, saw it was really there, and was quite relieved this is not yet an actual thing!
As any etymology/Latin nerd will tell you, "this name" (MalusCorp) literally translates to EvilCorp, everything about the site is over the top satire. I know Poe's law and all that, but I'm looking askew at commenters in this thread who fail to realize it as either only reading the headline, or are AI-controlled.
E.g. Palantir, the surveillance analytics company named after the magic orb that purports to let you remotely view anything you want, but actually allows its creator to view you while manipulating you by selectively showing some things and not others.
I really got fooled here for a second, but the unfortunate reality is that people will try this soon, and someone will have to litigate this, if open source is to survive, which will take years and millions of dollars to resolve
Not just "tried"; the current state is that they've done so and are ignoring people telling them they cannot. The "destroy as an example to others" phase hasn't finished yet, but hopefully they'll get sufficient backlash from the projects they supposedly did this to work with to deter future attempts. e.g. they supposedly did this in order to make it part of the Python standard library, so hopefully the response from Python is a massive WTF and "nope".
In fairness to the original mythos that that particular family of awful companies has misused: the palantiri were in fact designed purely for far-seeing, and Sauron wasn't the creator of them, he just got his hands on one and corrupted it into a tool for manipulation.
It also shows why this approach is questionable. Opus 4.6 without tool use or web access can provide chardets source code in full from memory/training data (ironically, including the licensing header): https://gist.github.com/yannleretaille/1ce99e1872e5f3b7b133e...
This is extremely good satire. Question is, why hasn't anyone done this for real? There's enough people with the right knowledge and who would love to destroy open source for personal gain. Is it that this kind of service would be so open to litigation that it would need a lot of money upfront? Or is someone already working on this, and we're just living out the last good days of OSS?
What would be the incentive for someone to do this for real?
We all have access to SOTA LLMs. If I want a "clean room" implementation of some OSS library, and I can choose between paying a third party to run a script to have AI rebuild the whole library for me and just asking Claude to generate the bits of the library I need, why would I choose to pay?
I think this argument applies to most straightforward "AI generated product" business ideas. Any dev can access a SOTA coding model for $20p/m. The value-add isn't "we used AI to do the thing fast", it's the wrapping around it.
Maybe in this case the "wrapping" is that some other company is taking on the legal risk?
There's a lot of things you could do to be malicious towards other people with minimal effort, yet strangely few people do it. Virtually everyone has morals, and most people's are quite compatible with society (hence we have a society) even if small perturbations in foundational morals sometimes lead to seemingly large discrepancies in resultant actions
You need the right kind of person, in the right life circumstances, to have this idea before it happens for real. By having publicity, it becomes vastly more likely that it finds someone who meets the former two criteria, like how it works with other crime (https://en.wikipedia.org/wiki/Copycat_crime). So thanks, Malus :P
Also, there's a difference between "willing to do a bad thing for money" and "actively searching out a bad thing, then proactively building a whole company out of it in the hopes of making money."
It's the difference between a developer taking a job at Palantir out of college because nobody had a better offer, and a guy spending years in his basement designing "Immigrant Spotter+" in the hopes of selling it to the government. Sure, they're both evil, but lots of people pick the first thing, and hardly anybody does the second.
It's an inevitable outcome of automatic code generation that people will do this all the time without thinking about it.
Example: you want a feature in your project, and you know this github repo implements it, so you tell an AI agent to implement the feature and link to the github repo just for reference.
You didn't tell the agent to maliciously reimplement it, but the end result might be the same - you just did it earnestly.
The bottleneck is trust and security. I'd rather defenestrate 3rd party libraries with a local instance of copilot than send all my secret sauce to some cloud/SaaS system.
Put differently, this system already exists and is in heavy use today.
For each project you want to rip off, you'd have to first train an entirely new LLM on all sources except for the target project. Prohibitively expensive.
Most LLMs are trained on a lot of the source code for many open-source projects. This 'project' has the whole song-and-dance about never seeing the source code and separating the system to skirt around legal trouble. Why didn't anyone do that yet?
If you’re referring to Thaler v. Perlmutter, that is not binding precedent nationwide, only in courts under the D.C. Circuit. And it only applies to “pure” AI-generated works; it did not address AI-assisted works, which seem very likely to be copyrightable.
If I want to clone some GPL clone into a MIT license, if it ends up in the public domain because it can't be copyrighted, what do I care? I've still got the code I want without the GPL.
> If any of our liberated code is found to infringe on the original license, we'll provide a full refund and relocate our corporate headquarters to international waters.*
I love it. Brilliant satire that foreshadows the future.
On a quick glance, or skim read, you could be excused for believing this is real, but they drop just enough nuggets throughout that by the end there is no ambiguity.
Really helps illustrates how realistic this could be.
I first encountered the concept of "clean room" in the context of Sean Lahman's free baseball stats database. While technically baseball stats are free, their compiling and manner of presentation in any given format may be claimed as proprietary by any particular provider. And so there's an extensive volunteer effort from baseball fans to "clean room" source them from independent sources such that they are verifying the stats independently of their provenance as a legally permitted basis for building out the database.
I even recall Baseball Mogul relied on the Lahman DB for a period of time. It does make me wonder if we'll see more of that.
There are two teenagers who learned about Malus in the last hour and have started figuring out how to actually build it, right now. They will not cite their source in their IPO statements.
it is straightforward to build this for real, here is my nearly one-shotted tldraw clone from a couple of weeks ago, https://x.com/c_pick/status/2028669568403578931 - the implementation side never saw the code, only the spec (in reality it did see the tldraw code in its training data, but you can't escape that anymore)
I wonder about this training data. There's so much profit from open source code in training data, actually the most of the code it was taught was open source, shouldn't it be then free? Or at least open weight?
At least you think that this is satire, until the author receives a DMCA from one of the big corps saying that he leaked the transcript of their last meeting
I don't know - if you upload a package.json with any dependencies that map to real npmjs.com packages, it does lead you to a Stripe payment page which appears to be real... and it appears you'd be sending real money.
W.r.t. intent, yes. But w.r.t. content, we are long past a situation where it is unrealistic enough to function as satire.
While such tactics would render certain OSS software licenses absurd, the tactic itself, as a means to get around them, is entirely sound. It just reveals the flawed presupposition of such licenses. And I'm not sure there is really any way to patch them up now.
It would also entirely obviate the need for those very same OSS licenses, if LLMs can simply do a clean-room reimplementation of any copywritten software whatsoever.
This is satire but this is where things are heading. The impact on the OSS ecosystem is probably not a net positive overall, but don't forget that this also applies to commercial software as well.
There will be many questions asked, like why buy some SaaS with way too many features when you can just reimplement the parts you need? Why buy some expensive software package when you can point the LLM into the binary with Ghidra or IDA or whatever then spend a few weeks to reverse it?
I was discussing that very point yesterday with a colleague after telling him of recent events. I pointed out that leaning on copyright/copyleft for software has always been a risky move.
Sounds like my CTO. Overuse of LLMs in c-suites is like overuse of weed by teenagers - it may not cause delusions, but it sure seems to make them worse.
Actually I have been told that replacements to (restricted subsets of) open source libraries, generated by LLM’s, vendored next to our code using the dependency, cannot be vulnerable since they don’t have cve’s, and therefore they don’t ever have to be maintained.
That’s how deep we are in neoliberal single truth shit now
> Our proprietary AI systems have never seen the original source code.
For this to be plausible satire, they need to show how they've trained their models to code, without mit, apache, bsd or GPL/agpl code being in the training set...
I know this is satire, but I have an adjacent problem I could use help with. In my company, we have some legacy apps that run, but we no longer have the source, any everyone that worked on them has probably left the planet.
We need to replatform them at some point, and ideally I'd like to let some agents "use" the apps as a means to copy them / rebuild. Most of these are desktop apps, but some have browser interfaces. Has anyone tried something like this or can recommend a service that's worked for them?
I have actually very convincingly recreated a moderately complex 70s-era mainframe app by having an LLM reimplement it based on existing documentation and by accessing the textual user interface.
The biggest trick is that you need to spend 75% of your time designing and building very good verification tools (which you can do with help from the LLM), and having the LLM carefully trace as many paths as possible through the original application. This will be considerably harder for desktop apps unless you have access to something like an accessibility API that can faithfully capture and operate a GUI.
But in general, LLM performance is limited by how good your validation suite is, and whether you have scalable ways to convince yourself the software is correct.
Interested to keep updated on this point. As a consultant, I've worked on transformation of legacy applications so this would help me greatly as well. We've worked on pretty archaic systems where no one knows how the system works even if we have the source code.
I've done a little bit of this and Claude is pretty great. Take the app and let Claude run wild with it. It does require you to be relatively familiar with the app as you may need to guide it in the right direction.
I was able to get it to rebuild and hack together a .NET application that we don't have source for. This was done in a Linux VM and it gave me a version that I could build and run on Windows.
We're past the point of legacy blackbox apps being a mystery. Happy to talk more, my e-mail is available on my profile.
Good idea, but as several comments here suggest, the time when this sort of thing could be taken as satire is gone. I promise you there are multiple people here thinking that this is a good idea. I predict that within a year we will see a service that does exactly this.
This is essentially 'License Laundering as a Service.' The 'Firewall' they describe is an illusion because the contamination happens at the training phase, not the inference phase. You can't claim independent creation when your 'independent developer' (the commercial LLM) already has the original implementation's patterns and edge cases baked into its weights.
In order to really do this, they would need to train LLMs from scratch that had no exposure whatsoever to open source code which they may be asked to reproduce. Those models in turn would be terrible at coding given how much of the training corpus is open source code.
This service is provided "as is" without warranty. MalusCorp is not responsible for any legal consequences, moral implications, or late-night guilt spirals resulting from use of our services.
"Our lawyers estimated $4M in compliance costs. MalusCorp's Total Liberation package was $50K. The board was thrilled. The open source maintainers were not, but who cares?"
The solution here seems to be to impose some constraint or requirement which means that literal copying is impossible (remember, copyright governs copies, it doesn't govern ideas or algorithms - that would be 'patents', which essentially no open source software has) or where any 'copying' from vaguely remembered pretraining code is on such an abstract indirect level that it is 'transformative' and thus safe.
For example, the Anthropic Rust C compiler could hardly have copied GCC or any of the many C compilers it surely trained on, because then it wouldn't have spat out reasonably idiomatic and natural looking Rust in a differently organized codebase.
Good news for Rust and Lean, I guess, as it seems like everyone these days is looking for an excuse to rewrite everything into those for either speed or safety or both.
> copyright governs copies, it doesn't govern ideas or algorithms
The second part is true. The first is a little trickier. The copyright applies to some fixed media (text in this case) rather than the idea expressed, but the protections extend well beyond copies. For example, in fiction, the narrative arc and "arrangement" is also protected, as are adaptations and translations.
If you were to try and write The Catcher in the Rye in Italian completely from memory (however well you remember it) I believe that would be protected by copyright even if not a single sentence were copied verbatim.
Obviously satire, but it will clearly be what happens in the future (predicting here, I'm not endorsing this practice). We can scratch train a new LLM on code generated from "contaminated" LLMs. We can then audit all the training data used and demonstrate that the original source wasn't in the training data. Therefore the cleanroom implementation holds. Current LLM training is relying less and less on human generated code. Just look at the open source models from China. They rely heavily on distilling from other models. One additional point. Exposure to the original source isn't enough to show infringement. Linus looked at UNIX source before writing linux.
I think this site is either satire, or serious but with a certain kind of humor in which both they and the reader know they're lying (but it's in everyone's interest to play along).
They do say this:
> Is this legal? / our clean room process is based on well-established legal precedent. The robots performing reconstruction have provably never accessed the original source code. We maintain detailed audit logs that definitely exist and are available upon request to courts in select jurisdictions.
Unless they're rejecting almost all of open source packages submitted by the customer, due to those packages being in the training set of the foundation model that they use, this is really the opposite of cleanroom.
This time it's satire, but I bet someone will offer exactly that for real in the next few days. The idea is unethical but far too lucrative from a business perspective.
Often OSS is used not because you want the software, but the software and the upkeep. So even with such a service, you're now just taking code in-house that you have to maintain as well.
…scanning… …fuming… …blood pressure rising… sees a quote attributed to “Chad Stockholder
Engineering Director, Profit First LLC” …oh phew, thank god for that. I actually believed this could be real for a moment!
I find surprising that the polemic I heard more talking, seems to be in the open source to close source direction.
It seems to me, that the more relevant part of this new development, for the software industry, it's a teenager working in the weekend with a LLM and making a functional clone of Autocad, for instance.
> You have been so generous, so unreasonably, almost suspiciously generous, that you have made it possible for an entire global economy to run on software that nobody technically owns, maintained by people that nobody technically employs, governed by licenses that nobody technically reads. It is a miracle of human cooperation. It is also, from a fiduciary standpoint, completely insane.
The issue is how do you interact with other industries/trades who protect their profit making potential.
Ok great - all software and networks are "free." How do you pay for Doctors and Plumbers and Electricians whose earnings are legally protected by the state but whose skill bases are also freely available to be used within the margin of error of a professional or a layman?
Issues like this are great to have conversations about, but if people don't start broadening the scope very quickly, it just turns into the IT/CS worker's worth going to 0 in a world where others worth are protected. And history states, if only 1 group sees the threat, the remaining trades/industries will let it die.
It's not clear to me what your argument has to do with the license laundering service that Malus (Malice?) is offering. Their stealing from the digital commons does nothing to address paying Doctors and Plumbers and Electricians.
It's directed at the person I replied to. It's not directed at the top level OP or Malus which is hilarious, monetized satire.
Focusing overly on corporate structures or specific skills tends to miss the point of how value is assigned in a capitalistic structure when knowledge is cheap. Knowledge has been the capital used by the labor force for hundreds of years. The reason some jobs are resistant is 100% the result of legislation at that point, not anything unique about the job.
"The Trades" seems to be the sales pitch used on the public. In the end they're just labor at that point since I can pump a 20 year old with a master electricians knowledge, keep one master on staff and fire every other person who hits that level when their earnings demand it in the same way we're firing many mid/upper level people in their 30's and 40's now instead of 50's and 60's which is the scenario in Tech today.
Software/IT is just the quickest to be absorbed. Many other industries are just in the slow boil, not seeing it yet.
The value from FOSS is the collaboration between all parties.
There is a mutual agreement between all collaborating parties that "hey we ALL need these core fundamental building blocks of software. why dont we all collaborate in this open space?" And everyone wins.
There is tremendous value in the Linux kernel, and these large open source programs. And this is basically an attack by corporations to attempt to privatize it all.
It's nothing new. This is simply the latest example of capitalist "growth at any cost". We sailed past any immorality hazards a LONG time ago.
What's this 'fun' you mention? As far as the incentives in our systems are concerned, anything that's not done in pursuit of monetary gain is certifiably insane. What really matters in life is using all the tricks, manipulation, abuse and loopholes to attain the biggest number in your asset counter. Anyone who doesn't follow the only thing that matters in life is alien, inhuman even. How do they not see it?
* Many of the people maintaining FOSS are paid to do so; and if we counted 'significance' of maintained FOSS, I would not be surprised if most FOSS of critical significance is maintained for-pay (although I'm not sure).
* Publishing software without a restrictive license is not 'generous', it's the trivial and obvious thing to do. It is the restriction of copying and of source access that is convoluted, anti-social, and if you will, "insane".
* Similarly, FOSS is not a "miracle" of human cooperation, and it what you get when it is difficult to sabotage human cooperation. The situation with physical objects - machines, consumables - is more of a nightmare than the FOSS situation is a miracle. (IIRC, an economist named Veblen wrote about the sabotaging role of pecuniary interests on collaborative industrial processes, about a century ago; but I'm not sure about the details.)
* Many people read licenses, and for the short, paragraph-long licenses, I would even say that most developers read them.
* It is not insane to use FOSS from a "fiduciary standpoint".
The law should be updated to limit clean room reimplementation to a strictly human endeavor. Person, in a faraday cage room, with a machine that is too underpowered to run local LLMs. Reference material (stack overflow archives, language docs, specs, etc) are permitted.
You take Wikipedia, an LLM rewrites every single article giving them your preferred political spin and generates many more pictures for it. You make it sleeker, and price it at 4.99$ per month.
EDIT: That's crazy. They already did that. Waiting for the torment nexus now I guess.
Look, outside of your corner, a world is much much bigger and every nation and every political leaning has rights to have their own POV(for better or worse), as quite frankly this style of thinking on enforcing what others should do is really irritating.
Wikipedia for a time being had already different POVs and it was great for that time period, but as someone that does not have English as first language, I don't dream of a world, where everybody uniformly think the same - because that place already exists where that is a case and that is a graveyard.
I have a feeling this will lead to huge interoperability and ecosystem fragmentation issues.
Well, there is one way... You can have a government steal all open source code and force its citizens to only use proprietary hardware and proprietary code, all government sanctioned btw. I wonder if we're headed this way.
> MalusCorp International Holdings Ltd. is not responsible for any moral implications, existential crises, or late-night guilt spirals resulting from the use of our services.
I feel like we live in an interesting time, where you have to second guess whether someone would actually build something like this. Like, the language is very tongue in cheek, but given how messed up copyright law is, you'd think that by now someone would be doing this, and proudly.
You'll find all the answers if you read more carefully:
> Through our offshore subsidiary in a jurisdiction that doesn't recognize software copyright
> If any of our liberated code is found to infringe on the original license, we'll provide a full refund and relocate our corporate headquarters to international waters.
> "Our lawyers estimated $4M in compliance costs. MalusCorp's Total Liberation package was $50K. The board was thrilled. The open source maintainers were not, but who cares?" - Patricia Bottomline, VP of Legal, MegaSoft Industries
Couldn't this be done on proprietary software as well? Have an agent fuzz an interface (any type) for every bit of functionality and document it. Then have it build based on the document?
>Our proprietary AI robots independently recreate any open source project from scratch.
Fact that this is satire aside, why would a company like this limit this methodology to only open source? Since they can make a "dirty room" AI that uses computer-use models, plays with an app, observes how it looks from the outside (UI) and inside (with debug tools), creates a spec sheet of how the app functions, and then sends those specs to the "clean room" AI.
> observes how it looks from the outside (UI) and inside (with debug tools), creates a spec sheet of how the app functions, and then sends those specs to the "clean room" AI.
and tbh, i cannot see any issues if this is how it is done - you just have to prove that the clean room ai has never been exposed to the source code of the app you're trying to clone.
Not sure their attempted point lands the way they think it will. I view this as an unmitigated good. Open source every damn thing. Open the floodgates. Break the system.
I'd cheer for a company like this.
It seems to dance just on the other side of what's legal, though.
This entire software ecosystem depends on volunteering and cooperation. It demands respect of the people doing the work. Adhering to their licensing terms is the payment they demand for the work they do.
If you steal their social currency, they may just walk away for good, and nobody will pick up the slack for you. And if you're a whole society of greedy little thieves, the future of software will be everyone preciously guarding and hiding their changes to the last open versions of software from some decades ago.
You should read Bruce Perens' testimony in the Jacobsen v. Katzer case that explained all this (and determined that licensing terms are enforceable, and you can't just say "his is open mine is open what's the difference?")
I mean in the context of AI - we're already seeing the conflagration of SAAS, and software jobs are going kaput. It's my deeply considered opinion that the faster this happens, the better, because it'll force a reckoning with impending AI job loss across the board.
We need to deal with the issues now. The worst possible outcome is a gradual drip-drip-drip of incremental job losses, people shuffling from job to job, taking financial hits, some companies pretending everything is fine, other companies embracing full-bore zero employee work. The longer it goes on, the more wealth and power gets siphoned up by corporations and individuals who already have significant wealth, the bigger the inequality, and the bigger the social turmoil.
Software, graphics design, music, and video (even studio level movies) should cope with this now. It's not going to stop, AI isn't going to get worse, there's not going to be some special human only domain carved out. The sooner we cope with this the better, because it'll set the foundation for the rest of the job loss barreling down on us like the Chicxulub asteroid.
It sounds like you'd advocate for accelerationism (by which I mean "to worsen capitalism to promote revolution against it")
The end result could well be the people bringing out the guillotines for tech executives, or even the Butlerian Jihad.
But I'm not sure everyone would agree we need to race to those dystopian futures. They might prefer a more conservative future where they nip the scamming / copyright infringement at scale / "disruption" in the bud.
The trouble seems to revolve mainly around money. Give enough of it to someone, or even promise it, and so many people just lose their minds and their moral backbone. Politicians in charge of regulating these shenanigans especially so, I'm not sure they had moral backbones to begin with.
It's not naked accelerationism, I just don't want to see years and years of suffering and exploitation and chaos giving a permanent advantage to those already in a position to take that advantage. One significant industry is all it will take; light a fire under the ass of congress and the general public, get people motivated to start taking sensible steps to move towards UBI or some sort of Coasean scheme with nationalized shares distributed to people, or whatever. Doing anything is extraordinarily more effective than doing nothing as this plays out.
Open sourcing all the things sounds fun right up until you hit the point where clean room claims collapse under real legal cross-examination. If you think companies with money on the line are just going to roll over and accept it all as fair play I'd like to introduce you to the concept of discovery at $900/hr. If your business model is a legal speedrun you better budget harder than you code.
The frustrating thing is I also thought about this as a natural conclusion - but as a natural workflow that corporations will do when they see AGPL dependencies they want to use. (I also think there's a world where we start tightening our software bill of materials anyway.)
I do not believe it will ever again make sense to build open source for business. the era of OSS as a business model will be very limited going forward. As sad and frustrating as it is, we did it to ourselves.
I'd have mined the copied libraries with something that makes it possible to later change terms and extract fees, as it'd be expected that nobody reads the terms for such service
The smells suspiciously like a well positioned gag that is secretly seeking VC attention. The emotional reaction turned attention seeking feels a bit like having ulterior motives... or maybe Moltbook has made me paranoid?
This is quite literally the end of open source. projects will find themselves in the position of making their test suites private to avoid being sherlocked like this
Some parties wouldn't be thrilled about their "source available" getting cleaned this way. So when this gets completed it would only "clean" real open source that can't afford legal trouble. Satirically structured LLM text is not a defence.
Let’s say instead it consolidated a few packages into 1. This might even be a good idea for security reasons.
Then it offered a mandatory 15% revenue tip to the original projects.
So far GPL enforcement usually comes down to “umm, try and sue us lol”.
How much human intervention is needed for it to be a real innovation and not llm generated. Can I someone to watch Claude do its thing and press enter 3 times ?
interesting name. The opposite of a bonus. So what is, the fact that your fork looses the thousands of eyes (meat and ai) that spot and fix bugs and security leaks?
I did try to upload a requirements.txt with "chardet < 7.0" in it ("Copyright (C) 2024 Dan Blanchard"? I don't think so buddy, it's mine now), but despite claiming otherwise, the satirical site only takes package.json so I uploaded the one from https://github.com/prokopschield/require-gpl/
It does actually generate a price (which is suspiciously like a fixed rate of $1 per megabyte), and does actually lead you to Stripe. What happens if someone actually pays? Are they going to be refunding everything, or are they actually going to file the serial numbers off for you?
It's interesting that the focus is just on open source licenses. If one can strip licenses from source code using LLMs, then surely a Microsoft employee could do the same with the Windows source code!
I was on this talk expecting to hear about MongoDB abusing open source (as you could guess from my profile, that’s a topic dear to my heart). Instead, I saw the most entertaining talk in my life.
This is satire, but I actually have built something that can do this extremely well as an unintentional side effect. I will not be building my business around this capability however
Edit: I did it. Paid them $0.51 to clean room `copyleft`, just to see what would happen. A clean package is now sitting on my desktop, custom-built (I presume) and fully documented. Deleting it now, for obvious reasons. But is it still satire if they actually provide the literal service they're satirizing?
How far do they take the satire? If you pay them do they actually generate output?
It's an interesting word in Latin, because depending on the phonetic length of the vowel and gender it vary greatly in meaning. The word 'malus' (short a, masculine adjective) means wicked, the word 'mālus' (long ā, feminine noun) means apple tree, and 'mālus' (long ā, masculine noun) means the mast of a ship.
> 2010, Jordan Peterson: clean your room
> 2026, Malus: Clean Room as a Service
> 2026, Jordan Peterson: how could I have missed this business opportunity
Presumably this is a joke, based on the "Success Reports" and the footer, among other things.
"This service is provided "as is" without warranty. MalusCorp is not responsible for any legal consequences, moral implications, or late-night guilt spirals resulting from use of our services."
The name was too much of a giveaway. I just hope that somebody who inevitably builds this for real is self-aware enough to name themselves so transparently.
About the only reason nobody would actually build this is there's no money in it. Who'd pay for a CRaaS version when they're not even paying for the original open source version?
I do think somebody will eventually vibe-code it for the lulz.
if it were true that indeed was legal to rewrite and relicense open source code, would that also be true for non-open source code? as in, could someone do a similar rewrite of their employers proprietary code and release it publicly?
>*Full legal indemnification: *Through our offshore subsidiary in a jurisdiction that doesn't recognize software copyright*
Heh, ok. So, the thinking is:
1. You contract them.
2. The actual Copyright infringement is done by an __offshore__ company.
3. If you get sued by the original software devs, you seek indemnification from the offshore subsidiary.
4. That offshore subsidiary is in a country without copyright laws or with weak laws so "you're good!"
...
5. Profit.
This is a ridiculous legal defense since this "one-way-street" legal process will almost certainly result in you being sued first... the company actually using the infringing code.
The indemnification is likely worthless since the offshore company won't have any assets anyway and will dissolve once there's a lawsuit and legal process is established.
The "guarantee" is absurd: Their "MalusCorp Guarantee" promises a refund and moving headquarters to international waters if infringement is found. This is not a real legal remedy and is written to sound like a joke, which is telling about their seriousness...
This whole "clean room as a service" concept is a legal gray area at best. In practice, it's extremely difficult to prove tha ta "clean room" process was truly clean, especially with AI models that have been trained on vast amounts of existing code (including the very projects they are "recreating").
The indemnification is a marketing gimmick to make a legally dangerous service seem safe. It creates a facade of protection while ensuring that any financial liability stays with you, the customer who wants to avoid infringement .
1. Best part of this (satirical) post is, the service they offer isn't really needed. LLM's can do this already for small projects, and soon likely will for large ones too. You don't need a company to do this, we all have the LLM tooling to do it. Critical we're all spending time thinking about what that means in a thoughtful way.
2. For the sake of argument assume 1 is completely true and feasible now and / or in the near term. If LLM generated code is also non copyrightable... but even if it is... if you can just make a copyleft version via the same manner... what will the licenses even mean any longer?
I wish we'd distinguish between bullshit and clearly identified things that _may_ be future threats.
The linked post contains a whopping lie - "What does it mean for the open source ecosystem that 90% of our open source supply chain can currently be recreated in seconds with today's AI agents"
It can't. Not even close. Please, do show a working clean-room implementation of a major opensource package. (Not left-pad)
We really need to stop hyperventilating and get back to reality.
I think we've already seen this with "AI writes a web-browser" type PR. I guess we can still look forward to when they make license evasion an explicit part of their marketing. Then I can wryly laugh when somebody robo-whitewashes leaked commercial software, knowing that they'll get sued anyways.
blegh, i like the motivation but why again and again do you need to write the content of the page with Slop-LLM-GPT? Your motive and points are valid, why waste it on a word filter that cannot capture it?
I unironically want this service to exist. The GNU GPL "is a tumor on the programming community, in that not only is it completely braindead, but the people who use it go on to infect other people who can't think for themselves."
Historically, it was a good license, and was able to keep Microsoft and Apple in check, in certain respects. But it's too played out now. In the past, a lot of its value came from it being not fully understood. Now it's a known quantity. You will never have a situation where NeXT is forced to open source their Objective-C frontend, for example
edit: it's satire. but likely not too far off from the reality in 6 months.
> Our process is deliberately, provably, almost tediously legal. One set of AI agents analyzes only public documentation: README files, API specifications, type definitions.
since nearly all open source dependencies couple the implementation with type definitions, I'm curious how this could pass the legal bar of the clean room.
Even if they claim to strip the implementation during their clean room process -- their own staff & services have access to the implementation during the stripping process.
I know this is satire but we're in the process of rewriting the .NET Mediatr library because ... it's nothing but a simple design pattern packaged as a paid nuget package. We don't even need LLMs to reprogram it.
So the need is real, at least for enshittified libraries.
I am blown away. Just 16 days ago, we were discussing this HN post: "FreeBSD doesn't have Wi-Fi driver for my old MacBook, so AI built one for me": https://news.ycombinator.com/item?id=47129361
In this post that I wrote: https://news.ycombinator.com/item?id=47131572 ... I theorised about how a company could reuse a similar technique to re-implement an open source project to change its license. In short: (1) Use an LLM to write a "perfect" spec from an existing open source project. (2) Use a different LLM to implement a functionally identical project in same/different programming language then select any license that you wish. Honestly, this is a terrifying reality if you can pay some service to do it on your behalf.
An interesting aspect of this, especially their blog post (https://malus.sh/blog.html ), is that it acknowledges a strain in our legal system I've been observing for decades, but don't think the legal system or people in general have dealt with, which is that generally costs matter.
A favorite example of mine is speed limits. There is a difference between "putting up a sign that says 55 mph and walking away", "putting up a sign that says 55 mph and occasionally enforcing it with expensive humans when they get around to it", and "putting up a sign that says 55 mph and rigidly enforcing it to the exact mph through a robot". Nominally, the law is "don't go faster than 55 mph". Realistically, those are three completely different policies in every way that matters.
We are all making a continual and ongoing grave error thinking that taking what were previously de jure policies that were de facto quite different in the real world, and thoughtlessly "upgrading" the de jure policies directly into de facto policies without realizing that that is in fact a huge change in policy. One that nobody voted for, one that no regulator even really thought about, one that we are just thoughtlessly putting into place because "well, the law is, 55 mph" without realizing that, no, in fact that never was the law before. That's what the law said, not what it was. In the past those could never really be the same thing. Now, more and more, they can.
This is a big change!
Cost of enforcement matters. The exact same nominal law that is very costly to enforce has completely different costs and benefits then that same law becoming all but free to rigidly enforce.
And without very many people consciously realizing it, we have centuries of laws that were written with the subconscious realization that enforcement is difficult and expensive, and that the discretion of that enforcement is part of the power of the government. Blindly translating those centuries of laws into rigid, free enforcement is a terrible idea for everyone.
Yet we still have almost no recognition that that is an issue. This could, perhaps surprisingly, be one of the first places we directly grapple with this in a legal case someday soon, that the legality of something may be at least partially influenced by the expense of the operation.
We should welcome more precise law enforcement. Imperfect enforcement is too easy for law enforcement officers to turn into selective enforcement. By choosing who to go after, law enforcement gets the unearned power to change the law however they want, enforcing unwritten rules of their choosing. Having law enforcement make the laws is bad.
The big caveat, though, is that when enforcement becomes more accurate, the rules and penalties need to change. As you point out, a rigidly enforced law is very different from one that is less rigorously enforced. You are right that there is very little recognition of this. The law is difficult to change by design, but it may soon have to change faster than it has in the past, and it's not clear how or if that can happen. Historically, it seems like the only way rapid governmental change happens is by violent revolution, and I would rather not live in a time of violent revolution...
The problem with precise law enforcement is that the legal system is incredibly complex. There's a tagline that ‘everybody's a criminal’; I don't know if that's necessarily true but I do definitely believe that a large number of ‘innocent’ people are criminals (by the letter of the law) without their knowledge. Because we usually only bother to prosecute crimes if some obvious harm has been done this doesn't cause a lot of damage in practice (though it can be abused), but if you start enforcing the letter of every law precisely it suddenly becomes the obligation of every citizen to know every law — in a de facto way, rather than just the de jure way we currently have as a consequence of ‘ignorance of the law is no excuse’. So an increase of precision in law enforcement must be preceded by a drastic simplification of the law itself — not a bad thing by any means, but also not an easy (or, perhaps, possible) task.
The reason speed limits make such a great example for these arguments is because they're a preemptive law. Technically, nobody is directly harmed by speeding. We outlaw speeding on the belief that it statistically leads to and/or is correlated with other harms. Contrast this to a law against assault or theft: in those kinds of cases, the law makes the direct harm itself illegal.
Increasing the precision of enforcement makes a lot more sense for direct-harm laws. You won't find anyone seriously arguing that full 100% enforcement of murder laws is a bad idea. It's the preemptive laws, which were often lazily enforced, especially when no real harm resulted from the action, where this all gets complicated. Maybe this is the distinction to focus on.
This unwritten distinction exists only to allow targeted enforcement in service of harassment and oppression. There is no upside (even if getting away with speeding feels good). We should strive to enforce all laws 100% of the time as that is the only fair option.
If a law being enforced 100% of the time causes problems then rethink the law (i.e. raise the speed limit, or design the road slower).
> If a law being enforced 100% of the time causes problems then rethink the law (i.e. raise the speed limit, or design the road slower).
Isn't this the point of the whole conversation we are having here?
Laws on copyright were not created for current AI usage on open source project replication.
They need to change, because if they are perfectly enforced by the letter, they result in actions that are clearly against the intent of the law itself.
The underlying problem is that the world changes too fast for the laws so be fair immediately
There is an upside: oppressing people who consistently engage in antisocial behavior is good and necessary.
A system that solves for absolute compliance in every individual case does not result in the emergence of a fairer society.
There are numerous cases, both in history and in fiction, that demonstrate as much.
If speed limits were automated rigidly enforced 100% of the time, it would be impossible to drive.
>only to allow targeted enforcement in service of harassment and oppression
That's absurd hyperbole. A competent policeman will recognise the difference between me driving 90 km/h on a 80 km/h road because I didn't notice the sign. And me driving 120 km/h out of complete disregard for human life. Should I get a fine for driving 90? Yea, probably. Is it a first time offence? Was anyone else on the road? Did the sign get knocked down? Is it day or night? Have I done this 15 times before? Is my wife in labour in the passenger seat? None of those are excuses, but could be grounds for a warning instead.
Precise law enforcement would motivate political will to proactively law change to be more precise and appropriate, or tuned, to the public sentiment.
Imprecise law enforcement enables political office holders to arbitrarily leverage the law to arrest people they label as a political enemy, e.g. Aaron Swartz.
If everyone that ever shared publications outside the legal subscriber base was precisely arrested, charged, and punished, I dont think the punishment amd current legal terrain regarding the charges leveraged against him would have lasted.
But this is a feature, not a bug.
Code is Law is pretty much demonstrates that it is not possible to precisely define law.
https://www.fxleaders.com/news/2025/10/29/code-is-law-sparks...
Additionally, law is not logical. Law is about justice and justice is not logical.
I hold the opinion that law is not about justice.
"Law is about justice" is one of those things a good professor gets every 1L to raise their hands in agreement to before spending the next semester proving why that's 100% not the case.
The existing laws are rarely well specified enough for precise enforcement, often on purpose.
You cannot have precise enforcement with imprecise laws. It’s as simple as that.
The HN favorite in this respect is “fair use” under copyright. It isn’t well specified enough for “precise enforcement”. How do you suggest we approach that one?
Imperfect enforcement is a feature as often as it is a bug. You can't make "antisocial behavior" in general illegal but you can make certain behaviors (loitering, public intoxication) illegal and selectively enforce against only those who are behaving in an antisocial manner. Of course the other edge of this sword is using this discretion to blanket discriminate against racial or class groups.
Dean Ball made this exact point on the Ezra Klein show a few days ago. I always thought laws would get more just with perfect enforcement -- the people passing mandatory sentencing laws for minor drug offenses would think twice if their own children, and not just minorities and unfavourable groups, were subject to the same consequences (instead of rehab or community service).
But if I've learned anything in 20 years of software eng, it's that migration plans matter. The perfect system is irrelevant if you can't figure out how to transition to it. AI is dangling a beautiful future in front of us, but the transition looks... Very challenging
> I always thought laws would get more just with perfect enforcement
As Edward Snowden once argued in an AMA on Reddit, a zero crime rate is undesirable for democratic society because it very likely implies that it's impossible to evade law enforcement. The latter, however, means that people won't be able to do much if the laws ever become tyrannic, e.g. due to a change in power. In other words, in a well-functioning democratic society it must always be possible (in principle) to commit a crime and get away.
> Dean Ball made this exact point on the Ezra Klein show a few days ago. I always thought laws would get more just with perfect enforcement -- the people passing mandatory sentencing laws for minor drug offenses would think twice if their own children, and not just minorities and unfavourable groups, were subject to the same consequences (instead of rehab or community service).
The problem with perfect enforcement is it requires the same kind of forethought as waterfall development. You rigidly design the specification (law) at the start, then persist with it without deviation from the original plan (at least for a long time). In your example, the lawmakers may still pass the law because they don't think of their kids as drug users, and are distracted by some outrage in some other area.
Hmm, the problem is that judges and even police officers are generally saner than voters.
Giving the former discretion was a way to sneakily contain the worst excesses of the latter.
Alas, self-interest isn't really something voters seem to really take into account.
Judges and police officers have their own massive "worst excesses".
They do, but letting mob rule decide criminal sanction is beyond fucked. See: Any discussion thread of literally any criminal being sentenced, receiving parole, or better yet, committing any crime after being released for serving a different one.
This is of course assuming that politicians aren't largely duplicitious and actually believe in a word they say. I grew up in Indonesia, and the number of politicians who were extremely anti-porn getting caught watching porn in parliament is frankly staggering, yet alone the ones who are pro death penalty for drugs caught as being part of massive drug smuggling rings.
You raise an interesting point: One question that I think about developing countries: Most of them have higher perception of corruption compared to highly developed (OECD) nations. How do countries realistically reduce corruption? Korea went from an incredibly poor country in 1960 to a wealthy country in 2010. I am sure they dramatically reduced corruption over this time period... but how? Another example, in the 1960s/1970s, Hongkong dramatically increased the pay for civil servants (including police officers) to reduce corruption. (It worked, mostly.)
I live in a developing country. What I find is that the corruption is generally easier to navigate here that it was in the USA. The corruption in the USA is much more entrenched, in the form of regulatory capture. At the local level this can look like a local ordinance where “only a contractor with xy and z (only one of which is needed for the job) can bid, favoring a specific contractor. Here you just figure out compliance with the person in charge.
Corruption is eliminated by properly aligning incentives. Capitalism is also all about properly aligning incentives. Moving to a more capitalism-heavy system usually causes countries to get much richer.
Eastern Europe went through a similar transition. Before the iron curtain fell, the eastern bloc operated on favors more than it operated on money. This definitely isn't the case any more.
How many times have we seen politicians advocate for laws against something, then do a 180 when one of their kids does it? Even if you had that system, I don't think it would work the way you say. People are dumb and politicians are no exception.
And this goes both ways.
Many governments around the world have entities to which you can write a letter, and those entities are frequently obligated to respond to that letter within a specific time frame. Those laws have been written with the understanding that most people don't know how to write letters, and those who do, will not write them unless absolutely necessary.
This allows the regulators to be slow and operate by shuffling around inefficient paper forms, instead of keeping things in an efficient ticket tracking system.
LLMs make it much, much easier to write letters, even if you don't speak the language and can only communicate at the level of a sixth-grader. Imagine what happens when the worst kind of "can I talk to your supervisor" Karen gets access to a sycophantic LLM, which tells her that she's "absolutely right, this is absolutely unacceptable behavior, I will help you write a letter to your regulator, who should help you out in this situation."
Privacy protection has the exact same issue. Wiretapping laws were created at the time there was literally a detective listening to a private phone conversation as it was happening. Now we record almost everything online, and processing it is trivial and essentially free. The safeguards are the same but the scale of privacy invasion is many orders of magnitude different.
> Realistically, those are three completely different policies in every way that matters.
I think that the failure to distinguish them is due to a really childish outlook on law and government that is encouraged by people who are simple-minded (because it is easy and moralistic) and by people who are in control of law and government (because it extends their control to social enforcement.)
I don't think any discussion about government, law, or democracy is worth anything without an analysis of government that actually looks at it - through seeing where decisions are made, how those decisions are disseminated, what obligations the people who receive those decisions have to follow them and what latitude they have to change them, and ultimately how they are carried out: the endpoint of government is the application of threats, physical restraint, pain, or death in order to prevent people from doing something they wish to do or force them to do something they do not wish to do, and the means to discover where those methods should be applied. The police officer, the federal agent, the private individual given indemnity from police officers and federal agencies under particular circumstances, the networked cameras pointed into the streets are government. Government has a physical, material existence, a reach.
Democracy is simpler to explain under that premise. It's the degree to which the people that this system controls control the decisions that this system carries out. The degree to which the people who control the system are indemnified from its effects is the degree of authoritarianism. Rule by the ungoverned.
It's also why the biggest sign of political childishness for me are these sort of simple ideas of "international law." International law is a bunch of understandings between nations that any one of them can back out of or simply ignore at any time for any reason, if they are willing to accept the calculated risk of consequences from the nations on the other side of the agreement. It's like national law in quality, but absolutely unlike it in quantity. Even Costa Rica has a far better chance of ignoring, without any long-term cost, the mighty US trying to enforce some treaty regulation than you as an individual have to ignore the police department.
Laws were constructed under this reality. If we hypothetically programmed those laws into unstoppable Terminator-like robots and told them to enforce them without question it would just be a completely different circumstance. If those unstoppable robots had already existed with absolute enforcement, we would have constructed the laws with more precision and absolute limitations. We wouldn't have been able to avoid it, because after a law was set the consequences would have almost instantly become apparent.
With no fuzziness, there's no selective enforcement, but also no discretion (what people call selective enforcement they agree with.) If enforcement has blanket access and reach, there's also no need to make an example or deter. Laws were explicitly formulated around these purposes, especially the penalties set. If every crime was caught current penalties would be draconian, because they implicitly assume that everyone who got caught doing one thing got away with three others.
There was this scholarly article from Pamela Samuelson and Suzanne Scotchmer
https://yalelawjournal.org/pdf/200_ay258cck.pdf
which, as I recall it, suggested that the copyright law effectively considered that it was good that there was a way around copyright (with reverse engineering and clean-room implementation), and also good that the way around copyright required some investment in its own right, rather than being free, easy, and automatic.
I think Samuelson and Scotchmer thought that, as you say, costs matter, and that the legal system was recognizing this, but in a kind of indirect way, not overtly.
> Cost of enforcement matters. The exact same nominal law that is very costly to enforce has completely different costs and benefits then that same law becoming all but free to rigidly enforce.
Hey, I really like this framing. This is a topic that I've thought about from a different perspective.
We have all kinds of 18th and 19th century legal precedents about search, subpoenas, plain sight, surveillance in public spaces, etc... that really took for granted that police effort was limited and that enforcement would be imperfect.
But they break down when you read all the license plates, or you can subpoena anyone's email, or... whatever.
Making the laws rigid and having perfect enforcement has a cost-- but just the baseline cost to privacy and the squashing of innocent transgression is a cost.
(A counterpoint: a lot of selective law enforcement came down to whether you were unpopular or unprivileged in some way... cheaper and automated enforcement may take some of these effects away and make things more fair. Discretion in enforcement can lead to both more and less just outcomes).
This is my problem with Americans and their "but the constitution" arguments.
The U.S. constitution has been written in an age before phones, automatic and semi-automatic rifles (at least in common use), nuclear weapons, high-bandwidth communications networks that operate at lightning speed, mass media, unbreakable encryption and CCTV cameras.
The problem is that "all sides" agree that if the constitution was written today, surprise, surprise, it'd totally agree with them; the gun control people are sure that the 2nd wouldn't cover military weapons, the gun lovers are sure that it would mandate tanks for everyone.
But since having 300 million people have a detailed, nuanced discussion about anything is impossible, everyone works at the edges.
I think the fundamental issue is that a form of equality where everyone gets what was previously the worst outcome is... probably worse.
Many times when politicians get to suffer the full effects of their laws, the laws quickly change for the better.
Yup :P
As in their post:
"The future of software is not open. It is not closed. It is liberated, freed from the constraints of licenses written for a world in which reproduction required effort, maintained by a generation of developers who believed that sharing code was its own reward and have been comprehensively proven right about the sharing and wrong about the reward."
This applies to open-source but also very well to proprietary software too ;) Reversing your competitors' software has never been easier!
I think this distinction also gets at some issue with things like privacy and facial recognition.
There’s the old approach of hanging a wanted poster and asking people to “call us if you see this guy”. Then there’s the new approach matching faces in a comprehensive database and camera networks.
The later is just the perfect, efficient implementation of the former. But it’s… different somehow.
This has also been a common theme in recent decades with respect to privacy.
In the US, the police do not generally need a warrant to tail you as you go around town, but it is phenomenally expensive and difficult to do so. Cellphone location records, despite largely providing the same information, do require warrants because it provides extremely cheap, scalable tracking of anyone. In other words, we allow the government to acquire certain information through difficult means in hopes that it forces them to be very selective about how they use it. When the costs changed, what was allowed also had to change.
I think of this in reverse. It's legal for the government to track mail - who sent a message, and who it's going to. They have access to the "outside of the envelope". But it's not legal for them to read the message inside.
And this same principle allows them to build massive friend/connection networks of everyone electronically. The government knows every single person you've communicated with and how often you communicate with them.
It was never designed for this originally.
The answer to this is just changing the law as enforcement becomes different, instead of leaning on the rule of a few people to determine what the appropriate level of enforcement is.
To do this, though, you're going to have to get rid of veto points! A bit hard in our disastrously constitutional system.
Seconded, thirded, fourthed. I spend a lot of time thinking about how laws, in practice, are not actually intended to be perfectly enforced, and not even in the usual selective-enforcement way, just in the pragmatic sense.
Absolutely! We're not all making that error, I've been venting about it for years.
"Costs matter" is one way to say it, probably a lot easier to digest and more popular than the "Quantity has a quality all it's own" quote I've been using, which is generally attributed to Stalin which is a little bit of a problem.
But it's absolutely true! Flock ALPRs are equivalent to a police officer with binoculars and a post-it for a wanted vehicle's make, model, and license plate, except we can put hundreds of them on the major intersections throughout a city 24/7 for $20k instead of multiplying the police budget by 20x.
A warrant to gather gigabytes of data from an ISP or email provider is equivalent to a literal wiretap and tape recorder on a suspect's phone line, except the former costs pennies to implement and the later requires a human to actually move wires and then listen for the duration.
Speed cameras are another excellent example.
Technology that changes the cost of enforcement changes the character of the law. I don't think that no one realizes this. I think many in office, many implementing the changes, and many supporting or voting for those groups are acutely aware and greedy for the increased authoritarian control but blind to the human rights harms they're causing.
> We are all making a continual and ongoing grave error
> Blindly translating those centuries of laws into rigid, free enforcement is a terrible idea for everyone.
I understand your point that changing the enforcement changes how the law is "felt" even though on the paper the law has not changed. And I think it makes sense to review and potentially revise the laws when enforcement methods change. But in the specific case of the 55 mph limit, would the consequences really be grave and terrible if the enforcement was enforced by a robot, but the law remained the same?
> would the consequences really be grave and terrible if the enforcement was enforced by a robot
The potential consequences of mass surveillance come to mind.
OK, but that would be a consequence of the specific enforcement method, not a consequence the law becoming de facto stricter due to stricter enforcement.
For one thing, the speed limit is intentionally set 5-10mph too low, specifically to make it easier to prove guilt when someone breaks the "real" speed limit.
https://en.wikipedia.org/wiki/Normalization_of_deviance
While it is true that many people do speed, that doesn't make their speeding "the real speed limit".
Yeah, I'd have to go slower????
Anyway. I come from the UK where we've had camera based enforcement for aeons. This of course actually results in people speeding and braking down to the limit as they approach the camera (which is of course announced loudly by their sat nav). The driving quality is frankly worse because of this, not better, and it certainly doesn't reduce incidence of speeding.
Of course the inevitable car tracker (or average speed cameras) resolve this pretty well.
The issue with strictly enforcing the speed limit on roads is that sometimes, people must speed. They must break the law. Wife giving birth, rushing a wounded person to the ER, speeding to avoid a collision, etc.
If we wanted to strictly enforce speed limits, we would put governors on engines. However, doing that would cause a lot of harm to normal people. That's why we don't do it.
Stop and think about what it means to be human. We use judgement and decide when we must break the laws. And that is OK and indeed... expected.
> sometimes, people must speed. They must break the law. Wife giving birth, rushing a wounded person to the ER, speeding to avoid a collision
I would argue that only the last one is a valid reason because it's the only one where it's clear that not speeding leads to direct worse consequences.
Speed limits don't exist just to annoy people. Speeding increases the risk of accident and especially the consequences of an accident.
I don't trust people to drive well in a stressful situation, so why would it be a good idea to let them increase the risk by speeding.
The worst part is that it's not even all that likely that the time saved by speeding ends up mattering.
The “wife giving birth” exception for speeding is always so amusing to me.
In the U.S., the average distance from a hospital is 10 miles (in a rural area). Assuming 55 mph speed limits, that means most people are 11 minutes from a hospital. Realistically, “speeding” in this scenario probably means something like 80 mph, so you cut your travel time to 7.5 minutes.
In other words, you just significantly increased your chances of killing your about to be born kid, your wife, yourself, and innocent bystanders just to potentially arrive at a hospital 210 seconds sooner.
Edit: the rushing someone to an ER scenario is possibly more ridiculous, since you can’t teleport yourself, and if the 3.5 minutes in the above scenario would make a difference, then driving someone to the ER is a significantly worse option than starting first aid while waiting for EMTs to arrive.
E(accident due to going faster) vs E(worse outcome due to waiting)
Your argument only makes sense if the only possible bad thing is a car accident -- to make my point clearer, would you take a 1% chance of losing 100$ to avoid a 50% chance of losing 10$?
Depends how much money you have, but it can be a perfectly rational decision.
No, that's not the reason why people speed. True emergencies are a rounding error.
The real reason is that speed limits are generally lower than the safe speed of traffic, and enforcement begins at about 10mph over the stated limits.
People know they can get away with it.
If limits were raised 15% and strictly enforced, it would probably be better for society. Getting a ticket for a valid emergency would be easy to have reversed.
The answer is not a governor but a speed camera, they have them all over in Brazil and they send you a ticket if you speed through them. Put an exception in the law for emergencies, provide an appeal process, and voila.
Not exactly the same but at least in Spain, the cost of constructing a new building subject to all the regulations makes them completely unafforfable for low salaries.
(There are other problems, I know, but the regulations are crazy).
De jure, there is no difference between de facto and de jure. De facto there is.
If you had to put a name to this phenomenon, what would it be?
>https://malus.sh/blog.html
An interesting read, however I'd like to know how to stop websites from screwing around with my scrollbars. In this case it's hidden entirely. Why is this even a thing websites are allowed to do - to change and remove browser UI elements? It makes no sense even, because I have no idea where I am on the page, or how long it is, without scrolling to the bottom to check. God I miss 2005.
It took me a minute to recognize this as satire (thank you HN comments). However it does actually make sense - maybe this could be a way for OSS devs to get paid.
What if we did build a clean room as a service but the proceeds from that didn't go to the "Malus.sh" corporation, but to the owners / maintainers of the OSS being implemented. Maybe all OSS repos should switch to AGPL or some viral license with link to pay-me-to-implement.com. Companies that want to use that package go get their own custom implementation that is under a license strictly for that company and the OSS maintainer gets paid.
I wonder what the MVP for such a thing would look like.
edit: If anyone wants to brainstorm about this with me drop me a note (email in profile)
I am only 50% certain that your idea is expanding on the satire, if not: project owners can provide dual licensing. I'm sorry if you are serious and didn't understand you.
You need a legal contract with every contributor to be able to offer dual licensing. That's impractical for some types of projects
If you don't have any contributors, you could just directly relicense without rewriting the whole codebase. If you do, it would be rude to do this.
LOL. Same here. But the footer disclaimer and testimonials gave it away immediately:
> "We had 847 AGPL dependencies blocking our acquisition. MalusCorp liberated them all in 3 weeks. The due diligence team found zero license issues. We closed at $2.3B." - Marcus Wellington III, Former CTO, Definitely Real Corp (Acquired)
> © 2024 MalusCorp International Holdings Ltd. Registered in [JURISDICTION WITHHELD].
> This service is provided "as is" without warranty. MalusCorp is not responsible for any legal consequences, moral implications, or late-night guilt spirals resulting from use of our services.
I almost lost it, didn't realize it was satire until I came back to these comments
"I used to feel guilty about not attributing open source maintainers. Then I remembered that guilt doesn't show up on quarterly reports. Thank you, MalusCorp." ◆ Chad Stockholder Engineering Director, Profit First LLC
Certain views of OSS and its relation to commercial software always seemed to be fraught with highly voluntarist and moralizing attitudes and an intellectual naivete.
Don't believe in hell but I were I hope they'd be a special place for them.
It's like... revert patent troll? I'm not even sure I get it but the wording "liberation from open source license obligations." just wants to make me puke. I also doubt it's legit but I'm not a lawyer. I hope somebody at the FSF or Apache foundation or ... whomever who is though will clarify.
"Our proprietary AI systems have never seen" how can they prove that? Independent audit? Whom? How often?
Satire... yes but my blood pressure?!
This is satire, but the very notion of open source license obligations is meaningless in context. FLOSS licenses do not require you to publish your purely internal changes to the code; any publication happens by your choice, and given that AI can now supposedly engineer a clean-room reimplementation of any published program whatsoever, publishing your software with a proprietary copyright isn't going to exactly save you either.
No, no, some open source licenses require you to publish internal changes. Eg some are explicitly written that you have to publish even when you 'only' use the changes on your own servers. (Not having to publish that was seen as a loophole for cloud companies to exploit.)
Those clauses exclude those licenses from some very important definitions of free/open-source software. For example they would fail the Desert Island Test for the Debian Free Software Guidelines.
The Debian project guidelines are not the ultimate arbiter of what is and isn't free software, they are just some of many useful guidelines to consider. Another useful guideline is that the user shall have freedom.
You are either talking about a license nobody is using (at least I've never heard of it) or misconstruing what the AGPL obligates you to do.
I am going to assume it's the latter.
If you in your house take an AGPL program, host it for yourself, and use it yourself, nothing in the AGPL obligates you to publish the source changes.
In fact, even if you take AGPL software and put it behind a paywall and modify it, the only people who the license mandates you to provide the source code for are the people paying.
The AGPL is basically the GPL with the definition of "user" broadened to include people interacting with the software over the network.
And the GPL, again, only requires you to provide the source code, upon request, to users. If you only distribute GPL software behind a paywall, you personally only need to give the source to people paying.
Although in both these cases, nothing stops the person receiving that source code from publishing it under its own terms.
The point he's making is that who is going to actually enforce that? If I take something that has that license and make changes to it, who is going to know? That's the underlying premise here.
The courts?
Google “examples of GPL enforced in court” for a few
Yeah it requires finding out, but how do you prove a whistleblower broke their NDA?
"given that AI can now supposedly engineer a clean-room reimplementation of any published program whatsoever"
I'm missing something there, that's precisely what I'm arguing again. How can it do a clean-room reimplementation when the open source code is most likely in the training data? That only works if you would train on everything BUT the implementation you want. It's definitely feasible but wouldn't that be prohibitively expensive for most, if not all, projects?
If I hired a human to write a clone of GNU grep to be released under a MIT license, and he wrote one that was performed exactly the same as GNU grep, it would be impossible for me to prove that the guy I hired didn't look at the GNU code.
But we'd be able to look at his clone code and see it's different, with different algorithms, etc. We could do a compare and see if there are any parts that were copied. It's certainly possible to clone GNU grep without copying any code and I don't think it would fail any copyright claims just because the GNU grep code is in the wild.
If that was the case, the moment any code is written under the GPL, it could never be reimplemented with a different license.
So instead of a human cloner, I use AI. Sure, the AI has access to the GPL code - every intelligence on the planet does. But does that mean that it's impossible to reimplement an idea? I don't think so.
What you argue is a non-sequitur and regardless of case law really makes no sense when the spirit of the action is to replicate something. Reasonable people would say that replicating and disseminating code with the express purpose of avoiding copyright is a violation of copyright and why it exists in the first place.
Just because something is trivial enough to copy does not mean it was trivial to conceive of and codify. Mens rea really does matter when we are talking about defrauding intellectual property holders and stealing their opportunity.
"Reasonable people would say that replicating and disseminating code with the express purpose of avoiding copyright is a violation of copyright and why it exists in the first place."
But then how can the FSF reimplement AT&T utilities? The FSF didn't invent grep. They wrote a new version of it from scratch under a different license.
Civil War Hospital Clean Room equivalent
Am I right in thinking that is not even "clean room" in the way people usually think of it, e.g. Compaq?
The "clean room" aspect for that came in the way that the people writing the new implementation had no knowledge of the original source material, they were just given a specification to implement (see also Oracle v. Google).
If you're feeding an LLM GPL'd code and it "creates" something "new" from it, that's not "clean room", right?
At the end of the day the supposed reimplementation that the LLM generates isn't copyrightable either so maybe this is all moot.
> If you're feeding an LLM GPL'd code and it "creates" something "new" from it, that's not "clean room", right?
I didn’t RTFA but I suppose that by clean room here they mean you feed the code to ”one” LLM and tell it to write a specification. Then you give the specification to ”another” LLM and tell it to implement the specification.
It's a satire. The authors presented it at FOSDEM. They are people that worked previously for foss communities.
Satire is too dangerous to be presented outside of its community. This honestly should've been left within FOSDEM.
It's great within the context of people who understand it, enlightening even. Sparks conversations and debates. But outside of it ignorance wields it like a bludgeon and dangerous to everyone around them. Look at all the satirical media around fascism, if you knew to criticize you could laugh, but for fascists it's a call to arms.
No one who understands the first thing about this topic could possibly have read that web page and not realized that it was satire.
"Those maintainers worked for free—why should they get credit?"
"Your shareholders didn't invest in your company so you could help strangers."
"For the first time, a way to avoid giving that pesky credit to maintainers."
"Full legal indemnification [...] through our offshore subsidiary in a jurisdiction that doesn't recognize software copyright"
This is because you're already in that mindset.
Try to take the stance of someone who doesn't really know too much about open source other than it's a nuisance to use, this is a great idea! I wanted to use this tool that corporate said we couldn't touch, but now I can!
Maybe I’m missing something but big corps do this, right? I legitimately expect folks like Musk and Zuckerberg to say these things. I get why that’s exactly the reason it’s satire but it’s a little too close to the truth for me to chuckle about it.
If people lack sense of humor or satire, even if pathologically, well, too bad for them. Why should the rest be denied of that satire? It's not harming anyone at all.
Unfortunately it's not too bad for them, it's too bad for everyone they're around. They aren't the ones that lose out when we start dismantling open source communities.
PP's point is that 2025-2026 is exactly the result of satire being weaponized to cause real harm, because people pretend it's truth.
That wasn’t people weaponizing satire, that was people just making weapons
There is an overlay of smeared poop on one of the license files… is that something you are seeing on typical tech company landing pages?
The company is literally named “bad/evil.”
The fact that it took me the comments sections to understand this is satire speaks a lot about the current status of where things are going.
EDIT: Reading it again its quite obvious, I was just skimming at first, but still damn. Hilarious
I didn't see it was satire (having only skimmed the site) until scrolling through the comments and seeing this fake review being quoted. That's when I went "surely not", checked the site, saw it was really there, and was quite relieved this is not yet an actual thing!
Under this name or not I think it's happening regardless..
As any etymology/Latin nerd will tell you, "this name" (MalusCorp) literally translates to EvilCorp, everything about the site is over the top satire. I know Poe's law and all that, but I'm looking askew at commenters in this thread who fail to realize it as either only reading the headline, or are AI-controlled.
Satire points out the absurd
lol - it's literally called malus but I guess that's only an obvious giveaway in retrospect
It's perfectly realistic!
E.g. Palantir, the surveillance analytics company named after the magic orb that purports to let you remotely view anything you want, but actually allows its creator to view you while manipulating you by selectively showing some things and not others.
Especially given that a popular open source project recently tried to do exactly that.
https://github.com/chardet/chardet/issues/327
I really got fooled here for a second, but the unfortunate reality is that people will try this soon, and someone will have to litigate this, if open source is to survive, which will take years and millions of dollars to resolve
Not just "tried"; the current state is that they've done so and are ignoring people telling them they cannot. The "destroy as an example to others" phase hasn't finished yet, but hopefully they'll get sufficient backlash from the projects they supposedly did this to work with to deter future attempts. e.g. they supposedly did this in order to make it part of the Python standard library, so hopefully the response from Python is a massive WTF and "nope".
In fairness to the original mythos that that particular family of awful companies has misused: the palantiri were in fact designed purely for far-seeing, and Sauron wasn't the creator of them, he just got his hands on one and corrupted it into a tool for manipulation.
I feel like this is related to these issues (with somebody attempting this approach for real):
https://github.com/chardet/chardet/issues/327
https://github.com/chardet/chardet/issues/331
It also shows why this approach is questionable. Opus 4.6 without tool use or web access can provide chardets source code in full from memory/training data (ironically, including the licensing header): https://gist.github.com/yannleretaille/1ce99e1872e5f3b7b133e...
Wow, I did not expect such perfect reproduction. Link to the actual source code (before being rewritten):
https://github.com/chardet/chardet/blob/5.0.0/chardet/mbchar...
That's worth its own submission and discussion.
It has been submitted last week, happy reading:
https://news.ycombinator.com/item?id=47259177
This is extremely good satire. Question is, why hasn't anyone done this for real? There's enough people with the right knowledge and who would love to destroy open source for personal gain. Is it that this kind of service would be so open to litigation that it would need a lot of money upfront? Or is someone already working on this, and we're just living out the last good days of OSS?
What would be the incentive for someone to do this for real?
We all have access to SOTA LLMs. If I want a "clean room" implementation of some OSS library, and I can choose between paying a third party to run a script to have AI rebuild the whole library for me and just asking Claude to generate the bits of the library I need, why would I choose to pay?
I think this argument applies to most straightforward "AI generated product" business ideas. Any dev can access a SOTA coding model for $20p/m. The value-add isn't "we used AI to do the thing fast", it's the wrapping around it.
Maybe in this case the "wrapping" is that some other company is taking on the legal risk?
There's a lot of things you could do to be malicious towards other people with minimal effort, yet strangely few people do it. Virtually everyone has morals, and most people's are quite compatible with society (hence we have a society) even if small perturbations in foundational morals sometimes lead to seemingly large discrepancies in resultant actions
You need the right kind of person, in the right life circumstances, to have this idea before it happens for real. By having publicity, it becomes vastly more likely that it finds someone who meets the former two criteria, like how it works with other crime (https://en.wikipedia.org/wiki/Copycat_crime). So thanks, Malus :P
Also, there's a difference between "willing to do a bad thing for money" and "actively searching out a bad thing, then proactively building a whole company out of it in the hopes of making money."
It's the difference between a developer taking a job at Palantir out of college because nobody had a better offer, and a guy spending years in his basement designing "Immigrant Spotter+" in the hopes of selling it to the government. Sure, they're both evil, but lots of people pick the first thing, and hardly anybody does the second.
What do you mean nobody has done it?
It's an inevitable outcome of automatic code generation that people will do this all the time without thinking about it.
Example: you want a feature in your project, and you know this github repo implements it, so you tell an AI agent to implement the feature and link to the github repo just for reference.
You didn't tell the agent to maliciously reimplement it, but the end result might be the same - you just did it earnestly.
The bottleneck is trust and security. I'd rather defenestrate 3rd party libraries with a local instance of copilot than send all my secret sauce to some cloud/SaaS system.
Put differently, this system already exists and is in heavy use today.
> why hasn't anyone done this for real?
WDYM? LLMs are essentially this.
For each project you want to rip off, you'd have to first train an entirely new LLM on all sources except for the target project. Prohibitively expensive.
Most LLMs are trained on a lot of the source code for many open-source projects. This 'project' has the whole song-and-dance about never seeing the source code and separating the system to skirt around legal trouble. Why didn't anyone do that yet?
Because that's impossible. Any "robot" that can generate code must be trained on massive amounts of code, most of which is open source.
And how are you supposed to guarantee equivalent functionality by analyzing "README files, API docs, and type definitions"?
It's described on the web page but it's by having 2 agents. One has access to the code and one doesn't.
Are they the same model?
Not that it matters, I just think the joke is more fun if they are different.
The joke is that you don’t.
not a lot of code is public domain and thus not a lot of training data is available
The post claims (tongue-in-cheek, of course) that their customer owns the resulting code.
But that's not true!
According to binding precedent, works created by an AI are not protected by copyright. NO ONE OWNS THEM!!!
I think maybe this is a good thing, but honestly, it's hard to tell.
This is a misreading of the law. Court cases say that AI cannot own copyright, not that AI output cannot be copyrighted.
If you’re referring to Thaler v. Perlmutter, that is not binding precedent nationwide, only in courts under the D.C. Circuit. And it only applies to “pure” AI-generated works; it did not address AI-assisted works, which seem very likely to be copyrightable.
Though here, the purpose is still served.
If I want to clone some GPL clone into a MIT license, if it ends up in the public domain because it can't be copyrighted, what do I care? I've still got the code I want without the GPL.
> If any of our liberated code is found to infringe on the original license, we'll provide a full refund and relocate our corporate headquarters to international waters.*
I love it. Brilliant satire that foreshadows the future.
The satire is A-grade.
On a quick glance, or skim read, you could be excused for believing this is real, but they drop just enough nuggets throughout that by the end there is no ambiguity.
Really helps illustrates how realistic this could be.
I first encountered the concept of "clean room" in the context of Sean Lahman's free baseball stats database. While technically baseball stats are free, their compiling and manner of presentation in any given format may be claimed as proprietary by any particular provider. And so there's an extensive volunteer effort from baseball fans to "clean room" source them from independent sources such that they are verifying the stats independently of their provenance as a legally permitted basis for building out the database.
I even recall Baseball Mogul relied on the Lahman DB for a period of time. It does make me wonder if we'll see more of that.
There are two teenagers who learned about Malus in the last hour and have started figuring out how to actually build it, right now. They will not cite their source in their IPO statements.
it is straightforward to build this for real, here is my nearly one-shotted tldraw clone from a couple of weeks ago, https://x.com/c_pick/status/2028669568403578931 - the implementation side never saw the code, only the spec (in reality it did see the tldraw code in its training data, but you can't escape that anymore)
Well, that's not what the page describes. You'd have to train an LLM on everything except tldraw, then use that LLM for code generation.
I wonder about this training data. There's so much profit from open source code in training data, actually the most of the code it was taught was open source, shouldn't it be then free? Or at least open weight?
The Torment Nexus must be built, because someone wants a lambo.
Note for people who just briefly skimmed the site: This is satire.
At least you think that this is satire, until the author receives a DMCA from one of the big corps saying that he leaked the transcript of their last meeting
Too late. Someone's senior executive management has probably already seen it and spinning up a new project to implement it.
Luckily LLM’s are nowhere near capable enough to pull this off for anything other than the likes of isEven()
Yeah, thank you. I was starting to get a little heated.
Same, I got as far as "Finally, liberation from open source license obligations." until I went back to the comments.
haha did the same. that being said I’m convinced some people do think AI reimplementation actually means cleanroom…
its partial satire. I kinda believe Claude/Codex spill lots of OSS code without license attribution for many millions of devs already.
It wouldn't be funny if it wasn't close to the truth.
The situation is a bit too Torment Nexus-y for my comfort, thank you very much
I don't know - if you upload a package.json with any dependencies that map to real npmjs.com packages, it does lead you to a Stripe payment page which appears to be real... and it appears you'd be sending real money.
Maybe that's part of the joke, though :)
I know this is satire, but I would wish to see something like this for liberating proprietary & closed-source hardware drivers.
Thank you for pointing that out, I genuinely was scratching my head and questioning if this site was serious.
For now
For now...
The best satire is that which becomes reality.
I would posit that the best satire is that which holds a clear enough mirror to society that people choose for it to not come to pass.
Best comment here!
Malus Corporation = EvilCorp
W.r.t. intent, yes. But w.r.t. content, we are long past a situation where it is unrealistic enough to function as satire.
While such tactics would render certain OSS software licenses absurd, the tactic itself, as a means to get around them, is entirely sound. It just reveals the flawed presupposition of such licenses. And I'm not sure there is really any way to patch them up now.
It would also entirely obviate the need for those very same OSS licenses, if LLMs can simply do a clean-room reimplementation of any copywritten software whatsoever.
It will be like Galaxy Quest - they saw the historical records, copied them and then ... still needed humans to help them :)
I was wondering. I had heard chardet story and wouldn't be surprised to see others moving into that same space.
It legit got me. An actual "whaaaaaatttt?" out loud and then I had to figure out why it was the top of HN haha.
This is satire but this is where things are heading. The impact on the OSS ecosystem is probably not a net positive overall, but don't forget that this also applies to commercial software as well.
There will be many questions asked, like why buy some SaaS with way too many features when you can just reimplement the parts you need? Why buy some expensive software package when you can point the LLM into the binary with Ghidra or IDA or whatever then spend a few weeks to reverse it?
This is going to bring back software patents.
Considering my name's on a software patent submitted just last year, I don't think software patents have gone anywhere...
I was discussing that very point yesterday with a colleague after telling him of recent events. I pointed out that leaning on copyright/copyleft for software has always been a risky move.
Where did they go?
"Change all your core software library dependencies to be unmaintained ripoff copies of those libraries." Sounds wise.....¡¡
Sounds like my CTO. Overuse of LLMs in c-suites is like overuse of weed by teenagers - it may not cause delusions, but it sure seems to make them worse.
Don't worry, I'm positive that we're only a few years out from realizing just how damaging both were/are.
I just hope we realize it before it's too late.
Actually I have been told that replacements to (restricted subsets of) open source libraries, generated by LLM’s, vendored next to our code using the dependency, cannot be vulnerable since they don’t have cve’s, and therefore they don’t ever have to be maintained.
That’s how deep we are in neoliberal single truth shit now
> Our proprietary AI systems have never seen the original source code.
For this to be plausible satire, they need to show how they've trained their models to code, without mit, apache, bsd or GPL/agpl code being in the training set...
I know this is satire, but I have an adjacent problem I could use help with. In my company, we have some legacy apps that run, but we no longer have the source, any everyone that worked on them has probably left the planet.
We need to replatform them at some point, and ideally I'd like to let some agents "use" the apps as a means to copy them / rebuild. Most of these are desktop apps, but some have browser interfaces. Has anyone tried something like this or can recommend a service that's worked for them?
I have actually very convincingly recreated a moderately complex 70s-era mainframe app by having an LLM reimplement it based on existing documentation and by accessing the textual user interface.
The biggest trick is that you need to spend 75% of your time designing and building very good verification tools (which you can do with help from the LLM), and having the LLM carefully trace as many paths as possible through the original application. This will be considerably harder for desktop apps unless you have access to something like an accessibility API that can faithfully capture and operate a GUI.
But in general, LLM performance is limited by how good your validation suite is, and whether you have scalable ways to convince yourself the software is correct.
Interested to keep updated on this point. As a consultant, I've worked on transformation of legacy applications so this would help me greatly as well. We've worked on pretty archaic systems where no one knows how the system works even if we have the source code.
I've done a little bit of this and Claude is pretty great. Take the app and let Claude run wild with it. It does require you to be relatively familiar with the app as you may need to guide it in the right direction.
I was able to get it to rebuild and hack together a .NET application that we don't have source for. This was done in a Linux VM and it gave me a version that I could build and run on Windows.
We're past the point of legacy blackbox apps being a mystery. Happy to talk more, my e-mail is available on my profile.
Well, what kind of desktop apps?
Unless obfuscated C# desktop apps are pretty friendly to decompile.
Hope they have very good lawyers...
Haha, was extremely rage-baited by this. Thanks.
Good idea, but as several comments here suggest, the time when this sort of thing could be taken as satire is gone. I promise you there are multiple people here thinking that this is a good idea. I predict that within a year we will see a service that does exactly this.
This is essentially 'License Laundering as a Service.' The 'Firewall' they describe is an illusion because the contamination happens at the training phase, not the inference phase. You can't claim independent creation when your 'independent developer' (the commercial LLM) already has the original implementation's patterns and edge cases baked into its weights.
In order to really do this, they would need to train LLMs from scratch that had no exposure whatsoever to open source code which they may be asked to reproduce. Those models in turn would be terrible at coding given how much of the training corpus is open source code.
>The 'Firewall' they describe is an illusion because [...]
it is an illusion because this is a satire site.
This service is provided "as is" without warranty. MalusCorp is not responsible for any legal consequences, moral implications, or late-night guilt spirals resulting from use of our services.
:)
"Our lawyers estimated $4M in compliance costs. MalusCorp's Total Liberation package was $50K. The board was thrilled. The open source maintainers were not, but who cares?"
The solution here seems to be to impose some constraint or requirement which means that literal copying is impossible (remember, copyright governs copies, it doesn't govern ideas or algorithms - that would be 'patents', which essentially no open source software has) or where any 'copying' from vaguely remembered pretraining code is on such an abstract indirect level that it is 'transformative' and thus safe.
For example, the Anthropic Rust C compiler could hardly have copied GCC or any of the many C compilers it surely trained on, because then it wouldn't have spat out reasonably idiomatic and natural looking Rust in a differently organized codebase.
Good news for Rust and Lean, I guess, as it seems like everyone these days is looking for an excuse to rewrite everything into those for either speed or safety or both.
> copyright governs copies, it doesn't govern ideas or algorithms
The second part is true. The first is a little trickier. The copyright applies to some fixed media (text in this case) rather than the idea expressed, but the protections extend well beyond copies. For example, in fiction, the narrative arc and "arrangement" is also protected, as are adaptations and translations.
If you were to try and write The Catcher in the Rye in Italian completely from memory (however well you remember it) I believe that would be protected by copyright even if not a single sentence were copied verbatim.
Obviously satire, but it will clearly be what happens in the future (predicting here, I'm not endorsing this practice). We can scratch train a new LLM on code generated from "contaminated" LLMs. We can then audit all the training data used and demonstrate that the original source wasn't in the training data. Therefore the cleanroom implementation holds. Current LLM training is relying less and less on human generated code. Just look at the open source models from China. They rely heavily on distilling from other models. One additional point. Exposure to the original source isn't enough to show infringement. Linus looked at UNIX source before writing linux.
I think this site is either satire, or serious but with a certain kind of humor in which both they and the reader know they're lying (but it's in everyone's interest to play along).
They do say this:
> Is this legal? / our clean room process is based on well-established legal precedent. The robots performing reconstruction have provably never accessed the original source code. We maintain detailed audit logs that definitely exist and are available upon request to courts in select jurisdictions.
Unless they're rejecting almost all of open source packages submitted by the customer, due to those packages being in the training set of the foundation model that they use, this is really the opposite of cleanroom.
This is definitely a parody though, not a real service.
This site is an obvious parody, but like most comedy these days it betrays the severity of the issues happening today.
This time it's satire, but I bet someone will offer exactly that for real in the next few days. The idea is unethical but far too lucrative from a business perspective.
Often OSS is used not because you want the software, but the software and the upkeep. So even with such a service, you're now just taking code in-house that you have to maintain as well.
Realistically, if it in fact did take 5 minutes to do the cleanroom reimplementation, you could just process updates from the OSS source in realtime.
The people that will take this as a good thing unironically will just have their personal Yes Man do that work internally.
…scanning… …fuming… …blood pressure rising… sees a quote attributed to “Chad Stockholder Engineering Director, Profit First LLC” …oh phew, thank god for that. I actually believed this could be real for a moment!
That's funny.
I find surprising that the polemic I heard more talking, seems to be in the open source to close source direction.
It seems to me, that the more relevant part of this new development, for the software industry, it's a teenager working in the weekend with a LLM and making a functional clone of Autocad, for instance.
Theory: Any system, legal or otherwise, that denies the Axioms of Reality, will eventually fail.
Axiom of Reality: “Intellectual Property” does not exist.
Is AI-driven clean room implementation a wild west at the moment? I suppose there haven't yet been any cases to test this out in real life?
> You have been so generous, so unreasonably, almost suspiciously generous, that you have made it possible for an entire global economy to run on software that nobody technically owns, maintained by people that nobody technically employs, governed by licenses that nobody technically reads. It is a miracle of human cooperation. It is also, from a fiduciary standpoint, completely insane.
Funny but true.
It's funny that humans working together for mutual benefit via any other mechanism than regimented corporate slavery is considered insane.
The issue is how do you interact with other industries/trades who protect their profit making potential.
Ok great - all software and networks are "free." How do you pay for Doctors and Plumbers and Electricians whose earnings are legally protected by the state but whose skill bases are also freely available to be used within the margin of error of a professional or a layman?
Issues like this are great to have conversations about, but if people don't start broadening the scope very quickly, it just turns into the IT/CS worker's worth going to 0 in a world where others worth are protected. And history states, if only 1 group sees the threat, the remaining trades/industries will let it die.
It's not clear to me what your argument has to do with the license laundering service that Malus (Malice?) is offering. Their stealing from the digital commons does nothing to address paying Doctors and Plumbers and Electricians.
It's directed at the person I replied to. It's not directed at the top level OP or Malus which is hilarious, monetized satire.
Focusing overly on corporate structures or specific skills tends to miss the point of how value is assigned in a capitalistic structure when knowledge is cheap. Knowledge has been the capital used by the labor force for hundreds of years. The reason some jobs are resistant is 100% the result of legislation at that point, not anything unique about the job.
"The Trades" seems to be the sales pitch used on the public. In the end they're just labor at that point since I can pump a 20 year old with a master electricians knowledge, keep one master on staff and fire every other person who hits that level when their earnings demand it in the same way we're firing many mid/upper level people in their 30's and 40's now instead of 50's and 60's which is the scenario in Tech today.
Software/IT is just the quickest to be absorbed. Many other industries are just in the slow boil, not seeing it yet.
The value from FOSS is the collaboration between all parties.
There is a mutual agreement between all collaborating parties that "hey we ALL need these core fundamental building blocks of software. why dont we all collaborate in this open space?" And everyone wins.
There is tremendous value in the Linux kernel, and these large open source programs. And this is basically an attack by corporations to attempt to privatize it all.
It's nothing new. This is simply the latest example of capitalist "growth at any cost". We sailed past any immorality hazards a LONG time ago.
Easily explained by the fact that writing some types of software and seeing people using it is fun. Some people take photos for free also.
Doesn’t apply everywhere though.
What's this 'fun' you mention? As far as the incentives in our systems are concerned, anything that's not done in pursuit of monetary gain is certifiably insane. What really matters in life is using all the tricks, manipulation, abuse and loopholes to attain the biggest number in your asset counter. Anyone who doesn't follow the only thing that matters in life is alien, inhuman even. How do they not see it?
The quote above didn't mention corporations at all.
"nobody technically employs" strongly implies that this is not a corporate organization.
" maintained by people that nobody technically employs"
It's not true (and also not funny):
* Many of the people maintaining FOSS are paid to do so; and if we counted 'significance' of maintained FOSS, I would not be surprised if most FOSS of critical significance is maintained for-pay (although I'm not sure).
* Publishing software without a restrictive license is not 'generous', it's the trivial and obvious thing to do. It is the restriction of copying and of source access that is convoluted, anti-social, and if you will, "insane".
* Similarly, FOSS is not a "miracle" of human cooperation, and it what you get when it is difficult to sabotage human cooperation. The situation with physical objects - machines, consumables - is more of a nightmare than the FOSS situation is a miracle. (IIRC, an economist named Veblen wrote about the sabotaging role of pecuniary interests on collaborative industrial processes, about a century ago; but I'm not sure about the details.)
* Many people read licenses, and for the short, paragraph-long licenses, I would even say that most developers read them.
* It is not insane to use FOSS from a "fiduciary standpoint".
> * Many people read licenses, and for the short, paragraph-long licenses, I would even say that most developers read them.
Well, it's one thing to read licenses as a human and another to read them as a lawyer.
That's why it's useful to pick one of the standard licenses that lawyers have already combed over, even if it's a long one like the GPL.
Isn't that the premise of Fallout ?
Nope!
Clean room was a poor choice of words… I thought it was an actual clean room for semiconductor devices :(
It's already a term of art used for this very purpose. https://en.wikipedia.org/wiki/Clean-room_design
The law should be updated to limit clean room reimplementation to a strictly human endeavor. Person, in a faraday cage room, with a machine that is too underpowered to run local LLMs. Reference material (stack overflow archives, language docs, specs, etc) are permitted.
Why only FOSS? Why not Wikipedia?
You take Wikipedia, an LLM rewrites every single article giving them your preferred political spin and generates many more pictures for it. You make it sleeker, and price it at 4.99$ per month.
EDIT: That's crazy. They already did that. Waiting for the torment nexus now I guess.
Look, outside of your corner, a world is much much bigger and every nation and every political leaning has rights to have their own POV(for better or worse), as quite frankly this style of thinking on enforcing what others should do is really irritating. Wikipedia for a time being had already different POVs and it was great for that time period, but as someone that does not have English as first language, I don't dream of a world, where everybody uniformly think the same - because that place already exists where that is a case and that is a graveyard.
This was already done, see: Grokipedia.
aren't you describing what elon already did https://grokipedia.com/
So Grokipedia?
I have a feeling this will lead to huge interoperability and ecosystem fragmentation issues.
Well, there is one way... You can have a government steal all open source code and force its citizens to only use proprietary hardware and proprietary code, all government sanctioned btw. I wonder if we're headed this way.
If this site actually connects to Stripe, it's much more than just satire. It's a honeypot :D
This is brilliant satire. Wonderful response to the “rewrite” of chardet.
^ For those who haven’t been keeping up on the debacle.
The name gives it away :)
> MalusCorp International Holdings Ltd. is not responsible for any moral implications, existential crises, or late-night guilt spirals resulting from the use of our services.
I think they should take some responsibility!
Love the product link in footer to "Emergency AGPL Removal"
Before I visited the site, I was really confused. First, the name means bad, as in evil. Second, I couldn't understand what CRaaS was supposed to be.
But I love it! The perfect response to the "clean room" AI re-implementation and re-licensing of whatever that library is called.
>whatever that library is called
https://news.ycombinator.com/item?id=47259177
I ate the onion. But in my defense, people are really putting forward this argument to relicense from GPL to MIT:
https://github.com/chardet/chardet/issues/327
I feel like we live in an interesting time, where you have to second guess whether someone would actually build something like this. Like, the language is very tongue in cheek, but given how messed up copyright law is, you'd think that by now someone would be doing this, and proudly.
The joke is that the models have already seen the source code of said packages regardless, right?
Yeah it's just a slightly more honest and simplified presentation of what LLMs providers do IMO.
How is this legal. Unless it’s trained excluding *all* open source code it’s not legal.
Also, using api and docs itself though not illegal seems defeat the purpose.
Also, it’s not right how creator says “pesky credits to creator”.
Just build your own then. Credit is the least thing everyone using should do.
You'll find all the answers if you read more carefully:
> Through our offshore subsidiary in a jurisdiction that doesn't recognize software copyright
> If any of our liberated code is found to infringe on the original license, we'll provide a full refund and relocate our corporate headquarters to international waters.
> "Our lawyers estimated $4M in compliance costs. MalusCorp's Total Liberation package was $50K. The board was thrilled. The open source maintainers were not, but who cares?" - Patricia Bottomline, VP of Legal, MegaSoft Industries
Couldn't this be done on proprietary software as well? Have an agent fuzz an interface (any type) for every bit of functionality and document it. Then have it build based on the document?
Are licenses even enforceable now? Given that the law is not being followed in the United States anymore?
Everything is enforceable by the rich, nothing is enforceable by the poor
>Our proprietary AI robots independently recreate any open source project from scratch.
Fact that this is satire aside, why would a company like this limit this methodology to only open source? Since they can make a "dirty room" AI that uses computer-use models, plays with an app, observes how it looks from the outside (UI) and inside (with debug tools), creates a spec sheet of how the app functions, and then sends those specs to the "clean room" AI.
> observes how it looks from the outside (UI) and inside (with debug tools), creates a spec sheet of how the app functions, and then sends those specs to the "clean room" AI.
and tbh, i cannot see any issues if this is how it is done - you just have to prove that the clean room ai has never been exposed to the source code of the app you're trying to clone.
Not sure their attempted point lands the way they think it will. I view this as an unmitigated good. Open source every damn thing. Open the floodgates. Break the system.
I'd cheer for a company like this.
It seems to dance just on the other side of what's legal, though.
> I view this as an unmitigated good.
Then I don't think you've thought it through.
This entire software ecosystem depends on volunteering and cooperation. It demands respect of the people doing the work. Adhering to their licensing terms is the payment they demand for the work they do.
If you steal their social currency, they may just walk away for good, and nobody will pick up the slack for you. And if you're a whole society of greedy little thieves, the future of software will be everyone preciously guarding and hiding their changes to the last open versions of software from some decades ago.
You should read Bruce Perens' testimony in the Jacobsen v. Katzer case that explained all this (and determined that licensing terms are enforceable, and you can't just say "his is open mine is open what's the difference?")
https://web.archive.org/web/20100331083827/http://perens.com...
I mean in the context of AI - we're already seeing the conflagration of SAAS, and software jobs are going kaput. It's my deeply considered opinion that the faster this happens, the better, because it'll force a reckoning with impending AI job loss across the board.
We need to deal with the issues now. The worst possible outcome is a gradual drip-drip-drip of incremental job losses, people shuffling from job to job, taking financial hits, some companies pretending everything is fine, other companies embracing full-bore zero employee work. The longer it goes on, the more wealth and power gets siphoned up by corporations and individuals who already have significant wealth, the bigger the inequality, and the bigger the social turmoil.
Software, graphics design, music, and video (even studio level movies) should cope with this now. It's not going to stop, AI isn't going to get worse, there's not going to be some special human only domain carved out. The sooner we cope with this the better, because it'll set the foundation for the rest of the job loss barreling down on us like the Chicxulub asteroid.
It sounds like you'd advocate for accelerationism (by which I mean "to worsen capitalism to promote revolution against it")
The end result could well be the people bringing out the guillotines for tech executives, or even the Butlerian Jihad.
But I'm not sure everyone would agree we need to race to those dystopian futures. They might prefer a more conservative future where they nip the scamming / copyright infringement at scale / "disruption" in the bud.
The trouble seems to revolve mainly around money. Give enough of it to someone, or even promise it, and so many people just lose their minds and their moral backbone. Politicians in charge of regulating these shenanigans especially so, I'm not sure they had moral backbones to begin with.
It's not naked accelerationism, I just don't want to see years and years of suffering and exploitation and chaos giving a permanent advantage to those already in a position to take that advantage. One significant industry is all it will take; light a fire under the ass of congress and the general public, get people motivated to start taking sensible steps to move towards UBI or some sort of Coasean scheme with nationalized shares distributed to people, or whatever. Doing anything is extraordinarily more effective than doing nothing as this plays out.
> I view this as an unmitigated good. Open source every damn thing.
Agree, I said this in another comment, AI-generated anything should be public domain. Public data in, public domain out.
This train wreck in slow motion of AI slowly eroding the open web is no good, let's rip the bandaid.
Open sourcing all the things sounds fun right up until you hit the point where clean room claims collapse under real legal cross-examination. If you think companies with money on the line are just going to roll over and accept it all as fair play I'd like to introduce you to the concept of discovery at $900/hr. If your business model is a legal speedrun you better budget harder than you code.
Open source is good, washing open source licences is very bad.
I publish under AGPL and if someone ever took my project and washed it to MIT I would probably just take all my code offline forever. Fuck that.
The frustrating thing is I also thought about this as a natural conclusion - but as a natural workflow that corporations will do when they see AGPL dependencies they want to use. (I also think there's a world where we start tightening our software bill of materials anyway.)
I do not believe it will ever again make sense to build open source for business. the era of OSS as a business model will be very limited going forward. As sad and frustrating as it is, we did it to ourselves.
I'd have mined the copied libraries with something that makes it possible to later change terms and extract fees, as it'd be expected that nobody reads the terms for such service
The smells suspiciously like a well positioned gag that is secretly seeking VC attention. The emotional reaction turned attention seeking feels a bit like having ulterior motives... or maybe Moltbook has made me paranoid?
This is quite literally the end of open source. projects will find themselves in the position of making their test suites private to avoid being sherlocked like this
So they recreate the open source project by using an llm that was trained in the open source project's source code.
Some parties wouldn't be thrilled about their "source available" getting cleaned this way. So when this gets completed it would only "clean" real open source that can't afford legal trouble. Satirically structured LLM text is not a defence.
As a hypothetical.
Let’s say instead it consolidated a few packages into 1. This might even be a good idea for security reasons.
Then it offered a mandatory 15% revenue tip to the original projects.
So far GPL enforcement usually comes down to “umm, try and sue us lol”.
How much human intervention is needed for it to be a real innovation and not llm generated. Can I someone to watch Claude do its thing and press enter 3 times ?
If the AI could do good refactor of OS project, remove unused code/features and make the code more efficient. Than we really would be out of jobs :D
First I thought this is about manufacturing. Like semiconductor fabs requirement for room cleanness.
Have fun when using this service is itself used in court as evidence for creating a malicious copy
Heh, why don't you do the opposite - recreate proprietary software with open source license
I expect that thousands of people are now doing just that. Most proprietary software is just a shiny UI in front of a crappy database schema.
You know the satire is so good that people actually confused this for something real:))
interesting name. The opposite of a bonus. So what is, the fact that your fork looses the thousands of eyes (meat and ai) that spot and fix bugs and security leaks?
I did try to upload a requirements.txt with "chardet < 7.0" in it ("Copyright (C) 2024 Dan Blanchard"? I don't think so buddy, it's mine now), but despite claiming otherwise, the satirical site only takes package.json so I uploaded the one from https://github.com/prokopschield/require-gpl/
It does actually generate a price (which is suspiciously like a fixed rate of $1 per megabyte), and does actually lead you to Stripe. What happens if someone actually pays? Are they going to be refunding everything, or are they actually going to file the serial numbers off for you?
Today's satire is tomorrow's reality, if the last 50 or so years is anything to go by.
Man, how could they not wait 2.5 weeks until April 1 !!!
I have to admit It took me an unconfortably long amount of time to realize this was fake-
is the motto, "Don't be good?"
"I solemnly swear that I am up to no good" and their seal is ⍼.
https://www.hp-lexicon.org/magic/solemnly-swear-no-good/
https://news.ycombinator.com/item?id=47329605
https://www.explainxkcd.com/wiki/index.php/2606:_Weird_Unico...
It's interesting that the focus is just on open source licenses. If one can strip licenses from source code using LLMs, then surely a Microsoft employee could do the same with the Windows source code!
they really had an entertaining presentation in fosdem 2026 about this. bit too noisy for my taste but regardless:
https://fosdem.org/2026/schedule/event/SUVS7G-lets_end_open_...
I was on this talk expecting to hear about MongoDB abusing open source (as you could guess from my profile, that’s a topic dear to my heart). Instead, I saw the most entertaining talk in my life.
This is an art project right? …right?
This is satire, but I actually have built something that can do this extremely well as an unintentional side effect. I will not be building my business around this capability however
Edit: I did it. Paid them $0.51 to clean room `copyleft`, just to see what would happen. A clean package is now sitting on my desktop, custom-built (I presume) and fully documented. Deleting it now, for obvious reasons. But is it still satire if they actually provide the literal service they're satirizing?
How far do they take the satire? If you pay them do they actually generate output?
Is it satire? Or is it a warning?
If it's a warning, it's a warning that also delivers the thing it's warning about.
It will be nice to know how many legal personnel fell for this trip. Maybe a leaderboard :D
malus, mala, malum ADJ
bad, evil, wicked; ugly; unlucky;
It's an interesting word in Latin, because depending on the phonetic length of the vowel and gender it vary greatly in meaning. The word 'malus' (short a, masculine adjective) means wicked, the word 'mālus' (long ā, feminine noun) means apple tree, and 'mālus' (long ā, masculine noun) means the mast of a ship.
Homonym of "malice" too. Honestly kind of a brilliant name.
Mal: us
> 2010, Jordan Peterson: clean your room > 2026, Malus: Clean Room as a Service > 2026, Jordan Peterson: how could I have missed this business opportunity
Thought this was about semiconductor cleanrooms at first. Any startups doing that?
Poe's Law just smacked me upside the head on this one. Hard.
Presumably this is a joke, based on the "Success Reports" and the footer, among other things.
"This service is provided "as is" without warranty. MalusCorp is not responsible for any legal consequences, moral implications, or late-night guilt spirals resulting from use of our services."
Let's not give anyone ideas!
The name was too much of a giveaway. I just hope that somebody who inevitably builds this for real is self-aware enough to name themselves so transparently.
About the only reason nobody would actually build this is there's no money in it. Who'd pay for a CRaaS version when they're not even paying for the original open source version?
I do think somebody will eventually vibe-code it for the lulz.
if it were true that indeed was legal to rewrite and relicense open source code, would that also be true for non-open source code? as in, could someone do a similar rewrite of their employers proprietary code and release it publicly?
> per package = max( $0.01, size_kb × $0.01 )
> order total = max( $0.50, sum of all packages )
> $0.50 minimum applies per order (Stripe processing floor). No base fee.
Not sure I can trust their output if this simple thing is fluffed
Distinguished staff level trolling
I bet someone has already made this service for real.
A lot of people, including perhaps the creator of this, feel that LLMs themselves are this service.
It exists! It's called Claude Code.
From their front page:
>*Full legal indemnification: *Through our offshore subsidiary in a jurisdiction that doesn't recognize software copyright*
Heh, ok. So, the thinking is:
1. You contract them.
2. The actual Copyright infringement is done by an __offshore__ company.
3. If you get sued by the original software devs, you seek indemnification from the offshore subsidiary.
4. That offshore subsidiary is in a country without copyright laws or with weak laws so "you're good!"
...
5. Profit.
This is a ridiculous legal defense since this "one-way-street" legal process will almost certainly result in you being sued first... the company actually using the infringing code.
The indemnification is likely worthless since the offshore company won't have any assets anyway and will dissolve once there's a lawsuit and legal process is established.
The "guarantee" is absurd: Their "MalusCorp Guarantee" promises a refund and moving headquarters to international waters if infringement is found. This is not a real legal remedy and is written to sound like a joke, which is telling about their seriousness...
This whole "clean room as a service" concept is a legal gray area at best. In practice, it's extremely difficult to prove tha ta "clean room" process was truly clean, especially with AI models that have been trained on vast amounts of existing code (including the very projects they are "recreating").
The indemnification is a marketing gimmick to make a legally dangerous service seem safe. It creates a facade of protection while ensuring that any financial liability stays with you, the customer who wants to avoid infringement .
whoosh
1. Best part of this (satirical) post is, the service they offer isn't really needed. LLM's can do this already for small projects, and soon likely will for large ones too. You don't need a company to do this, we all have the LLM tooling to do it. Critical we're all spending time thinking about what that means in a thoughtful way.
2. For the sake of argument assume 1 is completely true and feasible now and / or in the near term. If LLM generated code is also non copyrightable... but even if it is... if you can just make a copyleft version via the same manner... what will the licenses even mean any longer?
Was malice.sh taken?
Am I the only one who saw the title and thought it was about physical clean-rooms?
No
See also: claw-guard.org/adnet, ai-ceo.org and ai-chro.org in this category
I love these satirical sites that take a jab at how LLMs are (genuinely) ruining software.
See: https://deploycel.org/
Wait this is joke, yep this is a joke... Wait it's not a joke why are people taking this seriously? Ok good this is a joke wait it's REAL?
I know this is satire, but I worry that it's giving some scumbags out there ideas.
Ah yes, how apropos, a "modest proposal" for a new AI era.
It took me too long to understand it’s satire. BP went through stratosphere before I noticed.
Let’s hope one of these fake AI grifters doesn’t take this as a serious idea, raised a couple hundred million, and do real damage.
(I’m not against AI, I just don’t like nonsense either in tech, or people)
The irony of course is that this service already exists. It's called Claude Code (or Codex, etc...) and it costs $200 / month.
Amazon getting all excited hoping it's real.
Amazon C*s calling Amazon Legal to ask if they could get away with implementing something like this internally, more like.
Oof, this is unironically amazing!
I wish we'd distinguish between bullshit and clearly identified things that _may_ be future threats.
The linked post contains a whopping lie - "What does it mean for the open source ecosystem that 90% of our open source supply chain can currently be recreated in seconds with today's AI agents"
It can't. Not even close. Please, do show a working clean-room implementation of a major opensource package. (Not left-pad)
We really need to stop hyperventilating and get back to reality.
Oh no… VCs will see this and take it seriously
I think we've already seen this with "AI writes a web-browser" type PR. I guess we can still look forward to when they make license evasion an explicit part of their marketing. Then I can wryly laugh when somebody robo-whitewashes leaked commercial software, knowing that they'll get sued anyways.
blegh, i like the motivation but why again and again do you need to write the content of the page with Slop-LLM-GPT? Your motive and points are valid, why waste it on a word filter that cannot capture it?
turd.png classy
Bruh this feels evil hahaha
In this climate, it almost feels like it's not satire.
Now this is a conversation piece
Can we stop with the AI slop here? Last chance then I have to look elsewhere for real content.
I unironically want this service to exist. The GNU GPL "is a tumor on the programming community, in that not only is it completely braindead, but the people who use it go on to infect other people who can't think for themselves."
Historically, it was a good license, and was able to keep Microsoft and Apple in check, in certain respects. But it's too played out now. In the past, a lot of its value came from it being not fully understood. Now it's a known quantity. You will never have a situation where NeXT is forced to open source their Objective-C frontend, for example
edit: it's satire. but likely not too far off from the reality in 6 months.
> Our process is deliberately, provably, almost tediously legal. One set of AI agents analyzes only public documentation: README files, API specifications, type definitions.
since nearly all open source dependencies couple the implementation with type definitions, I'm curious how this could pass the legal bar of the clean room.
Even if they claim to strip the implementation during their clean room process -- their own staff & services have access to the implementation during the stripping process.
yay capitalism. thank god it is a joke!
> Those maintainers worked for free—why should they get credit?
ROFL
I know this is satire but we're in the process of rewriting the .NET Mediatr library because ... it's nothing but a simple design pattern packaged as a paid nuget package. We don't even need LLMs to reprogram it.
So the need is real, at least for enshittified libraries.
I am blown away. Just 16 days ago, we were discussing this HN post: "FreeBSD doesn't have Wi-Fi driver for my old MacBook, so AI built one for me": https://news.ycombinator.com/item?id=47129361
In this post that I wrote: https://news.ycombinator.com/item?id=47131572 ... I theorised about how a company could reuse a similar technique to re-implement an open source project to change its license. In short: (1) Use an LLM to write a "perfect" spec from an existing open source project. (2) Use a different LLM to implement a functionally identical project in same/different programming language then select any license that you wish. Honestly, this is a terrifying reality if you can pay some service to do it on your behalf.