Since security exploits can now be found by spending tokens, open source is MORE valuable, because open source libraries can share that auditing budget while closed-source vendors have to find all the exploits themselves, in private.
> If Mythos continues to find exploits so long as you keep throwing money at it, security is reduced to a brutally simple equation: to harden a system you need to spend more tokens discovering exploits than attackers will spend exploiting them.
It also means that you need to extract enough value to cover the cost of said tokens, or reduce the economic benefit of finding exploits.
Reducing economic benefit largely comes down to reducing distribution (breadth) and reducing system privilege (depth).
One way to reduce distribution is to raise the price.
Another is to make a worse product.
Naturally, less valuable software is not a desirable outcome. So either you eliminate the cost of staying open (by going closed), or raise the price to cover that cost (which, again, also reduces distribution).
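To make that break-even concrete, here's a toy back-of-the-envelope model of the tension; every number below is invented for illustration, not taken from the thread:

    # Toy numbers only: none of these figures come from the thread.
    defender_audit_spend = 50_000   # $/year on LLM exploit discovery
    attacker_spend = 20_000         # $ an attacker will plausibly spend
    installs = 10_000               # distribution (breadth)
    value_per_install = 30          # $ extracted per install per year

    revenue = installs * value_per_install            # 300,000
    hardened = defender_audit_spend > attacker_spend  # the break-even test
    sustainable = revenue > defender_audit_spend      # can the tokens be paid for?
    print(f"revenue=${revenue:,} hardened={hardened} sustainable={sustainable}")
    # Cutting installs lowers the attacker's payoff, but it also cuts the
    # revenue that funds the defender's tokens - the tension described above.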
The economics of software are going to massively reconfigure in the coming years, open source most of all.
I suspect we'll see more 'open spec' software, with actual source generated on-demand (or near to it) by models. Then all the security and governance will happen at the model layer.
This conclusion makes more sense to me, but maybe I'm too naive.
The media momentum of this threat really came with Mythos, which was like 2 or 3 weeks ago? That seems like a fairly short time in which to pivot your core principles. It sounds to me like they wanted to do this for other, business-related reasons, but have now found an excuse they can sell to the public.
(I might be very wrong here)
This seems similar to the lesson learned for cryptographic libraries where open source libraries vetted by experts become the most trusted.
Your average open source library isn’t going to get that scrutiny, though. It seems like it will result in consolidation around a few popular libraries in each category?
I expect we're about to find that it's a lot easier to convince a company to spend money running an AI security scan of their dependencies and sharing the results with the maintainers than it is to have them give those maintainers money directly.
(I just hope they can learn to verify the exploits are valid before sharing them!)
This seems kind of crazy. If LLMs are so stunningly good at finding vulnerabilities in code, then shouldn't the solution be to run an LLM against your code after you commit, and before you release it? Then you basically have pentesting harnesses all to yourself before going public. If an LLM can't find any flaws, then you are good to release that code.
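For what it's worth, a minimal sketch of such a pre-release gate, assuming the anthropic Python SDK; the model id, prompt, and NO_FINDINGS convention are all placeholders of mine, not anyone's actual pipeline:

    # Hypothetical pre-release gate: audit the outgoing diff with an LLM.
    import subprocess
    import anthropic

    def audit_release_diff(base: str = "origin/main") -> str:
        # Collect the diff that is about to ship.
        diff = subprocess.run(
            ["git", "diff", base, "HEAD"],
            capture_output=True, text=True, check=True,
        ).stdout
        client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env
        msg = client.messages.create(
            model="claude-sonnet-4-5",  # placeholder model id
            max_tokens=4096,
            messages=[{
                "role": "user",
                "content": "Audit this diff for exploitable vulnerabilities. "
                           "Reply NO_FINDINGS if there are none.\n\n" + diff,
            }],
        )
        return msg.content[0].text

    if __name__ == "__main__":
        report = audit_release_diff()
        if "NO_FINDINGS" not in report:
            raise SystemExit("Release blocked:\n" + report)

One pass like this is cheap; as replies below point out, one pass is also nowhere near exhaustive.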
A few years ago, I invoked Linus's Law in a classroom, and I was roundly debunked. Isn't it a shame that it's basically been fulfilled now with LLMs?
https://en.wikipedia.org/wiki/Linus%27s_law
LLMs really are stunningly good at finding vulnerabilities in code, which is why, with closed-source code, you can and probably will use them to make your code as secure as possible.
But you won't keep the doors open for others to use them against it.
So it is, unfortunately, understandable in a way...
I'm not a security expert, but can't closed-source applications be vulnerable and exploited too? I feel like using closed source as a defense just gives you a false sense of security.
It's entirely possible to address all the LLM-found issues and get an "all green" response, and have an attacker still find issues that your LLM did not. Either they used a different model, a different prompt, or spent more money than you did.
It's not a symmetric game, either. On defense, you have to get lucky every time - the attacker only has to get lucky once.
This! I love OSS but this argument seems to get overlooked in most of the comments here.
I mean, you should definitely have _some_ level of audit by LLMs before you ship, as part of the general PR process.
But you might need thousands of sessions to uncover some vulnerabilities, and you don't want to stop shipping changes because the security checks are taking hours to run.
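Rough math on the "thousands of sessions" point, assuming each independent session surfaces a given deep bug with some small probability p (the value of p here is invented):

    import math

    # N independent sessions find the bug at least once with prob 1-(1-p)**N.
    p = 0.001  # assumed per-session hit rate for a deep vulnerability

    def sessions_needed(target: float, p: float) -> int:
        # Smallest N such that 1 - (1 - p)**N >= target.
        return math.ceil(math.log(1 - target) / math.log(1 - p))

    for target in (0.50, 0.90, 0.99):
        print(f"{target:.0%} -> {sessions_needed(target, p):,} sessions")
    # 50% -> 693, 90% -> 2,302, 99% -> 4,603 sessions at p = 0.1%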
Yeah, I don't buy it. If they don't want these security reports, they can ignore them and continue on their path. Blaming AI is just an excuse to close the source. If you don't want AI to learn from your code, it's too late. Add genetic algorithms and fuzzing to AI and it can iterate and learn a billion times faster, no need to learn from humans.
Going closed source is making the branch secret/private, not making it obscure. Obscurity would be zipping up the open source code (without a password) and leaving it online. Obscurity just means extra steps are needed to recover the information. Your passwords are not obscure strings of characters; they are secrets.
Right, but those capabilities are available to you as well. Granted, the remediation effort will take longer, but... you're going to do that for any existing issues _anyway_, right?
I understand why this is a tempting thing to do in a "STOP THE PRESSES" manner where you take a breather and fix any existing issues that snuck through. I don't yet understand why when you reach steady-state, you wouldn't rely on the same tooling in a proactive manner to prevent issues from being shipped.
And if you say "yeah, that's obv the plan," well then I don't understand what going closed-source _now_ actually accomplishes with the horses already out of the barn.
Give him $100 to obtain that capability.
Give each open source project maintainer $100.
Or internalize the cost if they all decide the hassle of maintaining an open source project is not worth it any more.
I'm not aiming this reply at you specifically; it's the general dynamic of this crisis. The real answer is for the foundation model providers to give this money. But instead, at least one seems to care more about acquiring critical open source companies.
We should openly talk about this - the existing open source model is being killed by LLMs, and there is no clear replacement.
I don't think this really helps that much. Your neighbor could ask an LLM to decompile your binaries, and then run security analysis on the results.
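That loop is easy to sketch; objdump is real binutils, while the binary path and the LLM hand-off are placeholder assumptions:

    # Point the same audit loop at a compiled artifact instead of source.
    import subprocess

    def disassemble(binary_path: str) -> str:
        # A determined attacker would also pull strings, symbols, and
        # decompiler output (e.g. Ghidra) for richer context.
        return subprocess.run(
            ["objdump", "-d", binary_path],
            capture_output=True, text=True, check=True,
        ).stdout

    asm = disassemble("./app")  # hypothetical binary path
    # ...then feed `asm` into the same kind of LLM review prompt used on source.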
If the tool correctly says you've got security issues, trying to hide them won't work. You still have the security issues and someone is going to find them.
You can keep the untested branch closed if you want to go with “cathedral” model, even.
To what end? You can just look at the code. It's right there. You don't need to "hack" anything.
If you want to "hack on it", you're welcome to do so.
Would you like to take a look at some of my open-source projects your neighbour's kid might like to hack on?
It only takes 20 minutes and $200 to hack a closed source one too though. LLMs are ludicrously good at using reverse engineering tools and having source available to inspect just makes it slightly more convenient.
Couldn't you just spend those $100 on Claude Code credits yourself and make sure you're not shipping insecure software? Security by obscurity is not the correct model (IMO).
> neighbors son 15 mins and $100 claude code credits
Is that true? Didn't the Mythos release say they spent $20k? I'm also skeptical of Anthropic here, doing what essentially amounts to "vague posting" in an attempt to scare everyone and drive up their value before the IPO.
We did consider arguments in both directions (e.g. easier to recreate the code, agents can better understand how it works), but I honestly think the security argument favors open source: OSS projects will get more scrutiny faster, which means bugs won't linger around.
Time will tell; I am in the open source camp, though.
I know plenty of security researchers who exclusively use Claude Code and other tools for blackbox testing against sites they don’t have the source code for. It seems like shutting down the entire product is the only safe decision here!
The real threat is not security but bad actors copying your code and calling it theirs.
IMHO, open source will continue to exist and will be successful, but the existence of AI is a deterrent for most. Let's be honest: in recent times, the only reason startups went open source first was to build a community and an organic growth engine powered by early adopters. Now this is no longer viable and in fact simply helps competitors. So why do it then?
The only open source that will remain will be the real open source projects that are true to the ethos.
I agree with you that AI's disruption of attribution is a much bigger problem, but it's also worth recognizing that not everyone has this same motivation. It mostly affects copyleft open source licenses.
Attribution isn't required by many permissive open source licenses. Dependencies with those licenses will oftentimes end up inside closed source software. Even if there isn't FOSS in the closed-source software, basically everyone's threat model includes (or should include) "OpenSSL CVE". On that basis, I doubt Cal is accomplishing as much as they hope to by going closed source.
How has this changed?
Today, it's easy to (publicly) evaluate the ability of LLMs to find bugs in open source codebases, because you don't need to ask permission. But this doesn't actually tell us the negative statement, which is that an LLM won't just as effectively find bugs in closed codebases, including through black-box testing, reverse engineering, etc.
If the null hypothesis is that LLMs are good at finding bugs, full stop, then it's unclear to me that going closed actually does much to stop your adversary (particularly as a service operator).
Juxtapose this with the fact that many HNers will decry strong copyleft FOSS licenses as not being truly "open source" - the reality is that closed source software is still full of open-source non-copyleft dependencies. Unless you're rolling your own encryption and TCP stack, being closed source will not be the easy solution that many imagine it to be.
This is some truly exceptional, clownish, attention-seeking nonsense. The rationale is pure pretext: they just wanted to tack "because AI" onto an entirely self-serving decision. If AI cyber-offense is such a concern, recognize your role as a company handling truckloads of highly sensitive information and actually fix your security culture instead of just obscuring it.
I mean it's not complete nonsense, but yeah, doing it for security reasons sounds like BS. I actually thought this was going to be about how AI makes it super easy for someone to steal all their code and fold it into their own competing project. I've seen a few open source projects get sideswiped by this, AI is pretty good at copying code (and obfuscating the fact that it was copied). I suspect that's the real reason but it doesn't sound as good. So they went with this half-truth.
Risk tolerance and emotional capacity differ from one individual to another; while I may disagree with the decision, I can respect it.
That said, I think it’s important to try and recognize where things are from multiple angles rather than bucket things from your filter bubble alone, fear sells and we need to stop buying into it.
This is the future now that AI is here. Publishing is going to be dead. Read the tea leaves: how many engineers are claiming they don't use package managers anymore and just generate dependencies? In 5 years, no one will be making an argument for open source or blogging.
I hate how this sounds... but this reads to me as: "we lack confidence in our code's security, so we're closing the source to conceal whatever vulnerabilities may exist."
> if AI can be pointed and find vulnerabilities then do it yourself before publishing the code
At your cost.
Every time you push. (or if not that, at least every time there is a new version that you call a release)
Including every time a dependency updates, unless you pin specific versions.
I assume (caveat: I've not looked into the costs) many projects can't justify that.
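A back-of-the-envelope, with every figure below assumed rather than sourced:

    # What "deep scan on every push" might cost an active project per year.
    pushes_per_day = 30             # busy repo, CI on each push
    dependency_bumps_per_year = 150
    cost_per_deep_scan = 200        # $ for the multi-session audit discussed above

    yearly = (pushes_per_day * 365 + dependency_bumps_per_year) * cost_per_deep_scan
    print(f"${yearly:,}/year")      # $2,220,000 at these made-up rates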
Though I don't disagree with you that this looks like a commercial decision with “LLM based bug finders could find all our bad code” as an excuse. The lack of confidence in their own code while open does not instil confidence that it'll be secure enough to trust now closed.
For-profit companies using open-source software should bear that cost - that's my position.
I believe that N companies using an open source project and contributing back would make this burden smaller than one company using the same closed-source project.
Open-source supporters don't have a sustainable answer to the fact that AI models can easily find N-day vulnerabilities extremely quickly and swamp maintainers with issues and bug-reports left hanging for days.
Unfortunately, this is where it is going, and open-source software supporters did not foresee the downsides of open source maintenance in the age of AI, especially for businesses with "open-core" products.
Might as well close-source them to slow the attackers (armed with LLMs) down. Even SQLite has closed-sourced its tests, which is another good idea.
The tools are available to everyone. It's becoming easier for hackers to attack you at the same speed that it's becoming easier for you to harden your systems. When everyone gains the same advantage at the same time, nothing has really changed.
It makes me think of how great chess engines have affected competitive chess over the last few years. Sure, the ceiling for Elo ratings at the top levels has gone up, but it's still a fair game because everyone has access to the new tools. High-level players aren't necessarily spending more time on prep than they were before; they're just getting more value out of the hours they do spend.
I agree it's a shit tactic, but one thing I can say for those running software businesses is that it's not an equivalent linear increase on both sides. It's asymmetric, because the number of attackers and the amount of attack surface (exposed third-party dependencies, for example) are near infinite, with no opportunity cost for failure on the part of the bad actors (hackers). However, a single failure can bring down a company, particularly when it hosts sensitive user data that could ruin its customers' businesses or lives.
I think Cal are making the wrong call, and abandoning their principles. But it isn't fair to say the game is accelerating in a proportionate way.
See: https://www.youtube.com/watch?v=2CieKDg-JrA
Ultimately, he concludes that while in the short run the game defines the players' actions, an environment that makes cooperation too risky naturally forces participants to stop cooperating to protect themselves from being "exploited" (this bit is around 34:39 - 34:46).
Sure, I can see that to a degree. And there definitely is a bit of chaos during the transition period as everyone scrambles to figure out what the landscape looks like now. I could understand if they decided to temporarily do less-frequent code releases, or maybe release their code on a delay or something, while they wait for the dust to settle. But I don't think permanently ending open source development is the right move.
I might like to live there.
I'd give them more credit if they used the AI-slop unmaintainability argument.
I feel like, with AI, reliably self-hosting software is becoming easier, so the incentives to pay for a hosted service of an OSS project are going down.
Wanna sack a load of staff? - AI
Wanna cut your consumer products division? - AI
Wanna take away the source? - AI
https://news.ycombinator.com/item?id=47780712
It seems like an easy decision, not a difficult one.
This post's argument seems circular to me.
Great move.
Then good, that overengineered, intentionally crippled crap should go away.