I don't like Ed Zitron's automatic dismissal of everything AI, the constant profanity in his writing is getting old, and it's usually not very well structured, but that said... I like the perspective he has about the money involved.
https://www.wheresyoured.at/the-case-against-generative-ai/
OpenAI needs $1 trillion over the next four years just to keep existing. That's more than all the capital currently available from all of private equity combined.
It's just a staggering amount of money that the world needs to bet on OpenAI.
I like reading Zitron’s output, but this remark stood out to me as him weighing in on a domain he’s basically clueless about.
Spread across 4 years, there’s way more than $1tn in available private capital. Berkshire Hathaway alone has 1/3 of that sitting in a cash pile right now. You don’t have to look at many balance sheets to see that the big tech companies are all also sitting on huge piles of cash.
I don’t personally believe OAI can raise that money. But the money exists.
The dude is a living embodiment of "overconfident and wrong". He picks a side, then cherry picks until he has it all lined up so that his side is Obviously Right and anything else is Obviously Wrong.
Then reality happens and he's proven wrong, but he never fucking learns.
He is following a very well worn path of writing histrionically about a bubble and making money off it. Point out the obvious that it's a bubble. Throw out a lot of figures supporting it. Get your emotional hooks in (prey on fears of job loss, hatred of corporate mismanagement/execs etc) and charge a subscription for it. I liked an article or two of his but he's basically meatGPTing The Information with cursing.
OpenAI WANTS 1 trillion dollars. They do not need it. Arguably no one needs it. I am at best lukewarm on LLMs, but how are any sane people rationalizing these requests? What is the opportunity cost of spending a billion or even a hundred billion dollars on compute instead of on climate tech or food security or healthcare or literally anything else?!
Well, it's the rich's money, dedicated to replacing as many people as possible in the Great war on labor.
I can easily see the rich betting a trillion dollars, especially if it's not their money and they start employing government funds in the name of a fictitious AI arms race.
They smell blood in the water. Reducing everyone to as minimum wage as possible.
Capital is already concentrated, aligned along monopolies and cartels, oligarchical control, and AI is the final key to total control to whatever degree they desire.
> Well, it's the rich's money, dedicated to replacing as many people as possible in the Great war on labor.
A lot of "the rich's" money is actually backed by the pension funds, 401k and similar investment vehicles.
That's the dirty secret in most of today's world. A lot of ultra-large companies would absolutely deserve getting broken up just for being way too powerful; the problem is that any such attempt would send the stonk markets, and with them people's pensions, tanking.
Which is precisely why this arrangement exists. Think of it as chaining the galley slaves to the galley - if it sinks, then so do they, so their "interest" is to keep rowing.
But that is a short-term perspective. Long term, if we do nothing, we remain galley slaves.
> A lot of "the rich's" money is actually backed by the pension funds, 401k and similar investment vehicles.
Not really clear what you mean by this. But arguably all of the rich's money (and all money in general) is backed by labour/the ability to exchange that money for someone else's labour.
The amount of shares in many a large company that are held by passive or semi-passive investment vehicles.
It's not just high net worth individuals and nation-state entities (wealth funds) that pump money into YC, Meta, Apple, Nvidia, Microsoft and god knows what else; the bulk of the ownership is held indirectly by the wide masses via one sort of pension scheme or another.
Elon Musk doesn't play with his own money on xAI, he plays with the money of his investors, and so do all the other actors in the AI bubble.
I think you're greatly overestimating how much ownership the general public has. The bottom 90% of the population own like 12% of all equities, while the bottom 50% own only 1% [0].
This will only be true for a few decades. Nobody entering the workforce today will ever have any retirement funds. Most people who have been in it for a decade will never have any. Pensions aren't even a thing for most people. There will be a point where the common folk have very little remaining incentive not to burn it all down.
There will be at least one trillionaire within the next 12-24 months, and likely there will be multiple trillionaires within 5 years, the way wealth is consolidating at an accelerated rate. These amounts of investment don't seem to be fictional anymore.
Altman has sort of linked the fate of OpenAI to that of Nvidia, AMD, Oracle, Microsoft etc. with these huge deals / partnerships. We've seen the impact of these deals on stock prices before even a penny has changed hands.
Tracks with his reputation for power play and politics.
That certainly explains why Microsoft is so desperate to force everyone into using their AI whether they want to or not. I'm wondering if the deal will end Microsoft when OpenAI goes belly up.
I want Windows to play games; for computing I use Linux. But they keep foisting shite I don't want on me just to play games. AI and sodding OneDrive can piss off.
I've kept Windows around because it was less painful to game on than Linux, but Linux is better than ever and Windows is getting worse. At some point those lines are gonna cross, and for the first time in 30-odd years I'll not be running a single device with a Microsoft operating system on it.
It's not just pushing it on the users. There's also a heavy-handed push to get teams to use more AI coding inside the company. I'll let you guess what that does to software quality, on average...
Regarding Linux gaming, the biggest problem there right now is all the multiplayer games with kernel anti-cheat. But I suspect that it'll be resolved eventually by Valve pushing for SteamOS support (although I doubt it'll ever work on any random Linux distro).
I'll just throw in support for gaming on Linux – it's pretty nice feeling these days! I still have the occasional (once every 5–8 months?) update cause a short-lived bug, but it's a very justifiable trade-off to avoid Windows these days.
It depends on the extent to which the promise was peddled and whether MSFT can be trusted with the cash balance - investors will reflect that in the stock price in future if there is a bubble-bursting event. If that scenario pans out, Apple will be sitting very pretty, given it has not spent any real money on pursuing LLMs.
> OpenAI needs 1 trillion dollars in next four years just to keep existing. That's more than all currently available capital, together, from all private equity.
Recent estimates I've seen of uninvested private equity capital are ~$1.5 trillion, and total private equity $5+ trillion, with several hundred billion in new private equity funds raised annually. So this simply seems incorrect even assuming only current "dry powder" is considered, or only new funds raised over the next four years, much less both and/or the rest of private equity.
There's a stronger case for world hunger being bottlenecked than healthcare. World hunger is a logistics problem now, but no amount of money lets you print doctors.
You can't just throw money at the world hunger problem, it will end up in some warlord's coffers. Hunger still exists because it is politically useful to keep people hungry.
I don't know what wealth distribution means in this context, or why it's relevant at all, but food grows fast and doctors take like 20 years to grow no matter how much money you throw at it or where you get the money. And the context above was more specifically "fully pay health care costs" which is a comical fantasy the moment you try to actually define what that means, because the limit is not the price.
Changing the entire paradigm of medical care would be possible with enough money. There's no logical reason it takes 20 years to become a doctor. The fact that it does severely hampers both the quantity and quality of doctors. Becoming a doctor is much less about knowledge and intelligence than it is about attrition resistance. Loads of capable students disregard medical careers each year for more rapidly attainable positions. In many cases these are the MOST capable students because they recognize the problems with pursuing medical degrees.
Certainly the most skilled and advanced in the medical field will need significant schooling but there needs to be a major reform in healthcare training. One that produces more knowledgeable and skilled professionals and not a glut of questionably competent nurse practitioners.
With a little bit of lag time (school) we could have a metric fuckton of doctors. We have a metric fuckton of shitty lawyers. Doctors are artificially gated in the US.
What’s the joke? “What do you call the person who graduated last in their class from med school? A doctor.”
> Doesn't America alone already spend 2 or 3 trillion a year on healthcare?
There's a huge difference between "paying for healthcare" and "paying a healthcare provider" here in the United States. Oftentimes the latter has 2 or 3 additional zeroes attached.
Sure, but Congress invariably pretends to mean the former while legislating the latter. Ask for single-payer instead of privatized health insurance and they will sooner bankrupt the country than switch to sanity. Congress and its funding sources are now captured by privatized health insurance:
In 2023-24, Health came in at #7 in total political donations and Lawyers & Lobbyists at #8; the combined "Finance/Insur/RealEst" sector was #1. It would be useful to see "Insurance" broken out into health insurers vs non-health (can anyone cite a more granular breakdown?). [https://www.opensecrets.org/elections-overview/sectors]
It's not single-payer vs privatized insurance. Why is this myth so persistent in the US?? There are many different options for public healthcare, of which single-payer is but one, and it's not even the most popular worldwide. Many European countries are not single-payer, including e.g. Germany.
If that's true, all you have to do is convince other people that it's true and they can just vote for someone to deploy that money. Don't need to wait for someone else to do it.
There's another important bit there: 'we need to be able to tell the AI to make money for us and no-one else can compete with us on that'. I think both halves of that are questionable.
I mean FWIW they could probably make those folks happy by just spitting out a list of everything to short because of ai disruption on each new release lol
In late 2021, Ed Zitron wrote (on Twitter) that the future of all work was "work from home" and that no one would ever work in an office again. I responded:
"In the past, most companies have had processes geared towards office work. Covid-19 has forced these companies to re-gear their processes to handle external workers. Now that the companies have invested in these changed processes, they are finding it easier to outsource work to Brazil or India. Here in New York City, I am seeing an uptick in outsourcing. The work that remains in the USA will likely continue to be office-based because the work that can be done 100% remotely will likely go over seas."
He was wryly communicating, "your argument was so stupid I don't even need to engage with it".
In my experience he has a horrible response to criticism. He's right on the AI stuff, but he responds to both legitimate and illegitimate feedback without much thoughtfulness - usually a non-sequitur redirect or an ad hominem.
In his defense though, I expect 97% of feedback he gets is Sam Altman glazers, and he must be tired.
He's right on the AI stuff? How do you figure that? As far as I can tell, OpenAI is still operating. It sounds like you agree with him on the AI stuff, but he could be wrong, just like how he was wrong about remote work.
I'm actually more inclined to believe he's wrong if he gets so defensive about criticism. That tells me he's more focused on protecting his ego than actually uncovering the truth.
Whether or not OpenAI is sustainable is a question that can only be answered in hindsight. If OpenAI is still around in 10 years, in the same sort of capacity, does OP become retroactively wrong?
My point is, you can agree that OpenAI is unsustainable, but it's not clear to me that is a decided fact, rather than an open conjecture. And if someone is making that decision from a place of ego, I have greater reason to believe that they didn't reason themselves into that position.
The fact that they are not currently even close to profitable, with ever-increasing costs and sobering scaling realities, is something you could consider. And if you do believe they are sustainable, then you have to believe in (in my opinion, unlikely) scenarios in which they somehow become sustainable, which is also conjecture.
It seems a little unreasonable to point out "they are still around" as a refutation of the claim that they aren't sustainable when, in fact, the moment the investment money faucet keeping them alive is turned off, they collapse, and very quickly.
I don't think he's right about everything. He is particularly weak at understanding underlying technology, as others have pointed out. But, perhaps by luck, he is right most of the time.
For example, he was the lone voice saying that, despite all the posturing and media manipulation by Altman, OpenAI's for-profit transformation would not work out, and certainly not by EOY 2025. He was also the lone voice saying that "productivity gains from AI" were not clearly attributable to such, and are likely make-believe. He was right on both.
Perhaps you have forgotten these claims, or the claims about OpenAI's revenue from "agents" this year, or that they were going to raise ChatGPT's price to $44 per month. Altman and the world have seemingly memory-holed these claims and moved on to even more fantastical ones.
> I am hypothesizing that for OpenAI to survive for longer than two years, it will have to (in no particular order):
> - Successfully navigate a convoluted and onerous relationship with Microsoft, one that exists both as a lifeline and a direct source of competition.
> - Raise more money than any startup has ever raised in history, and continue to do so at a pace totally unseen in the history of financing.
> - Have a significant technological breakthrough such that it reduces the costs of building and operating GPT — or whatever model that succeeds it — by a factor of thousands of percent.
> - Have such a significant technological breakthrough that GPT is able to take on entirely unseen new use cases, ones that are not currently possible or hypothesized as possible by any artificial intelligence researchers.
> - Have these use cases be ones that are capable of both creating new jobs and entirely automating existing ones in such a way that it will validate the massive capital expenditures and infrastructural investment necessary to continue.
> I ultimately believe that OpenAI in its current form is untenable. There is no path to profitability, the burn rate is too high, and generative AI as a technology requires too much energy for the power grid to sustain it, and training these models is equally untenable, both as a result of ongoing legal issues (as a result of theft) and the amount of training data necessary to develop them.
He is right about this too. They are doing #2 on this list.
Is he right on the AI stuff? Like, on the OpenAI company stuff he could be? I don't know? But on the technology? He really doesn't seem to know what he's talking about.
I generally don't agree with him on much; it's just that nobody really talks about how much money these companies burn, and are expected to burn, in the bigger perspective.
For me 10 billion, 100 billion and 1 trillion are all very abstract numbers - until you show how unreal 1 trillion is.
Attach your name to this publicly, and you're a clown. I don't know why the world started listening to clowns and taking them seriously, when their personas are crafted to be non-serious on purpose.
Hasn't he been saying that OpenAI is going to shut down every year for the last few years now? And that models are never going to get better than they were in 2022? I think he's pretty clearly a grifter chasing an audience that wants to be reassured that big tech is going away soon.
Ed Zitron may be many things but he is no grifter. He writes what he believes and believes what he says, and I basically agree with all of it. The chattering class in SV has been wildly wrong for years, and they'll really look foolish when the market crashes horribly and these companies collapse.
He's saying more than that the companies are going to collapse; he's making pronouncements about the underlying technology, which are claims that are much harder to defend. I'm not entirely sure he understands the distinction between the companies and the technology, though.
Respectfully... what?? Ed at this point is one of the most well-read people on Earth on this topic. Of course he knows the difference between the companies and the technology. He goes in depth both on why he thinks the companies are financially unviable AND why he's unimpressed by LLMs technologically alllll the time.
Even as someone who is generally inclined to agree with his thesis, I find Ed Zitron's discussions as to why AI does not and will never work deeply unconvincing.
I don't think he fundamentally gets what's going on with AI on the tech level and how the Moore's law type improvements in compute have driven this and will keep doing so. He just kind of sees that LLM chatbots are not much good and assumes things will stay like that. If that were so investing $1tn would make no sense. But it's not true.
How do you want to define grifter? He shows up, makes a lot of big promises, talks a lot of shit, doesn't actually engage with any real criticism, gets paid for it, and then exits, stage left. He could be right, he could be wrong, but he leaves no room for debate. If all you want is someone to yell at you about how right your feelings on something are, I mean, hey, I have a therapist too. I don't ask her for financial advice though.
The AI is getting better at things like maths. I recently asked it about iterating the Burrows-Wheeler transform, and it appeared to really understand that. It's not super easy to reason about why it's reversible, etc. and I felt that it got it.
This is obviously not AGI, and we're very far from AGI as we can see by trying out these LLMs on things like stories or on analyzing text, or dealing with opposing arguments, but for programming and maths performance is at a level where it's actually useful.
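For anyone who hasn't run into it, the transform in question is short enough to sketch - a naive, illustration-only Python version (the bwt/ibwt names are mine, nothing from the thread):

    # Naive Burrows-Wheeler transform: append a unique sentinel, sort all
    # rotations of the string, and keep the last column.
    def bwt(s, sentinel="\x03"):
        s += sentinel
        rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
        return "".join(r[-1] for r in rotations)

    # Inverse: repeatedly prepend the transform to the sorted table. Sorting
    # recovers the first column at every step, which is the non-obvious
    # reason the transform is reversible at all.
    def ibwt(t, sentinel="\x03"):
        table = [""] * len(t)
        for _ in range(len(t)):
            table = sorted(t[i] + table[i] for i in range(len(t)))
        return next(row for row in table if row.endswith(sentinel))[:-1]

    assert ibwt(bwt("banana")) == "banana"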
My answer to this is - so what? What are the effects in the real economy?
There's probably a good 2-3 years left of runway for substantial progress before things really fall off a cliff. There has to be some semblance of (real) GDP growth.
I think it might be possible to automate development of short programs. I also think it might be possible to reduce the confusion, misunderstandings and cliches, so that the models can work for longer stretches.
But people probably expect to get the next version for what they pay in subscriptions now, so I can't imagine much more revenue growth for the model companies.
A lot of this is because there isn't a good definition of AGI. Look at Sama's recent interviews, that's how he deflects, along with the statement about the Turing test having ultimately been inconsequential. They have an internal definition of AGI that is "the model can perform the vast majority of economically viable tasks at the level of the highest humans" which isn't the story the investors are expecting when they hear AGI, so they're trying to stay mum to truthfully roadmap AGI while not blatantly lying to capital.
Platforms usually deliver significant value that is hard to replicate. OpenAI doesn't have any such thing. It's trivially replaced, and there are many competitors already. OpenAI is ahead of the curve, but they don't seem to have any particular way to do sticky capture. Migrating to a different LLM is an afternoon's work at most, not nearly the complexity of porting an app between OSes or creating a robust hardware driver model.
Yeah, I'm extremely loyal to ChatGPT Plus and Codex, but it's because OpenAI has a native Mac app that I like, and Codex is included with Plus and has served me well enough not to look at Claude. I like GPT-5 quite a bit as a user. I'll concede none of these are small things - they've had my money for 2+ years - but they're not gigantic advantages either.
At an enterprise level however, in the current workload I am dealing with, I can't get GPT-5 with high thinking to yield acceptable results; Gemini 2.5 Pro is crushing it in my tests.
Things are changing fast, and OpenAI seems to be the most dynamic player in terms of productization, but I'm still failing to see the moat.
Interesting, so it's not just me who's finding Gemini 2.5 Pro to be the quiet leader? Deep Research also seems to be better in it, and the limits for that are far more generous to boot (20 per day on Gemini!).
Makes one wonder if Google will eventually sweep this field.
OpenAI is dynamic for consumer apps. Anthropic seems much better at productizing AI that you can actually build with, while also catering to enterprise in their own productized offerings
Also Claude Opus 4.1 runs multidimensional circles around GPT-5 in my view. The only better use case for GPT-5 is when you need it to scrape the web for data
Too bad Anthropic fucks over their 20x max subscribers. Their whole opus usage limit bullshit in favor of sonnet 4.5 which is objectively worse regardless of whatever they say. They clearly want to save money and take our $200.
Their platform mostly just works. Their API playground and docs are like 100x better than whatever garbage Anthropic has.
I think their UX is way better, and I can have long AF conversations without lagging. I can even change models in the same conversation. Basic shit Anthropic can’t figure out (they can fleece their 20x max subscribers tho)
I think if they get the AI human interface right, they will have their iPhone moment and 10x.
The platform is just a search tool and extended processing modes. That is easily replicated elsewhere.
The only moat they have is the fact that you still need a GPU of some kind to reliably run even a tiny LLM. But the gap between what absolutely needs a server farm and what can be run on a store-bought gaming computer is quickly closing. You can already run mixture-of-experts models on gaming rigs with a high degree of usability compared to just one year ago. And that tech continues to be pushed further and further. It's only a matter of time until we see ChatGPT levels of access running on a quad-core laptop, totally offline. And once that happens, all such a system would need is the correct tooling on top of the AI model "brain" to make it usable.
And beyond that, what if you could have an AI model on an ASIC-style add-in card? Where's their moat then?
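To make the gaming-rig point concrete, this is roughly what local inference looks like with llama-cpp-python today - a sketch, not a recommendation; the model filename and the n_gpu_layers split are placeholders you'd tune to your own hardware:

    # Load a quantized GGUF model, offloading some layers to the GPU.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./some-moe-model.Q4_K_M.gguf",  # hypothetical local file
        n_gpu_layers=24,  # offload what fits in VRAM; the rest runs on CPU
        n_ctx=4096,
    )
    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Hello from a gaming rig."}]
    )
    print(out["choices"][0]["message"]["content"])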
Don't forget about Macs. Top models can run even fairly large LLMs (although time to first token is... not great). But even mid-range ones can certainly run LLMs in the ballpark of ~30B params very well, which is where things start to get interesting IMO.
I'd say their DX is way better compared to the competition. The playground, tooling, and web experience with documentation is far superior to the competition (even tho it has issues sometimes). User experience is key in a sea of the same shit.
I mean, you could say the same about a MacBook or iPhone/iPad, but the actual people out there (not HN users lol) vastly prefer Apple to HP, Dell, etc. Due to their wallets, some can't, though.
There are literally thousands of other laptops that do the "same thing" (computer for doing shit).
Those who say otherwise are usually broke and know deep down that, given proper purchasing power, they too would rather use a $3k MacBook Pro than some POS Dell.
Same with android.
Everyone knows it is cope based on PP, it is just in poor taste to actually call it like it is (idc)
I don’t think they will get any moat. I might be wrong on this of course, but I don’t see a killer feature for these stochastic parrots that can’t be easily replicated
Never mind the phrase. If your parrot can compete in international math and programming competitions at the gold-medal level and make entire subreddits fail the Turing test, I would like to borrow your parrot.
"the term stochastic parrot is a metaphor, introduced by Emily M. Bender and colleagues in a 2021 paper, that frames large language models as systems that statistically mimic text without real understanding"
Much of the value I get from ChatGPT is based on its memory system, which has a good profile on me. For example, if I ask it to find me something to eat at X restaurant, it will already know my allergies, dietary preferences, weight goals, medications, other foods I like, etc. and suggest an item based on that, all without me explicitly telling it.
Moving from ChatGPT to Claude I would lose a lot of this valuable history.
Or you could type up "My allergies are X, Y, Z, I have the following preferences..." etc. and put it into whatever chat bot you like. Obviously this is a bit of a pain, but it probably constrains significantly how much of a premium ChatGPT can charge. You might not bother if ChatGPT is time and a half more expensive, but what if it's 3x as much as the competition? What if there's a free alternative that's just as good?
I pay for the $200/mo ChatGPT, to me that's insanely cheap compared to the value it provides. Better pricing is highly unlikely to make me switch. If a competitor were able to sustain a lead in model intelligence/capability then I'd consider it.
Even simpler, ask ChatGPT "How would you describe my dietary preferences and restrictions?", maybe throw in some personality guidance of "You are training my executive assistant".
Have you tried asking ChatGPT to output everything it knows about you to a format easily digestible by LLMs? Does the memory stick between model switches?
I like to ask each llm what it knows about me. I feel like I could take that output and feed it into another llm and the new one would be up to speed quickly....
You could literally ask it to write out everything it knows about you into a form usable in a CLAUDE.md file, put that in the directory you're using e.g. claude code in, and boom.
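For example - the exact wording here is just an illustration, not a magic incantation:

    Write out everything you know about me (preferences, projects,
    conventions, recurring context) as a CLAUDE.md file I can drop into
    my repo. Keep it to terse bullet points, grouped by topic.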
The play OpenAI is making has nothing to do with the underlying models any more. They release good ones but it doesn't really matter, they're going for being the place people land when they open a web browser. That is incredibly sticky and not easily replaced. Being the company that replaces the phrase "oh just Google it" is worth half the money in the world, and I think they're well on their way to getting there.
I agree, in general. I don't know what the world looks like in 10 years if all of the weird attempts at fitting an LLM in as a replacement for what we currently have actually work, however. Facebook is probably the closest analogy, where there's plenty of room to grow, but at some point you're going to have the OS builders shut down your efforts unless you want to build your own OS.
I do say ChatGPT when referring to LLMs/genAI in general, but I do hate saying it as it is nowhere near as nice to say as "google". I will switch immediately once something better comes up.
“Chat” is already in the youth lexicon, originally referring to an amorphous blob of live stream viewers. It now kind of refers to a non-existent but omnipresent viewer of your life.
I think ChatGPT might turn into just “chat” as the next evolution of the term.
Google was always terrified by the fact that they had no moat in Search. This was clear from interviews and articles at the time. That's why they decided to roll up the ad market instead, and once they had the advertisers Search became a self-fulfilling monopoly.
Google Search would be moatless if not for the AdMob purchase.
Google search requires a lot of resources to crawl, keep indices up to date, and provide user level significance (i.e. different results to different users). Then couple it with their other services (Google Maps, etc).
The competitors have not come even close to Google's level of quality.
With LLMs, it's different. Gemini/Claude are as good, for the most part. And users don't care that much either - most use the standard free ChatGPT, which likely is worse than many competitors' paid models.
It's not just the quality. There are people who complain about how they perceive it to have significantly decreased over time. Yet there are many other factors, including presence as the default search engine in so many setups out there.
> Migrating to a different LLM is an afternoon's work at most, not nearly the complexity of porting an app between OS' or creating a robust hardware driver model.
I question this. Each vendor's offering has its own peculiar prompt quirks, does it not? Theoretically, switching RDBMS vendors (say Oracle to Ingres) was also "an afternoon's work" but it never happened. The minutiae are sticky with these sorts of almost-but-not 'universal' interfaces.
> I question this. Each vendor's offering has its own peculiar prompt quirks, does it not? Theoretically, switching RDBMS vendors (say Oracle to Postgres) was also "an afternoon's work" but it never happened. The minutiae are sticky with these sorts of almost-but-not 'universal' interfaces.
The bigger problem is that there was never a way to move data between oracle->postgres in pure data form (i.e. point pgsql at your oracle folder and it "just works"). Migration is always a pain, and thus there is a substantial degree of stickiness, due to the cost of moving databases both in terms of risk and effort.
In contrast, vendors [1] are literally offering third-party LLMs (such as Claude) in addition to their own, and offering one-click switching. This means users can try and, if they desire, switch with little friction.
Moreover, it's trivial to run several LLMs side-by-side for a while and measure the success of each, then migrate to the one that performs the best. And you can even migrate in-progress chats since all the context is passed on each call anyway.
The current LLMs support data export/import inherently because the interface is pure text.
All one needs to do is say something like "tell me all of the personalization factors you have on me" and then just copy and paste that into the next LLM with "here's stuff you should know about how to personalize output for me".
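A minimal sketch of the side-by-side idea mentioned above, assuming both vendors expose OpenAI-compatible endpoints - the base URLs, keys, and model names are illustrative, not real services:

    # Send the same prompt to two OpenAI-compatible endpoints and compare
    # the answers before committing to a vendor.
    from openai import OpenAI

    endpoints = {
        "vendor-a": ("https://api.vendor-a.example/v1", "KEY_A", "model-a"),
        "vendor-b": ("https://api.vendor-b.example/v1", "KEY_B", "model-b"),
    }
    prompt = [{"role": "user", "content": "Summarize our refund policy in one paragraph."}]

    for name, (base_url, key, model) in endpoints.items():
        client = OpenAI(base_url=base_url, api_key=key)
        reply = client.chat.completions.create(model=model, messages=prompt)
        print(f"--- {name} ---\n{reply.choices[0].message.content}\n")

Since the full context is passed on every call anyway, "migrating" an in-progress chat is just replaying the same messages list at the new endpoint.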
> The minutiae are sticky with these sorts of almost-but-not 'universal' interfaces.
True, but that's not really applicable here since LLMs themselves are not stable, and are certainly not stable within a vendor's own product line. Like imagine if every time Oracle shipped a new version it was significantly behaviorally inconsistent with the previous one. Upgrading within a vendor and switching vendors end up being the same task. So you quickly solidify on either
1) never upgrading, although with these being cloud services that's not necessarily feasible, and since LLMs are far from a local maximum in quality that'd quickly leave your stack obsolete
or
2) being forced to be robust, which makes it easy to migrate to other vendors
It's reasonable to question it, but there's a fun Chinese paper (https://arxiv.org/abs/2507.15855) where they attempt to create a system of prompts that can be used with commercial LLMs to solve hard maths problems.
It turns out that they can use the same prompt system for all of them, with no changes and still solve 5/6 IMO problems. I think this is possibly iffy, since people might have updated the models etc., but it's pretty obvious that this kind of thing is how OpenAI are doing their multi-stage thinking thing for maths internally.
Consequently, if prompt systems are this transferable for these hard problems, why wouldn't both they and individual prompts be highly transferable in general?
No, changing RDBMS is a totally different challenge. They provide predictable, reproducible output that is very sensitive and brittle to the slightest change in the input, while with an LLM you expect the exact opposite.
Changing the LLM backend in some IDE is as complicated as selecting an option in a dropdown, for those who integrate such a feature. There are other scenarios where it might be a bit more complicated to transition, of course, but that's it.
If you are doing things “properly” then you have good evals that let you test the behaviour of different LLMs and see if they work for your problem.
The vendors have all standardised on OpenAI's API surface - you can use OpenAI's SDK with a number of providers - so switching is very easy. There are also quite a few services that offer this as a service.
The real test is whether a different LLM works - hence the need for evals to check.
If you took any of current top 3 models from me, I would not miss the deleted one in the least. I run almost every non-trivial prompt through multiple models anyway.
OpenAI's moat is the data you give it. It's the same reason so many people have GMail accounts, even though we all know Google sucks. It's not that you like GMail better than any other email service, it's because migrating to another service is a pain in the ass.
OpenAI will either use customer data to enshittify to a level never seen before, or they will go insolvent.
I actually prefer Google Gemini. 2.5 is free and works awesome for what I need AI for. It just made the resume I uploaded immeasurably better last night.
That's why OpenAI is hard pivoting towards products now. The big one IMO is Instant Checkout and Agentic Commerce Protocol. ChatGPT is going to turn into a product recommendation engine and OpenAI is going to get a slice of every purchase, which is going to disrupt the current impression/click adtech model and potentially Google and Amazon themselves. It's an open question how hard they can do this without enshittifying ChatGPT itself, but we'll see.
The notion that people would be willing to let LLMs spend money is, frankly, insane given the hallucination problems that still don't have any clear solution in sight.
OpenAI turned that research into a product before Google, which is a huge failure on Google's part, but that's orthogonal to the invention of what powers modern models.
You mean the social network that is currently dying the same death as countless other platforms before it, just on a larger scale?
Maybe some are too young to remember the great migrations from/to MySpace, MSN, ICQ, AIM, Skype, local alternatives like StudiVZ, ..., where people used to keep in contact with friends. Facebook was just the latest and largest platform where people kept in touch and expressed themselves in some way. People adding each other on Facebook before others to keep in touch hasn't been a thing for 5 years. It's Instagram, Discord, and WhatsApp nowadays depending on your social circle (two of which Meta wisely bought because they saw the writing on the wall).
If I open Facebook nowadays, then out of the ~130 people I used to keep in touch with through that platform, pretty much nobody is still doing anything on there. The only sign of life will be some people showing as online because they have the Facebook app installed to use direct messaging.
No, people easily migrate between these platforms. All it takes is put your new handles (discord ID/phone number/etc) as a sticky so people know where to find you. And especially children will always use whichever platform their parents don't.
Small caveat: This is a German perspective. I don't doubt there's some countries where Facebook is still doing well.
>No, people easily migrate between these platforms.
No? It's rare for these platforms to survive, the one that was closest to challenging Facebook was kneecapped by the US government.
The time between the founding of MySpace and Facebook was a little over a year. Instagram has been the largest social network for close to a decade now, and it's not like others haven't been trying. META is up 600% over the last 3 years. I really question your definition of the word "dying".
FB in particular is becoming deader and deader every month, and it's only a matter of time before something else comes along that sucks their attention away.
People say you're wrong but I agree. Facebook is nothing more than a rent-seeking middle man between you and your friends/family. Instead of just talking to your family normally, like over the phone, now you have to talk to them in between mountains of Sponsored Content and AI-generated propaganda. It provides no productive value to the world except for making the world more annoying and making people more isolated from one another.
When you realize this, you realize that a lot of other supposedly valuable tech companies operate in the exact same way. Worrying that our parents' retirement depends heavily on their valuations!
I mean, technically no, FB has a network effect of the other people on it either being the people you want to talk to, or the people you want to advertise to.
That's a precise, incisive observation: OpenAI is trivially replaceable (any AI provider is), as the evidence demonstrates. It has no claim to operating software that's specifically distinct from others'.
And business partnerships, government partnerships, and AI regulation (to establish laws that keep competitors out). Sam knows they have no moat and will try every avenue to establish one.
They understand that, and that's why they're making it sticky by adding in-app purchasing, advertising, and integrations. Also why they hired OGs from IG/FB. They are building the moat and hoping that being first to market is going to work out.
They are trying to become/replace Google. They are first to market for an entirely new query paradigm, and in-app purchases and advertising are just one aspect of a platform.
The current AI wave has been compared (by sama) to electricity and sometimes transistors. AI is just going to be in all the products. The trillion dollar question is: Do you care what kind of electricity you are using? So, will you care what kind of AI you are using.
In the last few interviews of his I have listened to, he has said that what he wants is "your AI" that knows you, everywhere that you are. So his game is "switching costs" based on your own data. So he's making a device, etc. etc.
Switching costs are a terrific moat in many circumstances and requires a 10x product (or whatever) to get you to cross over. Claude Code was easily a 5x product for me, but I do think GPT5 is doing a better job on just "remembering personal details" and it's compelling.
I do not think that apps inside chatgpt matters to me at all and I think it will go the way of all the other "super app" ambitions openai has.
If you take that at face value, shouldn't every investor just back Google or Apple instead? Like, OpenAI is, at best, months ahead when it comes to model quality. But for them to get integrated into the lives of people in the way all their competitors are would take years. If the way in which ai becomes this ubiquitous trillion dollar thing involves making it hyper-personalized, is there any way in which OpenAi is particularly well positioned to achieve that?
> I do think GPT5 is doing a better job on just "remembering personal details" and it's compelling.
Today I asked GPT-5 to extract a transcript of all my messages in the conversation, and it hallucinated messages from a previous conversation, maybe leaked through the memory system. It cannot tell the difference. Indiscriminate learning and use of the memory system is a risk.
I mean, don't you think this is more analogous to the introduction of computing than electricity? If you told people in 1960 that there would be supercomputers inside people's refrigerators, do you think they would have believed you?
And most people actually don't care what CPU they have in their laptop (enthusiasts still do which i think continues to match the analogy), they care more about the OS (chatGPT app vs gemini etc).
can the world and tech survive fruitfully without AI? yes. can the world and tech survive without electricity and transistors - not really. the modern world would come crashing down if transistors and electricity disappeared overnight. if AI disappeared over night the world might just be a better place.
I don't understand the link between the title of the article, and its content. If I summarize their three points:
1. OpenAI's corporate strategy is to become a monopoly
2. OpenAI is investing in infrastructure because they think they'll have lots of users in the future
3. Making videos on Sora is fun, and people are gonna post more of these.
How does that substantiate "we live in OpenAI's world"? Am I missing something?
I am not sure about this. They definitely created a brand-new service and data flows that didn't exist before, and they have the majority of the mind share; however, it's already commoditized. The next two to three years will show how the chips fall. I can see that it's tough or almost impossible for Apple to get a share in this, but Google is right there to take the consumer side. For enterprise, again, we have to wait and see how GCP and AWS do.
The value is not in the LLM but in vertical integration and providing value. OpenAI has identified this and is doing vertical integration in a hurry. If the revenue sustains, it will be because of that. For the consumer space, again, Nvidia is better positioned with their chips and SoCs, but OpenAI is not a sure thing yet. By that I don't mean they are going to fall apart; they will continue to make a large amount of money, but whether it's their world or not is still up in the air.
I'm on the verge of unsubscribing from Stratechery. The last month has been a bunch of fawning over Meta, YouTube, and constant talk about and fawning over OpenAI and whatever latest models are coming out. It's kind of tiring and boring. I swear I heard them talk about some YouTube influencers event like five times across their different shows and across time. Like, I do not care at all.
As a longtime loyal subscriber to Stratechery... I kinda agree. But as the other commenters did point out, this does reflect how the market seems to feel about OpenAI, at least. (Meta - I'm less sure of; Thompson does fawn over Meta quite a bit, I personally think it's too much and seems to not fully reflect reality, but man do they really cane it when you see their usage numbers, so maybe he's right.)
I did think his GPT-5 commentary was good, insofar as picking up the nuance of why it's actually better than the immediate reactions I, at least, saw in the headlines.
Where I do agree with you is how Stratechery's getting a little oversaturated. I'm happy Ben Thompson is building a mini media empire, but I might have liked it more when it was just a simple newsletter that I got in my inbox, rather than pods, YouTube videos, and expanding to include other tech/news doyens. Maybe I'm just a tech media hipster lol.
Is this fawning or just reflecting reality? I’m generally in the “LLMs kinda suck camp” and I read the headline and thought “yep 100%”. OpenAI is able to raise and deploy insane amounts of capital on a whim. Regardless of that being a good or bad thing it’s still true.
Did you listen to the recent interview with Ben Bajarin? I thought that interview alone justified the subscription. Curious as to whether anyone else felt the same.
Fantastic interview. Hard to get much info from inside the world Bajarin was speaking of. Notable how everyone is saying they can't get capacity for the tokens they're trying to serve.
The question is if the company can add more value to the models than someone else. I still see a lot of gaps in the ecosystem, e.g. evaluation/testing systems, integrations beyond the chat interface, and active control to get good results, not to mention other types of models that deal with the 3D world or temporal data. There is an opportunity for an outsider to come and grab parts of the pie whilst the biggest are competing.
They do look like they're trying to grab the market with tooling, but if you can use their tools (OSS) and switch the models, then where is the moat?
Can someone enlighten us as to how so many platforms (ChatGPT, Gemini, Claude, etc.) all sprang up so quickly? How did the engineering teams immediately know how to go about doing this kind of tech with LLMs and DNNs and whatnot?
By 2020/2021 with the release of GPT-3, the trajectory of a lot of the most obvious product directions had already become clear. It was mainly a matter of models becoming capable enough to unlock them.
E.g. here's a forecast of 2021 to 2026 from 2021, over a year before ChatGPT was released. It hits a lot of the product beats we've come to see as we move into late 2025.
It's not much different to other ML; pretty much it's on a bigger and more expensive scale. So once someone figured out the rough recipe (NN architecture, ludicrous scale of weights and data, reinforcement-learning tuning), it's not hard for other experts in the field to replicate, so long as they have the resources. DeepSeek was pretty much a side project, for example.
I imagine it wasn't as immediate as it might look on the outside. If they all were working independently on similar ideas for a while, one of them launching their product might have caused the others to scramble to get theirs out as well to avoid missing the train.
I think it's also worth pointing out that the polish on these products was not actually there on day one. I remember the first week or so after ChatGPT's initial launch being full of stories and screenshots of people fairly easily getting around some of the intended limitations with silly methods like asking it to write a play where the dialogue has the topic it refused to talk about directly or asking it to give examples about what types of things it's not allowed to say in response to certain questions. My point isn't that there wasn't a lot of technical knowledge that went into the initial launch, but that it's a bit of an oversimplification to view things at a binary where people didn't know how to do it before, but then they did.
Was it that quickly? GPT 3 is where I would kind of put the start of this and that was in 2020, they had to work on the technology for quite a while before it got like this. Everyone else has been able to follow their progress and see what works.
All of the products you mention already had research teams (in the case of ChatGPT and Claude, teams that actually predated most of their engineers). So knowing how to build small language models was always in their wheelhouse. Scaling up to larger LLMs required a few algorithmic advancements, but for the most part it was a question of sourcing more data and more compute. The remarkable part of transformers is their scaling laws, which let us achieve much better models without having to invent new architectures.
The intersection of plentiful cloud compute with existing LMs. As I understand it, right now it's really just throwing compute at existing LM architectures to learn from gigantic datasets.
I wonder if OpenAI's app platform is going to be more like Windows (most economic value goes to users and app partners) or like Facebook (most economic value goes to Facebook, app makers get screwed). I mean, Microsoft acted badly towards a lot of partners, but it was a true platform.
OpenAI doesn't have a moat unfortunately. One URL replacement away and you can switch most models in minutes. I have personally done this many times over the last year and a half.
It only takes labs producing better and better models, and the race to the bottom on token costs.
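For what it's worth, the "one URL replacement" is close to literal when the new provider exposes an OpenAI-compatible endpoint, which most now do. A sketch with placeholder values:

    # Identical client code; only the base URL, key, and model name change.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.other-provider.example/v1",  # was api.openai.com/v1
        api_key="OTHER_PROVIDER_KEY",
    )
    resp = client.chat.completions.create(
        model="other-model-name",
        messages=[{"role": "user", "content": "ping"}],
    )
    print(resp.choices[0].message.content)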
Honestly, how Google maintains a "moat" over other search engines could use a good business study. They've defied some pretty serious competitors without an obvious lock-in or anything.
(You can say default in various browsers and a phone OS and that's probably the main component but it's not clear changing that default would let Bing win or etc.)
The victory lap from Sam Altman and all the money being raised makes people forget the following:
- Open-source models are at most 12 months behind ChatGPT/Gemini;
- Gemini from Google is just as good, and also much cheaper - for both Google and the users, as they make their own TPUs;
- Coding: OpenAI has nothing like Sonnet 4.5.
They look like they invested billions to do research for competitors, which have already taken most of their lunch.
Now, with the Sora 2 app, they are just burning more and more cash so that people can watch those generated videos on TikTok and YouTube.
I find all the big talk hilarious. I hope I get proven wrong, but they seem to be getting wrecked by competitors.
I think platform monopolies are a thing of the past, where, when most of the world was asleep, a few silicon valley garage companies took over the green field and locked-in a huge customer base irreversibly, thus colonizing the world. The world is now much more awake and connected, ruling out any concentration of dominance. You can't have a repeat of colonial times.
I mean we transitioned from the product to the brand, with no reason to. Soon it will just flow back into a product. I don't care if a Samsung fridge has gemini GPT or openai GPT, as long as it works
"[Microsoft's] platform power didn’t just come from controlling applications on top of Windows, but the OEM ecosystem underneath. If OpenAI builds AI for everyone, then they are positioned to extract margin from companies up-and-down the stack — even Nvidia. "
Ah yes, the ChromeOS strategy. How'd that work out for Google?
Building a platform is good, a way to make quite a bit of money. It's worked really well for Google and Apple on phones (as Ben notes). But there's a reason it didn't happen for Google on PCs. Find it hard to believe it will for OpenAI. They don't (and can not) control the underlying hardware.
It worked great: the Web is the ChromeOS platform for all practical purposes, with Firefox at a meagre 3%, Safari only relevant thanks to iOS, and the whole Electron crap as "native apps".
I don't think that this is true in terms of being able to extract profits from the OEMs underneath, which is what the parent commenter quoted from the article. I don't think this refutes their response as much as it's a different point than the one they were responding to.
What's more, every single child here in Australia is learning on a school-issued Chromebook. To my kids, a spreadsheet is Google Sheets and a PowerPoint is Google Slides.
(We were joking about it just last week because my partner asked my eldest what was the PowerPoint he was working on, and he said, "What's PowerPoint?")
The question is to what degree that matters - if this power applies anywhere you can access ChatGPT (which is anything with a web browser), do you actually need to control the hardware?
Stratechery has always been shallow, but these overt advertisements are disturbing:
- Sneaking in how someone went from a Sora skeptic to a purported creator within a week.
- Calling the result the "future of creation".
- Titling the advertisement "It’s OpenAI’s World, We’re Just Living in It".
What they are doing here is pitching Sora to attention-deficit teenagers in order to have yet another way to make their favorite content creator's hair red. As if that didn't already exist.
My feeling is that the tech industry has been in "hot water" since at least 2018 and has been using private equity and bullshit hype-trains to garner interest in new technologies, in lieu of the public getting hip to the fact that mostly computers are spying on you, making you mentally ill, and stealing your data in exchange for "being able to participate." As pointed out by others, OpenAI and the rest of the AI ecosystem will need a financial miracle to stay afloat and offer their products at a competitive price.
There's so much of what "AI" is becoming that just seems like a massive psy-op to breathe one last breath of life into what is the skeleton of the old Silicon Valley. Innovation is possible, but if the future really is liberal authoritarianism/oligarchy, there's no room in the contrived market for "innovative products that greatly improve human life."
Openai is an artificially and ridiculously inflated balloon. It has nothing except initial market capture with hype. But yes they will keep whipping investors and keep burning money.
Whenever I see a post at #1 with no comments, I know it's been artificially pumped to the top. So many people upvoted, but not a single comment yet. Let there be some comments! lol.
OpenAI is a geopolitically important play besides being a tech startup so it gets pumped in funding and in PR, to show that we're still leading the world. But that premise is largely hallucinated.
I had a different takeaway - that a lot of folks on here read Ben Thompson and respect his work! It sounds like Ben is pretty bullish on OpenAI and maybe he's convinced folks through his work to agree with this take.
This is just a roundup article (though there are still some good nuggets about Sora vs Meta's app.) Looking forward to another HN discussion purely driven through article title vibes. With nothing mooring the discussion, you know it's going to be "good".
I do like stratechery but this is a roundup newsletter and not an article. If the HN thread gets engagement it will likely be based on the headline and not any of the articles in the roundup.
Ben Thompson bopped around doing engineer things at Apple, Microsoft, and Automattic, until more than a decade ago he started this subscription newsletter with business-of-tech kind of analysis. The success of his paid newsletter gave Substack the idea [0].
A fair chunk of the tech who’s-who seem to find his thinking useful.
I've seen articles from this blog over the years and every time there are a bunch of comments referring to the author on a first name basis. As far as I can tell he's a guy who posts a bunch of hot takes on finance/economics/markets/etc. and I guess is very well known to a core audience that might overestimate his name recognition to other people who might just be seeing something on the front page of Hacker News without recognizing the source.
There's nothing inherently wrong with comments referring to him by his first name, but I don't think I've ever seen a similar pattern with any other sources here, outside of maybe a few with much more universal name recognition. It's always struck me as a little bit odd, but not a big enough deal for me to go out of my way to comment about it before now.
Anecdotes are only a datapoint and nearly meaningless by themselves. For a great many others just the stratechery.com website alone is enough to get a view and an upvote.
Are you sure you properly cleaned up the blood stains of the call girls you murdered in your basement?
And what I mean by that is what evidence are either of us bringing that our claims are true?
Therefore I note that I have no evidence at all for my claim above, and seemingly you're in the same boat.
That said, the website above lists:
>I am not paid by any company for any opinion I post on Stratechery or in any public forum, including podcasts and Twitter.
>I do not hold individual stocks in any company I write about. I do hold various 401k and IRA accounts that invest in a wide-ranging basket of stocks, over which I have no control.
>I occasionally agree to speaking engagements for both public and private events, but not for companies I cover on Stratechery. Compensation will vary based on the nature of the customer and event, as well as the topic. I do not do any consulting at this time.
>I pay for all of my own travel and expenses when I attend company events.

So you tell me.
It's easy to write a cynical reaction, but the simplest explanation is usually true. In this case, it's just a very good headline.
I read Stratechery. Ben's articles are what he makes for public consumption. This weekly summary thing is a new roundup for subscribers, and just happens to be public, and if you're not a subscriber you can't follow the links. If Ben could choose something to be #1 on Hacker News it would likely be a full article with this headline, rather than a weekly summary post for subscribers.
OpenAI has been at the top of the app store for years now. A lot of people are interested in it. That trivially explains the upvotes without a conspiracy.
I don't like Ed Zitron automatic dismissal of everything AI and the constant profanities in his writing are getting old, and it's usually not very well structured, but that said... I like the perspective he has about the money involved.
https://www.wheresyoured.at/the-case-against-generative-ai/
OpenAI needs 1 trillion dollars in next four years just to keep existing. That's more than all currently available capital, together, from all private equity. If you take all private equity together, it still won't be enough for OpenAI in next four years.
It's just staggering amount of money that the world needs to bet on OpenAI.
I like reading Zitron’s output, but this remark stood out to me as him weighing in on a domain he’s basically clueless about.
Spread across 4 years, there’s way more than $1tn in available private capital. Berkshire Hathaway alone has 1/3 of that sitting in a cash pile right now. You don’t have to look at many balance sheets to see that the big tech companies are all also sitting on huge piles of cash.
I don’t personally believe OAI can raise that money. But the money exists.
Isn't that every domain?
The dude is a living embodiment of "overconfident and wrong". He picks a side, then cherry picks until he has it all lined up so that his side is Obviously Right and anything else is Obviously Wrong.
Then reality happens and he's proven wrong, but he never fucking learns.
He is following a very well worn path of writing histrionically about a bubble and making money off it. Point out the obvious that it's a bubble. Throw out a lot of figures supporting it. Get your emotional hooks in (prey on fears of job loss, hatred of corporate mismanagement/execs etc) and charge a subscription for it. I liked an article or two of his but he's basically meatGPTing The Information with cursing.
Everyone's got to make a living somehow, I guess.
> OpenAI needs 1 trillion dollars in next four years just to keep existing.
More like $115 billion based on your own link. The $1T is a guess based on promises made, far from "needs to spend that much just to exist".
Spending $30 billion a year doesn't sound that apocalyptic.
> Spending $30 billion a year doesn't sound that apocalyptic.
they have $1tn in future liabilities. https://www.ft.com/content/5f6f78af-aed9-43a5-8e31-2df7851ce...
they have income of about $4bn per half-year, with a loss of $13bn.
"signed about $1tn in deals" doesn't have to mean you have $1tn in future liabilities. It depends on what deals you've done.
Those numbers still fall way short of $1T and the $4.3b in rev is only the first half of 2025 - they're projecting $12-13b for the year.
I have my own reservations about the company, but there's a pretty real path toward huge revenues and profitability for them that seems pretty obvious to me.
OpenAI WANTS 1 trillion dollars.
They do not need it. Arguably no one needs it. I am at best luke-warm on LLMs, but how are any sane people rationalizing these requests? What is the opportunity cost of spending a billion or even a hundred billion dollars on compute instead of on climate tech or food security or healthcare or literally anything else?!
Well, it's the rich's money, dedicated to replacing as many people as possible in the Great war on labor.
I easily see the rich betting a trillion dollars especially if it's not their money and they start employing government funds in the name of a fictitious AI arms race.
They smell blood in the water. Reducing everyone to as minimum wage as possible.
Capital is already concentrated, aligned along monopolies and cartels, oligarchical control, and AI is the final key to total control to whatever degree they desire.
> Well, it's the rich's money, dedicated to replacing as many people as possible in the Great war on labor.
A lot of "the rich's" money is actually backed by the pension funds, 401k and similar investment vehicles.
That's the dirty secret in most of today's world. A lot of ultra-large companies would absolutely deserve getting broken up just for being way too powerful, the problem is any such attempts would send the stonk markets and, with them, people's pensions tanking.
Which is precisely why this arrangement exists. Think of it as chaining the galley slaves to the galley - if it sinks, then so do they, so their "interest" is to keep rowing.
But that is a short-term perspective. Long term, if we do nothing, we remain galley slaves.
> A lot of "the rich's" money is actually backed by the pension funds, 401k and similar investment vehicles.
Not really clear what you mean by this. But arguably all of the rich's money (and all money in general) is backed by labour/the ability to exchange that money for someone else's labour.
> Not really clear what you mean by this.
The amount of shares in many a large company that are held by passive or semi-passive investment vehicles.
It's not just high-net-worth individuals and nation-state entities (wealth funds) that pump money into YC, Meta, Apple, Nvidia, Microsoft and god knows what else; the bulk of the ownership is held indirectly by the wide masses via one sort or another of pension scheme.
Elon Musk doesn't play with his own money on xAI, he plays with the money of his investors, and so do all the other actors in the AI bubble.
I think you're greatly overestimating how much ownership the general public has. The bottom 90% of the population own like 12% of all equities, while the bottom 50% own only 1% [0].
[0] https://finance.yahoo.com/news/wealthiest-10-americans-own-9...
Not direct ownership. He means through pension schemes, indirect dependency.
This will only be true for a few decades. Nobody entering the workforce today will ever have any retirement funds. Most people who have been in it for a decade will never have any. Pensions aren't even a thing for most people. There will be a point where the common folk have very little remaining incentive not to burn it all down.
There will be at least one trillionaire within the next 12-24 months, and likely there will be multiple trillionaires within 5 years, the way wealth is consolidating at an accelerated rate. These amounts of investment don't seem to be fictional anymore.
I think it's pretty smart actually.
Altman has sort of linked the fate of OpenAI to that of Nvidia, AMD, Oracle, Microsoft, etc. with these huge deals/partnerships. We've seen the impact of these deals on stock prices before even a penny has changed hands.
Tracks with his reputation for power play and politics.
That certainly explains why Microsoft is so desperate to force everyone into using their AI whether they want to or not. I'm wondering if the deal will end Microsoft when OpenAI goes belly up.
It’s rapidly ending my deal with Microsoft.
I want Windows to play games; for computing I use Linux. But they keep foisting shite I don't want on me just to play games. AI and sodding OneDrive can piss off.
I've kept Windows around because it was less painful to game on than Linux, but Linux is better than ever and Windows is getting worse. At some point those lines are gonna cross, and for the first time in 30-odd years I'll not be running a single device with a Microsoft operating system on it.
And they deserve it.
Steam on (arch)linux works so well these days, I haven't needed windows for gaming in a while.
It's not just pushing it on the users. There's also a heavy-handed push to get teams to use more AI coding inside the company. I'll let you guess what that does to software quality, on average...
Regarding Linux gaming, the biggest problem there right now is all the multiplayer games with kernel anti-cheat. But I suspect that it'll be resolved eventually by Valve pushing for SteamOS support (although I doubt it'll ever work on any random Linux distro).
I'll just throw in support for gaming on Linux – it's pretty nice feeling these days! I still have the occasional (once every 5–8 months?) update cause a short-lived bug, but it's a very justifiable trade-off to avoid Windows these days.
Difficult to say for certain.
It depends on the extent to which the promise was peddled and whether MSFT can be trusted with the cash balance - investors will reflect that in the stock price in future if there is a bubble bursting event. If that scenario pans out, Apple will be sitting there very pretty given it has not spent any real money on pursuing LLMs.
> OpenAI needs 1 trillion dollars in next four years just to keep existing. That's more than all currently available capital, together, from all private equity.
Recent estimates I've seen of uninvested private equity capital are ~$1.5 trillion and total private equity $5+ trillion, with several hundred billion in new private equity funds raised annually, so this simply seems incorrect even assuming either only current "dry powder" is considered, or only new funds available over the next four years, much less both and/or the rest of private equity.
You could literally eliminate homelessness, world hunger, and fully pay health care costs in that same timeframe with much less cash.
Homelessness and world hunger are not bottlenecked by money. There's a stronger case for healthcare, but that's also substantially a political issue.
There's a stronger case for world hunger being bottlenecked than healthcare. World hunger is a logistics problem now, but no amount of money lets you print doctors.
You can't just throw money at the world hunger problem, it will end up in some warlord's coffers. Hunger still exists because it is politically useful to keep people hungry.
Actually funding education via wealth distribution is a great way of ”printing doctors”.
I don't know what wealth distribution means in this context, or why it's relevant at all, but food grows fast and doctors take like 20 years to grow no matter how much money you throw at it or where you get the money. And the context above was more specifically "fully pay health care costs" which is a comical fantasy the moment you try to actually define what that means, because the limit is not the price.
Changing the entire paradigm of medical care would be possible with enough money. There's no logical reason it takes 20 years to become a doctor. The fact that it does severely hampers both the quantity and quality of doctors. Becoming a doctor is much less about knowledge and intelligence than it is about attrition resistance. Loads of capable students disregard medical careers each year for more rapidly attainable positions. In many cases these are the MOST capable students because they recognize the problems with pursuing medical degrees.
Certainly the most skilled and advanced in the medical field will need significant schooling but there needs to be a major reform in healthcare training. One that produces more knowledgeable and skilled professionals and not a glut of questionably competent nurse practitioners.
simply, we have a scarcity of healthcare because we don’t invest the wealth we collectively produce in healthcare.
A logistics problem is just a money problem, throw enough cash at it and you can get anything moved to anywhere
With a little bit of lag time (school) we could have a metric fuckton of doctors. We have a metric fuckton of shitty lawyers. Doctors are artificially gated in the US
What’s the joke? “What do you call the person who graduated last in their class from med school? A doctor.”
Sure, I think that helps my point. You can't create "good" doctors out of thin air with money, just "more" doctors, and it takes forever.
Do we really need "good" doctors all of the time? I think the industry is overregulated to restrict the supply of healthcare.
It’s funny how some people think shitty lawyers are good and some people think good lawyers are shitty, huh?
Fully pay health care costs for _who_ for four years with 1 trillion dollars?
Doesn't America alone already spend 2 or 3 trillion a year on healthcare?
America spends $5T on healthcare per year.
That is twice as much per capita as our "peer" nations (UK, France, Canada, Germany, etc) and we have poorer outcomes.
> Doesn't America alone already spend 2 or 3 trillion a year on healthcare?
There's a huge difference between "paying for healthcare" and "paying a healthcare provider" here in the United States. Oftentimes the latter has 2 or 3 additional zeroes attached.
Sure, but Congress invariably pretends to say the former but means the latter. If you're asking for single-payer instead of privatized health insurance, they will sooner bankrupt the country than switch to sanity. Congress and its funding sources are now captured by privatized health insurance:
In 2023-24, Health came in #7 in total political donations and Lawyers & Lobbyists #8; the combined "Finance/Insur/RealEst" sector is #1. It would be useful to see "Insurance" broken out into health insurers vs. non-health (can anyone cite a more granular breakdown?). [https://www.opensecrets.org/elections-overview/sectors]
It's not single-payer vs. privatized insurance. Why is this myth so persistent in the US? There are many different options for public healthcare, of which single-payer is but one, and it's not even the most popular worldwide. Many European countries are not single-payer, including e.g. Germany.
Cash is just paper. You can't eat paper.
Time is money. the investment of time and resources spent on AI could be spent producing food or healing people.
Not that I think we should, but the "Cash is just paper" attitude makes no sense. If it's just paper, how is OpenAI training AI using just paper?
Money Can Be Exchanged For Goods and Services
You can't eat AI, either.
Yet another hidden benefit of a human workforce, that AI can't match.
You mean like, at least we can eat the human workforce?
Soylent Green is people!
It’s more recyclable than old gpus and out of date models
It seems to me that you're suggesting cannibalism, but maybe I'm reading you wrong?
It does seem like a rather modest proposal.
If that's true, all you have to do is convince other people that it's true and they can just vote for someone to deploy that money. Don't need to wait for someone else to do it.
The openai bet goes "within 4 years, we need to be able to tell the AI to make money for us." Are they on track?
There's another important bit there: 'we need to be able to tell the AI to make money for us and no-one else can compete with us on that'. I think both halves of that are questionable.
I mean, FWIW, they could probably make those folks happy by just spitting out a list of everything to short because of AI disruption on each new release lol
Not sure.
https://youtu.be/OYlQyPo-L4g
From the 7th minute: the truth about ChatGPT - 2% paid users.
Yeah, the conversion rate is poor and its demand is not price-inelastic; I would argue that for many individuals demand is very price-elastic.
In late 2021, Ed Zitron wrote (on Twitter) that the future of all work was "work from home" and that no one would ever work in an office again. I responded:
"In the past, most companies have had processes geared towards office work. Covid-19 has forced these companies to re-gear their processes to handle external workers. Now that the companies have invested in these changed processes, they are finding it easier to outsource work to Brazil or India. Here in New York City, I am seeing an uptick in outsourcing. The work that remains in the USA will likely continue to be office-based because the work that can be done 100% remotely will likely go over seas."
He responded:
"Pee pee poo poo aaaaaaaaaaa peeeeee peeeeee poop poop poop."
I don't know if he was taking drugs or what. I find his persona on Twitter to be baffling.
He was wryly communicating, "your argument was so stupid I don't even need to engage with it".
In my experience he has a horrible response to criticism. He's right on the AI stuff, but he responds to both legitimate and illegitimate feedback without much thoughtfulness, usually with a non-sequitur redirect or an ad hominem.
In his defense though, I expect 97% of feedback he gets is Sam Altman glazers, and he must be tired.
He's right on the AI stuff? How do you figure that? As far as I can tell, OpenAI is still operating. It sounds like you agree with him on the AI stuff, but he could be wrong, just like how he was wrong about remote work.
I'm actually more inclined to believe he's wrong if he gets so defensive about criticism. That tells me he's more focused on protecting his ego than actually uncovering the truth.
The fact that OpenAI is still operating and the argument that it is completely unsustainable are not two incompatible things.
Whether or not OpenAI is sustainable is a question that can only be answered in hindsight. If OpenAI is still around in 10 years, in the same sort of capacity, does OP become retroactively wrong?
My point is, you can agree that OpenAI is unsustainable, but it's not clear to me that is a decided fact, rather than an open conjecture. And if someone is making that decision from a place of ego, I have greater reason to believe that they didn't reason themselves into that position.
The fact that they are not currently even close to profitable, with ever-increasing costs and sobering scaling realities, is something you could consider. If you do believe they are sustainable, then you have to believe in (in my opinion, unlikely) scenarios where they somehow become sustainable, which is also a conjecture.
Seems a little unreasonable to point out "they are still around" as a refutation of the claim that they aren't sustainable when, in fact, the moment the investment-money faucet keeping them alive is turned off, they collapse, and very quickly.
I don't think he's right about everything. He is particularly weak at understanding underlying technology, as others have pointed out. But, perhaps by luck, he is right most of the time.
For example, he was the lone voice saying that, despite all the posturing and media manipulation by Altman, OpenAI's for-profit transformation would not work out, and certainly not by EOY 2025. He was also the lone voice saying that "productivity gains from AI" were not clearly attributable to such, and are likely make-believe. He was right on both.
Perhaps you have forgotten these claims, or the claims about OpenAI's revenue from "agents" this year, or that they were going to raise ChatGPT's price to $44 per month. Altman and the world have seemingly memory-holed these claims and moved on to even more fantastical ones.
He has never said that OpenAI would be bankrupt, his position (https://www.wheresyoured.at/to-serve-altman/, Jul 2024) is:
I am hypothesizing that for OpenAI to survive for longer than two years, it will have to (in no particular order):
- Successfully navigate a convoluted and onerous relationship with Microsoft, one that exists both as a lifeline and a direct source of competition.
- Raise more money than any startup has ever raised in history, and continue to do so at a pace totally unseen in the history of financing.
- Have a significant technological breakthrough such that it reduces the costs of building and operating GPT — or whatever model that succeeds it — by a factor of thousands of percent.
- Have such a significant technological breakthrough that GPT is able to take on entirely unseen new use cases, ones that are not currently possible or hypothesized as possible by any artificial intelligence researchers.
- Have these use cases be ones that are capable of both creating new jobs and entirely automating existing ones in such a way that it will validate the massive capital expenditures and infrastructural investment necessary to continue.
I ultimately believe that OpenAI in its current form is untenable. There is no path to profitability, the burn rate is too high, and generative AI as a technology requires too much energy for the power grid to sustain it, and training these models is equally untenable, both as a result of ongoing legal issues (as a result of theft) and the amount of training data necessary to develop them.
He is right about this too. They are doing #2 on this list.
Is he right on the AI stuff? Like, on the OpenAI company stuff he could be? I don't know? But on the technology? He really doesn't seem to know what he's talking about.
> But on the technology? He really doesn't seem to know what he's talking about.
That puts him roughly on-par with everyone who isn't Gerganov or Karpathy.
I generally don't agree with him on much; it's just nobody really talks about how much money those companies burn, and are expected to burn, in bigger perspective.
For me 10 billion, 100 billion and 1 trillion are all very abstract numbers - until you show how unreal 1 trillion is.
It helps if you divide by the world population. Say ~10bn people for this purpose, so that's roughly $1, $10 or $100 per head.
> "Pee pee poo poo aaaaaaaaaaa peeeeee peeeeee poop poop poop."
Attach your name to this publicly, and you're a clown. I don't know why the world started listening to clowns and taking them seriously, when their personas are crafted to be non-serious on purpose.
Like I said, clowns.
Hasn't he been saying that OpenAI is going to shut down every year for the last few years now? And that models are never going to get better than they were in 2022? I think he's pretty clearly a grifter chasing an audience that wants to be reassured that big tech is going away soon.
Ed Zitron may be many things but he is no grifter. He writes what he believes and believes what he says, and I basically agree with all of it. The chattering class in SV has been wildly wrong for years, and they'll really look foolish when the market crashes horribly and these companies collapse.
He's saying more than that the companies are going to collapse; he's making pronouncements about the underlying technology, which are claims that are much harder to defend. I'm not entirely sure he understands the distinction between the companies and the technology, though.
Lecun pretty much says the same things, as most experts actually. Only the execs and marketing teams keep yapping about AGI
I didn't say anything about AGI. I think AGI is very silly.
Respectfully... what?? Ed at this point is one of the most well-read people on Earth on this topic. Of course he knows the difference between the companies and the technology. He goes in depth both on why he thinks the companies are financially unviable AND on why he's unimpressed by LLMs technologically alllll the time.
Even as someone who is generally inclined to agree with his thesis, I find Ed Zitron's discussions as to why AI does not and will never work deeply unconvincing.
I don't think he fundamentally gets what's going on with AI on the tech level and how the Moore's-law-type improvements in compute have driven this and will keep doing so. He just kind of sees that LLM chatbots are not much good and assumes things will stay like that. If that were so, investing $1tn would make no sense. But it's not true.
Having a large audience does not imply being the most well informed or correct.
"Saying what a lot of people want to hear" is not a good proxy for truthfulness or correctness.
Does Ed produce anything other than a newsletter?
How do you want to define grifter? He shows up, makes a lot of big promises, talks a lot of shit, doesn't actually engage with any real criticism, gets paid for it, and then exits, stage left. He could be right, he could be wrong, but he leaves no room for debate. If all you want is someone to yell at you about how right your feelings on something are, I mean, hey, I have a therapist too. I don't ask her for financial advice though.
All that money and they’ve been trying to shy away from talk of AGI recently to lower expectations.
The AI is getting better at things like maths. I recently asked it about iterating the Burrows-Wheeler transform, and it appeared to really understand that. It's not super easy to reason about why it's reversible, etc. and I felt that it got it.
This is obviously not AGI, and we're very far from AGI as we can see by trying out these LLMs on things like stories or on analyzing text, or dealing with opposing arguments, but for programming and maths performance is at a level where it's actually useful.
My answer to this is - so what? What are the effects in the real economy?
There's probably a good 2-3 years left of runway for substantial progress before things really fall off a cliff. There has to be some semblance of (real) GDP growth.
I think it might be possible to automate development of short programs. I also think it might be possible to reduce the confusion, misunderstandings and cliches, so that the models can work for longer stretches.
But people probably expect to get the next version for what they pay in subscriptions now, so I can't imagine much more revenue growth for the model companies.
A lot of this is because there isn't a good definition of AGI. Look at Sama's recent interviews, that's how he deflects, along with the statement about the Turing test having ultimately been inconsequential. They have an internal definition of AGI that is "the model can perform the vast majority of economically viable tasks at the level of the highest humans" which isn't the story the investors are expecting when they hear AGI, so they're trying to stay mum to truthfully roadmap AGI while not blatantly lying to capital.
Platforms usually deliver significant value that is hard to replicate. OpenAI doesn't have any such thing. It's trivially replaced, and there are many competitors already. OpenAI is ahead of the curve, but they don't seem to have any particular way to do sticky capture. Migrating to a different LLM is an afternoon's work at most, not nearly the complexity of porting an app between OSes or creating a robust hardware driver model.
Yeah, I'm extremely loyal to ChatGPT Plus and Codex, but it's because OpenAI has a native Mac app that I like, and Codex is included with Plus and has served me well enough not to look at Claude. I like GPT-5 quite a bit as a user. I'll concede none of these are small things - they've had my money for 2+ years - but they're not gigantic advantages either.
At an enterprise level however, in the current workload I am dealing with, I can't get GPT-5 with high thinking to yield acceptable results; Gemini 2.5 Pro is crushing it in my tests.
Things are changing fast, and OpenAI seems to be the most dynamic player in terms of productization, but I'm still failing to see the moat.
Interesting, so it's not just me who's finding Gemini 2.5 Pro to be the quiet leader? Deep Research also seems to be better in it, and the limits for that are far more generous to boot (20 per day on Gemini!).
Makes one wonder if Google will eventually sweep this field.
OpenAI is dynamic for consumer apps. Anthropic seems much better at productizing AI that you can actually build with, while also catering to enterprise in their own productized offerings
Also Claude Opus 4.1 runs multidimensional circles around GPT-5 in my view. The only better use case for GPT-5 is when you need it to scrape the web for data
Too bad Anthropic fucks over their 20x Max subscribers. Their whole Opus usage-limit bullshit in favor of Sonnet 4.5, which is objectively worse regardless of whatever they say. They clearly want to save money and take our $200.
https://github.com/anthropics/claude-code/issues/8449
I hope they get destroyed in the AI race.
FWIW I don't disagree
Actually, Anthropic has a Mac app I can use, and I cannot use the OpenAI one.
Fun fact, it’s an Electron app
The irony of Claude being pretty good at writing Swift code.
I think their moat is ironically the platform.
Like Apple vs every other computer maker.
Their platform mostly just works. Their API playground and docs are like 100x better than whatever garbage Anthropic has.
I think their UX is way better, and I can have long AF conversations without lagging. I can even change models in the same conversation. Basic shit Anthropic can't figure out (they can fleece their 20x Max subscribers tho).
I think if they get the AI human interface right, they will have their iPhone moment and 10x.
The platform is just a search tool and extended processing modes. That is easily replicated elsewhere.
The only moat they have is the fact that you still need a GPU of some kind to reliably run even a tiny LLM. But the gap between what absolutely needs a server farm and what can be run on a store-bought gaming computer is closing quickly. You can already run mixture-of-experts models on gaming rigs with a high degree of usability compared to just one year ago. And that tech continues to be pushed further and further. It's only a matter of time until we see ChatGPT levels of access running on a quad-core laptop, totally offline. And once that happens, all such a system would need is the correct tooling on top of the AI-model "brain" to make it usable.
And beyond that, what if you could have an AI model on an ASIC-style add-in card? Where's their moat then?
Don't forget about Macs. Top models can run even fairly large LLMs (although time to first token is... not great). But even midline can certainly run LLMs in the ballpark of ~30B params very well, which is where things start to get interesting IMO.
> although time to first token is... not great
The token cache works on CUDA too. So yeah, the initial loadout sucks, but almost everything from then on is solid.
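For what it's worth, the client side of local models is already trivial: tools like ollama and llama.cpp's server expose an OpenAI-compatible endpoint, so the regular OpenAI Python SDK can talk to a model running entirely on your own machine. A minimal sketch - the port is ollama's usual default and the model tag is a placeholder, so treat both as assumptions:

    from openai import OpenAI

    # Point the standard SDK at a local, OpenAI-compatible server.
    client = OpenAI(
        base_url="http://localhost:11434/v1",  # ollama's usual endpoint (assumption)
        api_key="unused",                      # local servers typically ignore this
    )

    resp = client.chat.completions.create(
        model="llama3.1:8b",  # placeholder: whichever model you've pulled locally
        messages=[{"role": "user", "content": "Explain KV caching in two sentences."}],
    )
    print(resp.choices[0].message.content)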
I'd say their DX is way better compared to the competition. The playground, tooling, and web experience with documentation is far superior to the competition (even tho it has issues sometimes). User experience is key in a sea of the same shit.
I mean you could say the same about a Macbook or iPhone/iPad, but for the actual people (not HN users lol) out there, they vastly prefer Apple to HP, Dell, etc. Due to their wallet some can't though.
There are literally thousands of other laptops that do the "same thing" (computer for doing shit).
Those who say otherwise are usually broke and know deep down that given proper purchasing power, they too would prefer to use a $3k Macbook Pro than some POS Dell.
Same with android.
Everyone knows it is cope based on PP, it is just in poor taste to actually call it like it is (idc)
I don’t think they will get any moat. I might be wrong on this of course, but I don’t see a killer feature for these stochastic parrots that can’t be easily replicated
> …stochastic parrots…
What a wonderful phrase, mind if I borrow it?
It's commonly used at this point, for what it's worth
Never mind the phrase. If your parrot can compete in international math and programming competitions at the gold-medal level and make entire subreddits fail the Turing test, I would like to borrow your parrot.
you don't have to get permission
plus, they didn't come up with it
"the term stochastic parrot is a metaphor, introduced by Emily M. Bender and colleagues in a 2021 paper, that frames large language models as systems that statistically mimic text without real understanding"
https://dl.acm.org/doi/10.1145/3442188.3445922
Much of the value I get from ChatGPT is based on its memory system, which has a good profile on me. For example, if I ask it to find me something to eat at X restaurant, it will already know my allergies, dietary preferences, weight goals, medications, other foods I like, etc., and suggest an item based on that, all without me explicitly telling it.
Moving from ChatGPT to Claude I would lose a lot of this valuable history.
Or you could type up "My allergies are X, Y, Z, I have the following preferences..." etc. and put it into whatever chat bot you like. Obviously this is a bit of a pain, but it probably constrains significantly how much of a premium ChatGPT can charge. You might not bother if ChatGPT is time and a half more expensive, but what if it's 3x as much as the competition? What if there's a free alternative that's just as good?
I pay for the $200/mo ChatGPT, to me that's insanely cheap compared to the value it provides. Better pricing is highly unlikely to make me switch. If a competitor were able to sustain a lead in model intelligence/capability then I'd consider it.
Even simpler, ask ChatGPT "How would you describe my dietary preferences and restrictions?", maybe throw in some personality guidance of "You are training my executive assistant".
> Or you could type up "My allergies are X, Y, Z, I have the following preferences..."
Why use a car when you can just walk?
tbh most americans could do with walking more
Have you tried asking ChatGPT to output everything it knows about you to a format easily digestible by LLMs? Does the memory stick between model switches?
I suspect it does RAG on my previous convos, so this wouldn't work fully, but to some extent yes that would help.
I like to ask each llm what it knows about me. I feel like I could take that output and feed it into another llm and the new one would be up to speed quickly....
Still not much of a moat, especially with data portability.
In the EU/UK you might not have rights to the memories right now, but you've rights to the inputs that created those memories in the first place.
Wouldn't be too hard to export your chat history into a different AI automatically.
You could literally ask it to write out everything it knows about you in a form usable in a CLAUDE.md file, put that in the directory where you're using e.g. Claude Code, and boom.
No moat.
The play OpenAI is making has nothing to do with the underlying models any more. They release good ones but it doesn't really matter, they're going for being the place people land when they open a web browser. That is incredibly sticky and not easily replaced. Being the company that replaces the phrase "oh just Google it" is worth half the money in the world, and I think they're well on their way to getting there.
But that also makes them the product itself and requires standalone profitability, which is not at all like the Windows analogy being presented.
I agree, in general. I don't know what the world looks like in 10 years if all of the weird attempts at fitting an LLM as a replacement to what we have currently actually works, however. Facebook is probably the closest analogy, where there's plenty of room to grow, but at some point you're going to have the OS builders shut down your efforts unless you want to build your own OS.
I do say ChatGPT when referring to LLMs/genAI in general, but I do hate saying it as it is nowhere near as nice to say as "google". I will switch immediately once something better comes up.
“Chat” is already in the youth lexicon, originally referring to an amorphous blob of live stream viewers. It now kind of refers to a non-existent but omnipresent viewer of your life.
I think ChatGPT might turn into just “chat” as the next evolution of the term.
A lot of the points also apply to Google as a search engine, yet we're still seeing Google owning about 90% of search engine usage.
Google was always terrified by the fact that they had no moat in Search. This was clear from interviews and articles at the time. That's why they decided to roll up the ad market instead, and once they had the advertisers Search became a self-fulfilling monopoly.
Google Search would be moatless if not for the AdMob purchase.
Google search requires a lot of resources to crawl, keep indices up to date, and provide user level significance (i.e. different results to different users). Then couple it with their other services (Google Maps, etc).
The competitors have not come even close to Google's level of quality.
With LLMs, it's different. Gemini/Claude are as good, for the most part. And users don't care that much either - most use the standard free ChatGPT, which likely is worse than many competitors' paid models.
It's not just the quality. There are people who complain that they perceive it to have significantly decreased over time. Yet there are many other factors, including its presence as the default search engine in so many setups out there.
Google is a product not a platform. How many companies are still using Google search for their intranets? (yes yes I'm old, don't remind me)
Meanwhile Bing search is actually a platform, and is what then powers other "search engines" (DuckDuckGo, Kagi, etc...)
> Migrating to a different LLM is an afternoon's work at most, not nearly the complexity of porting an app between OS' or creating a robust hardware driver model.
I question this. Each vendor's offering has its own peculiar prompt quirks, does it not? Theoretically, switching RDBMS vendors (say Oracle to Ingres) was also "an afternoon's work" but it never happened. The minutiae are sticky with these sorts of almost-but-not 'universal' interfaces.
> I question this. Each vendor's offering has its own peculiar prompt quirks, does it not? Theoretically, switching RDBMS vendors (say Oracle to Postgres) was also "an afternoon's work" but it never happened. The minutiae are sticky with these sorts of almost-but-not 'universal' interfaces.
The bigger problem is that there was never a way to move data between Oracle and Postgres in pure data form (i.e. point pgsql at your Oracle folder and it "just works"). Migration is always a pain, and thus there is a substantial degree of stickiness, due to the cost of moving databases both in terms of risk and effort.
In contrast, vendors [1] are literally offering third-party LLMs (such as Claude) in addition to their own and offering one-click switching. This means users can try and, if they desire, switch with little friction.
[1] https://blog.jetbrains.com/ai/2025/09/introducing-claude-age...
Moreover, it's trivial to run several LLMs side-by-side for a while and measure the success of each, then migrate to the one that performs the best. And you can even migrate in-progress chats since all the context is passed on each call anyway.
The current LLMs support data export/import inherently because the interface is pure text.
All one needs to do is say something like "tell me all of the personalization factors you have on me" and then just copy and paste that into the next LLM with "here's stuff you should know about how to personalize output for me".
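As a rough sketch of what that migration amounts to in practice: since chat APIs are stateless and the full message history is resent on every call, "moving" a chat is just replaying the same messages list against a different endpoint. The second base_url and both model names below are made-up placeholders, not real services:

    from openai import OpenAI

    # An in-progress conversation; chat APIs are stateless, so this list
    # is the entire state worth migrating.
    history = [
        {"role": "user", "content": "Help me plan a week of workouts."},
        {"role": "assistant", "content": "Sure - what equipment do you have?"},
        {"role": "user", "content": "Just dumbbells. Pick up from there."},
    ]

    vendor_a = OpenAI()  # reads OPENAI_API_KEY from the environment
    vendor_b = OpenAI(base_url="https://api.other-vendor.example/v1",  # hypothetical
                      api_key="...")

    # The identical history works on either endpoint; only client and model change.
    for client, model in [(vendor_a, "gpt-4o"), (vendor_b, "some-other-model")]:
        reply = client.chat.completions.create(model=model, messages=history)
        print(model, "->", reply.choices[0].message.content[:80])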
> The minutia is sticky with these sort of almost-but-not 'universal' interfaces.
True, but that's not really applicable here, since LLMs themselves are not stable, and are certainly not stable within a vendor's own product line. Imagine if every time Oracle shipped a new version it was significantly behaviorally inconsistent with the previous one. Upgrading within a vendor and switching vendors end up being the same task. So you quickly solidify on either
1) never upgrading, although with these being cloud services that's not necessarily feasible, and since LLMs are far from a local maximum in quality, that'd quickly leave your stack obsolete,
or
2) being forced to be robust, which makes it easy to migrate to other vendors
It's reasonable to question it, but there's a fun Chinese paper (https://arxiv.org/abs/2507.15855) where they attempt to create a system of prompts that can be used with commercial LLMs to solve hard maths problems.
It turns out that they can use the same prompt system for all of them, with no changes and still solve 5/6 IMO problems. I think this is possibly iffy, since people might have updated the models etc., but it's pretty obvious that this kind of thing is how OpenAI are doing their multi-stage thinking thing for maths internally.
Consequently if prompt systems are this transferable for these hard problems, why wouldn't both they and individual prompts, be highly transferable in general?
No, changing RDBMSes is a totally different challenge. They provide predictable, reproducible output that is very sensitive and brittle to the slightest change in the input, while with an LLM you expect the exact opposite.
Changing the LLM backend in some IDE is as complicated as selecting an option in a dropdown for those who integrate such a feature. There are other scenarios where it might be a bit more complicated to transition, of course, but that's it.
If you are doing things “properly” then you have good evals that let you test the behaviour of different LLMs and see if they work for your problem.
The vendors have all standardised on OpenAI's API surface - you can use OpenAI's SDK with a number of providers - so switching is very easy. There are also quite a few services that offer this as a service.
The real test is whether a different LLM works - hence the need for evals to check.
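To make the evals point concrete, a toy harness might look like this: a fixed set of prompts with simple pass/fail checks, run against any OpenAI-compatible endpoint before committing to a switch. The URL, model name, and checks here are all illustrative assumptions:

    from openai import OpenAI

    # Tiny test set: (prompt, pass/fail check) pairs.
    CASES = [
        ("What is 17 * 23? Answer with just the number.", lambda out: "391" in out),
        ("Spell 'stratechery' backwards.", lambda out: "yrehcetarts" in out.lower()),
    ]

    def pass_rate(client: OpenAI, model: str) -> float:
        passed = 0
        for prompt, check in CASES:
            out = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
            ).choices[0].message.content
            passed += bool(check(out))
        return passed / len(CASES)

    # Hypothetical candidate vendor, reachable through the same SDK.
    candidate = OpenAI(base_url="https://candidate.example/v1", api_key="...")
    print(f"pass rate: {pass_rate(candidate, 'candidate-model'):.0%}")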
> Each vendor's offering has its own peculiar prompt quirks, does it not?
That applies even when you switch models within a vendor, though.
If you took any of current top 3 models from me, I would not miss the deleted one in the least. I run almost every non-trivial prompt through multiple models anyway.
OpenAI's moat is the data you give it. It's the same reason so many people have Gmail accounts, even though we all know Google sucks. It's not that you like Gmail better than any other email service; it's that migrating to another service is a pain in the ass.
OpenAI will either use customer data to enshittify to a level never seen before, or they will go insolvent.
I actually prefer Google Gemini. 2.5 is free and works awesome for what I need AI for. It just made the resume I uploaded immeasurably better last night.
https://www.reddit.com/r/Bard/comments/1mkj4zi/chatgpt_pro_g...
That's why OpenAI is hard pivoting towards products now. The big one IMO is Instant Checkout and Agentic Commerce Protocol. ChatGPT is going to turn into a product recommendation engine and OpenAI is going to get a slice of every purchase, which is going to disrupt the current impression/click adtech model and potentially Google and Amazon themselves. It's an open question how hard they can do this without enshittifying ChatGPT itself, but we'll see.
The notion that people would be willing to let LLMs spend money is, frankly, insane given the hallucination problems that still don't have any clear solution in sight.
They only invented modern AI, can easily be replaced!!
They did nothing of the sort. Google invented modern AI: https://en.wikipedia.org/wiki/Attention_Is_All_You_Need
OpenAI turned that research into a product before Google, which is a huge failure on Google's part, but that's orthogonal to the invention of what powers modern models.
You could say the same thing about Facebook...
You mean the social network that is currently dying the same death as countless other platforms before it, just on a larger scale?
Maybe some are too young to remember the great migrations from/to MySpace, MSN, ICQ, AIM, Skype, local alternatives like StudiVZ, ..., where people used to keep in contact with friends. Facebook was just the latest and largest platform where people kept in touch and expressed themselves in some way. People adding each other on Facebook before others to keep in touch hasn't been a thing for 5 years. It's Instagram, Discord, and WhatsApp nowadays depending on your social circle (two of which Meta wisely bought because they saw the writing on the wall).
If I open Facebook nowadays, then out of the ~130 people I used to keep in touch with through that platform, pretty much nobody is still doing anything on there. The only sign of life is some people showing as online because they have the Facebook app installed to use direct messaging.
No, people easily migrate between these platforms. All it takes is put your new handles (discord ID/phone number/etc) as a sticky so people know where to find you. And especially children will always use whichever platform their parents don't.
Small caveat: This is a German perspective. I don't doubt there's some countries where Facebook is still doing well.
>No, people easily migrate between these platforms.
No? It's rare for these platforms to survive, the one that was closest to challenging Facebook was kneecapped by the US government.
The time between the founding of MySpace and Facebook was a little over a year. Instagram has been the largest social network for close to a decade now, and it's not like others haven't been trying. META is up 600% over the last 3 years. I really question your definition of the word "dying".
Last time I checked 99% of normies are still happily scrolling FB and Instagram
FB in particular is becoming deader and deader every month, and it's only a matter of time before something else comes along that sucks their attention away.
The beauty of Meta is that they can just fake engagement metrics. Not defending them, just knowing that they do
People say you're wrong but I agree. Facebook is nothing more than a rent-seeking middle man between you and your friends/family. Instead of just talking to your family normally, like over the phone, now you have to talk to them in between mountains of Sponsored Content and AI-generated propaganda. It provides no productive value to the world except for making the world more annoying and making people more isolated from one another.
When you realize this, you realize that a lot of other supposedly valuable tech companies operate in the exact same way. Worrying that our parents' retirement depends heavily on their valuations!
> Worrying that our parents' retirement depends heavily on their valuations!
Maybe you should short the stock to hedge your parents' retirement :)
FB has network effects (your friends/neighbors/etc are there). When I use ChatGPT, I don't care whether anyone else uses it as well.
I mean, technically no, FB has a network effect of the other people on it either being the people you want to talk to, or the people you want to advertise to.
Not at all, because it is not easy to replicate and move its social graph
That's a precise, incisive observation: OpenAI is trivially replaceable (any AI provider is), as the evidence demonstrates. It has no claim to operating software that's specifically distinct from others'.
That's why they're working on consumer hardware
And business partnerships, government partnerships, and AI regulation (to establish laws that keep competitors out). Sam knows they have no moat and will try every avenue to establish one.
As Peter Thiel says: “competition is for losers”
They understand that, and that's why they're making it sticky by adding in-app purchasing, advertising, and integrations. It's also why they hired OGs from IG/FB. They are building the moat and hoping that first-to-market is going to work out.
I do not believe that advertising and purchasing are at the top of the list of things that make software sticky.
They are trying to become/replace Google. They are first to market for an entirely new query paradigm, and in-app purchases and advertising are just one aspect of a platform.
The current AI wave has been compared (by sama) to electricity and sometimes transistors. AI is just going to be in all the products. The trillion dollar question is: Do you care what kind of electricity you are using? So, will you care what kind of AI you are using.
In the last few interviews with him that I have listened to, he has said that what he wants is "your AI" that knows you, everywhere that you are. So his game is "switching costs" based on your own data. So he's making a device, etc. etc.
Switching costs are a terrific moat in many circumstances and require a 10x product (or whatever) to get you to cross over. Claude Code was easily a 5x product for me, but I do think GPT-5 is doing a better job on just "remembering personal details", and that's compelling.
I do not think that apps inside chatgpt matters to me at all and I think it will go the way of all the other "super app" ambitions openai has.
If you take that at face value, shouldn't every investor just back Google or Apple instead? Like, OpenAI is, at best, months ahead when it comes to model quality. But for them to get integrated into the lives of people in the way all their competitors are would take years. If the way in which AI becomes this ubiquitous trillion-dollar thing involves making it hyper-personalized, is there any way in which OpenAI is particularly well positioned to achieve that?
They haven't been ahead in model quality for some time now.
> I do think GPT5 is doing a better job on just "remembering personal details" and it's compelling.
Today I asked GPT-5 to extract a transcript of all my messages in the conversation, and it hallucinated messages from a previous conversation, maybe leaked through the memory system. It cannot tell the difference. Indiscriminate learning and use of the memory system is a risk.
I mean, don't you think this is more analogous to the introduction of computing than electricity? If you told people in 1960 that there would be supercomputers inside people's refrigerators, do you think they would have believed you?
And most people actually don't care what CPU they have in their laptop (enthusiasts still do, which I think continues to match the analogy); they care more about the OS (ChatGPT app vs Gemini etc).
>The current AI wave has been compared (by sama) to electricity and sometimes transistors. AI is just going to be in all the products.
Sorry, but you have to be beyond thick to believe any of this.
Ya absolutely wild take.
can the world and tech survive fruitfully without AI? yes. can the world and tech survive without electricity and transistors - not really. the modern world would come crashing down if transistors and electricity disappeared overnight. if AI disappeared over night the world might just be a better place.
That might be because AI is new so we don't rely on it. The world got on for a long time before electrical engineering came about.
I don't understand the link between the title of the article, and its content. If I summarize their three points:
1. OpenAI's corporate strategy is to become a monopoly.
2. OpenAI is investing in infrastructure because they think they'll have lots of users in the future.
3. Making videos on Sora is fun, and people are gonna post more of these.
How does that substantiate "we live in OpenAI's world"? Am I missing something?
I am not sure about this. They definitely created a brand-new service and data flows that didn't exist before, and they have the majority of the mind share, but it's already commoditized. The next two to three years will show how the chips fall. I can see that it's tough or almost impossible for Apple to get a share in this, but Google is right there to take the consumer side. For enterprise, again, we have to wait and see how GCP and AWS do.
The value is not in the LLM but in vertical integration and providing value. OpenAI has identified this and is doing its vertical integration in a hurry. If the revenue sustains, it will be because of that. For the consumer space, again, Nvidia is better positioned with their chips and SoCs, but OpenAI is not a sure thing yet. By that I don't mean they are going to fall apart; they will continue to make a large amount of money, but whether it's their world or not is still up in the air.
> The value is not in the llm but vertical integration and providing value.
The irony being that LLMs are particularly good at writing the web frontend code, lowering the technical barrier to entry for competitors.
I'm on the verge of unsubscribing from Stratechery. The last month has been a bunch of fawning over Meta, YouTube, and constant talk about and fawning over OpenAI and whatever latest models are coming out. It's kind of tiring and boring. I swear I heard them talk about some YouTube influencers event like five times across their different shows and across time. Like, I do not care at all.
As a longtime loyal subscriber to Stratechery... I kinda agree. But as the other commenters did point out, this does reflect how the market seems to feel about OpenAI, at least. (Meta - I'm less sure of; Thompson does fawn over Meta quite a bit, I personally think it's too much and seems to not fully reflect reality, but man do they really cane it when you see their usage numbers, so maybe he's right.)
I did think his GPT-5 commentary was good, insofar as picking up the nuance of why it's actually better than the immediate reactions I, at least, saw in the headlines.
Where I do agree with you is how Stratechery's getting a little oversaturated. I'm happy Ben Thompson is building a mini media empire, but I might have liked it more when it was just a simple newsletter that I got in my inbox, rather than pods, YouTube videos, and expanding to include other tech/news doyens. Maybe I'm just a tech media hipster lol.
Is this fawning or just reflecting reality? I’m generally in the “LLMs kinda suck camp” and I read the headline and thought “yep 100%”. OpenAI is able to raise and deploy insane amounts of capital on a whim. Regardless of that being a good or bad thing it’s still true.
Did you listen to the recent interview with Ben Bajarin? I thought that interview alone justified the subscription. Curious as to whether anyone else felt the same.
Fantastic interview. Hard to get much info from inside the world Bajarin was speaking of. Notable how everyone is saying they can't get capacity for the tokens they're trying to serve.
The question is whether the company can add more value to the models than someone else. I still see a lot of gaps in the ecosystem, e.g. evaluation/testing systems, integrations beyond the chat interface, and active control to get good results, not to mention other types of models that deal with 3D worlds or temporal data. There is an opportunity for an outsider to come and grab parts of the pie whilst the biggest are competing.
They do look like they're trying to grab the market with tooling, but if you can use their tools (OSS) and switch the models, then where is the moat?
I doubt it. They made $4.1 billion in revenue and had a loss of $13 billion last quarter on their AI offering. They are not profitable.
Can someone enlighten us as to how so many platforms (ChatGPT, Gemini, Claude, etc. etc.) all sprang up so quickly? How did the engineering teams immediately know how to go about doing this kind of tech with LLMs and DNNs and whatnot?
By 2020/2021 with the release of GPT-3, the trajectory of a lot of the most obvious product directions had already become clear. It was mainly a matter of models becoming capable enough to unlock them.
E.g. here's a forecast of 2021 to 2026 from 2021, over a year before ChatGPT was released. It hits a lot of the product beats we've come to see as we move into late 2025.
https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-...
(The author of this is one of the authors of AI 2027: https://ai-2027.com/)
Or e.g. AI agents (this is a doc from about six months before ChatGPT was released: https://www.lesswrong.com/posts/kpPnReyBC54KESiSn/optimality...)
This is the paper that kicked off the current generation: https://proceedings.neurips.cc/paper_files/paper/2017/file/3....
That was 2017. And of course Google & UofT were working on it for many years before the paper was published.
It's not much different from other ML; it's pretty much just on a bigger and more expensive scale. So once someone figured out the rough recipe (NN architecture, a ludicrous scale of weights and data, reinforcement-learning tuning), it's not hard for other experts in the field to replicate, so long as they have the resources. DeepSeek was pretty much a side project, for example.
I imagine it wasn't as immediate as it might look on the outside. If they all were working independently on similar ideas for a while, one of them launching their product might have caused the others to scramble to get theirs out as well to avoid missing the train.
I think it's also worth pointing out that the polish on these products was not actually there on day one. I remember the first week or so after ChatGPT's initial launch being full of stories and screenshots of people fairly easily getting around some of the intended limitations with silly methods, like asking it to write a play where the dialogue covers the topic it refused to talk about directly, or asking it to give examples of what types of things it's not allowed to say in response to certain questions. My point isn't that there wasn't a lot of technical knowledge that went into the initial launch, but that it's a bit of an oversimplification to view things as a binary where people didn't know how to do it before, but then they did.
Was it that quickly? GPT 3 is where I would kind of put the start of this and that was in 2020, they had to work on the technology for quite a while before it got like this. Everyone else has been able to follow their progress and see what works.
GPT 2 didn't have a chat interface but had made a splash in some circles (think spam-adjacent).
Edit: mixed up my dates claiming DALL E came out before GPT 3
I remember subredditsimulator was big (based on GPT and GPT2), but there was some confusion because no one could get ahold of these models
All of the products you mention already had research teams (in the case of ChatGPT and Claude, teams that actually predated most of their engineers). So knowing how to build small language models was always in their wheelhouse. Scaling up to larger LLMs required a few algorithmic advancements, but for the most part it was a question of sourcing more data and more compute. The remarkable part of transformers is their scaling laws, which let us get much better models without having to invent a new architecture.
Once you have the weights, actually running these models is easy. The code is not complicated (see the sketch below); the models are just huge in terms of memory requirements.
Deep learning has now been around for a long time. Running these models is well understood.
Obviously, running them at scale for multiple users is more difficult.
The actual front ends are not complicated - as is evidenced by the number of open source equivalents.
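As a concrete illustration of how little code inference takes, here is a minimal sketch using the Hugging Face transformers library. The checkpoint name is just a small openly available example; any open-weights causal LM loads the same way.

    # Minimal local inference with an open-weights model via Hugging Face
    # transformers. "gpt2" is just a small example checkpoint.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "gpt2"
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    inputs = tok("Deep learning has now been around for", return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=40, do_sample=True)
    print(tok.decode(out[0], skip_special_tokens=True))

The memory, not the code, is the hard part: a 70B-parameter model at 16 bits per weight needs roughly 140 GB just to hold the parameters.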
It's the intersection of plentiful cloud compute with existing LMs. As I understand it, right now it's really just throwing compute at existing LMs to learn from gigantic datasets.
"so quickly" meaning over the last 3 years? (ChatGPT was launched in Nov 2022)
I wonder if OpenAI's app platform is going to be more like Windows (most economic value goes to the user and app partners) or like Facebook (most economic value goes to Facebook; app makers get screwed). I mean, Microsoft acted badly toward a lot of partners, but it was a true platform.
Or like Apple: a mix of both, except they take the best ideas and bring them in house (and occasionally use proprietary APIs to do things others can't).
OpenAI doesn't have a moat, unfortunately. You're one URL replacement away from switching most models in minutes (see the sketch below); I have personally done this many times over the last year and a half.
All it takes is other labs producing better and better models, plus the race to the bottom on token costs.
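Concretely, here is roughly what that one-URL replacement looks like with the official openai Python client, since many competing providers expose OpenAI-compatible endpoints. The URL, key, and model name below are placeholders, not real values.

    # Switching providers is often just a different base_url and model name,
    # because many vendors expose OpenAI-compatible APIs. Placeholders only.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.example-provider.com/v1",  # hypothetical endpoint
        api_key="YOUR_KEY",
    )
    resp = client.chat.completions.create(
        model="some-open-model",  # placeholder model id
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(resp.choices[0].message.content)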
> OpenAI doesn't have a moat, unfortunately.
The moat is the branding: for most people, AI means ChatGPT.
You could say the same thing about a search engine. If they keep an edge on brand or quality, they can find ways of monetizing that attention.
Honestly, how Google maintains a "moat" over other search engines could use a good business study. They've defied some pretty serious competitors there without any obvious lock-in.
(You could point to being the default in various browsers and a phone OS, and that's probably the main component, but it's not clear that changing that default would let Bing et al. win.)
The victory lap from Sam Altman and all the money being raised makes people forget the following:
- Open-source LLMs are at most 12 months behind ChatGPT/Gemini;
- Gemini from Google is just as good and much cheaper, for both Google and its users, since they make their own TPUs;
- Coding: OpenAI has nothing like Sonnet 4.5.
They look like they invested billions to do research for competitors, who have already eaten most of their lunch.
Now with the Sora 2 app, they are just burning more and more cash so people can watch those generated videos on TikTok and YouTube.
I find all the big talk hilarious. I hope I get proven wrong, but they seem to be getting wrecked by competitors.
I think platform monopolies are a thing of the past. Back when most of the world was asleep, a few Silicon Valley garage companies took over the green field and locked in a huge customer base irreversibly, thus colonizing the world. The world is now much more awake and connected, ruling out any such concentration of dominance. You can't have a repeat of colonial times.
I mean, we transitioned from the product to the brand, with no reason to. Soon it will just flow back into a product. I don't care whether a Samsung fridge has a Gemini model or an OpenAI model in it, as long as it works.
"[Microsoft's] platform power didn’t just come from controlling applications on top of Windows, but the OEM ecosystem underneath. If OpenAI builds AI for everyone, then they are positioned to extract margin from companies up-and-down the stack — even Nvidia. "
Ah yes, the ChromeOS strategy. How'd that work out for Google?
Building a platform is good, a way to make quite a bit of money. It's worked really well for Google and Apple on phones (as Ben notes). But there's a reason it didn't happen for Google on PCs, and I find it hard to believe it will for OpenAI. They don't (and cannot) control the underlying hardware.
It worked great: the web is the ChromeOS platform for all practical purposes, with Firefox at a meagre 3%, Safari only relevant thanks to iOS, and the whole Electron crap serving as "native apps".
I don't think that's true in terms of being able to extract profits from the OEMs underneath, which is what the parent commenter quoted from the article. This doesn't refute their point so much as make a different one than the one they were responding to.
What's more, every single child here in Australia is learning on a school-issued Chromebook. To my kids, a spreadsheet is Google Sheets and a PowerPoint is Google Slides.
(We were joking about it just last week because my partner asked my eldest what the PowerPoint he was working on was, and he said, "What's PowerPoint?")
> every single child here in Australia is learning on a school-issued Chromebook
Many? Most? Possibly, but absolutely not every single one.
The counterargument, that it's likely going to be Google, appears in the latest Acquired.fm podcast.
They’re the only AI lab with their own silicon.
Edit: they didn’t say “likely,” they just marveled at the talent + data + TPU + global data centers + not needing external investment.
If I recall correctly, their theory was that google could deliver tokens cheaper than anyone else.
Amazon.
Every single major AI player at the moment is designing chips, with some getting close to their first tape-out. Including OpenAI.
The question is to what degree that matters - if this power applies anywhere you can access ChatGPT (which is anything with a web browser), do you actually need to control the hardware?
> Ah yes, the ChromeOS strategy. How'd that work out for Google?
They dominate the EDU market?
If OpenAI is only able to dominate a market of that size, I don't think it's their world that we're all living in.
May I interject with my pathetic attempt at a classification of intelligences à la Linnaeus:
Pre-AI :: Examples :: Timeline
---------------------------------------------------------------------
NHI Non-human Intelligence :: e.g. dolphins, apes, crows etc. :: millions of years
HI Human Intelligence :: e.g. Einstein, Trump, Confucius, Homer :: thousands of years
Post-AI
---------------------------------------------------------------------
AI Artificial Intelligence :: ChatGPT, Gemini, many others :: countable months
AGI Artificial General Intelligence :: not there yet :: zero
AIApHI AI Assisted/Approved/Audited Human Intelligence :: see AI :: countable months
HIApAI Human Intelligence Assisted/Approved/Audited AI :: the future? :: zero
I have mentioned no individuals here to avoid legal action. My point on AI is ... wait and see. Chill.
Stratechery has always been shallow, but these overt advertisements are disturbing:
- Sneaking in how someone went from a Sora skeptic to a purported creator within a week.
- Calling the result the "future of creation".
- Titling the advertisement "It’s OpenAI’s World, We’re Just Living in It".
What they are doing here is pitching Sora to attention-deficit teenagers as yet another way to turn their favorite content creator's hair red. As if that didn't already exist.
My feeling is that the tech industry has been in "hot water" since at least 2018 and has been using private equity and bullshit hype-trains to garner interest in new technologies, even as the public gets hip to the fact that mostly computers are spying on you, making you mentally ill, and stealing your data in exchange for "being able to participate." As pointed out by others, OpenAI and the rest of the AI ecosystem will need a financial miracle to stay afloat and offer their products at a competitive price.
So much of what "AI" is becoming just seems like a massive psy-op to breathe one last breath of life into the skeleton of the old Silicon Valley. Innovation is possible, but if the future really is liberal authoritarianism/oligarchy, there's no room in the contrived market for "innovative products that greatly improve human life."
There's hope in: https://worrydream.com/
OpenAI is an artificially and ridiculously inflated balloon. It has nothing except initial market capture via hype. But yes, they will keep whipping up investors and burning money.
They also have the leading model.
In which area?
I'd assume active user count
hype
Which model? Sonnet 4.5 DESTROYS anything OpenAI has for coding.
5-pro and 5-codex-high
Whenever I see a post at #1 with no comments, I know it's been artificially pumped to the top. So many people upvoted, but not a single comment yet. Let there be some comments! lol.
OpenAI is a geopolitically important play, besides being a tech startup, so it gets pumped in funding and in PR, to show that we're still leading the world. But that premise is largely hallucinated.
I had a different takeaway: a lot of folks on here read Ben Thompson and respect his work. It sounds like Ben is pretty bullish on OpenAI, and maybe his work has convinced folks to agree with that take.
This is just a roundup article (though there are still some good nuggets about Sora vs Meta's app.) Looking forward to another HN discussion purely driven through article title vibes. With nothing mooring the discussion, you know it's going to be "good".
I do like stratechery but this is a roundup newsletter and not an article. If the HN thread gets engagement it will likely be based on the headline and not any of the articles in the roundup.
yeah something tells me Ben does not care about being #1 on hn
I don't even know who "Ben" is...
Ben Thompson bopped around doing engineering things at Apple, Microsoft, and Automattic until, more than a decade ago, he started this subscription newsletter of business-of-tech analysis. The success of his paid newsletter gave Substack the idea [0].
A fair chunk of the tech who's who seem to find his thinking useful.
[0] https://www.vox.com/2017/10/16/16480782/substack-subscriptio...
I've seen articles from this blog over the years, and every time there are a bunch of comments referring to the author on a first-name basis. As far as I can tell, he's a guy who posts a bunch of hot takes on finance/economics/markets/etc., and I guess he's very well known to a core audience that might overestimate his name recognition among people who are just seeing something on the front page of Hacker News without recognizing the source.
There's nothing inherently wrong with comments referring to him by his first name, but I don't think I've ever seen a similar pattern with any other source here, outside of maybe a few with much more universal name recognition. It's always struck me as a little odd, but not a big enough deal to go out of my way to comment on before now.
Anecdotes are single datapoints and nearly meaningless by themselves. For a great many others, the stratechery.com domain alone is enough to get a view and an upvote.
Are you sure Ben is not getting paid $7,000 per post by OpenAI?
Are you sure you properly cleaned up the blood stains of the call girls you murdered in your basement?
And what I mean by that is: what evidence is either of us bringing that our claims are true?
For my part, I have no evidence at all for my claim above, and it seems you're in the same boat.
That said, the website above lists:
>I am not paid by any company for any opinion I post on Stratechery or in any public forum, including podcasts and Twitter.
>I do not hold individual stocks in any company I write about. I do hold various 401k and IRA accounts that invest in a wide-ranging basket of stocks, over which I have no control.
>I occasionally agree to speaking engagements for both public and private events, but not for companies I cover on Stratechery. Compensation will vary based on the nature of the customer and event, as well as the topic. I do not do any consulting at this time.
>I pay for all of my own travel and expenses when I attend company events.
So you tell me.
> Are you sure Ben is not getting paid $7,000 per post by OpenAI?
It's easy to write a cynical reaction, but the simplest explanation is usually true. In this case, it's just a very good headline.
I read Stratechery. Ben's articles are what he makes for public consumption. This weekly summary is a new roundup for subscribers that just happens to be public; if you're not a subscriber, you can't follow the links. If Ben could choose something to be #1 on Hacker News, it would likely be a full article with this headline rather than a weekly summary post for subscribers.
OpenAI has been at the top of the app store for years now. A lot of people are interested in it. That trivially explains the upvotes without a conspiracy.
Kudos to the headline writer on this one.
Sorry, not buying it.
I mean, it usually means dang thought the piece deserved a conversation. Which, in this case, I think it does.