I am becoming more and more convinced that AI can't be used to make something better than what could have been built before AI.
You never needed 1000s of engineers to build software anyway; Winamp & VLC were built by fewer than four people. You only needed 1000s of people because the executive vision is always to add more useless junk into each product. And now with AI that might be even harder to avoid. This would mean there would be 1000s of do-everything websites in the future in the best case, or billions of apps that each do one thing terribly in the worst case.
The percentage of good, well-planned, consistent, and coherent software is going to approach zero in both cases.
I’m finding that the code LLMs produce is just average. Not great, not terrible. Which makes sense; the model is basically a complex representation of the average of its training data, right? If I want what I consider ‘good code’ I have to steer it.
So I wouldn’t use LLMs to produce significant chunks of code for something I care about. And publishing vibe coded projects under my own GitHub user feels like it devalues my own work, so for now I’m just not publishing vibe coded projects. Maybe I will eventually, under a ‘pen name.’
We've gone from "it's glorified auto-complete" to "the quality of working, end-to-end features is average" in just ~2 years.
I think it goes without saying that they will be writing "good code" in short order.
I also wonder how much of this "I don't trust them yet" viewpoint is coming from people who are using agents the least.
Is it rare that AI one-shots code that I would be willing to raise as a PR with my name on it? Yes, extremely so (almost never).
Can I write a more-specified prompt that improves the AI's output? Also yes. And the amount of time/effort I spend iterating on a prompt, to shape the feature I want, is decreasing as I learn to use the tools better.
I think the term prompt-engineering became loaded to mean "folks who can write very good one-shot prompts". But that's a silly way of thinking about it imo. Any feature with moderate complexity involves discovery. "Prompt iteration" is more descriptive/accurate imo.
> They did run out of human-authored training data (depending on who you ask), in 2024/2025. And they still improve.
It seemed to me that improvements due to training (i.e. the model) in 2025 were marginal. The biggest gains were in structuring how the conversation with the LLM goes.
Think about it from a resource (calorie) expenditure standpoint.
Are you expending more resources on writing the prompts vs. just doing it without them? That's the real question.
If you are expending more, which is what Simon is hinting at - are you really better off? I'd argue not, given that this can't be sustained for hours on end. Yet the expectation from management might be that you should be able to sustain this for 8 hours.
So again, are you better off? Not in the slightest.
Many things in life are counter-intuitive and not so simple.
P.S. You're not getting paid more for increasing productivity if you are still expected to work 8 hrs a day... lmao. Thankfully I'm not a SWE.
Simon: "I'm frequently finding myself with work on two or three projects running parallel. I can get so much done, but after just an hour or two my mental energy for the day feels almost entirely depleted."
You're a time waster; stop posting and creating noise.
People often describe the models as averaging their training data, but even for base models predicting the most likely next token this is imprecise and even misleading, because what is most likely is conditional on the input as well as what has been generated so far. So a strange input will produce a strange output — hardly an average or a reversion to the mean.
On top of that, the models people use have been heavily shaped by reinforcement learning, which rewards something quite different from the most likely next token. So I don’t think it’s clarifying to say “the model is basically a complex representation of the average of its training data.”
The average thing points to the real phenomenon of underspecified inputs leading to generic outputs, but modern agentic coding tools don’t have this problem the way the chat UIs did because they can take arbitrary input from the codebase.
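To make the conditionality concrete, here is a minimal greedy-decoding sketch (Python, with a hypothetical model.next_token_distribution interface - not any real library's API):

    # Hypothetical interface, for illustration only.
    def generate(model, prompt_tokens, max_new_tokens=50):
        tokens = list(prompt_tokens)
        for _ in range(max_new_tokens):
            # The next-token distribution is computed from *all* of `tokens`
            # (prompt plus everything generated so far), so the output tracks
            # the input rather than reverting to some global average.
            probs = model.next_token_distribution(tokens)
            tokens.append(max(range(len(probs)), key=probs.__getitem__))
        return tokens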
And Unix was mainly made by two people. It's astounding, as I get older, that even tech managers don't know "The Mythical Man-Month" and how software production generally scales.
> Sorry, but 99.999% of developers could not have built Unix. Or Winamp.
> Managers are crossing their fingers that devs they hire are no worse than average, and average isn't very good.
The problem is that that's the same skill required to safely use AI tools. You need to essentially audit its output, ensure that you have a sensible and consistent design (either supplied as input or created by the AI itself), and 'refine' the prompts as needed.
AI does not make poor engineers produce better code. It does make poor engineers produce better-looking code, which is incredibly dangerous. But ultimately, considering the amount of code written by average engineers out there, it actually makes perfect sense for AI to be an average engineer — after all, that's the bulk of what it was trained on! Luckily, there's some selection effect there since good work propagates more, but that's a limited bias at best.
Agree completely. Where I'm optimistic about AI is that it can also help identify poorly written code (even its own code), and it can help rewrite it to be better quality. Average developers can't do this part.
From what I've found, it's very easy to ask the AI to look at code and suggest how to make it maintainable (look for SRP violations, etc.), and it will go to work. Which means that we can already build this "quality" into the initial output via agent workflows, as in the sketch below.
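A rough sketch of that kind of two-pass workflow, assuming a hypothetical llm(prompt) -> str helper (not any particular tool's API):

    def generate_with_review(llm, task):
        draft = llm("Implement the following:\n" + task)
        # Ask the model to review its own output for maintainability issues.
        critique = llm(
            "Review this code for maintainability problems "
            "(SRP violations, unclear naming, missing error handling):\n" + draft
        )
        # Fold the critique back in, so the quality pass is part of
        # producing the initial output rather than a later cleanup step.
        return llm(
            "Rewrite the code to address this review.\n\nReview:\n"
            + critique + "\n\nCode:\n" + draft
        )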
What you're pointing at is the trade-off between concentration of understanding vs. fragmented understanding across more people.
The former is always preferred in the context of product development but poses a key-person risk. Apple in its current form is a representation of this - Steve did enough work to keep the company going for a decade after his death. Now it's sort of lost on where to go next. But on the flip side, look at its market cap today vs. 2000.
> Winamp & VLC were built by fewer than four people. You only needed 1000s of people because the executive vision is always to add more useless junk into each product.
Many types of software have essential complexity and minimal features that still require hundreds/thousands of software engineers. Having just 4 people is simply not enough man-hours to build the capabilities customers desire.
Think of complex software like 3D materials modeling and simulation, or logistics software like factory and warehouse planning. Even the Linux kernel and userspace have thousands of contributors, and the baseline features (drivers, sandbox, GUI, etc.) that users want from a modern operating system cannot be done by a 4-person team.
All that said, there are lots of great projects with tiny teams. SQLite is 3 people. Foobar2000 is one person. ShareX, the screen capture tool, is I think 1 developer in Turkey.
I will use my right to disagree. Maybe not 4 people everywhere, but if you have a product with a well-thought-out feature set, you create those features and then you really don't need 1000s of people to just keep it alive and add features one by one.
I am - of course - talking about the perfect approach, with everyone focused on not f**ing it up ;)
Big projects can still be highly modular, and projects built by "1000s of devs" typically are. If your desired change can be described clearly without needing too much unrelated context, the LLM will probably get it right.
Do not insult P-II w/256 MB of RAM. That thing used to run this demo[0] at full speed without even getting overwhelmed.
Excepting some very well-maintained software, some of the mundane things we do today waste so many resources it makes me sad.
Heck, the memory use of my IDE peaks at VSCode's initial memory consumption, and I'd argue that my IDE will draw circles around VSCode while sipping coffee and compiling code.
> for no reason other than our own arrogance and apathy.
I'll add greed and apparent cost reduction to this list. People think they win because they reduce time to market, but that time penalty is delegated to users. Developers gain a couple of hours once; we lose the same time every couple of days while waiting on our computers.
I once read a comment by a developer which can be paraphrased as "I won't implement this. It'll take 8 hours. That's too much". I wanted to plant my face into my keyboard full-force, not kidding.
Heck, I tuned/optimized an algorithm for two weeks, which resulted in 2x-3x speedups and enormous memory savings.
We should understand that we don't own the whole machine while running our code.
Haha, I know. I just worded it like that to mean that even a P-II can do many things if the software is written well enough.
You're welcome. That demo single-handedly threw me down the high-performance computing path. I thought, if making things this efficient is possible, all the code I'll be writing will be as optimized as the constraints allow.
Another amazing demo is Elevated [1]. I show its video to someone and ask about the binary and resource sizes. When they hear the real value, they generally can't believe it!
Much of the new datacenter capacity is for GPU-based training or inference, which are highly optimized already. But there's plenty of scope for optimizing other, more general workloads with some help from AI. DRAM has become far more expensive and a lot of DRAM use on the server is just plain waste that can be optimized away. Same for high-performance SSDs.
I do feel things in general are more "snappy" at the OS level, but once you get into apps (local or web), things don't feel much better than 30 years ago.
The two big exceptions for me are video and gaming.
I wonder how people who work in CAD, media editing, or other "heavy" workloads feel.
> I wonder how people who work in CAD, media editing, or other "heavy" workloads feel.
I would assume (generally speaking) that CAD and video editing applications are carefully designed for efficiency because it's an important differentiator between different applications in the same class.
In my experience, these applications are some of the most exciting to use, because I feel like I'm actually able to leverage the power of my hardware.
IMO the real issue is bloated desktop apps like Slack, Discord, Spotify, or Claude's TUI, which consume massive amounts of resources without doing much beyond displaying text or streaming audio files.
I get this comment every time I say this, but there are levels to this. What you think is bad today could be considered artisan when things become worse than today.
I mean, you've never used the desktop version of Deltek Maconomy, have you? Somehow I can tell.
My point here is not to roast Deltek, although that's certainly fun (and 100% deserved), but to point out that the bar for how bad software can be and still, somehow, be commercially viable is already so low it basically intersects the Earth's centre of gravity.
The internet has always been a machine that allows for the ever-accelerated publishing of complete garbage of all varieties, but it's also meant that in absolute terms more good stuff also gets published.
The problem is one of volume; I suspect the percentages of good versus crap don't change that much.
So we'll need better tools to search and filter but, again, I suspect AI can help here too.
Underrated comment. The reason that everyone complains about code all the time is because most code is bad, and it’s written by humans. I think this can only be a step up. Nailing validation is the trick now.
Validation was always the hard part, outside of truly novel areas - think edges of computer science (which generally happen very rarely and only need to be explored once or a handful of times).
Validation was always the hard part because great validation requires great design. You can't validate garbage.
No, wealth gets more concentrated. Fewer people on the team will be able to afford a comfortable lifestyle and save for retirement. More will edge infinitesimally closer to "barely scraping by".
“You never needed 1000s of engineers to build software anyway”
What is the point of even mentioning this? We live in reality. In reality, there are countless companies with thousands of engineers making each piece of software. Outside of reality, yes you can talk about a million hypothetical situations. Cherry picking rare examples like Winamp does nothing but provide an example of an exception, which yes, also exists in the real world.
Completely agree. There is a common misunderstanding/misconception in product development, that more features = better product.
I’ve never seen a product/project manager asking themselves: does this feature add any value? Should we remove it?
In agile methodologies we measure the output of the developers. But we don’t care whether that output carries any meaningful value to the end user/business.
It’s more about operational resilience and serving customers than product development. If you run an early-WhatsApp-like organisation, just 1 person leaving can create awful problems. The same goes for serving customers: big clients especially need all kinds of reports and resources that a skeleton organisation cannot provide.
Yeah, that’s a misconception too based on my experience.
I’ve seen many people (even myself) thinking the same: if I quit/something happens to me, there will be no one who knows how this works/how to do this. It turned out the businesses always survived. There was a tiny inconvenience, but other than that: nothing. There is always someone willing to pick up/take over the task in no time.
I mean I agree with you, in theory. But that’s not what I’ve seen in practice.
> I’ve never seen a product/project manager asking themselves: does this feature add any value? Should we remove it?
To be fair, it is a hard question to contend with. It is easier to keep users who don't know what they're missing happier than users who lost something they now know they want. Even fixing bugs can sometimes upset users who have come to depend on the bug as a feature.
> In agile methodologies we measure the output of the developers.
No we don't. "Individuals and interactions over processes and tools". You are bound to notice a developer with poor output as you interact with them, but explicitly measure them you will not. Remember, agile is all about removing managers from the picture. Without managers, who is even going to do the measuring?
There are quite a few pre-agile methodologies out there that try to prepare a development team to operate without managers. It is possible you will find measurement in there - measuring to ensure that the people can handle working without managers. Even agile itself recognizes in the 12 principles that it requires a team of special people to be able to handle agile.
I didn’t mean the Agile Manifesto prescribes individual productivity measurement. I meant what often happens in “agile in the wild”: we end up tracking throughput proxies (story points completed, velocity, number of tickets closed, burndown charts) and treating that as success, while the harder question (“did this deliver user/business value?”) is weakly measured or ignored.
Also, agile isn’t really “removing managers from the picture” so much as shifting management from command-and-control to enabling constraints, coaching, and removing impediments. Even in Scrum, you still have roles with accountability, and teams still need some form of prioritization and product decision-making (otherwise you just get activity without direction).
So yeah: agile ideals don’t say “measure dev output.” But many implementations incentivize output/throughput, and that’s the misconception I was pointing at.
> we end up tracking throughput proxies (story points completed, velocity, number of tickets closed, burndown charts) and treating that as success
That sounds more like scrum or something in that wheelhouse, which isn't agile, but what I earlier called pre-agile. They are associated with agile as they are intended to be used as a temporary transitionary tool. One day up and telling your developers "Good news, developers. We fired all the managers. Go nuts!" obviously would be a recipe for disaster. An organization wanting to adopt agile needs to slowly work into it and prove that the people involved can handle it. Not everyone can.
> Also, agile isn’t really “removing managers from the picture” so much as shifting management from command-and-control to enabling constraints, coaching, and removing impediments.
That's the pre-agile step. You don't get rid of managers immediately, you put them to work stepping in when necessary and helping developers learn how to manage without a guiding hand. "Business people" remain involved in agile. Perhaps you were thinking of that instead? Under agile they aren't managers, though, they are partners who work together with the developers.
Wait, surely adding 10x more agents to my project will speed up development, improve the end product, and make me more productive by that same proportion, right?
I will task a few of them to write a perfectly detailed spec up front, break up the project into actionable chunks, and then manage the other workers into producing, reviewing, and deploying the code. Agents can communicate and cooperate now, and hallucination is a solved problem. What could go wrong?
Meanwhile, I can cook or watch a movie, and occasionally steer them in the right direction. Now I can finally focus on the big picture, instead of getting bogged down by minutiae. My work is so valuable that no AI could ever replace me.
I just built a programming language in a couple of hours, complete with an interpreter, using Claude Code. I know nothing about designing and implementing programming languages: https://github.com/m-o/MoonShot.
Yes, my point is that it was possible to build it before AI, and with much less effort than people imagine. People in college build an interpreter in less than a couple of weeks anyway, and that probably has more utility.
Consider two scenarios:
1) I try to build an interpreter. I go and read some books, understand the process, build it in 2 weeks. Result: I have a toy interpreter. I understand said toy interpreter. I learnt how to do it, learnt ideas in the field, and applied my knowledge practically.
2) I try to build an interpreter. I go and ask Claude to do it. It spits out something which works. Result: I have a black-box interpreter. I don't understand said interpreter. I didn't build any skills in building it. Took me less than an hour.
The toy interpreter is useless in both scenarios, but scenario 1 pays for the 2-week effort, while scenario 2 is a vanity project.
Yes, but you can combine the approaches. I.e., if you know what you are working on, you can make it much faster. Or you build something and learn from it.
I think there will be a lot of slop and a lot of useful stuff.
But also, what I did was just an experiment to see if it is possible. I don't think it is usable, nor do I have any plans to make it into a new language. And it was done in less than 3 hours of total time.
So, for example, if you want to try new language features - say, total immutability, or nullability as a type - you can build a small language and try to write code in it. Instead of spending weeks, you can do it in hours.
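As a toy illustration of how small such an experiment can be (invented for this comment - not MoonShot's actual design), "nullability as a type" fits in a dozen lines of checker:

    # Toy rule: "int" may not hold null, "int?" may. Illustration only.
    def check(declared, value):
        nullable = declared.endswith("?")
        base = declared[:-1] if nullable else declared
        if value is None:
            return nullable              # only T? may hold null
        return type(value).__name__ == base

    assert check("int?", None)           # nullable int accepts null
    assert check("int", 3)
    assert not check("int", None)        # non-nullable int rejects null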
Also, I didn't read that book; if there are similarities in the language, it must be an accident, or Claude steering me toward what it knows. And if it's in the interpreter design, then it's probably from that book.
And they told us that they don't memorise the material.
Previously, I'd have an idea, sit on it for a while. In most cases, conclude it's not a good idea worth investing in. If I decided to invest, I'd think of a proper strategy to approach it.
With agentic development, I have an idea, waste a few hours chasing it, then switch to other work, often abandoning the thing entirely.
I still need to figure out how to deal with that, for now I just time box these sessions.
But I feel I'm trading thinking time for execution time, and understanding time for testing time. I'm not yet convinced I like those tradeoffs.
Edit: Just a clarification: I currently work in two modes, depending on the project. In some, I use agentic development. In most, I still do it "old school". That's what makes the side effects I'm noticing so surprising. Agentic development pulls me down rabbit holes and makes me lose the plot and focus. Traditional development doesn't; its side effects apparently keep me focused and in control.
That's weird, I'm the opposite. Previously I would start coding immediately, because writing the code helps me figure out the what and the how, and because I'd end up with modular/reusable bits that will be helpful later anyway.
Now I sit on an idea for a long time, writing documentation/specs/requirements because I know that the code generation side of things is automated and effortlessly follows from exhaustive requirements.
>With agentic development, I have an idea, waste a few hours chasing it, then switch to other work, often abandoning the thing entirely.
How much of this is because you don't trust the result?
I've found this same pattern in myself, and I think the lack of faith that the output is worth asking others to believe in is why it's a throwaway for me. Just yesterday someone mentioned a project underway in a meeting that I had ostensibly solved six months ago, but I didn't even demo it because I didn't have any real confidence in it.
I do find that's changing for myself. I actually did demo something last week that I 'orchestrated into existence' with these tools. In part because the goal of the demo was to share a vision of a target state rather than the product itself. But also because I'm much more confident in the output. In part because the tools are better, but also because I've started to take a more active role in understanding how it works.
Even if the LLMs come to a standstill in their ability to generate code, I think the practice of software development with them will continue to mature to a point where many (including myself) will start to have more confidence in the products.
It might depend how you word it. I specifically asked about a CalDAV and a Firefox Sync solution, explaining how difficulty-averse I was, and I have been berated both times.
Sitting on an idea doesn’t have to mean literally sitting and staring at the ceiling, thinking about it. It means you have an idea and let it stew for a while, your mind coming back to it on its own while you’re taking a shower, doing the dishes, going for a walk… The idea which never comes back is the one you abandon and would’ve been a waste of time to pursue. The idea which continues to be interesting and popping into your head is the worthwhile one.
When you jump straight into execution because it’s easy to do so, you lose the distinction.
Sitting on an idea doesn't necessarily mean being inactive. You can think at the same time as doing something else. "Shower thoughts" are often born of that process.
I know, and letting an agent/LLM "think" about some ideas does not waste your time either. Yes, it "wastes" energy, and you need to read and think about the results after; we don't have neural interfaces to computers, so the inner thinking feedback loop will always be faster. But I keep thinking the GP comment was unfair: you can just have your idea in the background to check whether it is good or not exactly the same, and after that time "discuss" it with an LLM, or ask it to implement the idea because you think it's solid enough. It's a false dichotomy.
If you do not know what you want to build, how to ask the AI for what you want, or what the correct requirements are, then it becomes a waste of time and money.
More importantly, as the problem becomes more complex, it matters more whether you know where the AI falls short.
Case study: security researchers were having a great time finding vulnerabilities and security holes in OpenClaw.
The OpenClaw creators had a very limited background in security even though the AI built OpenClaw almost entirely, and the authors had to collaborate with the security experts to secure the whole project.
That describes the majority of cases actually worth working on as a programmer in the traditional sense of the word. You build something to begin to discover the correct requirements and to picture the real problem domain in question.
> You build something to begin to discover the correct requirements and to picture the real problem domain in question.
You lose that if the agent builds it for you, though; there is no iteration cycle for you, only for the agent. This means you are missing out on a bunch of learning that you would previously have gotten from actually writing something.
Prior to agents, more than once a week I'd be writing some code and pick up some new trick/technique/similar. I expect if you feel that there are no programming skills and tricks left for you to learn, then sure, you aren't missing out on anything.
OTOH, I've been doing this a long time, and I still learn new things (for implementation, not design) on each new non-trivial project.
> You build something to begin to discover the correct requirements and to picture the real problem domain in question.
That's one way; another way is to keep the idea in your head (both actively and "in the background") for days/weeks, and then eventually you sit down and write a document, and you'll get 99% of the requirements down perfectly. Then implementation can start.
Personally I prefer this hammock-style development and to me it seems better at building software that makes sense and solves real problems. Meanwhile "build something to discover" usually is best when you're working with people who need to be able to see something to believe there is progress, but the results are often worse and less well-thought out.
It's better to first have a solid, concrete idea of the entire system you want to build written down - one that has ironed out the limitations, requirements, and constraints - before jumping into the code implementation or getting the agent to write it for you.
The build-something-to-discover approach is not for building robust solutions in the long run. Starting with the code first without knowing what it is you are solving, or getting the AI to generate something half-working that breaks easily and then changing it yet again so it becomes even more complicated, just wastes more time and tokens.
Someone still has to read the code and understand why the project was built on a horrible foundation and needs to know how to untangle the AI vibe-coded mess.
With agentic development, I've finally considered doing open source work for no reason aside from wanting a utility to exist.
Before, I would narrow things down to only the most potentially economically viable, and laugh at ideas guys who were married to the one single idea of their life as if it was their only chance, seemingly not realizing they were competing with people who get multiple ideas a day.
Back to the aforementioned epiphany: it reminds me of the world of Star Trek, where everything was developed for its curiosity and utility instead of money.
TBH, I have found AI addictive. You use it for the first time, and it's incredible. You get a nice kick of dopamine. This kick of dopamine decreases with every win you get. What once felt incredible is just another prompt today.
Those things don't excite you any more.
Plus, the fact that you no longer exercise your brain at work.
Plus, the constant feeling of FOMO.
I think the randomness is addicting. While writing a prompt often doesn't result in the perfect outcome, it very well could. Pressing the "prompt lever" (again and again), waiting for the result to show up looks a lot like gambling.
What felt incredible was getting the setup and prompting right and then producing reasonable working code at 50x human speed. And you're right, that doesn't excite after a while.
But I've found my way to what, for me, is a more durable and substantial source of satisfaction, if not excitement, and that is value. Excuse the cliche, but it's true.
My life has been filled with little utilities that I've been meaning to put together for years but never found the time. My homelab is full of various little applications that I use, that are backed up and managed properly. My home automation does more than it ever did, and my cabin in the countryside is monitored and adaptive to conditions to a whole new degree of sophistication. I have scripts and workflows to deal with a fairly significant administrative load - filing and accounting is largely automated, and I have a decent approximation of an always up-to-date accountant and lawyer on hand. Paper letters and PDFs are processed like it's nothing.
Does all the code that was written at machine-speed to achieve these things thrill me? No, that's the new normal. Is the fact that I'm clawing back time, making my Earthly affairs orderly in a whole new way, and breathing software-life into my surroundings without any cloud or big-tech encroachment thrilling? Yes, sometimes - but more importantly it's satisfying and calming.
As far as using my brain - I devote as much of my cognitive energy to these things as I ever have, but now with far more to show for it. As the agents work for me, I try to learn and validate everything they do, and I'm the one stitching it all into a big cohesive picture. Like directing a film. And this is a new feeling.
Many programmers became programmers because they found the idea of programming fascinating, probably in their middle school days. And then they went on to be professionals. Then they burned out and, if they were lucky, transitioned to management.
Of course not everyone is like that, but you can't say it isn't common, right?
If what once felt incredible is just another prompt today, what is incredible today? Addictive personalities usually double down to get a bigger dopamine kick - that's why they stay addicted. So I don't think you truly found it addictive in the conventional sense of the term. Also, exercising the brain has been optional in software for quite a while, tbh.
Yeah if you want to keep your edge you have to find other ways to work your programming brain.
But as far as output - we all have different reasons for enjoying software development but for me it's more making something useful and less in the coding itself. AI makes the fun parts more fun and the less fun parts almost invisible (at small scale).
We'll all have to wrestle with this going forward.
This is not a technology problem. AI intensifies work because management turns every efficiency gain into higher output quotas. The solution is labor organization, not better software.
Labor organization yes! I don't quite know how to achieve it. I also worry that my desire to become a manager is in direct conflict with my desire to contribute to labor organization.
On a separate note, I have the intensification problem in my personal work as well. I sit down to study, but, first, let me just ask Claude to do some research in the background... Oh, and how is my Cursor doing on the dashboard? Ah, right, studying... Oh, Claude is done...
> I also worry that my desire to become a manager is in direct conflict with my desire to contribute to labor organization.
Nah. A labor organization of any meaningful size needs management. And a tech union would have to be humongous given how easy it is to move the work around.
Why does management turn efficiency gains into higher output quotas? Because competition forced them to. This is a feature of free market capitalism. A single participant can't decide to keep output as is when efficiency improves, because it will lose the competition to those that increase output. Labor organization could be the solution if it was global. Labor organizations that are based in a single country will just lead to work moving to countries without labor organization.
This problem of efficiency gains never translating to more free time is a problem deep in our economic system. If we want a fix, we need to change the whole system.
The driving force is not management or even developers; it's always the end users. They get to do more with less, thanks to the growing output. This is something to be celebrated, not a problem to be "solved" with artificial quotas.
No, absolutely not. I would be for labor organization even if it had no impact on this matter, primarily because I don't see why it would be a negative.
You can be libertarian and a capitalist and still be pro-union. At the end of the day, a Collective Bargaining Agreement is just a private contract between two parties. It can be a way to raise wages without government setting a minimum price for labor.
While I'd agree most of its proponents (like myself) also favor other left-wing policies, I'm just saying it doesn't need to be.
This argument has been used against every new technology since forever.
And the initial gut reaction is to resist by organizing labor.
Companies that succumb to organized labor get locked into that speed of operating. New companies get created that adopt 'the new thing' and blow old companies away.
> And the initial gut reaction is to resist by organizing labor.
Yeah, like tech workers have similar rights to union workers. We literally have 0 power compared to any previous group of workers. Organizing of labour can't even happen in tech, as tech has a large percentage of immigrant labour, who have even fewer rights than citizens.
Also, there is no shared pain like union workers had; we have all been given different incentives, working under different corporations, so without shared pain it's impossible to organize. AI is the first shared pain we've had, and even this caused no resistance from tech workers. Resistance has come from the users, which is the first good sign. Consumers have shown more ethics than workers and we have to applaud that. Any resistance to buying chatbot subscriptions has to be celebrated.
Labor organizing is (obviously) banned on HackerNews.
This isn't the place to kvetch about this; you will literally never see a unionization effort on this website because the accounts of the people posting about it will be [flagged] and shadowbanned.
Just a regular senior SDE at one of the Mag7 here. I can tell you everyone at these companies is replaceable within a day. Even within an hour. Even the heads of departments have no power - those above them can fire them on short notice.
This website is literally a place for capitalists (mostly temporarily embarrassed) to brag about how they're going to cheat and scam their way to the top.
So race to the bottom where you work more and make less per unit of work? Great deal, splendid idea.
The only winners here are CEOs/founders who make obscene money, liquidate/retire early while suckers are on the infinite treadmill justifying their existence.
I've gotten XFCE to run with Orca and AT-SPI communicating, to make the desktop environment accessible.
None of this would have happened without AI. Of course, it's only useful for the few people who are blind, use Android, and love Linux and Emacs and such. But it's improved my life a ton. I can do actual work on my phone. I've got Org-mode, calendar, Org-journal, desktop Chromium, etc. all on my phone. And if AI dies tomorrow, I'll still have it. The code is all there for me to learn from, tweak, and update.
I just use one agent, Codex. I don't do the agent swarms yet.
I would learn more about air combat by listening to a 12-minute conversation with a jet fighter pilot than I will from a 3-day seminar by air force journalists.
Tell me about the LLM impact on your work, provided your work is not writing about AI.
Or, if one wishes for a more explicit noise filter: don’t tell me what AI can do. Show me what you shipped with it that isn’t about AI.
When washing machines were introduced, the number of hours spent on the chore of laundry did not necessarily decrease until 40 years after their introduction.
When project management software was introduced, it made the task of managing project tasks easier. One could create an order of magnitude or more of detailed plans in the same amount of time - poorly used this decreased the odds of project success, by eating up everyone's time. And the software itself has not moved the needle in terms of project success factors of successfully completing within budget, time, and resources planned.
It sort of sounds like you expected a one-shot miracle, or am I misreading? Try this: start from scratch and use openspec explore - talk to it about what you are going for, tell it to call the built-in frontend-design skill and install a hugo skill first (https://skills.sh/?q=hugo), context7 for docs, and playwright to check its work via screenshots etc. Also, optionally share any websites with the aesthetic/layout you are going for. Be very descriptive. Also ask it to teach you about hugo as it goes and explain its decisions; I've learned a lot this way.
I probably will, I use AI extensively, but mostly when I can't remember tedious syntax or suspect something can be done in a better way. And that work well for me... If I go too much towards vibe coding, the fun is sucked away for me.
Thought so. I find that too much vibe coding (less spec) will make the AI perform worse, even with Opus 4.6. Pseudocode is obviously where they perform best; having good lower-level specs usually provides a good result too.
> I've had conversations with people recently who are losing sleep because they're finding building yet another feature with "just one more prompt" irresistible.
This is actually a really good point that I have kind of noticed when using AI for side projects, i.e. on my own time. The allure of thinking "Oh, I wonder how it will perform with this feature request if I give it this amount of info".
Can't say I would put off sleep for it but I get the sentiment for sure.
What kills me personally is that I'm constantly 80% there, but the remaining 20% can be just insurmountable. It's really like gambling: Just one more round and it'll be useful, OK, not quite, just one more, for hours.
Do you mean in terms of adding one more feature or in terms of how a feature you're adding almost works but not quite right?
I find the latter a lot more challenging: it's hard to cut my losses when it's on a good run (and often even when I know I could just write this by hand), especially because there's as much if not more intrigue about whether the tool can accomplish it or not. These are the moments where my mind has drifted to thinking about it the exact way you describe it here.
No, I kind of see this too, but the 80% is very much the simpler stuff. AI genuinely saves me some time, but I always notice that if I try to "finish" a relatively complex task that's a bit unique in some regards - when a bit more complex work is necessary, something slightly domain-related maybe - I start prompting and prompting and banging my head against the terminal window to make it try to understand the issue, but somehow it still doesn't turn out well at all, and I end up throwing out most of the work done from that point on.
Sometimes it looks like some of that comes from AI generally being very, very sure of its initial idea ("The issue is actually very simple, it's because...") and then running around in circles once it tries and fails. You can pull it out with a bit more prompting, but it's tough. The thing is, it is sometimes actually right from the very beginning, but if it isn't...
This is just my own perspective after working with these agents for some time, I've definitely heard of people having different experiences.
And let's get real: AI companies will not be satisfied with you paying $20 or even $200 month if you can actually develop your product in a few days with their agents. They are either going to charge a lot more or string you along chasing that 20%.
That's an interesting business model, actually: "Oh hey there, I see you're almost finished with your project and ready to launch; watch these adverts and participate in this survey to get the last 10% of your app completed".
I'm also coming to the conclusion that LLMs have basically the same value as when I tried them out with GPT-3: good for semantic search/debugging, bad for generation, as you constantly have to check and correct it, and the parts you trust it to get "right" are often those that bite you afterwards - or, if right, introduce gaps in your own knowledge that slowly make you inefficient in your "generation controller" role.
Yes, and I’m convinced AI companies either pay or brainwash these people to put out blog posts like this to spread the idea that it actually works. It doesn’t.
As someone who prefers to do one task at a time, using AI tools makes me feel productive and unproductive at the same time: productive because I am able to finish my task faster, unproductive because I feel like I am wasting my time while waiting for the AI to respond.
Productivity aside, what I notice most at work is that our offshore resources are submitting basically 100% AI generated work. Beyond the code itself, ever since we rolled out Copilot, their English has improved immensely. I have to wonder what is the point of keeping them on if they’re just sub-par prompting all their work.
When developers who were comfortable as individual contributors start using agentic AI they necessarily start to work somewhat as managers.
The workflow and responsibilities are very different. It can be a painful transition.
There has always been a strong undercurrent of developers feeling superior to managers and PMs, and now those developers are being forced to confront the reality of a manager or PM's experience.
Work is changing, and the change is only going to accelerate.
I like working on my own projects, and where I found AI really shone was by having something there to bounce ideas off and get feedback.
That changes if you get it to write code for you. I tried vibe-coding an entire project once, and while I ended up with a pretty result that got some traction on Reddit, I didn't get any sense of accomplishment at all. It's kinda like doomscrolling in a way, it's hard to stop but it leaves you feeling empty.
People are a gas, and they expand to fill the space they're in. Tools that produce more work don't make people's lives easier; they just mean an individual needs to do more work using those tools. This is a disposition that most people have, and therefore it's unavoidable. AI is not exciting to me. I only need to use it so I don't fall behind my peers. Why would I ever be interested in that?
I've been saying this since ChatGPT first came out: AI enables the lazy to dig intellectual holes they cannot climb out of, while also enabling those with active critical analysis and good secondary considerations to literally become the fabled 10x-or-more developer/knowledge worker. Which creates interesting scenarios as AI is being evaluated and adopted: the short-sighted are loudly declaring success, which will be short-term success, and they are bullying their work peers into following their method. That method being intellectually lazy: letting the AI code for them, which they then verify with testing and believe they are done. Meanwhile, the quiet ones are figuring out how to eliminate the need for their coworkers at all. Managers are observing productivity growth, which falters with the loud ones, but not with the quiet ones... AI is here to make the scientifically minded excel, and the shortcut takers can footgun themselves out of there.
This is a cope. Managers are not magicians who will finally understand who is good and who is just vibe coding demos. In fact, it's now going to become even harder for managers to see the differences. It's also more likely that the managers are at the same risk, because without a clique of software engineers they would have nothing to manage.
Don't bet on it. Those managers are the previously loud, short-sighted thinkers who finagled their way out of coding. Those loud ones are their buddies.
This intensification is really a symptom of the race to the bottom. It only feels 'exhausting' for people who don't want to lose their job or business to an agent; for everyone else, the AI is just an excuse to do less.
The way you avoid losing your job to an AI agent is not 'intensifying' its use, but learning to drive it better. Much of what people are calling 'intensification' here is really just babysitting and micromanaging their agent because it's perpetually running on vibes and fumes instead of being driven effectively with very long, clearly written (with AI assistance!) prompts and design documents. Writing clear design documentation for your agent is a light, sustainable and even enjoyable activity; babysitting its mistakes is not.
That’s a simplistic take. Displacement isn't about being "dumb", it's about unit economics. A company will replace a brilliant person with a good enough AI if it costs 10% of the salary. The "smart" people who are keeping their jobs are exactly the ones Simon is talking about. They’re being "forced" to work more to prove their value against a machine that never sleeps. That’s the intensification.
I’m convinced all these blog posts on AI productivity are some kind of high level propaganda trying to create a Folie à deux that AI is much better than it is.
The worst part is that it’s so convincing: not only does everyone who can’t make it work feel gaslit about it, but some people even pretend that it works for them so they don’t feel like they’re missing out.
I remember the last time this happened and people were convinced (for like 2 years) that a gif of an ape could somehow be owned and was worth millions of dollars.
It certainly feels like a lot of the crypto bros have moved on to being AI bros.
I'm chalking my poor experience to being too cheap to pay $200 a month for Claude Max 20x so I can run the multiple agents that need to supervise each other.
If increased productivity means your output is the same but now you produce it quicker, sure! But let's be honest, this is rarely the case, and the "free" time is now filled with more work.
Managers don’t even need to push anything. FOMO does all the work.
Overheard a couple of conversations in the office how one IC spent all weekend setting up OpenClaw, another was vibe coding some bullshit application.
I see hundreds of crazy people in our company Slack just posting/reposting twitter hype threads and coming up with ridiculous ideas how to “optimize” workflow with AI.
Once this becomes the baseline, you’ll be seen as the slow one, because you’re not doing 5x work for the same pay.
You do it to yourself, you do, and that's why it really hurts.
> Importantly, the company did not mandate AI use (though it did offer enterprise subscriptions to commercially available AI tools). On their own initiative workers did more because AI made “doing more” feel possible, accessible, and in many cases intrinsically rewarding.
Computer languages were the lathe for shaping the machines to make them do whatever we want, AI is a CNC. Another abstraction layer for making machines do whatever we want them to do.
I feel that the popularization of bloated UI "frameworks", like React and Electron, coupled with the inefficiency tolerated in the "JS ecosystem" have been precursors to this dynamic.
It seems perfectly fitting to me that Anthropic is using a wildly overcomplicated React renderer in their TUI.
React devs are the perfect use case for "AI" dev tools. It is perfectly tolerated for them to write highly inefficient code, and these frameworks are both:
A) Arcane and inconsistently documented
B) Heavily overrepresented in open-source
Meaning there are meaningful gains to be had from querying these "AI" tools for framework development.
In my opinion, the shared problem is the acceptance of egregious inefficiency.
I don't disagree with the concept of AI being another abstraction layer (maybe) but I feel that's an insult to a CNC machine which is a very precise and accurate tool.
LLMs are quite accurate for programming; these days they almost always create code that will compile without errors, and errors are almost always fixable by feeding the error into the LLM. I would say this is extremely precise text generation, much better than most humans.
Just like with CNC, though, you need to feed it the correct instructions. It's still on you for the machined output to do the expected thing. CNCs are also not perfect, and their operators need to know the intricacies of machining.
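The feed-the-error-back loop is mechanical enough to script. A minimal sketch, assuming a hypothetical llm_fix(source, errors) helper and gcc as the example compiler:

    import subprocess

    def compile_fix_loop(path, llm_fix, max_rounds=5):
        for _ in range(max_rounds):
            result = subprocess.run(
                ["gcc", "-c", path, "-o", "/dev/null"],
                capture_output=True, text=True,
            )
            if result.returncode == 0:
                return True                         # compiles cleanly
            # Feed the compiler errors straight back into the model.
            with open(path) as f:
                source = f.read()
            with open(path, "w") as f:
                f.write(llm_fix(source, result.stderr))
        return False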
> LLMs are quite accurate for programming; these days they almost always create code that will compile without errors, and errors are almost always fixable by feeding the error into the LLM.
What domains do you work in? This description does not match my experience whatsoever.
I'm primarily into mobile apps these days, but using LLMs I'm able to write software in languages that I don't know, with tech that I don't understand well (like Bluetooth).
My two cents: this is part of the learning curve. With collective experience, this type of work will be better understood, shared, and explored. It is intense in the beginning because we are still discovering how to work with it. The other part is that this is a non-deterministic tool, which does increase cognitive load.
Isn't the point of AI that you can scroll endlessly while something else works for you?
If you're going to stay single-minded, why wouldn't you just write the code yourself? You're going to have to double check and rewrite the AI's shitty work anyway
> the productivity boost these things can provide is exhausting.
What I personally find exhausting is Simon¹ constantly discovering the obvious. Time after time after time it’s just “insights” every person who smoked one blunt in college has arrived at.
Stop for a minute! You don’t have to keep churning out multiple blog posts a day, every day. Just stop and reflect. Sit back in your chair and let your mind rest. When a thought comes to you, let it go. Keep doing that until you regain your focus and learn to distinguish what matters from what is shiny.
Yes, of course, you’re doing too much and draining yourself. Of course your “productivity” doesn’t result in extra time but is just filled with more of the same, that’s been true for longer than you’ve been alive. It’s a variation of Parkinson’s law.
I find Simon’s blog and TILs to be some of the highest signal to noise content on the internet. I’ve picked up an incredible number of useful tips and tricks. Many of them I would not have found if he did not share things as soon as he discovered something that felt “obvious.” I also love how he documents small snippets and gists of code that are easy to link to and cross-reference. Wish I did more of that myself.
> Many of them I would not have found if he did not share things as soon as he discovered something
You don’t know that. For all you know, your life would’ve been richer if you’d read those thoughts after they’d been left to stew for longer. For all you know, if that happened you would’ve said “most” instead of “many”. Or maybe not; no one can say for sure until it happens.
> that felt “obvious.”
It’s not about feeling obvious. There is value in exploring obvious concepts when you’ve thought about them for longer, maybe researched what others before you had to say on the matter, and you can highlight and improve on all of that. Everyone benefits from a thoughtful approach.
> I’ve picked up an incredible number of useful tips and tricks. (…) I also love how he documents small snippets and gists of code that are easy to link to and cross-reference.
That is (I think clearly, but I may be wrong) not what I’m talking about. A code snippet is very far removed in meaning from a human insight. What I wrote doesn’t just concern Simon’s readers, but Simon as a person. Being constantly “on” isn’t good, it leads to exhaustion (as reported), which leads to burnout. While my first paragraph in the previous comment was a criticism, it was merely an introduction to the rest of the post which was given in empathy. I want us all to do and be better.
> There is value in exploring obvious concepts when you’ve thought about them for longer, maybe researched what others before you had to say on the matter, and you can highlight and improve on all of that.
Respectfully, I think you are wholly missing the point. Running a link along and adding commentary to what other people have written is just social media, it is not at all the same thing as reflecting on a concept thoroughly. We’re discussing two different concepts.
I have a link blog which is links plus commentary. Each post takes 10-30 minutes to write. They're exactly like social media, though I try to add something new rather than just broadcast other people's content. https://simonwillison.net/blogmarks/
I recently added "notes" which are effectively my link blog without a link. Very social media! I use those for content that doesn't deserve more than a couple of paragraphs: https://simonwillison.net/notes/
And then there are "entries". That's my long-form content, each taking one to several hours (or occasionally more, e.g. my annual LLM roundups). Those are the pieces of long-form writing where I aim to "reflect on a concept thoroughly": https://simonwillison.net/entries/
They have established themselves as a reliable communicator of the technology, they are read far and wide, that means they are in a great position to influence the industry-wide tone, and I'm personally glad they are bringing light to this issue. If it upsets you that someone else wrote about something you understood, perhaps consider starting a blog of your own.
Writing about "the obvious" is a useful service. Often people doubt what their own experience is telling them until someone else helps confirm their suspicions and put them into words.
Seems you wrote this at the same time I was responding to someone else. Since I addressed that point in the other reply, I link there and quote the relevant section.
> There is value in exploring obvious concepts when you’ve thought about them for longer, maybe researched what others before you had to say on the matter, and you can highlight and improve on all of that. Everyone benefits from a thoughtful approach.
I’m not saying “don’t share the obvious”, because what is obvious to one person won’t be for someone else. What I am advocating for is thinking more before doing so. In your posts I have repeatedly seen you advocate for opposing ideas at different (but not too distant) points in time. Often you also share a half-baked thought which only later gets the nuance it requires.
More often than not, it’s clear the thoughts should have been stewed for longer to develop into better, more powerful and cohesive ideas. Furthermore, that approach will truly give you back time and relaxation. I take no pleasure in you being exhausted, that is a disservice to everyone.
> In your posts I have repeatedly seen you advocate for opposing ideas at different (but not too distant) points in time.
One of my core beliefs is that "two things can be true at the same time". I write about opposing ideas because they have their own merits.
I believe that most of the criticisms of generative AI are genuine problems. I also believe that generative AI provides incredible value to people who learn how to use it effectively.
I like to think I'm consistent about most of the topics I write about though. Got any examples that stood out to you of my inconsistency?
> One of my core beliefs is that "two things can be true at the same time".
Which is, of course, true in some cases and false in others. But again, not what I’m talking about.
> Got any examples that stood out to you of my inconsistency?
Sorry, I don’t. You publish too often and obviously I’m not going to trawl through a sea of posts to find specific examples. I’m not trying to attack you. Again, my initial post was written in empathy; you’re of course free to take it in earnest and reflect on it or ignore it.
Also, I haven’t called you inconsistent. You’re using that word. I’m not saying you’re constantly flip-flopping or anything like that, and it’s not inconsistent to change one’s mind or evolve one’s ideas.
It feels like you’re doing in these comments what I have just described: going in too fast with the replies, without really thinking them through, without pausing to understand what the argument is. It’s difficult to have a proper honest conversation if I’m trying to be deliberate towards you but you’re being solely reactive. That is, frankly, exhausting, and that’s precisely what I’m advocating against.
Your primary argument here is that it's better to sit with ideas for a while before writing about them.
My counter-argument is that's what I do... for my long form writing (aka "entries"). My link blog is faster reactions and has different standards - while I try to add value to everything I write there it's still a high volume of content where my goal is to be useful and accurate and interesting but not necessarily deep and thoughtful.
And yeah, you're absolutely right that the speed at which I comment here is that same thing again. I treat comments like they were an in-person conversation. They're how I flesh out ideas.
> I’ve definitely felt the self-imposed pressure to only write something if it’s new, and unique, and feels like it’s never been said before. This is a mental trap that does nothing but hold you back.
That's why I like having different content types - links and quotes and notes and TILs - that reduce the pressure to only publish if I have something deep, thoughtful and unique to say.
The opposite is also true. People often follow others off a (figurative) cliff because that's what everyone is doing. We have copious toxic online communities to show for that. Most of the conversations around AI are taking on that same cultish aspect. Look no further than YouTube to find how many born-again AI zealots emerged with ClawdBot/MoltBot/OpenClaw. It's just not as obvious in the blogosphere. What is obvious is the constant "findings" that are nothing more than opinions. There's no historical evidence you won't change your mind in 10 minutes. And that's why I feel, as I read them, that these types of blog posts are foundationally built on sand.
> You don’t have to keep churning out multiple blog posts a day, every day.
The way Simon offers to send you less content if you sign up for their paid newsletter always made me suspicious that the goal could be to overwhelm on purpose.
I'm paid by my GitHub sponsors, who get a monthly summary of what I've been writing about on the basis that I don't want to put a paywall on my content but I'm happy for people to pay me to send them less stuff.
I also make ~$600/month from the ads on my site - run by EthicalAds.
I don't take payment to write about anything. That goes against my principles. It would also be illegal in the USA (FTC rules) if I didn't disclose it - and most importantly it would damage my credibility as a writer, which is the thing I value most.
The big potential money maker here is private consulting based on the expertise (and credibility) I've developed and demonstrated over time. I should do more of that!
Simon is a new form of troll, and you hit the nail on the head as to what kind: soapboxing the obvious, all in the name of AI. Just like the OpenClaw article that hit the FP yesterday, these types of folks are either doing this for marketing or they're really elated by the mediocre. Has Simon actually produced anything novel or compelling? His blog posts surely aren't - so if that's any indication of his work output, I wouldn't be surprised if the answer is a hard no.
And, who wants to be working on 3 projects simultaneously? This is the new "multitasking" agenda from generations ago with a new twist: now I just manage prompts and agents, bro! But the reality is: you think you're doing more than you actually are. Maybe Simon is just pandering to his inevitable AGI overlords so that he will still be useful in the coming Altmania revolution? No idea. Either way, half the time I read his posts (only because they're posted here and I'm excited for his new discoveries) I can't stomach his drivel.
> Has Simon actually produced anything novel or compelling?
Here are some of my recent posts which I self-evaluate as "novel and compelling".
- Running Pydantic’s Monty Rust sandboxed Python subset in WebAssembly https://simonwillison.net/2026/Feb/6/pydantic-monty/ - demonstrating how easy and useful it is to be able to turn Rust code into WASM that can run independently or be used inside a Python wheel for Pyodide in order to provide interactive browser demos of Rust libraries.
- ChatGPT Containers can now run bash, pip/npm install packages, and download files https://simonwillison.net/2026/Jan/26/chatgpt-containers/ - in which I reverse-engineered and documented a massive new feature of ChatGPT that OpenAI hadn't announced or documented anywhere.
If we revisit these posts in a week, a month and then a year my question is: was it useful? Are others building off of this, still?
My answer right now is: you can't answer that question yet, and the fact that you are looking for immediate validation showcases that you're just building random things. Which is great if that's what you want to do. But is it truly novel or compelling? Given you just move on to the next thing, there seems to be a lack of direction, and in that regard I would say: no.
Just because you're doing more doesn't mean anything unless it's truly useful for you or others. I just don't think that's the case here. It's a new form of move fast and break things. And while that can have net positives, we are also very aware it has many net negatives.
It's called the market. If you can compete while not employing eight year olds on your assembly lines and dumping carcinogens in the river, go ahead and compete with those bad companies.
> It’s called the market. If you can compete while providing better life balance, go ahead and compete with those bad companies
With friends like you, who needs enemies? Imagine if we said that about everything. Go ahead and start a garment factory with unlocked exit doors and see if you can compete against these bad garment companies. Go ahead and start your own coal mines that pay in real money and not funny money only redeemable at the company store. Go ahead and start your own factory and guarantee eight hours work, eight hours sleep, eight hours recreation. It is called a market, BRO‽
Corporations have tried to reduce employee burnout exactly zero times.
That’s something that starts at the top. The execs tend to be “type A++” personalities, who run close to burnout, and don’t really have much empathy for employees in the same condition.
But they also don’t believe that employees should have the same level of reward, for their stress.
For myself, I know that I am not “getting maximum result” from using LLMs, but I feel as if they have been a real force multiplier, in my work, and don’t feel burnt out, at all.
> I've had conversations with people recently who are losing sleep because they're finding building yet another feature with "just one more prompt" irresistible.
It's like the invention of the power loom, but for knowledge workers. Might be interesting to look at the history of industrialisation and the reactions to it.
40 years ago when I was a history major in college one of my brilliant professors gave us a book to read called "the myth of domesticity".
In the book, the researcher explains that when washing machines were invented, women faced a whole new expectation of clean clothes all the time, because washing clothes was much less of a labor. And statistics showed that women were actually washing clothes more often, and doing more work, after the washing machine was invented than before.
This happens with any technology. AI is no different.
> I've had conversations with people recently who are losing sleep because they're finding building yet another feature with "just one more prompt" irresistible.
Literal work junkies.
And what’s the point?
If you’re working on your own project, then “just one more feature, bro” isn’t going to make the next Minecraft/Photopea/Stardew Valley/name your one-man wonder.
If you’re working for someone, then you’re a double fool, because you’re doing the work of two people for the pay of one.
In reality, it's a partner who helps with the dishes by bringing home 3 neighbours' worth of dirty dishes. Then says, "You're doing a great job with how fast you're scrubbing those dishes."
An older fairytale goes this way... humans invented tools to transport them from place to place faster than ever, and also communication systems that transmit messages faster than ever. Now everything can be done in less time and they can enjoy the rest. Right?
Hopefully it will be like an Ebola virus, so that everyone will see how deadly it is instead of like smoking where you die of cancer 40 years down the line.
I think I'll have a happier medium when I get additional inputs set up, such as just talking to a CLI running my full code base on a VPS, but through my phone and airpods, only when it needs help
at least I won't be vegetating at a laptop, or shirking other possible responsibilities to get back to a laptop
I’ll never get the “I wish I could control it from my phone” types, or “I’ve been working while walking my dog, amazing”. Why would you want to subject yourself to work outside of working hours?
The beauty of work laptop is that either I work or I don’t. Laptop open - work time, laptop closed - goodbye, see you on Monday.
That's the bane of all productivity increasing tools, any time you free up immediately gets consumed by more work.
People keep on making the same naive assumption that the total amount of work is a constant when you mess with the cost of that work. The reality is that if you make something cheaper, people will want more of it. And it adds up to way more than what was asked before.
That's why I'm not worried about losing my job. The whole notion is based on a closed world assumption, which is always a bad assumption.
If you look at the history of computers and software engineering - compilers, CI/CD, frameworks/modules, functional and OO programming paradigms, type inference, etc. - there's something new every few years. Every time we make something easier and cheaper, demand goes up and the number of programmers increases.
And every time, you have people being afraid of losing their jobs. Sometimes jobs indeed disappear, because that particular job ceases to exist when technique X gets replaced with technique Y. But mostly people just keep their jobs and learn the new thing on the job. Or they change jobs and skill up as they go. People generally only lose their jobs when companies fail or start shrinking. It's more tied to economic cycles than to technology. And some companies just fail to adapt. AI is going to be similar. Lots of companies are flirting with it but aren't taking it seriously yet. Adoption cycles are always longer than people seem to think.
AI prompting is just a form of higher-level programming, and being able to program is a non-optional skill for being able to prompt effectively. I'd use the word meta-programming, but of course that's one of those improvements we already had.
Right. Because the demand for hand-written code will be high enough that you will keep your job?
Or did you mean that you expect to lose the current job (writing software) and have a new job (directing an agent to write software)?
You really expect to get paid the same doing a low-skill job (directing an agent) as you did the high-paid one (writing software)?
After all, your examples
> If you look at the history of computers and software engineering - compilers, CI/CD, frameworks/modules, functional and OO programming paradigms, type inference, etc. - there's something new every few years. Every time we make something easier and cheaper, demand goes up and the number of programmers increases.
Were all, with the exception of the invention of high-level languages, an increase in skill requirements from the practitioners, not a decrease in skill requirements.
> That's why I'm not worried about losing my job. The whole notion is based on a closed world assumption, which is always a bad assumption.
You might be right, but some of us haven't quite warmed to the idea that our new job description will be something like "high-level planner and bot-wrangler," with nary a line of code in sight.
I am becoming more and more convinced that AI can't be used to make something better than what could have been built before AI.
You never needed 1000s of engineers to build software anyway; Winamp & VLC were built by fewer than four people. You only needed 1000s of people because the executive vision is always to add more useless junk into each product. And now with AI that might be even harder to avoid. This would mean there would be 1000s of do-everything websites in the future in the best case, or billions of apps that do one thing terribly in the worst case.
The percentage of good, well-planned, consistent and coherent software is going to approach zero in both cases.
I’m finding that the code LLMs produce is just average. Not great, not terrible. Which makes sense, the model is basically a complex representation of the average of its training data right? If I want what I consider ‘good code’ I have to steer it.
So I wouldn’t use LLMs to produce significant chunks of code for something I care about. And publishing vibe coded projects under my own GitHub user feels like it devalues my own work, so for now I’m just not publishing vibe coded projects. Maybe I will eventually, under a ‘pen name.’
We've gone from "it's glorified auto-complete" to "the quality of working, end-to-end features, is average", in just ~2 years.
I think it goes without saying that they will be writing "good code" in a short time.
I also wonder how much of this "I don't trust them yet" viewpoint is coming from people who are using agents the least.
Is it rare that AI one-shots code that I would be willing to raise as a PR with my name on it? Yes, extremely so (almost never).
Can I write a more-specified prompt that improves the AI's output? Also yes. And the amount of time/effort I spend iterating on a prompt, to shape the feature I want, is decreasing as I learn to use the tools better.
I think the term prompt-engineering became loaded to mean "folks who can write very good one-shot prompts". But that's a silly way of thinking about it imo. Any feature with moderate complexity involves discovery. "Prompt iteration" is more descriptive/accurate imo.
Is there a big enough dataset of 'good' code to train from though?
I (and lots of people) used to think the models would run out of training data and it would halt progress.
They did run out of human-authored training data (depending on who you ask), in 2024/2025. And they still improve.
> They did run out of human-authored training data (depending on who you ask), in 2024/2025. And they still improve.
It seemed to me that improvements due to training (i.e. the model) in 2025 were marginal. The biggest gains were in structuring how the conversation with the LLM goes.
> And they still improve.
But what asymptote are they approaching? Average code? Good code? Great code?
AHEM
Let me repeat myself.
I think it goes without saying that they will be writing "good code" in a short time.
I think you're kind of missing the point.
Think about it from a resource (calorie) expenditure standpoint.
Are you expending more resources on writing the prompts vs just doing it without them? That's the real question.
If you are expending more, which is what Simon is hinting at - are you really better off? I'd argue not, given that this can't be sustained for hours on end. Yet the expectation from management might be that you should be able to sustain this for 8 hours.
So again, are you better off? Not in the slightest.
Many things in life are counter-intuitive and not so simple.
P.S. you're not getting paid more for increasing productivity if you are still expected to work 8 hrs a day... lmao. Thankfully I'm not a SWE.
I don't think I'm missing the point and respectfully, I think your reply is completely unrelated to anything that I said.
Whether you are "better off or not" is a separate topic, and I never suggested one way or the other.
Simon's point is that engineers can be so productive with these tools that it is tempting to work (much) longer.
Simon: "I'm frequently finding myself with work on two or three projects running parallel. I can get so much done, but after just an hour or two my mental energy for the day feels almost entirely depleted."
You're a time waster, stop posting and creating noise.
Time wasting would be not reading the comment I replied to, and then thinking I was replying to Simon/the article.
Does that sound familiar?
People often describe the models as averaging their training data, but even for base models predicting the most likely next token this is imprecise and even misleading, because what is most likely is conditional on the input as well as what has been generated so far. So a strange input will produce a strange output — hardly an average or a reversion to the mean.
On top of that, the models people use have been heavily shaped by reinforcement learning, which rewards something quite different from the most likely next token. So I don’t think it’s clarifying to say “the model is basically a complex representation of the average of its training data.”
The average thing points to the real phenomenon of underspecified inputs leading to generic outputs, but modern agentic coding tools don’t have this problem the way the chat UIs did because they can take arbitrary input from the codebase.
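A toy sketch of that point, with an invented probability table standing in for what a real model learns from data. The structure is the key idea: the model samples from P(next token | prefix), so the output tracks the input rather than reverting to some global average:

    import random

    # Invented conditional next-token distributions; a real model learns
    # these from data, but the shape is the same: P(next | prefix).
    COND = {
        ("the", "weather", "is"): {"nice": 0.6, "awful": 0.3, "beige": 0.1},
        ("the", "eldritch", "fog", "is"): {"whispering": 0.7, "hungry": 0.3},
    }

    def sample_next(prefix):
        dist = COND[tuple(prefix)]
        tokens, weights = zip(*dist.items())
        return random.choices(tokens, weights=weights)[0]

    print(sample_next(["the", "weather", "is"]))          # mundane prefix, mundane output
    print(sample_next(["the", "eldritch", "fog", "is"]))  # strange prefix, strange output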
And Unix was mainly made by two people. It's astounding that, as I get older, even tech managers don't know "The Mythical Man-Month" or how software production generally scales.
Sorry but 99.999% of developers could not have built Unix. Or Winamp.
Managers are crossing their fingers that devs they hire are no worse than average, and average isn't very good.
> Sorry but 99.999% of developers could not have built Unix. Or Winamp.
> Managers are crossing their fingers that devs they hire are no worse than average, and average isn't very good.
The problem is that that's the same skill required to safely use AI tools. You need to essentially audit its output, ensure that you have a sensible and consistent design (either supplied as input or created by the AI itself), and 'refine' the prompts as needed.
AI does not make poor engineers produce better code. It does make poor engineers produce better-looking code, which is incredibly dangerous. But ultimately, considering the amount of code written by average engineers out there, it actually makes perfect sense for AI to be an average engineer — after all, that's the bulk of what it was trained on! Luckily, there's some selection effect there since good work propagates more, but that's a limited bias at best.
Agree completely. Where I'm optimistic about AI is that it can also help identify poorly written code (even its own code), and it can help rewrite it to be better quality. Average developers can't do this part.
From what I've found it's very easy to ask the AI to look at code and suggest how to make the code maintainable (look for SRP violations, etc, etc). And it will go to work. Which means that we can already build this "quality" into the initial output via agent workflows.
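As a rough sketch of what such a workflow can look like - call_llm here is a placeholder for whichever model API you actually use, not a real function:

    REVIEW_PROMPT = (
        "Review this code for maintainability problems (SRP violations, "
        "long functions, unclear names). Reply with just DONE if it is "
        "fine, otherwise return a fully revised version of the code."
    )

    def call_llm(prompt: str) -> str:
        # Placeholder: wire this up to whichever model API you use
        # (OpenAI, Anthropic, the llm CLI/library, etc.).
        raise NotImplementedError

    def generate_with_review(task: str, max_rounds: int = 3) -> str:
        # Draft once, then loop: ask the model to critique and revise
        # its own output until it reports no further problems.
        code = call_llm(f"Write Python code for this task: {task}")
        for _ in range(max_rounds):
            verdict = call_llm(f"{REVIEW_PROMPT}\n\n{code}")
            if verdict.strip() == "DONE":
                break
            code = verdict  # the model returned a revised version
        return code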
That's because the distribution of developer quality and capability is skewed.
What you're pointing at is the trade-off between concentration of understanding vs fragmented understanding across more people.
The former is always preferred in the context of product development, but it poses a key-person risk. Apple in its current form is a representation of this - Steve did enough work to keep the company going for a decade after his death. Now it's sort of lost on where to go next. But on the flip side, look at its market cap today vs 2000.
> Winamp & VLC were built by fewer than four people. You only needed 1000s of people because the executive vision is always to add more useless junk into each product.
Many types of software have essential complexity and minimal features that still require hundreds/thousands of software engineers. Having just 4 people is simply not enough man-hours to build the capabilities customers desire.
Think of complex software like 3D materials modeling and simulation, or logistics software like factory and warehouse planning. Even the Linux kernel and userspace have thousands of contributors, and the baseline features (drivers, sandbox, GUI, etc.) that users want from a modern operating system cannot be built by a 4-person team.
All that said, there are lots of great projects with tiny teams. SQLite is 3 people. Foobar2000 is one person. ShareX, the screen-capture tool, is I think 1 developer in Turkey.
I will use my right to disagree. Maybe not 4 people everywhere, but if you have a product with a well-thought-out feature set, you create those features and then you really don't need 1000s of people just to keep it alive and add features one by one.
I am - of course - talking about a perfect approach, with everyone focused on not f**ing it up ;)
But big projects are where the quality of LLM contributions fall the most, and require (continuous, exhausting, thankless) supervision!
Big projects can still be highly modular, and projects built by "1000s of devs" typically are. If your desired change can be described clearly without needing too much unrelated context, the LLM will probably get it right.
> percentage of good, well planned, consistent and coherent software is going to approach zero
So everything stays exactly the same?
> So everything stays exactly the same?
No, we get applications so hideously inefficient that your $3000 developer machine feels like it's running a Pentium II with 256 MB of RAM.
We get software that's as slow as it was 30 years ago, for no reason other than our own arrogance and apathy.
Do not insult P-II w/256 MB of RAM. That thing used to run this demo[0] at full speed without even getting overwhelmed.
Except for some very well maintained software, the mundane things we do today waste so many resources it makes me sad.
Heck, the memory use of my IDE peaks at VSCode's initial memory consumption, and I'd argue that my IDE will run circles around VSCode while sipping coffee and compiling code.
> for no reason other than our own arrogance and apathy.
I'll add greed and apparent cost reduction to this list. People think they win because they reduce time to market, but that time penalty is offloaded onto users. Developers gain a couple of hours once; we lose the same time every couple of days while waiting on our computers.
I once read a comment by a developer which can be paraphrased as "I won't implement this. It'll take 8 hours. That's too much". I wanted to plant my face into my keyboard full-force, not kidding.
Heck, I tuned/optimized an algorithm for two weeks, which resulted in 2x-3x speedups and enormous memory savings.
We should understand that we don't own the whole machine while running our code.
[0]: https://www.pouet.net/prod.php?which=1221
No insult intended!
Thanks for sharing the demo!
> No insult intended!
Haha, I know. Just worded like that to mean that even a P-II can do many things if software is written well enough.
You're welcome. That demo single-handedly threw me down the high-performance computing path. I thought, if making things this efficient is possible, all the code I'll be writing will be as optimized as the constraints allow.
Another amazing demo is Elevated [1]. I show its video to people and ask them to guess the binary and resource size. When they hear the real value, they generally can't believe it!
Cheers!
[1]: https://www.pouet.net/prod.php?which=52938
> No, we get applications so hideously inefficient that your $3000 developer machine feels like it it's running a Pentium II with 256 MB of RAM.
> We get software that's as slow as it was 30 years ago, for no reason other than our own arrogance and apathy.
I feel like I've read this exact same take on this site for the past 15 years.
I think the error was to say "as fast"
https://news.ycombinator.com/item?id=36446933
Because it's been true for the past 15 years.
In a bizarre way all these new datacentre build outs may have 'fake demand' because of how inefficiently software gets produced.
Much of the new datacenter capacity is for GPU-based training or inference, which are highly optimized already. But there's plenty of scope for optimizing other, more general workloads with some help from AI. DRAM has become far more expensive and a lot of DRAM use on the server is just plain waste that can be optimized away. Same for high-performance SSDs.
I find it hard to disagree with this (sadly).
I do feel things in general are more "snappy" at the OS level, but once you get into apps (local or web), things don't feel much better than 30 years ago.
The two big exceptions for me are video and gaming.
I wonder how people who work in CAD, media editing, or other "heavy" workloads etc, feel.
> I wonder how people who work in CAD, media editing, or other "heavy" workloads etc, feel.
I’d let you know how I feel but I’m too busy restarting Solidworks after its third crash of the day. I pay thousands a year for the privilege.
Well that's sad to hear. Also kinda makes me glad I didn't wander down the path of learning Solidworks during the pandemic.
> I wonder how people who work in CAD, media editing, or other "heavy" workloads etc, feel.
I would assume (generally speaking) that CAD and video editing applications are carefully designed for efficiency because it's an important differentiator between different applications in the same class.
In my experience, these applications are some of the most exciting to use, because I feel like I'm actually able to leverage the power of my hardware.
IMO the real issue is bloated desktop apps like Slack, Discord, Spotify, or Claude's TUI, which consume massive amounts of resources without doing much beyond displaying text or streaming audio files.
Moving data for 'heavy' workflows into the cloud is the most common performance bottleneck I see.
I get this comment every time I say this, but there are levels to this. What you think is bad today could be considered artisan when things become worse than today.
I mean, you've never used the desktop version of Deltek Maconomy, have you? Somehow I can tell.
My point here is not to roast Deltek, although that's certainly fun (and 100% deserved), but to point out that the bar for how bad software can be and still, somehow, be commercially viable is already so low it basically intersects the Earth's centre of gravity.
The internet has always been a machine that allows for the ever-accelerated publishing of complete garbage of all varieties, but it's also meant that in absolute terms more good stuff also gets published.
The problem is one of volume not, I suspect, that the percentages of good versus crap change that much.
So we'll need better tools to search and filter but, again, I suspect AI can help here too.
Underrated comment. The reason that everyone complains about code all the time is because most code is bad, and it’s written by humans. I think this can only be a step up. Nailing validation is the trick now.
Validation was always the hard part, outside of truly novel areas - think edges of computer science (which generally happen very rarely and only need to be explored once or a handful of times).
Validation was always the hard part because great validation requires great design. You can't validate garbage.
No, wealth gets more concentrated. Fewer people on the team will be able to afford a comfortable lifestyle and save for retirement. More will edge infinitesimally closer to "barely scraping by".
“You never needed 1000s of engineers to build software anyway”
What is the point of even mentioning this? We live in reality. In reality, there are countless companies with thousands of engineers making each piece of software. Outside of reality, yes you can talk about a million hypothetical situations. Cherry picking rare examples like Winamp does nothing but provide an example of an exception, which yes, also exists in the real world.
Completely agree. There is a common misunderstanding/misconception in product development, that more features = better product.
I’ve never seen a product/project manager questioning themselves: does this feature add any value? Should we remove it?
In agile methodologies we measure the output of the developers. But we don't care whether that output carries any meaningful value to the end user/business.
It’s more about operational resilience and serving customers than product development. If you run an early-WhatsApp-like organisation, just 1 person leaving can create awful problems. The same goes for serving customers: big clients especially need all kinds of reports and resources that a skeleton organisation cannot provide.
Yeah, that’s a misconception too based on my experience.
I’ve seen many people (even myself) thinking the same: if I quit/something happens to me, there will be no one who knows how this works/how to do this. It turned out the businesses always survived. There was a tiny inconvenience, but other than that: nothing. There is always someone willing to pick up/take over the task in no time at all.
I mean I agree with you, in theory. But that’s not what I’ve seen in practice.
It's also about marketing. People buy because of features.
The people making the buying decisions may not have a good idea of what maximises "meaningful value" but they compare feature sets.
> I’ve never seen a product/project manager questioning themselves: does this feature add any value? Should we remove it?
To be fair, it is a hard question to contend with. It is easier to keep users who don't know what they're missing happier than users who lost something they now know they want. Even fixing bugs can sometimes upset users who have come to depend on the bug as a feature.
> In agile methodologies we measure the output of the developers.
No we don't. "Individuals and interactions over processes and tools". You are bound to notice a developer with poor output as you interact with them, but explicitly measure them you will not. Remember, agile is all about removing managers from the picture. Without managers, who is even going to do the measuring?
There are quite a few pre-agile methodologies out there that try to prepare a development team to operate without managers. It is possible you will find measurement in there, measuring to ensure that the people can handle working without managers. Even agile itself recognizes in the 12 principles that it requires a team of special people to be able to handle agile.
I didn’t mean the Agile Manifesto prescribes individual productivity measurement. I meant what often happens in “agile in the wild”: we end up tracking throughput proxies (story points completed, velocity, number of tickets closed, burndown charts) and treating that as success, while the harder question (“did this deliver user/business value?”) is weakly measured or ignored.
Also, agile isn’t really “removing managers from the picture” so much as shifting management from command-and-control to enabling constraints, coaching, and removing impediments. Even in Scrum, you still have roles with accountability, and teams still need some form of prioritization and product decision-making (otherwise you just get activity without direction).
So yeah: agile ideals don’t say “measure dev output.” But many implementations incentivize output/throughput, and that’s the misconception I was pointing at.
> we end up tracking throughput proxies (story points completed, velocity, number of tickets closed, burndown charts) and treating that as success
That sounds more like scrum or something in that wheelhouse, which isn't agile, but what I earlier called pre-agile. They are associated with agile as they are intended to be used as a temporary transitionary tool. One day up and telling your developers "Good news, developers. We fired all the managers. Go nuts!" obviously would be a recipe for disaster. An organization wanting to adopt agile needs to slowly work into it and prove that the people involved can handle it. Not everyone can.
> Also, agile isn’t really “removing managers from the picture” so much as shifting management from command-and-control to enabling constraints, coaching, and removing impediments.
That's the pre-agile step. You don't get rid of managers immediately, you put them to work stepping in when necessary and helping developers learn how to manage without a guiding hand. "Business people" remain involved in agile. Perhaps you were thinking of that instead? Under agile they aren't managers, though, they are partners who work together with the developers.
Wait, surely adding 10x more agents to my project will speed up development, improve the end product, and make me more productive by that same proportion, right?
I will task a few of them to write a perfectly detailed spec up front, break up the project into actionable chunks, and then manage the other workers into producing, reviewing, and deploying the code. Agents can communicate and cooperate now, and hallucination is a solved problem. What could go wrong?
Meanwhile, I can cook or watch a movie, and occasionally steer them in the right direction. Now I can finally focus on the big picture, instead of getting bogged down by minutiae. My work is so valuable that no AI could ever replace me.
/s
I just built a programming language in a couple of hours, complete with an interpreter, with Claude Code. I know nothing about designing and implementing programming languages: https://github.com/m-o/MoonShot.
Yes, my point is that it was possible to build it before AI, and with much less effort than people imagine. People in college build an interpreter in less than a couple of weeks anyway, and that probably has more utility.
Consider two scenarios:
1) I try to build an interpreter. I go and read some books, understand the process, build it in 2 weeks. Results: I have a toy interpreter. I understand said toy interpreter. I learnt how to do it, learnt ideas in the field, applied my knowledge practically.
2) I try to build an interpreter. I go and ask Claude to do it. It spits out something which works. Result: I have a black-box interpreter. I don't understand said interpreter. I didn't build any skills in building it. Took me less than an hour.
The toy interpreter is useless in both scenarios, but scenario 1 pays off the 2-week effort, while scenario 2 is a vanity project.
Yes, but you can combine the approaches. Aka, when you know what you are working on, you can make it much faster. Or you build something and learn from it.
I think there will be a lot of slop and a lot of useful stuff. But also, what I did was just an experiment to see if it is possible; I don't think it is usable, nor do I have any plans to make it into a new language. And it was done in less than 3 hours total time.
So for example, if you want to try new language features - let's say total immutability, or nullability as a type - then you can build a small language and try to write code in it, as in the sketch below. Instead of working on it for weeks, you can do it in hours.
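To give a feel for the scale involved, here's the sort of toy I mean: a single-assignment ("total immutability") expression language in a few lines of Python. All syntax and names are invented for illustration (Python 3.10+ for the match statement):

    # Tiny tree-walking interpreter for an invented single-assignment
    # language: integers, names, ("let", name, value, body), ("+", a, b).
    def evaluate(expr, env):
        match expr:
            case int(n):
                return n
            case str(name):
                return env[name]
            case ("let", name, value, body):
                # Single assignment: rebinding an existing name is an error.
                if name in env:
                    raise ValueError(f"{name} is immutable and already bound")
                return evaluate(body, {**env, name: evaluate(value, env)})
            case ("+", a, b):
                return evaluate(a, env) + evaluate(b, env)

    print(evaluate(("let", "x", 1, ("+", "x", 41)), {}))  # 42
    # evaluate(("let", "x", 1, ("let", "x", 2, "x")), {}) raises: rebinding forbidden.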
Took a quick look; this seems like a copy of the Writing an Interpreter in Go book by Thorsten Ball, but just much worse.
Also using double equals to mutate variables, why?
Also, I didn't read that book. If there are similarities in the language, it must be an accident, or Claude steering me toward what it knows. And if it's the interpreter design, then it probably is from that book. And they told us that they don't memorise the material.
Just because I wanted it to. I made some design choices that I found interesting.
You built something.
Now comes the hard or impossible part: is it any good? I would bet against it.
Oh, thank you for informing me.
I feel agentic development is a time sink.
Previously, I'd have an idea, sit on it for a while. In most cases, conclude it's not a good idea worth investing in. If I decided to invest, I'd think of a proper strategy to approach it.
With agentic development, I have an idea, waste a few hours chasing it, then switch to other work, often abandoning the thing entirely.
I still need to figure out how to deal with that, for now I just time box these sessions.
But I feel I'm trading thinking time for execution time, and understanding time for testing time. I'm not yet convinced I like those tradeoffs.
Edit: Just a clarification: I currently work in two modes, depending on the project. In some, I use agentic development. In most, I still do it "old school". That's what makes the side effects I'm noticing so surprising. Agentic development pulls me down rabbit holes and makes me lose the plot and focus. Traditional development doesn't; its side effects apparently keep me focused and in control.
That's weird, I'm the opposite. Previously I would start coding immediately, because writing the code helps me figure out the what and the how, and because I'd end up with modular/reusable bits that will be helpful later anyway.
Now I sit on an idea for a long time, writing documentation/specs/requirements because I know that the code generation side of things is automated and effortlessly follows from exhaustive requirements.
>With agentic development, I have an idea, waste a few hours chasing it, then switch to other work, often abandoning the thing entirely.
How much of this is because you don't trust the result?
I've found this same pattern in myself, and I think the lack of faith that the output is worth asking others to believe in is why it's a throwaway for me. Just yesterday someone mentioned a project underway in a meeting that I had ostensibly solved six months ago, but I didn't even demo it because I didn't have any real confidence in it.
I do find that's changing for myself. I actually did demo something last week that I 'orchestrated into existence' with these tools. In part because the goal of the demo was to share a vision of a target state rather than the product itself. But also because I'm much more confident in the output. In part because the tools are better, but also because I've started to take a more active role in understanding how it works.
Even if the LLMs come to a standstill in their ability to generate code, I think the practice of software development with them will continue to mature to a point where many (including myself) will start to have more confidence in the products.
> With agentic development, I have an idea, waste a few hours chasing it, then switch to other work, often abandoning the thing entirely.
My experience with LLMs is that they will call any idea a good idea, one feasible enough to pursue!
Their training to be a people-pleaser overrides almost everything else.
It might depend on how you word it. I specifically asked about a CalDAV and a Firefox Sync solution, explaining how difficulty-averse I was, and I have been berated both times.
> Previously, I'd have an idea, sit on it for a while.
> With agentic development, I have an idea, waste a few hours chasing it,
What's the difference between these 2 periods? Weren't you wasting time when sitting on it and thinking about your idea?
Sitting on an idea doesn’t have to mean literally sitting and staring at the ceiling, thinking about it. It means you have an idea and let it stew for a while, your mind coming back to it on its own while you’re taking a shower, doing the dishes, going for a walk… The idea which never comes back is the one you abandon and would’ve been a waste of time to pursue. The idea which continues to be interesting and popping into your head is the worthwhile one.
When you jump straight into execution because it’s easy to do so, you lose the distinction.
They for sure weren't evaporating hundreds of litres of water and wasting a bunch of electricity while doing it.
Sitting on an idea doesn't necessarily mean being inactive. You can think at the same time as doing something else. "Shower thoughts" are often born of that process.
I know, and letting an agent/LLM "think" about some ideas does not waste your time either. Yes, it "wastes" energy, and you need to read and think about the results afterwards; we don't have neural interfaces to computers, so the inner thinking feedback loop will always be faster. But I keep thinking the GP comment was unfair: you can keep your idea in the background, checking whether it is good, exactly the same as before, and after that time "discuss" it with an LLM, or ask it to implement the idea because you think it's solid enough. It's a false dichotomy.
If you do not know what you want to build, how to ask the AI for what you want, or what the correct requirements are, then it becomes a waste of time and money.
More importantly, as the problem becomes more complex, it matters more whether you know where the AI falls short.
Case study: Security researchers were having a great time finding vulnerabilities and security holes in Openclaw.
The Openclaw creators had a very limited background in security; even though the AI built Openclaw entirely, the authors had to collaborate with security experts to secure the whole project.
> If you do not know what you want to build
That describes the majority of cases actually worth working on as a programmer in the traditional sense of the word. You build something to begin to discover the correct requirements and to picture the real problem domain in question.
> You build something to begin to discover the correct requirements and to picture the real problem domain in question.
You lose that if the agent builds it for you, though; there is no iteration cycle for you, only for the agent. This means you are missing out on a bunch of learning that you would previously had gotten from actually writing something.
Prior to agents, more than once a week I'd be writing some code and pick up some new trick/technique/similar. I expect that if you feel there are no programming skills and tricks left for you to learn, then sure, you aren't missing out on anything.
OTOH, I've been doing this a long time, and I still learn new things (for implementation, not design) on each new non-trivial project.
> You build something to begin to discover the correct requirements and to picture the real problem domain in question.
That's one way, another way is to keep the idea in your head (both actively and "in the background) for days/weeks, and then eventually you sit down and write a document, and you'll get 99% of the requirements down perfectly. Then implementation can start.
Personally I prefer this hammock-style development and to me it seems better at building software that makes sense and solves real problems. Meanwhile "build something to discover" usually is best when you're working with people who need to be able to see something to believe there is progress, but the results are often worse and less well-thought out.
This.
It's better to have a solid, concrete idea of the entire system written down - one that has ironed out the limitations, requirements and constraints - before jumping into the code implementation or getting the agent to write it for you.
The build-something-to-discover approach is not for building robust solutions in the long run. Starting with the code first without knowing what you are solving, or getting the AI to generate something half-working that breaks easily and then changing it yet again until it becomes even more complicated, just wastes more time and tokens.
Someone still has to read the code and understand why the project was built on a horrible foundation and needs to know how to untangle the AI vibe-coded mess.
with agentic development, I've finally considered doing open source work for no reason aside from a utility existing
before, I would narrow things down to only the most potentially economically viable, and laugh at ideas guys that were married to the one single idea in their life as if it was their only chance, seemingly not realizing they were competing with people that get multiple ideas a day
back to the aforementioned epiphany, it reminds me of the world of Star Trek, where everything was developed out of curiosity and for its utility instead of money
TBH, I have found AI addictive. You use it for the first time, and it's incredible. You get a nice kick of dopamine. This kick of dopamine decreases with every win you get. What once felt incredible is just another prompt today.
Those things don't excite you any more. Plus, the fact that you no longer exercise your brain at work any more. Plus, the constant feeling of FOMO.
It deflates you, faster.
I think the randomness is addicting. While writing a prompt often doesn't result in the perfect outcome, it very well could. Pressing the "prompt lever" (again and again), waiting for the result to show up looks a lot like gambling.
What felt incredible was getting the setup and prompting right and then producing reasonable working code at 50x human speed. And you're right, that doesn't excite after a while.
But I've found my way to what, for me, is a more durable and substantial source of satisfaction, if not excitement, and that is value. Excuse the cliche, but it's true.
My life has been filled with little utilities that I've been meaning to put together for years but never found the time for. My homelab is full of various little applications that I use, that are backed up and managed properly. My home automation does more than it ever did, and my cabin in the countryside is monitored and adaptive to conditions to a whole new degree of sophistication. I have scripts and workflows to deal with a fairly significant administrative load - filing and accounting is largely automated, and I have a decent approximation of an always up-to-date accountant and lawyer on hand. Paper letters and PDFs are processed like it's nothing.
Does all the code that was written at machine-speed to achieve these things thrill me? No, that's the new normal. Is the fact that I'm clawing back time, making my Earthly affairs orderly in a whole new way, and breathing software-life into my surroundings without any cloud or big-tech encroachment thrilling? Yes, sometimes - but more importantly it's satisfying and calming.
As far as using my brain - I devote as much of my cognitive energy to these things as I ever have, but now with far more to show for it. As the agents work for me, I try to learn and validate everything they do, and I'm the one stitching it all into a big cohesive picture. Like directing a film. And this is a new feeling.
Isn't it just like programming?
Many programmers became programmers because they found the idea of programming fascinating, probably in their middle-school days. And then they went on to become professionals. Then they burned out and, if they were lucky, transitioned to management.
Of course not everyone is like that, but you can't say it isn't common, right?
If what once felt incredible is just another prompt today, what is incredible today? Addictive personalities usually double down to get a bigger dopamine kick - that's why they stay addicted. So I don't think you truly found it addictive in the conventional sense of the term. Also, exercising the brain has been optional in software for quite a while, tbh.
Apart from the addicts, AI also helps the liars, marketeers and bloggers. You can outsource the lies to the AI.
Yeah if you want to keep your edge you have to find other ways to work your programming brain.
But as far as output - we all have different reasons for enjoying software development but for me it's more making something useful and less in the coding itself. AI makes the fun parts more fun and the less fun parts almost invisible (at small scale).
We'll all have to wrestle with this going forward.
If you use an LLM you've given up your edge.
If you use a compiler you've given up your edge.
This is not a technology problem. AI intensifies work because management turns every efficiency gain into higher output quotas. The solution is labor organization, not better software.
Labor organization yes! I don't quite know how to achieve it. I also worry that my desire to become a manager is in direct conflict with my desire to contribute to labor organization.
On a separate note, I have the intensification problem in my personal work as well. I sit down to study, but, first, let me just ask Claude to do some research in the background... Oh, and how is my Cursor doing on the dashboard? Ah, right, studying... Oh, Claude is done...
> I also worry that my desire to become a manager is in direct conflict with my desire to contribute to labor organization.
Nah. A labor organization of any meaningful size needs management. And a tech union would have to be humongous given how easy it is to move the work around.
> I don't quite know how to achieve it.
Definitely not by posting on right-wing social media websites.
> I also worry that my desire to become a manager is in direct conflict with my desire to contribute to labor organization.
It is.
Why does management turn efficiency gains into higher output quotas? Because competition forced them to. This is a feature of free market capitalism. A single participant can't decide to keep output as is when efficiency improves, because it will lose the competition to those that increase output. Labor organization could be the solution if it was global. Labor organizations that are based in a single country will just lead to work moving to countries without labor organization.
This problem of efficiency gains never translating to more free time is a problem deep in our economic system. If we want a fix, we need to change the whole system.
The driving force is not management or even developers; it's always the end users. They get to do more with less, thanks to the growing output. This is something to be celebrated, not a problem to be "solved" with artificial quotas.
I am all for labor organization. I just don’t see how it would be of benefit in this particular case.
If I'm not mistaken it would appear that you're saying that you are in fact *not* for labor organization in this case.
No, absolutely not. I would even be for labor organization if it had no impact on this matter primarily because I don't see why it would be a negative.
The leftist thought process never ceases to amaze me:
"This time, its going to be the correct version of socialism."
Labor organization is the solution on the right. The left believes in regulation.
You can be libertarian and a capitalist and still be pro-union. At the end of the day, a Collective Bargaining Agreement is just a private contract between two parties. It can be a way to raise wages without government setting a minimum price for labor.
While I'd agree most of its proponents (like myself) also favor other left-wing policies, I'm just saying it doesn't need to be.
This argument has been used against every new technology since forever.
And the initial gut reaction is to resist by organizing labor.
Companies that succumb to organized labor get locked into that speed of operating. New companies get created that adopt 'the new thing' and blow old companies away.
Repeat.
> And the initial gut reaction is to resist by organizing labor.
Yeah, like tech workers have similar rights to union workers. We literally have 0 power compared to any previous group of workers. Organizing of labour can't even happen in tech, as tech has a large percentage of immigrant labour who have even fewer rights than citizens.
Also, there is no shared pain like union workers had; we have all been given different incentives, working under different corporations, so without shared pain it's impossible to organize. AI is the first shared pain we've had, and even this caused no resistance from tech workers. Resistance has come from the users, which is the first good sign. Consumers have shown more ethics than workers, and we have to applaud that. Any resistance to buying chatbot subscriptions has to be celebrated.
Labor organizing is (obviously) banned on HackerNews.
This isn't the place to kvetch about this; you will literally never see a unionization effort on this website because the accounts of the people posting about it will be [flagged] and shadowbanned.
I'm curious as to what previous group you're comparing yourself (and the rest of us) to.
I'm also curious as to what you do, where you do it, and who you work for that makes you feel like you have zero power.
Just a regular senior SDE at one of the Mag7. I can tell you everyone at these companies is replaceable within a day. Even within an hour. Even the head of depts have no power above them, they can be fired on short notice.
What would your version of fair balance of power look like?
This website is literally a place for capitalists (mostly temporarily embarrassed) to brag about how they're going to cheat and scam their way to the top.
So race to the bottom where you work more and make less per unit of work? Great deal, splendid idea.
The only winners here are CEOs/founders who make obscene money, liquidate/retire early while suckers are on the infinite treadmill justifying their existence.
You're describing all technological advances.
I can harvest crops by hand, but a machine can do it 100x faster. I'm not paid 100x though so it's a bad deal - destroy the machines.
The real advantage now that code is cheaper to write is who can imagine the best product.
I'm happy to compete at that level.
Thank you for proving the point for me. Productivity goes up, 70% of people are fired, and salary barely grows. Amazing deal.
https://ers.usda.gov/sites/default/files/_laserfiche/publica...
> The real advantage now that code is cheaper to write is who can imagine the best product. I'm happy to compete at that level.
I’d probably be as happy as you, if I had such a big ego.
Do you like working 8 hours a day instead of 12? 5 days a week instead of 7? You can thank organized labor.
Those same jobs are nowhere to be found in the places where organized labor started.
It moved outside the US.
because of a concerted effort to break labor power… you’re making my point.
So your solution is to temporarily make big $$$ until the capitalists move overseas?
What is wrong with the status quo? A competitive market that keeps labor here.
Maybe society shouldn't be optimising for that.
Over the past month, with vibe-coding, I've:
* Made Termux accessible enough for me to use.
* Made an MUD client for Emacs.
* Gotten Emacs and Emacspeak working on Termux.
* Gotten XFCE to run with Orca and AT-SPI communicating to make the desktop environment accessible.
None of this would have happened without AI. Of course, it's only useful for the few people who are blind, use Android, and love Linux and Emacs and such. But it's improved my life a ton. I can do actual work on my phone. I've got Org-mode, calendar, Org-journal, desktop Chromium, etc., all on my phone. And if AI dies tomorrow, I'll still have it. The code is all there for me to learn from, tweak, and update.
I just use one agent, Codex. I don't do the agent swarms yet.
I would learn more about air combat by listening to a 12-minute conversation with a jet fighter pilot than I would from a 3-day seminar by air force journalists.
Tell me what the LLM impact is on your work, given that your work is not writing about AI.
Or, if one wishes for a more explicit noise filter: don't tell me what AI can do. Show me what you shipped with it that isn't about AI.
> Show me what you shipped with it that isn’t about AI.
From this weekend: https://github.com/simonw/sqlite-history-json and https://github.com/datasette/datasette-sqlite-history-json
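For anyone curious, the underlying pattern - triggers that snapshot each changed row into a history table as JSON - is roughly this. A sketch of the general idea only, not the actual schema or code of those plugins (needs SQLite with the built-in JSON functions, which recent Python builds include):

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE docs (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE docs_history (
        doc_id INTEGER,
        changed_at TEXT DEFAULT CURRENT_TIMESTAMP,
        old_row TEXT  -- JSON snapshot of the row before the change
    );
    CREATE TRIGGER docs_track AFTER UPDATE ON docs BEGIN
        INSERT INTO docs_history (doc_id, old_row)
        VALUES (old.id, json_object('id', old.id, 'title', old.title));
    END;
    """)
    db.execute("INSERT INTO docs (title) VALUES ('draft')")
    db.execute("UPDATE docs SET title = 'final' WHERE id = 1")
    print(db.execute("SELECT old_row FROM docs_history").fetchall())
    # -> [('{"id":1,"title":"draft"}',)]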
Impressive indeed. I was not aware. The comments below are all flagged.
A couple of historical notes that come to mind.
When washing machines were introduced, the number of hours spent on the chore of laundry did not necessarily decrease until 40 years after their introduction.
When project management software was introduced, it made the task of managing project tasks easier. One could create an order of magnitude (or more) of detailed plans in the same amount of time - poorly used, this decreased the odds of project success by eating up everyone's time. And the software itself has not moved the needle on the project success factors of completing within the planned budget, time, and resources.
Had a similar experience recently: set up Claude Code, wrote plans, CLAUDE.md, etc. The plan was to end up with a nice-looking Hugo/Bootstrap website.
Long story short, it was ugly and didn't really work as I wanted. So I'm learning Hugo myself now... The whole experience was kind of frustrating tbh.
When I finally settled in and did some hours of manual work, I felt much better for it. I did benefit from my planning with Claude, though...
I find it's most valuable doing stuff I already know how to do, but would take me a long time.
You can watch what it's doing and eyeball the code to know if it's going in the right direction, and then steer it towards what you want.
If you have it doing something you have no clue about, then it's a total gamble.
It sort of sounds like you expected a one-shot miracle - or am I misreading? Try this: start from scratch and use openspec explore - talk to it about what you are going for, tell it to call the built-in frontend-design skill and install a hugo skill first (https://skills.sh/?q=hugo), plus context7 for docs and playwright to check its work via screenshots etc. Also, optionally share any websites with the aesthetic/layout you are going for. Be very descriptive. And ask it to teach you about Hugo as it goes and explain its decisions; I've learned a lot this way.
Now that you're getting accustomed to Hugo, I wonder whether the way you plan and prompt now will produce better results or not.
I probably will. I use AI extensively, but mostly when I can't remember tedious syntax or suspect something can be done in a better way. And that works well for me... If I go too far towards vibe coding, the fun is sucked away for me.
Thought so. I find that too much vibe coding (less spec) makes the AI perform worse, even with Opus 4.6. Pseudocode is where they perform best; having good lower-level specs usually produces a good result too.
> I've had conversations with people recently who are losing sleep because they're finding building yet another feature with "just one more prompt" irresistible.
This is actually a really good point that I have kind of noticed when using AI for side projects, i.e. on my own time. The allure of thinking: "Oh, I wonder how it will perform with this feature request if I give it this amount of info."
Can't say I would put off sleep for it but I get the sentiment for sure.
What kills me personally is that I'm constantly 80% there, but the remaining 20% can be just insurmountable. It's really like gambling: Just one more round and it'll be useful, OK, not quite, just one more, for hours.
Gambling is a great analogy: my luck's going to turn around, just one more prompt.
Do you mean in terms of adding one more feature or in terms of how a feature you're adding almost works but not quite right?
I find it a lot more challenging to cut my losses on the latter when it's on a good run (and often even when I know I could just write the thing by hand), especially because there's as much if not more intrigue about whether the tool can accomplish it or not. These are the moments where my mind has drifted to thinking about it exactly the way you describe here.
If you think it's getting 80% right on its own, you're a victim of Anthropic and OpenAI's propaganda.
No, I kind of see this too, but the 80% is very much the simpler stuff. AI genuinely saves me some time. But I always notice that if I try to "finish" a relatively complex task that's a bit unique in some regard - when a bit more complex, maybe slightly domain-related work is necessary - I start prompting and prompting and banging my head against the terminal window to make it understand the issue, but somehow it still doesn't turn out well at all, and I end up throwing out most of the work done from that point on.
Sometimes it looks like some of that comes from AI generally being very very sure of its initial idea "The issue is actually very simple, it's because..." and then it starts running around in circles once it tries and fails, you can pull it out with a bit more prompting, but it's tough. The thing is, it is sometimes actually right, from the very beginning, but if it isn't...
This is just my own perspective after working with these agents for some time, I've definitely heard of people having different experiences.
And let's get real: AI companies will not be satisfied with you paying $20 or even $200 month if you can actually develop your product in a few days with their agents. They are either going to charge a lot more or string you along chasing that 20%.
That's an interesting business model, actually: "Oh hey there, I see you've almost finished your project and are ready to launch - watch these adverts and participate in this survey to get the last 10% of your app completed."
I'm also coming to the conclusion that LLMs have basically the same value as when I tried them out with GPT-3: good for semantic search and debugging, bad for generation, since you constantly have to check and correct it, and the parts you trust it to get "right" are often the ones that bite you afterwards - or, if right, introduce gaps in your own knowledge that slowly make you inefficient in your "generation controller" role.
I've been saying since 2024 that these things are not getting meaningfully better at all.
I think these companies have been manipulating social media sentiment for years in order to cover up their bunk product.
Yes, and I’m convinced AI companies either pay or brainwash these people to put out blog posts like this to spread the idea that it actually works. It doesn’t.
As someone who prefers to do one task at a time, using AI tools makes me feel productive and unproductive at the same time: productive because I am able to finish my tasks faster, unproductive because I feel like I am wasting my time while waiting for the AI to respond.
Productivity aside, what I notice most at work is that our offshore resources are submitting basically 100% AI-generated work. Beyond the code itself, ever since we rolled out Copilot their English has improved immensely. I have to wonder what the point is of keeping them on if they're just sub-par prompting all their work.
When developers who were comfortable as individual contributors start using agentic AI they necessarily start to work somewhat as managers.
The workflow and responsibilities are very different. It can be a painful transition.
There has always been a strong undercurrent of developers feeling superior to managers and PMs, and now those developers are being forced to confront the reality of a manager's or PM's experience.
Work is changing, and the change is only going to accelerate.
I like working on my own projects, and where I found AI really shone was by having something there to bounce ideas off and get feedback.
That changes if you get it to write code for you. I tried vibe-coding an entire project once, and while I ended up with a pretty result that got some traction on Reddit, I didn't get any sense of accomplishment at all. It's kinda like doomscrolling in a way, it's hard to stop but it leaves you feeling empty.
People are a gas, and they expand to fill the space they're in. Tools that let you produce more work don't make people's lives easier; they just mean an individual needs to do more work using those tools. This is a disposition that most people have, and therefore it's unavoidable. AI is not exciting to me. I only need to use it so I don't fall behind my peers. Why would I ever be interested in that?
https://en.wikipedia.org/wiki/Parkinson%27s_law
"Word expands to fill the time available"
Bad typo there "work expands to fill the time available"
I've been saying this since ChatGPT first came out: AI enables the lazy to dig intellectual holes they cannot climb out of, while enabling those with active critical analysis and good secondary considerations to literally become the fabled 10x (or more) developer / knowledge worker. Which creates interesting scenarios as AI is being evaluated and adopted: the short-sighted are loudly declaring success - which will be short-term success - and bullying their work-peers into following their method. That method being intellectually lazy: letting the AI code for them, which they then verify with testing and believe they are done. Meanwhile, the quiet ones are figuring out how to eliminate the need for their coworkers at all. Managers are observing productivity growth, which falters with the loud ones but not with the quiet ones... AI is here to make the scientifically minded excel, and the shortcut takers can footgun themselves right out of there.
This is a cope. Managers are not magicians who will finally understand who is good and who is just vibe-coding demos. If anything, it's going to become even harder for managers to tell the difference. And it's more likely that the managers themselves are at the same risk, because without a clique of software engineers they would have nothing to manage.
Surely managers will finally recognize the contributions of the quiet ones! I cannot believe what I read here.
We just saw the productivity growth in the vibe coded GitHub outages.
Don't bet on it. Those managers are the previously loud short sighted thinkers that finagled their way out of coding. Those loud ones are their buddies.
This intensification is really a symptom of the race to the bottom. It only feels 'exhausting' for people who don't want to lose their job or business to an agent; for everyone else, the AI is just an excuse to do less.
The way you avoid losing your job to an AI agent is not 'intensifying' its use, but learning to drive it better. Much of what people are calling 'intensification' here is really just babysitting and micromanaging their agent because it's perpetually running on vibes and fumes instead of being driven effectively with very long, clearly written (with AI assistance!) prompts and design documents. Writing clear design documentation for your agent is a light, sustainable and even enjoyable activity; babysitting its mistakes is not.
I'm sorry but if you're losing your job to this shit you were too dumb to make it in the first place.
Edit: Not to mention, this is what you get for not unionizing earlier. Get good or get cut.
That’s a simplistic take. Displacement isn't about being "dumb", it's about unit economics. A company will replace a brilliant person with a good enough AI if it costs 10% of the salary. The "smart" people who are keeping their jobs are exactly the ones Simon is talking about. They’re being "forced" to work more to prove their value against a machine that never sleeps. That’s the intensification.
Start a union or stop complaining
I’m convinced all these blog posts on AI productivity are some kind of high level propaganda trying to create a Folie à deux that AI is much better than it is.
The worst part is that it’s so convincing: not only does everyone who can’t make it work feel gaslit about it, but some people even pretend that it works for them so they don’t feel like they’re missing out.
I remember the last time this happened and people were convinced (for like 2 years) that a gif of an ape could somehow be owned and was worth millions of dollars.
It certainly feels like a lot of the crypto bros have moved on to being AI bros.
I'm chalking my poor experience to being too cheap to pay $200 a month for Claude Max 20x so I can run the multiple agents that need to supervise each other.
Tell that to my family with whom I have been spending a lot more time recently having benefited a lot from increased productivity.
If increased productivity means your output is the same but now you produce it quicker, sure! But let's be honest, this is rarely the case, and the "free" time is now filled with more work.
Doesn't have to be that way, it's about managers being realistic and not pushing people too far
Managers don’t even need to push anything. FOMO does all the work.
Overheard a couple of conversations in the office about how one IC spent all weekend setting up OpenClaw, while another was vibe coding some bullshit application.
I see hundreds of crazy people in our company Slack just posting/reposting Twitter hype threads and coming up with ridiculous ideas for how to "optimize" workflows with AI.
Once this becomes the baseline, you’ll be seen as the slow one, because you’re not doing 5x work for the same pay.
Don't worry about these people.
These are internet cult victims.
You do it to yourself, you do, and that's why it really hurts.
> Importantly, the company did not mandate AI use (though it did offer enterprise subscriptions to commercially available AI tools). On their own initiative workers did more because AI made “doing more” feel possible, accessible, and in many cases intrinsically rewarding.
Computer languages were the lathe for shaping the machines to make them do whatever we want, AI is a CNC. Another abstraction layer for making machines do whatever we want them to do.
AI is one of those early-2000s SUVs that gets 8 miles to the gallon and has a TV screen in the back of every seat.
It's about presenting externally as a "bad ass" while:
A) Constantly drowning out every moment of your life with low quality background noise.
B) Aggressively polluting the environment and depleting our natural resources for no reason beyond pure arrogance.
I feel that the popularization of bloated UI "frameworks", like React and Electron, coupled with the inefficiency tolerated in the "JS ecosystem" have been precursors to this dynamic.
It seems perfectly fitting to me that Anthropic is using a wildly overcomplicated React renderer in their TUI.
React devs are the perfect use case for "AI" dev tools. It is perfectly tolerated for them to write highly inefficient code, and these frameworks are both:
A) Arcane and inconsistently documented
B) Heavily overrepresented in open-source
Meaning there are real gains to be had from querying these "AI" tools for framework development.
In my opinion, the shared problem is the acceptance of egregious inefficiency.
I don't disagree with the concept of AI being another abstraction layer (maybe), but I feel that's an insult to a CNC machine, which is a very precise and accurate tool.
LLMs are quite accurate for programming; these days they almost always produce code that compiles without errors, and errors are almost always fixable by feeding the error back into the LLM. I would say this is extremely precise text generation, much better than most humans.
Just like with a CNC, though, you need to feed it the correct instructions. It's still on you to make the machined output do the expected thing. CNCs are also not perfect, and their operators need to know the intricacies of machining.
> LLMs are quite accurate for programming; these days they almost always produce code that compiles without errors, and errors are almost always fixable by feeding the error back into the LLM.
What domains do you work in? This description does not match my experience whatsoever.
I've found that LLMs are quite accurate given proper pseudocode and when not using libraries. For business logic or vague instructions, they're bad.
Still not as accurate as a CNC machine - maybe an early-model typewriter?
I'm primarily into mobile apps these days, but using LLMs I'm able to write software in languages I don't know, with tech I don't understand well (like Bluetooth).
What did you try to do where the LLM failed you?
The guy is a troll. LLM coding posts are flame bait now.
I'm not trolling. You're peeved that you don't have a rebuttal.
Who said I was peeved? You are trolling. We all have better uses of our time than this.
> Just like with a CNC, though, you need to feed it the correct instructions.
A CNC relies on precise formal languages like G-code, whereas an LLM relies on imprecise natural language.
100% disagree. CNC is a precision machine while AI is the literal opposite of precision.
Tell them again.
My two cents: this is part of the learning curve. With collective experience, this type of work will become better understood, shared, and explored. It is intense in the beginning because we are still discovering how to work with it. I think the other part is that this is a non-deterministic tool, which does increase cognitive load.
Is a blog reposting other content worth its own post?
Previous discussion of the original article: https://news.ycombinator.com/item?id=46945755
In this case I don't think so. My post here even links back to the original Hacker News conversation from the "via" link.
the circular economy of AI
I have found that attending to one task keeps me going for longer.
I prompt and sit there. Scrolling makes it worse. It's a good mental practice to just stay calm and watch the AI work.
Isn't the point of AI that you can scroll endlessly while something else works for you?
If you're going to stay single-minded, why wouldn't you just write the code yourself? You're going to have to double check and rewrite the AI's shitty work anyway
Kinda tangential, but the headline gave me a laugh - it even has the em dash.
> the productivity boost these things can provide is exhausting.
What I personally find exhausting is Simon¹ constantly discovering the obvious. Time after time after time it’s just “insights” every person who smoked one blunt in college has arrived at.
Stop for a minute! You don’t have to keep churning out multiple blog posts a day, every day. Just stop and reflect. Sit back in your chair and let your mind rest. When a thought comes to you, let it go. Keep doing that until you regain your focus and learn to distinguish what matters from what is shiny.
Yes, of course, you’re doing too much and draining yourself. Of course your “productivity” doesn’t result in extra time but is just filled with more of the same, that’s been true for longer than you’ve been alive. It’s a variation of Parkinson’s law.
https://en.wikipedia.org/wiki/Parkinson%27s_law
¹ And others, but Simon is particularly prevalent on HN, so I bump into these more often.
I find Simon’s blog and TILs to be some of the highest signal to noise content on the internet. I’ve picked up an incredible number of useful tips and tricks. Many of them I would not have found if he did not share things as soon as he discovered something that felt “obvious.” I also love how he documents small snippets and gists of code that are easy to link to and cross-reference. Wish I did more of that myself.
> Many of them I would not have found if he did not share things as soon as he discovered something
You don’t know that. For all you know, your life would’ve been richer if you’d read those thoughts after they’d been left to stew for longer. For all you know, if that had happened you would’ve said “most” instead of “many”. Or maybe not; no one can say for sure until it happens.
> that felt “obvious.”
It’s not about feeling obvious. There is value in exploring obvious concepts when you’ve thought about them for longer, maybe researched what others before you had to say on the matter, and you can highlight and improve on all of that. Everyone benefits from a thoughtful approach.
> I’ve picked up an incredible number of useful tips and tricks. (…) I also love how he documents small snippets and gists of code that are easy to link to and cross-reference.
That is (I think clearly, but I may be wrong) not what I’m talking about. A code snippet is very far removed in meaning from a human insight. What I wrote doesn’t just concern Simon’s readers, but Simon as a person. Being constantly “on” isn’t good, it leads to exhaustion (as reported), which leads to burnout. While my first paragraph in the previous comment was a criticism, it was merely an introduction to the rest of the post which was given in empathy. I want us all to do and be better.
> There is value in exploring obvious concepts when you’ve thought about them for longer, maybe researched what others before you had to say on the matter, and you can highlight and improve on all of that.
That's exactly what I try to do.
I wrote more about my approach to that here: https://simonwillison.net/2024/Dec/22/link-blog/#trying-to-a...
Respectfully, I think you are wholly missing the point. Passing a link along and adding commentary to what other people have written is just social media; it is not at all the same thing as reflecting on a concept thoroughly. We’re discussing two different concepts.
My blog consists of different types of content.
I have a link blog which is links plus commentary. Each post takes 10-30 minutes to write. They're exactly like social media, though I try to add something new rather than just broadcast other people's content. https://simonwillison.net/blogmarks/
I collect quotations, which are the quickest form of content, probably just two minutes each. https://simonwillison.net/quotations/
I recently added "notes" which are effectively my link blog without a link. Very social media! I use those for content that doesn't deserve more than a couple of paragraphs: https://simonwillison.net/notes/
And then there are "entries". That's my long-form content, each taking one to several hours (or occasionally more, e.g. my annual LLM roundups). Those are the pieces of long-form writing where I aim to "reflect on a concept thoroughly": https://simonwillison.net/entries/
They have established themselves as a reliable communicator of the technology, they are read far and wide, that means they are in a great position to influence the industry-wide tone, and I'm personally glad they are bringing light to this issue. If it upsets you that someone else wrote about something you understood, perhaps consider starting a blog of your own.
Writing about "the obvious" is a useful service. Often people doubt what their own experience is telling them until someone else helps confirm their suspicions and put them into words.
Seems you wrote this at the same time I was responding to someone else. Since I addressed that point in the other reply, I link there and quote the relevant section.
https://news.ycombinator.com/item?id=46955703#46958713
> There is value in exploring obvious concepts when you’ve thought about them for longer, maybe researched what others before you had to say on the matter, and you can highlight and improve on all of that. Everyone benefits from a thoughtful approach.
I’m not saying “don’t share the obvious”, because what is obvious to one person won’t be for someone else. What I am advocating for is thinking more before doing so. In your posts I have repeatedly seen you advocate for opposing ideas at different (but not too distant) points in time. Often you also share a half-baked thought which only later gets the nuance it requires.
More often than not, it’s clear the thoughts should have been stewed for longer to develop into better, more powerful and cohesive ideas. Furthermore, that approach will truly give you back time and relaxation. I take no pleasure in you being exhausted, that is a disservice to everyone.
> In your posts I have repeatedly seen you advocate for opposing ideas at different (but not too distant) points in time.
One of my core beliefs is that "two things can be true at the same time". I write about opposing ideas because they have their own merits.
I believe that most of the criticisms of generative AI are genuine problems. I also believe that generative AI provides incredible value to people who learn how to use it effectively.
I like to think I'm consistent about most of the topics I write about though. Got any examples that stood out to you of my inconsistency?
> One of my core beliefs is that "two things can be true at the same time".
Which is, of course, true in some cases and false in others. But again, not what I’m talking about.
> Got any examples that stood out to you of my inconsistency?
Sorry, I don’t. You publish too often and obviously I’m not going to trawl through a sea of posts to find specific examples. I’m not trying to attack you. Again, my initial post was written in empathy; you’re of course free to take it in earnest and reflect on it or ignore it.
Also, I haven’t called you inconsistent. You’re using that word. I’m not saying you’re constantly flip-flopping or anything like that, and it’s not inconsistent to change one’s mind or evolve one’s ideas.
It feels like you’re doing in these comments exactly what I have just described: going in too fast with the replies, without really thinking them through, without pausing to understand what the argument is. It’s difficult to have a proper, honest conversation if I’m trying to be deliberate towards you but you’re being solely reactive. That is, frankly, exhausting, and that’s precisely what I’m advocating against.
OK, I think I get it.
Your primary argument here is that it's better to sit with ideas for a while before writing about them.
My counter-argument is that's what I do... for my long-form writing (aka "entries"). My link blog is faster reactions and has different standards - while I try to add value to everything I write there, it's still a high volume of content where my goal is to be useful, accurate, and interesting, but not necessarily deep and thoughtful.
And yeah, you're absolutely right that the speed at which I comment here is that same thing again. I treat comments like they were an in-person conversation. They're how I flesh out ideas.
I wrote about my philosophy around blogging in one of my long-form pieces a few years ago: https://simonwillison.net/2022/Nov/6/what-to-blog-about/
> I’ve definitely felt the self-imposed pressure to only write something if it’s new, and unique, and feels like it’s never been said before. This is a mental trap that does nothing but hold you back.
That's why I like having different content types - links and quotes and notes and TILs - that reduce the pressure to only publish if I have something deep, thoughtful and unique to say.
No his primary argument is that you are a fraudulent internet spammer masquerading as a software expert.
The opposite is also true. People often follow people off of a (figurative) cliff because that's what everyone is doing. We have copious toxic online communities to show for that. Most of the conversations around AI are falling into that type of cultish aspect. Look no further than YouTube to find how many born-again-AI-zealots emerged with ClawdBot/MoltBot/OpenClaw. It's just not as obvious in the blogosphere. The thing that is obvious is the constant "findings" that are nothing more than opinions. There's no historical evidence you won't change your mind in 10 minutes. And that's why I feel, as I read them, that these types of blog posts are foundationally built in sand.
> You don’t have to keep churning out multiple blog posts a day, every day.
The way Simon offers to send you less content if you sign up for their paid newsletter always made me suspicious that the goal could be to overwhelm on purpose.
You pay for the filter before FOMO sets in.
Haha, worst business model ever. I'm going to write things daily so people will pay me to write less!
> You don’t have to keep churning out multiple blog posts a day, every day.
How do you know that? You don't think he's being paid for all this marketing work?
I'm paid by my GitHub sponsors, who get a monthly summary of what I've been writing about on the basis that I don't want to put a paywall on my content but I'm happy for people to pay me to send them less stuff.
I also make ~$600/month from the ads on my site - run by EthicalAds.
I don't take payment to write about anything. That goes against my principles. It would also be illegal in the USA (FTC rules) if I didn't disclose it - and most importantly, it would damage my credibility as a writer, which is the thing I value most.
The big potential money maker here is private consulting based on the expertise (and credibility) I've developed and demonstrated over time. I should do more of that!
I have a set of disclosures here: https://simonwillison.net/about/#disclosures
Accusing people of being paid shills, from a new account. Ironic.
Simon is a new form of troll, and you hit the nail on the head: soapboxing the obvious, all in the name of AI. Just like the OpenClaw article that hit the FP yesterday, these types of folks are either doing this for marketing or they're genuinely elated by the mediocre. Has Simon actually produced anything novel or compelling? His blog posts surely aren't - so if that's any indication of his work output, I wouldn't be surprised if the answer is a hard no.
And who wants to be working on 3 projects simultaneously? This is the "multitasking" agenda from generations ago with a new twist: now I just manage prompts and agents, bro! But the reality is: you think you're doing more than you actually are. Maybe Simon is just pandering to his inevitable AGI overlords, hoping he will still be useful in the coming Altmania revolution? No idea. Either way, half the time I read his posts (only because they're posted here and I'm excited for his new discoveries) I can't stomach the drivel.
> Has Simon actually produced anything novel or compelling?
Here are some of my recent posts which I self-evaluate as "novel and compelling".
- Running Pydantic’s Monty Rust sandboxed Python subset in WebAssembly https://simonwillison.net/2026/Feb/6/pydantic-monty/ - demonstrating how easy and useful it is to be able to turn Rust code into WASM that can run independently or be used inside a Python wheel for Pyodide in order to provide interactive browser demos of Rust libraries.
- Distributing Go binaries like sqlite-scanner through PyPI using go-to-wheel https://simonwillison.net/2026/Feb/4/distributing-go-binarie... - I think my go-to-wheel utility is really cool, and distributing Go CLIs through PyPI is a neat trick.
- ChatGPT Containers can now run bash, pip/npm install packages, and download files https://simonwillison.net/2026/Jan/26/chatgpt-containers/ - in which I reverse engineered and documented a massive new feature of ChatGPT that OpenAI hadn't announced or documented anywhere
I remain very proud of my current open source projects too - https://datasette.io and https://llm.datasette.io and https://sqlite-utils.datasette.io and a whole lot more: https://github.com/simonw/simonw/blob/main/releases.md
Are you ready to say none of that is "novel or compelling", in good faith?
If we revisit these posts in a week, a month, and then a year, my question is: were they useful? Are others still building off of them?
My answer right now is: you can't answer that question yet, and the fact that you are looking for immediate validation shows you're just building random things. Which is great, if that's what you want to do. But is it truly novel or compelling? Given that you just move on to the next thing, there seems to be a lack of direction, and in that regard I would say: no.
Just because you're doing more doesn't mean anything unless it's truly useful for you or others. I just don't think that's the case here. It's a new form of move fast and break things. And while that can have net positives, we are also very aware it has many net negatives.
My major open source projects get a lot of use. I wouldn't classify them as "just random things".
I don't think you're familiar with my work at all.
edited: quote misattributed
That was me quoting the Harvard Business Review article.
apologies, comment withdrawn
It’s called the market. If you can compete while providing better life balance, go ahead and compete with those bad companies
It's called the market. If you can compete while not employing eight year olds on your assembly lines and dumping carcinogens in the river, go ahead and compete with those bad companies.
It's called the market. Get back to work, slave.
> It’s called the market. If you can compete while providing better life balance, go ahead and compete with those bad companies
With friends like you, who needs enemies? Imagine if we said that about everything. Go ahead and start a garment factory with unlocked exit doors and see if you can compete against these bad garment companies. Go ahead and start your own coal mines that pay in real money and not funny money only redeemable at the company store. Go ahead and start your own factory and guarantee eight hours work, eight hours sleep, eight hours recreation. It is called a market, BRO‽
> help avoid burnout
Yeah, good luck with that.
Corporations have tried to reduce employee burnout exactly zero times.
That’s something that starts at the top. The execs tend to be “type A++” personalities, who run close to burnout, and don’t really have much empathy for employees in the same condition.
But they also don’t believe that employees should have the same level of reward, for their stress.
For myself, I know that I am not “getting maximum result” from using LLMs, but I feel as if they have been a real force multiplier, in my work, and don’t feel burnt out, at all.
I just built my k8s homelab with AI.
It’s insane how productive I am.
I used to take "breaks" to look up the specific keywords or values to enter while crafting a YAML file.
Now the AI essentially lets me skip all of that.
Comments on the original article: https://news.ycombinator.com/item?id=46945755
> I've had conversations with people recently who are losing sleep because they're finding building yet another feature with "just one more prompt" irresistible.
alpha sigma grindset
It's like the invention of the power loom, but for knowledge workers. Might be interesting to look at the history of industrialisation and the reactions to it.
Discussed https://news.ycombinator.com/item?id=46945755
40 years ago, when I was a history major in college, one of my brilliant professors gave us a book to read called "the myth of domesticity".
In the book, the researcher explains that when washing machines were invented, women faced a whole new expectation of clean clothes all the time, because washing clothes was much less of a labor. And statistics pointed out that women were actually washing clothes more often, and doing more work, after the washing machine was invented than before.
This happens with any technology. AI is no different.
Maybe it would be much better to just link to the original article [0] instead of somewhere else, so readers get the full context.
Also, this post should link to the original source.
As per the submission guidelines [1]:
”Please submit the original source. If a post reports on something found on another site, submit the latter.”
[0] https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies...
[1] https://news.ycombinator.com/newsguidelines.html
It is in the title if you open the page.
> I've had conversations with people recently who are losing sleep because they're finding building yet another feature with "just one more prompt" irresistible.
Literal work junkies.
And what’s the point? If you’re working on your own project, then “just one more feature, bro” isn’t going to make the next Minecraft/Photopea/Stardew Valley/name your one-man wonder. If you’re working for someone else, then you’re a double fool, because you’re doing the work of two people for the pay of one.
There's a word for that mindset: https://en.wikipedia.org/wiki/Karoshi
AI is described as a partner to help.
In reality, it's a partner who helps with the dishes by bringing home three neighbours' worth of dirty dishes, then says, "You're doing a great job with how fast you're scrubbing those dishes."
An older fairytale goes this way... humans invented tools to transport them from place to place faster than ever, and also communication systems that transmit messages faster than ever. Now everything can be done in less time, and they can enjoy the rest. Right?
It does reduce some work. Obvious examples would be copy-writing and fashion modelling.
intensification = productivity for me.
this matches my experience
it's good that people so quickly see it as impulsive and addicting, as opposed to the slow creep of doomscrolling and algorithm feeds
Hopefully it will be like an Ebola virus, so that everyone will see how deadly it is instead of like smoking where you die of cancer 40 years down the line.
I think I'll have a happier medium when I get additional inputs set up, such as talking to a CLI running my full code base on a VPS, but through my phone and AirPods, only when it needs help.
at least I won't be vegetating at a laptop, or shirking other possible responsibilities to get back to a laptop
I’ll never get the “I wish I could control it from my phone” types, or “I’ve been working while walking my dog, amazing”. Why would you want to subject yourself to work outside of working hours?
The beauty of a work laptop is that either I work or I don’t. Laptop open - work time, laptop closed - goodbye, see you on Monday.
Frankly, it seems more like the Crack epidemic to me.
Save some for the rest of us, king.
That's the bane of all productivity increasing tools, any time you free up immediately gets consumed by more work.
People keep making the same naive assumption: that the total amount of work stays constant when you mess with the cost of that work. The reality is that if you make something cheaper, people will want more of it. And it adds up to way more than what was asked for before.
That's why I'm not worried about losing my job. The whole notion is based on a closed world assumption, which is always a bad assumption.
If you look at the history of computers and software engineering - compilers, CI/CD, frameworks/modules, functional and OO programming paradigms, type inference, and so on - there's something new every few years. Every time we make something easier and cheaper, demand goes up and the number of programmers increases.
And every time you have people being afraid to lose their jobs. Sometimes jobs indeed disappear because that particular job ceases to exist because technique X got replaced with technique Y. But mostly people just keep their jobs and learn the new thing on the job. Or they change jobs and skill up as they go. People generally only lose their jobs when companies fail or start shrinking. It's more tied to economical cycles than to technology. And some companies just fail to adapt. AI is going to be similar. Lots of companies are flirting with it but aren't taking it seriously yet. Adoption cycles are always longer than people seem to think.
AI prompting is just a form of higher-level programming, and being able to program is a non-optional skill for prompting effectively. I'd use the word metaprogramming, but of course that's one of those improvements we already had.
> That's why I'm not worried about losing my job.
Right. Because the demand for hand-written code will be high enough that you will keep your job?
Or did you mean that you expect to lose the current job (writing software) and have a new job (directing an agent to write software)?
You really expect to get paid the same doing a low-skill job (directing an agent) as you did the high-paid one (writing software)?
After all, your examples
> If you look at the history of computers and software engineering - compilers, CI/CD, frameworks/modules, functional and OO programming paradigms, type inference, and so on - there's something new every few years. Every time we make something easier and cheaper, demand goes up and the number of programmers increases.
Were all, with the exception of the invention of high-level languages, an increase in the skill requirements for practitioners, not a decrease.
> That's why I'm not worried about losing my job. The whole notion is based on a closed world assumption, which is always a bad assumption.
You might be right, but some of us haven't quite warmed to the idea that our new job description will be something like "high-level planner and bot-wrangler," with nary a line of code in sight.