"Coding is like taking a lump of clay and slowly working it into the thing you want it to become. It is this process, and your intimacy with the medium and the materials you’re shaping, that teaches you about what you’re making – its qualities, tolerances, and limits – even as you make it. You know the least about what you’re making the moment before you actually start making it. That’s when you think you know what you want to make. The process, which is an iterative one, is what leads you towards understanding what you actually want to make, whether you were aware of it or not at the beginning. Design is not merely about solving problems; it’s about discovering what the right problem to solve is and then solving it. Too often we fail not because we didn’t solve a problem well but because we solved the wrong problem.
When you skip the process of creation you trade the thing you could have learned to make for the simulacrum of the thing you thought you wanted to make. Being handed a baked and glazed artefact that approximates what you thought you wanted to make removes the very human element of discovery and learning that’s at the heart of any authentic practice of creation. Where you know everything about the thing you shaped into being from when it was just a lump of clay, you know nothing about the image of the thing you received for your penny from the vending machine."
And when programming with agentic tools, you need to actively push for the idea to not regress to the most obvious/average version. The amount of effort you need to expend on pushing the idea that deviates from the 'norm' (because it's novel), is actually comparable to the effort it takes to type something out by hand. Just two completely different types of effort.
There's an upside to this sort of effort too, though. You actually need to make it crystal clear what your idea is and what it is not, because of the continuous pushback from the agentic programming tool. The moment you stop pushing back, is the moment the LLM rolls over your project and more than likely destroys what was unique about your thing in the first place.
yes, this is maybe it's my preference to jump directly to coding, instead of canva to draw the gui and stuff.
i would not know what to draw because the involvemt is not so deep ...or something
I dunno, when you've made about 10,000 clay pots its kinda nice to skip to the end result, you're probably not going to learn a ton with clay pot #10,001. You can probably come up with some pretty interesting ideas for what you want the end result to look like from the onset.
I find myself being able to reach for the things that my normal pragmatist code monkey self would consider out of scope - these are often not user facing things at all but things that absolutely improve code maintenance, scalability, testing/testability, or reduce side effects.
I think just as hard, I type less. I specify precisely and I review.
If anything, all we've changed is working at a higher level. The product is the same.
But these people just keep mixing things up like "wow I got a ferrari now, watch it fly off the road!"
Yeah so you got a tools upgrade; it's faster, it's more powerful. Keep it on the road or give up driving!
We went from auto completing keywords, to auto completing symbols, to auto completing statements, to auto completing paragraphs, to auto completing entire features.
Because it happened so fast, people feel the need to rename programming every week. We either vibe coders now, or agentic coders or ... or just programmers hey. You know why? I write in C, I get machine code, I didn't write the machine code! It was all an abstraction!
Oh but it's not the same you say, it changes every time you ask. Yes, for now, it's still wonky and janky in places. It's just a stepping stone.
Just chill, it's programming. The tools just got even better.
You can still jump on a camel and cross the desert in 3 days. Have at it, you risk dying, but enjoy. Or you can just rent a helicopter and fly over the damn thing in a few hours. Your choice. Don't let people tell you it isn't travelling.
We're all Linus Torvalds now. We review, we merge, we send back. And if you had no idea what you were doing before, you'll still have no idea what you're doing today. You just fat-finger less typos today than ever before.
I think I understand what the author is trying to say.
We miss thinking "hard" about the small details. Maybe "hard" isn't the right adjective, but we all know the process of coding isn't just typing stuff while the mind wanders. We keep thinking about the code we're typing and the interactions between the new code and the existing stuff, and keep thinking about potential bugs and issues. (This may or may not be "hard".)
And this kind of thinking is totally different from what Linus Torvalds has to think about when reviewing a huge patch from a fellow maintainer. Linus' work is probably "harder", but it's a different kind of thinking.
You're totally right it's just tools improving. When compilers improved most people were happy, but some people who loved hand crafting asm kept doing it as a hobby. But in 99+% cases hand crafting asm is a detriment to the project even if it's fun, so if you love writing asm yourself you're either out of work, or you grudgingly accept that you might have to write Java to get paid. I think there's a place for lamenting this kind of situation.
I get your point that wetware stills matter, but I think it's a bit much to contend that more than a handful of people (or everyone) is on the level of Linus Torvalds now that we have LLMs.
I should have been clearer. It was a pun, a take, a joke. I was referring to his day-to-day activity now, where he merges code, doesn't write hardly any code for the linux kernel.
I didn't imply most of use can do half the thing he's done. That's not right.
> his day-to-day activity now, where he merges code
But even then...don't you think his insight into and ability to verify a PR far exceeds that of most devs (LLM or not)? Most of us cannot (reasonably) aspire to be like him.
Like I said if you didn't know what you were doing before, you won't know what you're doing with today.
Agentic coding in general only amplify your ability (or disability).
You can totally learn how to build an OS and invest 5 years of your life doing so. The first version of Linux I'm sure was pretty shoddy. Same for a SCM.
Like I said that's temporary. It's janky and wonky but it's a stepping stone.
Just look at image generation. Actually factually look at it. We went from horror colours vomit with eyes all over, to 6 fingers humans, to pretty darn good now.
I am thinking harder than ever due to vibe coding. How will markets shift? What will be in demand? How will the consumer side adapt? How do we position? Predicting the future is a hard problem... The thinker in me is working relentlessly since December. At least for me the thinker loves an existential crisis like no other.
I haven't reduced my thinking! Today I asked AI to debug an issue. It came with a solution that it was clearly correct, but it didn't explain why the code was in that state. I kept steering AI (which just wanted to fix) toward figuring out the why and it digged through git and github issue at some point,in a very cool way.
And finally it pulled out something that made sense. It was defensive programming introduced to fix an issue somewhere else, which was also in turn fixed, so useless.
At that point an idea popped in my mind and I decided to look for similar patterns in the codebase, related to the change, found 3. 1 was a non bug, two were latent bugs.
Shipped a fix plus 2 fixes for bugs yet to be discovered.
I'm using LLMs to code and I'm still thinking hard. I'm not doing it wrong: I think about design choices: risks, constraints, technical debt, alternatives, possibilities... I'm thinking as hard as I've ever done.
Yeah, but thinking with an LLM is different. The article says:
> By “thinking hard,” I mean encountering a specific, difficult problem and spending multiple days just sitting with it to overcome it.
The "thinking hard" I do with an LLM is more like management thinking. Its chaotic and full of conversations and context switches. Its tiring, sure. But I'm not spending multiple days contemplating a single idea.
The "thinking hard" I do over multiple days with a single problem is more like that of a scientist / mathematician. I find myself still thinking about my problem while I'm lying in bed that night. I'm contemplating it in the shower. I have little breakthroughs and setbacks, until I eventually crack it or give up.
I very much think its possible to use LLMs as a tool in this way. However a lot of folks are not. I see people, both personally and professionally, give it a problem and expect it to both design and implement a solution, then hold it as a gold standard.
I find the best uses, for at least my self, are smaller parts of my workflow where I'm not going to learn anything from doing it:
- build one to throw away: give me a quick prototype to get stakeholder feedback
- straightforward helper functions: I have the design and parameters planned, just need an implementation that I can review
- tab-completion code-gen
- If I want leads for looking into something (libraries, tools) and Googling isn't cutting it
I just changed employers recently in part due to this: dealing with someone that appears to now spend his time coercing LLM's to give the answers he wants, and becoming deaf to any contradictions. LLMs are very effective at amplifying the Reality Distortion Field for those that live in them. LLMs are replacing blog posts for this purpose.
I echo this sentiment. Even though I'm having Claude Code write 100% of the code for a personal project as an experiment, the need for thinking hard is very present.
In fact, since I don't need to do low-thinking tasks like writing boilerplate or repetitive tests, I find my thinking ratio is actually higher than when I write code normally.
I'm with you, thinking about architecture is generally still a big part of my mental effort. But for me most architectural problems are solve in short periods of thought and a lot of iteration. Maybe its an skill issue, but not now nor in the pre-LLM era I've been able to pre-solve all the architecture with pure thinking.
That said architectural problems have been also been less difficult, just for the simple fact that research and prototyping has become faster and cheaper.
I think it depends on the scope and level of solution I accept as “good”. I agree that often the thinking for the “next step” is too easy architecturally. But I still enjoy thinking about the global optimum or a “perfect system”, even it’s not immediately feasible, and can spend large amounts of time on this.
And then also there’s all the non-systems stuff - what is actually feasible, what’s most valuable etc. Less “fun”, but still lots of potential for thinking.
I guess my main point is there is still lots to think about even post-LLM, but the real challenge is making it as “fun” or as easily useful as it was pre-LLM.
I think local code architecture was a very easy domain for “optimality” that is actually tractable and the joy that comes with it, and LLMs are harmful to that, but I don’t think there’s nothing to replace it with.
Ya, they are programming languages after all. Language is really powerful when you really how to use it. Some of us are more comfortable with the natural variety, some of us are more comfy with code ¯\_(ツ)_/¯
It's certainly a different style of thinking hard. I used to really stress myself over coding - i.e. I would get frustrated that solving an issue would cause me to introduce some sort of hack or otherwise snowball into a huge refactor. Now I spend most of my time thinking about what cool new features I am going to build and not really stressing myself out too much.
Reading this comment and other similar comments there's definitely a difference between people.
Personally I agree and resonate a lot with the blog post, and I've always found designs of my programs to come sort of naturally. Usually the hard problems are the technical problems and then the design is figured out based on what's needed to control the program. I never had to think that hard about design.
Aptitude testing centers like Johnson O'Connor have tests for that. There are (relatively) huge differences between different people's thinking and problem solving styles. For some, creating an efficient process feels natural, while others need stability and redundancy. Programmers are by and large the latter.
I feel this too. I suspect its a byproduct of all the context switching I find myself doing when I'm using an LLM to help write software. Within a 10 minute window, I'll read code, debug a problem, prompt, discuss the design, test something, do some design work myself and so on.
When I'm just programming, I spend a lot more time working through a single idea, or a single function. Its much less tiring.
It wasn't until I read your comment that I was able to pinpoint why the mental exhaustion feels familiar. It's the same kind (though not degree) of exhaustion as formal methods / proofs.
Except without the reward of an intellectual high afterwards.
I use Claude Code a lot, and it always lets me know the moment I stopped thinking hard, because it will build something completely asinine. Garbage in, garbage out, as they say...
its how you use the tool... reminds me of that episode of simpsons when homer gets a gun lic... he goes from not using it at all, to using it a little, to using it without thinking about what hes doing and for ludicrous things...
thinking is tiring and life is complicated, the tool makes it easy to slip into bad habits and bad habits are hard to break even when you recognise its a bad habit.
Many people are too busy/lazy/self-unaware to evaluate their behaviour to recognise a bad habit.
there's no such thing as right or wrong , so the following isn't intended as any form of judgement or admonition , merely an observation that you are starting to sound like an llm
My observation: I've always had that "sound." I don't know or care much about what that implies. I will admit I'm now deliberately avoiding em dashs, whereas I was once an enthusiastic user of them.
Many people here might be in a similar situation to me, but I took an online masters program that allowed for continuing education following completion of the degree. This has become one of my hobbies; I can take classes at my own expense, not worry about my grades, and just enjoy learning. I can push myself as much as I want and since the classes are hard, just completing 1 assignment is enough to force me to "think". Just sharing my experience for people who might be looking for ways to challenge themselves intellectually.
I've found that it's often useful to spend the time thinking about the way I would architect the code (down to a fair level of minutia) before letting the agent have a go.
That way my 'thinker' is satiated and also challenged - Did the solution that my thinker came up with solve the problem better than the plan that the agent wrote?
Then either I acknowledge that the agent's solution was better, giving my thinker something to chew on for the next time; or my solution is better which gives the thinker a dopamine hit and gives me better code.
A lot of productive thinking happens when asleep, in the shower, in flow walking or cycling or rowing.
It's hard to rationalise this as billable time, but they pay for outcome even if they act like they pay for 9-5 and so if I'm thinking why I like a particular abstraction, or see analogies to another problem, or begin to construct dialogues with mysel(ves|f) about this, and it happens I'm scrubbing my back (or worse) I kind of "go with the flow" so to speak.
Definitely thinking about the problem can be a lot better than actually having to produce it.
Personally: technical problems I usually think for a couple days at most before I need to start implementing to make progress. But I have background things like future plans, politics, philosophy, and stories, so I always have something to think about. Close-up technical thinking is great, but sometimes step back and look at the bigger picture?
I don't think AI has affected my thinking much, but that's because I probably don't know how to use it well. Whenever AI writes a lot of code, I end up having to understand if not change most of it; either because I don't trust the AI, I have to change the specification (and either it's a small change or I don't trust the AI to rewrite), the code has a leaky abstraction, the specification was wrong, the code has a bug, the code looks like it has a bug (but the problem ends up somewhere else), I'm looking for a bug, etc. Although more and more often the AI saves time and thinking vs. if I wrote the implementation myself, it doesn't prevent me from having to think about the code at all and treating it like a black box, due to the above.
I believe it is a type of burnout. AI might have accelerated both the work and that feeling.
I found that doing more physical projects helped me. Large woodworking, home improvement, projects. Built-in bookshelves, a huge butcher block bar top (with 24+ hours of mindlessly sanding), rolling workbenches, and lots of cabinets. Learning and trying to master a new skill, using new design software, filling the garage with tools...
Thinking harder than I have in a long time with AI assisted coding.
As I'm providing context I get to think about what an ideal approach would look like and often dive into a research session to analyze pros and cons of various solutions.
I don't use agents much because it's important to see how a component I just designed fits into the larger codebase. That experience provides insights on what improvements I need to make and what to build next.
The time I've spent thinking about the composability, cohesiveness, and ergonomics of the code itself have really paid off. The codebase is a joy to work in, easy to maintain and extend.
The LLMs have helped me focus my cognitive bandwidth on the quality and architecture instead of the tedious and time consuming parts.
I miss the thrill of running through the semi-parched grasslands and the heady mix of terror triumph and trepidation as we close in on our meal for the week.
I have to think more rigorously. I have to find ways to tie up loose ends, to verify the result efficiently, to create efficient feedback loops and define categorical success criteria.
I've thought harder about problems this last year than I have in a long time.
People here seem to be conflating thinking hard and thinking a lot.
Most examples mentioned of “thinking hard” in the comments sound like they think about a lot of stuff superficially instead one particular problem deeply, which is what OP is referring to.
Good highlight of the struggle between Builder and Thinker, I enjoyed the writing.
So why not work on PQC? Surely you've thought about other avenues here as well.
If you're looking for a domain where the 70% AI solution is a total failure, that's the field. You can't rely on vibe coding because the underlying math, like Learning With Errors (LWE) or supersingular isogeny graphs, is conceptually dense and hasn't been commoditized into AI training data yet. It requires that same 'several-day-soak' thinking you loved in physics, specifically because we're trying to build systems that remain secure even against an adversary with a quantum computer. It’s one of the few areas left where the Thinker isn't just a luxury, but a hard requirement for the Builder to even begin.
I feel like AI has given me the opportunity to think MORE, not less. I’m doing so much less mindless work, spending most of my efforts critically analyzing the code and making larger scale architectural decisions.
The author says “ Even though the AI almost certainly won't come up with a 100% satisfying solution, the 70% solution it achieves usually hits the “good enough” mark.”
The key is to keep pushing until it gets to the 100% mark. That last 30% takes multiples longer than the first 70%, but that is where the satisfaction lies for me.
I think what plagues a lot of pure STEM types in this tumultuous period of AI (or "AI") is that they've spent a majority of their lives mulling over some problem until they've worked out every possible imperfection, and once they've achieved something they consider close to that level of perfection, that's when they say they're done.
While this may be an unfair generalization, and apologies to those who don't feel this way, but I believe STEM types like the OP are used to problem solving that's linear in the sense that the problem only exists in its field as something to be solved, and once they figure it out, they're done. The OP even described his mentality as that of a "Thinker" where he received a problem during his schooling, mulled over it for a long time, and eventually came to the answer. That's it, next problem to crack. Their whole lives revolve around this process and most have never considered anything outside it.
Even now, despite my own healthy skepticism of and distaste for AI, I am forced to respect that AI can do some things very fast. People like the OP, used to chiseling away at a problem for days, weeks, months, etc., now have that throughput time slashed. They're used to the notion of thinking long and hard about a very specific problem and finally having some output; now, code modules that are "good enough" can be cooked up in a few minutes, and if the module works the problem is solved and they need to find the next problem.
I think this is more common than most people want to admit, going back to grumblings of "gluing libraries together" being unsatisfying. The only suggestion I have for the OP is to expand what you think about. There are other comments in this thread supporting it but I think a sea change that AI is starting to bring for software folks is that we get to put more time towards enhancing module design, user experience, resolving tech debt, and so on. People being the ones writing code is still very important.
I think there's more to talk about where I do share the OP's yearning and fears (i.e., people who weren't voracious readers or English/literary majors being oneshot by the devil that is AI summaries, AI-assisted reading, etc.) but that's another story for another time.
> I think what plagues a lot of pure STEM types in this tumultuous period of AI (or "AI") is that they've spent a majority of their lives mulling over some problem until they've worked out every possible imperfection, and once they've achieved something they consider close to that level of perfection, that's when they say they're done.
These people are miserable to work with if you need things done quickly and can tolerate even slight imperfection.
That operating regime is, incidentally, 95% of the work we actually get paid to do.
Give the AI less responsibility but more work. Immediate inference is a great example: if the AI can finish my lines, my `if` bodies, my struct instantiations, type signatures, etc., it can reduce my second-by-second work significantly while taking little of my cognitive agency.
These are also tasks the AI can succeed at rather trivially.
Better completions is not as sexy, but in pretending agents are great engineers it's an amazing feature often glossed over.
Another example is automatic test generation or early correctness warnings.
If the AI can suggest a basic test and I can add it with the push of a button - great. The length (and thus complexity) of tests can be configured conservatively relative to the AI of the day.
Warnings can just be flags in the editors spotting obvious mistakes. Off-by-one errors for example, which might go unnoticed for a while, would be an achievable and valuable notice.
Also, automatic debugging and feeding the raw debugger log into an AI to parse seems promising, but I've done little of it.
...And go from there - if a well-crafted codebase and an advanced model using it as context can generate short functions well, then by all means - scale that up with discretion.
These problems around the AI coding tools are not at all special - it's a classic case of taking the new tool too far too fast.
I generally feel the same. But in addition, I also enjoy the pure act of coding. At least for me that’s another big part why I feel left behind with all this Agent stuff.
I agree, that's another factor. Definitely the mechanical act of coding specially if your are good at it gives the type of joy that I can imagine an artisan or craftsman having when doing his work.
Eventually I always get to a problem I can't solve by just throwing an LLM at it and have to go in and properly debug things. At that point knowing the code base helps a hell of a lot, and I would've been better off writing the entire thing by hand.
I really don't believe AI allows you to think less hard. If it did, it would be amazing, but the current AI hasn't got to that capability. It forces you to think about different things at best.
What a bizarre claim. If you can solve anything by thinking, why don't you become a scientist? Think of a theory that unites quantum physics and general relativity.
If it's this easy to convince you to stop being creative, to stop putting in effort to think critically, then you don't deserve the fulfilment that creativity and critical thinking can give you. These vibe coding self pity articles are so bizarre.
> At the end of the day, I am a Builder. I like building things. The faster I build, the better.
This I can’t relate to. For me it’s “the better I build, the better”. Building poor code fast isn’t good: it’s just creating debt to deal with in the future, or admitting I’ll toss out the quickly built thing since it won’t have longevity. When quality comes into play (not just “passed the tests”, but something maintainable, extensible, etc.), it’s hard not to employ the Thinker side along with the Builder. They aren’t necessarily mutually exclusive.
Then again, I work on things that are expected to last quite a while and aren’t disposable MVPs or side projects. I suppose if you don’t have that longevity mindset it’s easy to slip into Build-not-Think mode.
Cognitive skills are just like any other - use them and they will grow, do not and they will decline. Oddly enough, the more one increases their software engineering cognition, the smaller the distance between "The Builder" and "The Thinker" becomes.
I think this problem existed before AI. At least in my current job, there is constant, unrelenting demand for fast results. “Multi-day deep thinking” sounds like an outrageous luxury, at least in my current job.
Even 30 years ago when I started in the industry, most jobs required very little deep thinking. All of mine has been done on personal projects. That's just the reality of the typical software engineering job.
I feel that AI doesn't necessarily replace my thinking, but actually explores, on my behalf, deeper alternative considerations in the approach to solving a problem, which in turn better informs my thinking.
I feel like I'm doing much nicer thinking now: more systems thinking. Not only that, I'm iterating on system design a lot more, because it is a lot easier to change with AI.
Yes, but you solved problems already solved by someone else.
How about something that hasn't been solved, or not yet even noticed? That gives the greatest satisfaction.
I definitely relate to this. Except that while I was in the 1% in university who thought hard, I don't think my success rate was that high. My confidence at the time was quite high, though, and I still remember the notable successes.
And also, I haven't started using AI for writing code yet. I'm shuffling toward that, with much trepidation. I ask it lots of coding questions. I make it teach me stuff. Which brings me to the point of my post:
The other day, I was looking at some Rust code and trying to work out the ownership rules. In theory, I more or less understand them. In practice, not so much. So I had Claude start quizzing me. Claude was a pretty brutal teacher -- he'd ask 4 or 5 questions, most of them solvable from what I knew already, and then 1 or 2 that introduced a new concept that I hadn't seen. I would get that one wrong and ask for another quiz. Same thing: 4 or 5 questions, using what I knew plus the thing just introduced, plus 1 or 2 with a new wrinkle.
I don't think I got 100% on any of the quizzes. Maybe the last one; I should dig up that chat and see. But I learned a ton, and had to think really hard.
Somehow, I doubt this technique will be popular. But my experience with it was very good. I recommend it. (It does make me a little nervous that whenever I work with Claude on things that I'm more familiar with, he's always a little off base on some part of it. Since this was stuff I didn't know, he could have been feeding me slop. But I don't think so; the explanations made sense and the compiler agreed, so it'd be tough to get anything completely wrong. And I was thinking through all of it; usually the bullshit slips in stealthily in the parts that don't seem to matter, but I had to work through everything.)
> I have tried to get that feeling of mental growth outside of coding
A few years before this wave of AI hit, I got promoted into a tech lead/architect role. All of my mental growth since then has been learning to navigate office politics and getting the 10k ft view way more often.
I was already telling myself "I miss thinking hard" years before this promotion. When I build stuff now, I do it with a much clearer purpose. I have sincerely tried the new tools, but I'm back to just using google search if anything at all.
All I did was prove to myself the bottleneck was never writing code, but deciding why I'm doing anything at all. If you want to think so hard you stay awake at night, try existential dread. It's an important developmental milestone you'd have been forced to confront anyway even 1000 years ago.
My point is, you might want to reconsider how much you blame AI.
The problem with the "70% solution" is that it creates a massive amount of hidden technical debt. You aren't thinking hard because you aren't forced to understand the edge cases or the real origin of the problem. It used to be the case that you will need plan 10 steps ahead because refactoring was expensive, now people just focus in the next problem ahead, but the compounding AI slop will blow up eventually.
Instant upvote for a Philipp Mainländer quote at the end. He's the OG "God is Dead" guy, and Nietzsche was reacting (very poorly) to Mainländer and other pessimists like Schopenhauer when he followed up with his own, shittier version of "God is dead".
Please read up on his life. Mainländer is the most extreme/radical Philosophical Pessimist of them all. He wrote a whole book about how you should rationally kill yourself, and then he killed himself shortly after.
Dude, I know you touched on this, but seriously: just don't use AI then. It's not hard; it's your choice to use it or not. It's not even making you faster, so the pragmatism argument doesn't really work well! This is a totally self-inflicted problem that you can undo any time you want.
I have a Claude Code set up in a folder with instructions on how to access iMessage. Ask it questions like “What did my wife say I should do next Friday?”
Reads the SQLite db and shit. So burn your tokens on that.
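A minimal sketch of what that boils down to, if you're curious (this assumes the standard chat.db layout on recent macOS; the table names and the Apple-epoch date math are from memory, so verify against your OS version):

```python
# Hedged sketch: query the iMessage store read-only. Schema details
# (message/handle tables, nanosecond Apple-epoch dates) vary by macOS version.
import sqlite3
from pathlib import Path

db = Path.home() / "Library" / "Messages" / "chat.db"
conn = sqlite3.connect(f"file:{db}?mode=ro", uri=True)  # open read-only

rows = conn.execute(
    """
    SELECT datetime(m.date / 1000000000 + 978307200, 'unixepoch', 'localtime'),
           h.id,
           m.text
    FROM message m
    JOIN handle h ON h.ROWID = m.handle_id
    WHERE m.is_from_me = 0 AND m.text LIKE '%Friday%'
    ORDER BY m.date DESC
    LIMIT 20
    """
).fetchall()

for sent_at, sender, text in rows:
    print(sent_at, sender, text)
```

The 978307200 offset converts Apple's 2001-01-01 epoch to the Unix epoch; newer macOS stores dates in nanoseconds, hence the divide.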
It's not hard to burn tokens on random bullshit (see moltbook). If you really can deliver results at full speed without AI, it shouldn't be hard to keep cover.
> I tried getting back in touch with physics, reading old textbooks. But that wasn’t successful either. It is hard to justify spending time and mental effort solving physics problems that aren’t relevant or state-of-the-art
I tried this with physics and philosophy. I think I want to do a mix of hard but meaningful. For academic fields like that, it's impossible for a regular person to do as a hobby. Might as well just do puzzles or something.
With AI, I now think much harder. Timelines are shorter, big decisions are closer together, and more system interactions have to be "grokked" in my head to guide the model properly.
I'm more spent than before, when I would spend 2 hours wrestling with Tailwind classes or testing API endpoints manually by typing JSON shapes myself.
Pre-processed food consumer complains about not cooking anymore. /s
... OK I guess. I mean, sorry, but if that's a revelation to you, that by using a skill less you hone it less, you were clearly NOT thinking hard BEFORE you started using AI. It sure didn't help, but the problem didn't start then.
This March 2025 post from Aral Balkan stuck with me:
https://mastodon.ar.al/@aral/114160190826192080
"Coding is like taking a lump of clay and slowly working it into the thing you want it to become. It is this process, and your intimacy with the medium and the materials you’re shaping, that teaches you about what you’re making – its qualities, tolerances, and limits – even as you make it. You know the least about what you’re making the moment before you actually start making it. That’s when you think you know what you want to make. The process, which is an iterative one, is what leads you towards understanding what you actually want to make, whether you were aware of it or not at the beginning. Design is not merely about solving problems; it’s about discovering what the right problem to solve is and then solving it. Too often we fail not because we didn’t solve a problem well but because we solved the wrong problem.
When you skip the process of creation you trade the thing you could have learned to make for the simulacrum of the thing you thought you wanted to make. Being handed a baked and glazed artefact that approximates what you thought you wanted to make removes the very human element of discovery and learning that’s at the heart of any authentic practice of creation. Where you know everything about the thing you shaped into being from when it was just a lump of clay, you know nothing about the image of the thing you received for your penny from the vending machine."
And when programming with agentic tools, you need to actively push for the idea not to regress to the most obvious/average version. The amount of effort you need to expend on pushing an idea that deviates from the 'norm' (because it's novel) is actually comparable to the effort it takes to type something out by hand. Just two completely different types of effort.
There's an upside to this sort of effort too, though. You actually need to make it crystal clear what your idea is and what it is not, because of the continuous pushback from the agentic programming tool. The moment you stop pushing back is the moment the LLM rolls over your project and more than likely destroys what was unique about your thing in the first place.
Yes, maybe this is why it's my preference to jump directly to coding, instead of Canva to draw the GUI and stuff. I would not know what to draw because the involvement is not so deep... or something.
I dunno, when you've made about 10,000 clay pots it's kinda nice to skip to the end result; you're probably not going to learn a ton with clay pot #10,001. You can probably come up with some pretty interesting ideas for what you want the end result to look like from the outset.
I find myself being able to reach for the things that my normal pragmatist code monkey self would consider out of scope - these are often not user facing things at all but things that absolutely improve code maintenance, scalability, testing/testability, or reduce side effects.
Eloquent, moving, and more-or-less exactly what people said when cameras first hit the scene.
Source?
I don't get it.
I think just as hard, I type less. I specify precisely and I review.
If anything, all we've changed is working at a higher level. The product is the same.
But these people just keep mixing things up, like "wow I got a Ferrari now, watch it fly off the road!"
Yeah so you got a tools upgrade; it's faster, it's more powerful. Keep it on the road or give up driving!
We went from auto completing keywords, to auto completing symbols, to auto completing statements, to auto completing paragraphs, to auto completing entire features.
Because it happened so fast, people feel the need to rename programming every week. We're either vibe coders now, or agentic coders, or... or just programmers, hey. You know why? I write in C, I get machine code. I didn't write the machine code! It was all an abstraction!
Oh but it's not the same you say, it changes every time you ask. Yes, for now, it's still wonky and janky in places. It's just a stepping stone.
Just chill, it's programming. The tools just got even better.
You can still jump on a camel and cross the desert in 3 days. Have at it, you risk dying, but enjoy. Or you can just rent a helicopter and fly over the damn thing in a few hours. Your choice. Don't let people tell you it isn't travelling.
We're all Linus Torvalds now. We review, we merge, we send back. And if you had no idea what you were doing before, you'll still have no idea what you're doing today. You just fat-finger fewer typos today than ever before.
I think I understand what the author is trying to say.
We miss thinking "hard" about the small details. Maybe "hard" isn't the right adjective, but we all know the process of coding isn't just typing stuff while the mind wanders. We keep thinking about the code we're typing and the interactions between the new code and the existing stuff, and keep thinking about potential bugs and issues. (This may or may not be "hard".)
And this kind of thinking is totally different from what Linus Torvalds has to think about when reviewing a huge patch from a fellow maintainer. Linus' work is probably "harder", but it's a different kind of thinking.
You're totally right it's just tools improving. When compilers improved most people were happy, but some people who loved hand crafting asm kept doing it as a hobby. But in 99+% cases hand crafting asm is a detriment to the project even if it's fun, so if you love writing asm yourself you're either out of work, or you grudgingly accept that you might have to write Java to get paid. I think there's a place for lamenting this kind of situation.
> We're all Linus Torvalds now.
So...where's your OS and SCM?
I get your point that wetware still matters, but I think it's a bit much to contend that more than a handful of people (or everyone) are on the level of Linus Torvalds now that we have LLMs.
I should have been clearer. It was a pun, a take, a joke. I was referring to his day-to-day activity now, where he merges code and hardly writes any code for the Linux kernel himself.
I didn't imply most of us can do half the things he's done. That's not right.
> his day-to-day activity now, where he merges code
But even then...don't you think his insight into and ability to verify a PR far exceeds that of most devs (LLM or not)? Most of us cannot (reasonably) aspire to be like him.
Like I said, if you didn't know what you were doing before, you won't know what you're doing today.
Agentic coding in general only amplifies your ability (or disability).
You can totally learn how to build an OS and invest 5 years of your life doing so. The first version of Linux, I'm sure, was pretty shoddy. Same for an SCM.
Except the thing does not work as expected, and it just makes you worse, not better.
Like I said that's temporary. It's janky and wonky but it's a stepping stone.
Just look at image generation. Actually, factually, look at it. We went from horror-colour vomit with eyes all over, to six-fingered humans, to pretty darn good now.
It's only time.
Why is image generation the same as code generation?
Comments like these are why I don't browse HN nearly ever anymore
I am thinking harder than ever due to vibe coding. How will markets shift? What will be in demand? How will the consumer side adapt? How do we position? Predicting the future is a hard problem... The thinker in me is working relentlessly since December. At least for me the thinker loves an existential crisis like no other.
I haven't reduced my thinking! Today I asked AI to debug an issue. It came up with a solution that was clearly correct, but it didn't explain why the code was in that state. I kept steering the AI (which just wanted to fix) toward figuring out the why, and at some point it dug through git and GitHub issues in a very cool way. And finally it pulled out something that made sense. It was defensive programming introduced to fix an issue somewhere else, which was also in turn fixed, so it was useless.
At that point an idea popped into my mind and I decided to look for similar patterns in the codebase related to the change. I found 3: one was a non-bug, two were latent bugs.
Shipped a fix plus 2 fixes for bugs yet to be discovered.
I'm using LLMs to code and I'm still thinking hard. I'm not doing it wrong: I think about design choices: risks, constraints, technical debt, alternatives, possibilities... I'm thinking as hard as I've ever done.
Yeah, but thinking with an LLM is different. The article says:
> By “thinking hard,” I mean encountering a specific, difficult problem and spending multiple days just sitting with it to overcome it.
The "thinking hard" I do with an LLM is more like management thinking. Its chaotic and full of conversations and context switches. Its tiring, sure. But I'm not spending multiple days contemplating a single idea.
The "thinking hard" I do over multiple days with a single problem is more like that of a scientist / mathematician. I find myself still thinking about my problem while I'm lying in bed that night. I'm contemplating it in the shower. I have little breakthroughs and setbacks, until I eventually crack it or give up.
It's different.
I very much think it's possible to use LLMs as a tool in this way. However, a lot of folks are not. I see people, both personally and professionally, give it a problem and expect it to both design and implement a solution, then hold it as a gold standard.
I find the best uses, at least for myself, are smaller parts of my workflow where I'm not going to learn anything from doing it:

- build one to throw away: give me a quick prototype to get stakeholder feedback
- straightforward helper functions: I have the design and parameters planned, just need an implementation that I can review
- tab-completion code-gen
- if I want leads for looking into something (libraries, tools) and Googling isn't cutting it
> then hold it as a gold standard
I just changed employers recently, in part due to this: dealing with someone who appears to now spend his time coercing LLMs to give the answers he wants, and becoming deaf to any contradictions. LLMs are very effective at amplifying the Reality Distortion Field for those that live in them. LLMs are replacing blog posts for this purpose.
I echo this sentiment. Even though I'm having Claude Code write 100% of the code for a personal project as an experiment, the need for thinking hard is very present.
In fact, since I don't need to do low-thinking tasks like writing boilerplate or repetitive tests, I find my thinking ratio is actually higher than when I write code normally.
I'm with you; thinking about architecture is generally still a big part of my mental effort. But for me most architectural problems are solved in short periods of thought and a lot of iteration. Maybe it's a skill issue, but neither now nor in the pre-LLM era have I been able to pre-solve all the architecture with pure thinking.
That said, architectural problems have also been less difficult, for the simple fact that research and prototyping have become faster and cheaper.
I think it depends on the scope and level of solution I accept as “good”. I agree that often the thinking for the “next step” is too easy architecturally. But I still enjoy thinking about the global optimum or a “perfect system”, even if it's not immediately feasible, and can spend large amounts of time on this.
And then also there’s all the non-systems stuff - what is actually feasible, what’s most valuable etc. Less “fun”, but still lots of potential for thinking.
I guess my main point is there is still lots to think about even post-LLM, but the real challenge is making it as “fun” or as easily useful as it was pre-LLM.
I think local code architecture was a domain where “optimality” was actually tractable, with the joy that comes with that. LLMs are harmful to that, but I don't think there's nothing to replace it with.
And thinking of how to convey all of that to Claude without having to write whole books :)
tfw you start expressing your thoughts as code because it's shorter instead
Ya, they are programming languages after all. Language is really powerful when you really know how to use it. Some of us are more comfortable with the natural variety; some of us are more comfy with code ¯\_(ツ)_/¯
It's certainly a different style of thinking hard. I used to really stress myself over coding - i.e. I would get frustrated that solving an issue would cause me to introduce some sort of hack or otherwise snowball into a huge refactor. Now I spend most of my time thinking about what cool new features I am going to build and not really stressing myself out too much.
Reading this comment and other similar comments, there's definitely a difference between people. Personally I agree and resonate a lot with the blog post, and I've always found the designs of my programs to come sort of naturally. Usually the hard problems are the technical problems, and then the design is figured out based on what's needed to control the program. I never had to think that hard about design.
Aptitude testing centers like Johnson O'Connor have tests for that. There are (relatively) huge differences between different people's thinking and problem solving styles. For some, creating an efficient process feels natural, while others need stability and redundancy. Programmers are by and large the latter.
[1]: https://www.jocrf.org/how-clients-use-the-analytical-reasoni...
I think OP's post is an attempt to move us past this stage of the discussion, which is frankly old hat.
The point they are making is that using AI tools makes it a lot harder for them to keep up the discipline to think hard.
This may or may not be true for everyone.
It is a different kind of thinking, though.
I'd go as far as to say I think harder now – or at least quicker. I'm not wasting cycles on chores; I can focus on the bigger picture.
I've never felt more mental exhaustion than after a LLM coding session. I assume that is a result of it requiring me to think harder too.
I feel this too. I suspect it's a byproduct of all the context switching I find myself doing when I'm using an LLM to help write software. Within a 10-minute window, I'll read code, debug a problem, prompt, discuss the design, test something, do some design work myself, and so on.
When I'm just programming, I spend a lot more time working through a single idea, or a single function. It's much less tiring.
It wasn't until I read your comment that I was able to pinpoint why the mental exhaustion feels familiar. It's the same kind (though not degree) of exhaustion as formal methods / proofs.
Except without the reward of an intellectual high afterwards.
I use Claude Code a lot, and it always lets me know the moment I stopped thinking hard, because it will build something completely asinine. Garbage in, garbage out, as they say...
It's how you use the tool... reminds me of that episode of The Simpsons when Homer gets a gun licence... he goes from not using it at all, to using it a little, to using it without thinking about what he's doing and for ludicrous things...
Thinking is tiring and life is complicated; the tool makes it easy to slip into bad habits, and bad habits are hard to break even when you recognise it's a bad habit.
Many people are too busy/lazy/self-unaware to evaluate their behaviour to recognise a bad habit.
There's no such thing as right or wrong, so the following isn't intended as any form of judgement or admonition, merely an observation that you are starting to sound like an llm.
> you are starting to sound like an llm
My observation: I've always had that "sound." I don't know or care much about what that implies. I will admit I'm now deliberately avoiding em dashes, whereas I was once an enthusiastic user of them.
Yes, if anything I think harder because I know it's on the frontier of whatever I'm building (so I'm more motivated and there's much more ROI).
Many people here might be in a similar situation to me, but I took an online masters program that allowed for continuing education following completion of the degree. This has become one of my hobbies; I can take classes at my own expense, not worry about my grades, and just enjoy learning. I can push myself as much as I want and since the classes are hard, just completing 1 assignment is enough to force me to "think". Just sharing my experience for people who might be looking for ways to challenge themselves intellectually.
I've found that it's often useful to spend the time thinking about the way I would architect the code (down to a fair level of minutiae) before letting the agent have a go.
That way my 'thinker' is satiated and also challenged - Did the solution that my thinker came up with solve the problem better than the plan that the agent wrote?
Then either I acknowledge that the agent's solution was better, giving my thinker something to chew on for the next time; or my solution is better which gives the thinker a dopamine hit and gives me better code.
You were walking to your destination which was three miles away
You now have a bicycle which gets you there in a third of the time
You need to find destinations that are 3x as far away as before.
A lot of productive thinking happens when asleep, in the shower, in flow walking or cycling or rowing.
It's hard to rationalise this as billable time, but they pay for outcomes even if they act like they pay for 9-5. So if I'm thinking about why I like a particular abstraction, or seeing analogies to another problem, or beginning to construct dialogues with mysel(ves|f) about this, and it happens while I'm scrubbing my back (or worse), I kind of "go with the flow", so to speak.
Definitely thinking about the problem can be a lot better than actually having to produce it.
Personally: technical problems I usually think for a couple days at most before I need to start implementing to make progress. But I have background things like future plans, politics, philosophy, and stories, so I always have something to think about. Close-up technical thinking is great, but sometimes step back and look at the bigger picture?
I don't think AI has affected my thinking much, but that's because I probably don't know how to use it well. Whenever AI writes a lot of code, I end up having to understand if not change most of it; either because I don't trust the AI, I have to change the specification (and either it's a small change or I don't trust the AI to rewrite), the code has a leaky abstraction, the specification was wrong, the code has a bug, the code looks like it has a bug (but the problem ends up somewhere else), I'm looking for a bug, etc. Although more and more often the AI saves time and thinking versus writing the implementation myself, the above means it doesn't free me from having to think about the code or let me treat it like a black box.
I believe it is a type of burnout. AI might have accelerated both the work and that feeling.
I found that doing more physical projects helped me. Large woodworking and home improvement projects. Built-in bookshelves, a huge butcher-block bar top (with 24+ hours of mindless sanding), rolling workbenches, and lots of cabinets. Learning and trying to master a new skill, using new design software, filling the garage with tools...
Thinking harder than I have in a long time with AI-assisted coding.
As I'm providing context I get to think about what an ideal approach would look like and often dive into a research session to analyze pros and cons of various solutions.
I don't use agents much because it's important to see how a component I just designed fits into the larger codebase. That experience provides insights on what improvements I need to make and what to build next.
The time I've spent thinking about the composability, cohesiveness, and ergonomics of the code itself has really paid off. The codebase is a joy to work in, easy to maintain and extend.
The LLMs have helped me focus my cognitive bandwidth on the quality and architecture instead of the tedious and time consuming parts.
I miss the thrill of running through the semi-parched grasslands and the heady mix of terror, triumph, and trepidation as we close in on our meal for the week.
I think that feeling is fairly common across the entire population. Play more tag, it’ll help.
I think harder because of AI.
I have to think more rigorously. I have to find ways to tie up loose ends, to verify the result efficiently, to create efficient feedback loops and define categorical success criteria.
I've thought harder about problems this last year than I have in a long time.
People here seem to be conflating thinking hard and thinking a lot.
Most examples of “thinking hard” mentioned in the comments sound like thinking about a lot of stuff superficially instead of about one particular problem deeply, which is what OP is referring to.
Good highlight of the struggle between Builder and Thinker, I enjoyed the writing. So why not work on PQC? Surely you've thought about other avenues here as well.
If you're looking for a domain where the 70% AI solution is a total failure, that's the field. You can't rely on vibe coding because the underlying math, like Learning With Errors (LWE) or supersingular isogeny graphs, is conceptually dense and hasn't been commoditized into AI training data yet. It requires that same 'several-day-soak' thinking you loved in physics, specifically because we're trying to build systems that remain secure even against an adversary with a quantum computer. It’s one of the few areas left where the Thinker isn't just a luxury, but a hard requirement for the Builder to even begin.
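For a flavor of it, here's a toy Regev-style LWE sketch (my own illustration with tiny, utterly insecure parameters, not a real scheme); the several-day-soak thinking is in why recovering s from (A, b) is believed hard and how noise growth constrains every parameter choice:

```python
# Toy LWE / Regev encryption sketch. Parameters are illustrative only;
# a real scheme needs far larger dimensions and careful noise analysis.
import numpy as np

rng = np.random.default_rng(0)
n, m, q = 16, 64, 3329             # dimension, sample count, modulus (toy)

s = rng.integers(0, q, n)          # secret vector
A = rng.integers(0, q, (m, n))     # public random matrix
e = rng.integers(-2, 3, m)         # small centered noise
b = (A @ s + e) % q                # published LWE samples: (A, b)

def encrypt(bit):
    r = rng.integers(0, 2, m)                        # random 0/1 row subset
    return (r @ A) % q, int((r @ b + bit * (q // 2)) % q)

def decrypt(a_sum, c):
    v = int((c - a_sum @ s) % q)                     # ~ r.e, plus q/2 if bit=1
    return int(min(v, q - v) > q // 4)               # near q/2 -> decode as 1

for bit in (0, 1):
    assert decrypt(*encrypt(bit)) == bit
```

Decryption works only because the accumulated noise stays below q/4; reasoning about exactly that bound is where the field stops being something you can vibe your way through.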
I feel like AI has given me the opportunity to think MORE, not less. I’m doing so much less mindless work, spending most of my efforts critically analyzing the code and making larger scale architectural decisions.
The author says: “Even though the AI almost certainly won't come up with a 100% satisfying solution, the 70% solution it achieves usually hits the “good enough” mark.”
The key is to keep pushing until it gets to the 100% mark. That last 30% takes multiples longer than the first 70%, but that is where the satisfaction lies for me.
I think what plagues a lot of pure STEM types in this tumultuous period of AI (or "AI") is that they've spent a majority of their lives mulling over some problem until they've worked out every possible imperfection, and once they've achieved something they consider close to that level of perfection, that's when they say they're done.
While this may be an unfair generalization, and apologies to those who don't feel this way, I believe STEM types like the OP are used to problem solving that's linear in the sense that the problem only exists in its field as something to be solved, and once they figure it out, they're done. The OP even described his mentality as that of a "Thinker" where he received a problem during his schooling, mulled over it for a long time, and eventually came to the answer. That's it, next problem to crack. Their whole lives revolve around this process and most have never considered anything outside it.
Even now, despite my own healthy skepticism of and distaste for AI, I am forced to respect that AI can do some things very fast. People like the OP, used to chiseling away at a problem for days, weeks, months, etc., now have that throughput time slashed. They're used to the notion of thinking long and hard about a very specific problem and finally having some output; now, code modules that are "good enough" can be cooked up in a few minutes, and if the module works the problem is solved and they need to find the next problem.
I think this is more common than most people want to admit, going back to grumblings of "gluing libraries together" being unsatisfying. The only suggestion I have for the OP is to expand what you think about. There are other comments in this thread supporting it but I think a sea change that AI is starting to bring for software folks is that we get to put more time towards enhancing module design, user experience, resolving tech debt, and so on. People being the ones writing code is still very important.
I think there's more to talk about where I do share the OP's yearning and fears (i.e., people who weren't voracious readers or English/literary majors being oneshot by the devil that is AI summaries, AI-assisted reading, etc.) but that's another story for another time.
> I think what plagues a lot of pure STEM types in this tumultuous period of AI (or "AI") is that they've spent a majority of their lives mulling over some problem until they've worked out every possible imperfection, and once they've achieved something they consider close to that level of perfection, that's when they say they're done.
These people are miserable to work with if you need things done quickly and can tolerate even slight imperfection.
That operating regime is, incidentally, 95% of the work we actually get paid to do.
Give the AI less responsibility but more work. Inline completion is a great example: if the AI can finish my lines, my `if` bodies, my struct instantiations, my type signatures, etc., it reduces my second-by-second work significantly while taking little of my cognitive agency.
These are also tasks the AI can succeed at rather trivially.
Better completions aren't as sexy, but amid all the pretending that agents are great engineers, they're an amazing feature that often gets glossed over.
Another example is automatic test generation, or early correctness warnings. If the AI can suggest a basic test and I can add it with the push of a button - great. The length (and thus complexity) of generated tests can be configured conservatively, relative to the AI of the day. Warnings could just be editor flags spotting obvious mistakes; off-by-one errors, for example, which might otherwise go unnoticed for a while, would be an achievable and valuable catch.
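To sketch the shape I have in mind (a hypothetical example, not a description of any existing tool):

```rust
/// Sums the first `n` elements of `xs`. An earlier draft wrote the loop
/// bound as `0..=n`, the classic off-by-one that indexes past the requested
/// range -- exactly the kind of mistake an editor-level AI flag could catch.
fn sum_first(xs: &[i64], n: usize) -> i64 {
    let mut total = 0;
    for i in 0..n {
        total += xs[i];
    }
    total
}

// The kind of short, conservative test an assistant could suggest for
// one-click insertion; runnable with `cargo test`.
#[test]
fn sums_first_two() {
    assert_eq!(sum_first(&[1, 2, 3], 2), 3);
}
```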
Also, automatic debugging (feeding the raw debugger log into an AI to parse) seems promising, but I've done little of it.
...And go from there - if a well-crafted codebase and an advanced model using it as context can generate short functions well, then by all means - scale that up with discretion.
These problems around the AI coding tools are not at all special - it's a classic case of taking the new tool too far too fast.
I generally feel the same. But in addition, I also enjoy the pure act of coding. At least for me, that's another big part of why I feel left behind by all this agent stuff.
I agree, that's another factor. The mechanical act of coding, especially if you are good at it, definitely gives the kind of joy I imagine an artisan or craftsman feels in his work.
Eventually I always get to a problem I can't solve by just throwing an LLM at it and have to go in and properly debug things. At that point knowing the code base helps a hell of a lot, and I would've been better off writing the entire thing by hand.
I really don't believe AI allows you to think less hard. If it did, that would be amazing, but current AI hasn't reached that capability. At best, it forces you to think about different things.
Make sure you start every day with the type of confidence that would allow you to refer to yourself as an intellectual one-percenter
What a bizarre claim. If you can solve anything by thinking, why don't you become a scientist? Think of a theory that unites quantum physics and general relativity.
If it's this easy to convince you to stop being creative, to stop putting in effort to think critically, then you don't deserve the fulfilment that creativity and critical thinking can give you. These vibe coding self pity articles are so bizarre.
Would like to follow your blog; is there an RSS feed?
> At the end of the day, I am a Builder. I like building things. The faster I build, the better.
This I can’t relate to. For me it’s “the better I build, the better”. Building poor code fast isn’t good: it’s just creating debt to deal with in the future, or admitting I’ll toss out the quickly built thing since it won’t have longevity. When quality comes into play (not just “passed the tests”, but is something maintainable, extensible, etc), it’s hard to not employ the Thinker side along with the Builder. They aren’t necessarily mutually exclusive.
Then again, I work on things that are expected to last quite a while and aren’t disposable MVPs or side projects. I suppose if you don’t have that longevity mindset it’s easy to slip into Build-not-Think mode.
Cognitive skills are just like any other - use them and they will grow, do not and they will decline. Oddly enough, the more one increases their software engineering cognition, the smaller the distance between "The Builder" and "The Thinker" becomes.
I think this problem existed before AI. At least in my current job, there is constant, unrelenting demand for fast results. “Multi-day deep thinking” sounds like an outrageous luxury, at least in my current job.
Which is a reason for software becoming worse across the board. Just look at Windows. The "go go go" culture is ruinous to products.
Even 30 years ago when I started in the industry, most jobs required very little deep thinking. All of mine has been done on personal projects. That's just the reality of the typical software engineering job.
this is why productivity is a word that should really just be reserved for work contexts, and personal time is better used for feeding "The Thinker"
I feel that AI doesn't necessarily replace my thinking; it actually explores deeper alternative considerations on my behalf when approaching a problem, which in turn better informs my thinking.
Just work on more ambitious projects?
I feel like I'm doing much nicer thinking now: more systems thinking, and on top of that I'm iterating on system design a lot more, because it's so much easier to change things with AI.
"Sometimes you have to keep thinking past the point where it starts to hurt." - Fermi
yes, but you solved problems already solved by someone else. How about something that hasn't been solved, or not even noticed yet? That gives the greatest satisfaction.
I definitely relate to this. Except that while I was in the 1% at university who thought hard, I don't think my success rate was that high. My confidence at the time was quite high, though, and I still remember the notable successes.
And also, I haven't started using AI for writing code yet. I'm shuffling toward that, with much trepidation. I ask it lots of coding questions. I make it teach me stuff. Which brings me to the point of my post:
The other day, I was looking at some Rust code and trying to work out the ownership rules. In theory, I more or less understand them. In practice, not so much. So I had Claude start quizzing me. Claude was a pretty brutal teacher -- he'd ask 4 or 5 questions, most of them solvable from what I knew already, and then 1 or 2 that introduced a new concept that I hadn't seen. I would get that one wrong and ask for another quiz. Same thing: 4 or 5 questions, using what I knew plus the thing just introduced, plus 1 or 2 with a new wrinkle.
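To give a flavor, here's a made-up example in the spirit of those questions (not one of Claude's actual quiz items):

```rust
// Quiz: this compiles as written. Uncomment the marked line -- does it still
// compile, and if not, why?
fn main() {
    let s = String::from("clay");
    let t = s;            // move: `t` takes ownership of the heap allocation
    // println!("{s}");   // uncommenting this gives error[E0382]:
                          // borrow of moved value: `s`
    println!("{t}");      // fine: `t` is the current owner
}
```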
I don't think I got 100% on any of the quizzes. Maybe the last one; I should dig up that chat and see. But I learned a ton, and had to think really hard.
Somehow, I doubt this technique will be popular. But my experience with it was very good. I recommend it. (It does make me a little nervous that whenever I work with Claude on things that I'm more familiar with, he's always a little off base on some part of it. Since this was stuff I didn't know, he could have been feeding me slop. But I don't think so; the explanations made sense and the compiler agreed, so it'd be tough to get anything completely wrong. And I was thinking through all of it; usually the bullshit slips in stealthily in the parts that don't seem to matter, but I had to work through everything.)
> I have tried to get that feeling of mental growth outside of coding
A few years before this wave of AI hit, I got promoted into a tech lead/architect role. All of my mental growth since then has been learning to navigate office politics and getting the 10k ft view way more often.
I was already telling myself "I miss thinking hard" years before this promotion. When I build stuff now, I do it with a much clearer purpose. I have sincerely tried the new tools, but I'm back to just using google search if anything at all.
All I did was prove to myself the bottleneck was never writing code, but deciding why I'm doing anything at all. If you want to think so hard you stay awake at night, try existential dread. It's an important developmental milestone you'd have been forced to confront anyway even 1000 years ago.
My point is, you might want to reconsider how much you blame AI.
Why not find a subfield that is more difficult and requires some specialization then?
The problem with the "70% solution" is that it creates a massive amount of hidden technical debt. You aren't thinking hard because you aren't forced to understand the edge cases or the real origin of the problem. It used to be that you needed to plan 10 steps ahead because refactoring was expensive; now people just focus on the next problem ahead, but the compounding AI slop will blow up eventually.
would you agree that there's more time to think about what problems are worth solving?
I think hard all the time, AI can only solve problems for me that don't require thinking hard. Give it anything more complex and it's useless.
I use AI for the easy stuff.
At the day job there was a performance problem loading data in an app.
7 months later, after waffling on it on and off, with and without AI, I finally cracked it.
The author is not wrong though; I don't hit problems like this as often since AI. I do miss the feeling.
I think AI didn't do this. Open source, libraries, cloud, frameworks and agile conspired to do this.
Why solve a problem when you can import a library / scale up / use managed Kubernetes / etc.
The menu is great and the number of problems needing deep thought seems rare.
There might be deep thought problems on the requirements side of things but less often on the technical side.
Instant upvote for a Philipp Mainländer quote at the end. He's the OG "God is dead" guy; Nietzsche was reacting (very poorly) to Mainländer and other pessimists like Schopenhauer when he followed up with his own, shittier version of "God is dead".
Please read up on his life. Mainländer is the most extreme/radical philosophical pessimist of them all. He wrote a whole book about how you should rationally kill yourself, and then he killed himself shortly after.
https://en.wikipedia.org/wiki/Philipp_Mainl%C3%A4nder
https://dokumen.pub/the-philosophy-of-redemption-die-philoso...
Max Stirner and Mainländer would have been friends and are kindred spirits philosophically.
https://en.wikipedia.org/wiki/Bibliography_of_philosophical_...
Dude, I know you touched on this, but seriously: just don't use AI then. It's not hard; it's your choice to use it or not. It's not even making you faster, so the pragmatism argument doesn't really work! This is a totally self-inflicted problem that you can undo any time you want.
Spoken like someone who doesn't have their company measuring their AI usage and regularly laying people off.
If you can't figure out how to game this, you're both not thinking hard and not using AI effectively.
Need to be in the top 5% of AI users while staying in your budget of $50/month!
I have Claude Code set up in a folder with instructions on how to access iMessage. Ask it questions like “What did my wife say I should do next Friday?”
Reads the SQLite db and shit. So burn your tokens on that.
It's not hard to burn tokens on random bullshit (see moltbook). If you really can deliver results at full speed without AI, it shouldn't be hard to keep cover.
> Yes, I blame AI for this.
Just don't use it. That's always an option. Perhaps your builder doesn't actually benefit from an unlimited runway detached from the cost of effort.
> I tried getting back in touch with physics, reading old textbooks. But that wasn’t successful either. It is hard to justify spending time and mental effort solving physics problems that aren’t relevant or state-of-the-art
I tried this with physics and philosophy. I think I want a mix of hard but meaningful. For academic fields like that, it's impossible for a regular person to do as a hobby. Might as well just do puzzles or something.
With AI, I now think much harder. Timelines are shorter, big decisions are closer together, and more system interactions have to be "grokked" in my head to guide the model properly.
I'm more spent than before, when I would spend 2 hours wrestling with Tailwind classes or testing API endpoints manually by typing JSON shapes myself.
Pre-processed food consumer complains about not cooking anymore. /s
... OK, I guess. I mean, sorry, but if it's a revelation to you that using a skill less means you hone it less, you were clearly NOT thinking hard BEFORE you started using AI. AI sure didn't help, but the problem didn't start then.