I had access to GitHub Copilot as a student in early 2022 while learning Haskell, and I immediately realised it would hinder my learning if I didn't turn it off and implicitly follow this understand, plan, execute, reflect loop.
AI products like Cursor have the notion of an 'autonomy slider' [1] that can fortunately be turned all the way down (disable Cursor Tab), but relying on that discipline seems fragile when, with the right agentic loops [2] and context engineering, thousands of lines of code can be churned out with minimal supervision.
I've considered always working on two projects over a long timespan: one with no AI assistance, possibly in a separate IDE like Zed, and one in Vibe Kanban (my current daily driver). But this feels like an inefficient proxy for accelerating this four-step learning loop with a tool like solveit.
Since the solveit product isn't released and seemingly isn't competing with existing solutions, is there an opportunity to convey how AI product developers should be thinking about amplifying their users and keeping them in the learning loop?
So far, I've seen Claude Code's Learning output style [3] and ChatGPT's study mode, but in those cases the only product change is a prompt, and solveit is more than that.
I'm sure I'm not the only one confused by this, but can you give details on why you decided that a course was necessary to learn this new way of working with AI?
Maybe it's more of an alpha thing, but with millions using chatbots every day, was it not possible to develop a UI?
It's not just about adoption: who has the time to spend 5 weeks learning a new tool? Particularly when you're competing with the existing tools?
The course is about a methodology, not a product. It's the ideas Eric Ries and I have been working on for decades. 5 weeks is a crash course that can only touch on the ideas. And it covers learning data structures and algorithms, foundations of web programming, system administration, startup creation, and much more.
It's really a rapid "how to do <x> the solveit way" for a variety of x. Each of those x is likely to become a full course in the future.
(We actually built the tool for ourselves, and only decided to make it publicly available when we realised how much it's helping us. We're a PBC so our mission is not entirely financial. We're not trying to compete with existing tools, but provide an alternative direction.)
We made the tool, and that will eventually be available on its own. But the method requires some discipline and 'unlearning'. It's very hard to show someone an AI tool and not have them treat it just like ChatGPT/Claude/... - that's the part that takes the time, and having a community of people working through different examples and case studies together is a lot more motivating for this than just staring at an empty prompt box :)
That's a good explanation, but given the expectation set when someone says they are "launching something" that is an antidote to AI fatigue, it may be better to say it's a course and a methodology. You aren't launching a tool.
I think if you give it a try, you'll be surprised. It is a course and a tool and a way of thinking. We often struggle to find concise language to describe something that is fundamentally new. Maybe after you've tried it, you'll be able to help us explain it better.
I'm deep in category creation myself, so I know exactly where you're at.
But as I'm sure you know, you need to get the language right in order to create the desire to try.
I don't personally have AI fatigue. Nor do I have the time to spend 5 weeks taking a course to use a tool that I don't have enough context for.
Being in an Australian timezone and launching a start-up doesn't help either.
This doesn't mean in any way that I'm not rooting for your success. But as you know, finding the language for something new is a long, iterative process.
Hey everyone, Eric Ries here. solveit is the AI environment I personally have been using every day for months, not just for code but for writing and research, too.
it’s solved all the problems and frustrations I’ve had with both vibecoding and the limitations of the chatbot interface for doing deep work that requires concentration + the ability to understand the artifacts you are producing
and, as a special bonus, people in this course will get a sneak preview of the new book I’m working on. we’re going to use it both to teach some of the concepts from it (on how to create mission-driven long-term companies) and how to use solveit for longform writing projects
happy to answer any questions here for folks that want to learn more.
(For those that don't recognise the name, Eric is the creator of The Lean Startup, and also founded the Long Term Stock Exchange. He's the co-founder of Answer.AI, which has built the solveit platform, and which fast.ai is now part of.)
And this prompted me to record a video showing some of my random non-work usage recently, to give a feeling for what the app looks like :) https://youtu.be/Y2B27hdKMMA
Here is a video showing off using solveit for creating a web app. https://youtu.be/DgPr3HVp0eg?t=3120 To reiterate other comments, this is more about the methodology than the tool, but it is fun to see the tool in action too.
Thanks Eric (and Jeremy and Johno). The course details are a bit sparse on the sign-up site. What's the expected time commitment over the 5 weeks? And how useful would the course be if you missed a few of the sessions and had to catch up later?
Everything is recorded so it doesn't matter at all if you catch up later. Some people in the preview course didn't start until we finished! Obviously you'll get better interaction with the community if you're following along at the same time, but that's the only real issue.
I'd say budget a minimum of 4 hours of homework plus 3-4 hours of lesson-watching time.
I thought it might be helpful to post a link to one of my favorite writeups from the beta cohort for solveit (last year). It's written by Chris Thomas:
Being among the first 1000 people to experience SolveIt has felt like witnessing the early days of a significant shift in how we work with AI. As someone who is a seasoned programmer, I have seen many programming paradigms and the advance of AI coding tools. What makes SolveIt different is not just another tool or framework - it is a fundamental rethinking of the human-AI relationship.
As I look at my experience with SolveIt, I think this is a better more sustainable approach to AI-assisted development. The current trend of ever more powerful models generating ever larger blocks of code feels unsustainable. SolveIt offers a different path. By maintaining human agency, working in comprehensible increments and building genuine understanding at each step, it creates a positive feedback loop where both human and AI capabilities grow stronger over time. This represents a partnership model that builds competence over time rather than creating dependence.
The implications extend far beyond programming. Whether I am implementing computer vision algorithms, exploring culinary science, or writing technical articles - the same principles apply. Small steps, continuous understanding, iterative refinement and always keeping the human as the agent in the process.
As a self-taught hobbyist, I progressed pretty far in Advent of Code 2023 until I gave up, and less far in 2024. But my approach seems close to the one described (if you dig a little deeper into the signup page), or so I imagine. I was disciplined about not asking for help with the problem itself, and went to ChatGPT for help with components or syntax I needed to build a function I already had in mind (which was closer to the state of the art, especially in 2023, anyway). I think the Advent of Code problems are really interesting, and I have enjoyed solving them and watching others solve the ones I couldn't. They are a fun way to frame the course. However, the real value to me is learning how to approach more ambiguous problem spaces. I am definitely interested.
I participated in the first cohort and I'll be doing it again because I enjoyed it so much the first time around. The course focuses on teaching a robust problem solving approach, rather than explicitly teaching people how to program with AI. It's not a dream or a scam! If you digest the course concepts, the takeaways can be put into practice even if you aren't programming directly in SolveIt. But the SolveIt application certainly greases the wheels by making this problem-solving approach easier and fun! My growth as a programmer has been supercharged by what I learned and applied in the past year since taking the first course.
I'd urge folks here to at least go through https://www.youtube.com/watch?v=DgPr3HVp0eg before jumping to conclusions about what SolveIt or this course is about. It's the polar opposite of vibe coding IMO.
I watched the video and it’s basically a “Jupyter notebook” type of app where you can type code and chat with the AI.
It’s nice because it makes the interaction more dynamic and iterative. Honestly, the “changing the answer” thing is something I always did on LM Studio when I wanted to change course. Definitely better than the limited interfaces of chatbots today, but I’m not sure it’s “revolutionary” by any means.
Still, it’s something I’d prefer until someone finds a better way to interact with LLMs. The ability to add stuff, remove stuff, move things around, etc. probably helps a lot when you’re creating something. It better matches the state of our minds. I also appreciate the use of AI for learning instead of producing slop or — like many posts I’ve seen before — optimized spam.
I’m not sure what’s up with the course though. Seems more like a way to try to monetize something that wouldn’t be monetizable in any other way.
In fact I’m sure someone more knowledgeable than me could just create a Jupyter Notebook plugin that would replicate most of what this is?
Yep, asking questions directly in the notebook, then editing the answer and executing it, would be very powerful. I think `jupyter-ai` is close to this, judging by its description, but I haven't tried it.
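The "edit the answer" pattern discussed here is easy to sketch outside any particular tool: with a chat API that takes an explicit message list, you can rewrite a previous assistant turn in the history before continuing, so later replies build on the corrected version. A minimal illustration (no real API calls; the function name is made up for this sketch):

```python
def edit_last_assistant_message(history, new_content):
    """Replace the most recent assistant turn in a chat history, in place."""
    for msg in reversed(history):
        if msg["role"] == "assistant":
            msg["content"] = new_content
            return history
    raise ValueError("no assistant message to edit")

history = [
    {"role": "user", "content": "Write a function that doubles a number."},
    {"role": "assistant", "content": "def double(x): return x + x + x"},  # wrong
]

# Instead of telling the model it was wrong, rewrite its answer directly;
# every subsequent turn then sees the corrected code as context.
edit_last_assistant_message(history, "def double(x): return x * 2")
print(history[-1]["content"])  # def double(x): return x * 2
```

The point is that the conversation history is just data you own, so "correcting the model" can be an edit rather than another round trip.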
Why does this article not mention what solveit is at all? It talks about what they did, then that they made this tool, then that it's great, but what is it? Watch this video!
No, give me a sentence or two about what it does. I'm not watching a video about a tool while reading a blog post about it because you couldn't be bothered to write a line or two about it.
As someone who participated in the first cohort but is not part of their team, I would say it's a programming environment for AI-assisted literate programming.
It's like an intelligent notebook. That means you could use it for many different things, but at least to me the high-order bit is "AI-assisted literate programming".
Considering how the folks at answer.ai have been using (successive versions of) it to build the tool itself, and judging by student projects and showcases, it definitely goes beyond exploratory use. You can build big stuff with it.
Personally I’m using it to learn the whole fastai ecosystem.
That's fair! I guess since it's a new thing that doesn't quite neatly fit in a category, we were perhaps too shy about trying to define it. Also, we really want to focus on the methodology, rather than the platform. But yes, you're right we should explain the platform too. :) I'll have a go here, and will then go and add it to Johno's article.
So basically, you can think of the platform as combining all these: ChatGPT; Jupyter Notebook + nbdev; Bits of vscode/cursor (we embed the same Monaco editor and add similar optional AI and non-AI autocompletion); a VPS (you get your own persistent full VPS running Linux with a URL you can share for public running applications); Claude Code (all the same tools are available); a persistent terminal.
Then there are some bits added that don't exist elsewhere AFAIK: something like MCP, but way simpler, where any Python function can be instantly used as an AI tool; the ability to refer directly to any live Python variable in AI context (but optionally, so it doesn't eat up your context window); full metaprogramming of the environment (you can, through code or AI tools, modify the environment itself or the dialog); context editing (you can -- and should -- directly edit AI responses instead of telling the AI it's wrong); collaborative notebook coding (multiple people can edit the dialog, run code, etc., and all see live updates).
The combination of these things is rather different (and IMO better!) than the sum of its parts, but hopefully this helps a bit?
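To make the "any Python function can be instantly used as an AI tool" idea concrete: the core trick is that a plain function's signature and docstring already carry enough information to build a tool description for a model. This is purely an illustrative sketch of that idea, not solveit's actual implementation; all names here are made up.

```python
import inspect

def tool_schema(fn):
    """Derive a minimal tool description from an ordinary function."""
    sig = inspect.signature(fn)
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        # Map each parameter to its annotation's name, falling back to repr.
        "parameters": {name: getattr(p.annotation, "__name__", str(p.annotation))
                       for name, p in sig.parameters.items()},
    }

def word_count(text: str) -> int:
    """Count whitespace-separated words in a string."""
    return len(text.split())

schema = tool_schema(word_count)
print(schema["name"], schema["parameters"])  # word_count {'text': 'str'}
```

A chat loop could hand `schema` to the model and dispatch tool calls straight back to `word_count`, which is roughly what "simpler than MCP" suggests: no server, no registration ceremony, just live functions.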
This is such a steep price tag. I loved what Jeremy Howard put up on fast.ai and respect the heck out of his team, but I've seen too many people scammed by online courses that sell a dream. This one seems to be selling a dream as well.
I'll be purchasing the course to try it out but I think my concern is not a one-off thing.
I participated in the first batch and am not a shill, look through my comment history.
The dismissive comments here pain me, as I've seen them work hard on this over the last year, integrating many of our feature requests and building out the platform. I've also had time to let the ideas sink in.
You definitely can't hang back and expect some magic AI to do all the work for you.
I also can't say "you will definitely benefit", since everybody is different.
But I can honestly say it's the real deal, no ifs and buts.
Hi all - Jeremy Howard from Answer.AI here. Really excited to share with you all what we've learned over the past year about how to work with AI in a way that's entirely human-centered.
Whilst most folks seem focused on how to remove humans from the loop entirely and make AIs do all the work, we've concentrated 100% on how to make humans part of the loop in a way that makes us more and more capable and engaged.
I've enjoyed building and using our tool, "solveit", for the last year, and do basically all my coding, writing, reading, research, etc in it nowadays. I use small fast iterative steps and work to maximize my learning at each step.
As the post mentions, a year ago we did a trial of it, and have been working with that group of 1000 users since then.
You obviously see a ton of value here, but a bunch of industry professionals still aren’t getting it. This is a communication problem. Y’all probably should consider investing in a (different?) marketing or communications consultant.
It's not a course on how to learn how to use the product. It's a course on how to think and solve problems, which makes you more effective in using the platform.
Speed reading, so I hope I have not missed existing answers to these questions:
1. Does the 400 cover access to the platform, and if so, for what duration, and with what usage quotas, etc.?
2. Can we purchase access to the platform without the course, e.g. let us play with it for N months (the course seems aimed at junior-level devs)?
3. The tools and method, as described, don't mention anything for minimizing divergence/conflation/hallucination and maximizing repeatability/verifiability, etc. Do you have a prompt optimization tool (à la DSPy), a built-in pipeline that uses one LLM to cross-check another, or one that uses deterministic tools to verify the output of an LLM (see Campbell and Wimsatt's heuristics for managing/triangulating in uncertainty)? I'd be more interested in this new methodology if you were tackling head-on the inherent limitations of LLMs (e.g. Yann LeCun's vision).
The 400 includes platform access from signup until a few weeks after the course finishes. No quota - use whatever you need. You can't purchase access to the platform directly. It's not aimed at junior devs - e.g. here's a post from a dev with 25 years of experience: https://christhomas.co.uk/blog/2025/09/24/the-human-is-the-a... . Alums from the preview course include professors, senior VPs, etc. Our approach to using AIs is very different to automated approaches like DSPy -- it is all about human-in-the-loop; the human is the agent. We do tackle head-on the inherent limitations of LLMs, but not using anything like JEPA (LeCun's approach).
A great course, and it helps you learn proactively, not reactively. Important in this age of AI. I was part of cohort 1 and have enrolled for the next one.
Without the intent of hijacking whatever it is you are trying to achieve: I've found the best antidote to AI fatigue is to rely less on it. There is no way I am going to spend my day, or my employees' days for that matter, reviewing thousands of LoC of bad AI slop. In my teams we've dialed it back to just consulting mode and asking for suggestions, e.g. as a replacement for good old search, because once you ask the GenAI to write a few thousand LoC for you, you're also abandoning a lot of architecture decisions (which you then have to figure out ad hoc again when you review the slop, or at the latest when you notice the "smart" tool has yet again put a secret in plaintext or something similar). So yeah, if you have the so-called AI fatigue, just use less of the so-called AI.
Absolutely right! Although I've found AI great for learning. It doesn't write my code for me, or my prose, or my devops scripts, but it's part of the process of me learning as I write them.
Ok, so after watching the Latent Space pod and going through their docs, my understanding of what they are doing is:
- solveit is mostly an approach, less so an app; they also built a Jupyter-notebook-style IDE with branching and deep chat integration, plus a stateful VM, which looks neat though
- my definition of the philosophical approach is: don’t let AI ever generate code you don’t understand, be there every step of the way, and build things “bottom up” in a very incremental way (basically exactly how Jupyter notebooks have always worked)
Now, my view on the philosophy: I actually don’t think I agree. With Claude Code and Codex, my preferred development flow is “top down”: declare what you want at a high level, let the AI build some complete construct (making sure everything is type safe), and then dig in, understand this construct, and iron out all of the details. I don’t think I need to fully understand each intermediate result.
I absolutely fundamentally agree that every line of the final output has to be fully understood and signed off by you.
Honestly, their videos and posts are very difficult to understand, so if I misunderstood something I’m super happy to be corrected. I have high respect for the team and it’s nice to see them excited.
Yes I think you've done a good job of capturing the key ideas. Thank you for taking the time to look into it!
I think you might be somewhat under-estimating the amount of novel ideas in, and the significance of, the solveit platform. But "chatgpt + vscode editor + jupyter + a persistent Linux server" is a reasonable starting point for thinking about what it consists of.
It's totally fine to disagree on the philosophy. I suspect it'll turn out that both the approach you describe and our very different approach can both work, and are probably suited for different kinds of people and tasks.
It's very early days and there's no established methodologies for AI-assisted software dev. I know our approach works and scales across a team of 10 and over a period of >1 year (and across multiple connected projects), since that's what we do in-house.
I couldn't love more all the intentions behind this, but I have no idea what it is. Why would someone who already knows and loves Polya and iterative programming and human-centered technology... use this? What is the value add?
But, honestly, I hope that list makes you see why we had a hard time figuring out how to summarize what solveit is. For example, I use it for research and writing all the time, but you'd have a hard time seeing why a notebook plus a private VPS would do much for that use case. But it does! Having a general-purpose computing environment is just very, very useful in a wide range of situations.
I personally think of it as a tool for thought: building your solution and your knowledge at the same time by leaning on what LLMs do well, while avoiding the common pitfalls.
One thing I loved about the first solveit course was how great the community is. It goes back to fast.ai too; everyone is super kind, smart, and from diverse backgrounds.
That testimonial page looks super fake. Why does the same Mathew Miller have 7 testimonials? Similarly, Pol Alvarez Vecino, Pierre Porcher and Duane Milne all have 6 testimonials. And almost all other users also have multiple testimonials. This can't be organic.
At the end of the last course we put out a survey and asked for feedback. There were multiple questions in the feedback form. That's why there are multiple quotes from each person.
I apologise folks that we did a bad job of explaining exactly what we're launching! My bad. :( I've added this to the top of the article now -- I hope this does a better job of explaining things:
----
*tldr from Jeremy:* You can now sign up for Solveit, which is a course in how to solve problems (including coding, writing, sysadmin, and research) using fast, short iterations, and which also provides a platform that makes this approach easier and more effective. The course shows how to use AI in small doses to help you learn as you build, but doesn't rely on AI at all -- you can totally avoid AI if you prefer. The approach we teach is based on decades of research and practice from Eric Ries and me, the founders of Answer.AI. It's basically the opposite of "vibe coding"; it's all about small steps, deep understanding, and deep reflection. We wrote the platform because we didn't find anything else sufficient for doing work the "solveit way", so we made something for ourselves, and then decided to make it available more widely. You can follow the approach without using our platform, although it won't be as smooth an experience.
The platform combines elements of all these: ChatGPT; Jupyter Notebook + nbdev; bits of vscode/cursor (we embed the same Monaco editor and add similar optional AI and non-AI autocompletion); a VPS (you get your own persistent full VPS running Linux, with a URL you can share for publicly running applications); Claude Code (all the same tools are available); a persistent terminal. Then there are some bits added that don't exist elsewhere AFAIK: something like MCP, but way simpler, where any Python function can be instantly used as an AI tool; the ability to refer directly to any live Python variable in AI context (but optionally, so it doesn't eat up your context window); full metaprogramming of the environment (you can, through code or AI tools, modify the environment itself or the dialog); context editing (you can -- and should -- directly edit AI responses instead of telling the AI it's wrong); collaborative notebook coding (multiple people can edit the dialog, run code, etc., and all see live updates).
I read most of this before understanding that I wasn't reading about some new agent or IDE or something, I was reading a sales pitch for a coding course for would-be vibe coders, with AI training wheels in the form of ... a dialog box to talk to an LLM.
I should have noticed the camp counselor / cultish / TEDx vibes, throwing around REPL and feedback loops. I feel that it's somewhat misleading to present this as some amazing self-building software or server platform here, and to bury the lede that what's being sold is an experimental tutoring method. It's almost like those "I built an AI agent that builds AI agents" posts, only instead of selling the sixty lines of Python, it's selling a set of lectures that goes with them.
Let me see.
Fastmail?
The first LLM which actually used transfer learning for NLP?
One of the most wildly useful and successful deep learning courses [available for free]?
A big chunk of what is Kaggle?
Can you provide some links? Because I see that Eric Ries has a resume on Wikipedia that mainly highlights his book, "The Lean Startup." I see that he was adjacent to some dot-com-bubble-era startups. I see he has a handsome photo. I don't see where he actually founded a successful startup; if anything, reading his resume makes me think he stopped coding and discovered a more successful career in selling promises to young people that his methodologies would turn them into successful entrepreneurs. To me, that does sound like a grift. I mean, why bother doing the actual work of starting a startup, coding and solving lots of problems, when you can present yourself as a guru in how startups work, right? Smart. Also, shady.
Apparently he also runs a stock exchange with two companies on it. And a lot of "core principles". Lol. Speaking as someone who coded and ran the first Bitcoin casino and was around a lot of early crypto bullshit in the nascent years of BTC, when lots of dudes like this had crazy plans to commoditize it in all sorts of ways, this is juvenile boiler room stuff that would have been laughed out of the Bitcoin Business Association Skype chat in 2010. (And yes, Dread Pirate Roberts was there for a sec, and the general level of dialog was a far sight more intelligent than this dreck.)
I got hell-banned here for criticizing PG for running this very site basically to achieve the same grift - to form a cult of young people who'd worship him in exchange for pie in the sky promises that they would become successful startup founders. But to be fair, PG has both actual experience and a ton of investment capital to prove it, so his cult followers have at least some chance of receiving an investment (or a gift, if you think kissing his ass is the essential requirement) that will catapult them into another echelon.
These guys are just living the mantra of "fake it til you make it." This reminds me of the $500 I spent when I turned 21 to take a two-week bartending class. Loads of fun. End result: there was one job on their board for graduates, for a bar that had been closed for a couple of years. Turned out the best way to become a bartender was to learn on the job.
Turned out that was also the best way to become a software engineer.
The AI is an integral part of the platform. My impression from reading the reviews on their website is that you won’t go anywhere using Solveit if you don’t let the LLM give you feedback.
I was in the first batch, I'll be in the second and I'll probably try to be in any of the following.
This is not a course about AI only. It is about learning how to learn and develop yourself, especially in the coming AI age. The concepts learned here apply to solving other problems in life. I would recommend it even to non-programmers.
The whole industry is moving towards vibe-coding, where we delegate more and more to AI while we do less and less ourselves. The big issue with this, of course, is that you stop learning. You stop using the ability to put effort into things, and those abilities, like muscles, atrophy without use.
It is very tempting to dismiss the SolveIt approach as an outlier; at the end of the day, if the whole industry is moving towards full automation instead of careful coding together with the AI, they are probably right, right?
Well, AI getting better at coding does not mean YOU are getting better at it. In fact, it's actually the opposite: without practicing your ability to learn and solve problems, you will lose it. On another level, the kind of one-shotting where you give the AI a problem, come back 30 minutes (or more!) later to check if it's done, and throw the result away if it went off the rails is psychologically harmful. It is the same pattern slot machines operate on, and what gambling is about.
I'm not saying vibe-coding is always bad, just that there are a lot of caveats that people do not seem to be concerned about.
Those concerns are front and center in the SolveIt approach: Jeremy has a very thorough understanding of meta-learning, for example; the slot-machine insight was shared by Johno; and the full team has a range of experiences that are rare in the narrow-minded AGI-at-all-costs world of Silicon Valley.
Your psychological health, learning abilities, and general happiness in life are rarely the main concern of VCs, but they come first and foremost at Answer.ai (they are even a public benefit corporation; look it up if you don't know what that is).
With that in mind, I recommend everyone take the course, because it is a multi-dimensional experience that will make you grow as a person and as an engineer. Just look at previous fast.ai courses & students.
And finally, if you end up joining because you read this message, I'd love it if you get in touch in the Discord server; my name is pol_avec.
I participated in the first cohort in January 2025 and have been using SolveIt ever since. I would like to share some thoughts about my experience of the course, platform and the community of makers & users.
Explaining SolveIt is hard
How do you explain to someone who's never written a line of code what programming feels like? When I'd visit my father-in-law, he'd ask me at breakfast what I was going to do that day. Every time I would give him the same answer: I will stare at the same blue screen all day long like I have been doing on every average workday since I started programming ~15 years ago. It's become our little family joke, but it's also true - it's difficult to explain to someone who has never done it.
The question is what does that blue screen work actually involve? What does programming feel like? How do you live through it? What is your experience when you solve problems - is it painful or agreeable? Do you finish knowing more about the domain of a problem you tackled, or did you just get something working fast and hope it wouldn't collapse on you later?
The SolveIt experience is similarly hard to explain to someone who hasn't tried it. What's interesting is that using the SolveIt method has had positive effects on all these areas - how my work feels, how well I understand what I build, and the quality of what I produce.
For my first bigger work-related project in SolveIt, I built an annotation interface for YouTube dialogue transcripts, for documentary movie footage classification. I'd used Angular before, but not FastHTML and HTMX. I had a working app in production within two weeks, going from 0 to users annotating, which might not sound very impressive in this new agentic era. But when using SolveIt right, the result isn't just working code; it's code that you know, and most importantly a much better understanding of your initial problem and of the potential solutions/technologies for it.
Plus, the SolveIt approach made me examine each component individually - what it represents, what it means for my system, how it could integrate more broadly into our organization. That exploration led to something far more general than my initial annotation app: now we are using it for browsing/editing dataframes through a UI, where columns and cells can have different templates/layout, edit history and several other features I hadn't even thought of initially. The exploration phase took longer, but I gained a foundation I understand deeply and can build on. For core business problems, that knowledge living in your head matters a lot.
My only problem at the time was git integration - I wanted my work synced with my codebase, and manual copy-pasting was too much friction. This has been resolved since: I have my git repos accessible from SolveIt, and AFAIK you can serve apps directly from the platform without needing to deploy them (though I haven't tested this functionality yet).
Cult disclaimer
I was surprised reading through this thread by comments about religious cults, paid marketing campaigns, and fake testimonials. It's true that I registered to Hacker News today just to share my experience with SolveIt. I am a real person though and have not received any money for writing this post.
Instead, for several years now I've been learning from Jeremy Howard and the answer.ai team through their free resources - the fast.ai courses, talks, podcasts, and open source projects. Hamel Husain's work on nbdev also had a huge impact on how I work. All of this is available for free to anyone interested in learning. I hope that many others will discover them and SolveIt. I believe many successful projects, businesses and open source solutions will grow on the SolveIt ground, the same way it happened with fast.ai courses.
On $$$ concerns: I paid $200 for the first prototype cohort. I thought it was worth much more than that - the course itself, plus unlimited 11+ month platform access (including Claude's most recent model), plus the community discussions where I learned from other practitioners how they tackled specific problems from very diverse domains. When cohort 2 was announced, I knew I was gonna sign up even before I knew the price. The ideas you take and friends you make in these communities are priceless.
I was part of the first cohort and I will be taking the course again. SolveIt has a unique take on how to best work with AI that helps produce better end products and helps you learn along the way. The platform is tailored to implementing the methodology taught in the course but the lessons can be taken when working in other AI tools to obtain better results.
I took the 1st course and really loved it. I got a little distracted by focusing on building web apps with FastHTML (which was covered in the course), but I think the practices that are taught are extremely important for anyone using LLMs on a daily basis. I am really excited to learn about all the new features in SolveIt.
I was part of the first SolveIt course cohort and have signed up for the second one too. SolveIt and its methodology really changed how I approach programming and problem solving with AI. It gives you a collaborative partner that helps you think through problems without just handing you answers. You still do the hard thinking, but you are never completely stuck. It has also allowed me to tackle tasks, ideas and experiments that I had previously put to the side knowing they would have taken too much time and effort to explore.
Although it can take more steps and iterations than other tools, that is part of the difference: the steps are more considered and thought through. It makes me more productive overall. The ability to edit the conversation and work in short steps lets you create better context, while sharing your thought process and building genuine understanding that is useful in the future.
It is refreshing to find an approach that makes me better rather than just faster at producing code without full understanding of it. I find it jarring to use chat with other LLMs now, and typical code completion can feel frustrating; that is how different it is.
Upfront: as a member of the first student cohort and a non-user of Hacker News, this is my opinion. I've relied on this tool consistently over the past year. While it's not the only AI I use, it's the only one that provides a comfortable and effective workflow. The core strength, for me, lies in my agency: I control the decision-making and discuss the implementation steps. While I may generate fewer lines of code than my peers, the work is solid, well-explained, and easily iterable. Also, I know the code I just wrote; the ability to articulate the function of every line of code is a crucial benefit that I deeply value. It's the sweet spot for me, and I feel more productive.
I've been using SolveIt for about a month and going through the previous cohort content. It's really hard not to just vomit out solutions, and to instead go about things in a methodical way. It makes your work visible, maintainable and usable by others (and frankly yourself). It takes a lot of practice, but it is very much intangible and always in the process of becoming. If you want to focus on solving pretty much any problem, and a path to using AI effectively in any endeavor, then this is a really fulfilling path. You will find that there is more magic in AI than you suspect. Many praises to Jeremy and Johno and the SolveIt team. But don't take my word for it. I'm a fresh convert. Go check it out for yourself. It is subtle and requires nuance, so not for everyone.
Strong recommend. I did the trial course, which meant I could complete Advent of Code and also build an app that takes you to the moment in the course videos where topics were discussed. With SolveIt you can have your hand held while you go from idea to result, learning as you go.
These achievements may not sound like much to hard-core tech bros, but that's the beauty of the problem solving method - it meets you where you are. After working through your problem you'll have learnt something and be able to tackle harder tasks next.
It's been fun watching the team continuously evolve the product over the last year. Looking forward to round 2.
OK thanks. We've moved those comments (and this one) to a hidden "stub" subthread.
A reminder to everyone that the FAQ includes this:
Can I ask people to comment on my submission?
No, for the same reason. It's also not in your interest: HN readers are sensitive to this and will detect it, flag it, and use unkind words like 'spam'.
==
For all we know the Solveit guys didn't ask anyone to come and comment here, so we don't want to judge them (and they're good guys who have contributed positively to HN in the past). Sometimes people in a community – particularly an engaged and loyal one like Solveit’s seems to be – can be "too helpful" and act as though they're astroturfing, even though they're just sharing their excitement about the product and community.
The problem on HN is that it can be hard to tell the difference between authentic excitement and astroturfing.
Thanks for the assumption of good faith. We had no intent to astroturf. Some of the students from cohort one were discussing the negativity of the HN thread on the course discord server. It seems that several took it upon themselves to come share their experience. From the comments whose names I recognize, I can assure that these are all real people who have first-hand knowledge of what they're posting about.
The Solveit community is so nice and wholesome that I see many comments being dismissed as astroturfing. Maybe HN is not used to this, but I can tell you all the people in there are for real.
You can take my case: I started writing and posting videos after taking the SolveIt course, because you are encouraged to do so. Many people in the community have started doing the same, and that's why they might not have much of a posting history yet.
Moreover, the course is such a good experience that when there's a chance to share it publicly and invite more people to it (after a year of private beta), members go out of their way (into HN) to try to invite more people.
I have seen all the people you mention in the discord and chat with some of them so I can tell you they are real.
Because, once again, my comment might sound like astroturfing: you can have a look at my YouTube & X, where I've been posting about this stuff for quite a while.
I had access to GitHub Copilot as a student in early 2022 while learning Haskell and immediately realised that it would hinder my learning if I didn't turn it off and implicitly follow this understand, plan, execute, reflect loop.
AI products like Cursor have the notion of an 'autonomy slider' [1] that can fortunately be turned all the way down (disable Cursor Tab), but relying on this discipline seems fickle when, with the right agentic loops [2] and context engineering, thousands of lines of code can be churned out with minimal supervision.
I've considered always working on two projects over a long timespan: one with no AI assistance, possibly in a separate IDE like Zed, and one in Vibe Kanban (my current daily driver). But this feels like an inefficient proxy compared to accelerating this four-step learning loop with a tool like solveit.
Since the solveit product isn't released and seemingly isn't competing with existing solutions, is there an opportunity to convey how AI product developers should be thinking about amplifying their users and keeping them in the learning loop?
So far, I've seen Claude Code's Learning output style [3] and ChatGPT's study mode, but in those cases the only product change is a prompt, and solveit is more than that.
[1] https://www.latent.space/i/166191505/part-a-autonomy-sliders [2] https://simonwillison.net/2025/Sep/30/designing-agentic-loop... [3] https://docs.claude.com/en/docs/claude-code/output-styles#bu...
I'm sure I'm not the only one confused by this, but can you give details on why you decided that a course was necessary to learn this new way of working with AI?
Maybe it's more of an alpha thing, but with millions using chatbots every day, was it not possible to develop a UI?
It's not just about adoption, who has the time to spend 5 weeks learning a new tool? Particularly when you're competing with the existing tools?
The course is about a methodology, not a product. It's the ideas Eric Ries and I have been working on for decades. 5 weeks is a crash course that can only touch on the ideas. And it covers learning data structures and algorithms, foundations of web programming, system administration, startup creation, and much more.
It's really a rapid "how to do <x> the solveit way" for a variety of x. Each of those x is likely to become a full course in the future.
(We actually built the tool for ourselves, and only decided to make it publicly available when we realised how much it's helping us. We're a PBC so our mission is not entirely financial. We're not trying to compete with existing tools, but provide an alternative direction.)
We made the tool, and that will eventually be available on its own. But the method requires some discipline and 'unlearning'. It's very hard to show someone an AI tool and not have them treat it just like ChatGPT/Claude/... - that's the part that takes the time, and having a community of people working through different examples and case studies together is a lot more motivating for this than just staring at an empty prompt box :)
That's a good explanation, but given the expectation people have when someone says they are "launching" something that is an antidote to AI fatigue, it may be better to say it's a course and a methodology. You aren't launching a tool.
I think if you give it a try, you'll be surprised. It is a course and a tool and a way of thinking. We often struggle to find concise language to describe something that is fundamentally new. Maybe after you've tried it, you'll be able to help us explain it better.
I'm deep in category creation myself, so I know exactly where you're at.
But as I'm sure you know, you need to get the language right in order to create the desire to try.
I don't personally have AI fatigue. Nor do I have the time to spend 5 weeks taking a course to use a tool that I don't have enough context for.
Being in Australia, timezone-wise, and launching a start-up doesn't help.
This doesn't mean in any way that I'm not rooting for your success. But as you know, the language of understanding something new is a long iterative process.
Hey everyone, Eric Ries here. solveit is the AI environment I personally have been using every day for months, not just for code but for writing and research, too.
it’s solved all the problems and frustrations I’ve had with both vibecoding and the limitations of the chatbot interface for doing deep work that requires concentration + the ability to understand the artifacts you are producing
and, as a special bonus, people in this course will get a sneak preview of the new book I’m working on. we’re going to use it both to teach some of the concepts from it (on how to create mission-driven long-term companies) and how to use solveit for longform writing projects
happy to answer any questions here, for folks that want to learn more,
Eric
(For those that don't recognise the name, Eric is the creator of The Lean Startup, and also founded the Long Term Stock Exchange. He's the co-founder of Answer.AI, which has built the solveit platform, and which fast.ai is now part of.)
Oops that should say "which fast.ai is now part of". s/not/now/
I fixed it :)
do you have a video of someone using it?
And this prompted me to record a video showing some of my random non-work usage recently, to give a feeling for what the app looks like :) https://youtu.be/Y2B27hdKMMA
Here is a video showing off using solveit for creating a web app. https://youtu.be/DgPr3HVp0eg?t=3120 To reiterate other comments, this is more about the methodology than the tool, but it is fun to see the tool in action too.
Thanks Erik - I've added that video to the article now! :)
We also showed it as part of Hamel's course: https://x.com/HamelHusain/status/1956514524628127875 (https://www.youtube.com/watch?v=DgPr3HVp0eg) which is a longer example of the tool in action
Latent Space just released an interview about Solveit: https://www.youtube.com/watch?v=01ybLOH1fnU
I loved this interview, I can't wait to learn more about the tool use aspect.
Thanks Eric (and Jeremy and Johno). The course details are a bit sparse on the sign-up site. What's the expected time commitment for the course over the 5 weeks? And how useful would the course be if you missed a few of the courses and had to catch up later?
Everything is recorded so it doesn't matter at all if you catch up later. Some people in the preview course didn't start until we finished! Obviously you'll get better interaction with the community if you're following along at the same time, but that's the only real issue.
I'd say budget a minimum of 4 hours of homework + 3-4 hours of lesson-watching time.
I thought it might be helpful to post a link to one of my favorite writeups from the beta cohort for solveit (last year). It's written by Chris Thomas:
https://christhomas.co.uk/blog/2025/09/24/the-human-is-the-a...
a few quotes/excerpts:
As a self taught hobbyist I progressed pretty far in advent of code 2023 until I gave up and less so in 2024 but my approach seems to be close to the one described (if you dig a little deeper into the signup page) or so I imagine. I was disciplined about not asking for help with the problem itself and went to chatgpt for help with components or syntax I needed to build a function I already had in mind (which was closer to the state of the art - especially in 2023 anyway). I think the advent of code problems are really interesting and have enjoyed solving them and watching others solve the ones I couldn’t. They are a fun way to frame the course. However the real value to me is learning how to approach more ambiguous problem spaces. I am definitely interested.
I participated in the first cohort and I'll be doing it again because I enjoyed it so much the first time around. The course focuses on teaching a robust problem solving approach, rather than explicitly teaching people how to program with AI. It's not a dream or a scam! If you digest the course concepts, the takeaways can be put into practice even if you aren't programming directly in SolveIt. But the SolveIt application certainly greases the wheels by making this problem-solving approach easier and fun! My growth as a programmer has been supercharged by what I learned and applied in the past year since taking the first course.
I'd urge folks here to at least go through https://www.youtube.com/watch?v=DgPr3HVp0eg before jumping to conclusions about what SolveIt or this course is about. It's the polar opposite of vibe coding, IMO.
I watched the video and it’s basically a “Jupyter notebook” type of app where you can type code and chat with the AI.
It's nice because it makes the interaction more dynamic and iterative. Honestly, the "changing the answer" thing is something I always did in LM Studio when I wanted to change course. Definitely better than the limited interfaces of today's chatbots, but I'm not sure it's "revolutionary" by any means.
Still, it's something I'd prefer until someone finds a better way to interact with LLMs. The ability to add stuff, remove stuff, move things around, etc. probably helps a lot when you're creating something. It better matches the state of our minds. Also, I appreciate the "using AI for learning" angle, instead of producing slop or, like many posts I've seen before, optimized spam.
I’m not sure what’s up with the course though. Seems more like a way to try to monetize something that wouldn’t be monetizable in any other way.
In fact I’m sure someone more knowledgeable than me could just create a Jupyter Notebook plugin that would replicate most of what this is?
Yep, asking questions directly in the notebook, then editing the answer and executing it would be very powerful. I think `jupyter-ai` is close to this, judging by its description, but I didn't try it.
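To make the "ask, then edit the answer, then execute it" idea concrete, here's a toy sketch. `ask_llm` here is a placeholder, not any real API; the point is just that in a notebook-style dialog the model's reply is ordinary text that the human can correct before running it:

```python
# Toy sketch of the "edit the answer, then run it" loop. `ask_llm` is a
# stand-in for any chat API -- no real model is called here.
dialog = []

def ask_llm(prompt):
    # Stand-in reply: the model returns code as a plain string.
    return "result = sum(range(10))"

dialog.append(ask_llm("Sum the numbers 1-10"))

# The human edits the model's answer in place before executing it,
# instead of arguing with the model in a follow-up message.
dialog[-1] = "result = sum(range(1, 11))"

namespace = {}
exec(dialog[-1], namespace)   # execute the *edited* answer
print(namespace["result"])    # 55
```

The edited answer also stays in the dialog, so later turns build on the corrected version rather than the model's original mistake.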
Why does this article not mention what solveit is at all? It talks about what they did, then that they made this tool, then that it's great, but what is it? Watch this video!
No, give me a sentence or two about what it does. I'm not watching a video about a tool while reading a blog post about it because you couldn't be bothered to write a line or two about it.
As someone who participated in the first cohort but is not part of their team, I would say it's a programming environment for AI-assisted literate programming.
It's like an intelligent notebook. That means you could use this for many different things, but at least to me the high-order bit is "AI-assisted literate programming".
I see, thank you. Is this more of an exploratory programming tool, then? A Jupyter notebook with AI features?
Considering how the folks at answer.ai have been using (successive versions of) it to build this tool itself and judging by student projects and showcases, it definitely goes beyond exploratory. You can build big stuff with it.
Personally I’m using it to learn the whole fastai ecosystem.
I'll give it a shot, thanks!
That's fair! I guess since it's a new thing that doesn't quite neatly fit in a category, we were perhaps too shy about trying to define it. Also, we really want to focus on the methodology, rather than the platform. But yes, you're right we should explain the platform too. :) I'll have a go here, and will then go and add it to Johno's article.
So basically, you can think of the platform as combining all these: ChatGPT; Jupyter Notebook + nbdev; Bits of vscode/cursor (we embed the same Monaco editor and add similar optional AI and non-AI autocompletion); a VPS (you get your own persistent full VPS running Linux with a URL you can share for public running applications); Claude Code (all the same tools are available); a persistent terminal.
Then there are some bits added that don't exist elsewhere AFAIK: something like MCP, but way simpler, where any Python function can be instantly used as an AI tool; the ability to refer directly to any live Python variable in AI context (but optionally, so it doesn't eat up your context window); full metaprogramming of the environment (you can, through code or AI tools, modify the environment itself or the dialog); context editing (you can -- and should -- directly edit AI responses instead of telling the AI it's wrong); collaborative notebook coding (multiple people can edit the dialog, run code, etc., and all see live updates).
The combination of these things is rather different (and IMO better!) than the sum of its parts, but hopefully this helps a bit?
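As an illustration of the "any Python function as an AI tool" idea: most tool-calling APIs only need a name, a description, and a parameter schema, all of which can be derived from a plain function by introspection. The helper below is a rough sketch of that general pattern, not solveit's actual (unpublished) mechanism:

```python
import inspect
import json

def to_tool_schema(fn):
    """Build a minimal JSON-style tool description from a plain Python
    function, using its signature, annotations, and docstring.
    Illustrative only -- real tool-calling APIs vary in schema shape."""
    params = {}
    for name, p in inspect.signature(fn).parameters.items():
        ann = p.annotation
        # Fall back to "string" when a parameter has no annotation.
        params[name] = {"type": ann.__name__ if ann is not inspect.Parameter.empty else "string"}
    return {"name": fn.__name__,
            "description": (fn.__doc__ or "").strip(),
            "parameters": params}

def word_count(text: str) -> int:
    "Count the words in a piece of text."
    return len(text.split())

schema = to_tool_schema(word_count)
print(json.dumps(schema, indent=2))
```

The appeal of this style is that the function you hand to the model is the same one you can call and test directly in the notebook.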
That helps a lot, thanks!
This is such a steep price tag. I loved what Jeremy Howard put up on fast.ai and respect the heck out of his team, but I've seen too many people scammed by online courses that sell a dream. This one seems to be selling a dream as well.
I'll be purchasing the course to try it out but I think my concern is not a one-off thing.
I participated in the first batch and am not a shill, look through my comment history.
The dismissive comments here pain me, as I've seen them work hard on this over the last year, integrating many of our feature requests and building out the platform. I've also had time to let the ideas sink in.
You definitely can't hang back and expect some magic AI to do all the work for you.
I also can't say "you will definitely benefit", since everybody is different.
But I can honestly say it's the real deal, no ifs and buts.
Is this an ad?
I'd say the fact that I'm unable to show purchase intention without being accused of being a shill proves my point.
I was unaware that you had a point.
Hi all - Jeremy Howard from Answer.AI here. Really excited to share with you all what we've learned over the past year about how to work with AI in a way that's entirely human-centered.
Whilst most folks seem focused on how to remove humans from the loop entirely and make AIs do all the work, we've concentrated 100% on how to make humans part of the loop in a way that makes us more and more capable and engaged.
I've enjoyed building and using our tool, "solveit", for the last year, and do basically all my coding, writing, reading, research, etc in it nowadays. I use small fast iterative steps and work to maximize my learning at each step.
I'm slightly confused. From your comment I expected an AI chat tool, but solveit appears to be more a class?
It's both! We built a new platform designed to be a good place for using the approach we've designed. So it's a course, plus access to the platform.
Sorry it is a bit confusing tbh!
No worries, keen to try out your approach.
I see a $400 price tag on a five week course. If it takes 5 weeks to learn how to use your product, I am skeptical that it has legs.
Side note: supposedly this is the first cohort of this course, so how do you already have testimonials?
As the post mentions, a year ago we did a trial of it, and have been working with that group of 1000 users since then.
The course is about a methodology, not a product. It's the ideas Eric Ries and I have been working on for decades. 5 weeks is a crash course that can only touch on the ideas. And it covers learning data structures and algorithms, foundations of web programming, system administration, startup creation, and much more.
It's really a rapid "how to do <x> the solveit way" for a variety of x. Each of those x is likely to become a full course in the future.
You obviously see a ton of value here, but a bunch of industry professionals still aren’t getting it. This is a communication problem. Y’all probably should consider investing in a (different?) marketing or communications consultant.
+1
and a product manager
It's not a course on how to learn how to use the product. It's a course on how to think and solve problems, which makes you more effective in using the platform.
I guess I was lucky. I'm old enough that this was taught to me in school for free. That was way back in the day before it was outlawed.
Speed reading, so I hope I have not missed existing answers to these questions:
1. Does the $400 cover access to the platform, and if so, for what duration and with what usage quotas, etc.?
2. Can we purchase access to the platform without the course, e.g. let us play with it for N months (the course seems aimed at junior-level devs)?
3. The tools and method, as described, don't mention any tools for minimizing divergence/conflation/hallucination and maximizing repeatability/verifiability, etc. Do you have a prompt optimization tool (a la DSPy), a built-in pipeline that uses one LLM to cross-check another, or one that uses deterministic tools to verify the output of an LLM (see Campbell, Wimsatt heuristic for managing/triangulating in uncertainty)? I'd be more interested in this new methodology if you were tackling head-on the inherent limitations of LLMs (e.g. Yann LeCun's vision).
The $400 includes platform access from signup until a few weeks after the course finishes. No quota - use whatever you need. You can't purchase access to the platform directly. It's not aimed at junior devs - e.g. here's a post from a dev with 25 years of experience: https://christhomas.co.uk/blog/2025/09/24/the-human-is-the-a... . Alums from the preview course include professors, senior VPs, etc. Our approach to using AIs is very different to automated approaches like DSPy -- it is all about human-in-the-loop; the human is the agent. We do tackle head-on the inherent limitations of LLMs, but not using anything like JEPA (LeCun's approach).
dang. thought this was a service for people tired of reading about AI.
in a way, it is
Ironically, AI would've done a better job of summarizing what this is about than the thousands of words of this post.
A great course and helps with learning proactively and not reactively. Important in this age of ai. I was part of cohort 1 and have enrolled for the next one.
Without the intent of hijacking whatever it is you are trying to achieve: I've found the best antidote to AI fatigue is to rely less on it. There is no way I am going to spend my day, or my employees' day for that matter, reviewing thousands of LoC of bad AI slop. In my teams we've dialed it back to consulting mode, just asking for suggestions, e.g. as a replacement for good old search, because once you ask the GenAI to write a few thousand LoC for you, you're also abandoning a lot of architecture decisions (which you then have to figure out ad hoc again when you review the slop, or at the latest when you notice the "smart" tool has yet again put a secret in plaintext or something similar). So yeah, if you have the so-called AI fatigue, just use less of the so-called AI.
Absolutely right! Although I've found AI great for learning. It doesn't write my code for me, or my prose, or my devops scripts, but it's part of the process of me learning as I write them.
OK, so after watching the Latent Space pod and going through their docs, my understanding of what they are doing:
- solveit is mostly an approach, less so an app; they also built a Jupyter-notebook-style IDE with branching and deep chat integration + a stateful VM, which looks neat though
- my definition of the philosophical approach is: don't let AI ever generate code you don't understand, be there every step of the way, and build things "bottom up" in a very incremental way (basically exactly how Jupyter notebooks have always worked)
Now, my view on the philosophy: I actually don't think I agree. With Claude Code and Codex, I feel like my preferred development flow is "top down": declare what you want on a high level, then let AI build some complete construct (making sure everything is type safe), then dig in, understand this construct, and iron out all of the details. I don't think I need to fully understand each intermediate result.
I absolutely fundamentally agree that every line of the final output has to be fully understood and signed off by you.
Honestly, their videos and posts are very difficult to understand, so if I misunderstood something I'm super happy to be corrected. I have high respect for the team and it's nice to see them excited.
Yes I think you've done a good job of capturing the key ideas. Thank you for taking the time to look into it!
I think you might be somewhat under-estimating the amount of novel ideas in, and the significance of, the solveit platform. But "chatgpt + vscode editor + jupyter + a persistent Linux server" is a reasonable starting point for thinking about what it consists of.
It's totally fine to disagree on the philosophy. I suspect it'll turn out that both the approach you describe and our very different approach can both work, and are probably suited for different kinds of people and tasks.
It's very early days and there's no established methodologies for AI-assisted software dev. I know our approach works and scales across a team of 10 and over a period of >1 year (and across multiple connected projects), since that's what we do in-house.
I couldn't love more all the intentions behind this, but I have no idea what it is. Why would someone who already knows and loves Polya and iterative programming and human-centered technology... use this? What is the value add?
Jeremy attempted a summary here: https://news.ycombinator.com/item?id=45456928
but, honestly, I hope that list makes you see why we had a hard time figuring out how to summarize what solveit is. For example, I use it for research and writing all the time, but you'd have a hard time seeing why a notebook plus a private VPS would do much for that use case. But it does! Having a general-purpose computing environment is just very, very useful in a wide range of situations.
I personally think of the platform as a tool for thought: building your solution and your knowledge at the same time by leaning on what LLMs do well, while avoiding the common pitfalls.
One thing I loved about the first solveit course was how great the community is. It goes back to fast.ai too, but everyone is super kind, smart, and has diverse backgrounds.
We've captured a slice of that on our main site. Testimonials: https://solve.it.com/testimonials Some blog posts: https://solve.it.com/#showcases on the main page
And one of the students even made a project dashboard page showcasing all the things everyone has built! https://solveit-project-showcase.pla.sh/
He even blogged about it : ) https://himalayanhacker.substack.com/p/how-i-built-solve-it-...
That testimonial page looks super fake. Why does the same Mathew Miller have 7 testimonials? Similarly, Pol Alvarez Vecino, Pierre Porcher and Duane Milne all have 6 testimonials. And almost all other users also have multiple testimonials. This can't be organic.
At the end of the last course we put out a survey and asked for feedback. There were multiple questions in the feedback form. That's why there are multiple quotes from each person.
If you google any of those names you'll see they are absolutely real people. E.g. here's Pol's Google Scholar page: https://scholar.google.com/citations?user=ayz0DtUAAAAJ&hl=en . (And I see he's posted here on this HN chat too in fact.) Here's Mathew Miller on Twitter: https://x.com/matdmiller?lang=en .
I apologise folks that we did a bad job of explaining exactly what we're launching! My bad. :( I've added this to the top of the article now -- I hope this does a better job of explaining things:
----
*tldr from Jeremy:* You can now sign up for Solveit, which is a course in how to solve problems (including coding, writing, sysadmin, and research) using fast short iterations, and which also provides a platform that makes this approach easier and more effective. The course shows how to use AI in small doses to help you learn as you build, but doesn't rely on AI at all -- you can totally avoid AI if you prefer. The approach we teach is based on decades of research and practice from Eric Ries and me, the founders of Answer.AI. It's basically the opposite of "vibe coding"; it's all about small steps, deep understanding, and deep reflection. We wrote the platform because we didn't find anything else sufficient for doing work the "solveit way", so we made something for ourselves, and then decided to make it available more widely. You can follow the approach without using our platform, although it won't be as smooth an experience.
The platform combines elements of all these: ChatGPT; Jupyter Notebook + nbdev; bits of vscode/cursor (we embed the same Monaco editor and add similar optional AI and non-AI autocompletion); a VPS (you get your own persistent full VPS running Linux with a URL you can share for publicly accessible running applications); Claude Code (all the same tools are available); a persistent terminal. Then there are some bits added that don't exist elsewhere AFAIK: something like MCP, but way simpler, where any Python function can be instantly used as an AI tool; the ability to refer directly to any live Python variable in AI context (but optionally, so it doesn't eat up your context window); full metaprogramming of the environment (you can, through code or AI tools, modify the environment itself or the dialog); context editing (you can -- and should -- directly edit AI responses instead of telling the AI it's wrong); collaborative notebook coding (multiple people can edit the dialog, run code, etc., and all see live updates).
fast.ai (some of the authors of this) was transformative for me, and the community was super nice. Cannot recommend looking into this highly enough.
I read most of this before understanding that I wasn't reading about some new agent or IDE or something, I was reading a sales pitch for a coding course for would-be vibe coders, with AI training wheels in the form of ... a dialog box to talk to an LLM.
I should have noticed the camp counselor / cultish / tedx vibes, throwing around REPL and feedback loops. I feel that it's somewhat misleading to present this as some amazing self-building software or server platform here, and bury the lede that what's being sold is an experimental tutoring method. It's almost like those "I built an AI agent that builds AI agents" posts, only instead of selling the sixty lines of python, it's selling a set of lectures that goes with them.
It's interesting, because the tool and the course is the very _opposite_ of vibe coding. But to each their own I guess.
In other words, a grift.
Same conclusion.
Maybe you should look a bit more into the folks who made this before dissing their work like this.
I did do that now, and I see a string of similar consultant-like training/course packages. It's a bit like when Deloitte comes by to pitch.
What have they built that impressed you?
Let me see. Fastmail? The first LLM which actually used transfer learning for NLP? One of the most wildly useful and successful deep learning courses [available for free]? A big chunk of what is Kaggle?
I love almost everything they have done, I would highly recommend looking a little deeper.
Can you provide some links? Because I see that Eric Ries has a resume on Wikipedia that mainly highlights his book, "The Lean Startup." I see that he was adjacent to some dot-com-bubble-era startups. I see he has a handsome photo. I don't see where he actually founded a successful startup; if anything, reading his resume makes me think he stopped coding and discovered a more successful career in selling promises to young people that his methodologies would turn them into successful entrepreneurs. To me, that does sound like a grift. I mean, why bother doing the actual work of starting a startup, coding and solving lots of problems, when you can present yourself as a guru in how startups work, right? Smart. Also, shady.
Apparently he also runs a stock exchange with two companies on it. And a lot of "core principles". Lol. Speaking as someone who coded and ran the first Bitcoin casino and was around a lot of early crypto bullshit in the nascent years of BTC when lots of dudes like this had crazy plans to commoditize it all sorts of ways, this is juvenile boiler room stuff that would have been laughed out of the Bitcoin Business Association Skype chat in 2010. (And yes, dread pirate roberts was there for a sec, and the general level of dialog was a far sight more intelligent than this dreck).
I got hell-banned here for criticizing PG for running this very site basically to achieve the same grift - to form a cult of young people who'd worship him in exchange for pie in the sky promises that they would become successful startup founders. But to be fair, PG has both actual experience and a ton of investment capital to prove it, so his cult followers have at least some chance of receiving an investment (or a gift, if you think kissing his ass is the essential requirement) that will catapult them into another echelon.
These guys are just living the mantra of "fake it til you make it." This reminds me of the $500 I spent when I turned 21 to take a bartending class for two weeks. Loads of fun. End result: there was one job on their board for graduates, for a bar that had been closed for a couple years. Turned out the best way to become a bartender was to learn on the job.
Turned out that was also the best way to become a software engineer.
Totally
So the antidote to AI fatigue is... more AI?
No, the first example shown doesn't use AI at all. The AI is an optional helper in the process, if needed/wanted.
The AI is an integral part of the platform. My impression from reading the reviews on their website is that you won’t go anywhere using Solveit if you don’t let the LLM give you feedback.
Sigh. I agree with the parent poster’s sentiment.
as someone who uses solveit all the time, this has not been my experience
I find I use the AI less and less the more I use SolveIt tbh
no definitely not the case.
I was in the first batch, I'll be in the second and I'll probably try to be in any of the following.
This is not a course about AI only. It is about learning how to learn and develop yourself, especially with the coming AI age. The concepts learned here apply to solving other problems in life. I would recommend it even to non-programmers.
The whole industry is moving towards vibe-coding, where we delegate more and more to AI while doing less and less ourselves. The big issue with this, of course, is that you stop learning. You stop exercising the ability to put effort into things, and abilities, like muscles, waste away without use.
It is very tempting to dismiss the SolveIt approach as an outlier: at the end of the day, if the whole industry is moving towards full automation instead of careful coding together with the AI, they are probably right, right?
Well, AI getting better at coding does not mean YOU are getting better at it. In fact, it's the opposite: without practicing your ability to learn and solve problems, you will lose it. On another level, the kind of one-shotting where you give the AI a problem, come back 30 minutes (or more!) later to check if it's done, and throw it away if it went off the rails is psychologically unhealthy. It is the same pattern slot machines operate on, the same pattern gambling is built around.
I'm not saying vibe-coding is always bad, just that there are a lot of caveats that people do not seem concerned about.
Those concerns are front and center in the SolveIt approach. Jeremy has a very thorough understanding of meta-learning, for example; the slot-machine insight was shared by Johno; and the full team has a range of experiences that are rare in the narrow-minded AGI-at-all-costs world of Silicon Valley.
Your psychological health, learning abilities, and general happiness in life are rarely the main concern of VCs, but they rank first and foremost at Answer.ai (they are even a public benefit corporation; look it up if you don't know what that is).
With that in mind, I recommend everyone take the course because it is a multi-dimensional experience that will make you grow as a person and as an engineer. Just look at previous fast.ai courses & students.
And finally, if you end up joining because you read this message, I'd love if you get in touch in the Discord server, my name is pol_avec.
See you there!
I participated in the first cohort in January 2025 and have been using SolveIt ever since. I would like to share some thoughts about my experience of the course, platform and the community of makers & users.
Explaining SolveIt is hard
How do you explain to someone who's never written a line of code what programming feels like? When I'd visit my father-in-law, he'd ask me at breakfast what I was going to do that day. Every time I would give him the same answer: I will stare at the same blue screen all day long like I have been doing on every average workday since I started programming ~15 years ago. It's become our little family joke, but it's also true - it's difficult to explain to someone who has never done it.
The question is what does that blue screen work actually involve? What does programming feel like? How do you live through it? What is your experience when you solve problems - is it painful or agreeable? Do you finish knowing more about the domain of a problem you tackled, or did you just get something working fast and hope it wouldn't collapse on you later?
The SolveIt experience is similarly hard to explain to someone who hasn't tried it. What's interesting is that using the SolveIt method has had positive effects on all these areas - how my work feels, how well I understand what I build, and the quality of what I produce.
For my first bigger, work-related project in SolveIt I built an annotation interface for YouTube dialogue transcripts for documentary movie footage classification. I'd used Angular before but not FastHTML and HTMX. I had a working app in production within two weeks, going from zero to users annotating, which might not sound very impressive in this new agentic era. But when using SolveIt right, the result isn't just working code, it's code that you know, and most importantly it's a much better understanding of your initial problem and the potential solutions/technologies to solve it.
Plus, the SolveIt approach made me examine each component individually - what it represents, what it means for my system, how it could integrate more broadly into our organization. That exploration led to something far more general than my initial annotation app: now we are using it for browsing/editing dataframes through a UI, where columns and cells can have different templates/layout, edit history and several other features I hadn't even thought of initially. The exploration phase took longer, but I gained a foundation I understand deeply and can build on. For core business problems, that knowledge living in your head matters a lot.
My only problem at the time was git integration - I wanted my work synced with my codebase and manual copy pasting was too much friction. This has been resolved since: I have my git repos accessible from SolveIt and AFAIK you can serve apps directly from the platform without needing to deploy it (though I haven't tested this functionality yet).
Cult disclaimer
I was surprised reading through this thread by comments about religious cults, paid marketing campaigns, and fake testimonials. It's true that I registered to Hacker News today just to share my experience with SolveIt. I am a real person though and have not received any money for writing this post.
Instead, for several years now I've been learning from Jeremy Howard and the answer.ai team through their free resources - the fast.ai courses, talks, podcasts, and open source projects. Hamel Husain's work on nbdev also had a huge impact on how I work. All of this is available for free to anyone interested in learning. I hope that many others will discover them and SolveIt. I believe many successful projects, businesses and open source solutions will grow on the SolveIt ground, the same way it happened with fast.ai courses.
On $$$ concerns: I paid $200 for the first prototype cohort. I thought it was worth much more than that - the course itself, plus unlimited 11+ month platform access (including Claude's most recent model), plus the community discussions where I learned from other practitioners how they tackled specific problems from very diverse domains. When cohort 2 was announced, I knew I was gonna sign up even before I knew the price. The ideas you take and friends you make in these communities are priceless.
[stub for offtopicness]
I was part of the first cohort and I will be taking the course again. SolveIt has a unique take on how to best work with AI that helps produce better end products and helps you learn along the way. The platform is tailored to implementing the methodology taught in the course but the lessons can be taken when working in other AI tools to obtain better results.
I took the 1st course and really loved it. I got a little distracted by focusing on building web apps with FastHTML (which was covered in the course), but I think the practices that are taught are extremely important for anyone using LLMs on a daily basis. I am really excited to learn about all the new features in SolveIt.
I was part of the first SolveIt course cohort and have signed up for the second one too. SolveIt and its methodology really changed how I approach programming and problem solving with AI. It gives you a collaborative partner that helps you think through problems without just handing you answers. You still do the hard thinking, but you are never completely stuck. It has also allowed me to tackle tasks, ideas and experiments that I had previously put to the side knowing they would have taken too much time and effort to explore.
Although it can take more steps and iterations than other tools, that is part of the difference: they are more considered and thought-through steps. It makes me more productive overall. The ability to edit the conversation and work in short steps lets you create better context, while sharing your thought process and building genuine understanding that is useful in the future.
It is refreshing to find an approach that makes me better rather than just faster at producing code without full understanding of it. I now find it jarring to use chat with other LLMs, and typical code completion can be frustrating; that is how different it is.
Upfront: as a member of the first student cohort and a non-user of Hacker News, this is my opinion. I've relied on this tool consistently over the past year. While it's not the only AI I use, it's the only one that provides a comfortable and effective workflow. The core strength, for me, lies in my agency: I control the decision-making and discuss the implementation steps. While I may generate fewer lines of code than my peers, the work is solid, well-explained, and easily iterable. Also, I know the code I just wrote; the ability to articulate the function of every line of code is a crucial benefit that I deeply value. It's the sweet spot for me and I feel more productive.
I've been using SolveIt for about a month and going through the previous cohort's content. It's really hard not to vomit out solutions, and to instead go about something in a methodical way. It makes your work visible, maintainable, and usable by others (and frankly yourself). It takes a lot of practice, and it is very much intangible and always in the process of becoming. If you want to focus on solving pretty much any problem, and a path to using AI effectively in any endeavor, then this is a really fulfilling path. You will find that there is more magic in AI than you suspect. Many praises to Jeremy and Johno and the SolveIt team. But don't take my word for it. I'm a fresh convert. Go check it out for yourself. It is subtle and requires nuance, so not for everyone.
This post and whole comment section is like a coordinated advertisement
yes, because we enjoyed the course and the community so much that we're very much looking forward to others joining
Strong recommend. I did the trial course, which meant I could complete Advent of Code and also build an app that takes you to the moment in the course videos where topics were discussed. With SolveIt you can have your hand held while you go from idea to result, learning as you go.
These achievements may not sound like much to hard-core tech bros, but that's the beauty of the problem-solving method - it meets you where you are. After working through your problem you'll have learnt something and be able to tackle harder tasks next time.
It's been fun watching the team continuously evolve the product over the last year. Looking forward to round 2.
@dang Looks like astroturfing. There are several comments by users with very little history, all mentioning that they took the course before.
https://news.ycombinator.com/item?id=45456502 (0 karma)
https://news.ycombinator.com/item?id=45460800 (1 karma)
https://news.ycombinator.com/item?id=45456941 (4 karma)
https://news.ycombinator.com/item?id=45459683 (3 karma)
https://news.ycombinator.com/item?id=45460321 (1 karma)
OK thanks. We've moved those comments (and this one) to a hidden "stub" subthread.
A reminder to everyone that the FAQ includes this:
Can I ask people to comment on my submission?
No, for the same reason. It's also not in your interest: HN readers are sensitive to this and will detect it, flag it, and use unkind words like 'spam'.
==
For all we know the Solveit guys didn't ask anyone to come and comment here, so we don't want to judge them (and they're good guys who have contributed positively to HN in the past). Sometimes people in a community – particularly an engaged and loyal one like Solveit’s seems to be – can be "too helpful" and act as though they're astroturfing, even though they're just sharing their excitement about the product and community.
The problem on HN is that it can be hard to tell the difference between authentic excitement and astroturfing.
Thanks for the assumption of good faith. We had no intent to astroturf. Some of the students from cohort one were discussing the negativity of the HN thread on the course discord server. It seems that several took it upon themselves to come share their experience. From the comments whose names I recognize, I can assure that these are all real people who have first-hand knowledge of what they're posting about.
The Solveit community is so nice and wholesome that I see many comments being dismissed as astroturfing; maybe HN is not used to this, but I can tell you all the people in there are for real.
You can take my case: I started writing and posting videos after taking the SolveIt course because you are encouraged to do so. Many people in the community have started doing the same, which is why they might not have much of a history yet.
Moreover, the course is such a good experience that when there's a chance to share it publicly and invite more people to it (after a year of private beta), members go out of their way (into HN) to try to invite more people.
I have seen all the people you mention in the discord and chat with some of them so I can tell you they are real.
Because, once again, my comment might sound like astroturfing: you can have a look at my YouTube & X, where I've been posting about this stuff for quite a while.
www.youtube.com/@polavec7163 x.com/pol_avec
May I also add that the SolveIt Discord hosts a very welcoming community. Not all are.
[dead]
who tf would pay money for this??
who wouldn't? learning how to learn from one of the best, who additionally deeply cares about you
I will and did.