Counterargument: I can learn things at record speed (for me) because I can learn things in the order that makes sense to me. I find it much more motivating to start with: “why is AI bad at playing video games?”, than to start with “what is the chain rule?”
I will certainly need to learn about the chain rule eventually, but I find that I get lost in the details (and unmotivated to continue) without an end goal that is interesting to me.
AI loves to make vibe charts (sometimes with lots of extra steps), but that's part of the process. It is nontrivial to wrangle LLMs through larger projects, and people need to learn that, too.
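For what it's worth, the chain rule mentioned above is the rule backpropagation is built on: the derivative of a composed function is the product of the derivatives of its pieces. A minimal sketch in Python (the functions here are made up purely for illustration), checked against a finite-difference estimate:

    # Hypothetical toy example: the chain rule, as backprop uses it,
    # verified against a numerical (finite-difference) estimate.
    def f(w):              # inner function
        return 3.0 * w

    def g(y):              # outer function
        return y ** 2

    def loss(w):           # composition g(f(w))
        return g(f(w))

    w = 2.0
    # Chain rule: d loss / d w = g'(f(w)) * f'(w) = (2 * f(w)) * 3
    analytic = 2.0 * f(w) * 3.0

    eps = 1e-6
    numeric = (loss(w + eps) - loss(w - eps)) / (2 * eps)
    print(analytic, numeric)   # both ~36.0

Both numbers come out around 36.0; scaled up to millions of parameters, that product-of-derivatives trick is what trains the networks behind the game-playing AIs in question.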
So this repo popped up in my newsfeed: https://github.com/necat101/Chronos-CLGCM

This isn't the first time I've seen something like this, and it looks like a new trend, as the glamour of being an AI researcher with a high-paying job has started to reach ordinary people. It feels like a lottery ticket you can win with vibe-coding.
How can we approach these situations in a good manner and help those people with constructive feedback?
> How can we approach these situations in a good manner and help those people with constructive feedback?
The same way we always have: mockery. Either they'll continue as they are or they'll correct their behavior. The lack of sensitivity toward dissonance might go against your wish for a "good manner"; there I disagree.
Something, something, participating in delusion.
A practical example: "managing upwards" by mocking the ideas, or more importantly/accurately: the outcome.
For some reason, most of the AI-focused subreddits, like r/OpenAI, r/Anthropic, etc., seem to convey a sense of mass mental illness or delusion. For example, r/OpenAI is 95% focused on how OpenAI is horrible for taking away their virtual girlfriend/therapist model, 4o.

r/Anthropic is 95% either rambling about usage limits in Claude Code or "showing off" by presenting a giant wall of AI-generated text describing the stack for their latest zero-utility vibe-coded project.

I personally find value in vibecoding, but the signal-to-noise ratio in these spaces is insanely poor compared to nearly anywhere else on Reddit, and I'm not sure what that says.
I imagine every gold rush mania is filled with kooks.
What’s the stereotype of the California and Yukon gold rushes? Toothless codgers and Yosemite Sam.
Tulip mania? Not as clear but surely not calm and rational.
NFTs? Hype Bros and Crypto Douchebags.
Social media is the lens onto the current set of self-selecting weirdos.
Yep. When such a massive "yuge new ground-floor opportunity!" hype hits the mass-media zeitgeist, it triggers not only an influx of grifters but also lots of lower-effort dilettantes. They're attracted not only by the promise of quick success but also by a need to identify themselves with this 'hot new thing'. So they tend to seek out forums where they can express their new identity in the hope of receiving validation. In essence, they need a place to try out and practice their new identity before they trot it out to their friends and family IRL.
As long as it does not land on my desk for review, I don't mind.
It's also a good test of the LLM promise that they can do PhD-level research. It means we should be seeing something novel that works from these folks.
It's not unpossible to vibe code (I hate that term) good software, or well-designed software, but if the authors don't come from a UX or software background, you can generally tell by looking at their product documentation.
Lots of emojis, too many emojis. Lots of flow charts, too many flow charts.