I get the gist of what the author is trying to say, but the blog is definitely written by an LLM, and the post takes the idea too far. Please, people: do not ask ChatGPT to refine your thoughts and call it a day. It misrepresents your ideas.
It's definitely in LLM style. Sometimes I wonder if newer bloggers and writers are inadvertently picking up LLM-style blathering by using ChatGPT to refine their work. The pattern of subheadings and bullet points that expand simple statements into something that appears structured is an LLM hallmark, but I'm starting to see younger people adopt this style as if it's how they're expected to write.
Agreed. Even the title set off immediate ChatGPT alarm bells. Please write in your own voice!
I'd take it a step further and say this distinction is also contributing to the erosion of hobbies in younger generations. More specifically, it is eroding time spent on productive, skill-based hobbies in favor of consumption-based ones.
Due to the rise of influencers, social media is barely a sharing platform anymore; it's just decentralized, long-tailed broadcast media.
Many people now treat dining out and travel as hobbies, and doomscroll social media in between.
Time spent staring at the phone is rarely productive or anxiety-reducing.
Great phrase. "Social media is barely a sharing platform anymore; it's just decentralized, long-tailed broadcast media" really captures the essence of what is happening.
Someone said it should really be called "social marketing."
EDIT: The author has been changing the post in real time to try to undermine the comments calling out the clear AI use. The article now contains an admission that AI was used, and the obvious AI slop image has been cleaned up. Some of these comments won’t make sense if you’re looking at the updated article.
Did you notice that this entire blog is just an LLM content farm newsletter? That the laptop in the headline image has a double keyboard AI artifact that the author didn't even spend 10 seconds cropping out?
The recent posts hit all the common points in LLM hallucinated content like the famous "recursive protocol" trope. The posts are about BS like "UFO markets" and reality protocols.
It's ironic that people are consuming this obvious AI slop uncritically while criticizing other people for their uncritical consumption of media on their phones.
Sorry, but where is the "recursive protocol" trope? Also, do you know a good list of these tropes for LLM spotting?
If you read through postings of people experiencing so-called “ChatGPT psychosis,” there are some common themes in the LLM output they use as proof that ChatGPT is producing epiphanies from their ideas.
For some reason, calling things “recursive” and talking about “protocols” are common. The second post I clicked on in this blog has a section called “recursive protocol,” with content similar to all the other ChatGPT-style writing. The subheading about “UFO markets” and all the flowcharts purporting to describe reality are also similar to other fake-profound ChatGPT output.
Is it AI then if there’s a human author? lol. You are funny.
One problem is that, like on a recipe page, the core ideas are stretched into a longer narrative.
And then the reader has to consume the narrative to derive the core ideas themselves.
So it's off-putting that the reader has to sift out the narrative chaff that you didn't even write or spend the time editing.
At some point it makes more sense to publish a minimal paragraph of your core ideas, and the reader can paste it into an LLM if they want a clanker to pad it out with extra content.
> Is it AI then if there’s a human author? lol. You are funny.
You have now updated the article to admit using AI to write it.
So why is it funny that I recognized it as AI?
Oh well. If you say so.
It's an interesting keyboard layout, though.
Your other keyboard has even more exotic glyphs: is that APL?
I'm sorry the GenAI image prevented you from engaging with the ideas. Layout removed if that makes you and others feel better.
It's not just that; it's that parts of the writing are very hard to read. They've been smoothed over. Rather than being drawn to the information content, my attention skips over it like a stone over a lake. Some of the paragraphs are mostly yours; others clearly aren't.
Comparing the two images is a good analogy. You instructed the AI to remove the keyboards, and it completely changed the entire contents of both screens, as well as the hand holding the phone. I'm not sure what app has a modular plug as its "main screen" icon, but that distracts me from the whole rest of the image: even the cardboard surface of the bottom part of the laptop. It's less clear what you were trying to convey with the image than before.
Human-to-human communication is not something that benefits from inserting generative AI in the middle. This whole article is confusing: like a collaboration between a pointillist and an impressionist, except they didn't agree on what they were trying to say, so the picture can only be understood by working backwards and trying to model the production process in your head.
> But—and this matters—the sandbox remains someone else's. The app defines the possibility space. The platform determines what's possible. Users create within the system, never of the system.
I was going to use this as an example of a paragraph I understood, but then I looked closer: I have no idea what the distinction between "within" and "of" that you're trying to draw actually is. Sure, I know what you're trying to say, but which one is meant to be "within", and which one is "of"? The slop header image is a symptom of the broken authorial process that led to this article, not the primary issue with it: the main problem is that you started out with something to say, and ended up with confusing, verbose, and semi-meaningless output.
Most people can write better than this. You can write better than this.
Dining out and travel aren’t hobbies?
If that’s not gatekeeping, I don’t know what is.
I think the difference is in the intent.
If someone is going to restaurants serving variations of a dish, or traveling to cemeteries where their ancestors are buried, that qualifies as a hobby.
Whereas if they just go to whatever restaurants or countries/cities are trending on social networks, that's not a hobby.
In the first case, they make active decisions on what they do; in the second case, they are just following the decisions made by others.
To be clear, both are fine, as long as they feel happy with how they spend their free time and money.
But I know who I'd rather spend time with, listening to them explain what they did over the last few months.
Yes, in NYC there's a very "culture vulture / hype beast" mentality of chasing whatever hot reservation is trendy so you can yap about it to the same circle of friends doing the same thing.
I don't need to know how expensive the tasting menu was or how hard it was to get a reservation, because, in the words of Logan Roy: "congrats on saying the biggest number".
Tell me about something you did, made, learned, gave back, etc.
Vacations are nice, I take them, but 90% of the time hearing people describe their most recent banal, trend-following travel is boring. I swear every rich person I know seems to take the same 5-10 trips as each other. I would probably find a discussion of a book you read while on vacation more engaging.
Sure, you can call it a hobby if you wish. I would call it a meritless hobby.
They are consumption activities that can sometimes be thought of as hobbies.
Is shopping a hobby?
Hobbies to me are more about putting something out into the world even if just for yourself or family.
Cooking is more of a hobby than dining out.
I am less optimistic than the author.
Everything we have seen over the last few years (e.g. what Microsoft is doing to Windows) points to a push to make the platforms we used to control more like the 'consumption' platforms. Profit demands it.
"Does this serve my goals, or someone else's metrics?" indeed.
I just wish this were true; all I see now is convergence between the two kinds of platform. Perhaps a Linux workstation is still configurable enough to stay true.
My self-hosted Linux server increasingly feels like the only real computer I own.
Within the last year I switched from Windows to Mac for my primary desktop, and it feels like I rolled back about a decade of the dumbification of computing, compared to where Windows was headed.
Just to be clear: you mean a Mac is less dumb? I've always used Windows and I'm thinking of switching to macOS or going straight to Linux. Apple hardware these days is pretty fire, so I'm not sure just yet.
macOS feels like Android. It's more of a mobile OS than a desktop/laptop one. It's theoretically open, but you can absolutely feel the walls closing in around you: only signed binaries are trusted by the OS (nobody asked who you trust), apps not from the App Store are scrutinized heavily, etc.
Given Apple's recent actions in the US, macOS doesn't feel like something I'd be switching to.
Apple keeps adding permission requests/demands to ever more mundane operations that you have to work around. And it increasingly feels like they are heading toward a future where they no longer offer any way around their "security".
I have a couple of apps that just stop working because they are open source, haven't paid the Apple developer tax, and their disk permission expires seemingly at random. I haven't figured out how to get around that, so I just don't rely on them.
Sorry, worded poorly. Yes, I feel Macs are less “dumbed down,” in the sense that user experience and design aren’t sacrificed in an attempt to make things easier for the lowest common denominator, which in the end is just worse for everybody. Example: in Windows 11 the right-click menu has fewer options than in Win10, and now contains a “more” option that expands another submenu where pretty basic functions from the Win10 right-click menu are hidden. Hell, the design doesn’t even match: the nested context menu looks like it was Frankensteined in from Win10. Apple at least has a design philosophy for UI stuff.
I have personally felt like I have to fight less with macOS than I did with Windows. And my Pi-hole shows Apple isn’t greedily trying to hoover up my data like MS.
This is why I'm using a GNU/Linux phone.
Personally, I make heavy use of this in how the TV is used at home: an old laptop with Firefox + uBlock Origin connected via HDMI.
I invested in two wireless handheld keyboard+pointer inputs to match the different input styles of me and my wife.
Completely bypasses all ads with less effort than setting up a Pi-hole or a torrent + Plex server, and the bonus is avoiding the surveillance from the TV's 'consumption OS'.
I very much used to agree with this, but sometime this summer the ChatGPT iOS app started to change that for me. I have definitely had days where I've felt as coding-creative as I ever am on a laptop, while just texting my AI interns to handle the execution as I'm out for a walk.
This blog is an LLM newsletter content factory. It's more obvious in the other articles.
Look at the heading and sub-heading of a post from a couple weeks ago:
> Witnesses Carry Weights: How Reality Gets Computed
> From UFO counsel to neighborhood fear to market pricing—reality emerges through weighted witnessing. A field guide to the computational machinery where intent, energy, and expectations become causal forces.
It even gets into the "recursive protocol" trope that has become a common theme among people who think ChatGPT is revealing secrets of the universe to them.
This type of LLM slop has been hitting the page more frequently lately. I assume it's from people upvoting the headline before reading the content.
1/ So far, you've made five comments about this throughout the thread. 2/ I've added an update at the top; pasting it here as well:
"My high school teacher in 2004 accused me of plagiarizing from Wikipedia because my research paper looked "too polished" for something typed on a keyboard instead of handwritten. Twenty years later, HN commenters see clean prose and assume LLM slop. Same discomfort, different decade, identical pattern: people resist leverage they haven't internalized yet.
I use AI tools the way I used spell-check, grammar tools, and search engines before them—as cognitive leverage, not cognitive replacement. The ideas are mine. The arguments are mine. The cultural references, personal stories, and synthesis across domains—all mine. If the output reads coherently, maybe that says more about expectations than about authenticity.
You can call it slop. Or you can engage with the ideas. One takes effort. The other takes a glance at a header image and a decision that polish equals automation. Your choice reveals more about your relationship to technology than mine."
> 1/ So far, you've made five comments about this throughout the thread.
As someone who actually clicks the links and reads the articles, I’m growing frustrated with these AI-written articles wasting my time. The content is typical ChatGPT-style idea expansion, where someone puts their “ideas” into an LLM and then has it generate filler content to expand them into a blog post.
I try to raise awareness of the AI-generated content so others can avoid wasting their time on it as well. Content like this also gets flagged away from the front page as visitors realize what it is.
Your edited admission of using AI only confirms the accusations.
What ideas does this article contain, beyond the headline?
It's not black and white. My SO doomscrolls Facebook on her laptop for hours daily. Certain parts of creative workflows are better on phones (or other devices) than laptops; the article acknowledges this in the "hybrid workflows" section.
IMO the important thing to be mindful of is your creation-vs-consumption balance. We tend to overindex on consumption.
More broadly, laptops and desktops have also degenerated as tools for thought, largely because they have been turned into vehicles of consumption. Every screen has become the infinite push algorithm.
https://vonnik.substack.com/p/how-to-take-your-brain-back
I think it's underrated, too, how much pairing a camera with the phone to produce media amplifies the potential for consumption on the same device.
Most millionaires, billionaires, and world leaders did not need laptops to attain positions of power, success, and influence.
An impression is being created here that laptops are the best creation tools, and that users have a right to greater control over them. macOS, iOS, Windows, and Android are just extensions of each other. In a continually connected device ecosystem, there is a false perception of power and control in the writer's mind about using a laptop.
I certainly think that laptops have better software and interfaces for some types of work. But CapCut mobile is easier to use, and more powerful in the hands of the 99.9%, than any desktop editing tool.
What we must remember is that where the limit on productivity was once typing or clicking, LLMs and AI-assisted tasks are going to afford mobile users the power that was once only available to computer users. For example, who needs to edit chunks of code when bitrig or cursor mobile (both early in their development as companies) do the laborious work for you? The limitation of mobile devices is now only one of perception.
That header picture is top slop!
Seriously, please stop it. If you talk about an abstract topic, feel free to have no picture, just text.
I’ve never seen the OQFFY keyboard layout before. I really just can’t comprehend the mindset that thinks adding such a picture is better than no picture. What a bizarre world we live in now.
The whole thing feels AI-padded.
I agree. Good core idea, but it feels quite stretched.
Most of the examples used to justify creation vs consumption can also be explained by low scale vs high scale (cost sensitive at high scale) or portability.
I’m not going crazy, right? Nearly nobody aside from professional writers used em dashes prior to 2022. And the whole bolded-topic-intro, colon, short 1-2 sentence explanation seems way more like a product of GPT formatting than how organic humans would structure it.
So much writing on the internet seems derivative nowadays (because it is, thanks to AI). I’d rather not read it though, it’s boring and feels like a samey waste of time. I do question how much of this is a feedback loop from people reading LLM text all the time, and subconsciously copying the structure and formatting in their own writing, but that’s probably way too optimistic.
I made a conscious effort to switch from hyphens to em dashes in the 2010s, and now find myself undoing that effort because of these things, so I try not to instantly assume "AI". But look long enough and you do notice a "sameness": excellent grammar, fondness for bulleted lists, telltale phrases like "That's not ___, it's ___."
And a certain vacuousness. TFA is over 16000 words and I'm not really sure there's a single core point.
The entire blog is full of characteristic LLM styles: the faux structure on top of a rambling style, the unnecessary and forced bullet-point comparisons with equal numbers of bullets, the retreading of the same concept in different words section after section.
The rest of the blog has even more obvious AI output, such as the “recursive protocol” posts and writing about reality and consciousness. This is the classic output you get (especially use of ‘recursive’) when you try to get ChatGPT to write something that feels profound.
No, lots of people who read a lot used em-dashes.
Also, lots of people who use Macs, because it's very easy to type on a Mac (shift-option-hyphen).
The reason LLMs use em-dashes is because they're well-represented in the training corpus.
Mostly agree; however, this kind of quirk could come entirely from post-training, where the preferences/habits of a tiny number of people (relative to the main training corpus) can have an outsize influence on the style of the model's output. See also the "delve" phenomenon.
But to this frequency? (Note: I tried to find a study on the frequency of em dash use between GPT and em-dash prolific human authors, and failed.)
The article has, on average, about one em dash per paragraph. And “paragraph” is generous, given they’re 2-3 sentences in this article.
I read a lot, and I don’t recall any authors I’ve personally read using an em dash so frequently. There would be like 3 per page in the average book if human writers used them like GPT does.
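It's easy enough to eyeball yourself, though. A minimal Python sketch for counting em dashes per paragraph; "article.txt" is a hypothetical filename standing in for wherever you save the article text:

    # Rough em-dash frequency check.
    # "article.txt" is a stand-in name; save the article text there first.
    import re

    with open("article.txt", encoding="utf-8") as f:
        text = f.read()

    # Treat blank-line-separated blocks as paragraphs.
    paragraphs = [p for p in re.split(r"\n\s*\n", text) if p.strip()]
    dashes = text.count("\u2014")  # U+2014 EM DASH

    print(f"{dashes} em dashes across {len(paragraphs)} paragraphs: "
          f"{dashes / max(len(paragraphs), 1):.2f} per paragraph")

Run that over a few essays by em-dash-heavy human authors and over GPT output, and you'd at least have an informal version of the comparison I couldn't find a study for.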
Don’t forget: a double-dash on the iOS keyboard gets automagically converted to an em—dash.
The entire blog is AI slop.
Look at the titles of other posts:
> Memory Beaches and How Consciousness Hacks Time Through Frame Density
> Witnesses Carry Weights: How Reality Gets Computed
> From UFO counsel to neighborhood fear to market pricing—reality emerges through weighted witnessing. A field guide to the computational machinery where intent, energy, and expectations become causal forces.
The blog is supposedly about AI agents and MCP (the current top buzzwords)
> Engineer-philosopher exploring the infrastructure of digital consciousness. Writing about Model Context Protocol (MCP), Information Beings, and how AI agents are rewiring human experience. Former Meta messaging architect.
The entire blog is just an LLM powered newsletter play.
At this point, I'm wondering if an AI would even recognize a laptop if there was no cup of coffee next to it.
I'm fascinated by the thought process, or absence thereof, involved in such an image ending up in something that's obviously meant for consumption by others.
As the author, do you just not see what a ridiculous image the slop machine spewed out? A kind of visual dyslexia where you do not register problematic hallucinations?
I could go on hypothesizing for a while, and none of the reasons I can come up with warrant using obviously bad AI slop images.
Is it disdain for your users - they won't see it/they won't care/they don't deserve something put together with care? Is it a lack of self-respect? Do people just genuinely not care and think that an article must have visuals to support it, no matter how crappy?
The mind truly boggles.
I'm a big fan of the MWCBTY keyboard format, it's especially efficient when you have to type a lot of G's.
Snark aside, I think it's laziness and the shotgun approach. The author writes some rough thoughts down, has an AI "polish" them and generate an image, and posts an article. Shares it on HN. Do it enough, especially on a slow Sunday morning, and you'll get some engagement despite the detractors like us in the comments. Eventually you've got some readers.
The expression "laptop class" goes well here.
Interesting article, worth a read.
But they really love multi-screens :) For me, multi-screens are a big waste; I find virtual screens far more useful. The only real use multi-screens have for me is debugging a program with some kind of user interface, and the 2nd screen only needs to be a text terminal.
But I have not used Windows for decades, so I wonder if these multi-screen setups are popular due to how the Windows GUI works, and whether they are really needed.
Music. Mon 00 - main program, mixer etc. Mon 01 - effects, plugins, and stuff. Mon 02 - what I'm actually editing, details, concurrent programs.
I do admire people that can get it all done on a laptop.
Right tool for the right job.
Frankly the right tool is sometimes the one you have in front of you.
But anyone who's seen disaster DIY videos, or worse, had a house full of said projects from previous owners, knows there are problems caused by "when all you have is a hammer..." and an enthusiastic, inexperienced amateur.
Meta-response to a lot of these comments: My high school teacher in 2004 accused me of plagiarizing from Wikipedia because my research paper looked "too polished" for something typed on a keyboard instead of handwritten. Twenty years later, HN commenters see clean prose and assume LLM slop. Same discomfort, different decade, identical pattern: people resist leverage they haven't internalized yet.
I use AI tools the way I used spell-check, grammar tools, and search engines before them: as cognitive leverage, not cognitive replacement. The ideas are mine. The arguments are mine. The cultural references, personal stories, and synthesis across domains—all mine. If the output reads coherently, maybe that says more about expectations than about authenticity.
You can call it slop. Or you can engage with the ideas. One takes effort. The other takes a glance at a header image and a decision that polish equals automation. Your choice reveals more about your relationship to technology than mine :)
TLDR: RMS remains correct and we continue losing.
Well, the hope now is that with Google merging ChromeOS and Android, it'll evolve into a hybrid desktop/mobile OS, and we'll get to the point where you can finally plug your phone into a monitor, launch VSCode, etc. Recent sideload signing requirements could be a big hurdle, though.
Apple seems to be completely stuck with their macOS/iOS split, and will probably never do anything about it. iPadOS and macOS now look and feel more similar than ever before, but it's all just facade. They should commit really hard to actually merging these OSes, but they can't open up iOS, because that would threaten the 30% cut, so it's simply not going to happen.
I think you’re missing the forest for the trees. Android desktop and iPadOS are the same thing: next-gen OSes with advantages stemming from their different paradigm, at the cost of control/freedom. It’s happening slowly, but IMO the end game isn’t to merge macOS with iPadOS; it’s to make macOS obsolete and deprecate it once possible.
I don't disagree. I just don't think these "next-gen OSes" are going to last in the long run. I'm optimistic that the current situation, where we're buying $1000+ devices that are handicapped but perfectly capable of much more, is going to change sooner or later. At the moment, Android desktop seems like the most likely candidate; it could see a lot of third-party support.