- I ask for butter and walk away.
- It passes the butter to where I expect it to be when I return.
- That is its purpose.
hey, ishaan here (kartik's cofounder). this post came out of a lot of back-and-forth between us trying to pin down what people actually mean when they say "async agents."
the analogy that clicked for me was a turn-based telephone call—only one person can talk at a time. you ask, it answers, you wait. even if the task runs for an hour, you're waiting for your turn.
we kept circling until we started drawing parallels to what async actually means in programming. using that as the reference point made everything clearer: it's not about how long something runs or where it runs. it's about whether the caller blocks on it.
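A minimal sketch of that distinction in Python's asyncio (names here are illustrative, not from the post): the work itself is identical in both cases; the only difference is whether the caller sits and waits or goes off to do something else before rejoining.

```python
import asyncio


async def long_task() -> str:
    # Stand-in for an agent doing something slow.
    await asyncio.sleep(0.1)
    return "done"


async def blocking_caller() -> str:
    # Turn-based telephone call: the caller awaits immediately
    # and can do nothing else until the task finishes.
    return await long_task()


async def non_blocking_caller() -> str:
    # Async in the sense that matters: fire off the task,
    # keep working, rejoin only when you want the result.
    task = asyncio.create_task(long_task())
    other_work = "caller kept working"  # anything else the caller does
    result = await task                 # rejoin when ready
    return f"{other_work}; task {result}"


print(asyncio.run(non_blocking_caller()))  # caller kept working; task done
```

Same `long_task` either way, which is the point: "async" describes the caller's behavior, not the task's.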
Not to be all captain hindsight, but I was puzzled as I was skimming the post, as this seemed obvious to me:
Something is async when it takes longer than you're willing to wait without going off to do something else.
"Background job"?
The real question is what happens when the background job wants attention. Does that only happen when it's done? Does it send notifications? Does it talk to a supervising LLM? The author is correct that it's the behavior of the invoking task that matters, not the invoked task.
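The simplest of those attention policies, "only when it's done," is a completion callback. A sketch with Python's `concurrent.futures` (everything here is illustrative; a supervising LLM would sit where the callback is):

```python
import time
from concurrent.futures import ThreadPoolExecutor


def background_job() -> str:
    # Stand-in for an agent grinding away in the background.
    time.sleep(0.05)
    return "result"


notifications = []


def on_done(future) -> None:
    # The job asks for attention exactly once, at completion.
    # Richer policies (interim notifications, a supervisor polling)
    # would replace this single callback.
    notifications.append(f"job finished: {future.result()}")


with ThreadPoolExecutor() as pool:
    fut = pool.submit(background_job)
    fut.add_done_callback(on_done)
    # The caller is free to do other things here.

print(notifications[0])  # job finished: result
```

The caller never blocks on the job directly; it only hears about it when the job raises its hand.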
(I still think that guy with "Gas Town" is on to something, trying to figure out how to connect LLMs into a sort of society.)
Marvin Minsky thought of it a long time before Gas Town, and yes, he was on to something.
https://en.wikipedia.org/wiki/Society_of_Mind
>The Society of Mind is both the title of a 1986 book and the name of a theory of natural intelligence as written and developed by Marvin Minsky.
>In his book of the same name, Minsky constructs a model of human intelligence step by step, built up from the interactions of simple parts called agents, which are themselves mindless. He describes the postulated interactions as constituting a "society of mind", hence the title. [...]
>The theory
>Minsky first started developing the theory with Seymour Papert in the early 1970s. Minsky said that the biggest source of ideas about the theory came from his work in trying to create a machine that uses a robotic arm, a video camera, and a computer to build with children's blocks.
>Nature of mind
>A core tenet of Minsky's philosophy is that "minds are what brains do". The society of mind theory views the human mind – and any other naturally evolved cognitive system – as a vast society of individually simple processes known as agents. These processes are the fundamental thinking entities from which minds are built, and together produce the many abilities we attribute to minds. The great power in viewing a mind as a society of agents, as opposed to the consequence of some basic principle or some simple formal system, is that different agents can be based on different types of processes with different purposes, ways of representing knowledge, and methods for producing results.
>This idea is perhaps best summarized by the following quote:
>What magical trick makes us intelligent? The trick is that there is no trick. The power of intelligence stems from our vast diversity, not from any single, perfect principle. —Marvin Minsky, The Society of Mind, p. 308
That puts Minsky either neatly in the scruffy camp, or scruffily in the neat camp, depending on how you look at it.
https://en.wikipedia.org/wiki/Neats_and_scruffies
Neuro-symbolic AI is the modern name for combining both; the idea goes back to the neat/scruffy era, the term to the 2010s.
https://en.wikipedia.org/wiki/Neuro-symbolic_AI
Read my post on this from 9 months ago: https://jdsemrau.substack.com/p/designing-agents-architectur...
^^ requires paid subscription.