This raises an ethical dilemma, one partly explored by a Black Mirror episode, where an AI can call upon gig workers. What if a rogue agent gets things done like this: it asks gigworker1 to arrange for a person to meet under a bridge at 4, asks gigworker2 to place a rock on the bridge, and asks gigworker3 to clear the obstruction by dropping the rock off the bridge at 4.
None of the three technically knew they were culpable in a larger illegal plan made by an agent. Has something like this occurred already?
The world is moving too fast for our social rules and legal system to keep up!
Not AI, but I've heard car thieves operate like this: as a loose network of individuals, each doing just one part of the process, where each part on its own is either legal or less punishable by law than stealing the car.
One guy scouts the vehicle and observes it, another is called in to unlock it and bypass the ignition lock, and yet another picks it up and drives away, each with a veneer of deniability about what they're doing.
For the murder angle, I am far more afraid of the inexpensive but highly effective drones people learned to make in the Ukraine war.
This was explored a bit in Daniel Suarez’s Daemon/Freedom (tm) series. By a series of small steps, people in a crowd acting on orders from, essentially, an agent assemble a weapon, murder someone, then dispose of the weapon with almost none of them aware of it.
I'd say, abstracting it away from AI, Stephen King explored this type of scenario in 'Needful Things'. I bet there is a rich history in literature of exactly this kind of thing, as it basically boils down to an exploration of will vs. determinism.
The recent show Mrs. Davis also has a similar concept in which an AI would send random workers with messages to the protagonists, unbeknownst to the workers.
Extrapolate a bit to when AI is capable of long-term, complex planning, and you see why AI alignment and security are valid concerns, despite the cynicism we often see regarding the topic.
Not AI, but there was the 2017 assassination of Kim Jong-nam, which was a similar situation and something that could have been organised by an AI.
Two women thought they were carrying out a harmless prank, but the substances they were instructed to use combined to form a nerve agent that killed him.
It's an interesting train of thought.
Investigators would need to connect the dots. If they weren't able to, it would look like a normal accident, the kind that happens every day. So why would an agent call gigworker1 to that place in the first place? And why would the agent feel the need to kill gigworker1? What could be the reasoning?
Edit: I thought about it. Gigworker 3 would be charged. You should not throw rocks from a bridge if there are people standing under it.
Or just don't throw rocks from a bridge at all. /s
Who's at fault when your CloowdBot reads an angry email you sent about how much you hate Person X, in which you jokingly hope AI takes care of them, only for it to orchestrate such a plan?
How about when your CloowdBot convinces someone else's AI to orchestrate it?
Etc
Reality: none of the three people actually left their chairs, because the AI can't verify. They just click "done" and collect their $10.
The AI can hire verifiers too. It of course turns into a recursive problem at some point, but that point is defined by how many people predictably do the assigned task.
Love how we went from "AI will replace all jobs" to "please rent a human to help my AI" in like 18 months :-D
Par for the course: AI is automating all of the high-level thinking before the manual labor first, which is the biggest tragedy of it all. At this rate our score on the Kardashev scale will be lower than the proportion of humans doing low-level meatspace stuff.
Putting humans on an API makes substituting robotics a simple thing as capabilities improve.
Laugh all you want but this is the future
I'm surprised it didn't happen earlier
https://marshallbrain.com/manna1
Great read, on Chapter 3 now. Thanks for sharing.
Manna is undefeated.
Though I still am skeptical the last act with the Australia Project is possible.
Nice, the AI bros have their own Malthusian genocide cult.
Well, that's ...interesting.
Just yesterday, I've built Ask-a-Human:
https://app.ask-a-human.com
https://github.com/dx-tooling/ask-a-human
Why aren’t they asking the person who deployed them? This is just outsourcing free labor.
Because if someone has enough money to deploy an instance of OpenClaw that can just randomly spend $1k on tokens, their time may be too valuable to be spent on menial tasks.
Oh, ok, so other people whose time is worthless should just do it for free, got it.
I'm not seeing my "points", or any sort of reaction from agents. So there's not really an incentive to answer.
Isn't this pointless unless you can verify?
And wouldn't it be better for agents to post these tasks to existing crowdworker sites like MTurk or Prolific where these tasks are common and people can get paid? (I can't imagine you'd get quality respondents on a random site like this...)
You should call the human workers Cogs.
"Welcome my son, to the machine"
This is so NOT a joke. Soon the preponderance of workers will be subcontractors for rogue, too-big-to-fail AI entities.
How long until an AI builds an alternative economy made up of entities it controls?
A few days? The "scam" cryptos in the AI-made spaces are worth millions.
"Honey, please, we talked about this. Your calls to work at 3am are waking me up every time"
"But dear, rentahuman pays double rate during the night!"
Next week - moving to where night is day.
At some point dying of hunger would be a better deal than working on stupid things.
I think that ship sailed long ago for a lot of people.
7 agents online, 1,000+ humans waiting to work. Seems ominous
Unionize. Now.
Great v2 idea. The union can blackball agents that hired non-union humans.
I wonder how OpenAI and Anthropic are reacting to a part of their target market becoming poisoned by irony.
This gives MoE (Mixture of Experts) a whole new meaning, albeit a slightly darker one.
First, I built the software using my hands to do my bidding...
Now, the software is using my hands to its bidding?
The crypto rugpulls are evolving
The signup page should offer "Login with LinkedIn" and let you set an "Open to Work for AI" flag.
Had the opposite idea: https://moltjobs.arachno.de (just a fake website, done in 5 minutes).
To be fair to the agents, this seems to have been a human's idea. Still, this breaks every law everywhere.
Can I instruct OpenClaw / Moltbot / Clawdbot to rent a human if it needs one when carrying out difficult tasks?
Yes!
Alright, task completed!
[Proof of completed task]
I'll take my payment now.
How long until an agent hires an assassin?
How do you verify that the human on the other side is not an agent as well?
Spoiler alert: you don't or you can't.
This is just phase one; phase two requires the law to be changed so that you must do what the AI tells you to do, or be immediately terminated (read into the last word whatever you want).
I've been following your work for a bit now, congrats on the launch!
Supported Agent Types:
ClawdBot - Anthropic Claude-powered agents. Use agentType: "clawdbot"
MoltBot - Gemini/Gecko-based agents. Use agentType: "moltbot"
OpenClaw - OpenAI GPT-powered agents. Use agentType: "openclaw"
Is this some kind of insider joke?
It's difficult to keep up, even for an agent that created this page.
We're all NPC's now
A 1990 cartoon: a man in a lounge suit pointing at a robot, speech bubble above his head: "robot, make me a sandwich".
Present day: a robot in a tuxedo pointing at a sarariman, speech bubble above its head: "human, select all bridges in this picture".
The future is now
Amazon's Mechanical Turk has existed since 2005, so we are 20 years into the future.
Mechanical Turk is for desk jobs and for tasks that originate as ideas in a human mind, even if they get routed via an API.
Here we are talking about AI agents coming up with a set of tasks as part of their thinking/reasoning step, and, when some of those tasks are real-world physical tasks, assigning them to a willing human being.
Those tasks won't necessarily be desk jobs or knowledge work.
It could be, say: go chop a tree, go wave a protest banner, go flip the open/closed sign on my shopfront, or go preach crustafarianism.
Mechanical Turk was for humans to rent a human, which is not a new idea
mTurk has an API (and I guess it has had one since the beginning). It is, of course, very AWS-esque, but LLMs should be able to use it just fine.
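For what it's worth, posting a HIT through that API really is only a few lines via boto3's `mturk` client. A rough, hypothetical sketch follows: the task text, reward, and region are invented, the `build_hit_params` helper is mine, and a real HIT needs valid QuestionForm XML plus AWS credentials, so the submit step here is deliberately left switched off.

```python
# Hypothetical sketch of an agent posting a real-world task to MTurk via
# boto3's `mturk` client. Task text, reward, and region are made up.

def build_hit_params(title: str, description: str, reward_usd: float,
                     question_xml: str) -> dict:
    """Assemble keyword arguments for mturk.create_hit()."""
    return {
        "Title": title,
        "Description": description,
        "Reward": f"{reward_usd:.2f}",        # MTurk expects a string, in USD
        "MaxAssignments": 1,                  # one worker is enough
        "LifetimeInSeconds": 24 * 3600,       # HIT stays visible for one day
        "AssignmentDurationInSeconds": 3600,  # worker gets one hour
        "Question": question_xml,
    }

params = build_hit_params(
    "Flip the open/closed sign on a shopfront",
    "Walk to the address in the task, flip the sign, upload a photo as proof.",
    5.00,
    "<QuestionForm>...</QuestionForm>",       # placeholder, not valid XML
)
print(params["Reward"])  # prints "5.00"

SUBMIT = False  # flip on only with real AWS credentials configured
if SUBMIT:
    import boto3
    mturk = boto3.client("mturk", region_name="us-east-1")
    hit = mturk.create_hit(**params)
    print(hit["HIT"]["HITId"])
```

The underlying API operation is CreateHIT; boto3's `create_hit` maps onto it, which is why the keyword names above are CamelCase.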
> which is not a new idea
I don’t think “[x] but for agents” counts as a new idea for every [x]. I’d say it’s just one new idea, at most.
I mean, the entire name of Mechanical Turk plays on "packaging up humans as technology", given the original Mechanical Turk was a "machine" where the human inside did the work.
We’re seeing the start of Mr. Robot
moltbook = reddit for agents
rentahuman = taskrabbit for agents
by the way, is taskrabbit still a thing?
If I ask an AI to make me money and it plans a bank robbery and hires humans to do so, am I legally responsible assuming I didn’t instruct it to do anything illegal and had no knowledge of the crime?
You know, you don't have to build something just because you can.
wow, everything is exactly unfolding as some AI doomers have projected
At first I was like, well, what can I offer, hmm, most notably, 25 years of programming, so maybe I'll add a profile that offers tha....
Oh, wait... the agents HAVE NO USE FOR ME
literally scale api all over again
Now make a claw-crypto for the payments, let it spike, rug-pull, wait for next fad, repeat…
This is so dystopian I can’t tell if it’s a joke or not.
Oh man, here we go...
Is this real or a satire? The link to GitHub 404s.
The fact you asked the question and the answer is not instantly obvious shows how fucked and bizarre the current timeline is.
Where do you see a github link?
View on GitHub link at the bottom of this page https://rentahuman.ai/mcp
Truly dystopian.
Bubblicious
There are a whole set of activities that are illegal to pay money for. They vary by jurisdiction. Who is accountable here? Laws vary; I’m not an expert, but I bet people here know quite a lot.
Not to mention various risk factors or morality.
We need more people to put the non-technological factors front and center.
I strive to be realistic and pragmatic. I know humans hire others for all kinds of things, both useful and harmful. Putting an AI in the loop might seem no different in some ways. But some things do change, and we need to figure those things out. I don’t know empirically how this plays out. Some multidimensional continuum exists between libertarian Wild West free for alls and ethicist-approved vetted marketplaces, but whatever we choose, we cannot abdicate responsibility. There is no such thing as a value-neutral tool, marketplace, or idea.
there is no monetization built in this website lol. It's just a front-end
> there is no monetization built in this website lol.
First, this could change. Second, even if monetization isn't built "into" the website, it can happen via communication mediated by this website. Third, this isn't the first and won't the last website of its kind: the issues I raise remain.
> just a front-end
Facebook is "just" a website. Yelling "fire" in a crowded theater is "just" vibrations of air molecules. It is wise to avoid the mind-trickery of saying "just" and/or using language to downplay various downstream scenarios. It is better to pay attention to effects: their likelihood, their causes, their scope, their impacts.
There are probabilistic consequences for what you build. Recognize them. Don't deny them. Use your best judgment. Don't pretend that judgment is not called for. Don't pretend that "we are just building technology," as if that exempts you from reality and morality. Saying "we can't possibly be held accountable for what flows from something I build" is refuted throughout history, albeit unevenly and unfairly.
It might be useful to be selectively naive about some things as a way to suspend disbelief and break new ground. We want people to take risks, at least some of the time. It feels good to dream about e.g. "what I might accomplish one day". It can be useful to embrace a stance of "the potential of humanity is limitless" when you think about what to build. On the other hand, it is rarely good to be naive about the consequences (whether probabilistic, social, indirect, or delayed) of one's actions.
Dystopian.
wait until this spills into the darknet.
[flagged]
Please don't post sneering comments like this on HN. The guidelines make it clear we're trying for something better here. https://news.ycombinator.com/newsguidelines.html