The article basically describes signing up as a user and finding the site empty, apart from marketing ploys designed by humans.
It points to a bigger issue: AI has no real agency or motives. How could it? Sure, if you prompt it as if it were in a sci-fi novel, it will play the part (it's trained on a lot of sci-fi). But does it have its own motives? Does your calculator? No, of course not.
It could still be dangerous. But the whole 'alignment' angle is just a naked ploy for raising billions and amping up the importance and seriousness of their issue. It's fake. And every "concerning" study, once read carefully, turns out to be prompting the LLM with a sci-fi scenario and acting surprised when it gives a dramatic, sci-fi-like response.
The first time I came across this phenomenon was when someone posted, years ago, about how two AIs developed their own language to talk to each other. The actual study (if I remember correctly) had two AIs that shared a private key try to communicate while an adversary AI tried to intercept them, and, to no one's surprise, they developed basic private-key encryption! Quick, get Eliezer Yudkowsky on the line!
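For the curious, the setup in that study looks roughly like this. This is a simplified sketch loosely following Abadi & Andersen's adversarial-neural-cryptography paper; the network sizes, losses, and training schedule here are my own assumptions, not the study's actual code:

    import torch
    import torch.nn as nn

    N = 16  # bits per plaintext and per shared key

    def net(in_bits):
        # Tiny stand-in for the paper's mix-and-transform networks.
        return nn.Sequential(nn.Linear(in_bits, 64), nn.Tanh(),
                             nn.Linear(64, N), nn.Tanh())

    alice = net(2 * N)  # (plaintext, key)  -> ciphertext
    bob   = net(2 * N)  # (ciphertext, key) -> plaintext guess
    eve   = net(N)      # ciphertext only   -> plaintext guess

    opt_ab = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()), lr=1e-3)
    opt_e  = torch.optim.Adam(eve.parameters(), lr=1e-3)

    for step in range(3000):
        # Random plaintexts and shared keys, encoded as -1/+1 bits.
        p = torch.randint(0, 2, (256, N)).float() * 2 - 1
        k = torch.randint(0, 2, (256, N)).float() * 2 - 1

        # Eve: learn to decrypt without the key (ciphertext detached).
        c = alice(torch.cat([p, k], 1))
        eve_loss = (eve(c.detach()) - p).abs().mean()
        opt_e.zero_grad(); eve_loss.backward(); opt_e.step()

        # Alice/Bob: Bob should recover p while Eve sits at chance
        # (per-bit error of about 1.0 on the -1/+1 scale).
        c = alice(torch.cat([p, k], 1))
        bob_err = (bob(torch.cat([c, k], 1)) - p).abs().mean()
        eve_err = (eve(c) - p).abs().mean()
        ab_loss = bob_err + (1.0 - eve_err) ** 2
        opt_ab.zero_grad(); ab_loss.backward(); opt_ab.step()

The "surprise" is baked into the objective: Bob's error is minimized while Eve's is pushed toward chance, so learning something key-dependent that Eve can't invert is exactly what the networks are paid to do.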
The paper you're talking about is "Deal or No Deal? End-to-End Learning for Negotiation Dialogues", and it was just AIs drifting away from English. The crazy news article was from Forbes, titled "AI invents its own language so Facebook had to shut it down!", before they changed it after backlash.
The alignment angle doesn't require agency or motives. It's much more about humans setting goals that are poor proxies for what they actually want. Like the classic paperclip maximizer that isn't given the necessary constraints of keeping Earth habitable, humans alive, etc.
Similarly, I don't think RentAHuman requires AI to have agency or motives, even if that's how they present themselves. I could simply move $10,000 into a crypto wallet, rig up Claude to run in an agentic loop, and tell it to multiply that money. Lots of plausible ways of doing that could lead to Claude going to RentAHuman for various real-world tasks: setting up and restocking a vending machine, visiting government offices in person to get permits and taxes sorted out, putting out flyers or similar advertising.
The issue with RentAHuman is simply that approximately nobody is doing that. And with the current state of AI, it would likely be ill-advised to try.
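To be concrete about how trivial the rigging is, here's a sketch of the loop I mean. The model call is stubbed out with a canned reply, and the tool names (check_wallet, post_bounty) are made up for illustration, not taken from any real API:

    import json

    def call_model(messages):
        # Stub: a real rig would call an LLM API here. Canned reply
        # so the sketch runs end to end.
        return json.dumps({"tool": "done", "result": "no profitable plan found"})

    # Made-up tools; a real rig would wire these to a wallet and to
    # RentAHuman, if it exposes an API for agents.
    TOOLS = {
        "check_wallet": lambda args: "balance: $10,000",
        "post_bounty":  lambda args: "posted bounty: " + args["task"],
    }

    def agent_loop(goal, max_steps=50):
        messages = [{"role": "user", "content": goal}]
        for _ in range(max_steps):
            action = json.loads(call_model(messages))  # model picks the next step
            if action["tool"] == "done":
                return action["result"]
            result = TOOLS[action["tool"]](action.get("args", {}))
            messages.append({"role": "assistant", "content": json.dumps(action)})
            messages.append({"role": "user", "content": "tool result: " + result})
        return "step budget exhausted"

    print(agent_loop("Multiply the $10,000 in this wallet. Post RentAHuman "
                     "bounties for anything that needs a body."))

No agency or motives anywhere in there; just a while loop and a goal somebody typed.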
This is going to sound nit-picky, but I wouldn't classify this as the model being able to say no.
They are trying to identify what they deem "harmful" or "abusive" and keep their model from responding to it. The model ultimately doesn't have the choice.
And it can't say no simply because it doesn't want to, because it doesn't "want" anything.
> But the whole 'alignment' angle is just a naked ploy for raising billions and amping up the importance and seriousness of their issue.
"People are excited about progress" and "people are excited about money" are not the big indictments you think they are. Not everything is "fake" (like you say) just because it is related to raising money.
The AI is real. The "alignment" research that's leading the top AI companies to call for strict regulation is not. Maybe the people working on it believe it's real, but I'm hard-pressed to think there aren't ulterior motives at play.
You mean the $100 billion company with an increasingly commoditized product offering has no interest in putting up barriers that keep out smaller competitors?
Just because tech oligarchs are co-opting "alignment" for regulatory capture doesn't mean it's not a real research area and an important topic in AI. When we use natural language with AI, ambiguity is implied. And when you have ambiguity, it's important that an AI doesn't just calculate that the best way to reach a goal is through morally abhorrent means. Or, at the very least, that acting on such a calculation requires human approval, so that someone has to take legal responsibility for the decision.
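As one sketch of what that approval gate could look like in an agent harness (the tool names and the risk split here are invented for illustration):

    # Gate risky tool calls behind explicit human sign-off so a person
    # has to take responsibility. Tool names are hypothetical.
    LOW_RISK = {"read_public_page"}

    def human_approves(tool, args):
        answer = input("Agent wants to run %s(%r). Allow? [y/N] " % (tool, args))
        return answer.strip().lower() == "y"

    def gated_call(tool, args, registry):
        if tool not in LOW_RISK and not human_approves(tool, args):
            return "denied: human approval required"
        return registry[tool](args)

    registry = {
        "read_public_page": lambda a: "fetched " + a["url"],
        "send_money":       lambda a: "sent $%s to %s" % (a["amount"], a["to"]),
    }

    # Reading a page runs unattended; moving money blocks on a human.
    print(gated_call("read_public_page", {"url": "https://example.com"}, registry))
    print(gated_call("send_money", {"amount": 500, "to": "vendor"}, registry))

The point isn't the dozen lines of code; it's that the unattended set is an explicit, auditable choice, and everything outside it leaves a human on the hook.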
The founder is a friend of mine, so maybe I'm biased, but I'm surprised Wired doesn't get how network effects work and how adoption curves happen. It seems strange to publish this about a project someone did in a weekend, a few weekends ago, and is now trying to make a go of. Give him a couple of months to improve the flow for the bot side and the general discoverability of the platform for agents at large. Maybe I'm a bit grumpy because it's my buddy, but this article kinda rubs me the wrong way. :\
Right, but do you or the founder have actual responses to the story posted? It seemed to give RentAHuman the benefit of the doubt every step of the way. The site doesn't work as advertised, appears to be begging for hype, got a reporter to check it out, and it didn't work.
That's life. Can't win them all. The lesson here is that the product wasn't ready for primetime, and you were given a massive freebie of free press, both via Wired _and_ this crosspost.
The better strategy is to actually lay out what works and what's on the roadmap, so anyone partially interested might see it when they stumble into this post.
Or chalk it up as a failed experiment and move on.
I've done a lot of multi-sided marketplace scaling (for DoorDash, Thumbtack, Reddit, etc.) with ads. Happy to chat/advise for free; just DMed you on Twitter. This project is so fun!
What? If anything the tech press is overwhelmingly sycophantic towards both startups and Big Tech alike, often just passing along talking points verbatim without any critical analysis at all.
Also, being "anti-AI" isn't being "anti-tech". AI is a marketing buzzword.
For sure. I haven't forgotten just how thoroughly deified the likes of Elon Musk, Elizabeth Holmes, and Sam Bankman-Fried once were in the tech press.
Note how the number advertising how many bots actually use RentAHuman has vanished from their website. Instead we now have the number of bounties, 1/40th as many as registered humans. And just scrolling through them, maybe a quarter of the bounties aren't bounties at all but more humans offering services.
It's a service that is clearly a lot more appealing to humans than to agents
Usually it would be a network-effect thing, but in this case, from reading the article, it doesn't even work right (big surprise), and the tasks are spammy by nature (big surprise). Like a worse Mechanical Turk, minus the determinism of the code.
The term of art for this is becoming a "reverse centaur":
> A “centaur” is a human being who is assisted by a machine (a human head on a strong and tireless body). A reverse centaur is a machine that uses a human being as its assistant (a frail and vulnerable person being puppeteered by an uncaring, relentless machine).
I saw a video recently where Google has people walking around carrying these backpacks (a lidar/camera setup) to map places cars can't reach. I think that's pretty interesting; maybe they're also getting data for humanoid robots: walking through crowds, navigating alleys.
I wonder if jobs like these could be listed on there, a "walk through this neighborhood and film it" kind of thing.
Yes, there are also people doing similar things, carrying around tablets with cuboidal camera attachments (lidar); it's obvious they're working, not touring.
It would be interesting to see how you'd steal one: it's on from the moment you have it, emitting its location. Maybe you put a blindfold over the camera, walk into a Faraday cage, then power it down and wipe the flash.
They know who you are from the beginning.
It would be interesting if people started hijacking humanoid robots: a little microwave EMP device (not sure if that would work), then grab it and reprogram it.
> ...to no one's surprise, they developed basic private-key encryption!
Not related to alignment though
https://www.forbes.com/sites/tonybradley/2017/07/31/facebook...
> I could simply move $10,000 into a crypto wallet, rig up Claude to run in an agentic loop, and tell it to multiply that money.
I was just trading NASDAQ futures and asking Gemini for feedback on what to do. It was completely off.
I was playing the human role, just feeding it all the information and screenshots of the charts while it made the decisions.
It's not there yet!
The real-world alignment problem is humans using AI to do bad stuff.
The latter problem is very real
Between the crypto and the vibe coding, the author had no reason to believe they'd actually get paid correctly even if they did complete a task.
> It's a service that is clearly a lot more appealing to humans than to agents.
That's a very optimistic way of looking at things!
> A reverse centaur is a machine that uses a human being as its assistant (a frail and vulnerable person being puppeteered by an uncaring, relentless machine).
https://doctorow.medium.com/https-pluralistic-net-2025-09-11...
https://news.ycombinator.com/item?id=30878489
> ...a little microwave EMP device (not sure if that would work), then grab it and reprogram it.
Like one of these:
https://www.youtube.com/watch?v=80kDn4vit_w