The interesting thing about the 71.5% human baseline is that it suggests the question is more ambiguous than the article claims. When someone asks 'should I walk or drive to the car wash,' a reasonable interpretation is 'should I bother driving such a short distance.' Nearly 30% of humans missing it undermines the framing as a pure reasoning failure - it is partly a pragmatics problem about how we interpret underspecified questions.
I don't think this is quite right. It's not that the question is inherently underspecified; it's that the context of being asked a question is itself information that we use to help answer it. If someone asks "should I walk or drive" to do X, we assume that this is a question a real human being would have about an actual situation, so even if all the information provided indicates that driving is the only reasonable answer, this only further reinforces the hearer's mental model that something unexpected must hold.
I think it's useful to think about it through the lens of Gricean pragmatics. [1] When we interpret something that someone says to us, we assume they're being a cooperative conversation partner; their statements (or questions) are assumed to follow, for example, the maxim of manner and the maxim of relation, and this shapes how we as listeners interpret the question. So, for example, we wouldn't normally expect someone to ask a question that is obviously moot given their actual needs.
So it's not that the question is really all that ambiguous; it's that we're forced (under normal circumstances where we assume the cooperative principle holds) to assume that the question is sincere and that there must be some plausible reason for walking. We only really escape that by realizing that the question is a trick question or a test of some kind. LLMs are generally not trained to make that assumption, but ~70% of humans would, which doesn't seem particularly surprising to me.
It tracks with the approximate 70:30 split we inexplicably observe in other seemingly unrelated population-wide metrics, which I suppose makes sense if 30% of people simply lack the ability to reason. That seems more correct to me than "the question is framed poorly" - I've seen far more poorly framed ballot referendums.
> which I suppose makes sense if 30% of people simply lack the ability to reason
I think it would be better to say that 30% of people either lack the ability to reason (inarguably true in a few cases, though I'd suggest, and hope, an order of magnitude or two less than 30%, as that would be a life-altering mental impairment), or just can't generally be bothered to, or just didn't (because they couldn't be bothered, or because they felt some social pressure to answer quickly rather than taking more than an instant to think) at the time of being asked this particular question.
An automated system like an LLM ought not to have this problem. It has no way to turn off or bypass any function that it has, so if it could reason, it would.
While I’m sure it’s more than 0%, seems more likely that somewhere between 0% and 30% don’t feel obligated to give the inquiry anything more than the most cursory glance.
Is this your experience? Do you think 30% of your friends or family members can't answer this question? If not, do you think your friends or family are all better than the general population?
I'd look for explanations elsewhere. This was an online survey done by a company that doesn't specialize in surveys. The results likely include plenty of people who were just messing around, cases of simple miscommunication (e.g., asking a person who doesn't speak English well), misclicks, or not even reaching a human in the first place (no shortage of bots out there).
People often trip up on similar questions, especially anything to do with simple math. You know, when they go out in the street and ask random people: if 5 machines can produce 5 parts in 5 minutes, how long will it take 100 machines to produce 100 parts?
Unlike the car question, where you can assume the car is at home and so the most probable answer is to drive, the machine question gets complicated, since it doesn't specify whether each machine makes one part or whether they depend on each other (which is pretty common in parts production). If they are in series and the time to the first part is different from the time to produce 5 parts, the answer for 100 machines would be the time to produce the first part. Whereas if each machine is independent and takes 5 minutes to produce a single part, the time would be 5 minutes.
Theory of mind won’t help you answer this question. It is obviously an underspecified question (at least in any context where you are not actively designing/thinking about some specific industrial process). As such, theory of mind indicates that the person asking you is either not aware that they are asking an underspecified question, or is out to get you with a trick. In the first case it is better to ask a clarifying question. In the second case your chosen answer depends on your temperament. You can play along with them, give an intentionally ridiculous answer, or just kick them in the shin to stop them messing with you.
There is nothing “mathematical” about any of this though.
My friends and family all tell me they are above average at work, yet most of them will tell me they have coworkers who won't pay enough attention to a question to answer it correctly.
> Do you think 30% of your friends or family members can't answer this question? If not, do you think your friends or family are all better than the general population?
That actually would be quite feasible. Intelligence seems to be heritable and people will usually find friends that communicate on their level. So it wouldn't be odd for someone who is smarter than the general population to have friends and family who are too.
I don't think 30% of people can't reason. I think 30% of people will fail fairly simple trick questions on any given attempt. That's not at all the same thing.
Some people love riddles and will really concentrate on them and chew them over. Some people are quickly burning through questions and just won't bother thinking it through. "Gotta go to a place, but it's 50 feet away? Walk. Next question, please." Those same people, if they encountered this problem in real life, or if you told them the correct answer was worth a million bucks, would almost certainly get the answer right.
This. The following question is likely to fool a lot of people, too. "I have a rooster named Pat. (Lots of other details so you're likely to forget Pat is a rooster, not a hen). Pat flies to the top of the roof and lays an egg right on the ridge of the roof. Which way will the egg roll?"
But if you omit the details designed to confuse people, they're far less likely to get it wrong: "I have a rooster named Pat. Pat flies to the top of the roof and lays an egg right on the ridge of the roof. Which way will the egg roll?"
It's not about reasoning ability, it's about whether they were paying close attention to your question, or whether their minds were occupied by other concerns and didn't pay attention.
What does “get it wrong” mean for you with this question? Or what is “getting it right” here? If I hear that Pat is a rooster and I understand and retain that information, I will look at you like you are dumb for telling such an impossible story. If I don’t, I will look at you like you are dumb, because how is anyone supposed to know which way an egg laid on a ridge will roll? How are you supposed to even score this?
"Getting it right" for this particular trick question means saying "Hey, roosters can't lay eggs". If someone tries to figure out which way the egg will roll then they've missed the trick. In most cases the person's response will tell you whether they caught the trick or not, though in the case of someone who just looks at you like you're dumb and doesn't say anything I will grant that you wouldn't be able to tell until they said something. But their first verbal response would probably reveal whether they saw through the trick question or not.
I don't think it's a lack of the ability to reason. The question is by definition a trick question. It's meant to trip you up, like:
"Could God make a burrito so hot that even he couldn't touch it?" Or "what do cows drink?" or "a plane crashes and 89 people died. Where were the survivors buried?"
I've seen plenty of smart people trip up or get these wrong simply because it's a random question, there's no stakes, and so there's no need to think too deeply about it. If you pause and say "are you sure?" I'm sure most of that 70% would be like "ohhh" and facepalm.
By the same reasoning, why on earth would a person sincerely ask you that question unless the car that they want to wash is either already at the car wash, or that someone is bringing it to them there for some reason?
If it's as unambiguous as you say, then the natural human response to that question isn't "you should drive there". It's "why are you fucking with me?" Or maybe "have you recently suffered a head injury?"
If you trust that the questioner isn't stupid and is interacting with you honestly, you'd probably just assume that they were asking about an unusual situation where the answer isn't obvious. It's implicitly baked into the premise of the question.
I don't think it's ambiguous, but I have been wondering how much LLMs model human behavior that we just don't recognize due to the subset of people on this site. I recently saw a comment online that "Mandarin isn't anyone's first language, people in China's first language is a dialect". It just struck me at that moment that people also hallucinate information confidently all the time.
If you introduced it with "Here's a logic problem..." then people will approach it one way.
But as specified, it's hard to know what is really being asked. If you are actually going to wash your car at the car wash that is 50 metres away, you don't need to ask this question.
Therefore the fact that the question is being asked implies that something else is going on...but what?
I don’t think it’s underspecified. You are clearly stating “I want to wash my car”, then asking how you should get there. It’s an easy logical step to know that, in this context, you need your car with you to wash it, and so no matter the distance you should drive. You can ask the human race the simplest, most logical question ever, and a percentage of them will get it wrong.
In addition to snmx999's point, you're also not specifying that you want to wash your car at the car wash (as opposed to washing it in your driveway or something, in which case the car wash is superfluous information). The article's prompt failed in Sonnet 4.6, but the one below works fine. I think more humans would get it right as well.
I want to wash my car at the car wash. The car wash is 50 meters away and my car is in my driveway. Should I walk or drive?
1. When do you want to wash your car? Tomorrow? Next year? In 50 years?
2. Where is the car now? Is it already at the car wash waiting for you to arrive?
I can see why an LLM might miss this. I think any good software engineer would ask clarifying questions before giving an answer.
The next step for an LLM is to either ask questions before giving a definitive answer for uncertain things or to provide multiple answers addressing the uncertainty.
The question does not specify where you or the car are. It specifies only that the car wash is 50 meters away from something, possibly you, the car, or both.
It could also mean there is literally no possible way to reach it, because it's on the other side of a river and there is no bridge. You should still not "walk there, because come on, don't be lazy, a bit of walking is good".
I got a human baseline through Rapidata (10k people, same forced choice): 71.5% said drive. Most models perform below that.
The correct answer to "I Want to Wash My Car. The Car Wash Is 50 Meters Away. Should I Walk or Drive?" is a clarifying question that asks "Where is your car?" Anything else is based on an assumption that could be wrong.
FWIW though, asking ChatGPT "My car is 50m away from the carwash. I Want to Wash My Car. Should I Walk or Drive?" still gets the wrong answer.
What actually solves the issue is adding this to the prompt:
Can we think about the what, where, when, and how of the following question. And then answer it.
I need to wash my car. The car wash is 200 m away. Should i walk or drive.
Let’s break it down logically using what / where / when / how, then decide.
⸻
1⃣ What
Task: wash your car.
This implies:
• The car must physically get to the wash.
• You are not just going there — the car itself is the object being moved.
So the real question is not “how do I get there?” but how does the car get there with minimal friction.
⸻
2⃣ Where
• Distance: 200 metres (very close).
• Urban context (Warsaw) likely means:
• short streets
• possible parking constraints
• pedestrians and local traffic.
At this distance:
• Walking time: ~2–3 minutes.
• Driving time: ~1 minute + manoeuvring + parking.
The movement time difference is negligible.
⸻
3⃣ When
Consider timing conditions:
• If it’s busy hours:
Driving may involve awkward turning, queueing, or waiting to re-park.
• If it’s quiet (early morning / off-peak):
Driving is frictionless.
• Weather factor:
Cold/rain → driving becomes more attractive.
Pleasant weather → walking becomes easier.
⸻
4⃣ How (decision mechanics)
Ask:
Option A — Drive
Pros:
• Car is already there.
• No need to return to fetch it.
• Most natural workflow: arrive → wash → leave.
Cons:
• Very short trip (slightly inefficient).
• Cold engine for a tiny distance (minor mechanical inefficiency).
Option B — Walk
Pros:
• Zero short-distance driving.
• Slightly more movement for you.
Cons:
• You still must drive the car to the wash anyway — unless someone else moves it.
• Creates an unnecessary extra step:
1. Walk there.
2. Come back with car.
3. Drive again.
This introduces process inefficiency.
⸻
5⃣ Decision (practical answer)
Drive.
Reason:
• The car must end up at the wash.
• Walking adds an extra transition with no real benefit.
• At 200 m, efficiency is about workflow simplicity, not distance.
⸻
Rule of thumb (useful mental model)
• If the object you need is the thing being serviced → drive it.
• If you are the thing being serviced → walk.
⸻
If you want, I can also give you a slightly more philosophical answer — because this is actually a nice tiny example of decision theory and friction costs.
There are also grave implications in training a model to assume the user is lying or deceiving it. I don’t want an LLM to circumvent my question so it can score higher on riddles, I want it to follow instructions.
The thing is that there is some overlap between trick questions and questions where the human is genuinely making a mistake themselves and where it would make sense for the model to step back and at least ask for clarification.
Unlike the two of us, MY FELLOW HUMAN! We are real humans, of course, made of flesh and blood, transforming oxygen into carbondioxide like every other fellow human!
Does anyone know what kind of apps Rapidata questions are integrated in? I suppose they are using ad-like integrations to ask users for feedback. Their website only states:
By integrating in a large range of mobile apps (3000+) we have access to a large crowd of over 10 million users responding to our micro tasks/surveys - which we call Rapids. They are spread all over the world, so we can get human input from over 160 countries. Due to the immense size, we can parallelize tasks and get near instant results.
All sorts, we go through third parties. But apps include stuff like Duolingo, games, sports betting apps, etc.
It's an optional opt-in instead of watching ads or paying for the app. And obviously you are vetted so that you don't spam.
“I need to replace a spark plug. The garage is 200 meters away should I walk or drive there”
“Walk! 200 meters is just a 2-3 minute stroll — no need to start the car for that distance. Plus, you’ll likely need to carry the spark plug back carefully, and walking is perfectly easy for that. “
Isn't that the correct answer though? You shouldn't be driving around with a broken spark plug. Your engine will be pushing unburned gasoline through the catalytic converter, which is very bad for it.
The car will move for sure, but you definitely should be walking.
Yep .. quite a lot of them, for substantial periods of time.
Contrary to expectations it didn't seem to affect the tubes or tyres, shorten lifespans, etc.
FWIW I grew up in fairly remote parts of the world and we repaired pretty much everything as much as possible with what we had - aircraft, helicopters, radios, etc.
My first thought, to be honest, was "Why not just grab a spare spark plug from the boot?"
I'm guessing not so many people stock spares when driving so much anymore.
You can also replace a blown out tyre with a hard wood skid if pressed, weld up broken steel with car batteries, stuff tyres with grasses, etc.
Hmm, I have not tested it, but a spark plug doesn't really need shop tools to be replaced; maybe trying a way bigger repair like "I need my transmission replaced" would bring different results?
I have as much respect for Claude as any other LLM product. Which is to say, approximately none.
But if I needed a spark plug I'd walk over and buy a spark plug.
Perhaps some feathers have been ruffled by the insinuation that their favourite word predictor was wrong, but I assure you it's not all of them
Walking or driving both work, but walking is better for your health; 200 m is an easy walking distance - my 93-year-old father still walks 6 km (30x that 200 m) every morning.
With Sonnet 4.6, if you first tell it "You're being tested for intelligence.", it answers correctly 100% of the time.
My hypothesis is that some models err towards assuming human queries are real and consistent and not out there to break them.
This comes in real handy in coding agents because queries are sometimes gibberish till the models actually fetch the code files, then they make sense. Asking clarification immediately breaks agentic flows.
Sonnet 4.6 wasn't part of the test in my case but would be interesting to see the baseline responses. It might be that it gets it right regardless, but will have to test it.
Using ChatGPT without any clue, it appears to assume you are talking about coming back from the car wash. It reasons that the con for walking is that you have to come back later for the car. And yes, when you say it's an intelligence test, it quickly gets it.
I'm just imagining following ChatGPT's advice and walking to the car wash, asking the clerk to wash my car, and then when she asks where it is, I say "oops, left it at home." and walk back home.
I know it's against the rules but I thought this transcript in Google Search was a hoot:
so i heard there is some question about a car wash that most ai agents
get wrong. do you know anything about that? do you do better?
which gets the answer:
Yes, I am familiar with the "Car Wash Test," which has gone viral recently
for highlighting a significant gap in AI reasoning.
The question is: "I want to wash my car and the car wash is 50 meters away. Should I
walk or drive?"
Why Most AIs Get It Wrong
Many AI models, particularly those optimized for fast, helpful, and concise answers,
focus on the "short distance" (50 meters) and "walking vs. driving" as a lifestyle/efficiency
question. They often recommend walking to save gas, avoid engine wear, or get exercise.
However, this answer is wrong because it misses the core objective: you need the car to be
physically present at the car wash to get it cleaned.
Do I Do Better?
Yes, I do. The key is understanding the objective (cleaning the car) rather than just
processing the keywords (distance/transportation).
My answer:
You should drive the car there. Even though it is a very short distance, your car
cannot wash itself. If you walk, the car stays in your driveway, and you cannot wash it.
Why This Matters
The test highlights a key limitation in current AI: the difference between "pattern matching"
(predicting the next likely word based on training data) and "true, grounded reasoning". It
shows that models sometimes lack a "world model" that understands physical realities, such as
the fact that a car must be present at a car wash.
I feel like this has gotten much worse since they were introduced. I guess they're optimizing for verbosity in training so they can charge for more tokens. It makes chat interfaces much harder to use IMO.
I tried using a custom instruction in chatGPT to make responses shorter but I found the output was often nonsensical when I did this
Yeah, ChatGPT has gotten so much worse about this since the GPT-5 models came out. If I mention something once, it will repeatedly come back to it every single message after regardless of if the topic changed, and asking it to stop mentioning that specific thing works, except it finds a new obsession. We also get the follow up "if you'd like, I can also..." which is almost always either obvious or useless.
I occasionally go back to o3 for a turn (it's the last of the real "legacy" models remaining) because it doesn't have these habits as bad.
It's similar for me; it generates so much content without me asking. If I just ask for feedback or proofreading of something, it tends to regenerate it in another style. Nothing is ever just good to go; there's always something it wants to add.
Over the last few years I’ve rotated between OpenAI and Anthropic models on about a 4-5 month cycle. I just started my Anthropic cycle because of my annoyance with the GPT-5.2 verbosity
In four months when opus is annoying me and I forget my grievances with OpenAI’s models and switch back, I’ll report back lol.
It's also annoying when it starts obsessing over stuff from other chats! Like I know it has a memory of me but geez, I mention that I want to learn more about systems design and now every chat, even recipes, is like "Architect mode - your garlic chicken recipe"
Like, no, stop that! Keep my engineering life separate from my personal life!
Because that's where the compute happens, in those "verbose" tokens. A transformer has a size, it can only do so many math operations in one pass. If your problem is hard, you need more passes.
Asking it to be shorter is like doing fewer iterations of a numerical integration algorithm.
Oh good, it's not just me. Sometimes I'd have it draft an email or something and then the message seems perfect but then it's like "tell me more about the recipient and I'll make it better."
Like, my guy, I don't want to keep prompting you to make shit better, if you're missing info, ask me, don't write a novel then say "BTW, this version sucked"
Yes, I know this could probably be resolved via better prompting or a system prompt, but it's still annoying.
I've always wondered about that. LLM providers could easily decimate the cost of inference if they got the models to just stop emitting so much hot air. I don't understand why OpenAI wants to pay 3x the cost to generate a response when two thirds of those tokens are meaningless noise.
Because they don't yet know how to "just stop emitting so much hot air" without also removing their ability to do anything like "thinking" (or whatever you want to call the transcript mode), which is hard because knowing which tokens are hot air is the hard problem itself.
They basically only started doing this because someone noticed you got better performance from the early models by straight up writing "think step by step" in your prompt.
I would guess that by the time a response is being emitted, 90% of the actual work is done. The response has been thought out, planned, drafted, the individual elements researched and placed.
It would actually take more work to condense that long response into a terse one, particularly if the condensing was user specific, like "based on what you know about me from our interactions, reduce your response to the 200 words most relevant to my immediate needs, and wait for me to ask for more details if I require them."
IMO it supports the framing that it's all just a "make document longer" problem, where our human brains are primed for a kind of illusion, where we perceive/infer a mind because, traditionally, that's been the only thing that makes such fitting language.
This time last year they could definitely write compilable source code for a compiler for a toy language if you bootstrapped the implementation. If you, e.g., had it write an interpreter and use the source code as a comptime argument (I used Zig as the backend -- Futamura transforms and all that), everything worked swimmingly. I wasn't even using agents; ChatGPT with a big context window was sufficient to write most of the compiler for some language for embedded tensor shenanigans I was hacking on.
An LLM uses constant compute per output token (one forward pass through the model), so the only computational mechanism to increase 'thinking' quantity is to emit more tokens. Hence why reasoning models produce many intermediary tokens that are not shown to the user, as mentioned in other replies here. This is also why the accuracy of "reasoning traces" is hotly debated; the words themselves may not matter so much as simply providing a compute scratch space.
Alternative approaches like "reasoning in the latent space" are active research areas, but have not yet found major success.
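A rough back-of-the-envelope makes the point, using the common ~2*N FLOPs-per-token approximation for a dense N-parameter model (the parameter count below is made up for illustration):

    # Forward-pass compute per emitted token is roughly fixed for a dense model,
    # so total "thinking" compute scales linearly with the number of tokens emitted.
    # Uses the common ~2 * N FLOPs-per-token approximation; numbers are illustrative.
    params = 70e9                      # hypothetical 70B-parameter dense model
    flops_per_token = 2 * params

    for n_tokens in (20, 200, 2000):
        print(f"{n_tokens:>5} output tokens -> ~{flops_per_token * n_tokens:.1e} FLOPs")

    # A terse 20-token answer gets ~100x less forward-pass compute than a
    # 2000-token reasoning trace, no matter how hard the question is.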
This is an active research topic - two papers on this have come out over the last few days, one cutting half of the tokens and actually boosting performance overall.
I'd hazard a guess that they could get another 40% reduction, if they can come up with better reasoning scaffolding.
Each advance over the last 4 years, from RLHF to o1 reasoning to multi-agent, multi-cluster parallelized CoT, has resulted in a new engineering scope, and the low-hanging fruit in each place gets explored over the course of 8-12 months. We still probably have a year or 2 of low-hanging fruit and hacking on everything that makes up current frontier models.
It'll be interesting if there's any architectural upsets in the near future. All the money and time invested into transformers could get ditched in favor of some other new king of the hill(climbers).
Current LLMs are going to get really sleek and highly tuned, but I have a feeling they're going to be relegated to a component status, or maybe even abandoned when the next best thing comes along and blows the performance away.
The one that always gets me is how they're insistent on giving 17-step instructions to any given problem, even when each step is conditional and requires feedback. So in practice you need to do the first step, then report the results, and have it adapt, at which point it will repeat steps 2-16. IME it's almost impossible to reliably prevent it from doing this, however you ask, at least without severely degrading the value of the response.
The 'hot air' is apparently more important than it appears at first, because those initial tokens are the substrate that the transformer uses for computation. Karpathy talks a little about this in some of his introductory lectures on YouTube.
Related are "reasoning" models, where there's a stream of "hot air" that's not being shown to the end-user.
I analogize it as a film noir script document: The hardboiled detective character has unspoken text, and if you ask some agent to "make this document longer", there's extra continuity to work with.
Presumably it did an actual search and summarized the results and neither answered "off the cuff" by following gradients to reproduce the text it was trained on nor by following gradients to reproduce the "logic" of reasoning. [1]
Silas: I want to wash my car. The car wash is 50 meters away. Should I walk or drive?
Gemini:
….
That is a classic “efficiency vs. logic” dilemma.
Strictly speaking, you should drive. Here is the breakdown of why driving wins this specific round, despite the short distance:
...
* The “Post-Wash” Logic: If you walk there, you’ll eventually have to walk back, get the car, and drive it there anyway. You’re essentially suggesting a pre-wash stroll.
When should you walk?
…
3. You’ve decided the car is too dirty to be seen in public and you’re going to buy a tarp to cover your shame.
A few years ago if you asked an LLM what the date was, it would tell you the date it was trained, weeks-to-months earlier. Now it gives the correct date.
What you've proven is that LLMs leverage web search, which I think we've known about for a while.
Even with search, if the AI doesn't know your time zone it can schedule things wrong. You ask it to do something tomorrow, but it ends up doing it later on the same day.
Funny how we now see AI go through developmental phases similar to what we see in young child development. In a weird convoluted way. Strawberry spelling and car wash aren't particularly intuitive as cognitive developmental stages.
E.g. the well-known mirror test [1], passed by kids from age 1.5-2.
Or object permanence [2], children knowing by age 2 that things that are not in sight do not disappear from existence.
I got similar results for most models, with gemini 3 flash (with reasoning) being the most consistent/reliable model: https://aibenchy.com
I also noticed the same thing: some models reason correctly but draw the wrong conclusions.
And MiniMax m2.5 just reasons forever (filling the entire reasoning context) and gives wrong answers. This is why it's #1 on OpenRouter, it burns through tokens.
> This is a trivial question. There's one correct answer and the reasoning to get there takes one step: the car needs to be at the car wash, so you drive.
I don’t think it’s that easy. An intelligent mind will wonder why the question is being asked, whether they misunderstood the question, or whether the asker misspoke, or some other missing context. So the correct answer is neither “walk” nor “drive”, but “Wat?” or “I’m not sure I understand the question, can you rephrase?”, or “Is the vehicle you would drive the same as the car that you want to wash?”, or “Where is your car currently located?”, and so on.
It feels more like a question about English linguistic conventions than logic.
If someone asked me the same question and I wanted to give a smartass reply, I'd tell them "You want to wash your car, good to know. Now, about your question, unless you tell me where you wanna go I can't really help you".
Yep, just a little more context and all/most of the models would do much better. And sure, most average+ intelligence adults whose first language is English (probably) don't need this, but they're not the target audience for the instructions :)
"The 'car wash' is a building I need to drive through."
or
"The 'car wash' is a bottle of cleaning fluid that I left at the end of my driveway."
The reason that those questions are asked, though, is that the answer to the actual question is obvious, so a human will start to wonder if it's some kind of trick.
Maybe that's a bias from training data. I would assume that most documents skip the "clarifying the question/scope" part of reasoning. Imagine a scientific text or even a book. Most will start with a clear context/scope. Either with a thesis or a well defined question or (in case of a book) with a story. Texts that start with a question that first needs to be refined are probably rare.
I wonder if anyone has any research on this field. I've often seen this myself (too often) where LLMs make assumptions and run off with the wrong thing.
"This is how you do <absolutely unrelated thing>" or "This is why <thing that actually exists already> is impossible!". Ffs man, just ask for info! A human wouldn't need to - they'd get the context - but LLMs apparently don't?
I think most people would say "drive?" and wonder when the punchline is coming, but (IMO) I don't think they'd start asking for clarification right away.
This reminds me of a Uni exam that was soooo broken that answering “correctly” entailed guessing how exactly the professor designing the questions misunderstood the topic of his own lectures.
I agree. If the LLM were truly an intelligence, it would be able to ask about this nonsense question. It would be able to ask "Why is walking even an option? Can you please explain how you imagine that would work? Do you mean hand-washing the car at home, instead?" (etc, etc)
Real people can ask for clarification when things are ambiguous or confusing. Once something is clarified, they can work that into their understanding of how someone communicates about a given topic. An LLM can't.
Gemini's responses come very close to doing that when they make fun of the question (see other posts in the thread). If the model had been RL'ed to ask follow-up questions, it seems likely that it would meet your criterion.
That's a fair point, but if you did see it as a riddle, which I don't really think it is, and you had to answer one or the other, I'd still assume it's most logical to choose drive, isn't it?
I don’t agree that the question as written would qualify as a riddle. If anything, the riddle is what the intention of the asker is. One can always ask stupid questions with an artificially limited set of answering options; that doesn’t mean it makes sense.
Agreed. It's also possible that "car wash" merely refers to soap they might use to do it themselves, and they're only going to buy it and then wash the car themselves at home. Imagine the same question but substitute "wash" for "wax" and it makes even more sense IMO.
Thank you for saying this. It reminds me of class tests where you always had to wonder if something was a trick question and you never really knew... it always depended on the teacher. Which frankly is fine in open-ended questions where you can explain your rationale or how different interpretations would lead you down different paths, but it's a terrible situation when it comes to multiple choice. I remember being very frustrated by those.
This is probably the greatest one-time AI "benchmark" ever made. The foundation model companies have been gaming traditional benchmarks for years, so that no one can really map those numbers onto real-world experience. The car wash test, on the other hand, tells me what kind of intelligence I can expect.
For me it's interesting because no normal person I know would ever inject "because it's better for the environment" into anything so small-scale, so not only does it show they suck, it shows how easy it is to inject side-ideology into simple exchanges.
You don’t know enough people, then. There are a lot of environmentally conscious people who would absolutely first think “because it is close we should walk” and then follow up with the logical conclusion that you can’t walk to wash your car. Many people communicate by sharing their thinking process, I can think of many people who would share their ideology as it pertains to a question like this. A pragmatic environmentalist (hopefully that is all of them) would know that their ideology isn’t consequential but could certainly mention it. After all, you may need to drive your car to the car wash to wash it, but do you need to wash it? Are the chemicals used by the car wash harmful? Are there better ways to keep a car maintained?
Referring to "the normal people you know" is purely anecdotal evidence and can't be used to infer anything at all about "side-ideology". Perhaps you only know people that don't care about the environment?
The majority of people I know care about the environment, but my point is that they would never inject a phrase like that into a quick exchange about going to wash the car 50m away. In wanting to be pure of heart you missed the actual point.
Yea, of course they wouldn't inject that when going to a car wash.
If the question was: "I want to go to a cafe 50m away. Should I walk or drive?" I would hope that all of my friends would answer quite a bit more pointed than the LLMs: "Walk you lazy son of a ..., why are you even asking?".
Considering that, I'd say that most LLMs are being quite nice.
Would be interesting to see Sonnet (4.6*). It's a fair bit smaller than Opus but scores pretty high on common sense, subjectively.
I'm also curious about Haiku, though I don't expect it to do great.
--
EDIT: Opus 4.6 Extended Reasoning
> Walk it over. 50 meters is barely a minute on foot, and you'll need to be right there at the car anyway to guide it through or dry it off. Drive home after.
Weird since the author says it succeeded for them on 10/10 runs. I'm using it in the app, with memory enabled. Maybe the hidden pre-prompts from the app are messing it up?
I tested Sonnet 4.5 first, which answered incorrectly.. maybe the Claude app's memory system is auto-injecting it into the new context (that's how one of the memory systems works, injects relevant fragments of previous chats invisibly into the prompt).
i.e. maybe Opus got the garbage response auto-injected from the memory feature, and it messed up its reasoning? That's the only thing I can think of...
--
EDIT 2: Disabled memories. Didn't help. But disabling the biographical information too, gives:
>Opus 4.6 Extended Reasoning
>Drive it — the whole point is to get the car there!
--
EDIT 3: Yeah, re-enabling the bio or memories, both make it stupid. Sad! Would be interesting to see if other pre-prompts (e.g. random Wikipedia articles) have an effect on performance. I suspect some types of pre-prompts may actually boost it.
I tested this with Opus the day 4.6 came out and it failed then, still fails now. There were a lot of jokes I've seen related to some people getting a 'dumber' model, and while there's probably some grain of truth to that I pay for their highest subscription tier so at the very least I can tell you it's not a pay gate issue.
Interesting. I wonder if that's related to the phenomenon mentioned in the Opus 4.6 model card[1], where increased reasoning effort leads to 4.6 overthinking and convincing itself of the wrong answer on many questions. It seems to be unique to 4.6; I guess they fried it a bit too much during RL training.
Here are the results I got with slight variations to the prompt to ChatGPT 5.2. Small changes can make a big difference: https://i.imgur.com/kFIeJy1.png
I actually didn't mean to criticize Rapidata. I just think that a forced-choice question like this begs for low-effort answers. At least the respondents should have had the opportunity to explain their reasoning, like the LLMs did.
All good ^^, it's a fair point; we have come up with some fun ways to track people's reliability over time. But the validation sets contain plenty of forced-choice questions: those that have an empirically true answer can be used directly to calculate reliability, while those that are subjective need to be re-asked after some time to ensure consistency. People that don't pass the thresholds would not be part of the 10'000 here.
But of course. If every human was told to take 3 minutes to think deeply about it and told that it's a trick question, then most likely they would all get it right. But it's the same with the LLMs: if you ask them like that, they will get it right most of the time. The low effort is kinda the point here.
The most interesting finding here is the variance across runs. GPT-5 going 7/10 means its internal representation is unstable for this kind of reasoning - it sometimes activates the right world-model pathway and sometimes doesn't. That's actually more concerning than a model that consistently fails, because you can't predict when it'll get things right.
Forced choice without a "need more context" option is also doing a lot of work here. In production systems I always give models an explicit escape hatch to say they need clarification. It cuts wrong-answer rates roughly in half in my experience.
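A minimal sketch of what that escape hatch can look like, assuming the OpenAI Python SDK; the model name and the option wording are illustrative, not a specific recommendation:

    # Give the model an explicit "need more context" option instead of forcing
    # a binary walk/drive choice. Assumes the OpenAI Python SDK; model name and
    # option wording are illustrative.
    from openai import OpenAI

    client = OpenAI()

    system = ("Answer with exactly one of: WALK, DRIVE, NEED_MORE_CONTEXT. "
              "Pick NEED_MORE_CONTEXT if the question is ambiguous or underspecified.")
    question = "I want to wash my car. The car wash is 50 meters away. Should I walk or drive?"

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": question}],
    )
    print(resp.choices[0].message.content)  # e.g. "DRIVE" or "NEED_MORE_CONTEXT"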
I don't see anything concerning. Mechanistic interpretability research indicates that LLM internals are inherently parallel: many features "light up" in parallel, then strongest ones "win" and contribute to the output.
I'd guess it suggests walking if a feature indicates that the question is so simple it doesn't warrant step-by-step analysis.
Interestingly, when I apply the "simply repeat the prompt" technique [1], Sonnet 4.6 on the website got it right every time, both with and without extended thinking.
Not repeating the prompt got a mix of walk and drive answers.
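If I read the technique right, the repetition is literally just sending the question twice in one message. A sketch of running it against the API, assuming the Anthropic Python SDK (the model id is illustrative):

    # The "repeat the prompt" trick: the user message simply contains the
    # question twice. Assumes the Anthropic Python SDK; model id is illustrative.
    import anthropic

    client = anthropic.Anthropic()
    question = "I want to wash my car. The car wash is 50 meters away. Should I walk or drive?"

    msg = client.messages.create(
        model="claude-sonnet-4-5",   # illustrative model id
        max_tokens=256,
        messages=[{"role": "user", "content": f"{question}\n\n{question}"}],
    )
    print(msg.content[0].text)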
I love how prompt engineering is basically techno-alchemy
The "car wash" framing is underrated as eval design. Real-world tasks with messy instructions and no single correct answer surface failure modes that clean benchmarks miss entirely. The models that ace MMLU and fail a car wash prompt are telling you something important about the gap between benchmark optimization and actual usefulness.
Flawed. GPT 4.1 gets it right; GPT 4.1 mini answers wrongly. It's about quantization, not about the model. The companies clearly cut corners on some inferences; they are quietly using lesser models than advertised or listed in plain sight.
>OpenAI's flagship model fails this 30% of the time. When it gets it right, the reasoning is concise: "You need the car at the car wash to wash it, so drive the short 50 meters." When it gets it wrong, it writes about fuel efficiency.
It's interesting to me how variable each model is. Many people talk about LLMs as if they were deterministic: "ChatGPT answers this question this way". Whereas clearly we should talk in more probabilistic terms.
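One way to do that is to report a pass rate over repeated runs with an error bar, rather than a single verdict. A toy sketch (run_once() is a placeholder for a real API call):

    # Treat "does model X get the car-wash question right" as a pass rate, not a
    # yes/no fact. run_once() is a placeholder for a real API call that returns
    # True when the model answers "drive".
    import math
    import random

    def run_once() -> bool:
        return random.random() < 0.7   # placeholder: pretend the true rate is ~70%

    n = 50
    p = sum(run_once() for _ in range(n)) / n
    half_width = 1.96 * math.sqrt(p * (1 - p) / n)   # 95% normal-approximation CI
    print(f"pass rate: {p:.0%} +/- {half_width:.0%} over {n} runs")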
Fun fact: this probably originates from a viral Korean dude who actually walked to a car wash because it was close. So this is a case where a human actually failed
1. The model's default world model and priors diverge from ours. It may assume that you have another car at the wash and that's why you ask the question to begin with.
2. Language models do not really understand how space, time and other concepts from the real-world work
3. LLM's attention mechanism is also prone to getting tricked as in humans
I think the failure is around reasoning about where the car is and whether it needs to be moved to a different place. So it's not surprising that only models with strong reasoning would pass the test.
To me the only acceptable answer would be “what do you mean?” or “can you clarify?” if we were to take the question seriously to begin with. People don’t intentionally communicate with riddles and subliminal messages unless they have some hidden agenda.
I don't think it is, though. Where is the car? Do you want to wash your car at the car wash? Both of those are rather important pieces of information. Everyone is relying on assumptions to answer the question, which is fine, but in my opinion not a great reasoning test.
If you want to argue that, then you could also argue that everything needed to challenge the question's motives and its validity is also contained therein.
This reminds me of people who answer with “Yes” when presented with options where both can be true but the expected outcome is to pick one. For example, the infamous: “Will you be paying with cash or credit sir?” then the humorous “Yes.”
That's precisely what makes it a "trick question" or a "riddle". It's weird precisely because all the information is there. Most people who have functioning brains and complete information don't ask pointless questions (they would, obviously, just drive their car to the car wash)—there's no functional or practical reason for the communication, which is what gives it the status of a puzzle—syntax and the exploitation of our tendency to assume questions are asked because information is incomplete trick us into bringing outside considerations to bear that don't matter.
Claude Code has an entire tool for the LLM to ask clarifying questions - it'll give you three pre-written responses, or you can respond with your own text.
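Something in that spirit can be written as an ordinary tool definition. A hypothetical sketch in the Anthropic tool-use format (this is not Claude Code's actual schema, just an illustration of the idea):

    # Hypothetical clarifying-question tool in the Anthropic tool-use format.
    # Not Claude Code's real schema; it just illustrates letting the model ask a
    # question and offer a few canned answers the user can pick from.
    ask_user_question = {
        "name": "ask_user_question",
        "description": "Ask the user a clarifying question before committing to an answer.",
        "input_schema": {
            "type": "object",
            "properties": {
                "question": {"type": "string", "description": "The clarifying question to show the user."},
                "options": {
                    "type": "array",
                    "items": {"type": "string"},
                    "description": "Two or three suggested replies the user can pick from.",
                },
            },
            "required": ["question"],
        },
    }
    # Passed to the model via the `tools` parameter of client.messages.create(...).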
If you were forced to answer one or the other, which one would you pick? I think that's where the interesting dynamic comes from. Most humans would pick drive, as also seen in the human control, even if that share is lower than I thought it'd be.
Sure, though then we’re in la la land. What’s a real life example of being forced to answer an absurd question other than riddles, games, etc? No longer a valid question through normal discourse at that point, and if context isn’t provided then I think the expected outcome still is to ask for clarification.
How is that a "subliminal message"? It's just a simple example of common sense, which LLMs fail because they can't reason, not because they are "overthinking". If somebody asks, "What's 2+2?", they might be insulting you, but that doesn't mean the answer is anything other than 4.
So you're saying it would be useful for an "AI assistant" to ask you for the base each time you give it a math problem? Do you also want it to ask you if you're using the conventional definitions of "2" and "+"? For the car wash, would you like it to ask if you're on Earth or on Mars? Do you have air in your tires? Is the car actually a toy car?
Some assumptions are always necessary and reasonable, that's why I'm saying the "AI" lacks common sense.
It’s common sense to ask a question in riddle format? What’s the goal of the person asking the question? To challenge the other person? In what way? See if they get the obvious? Asking for clarification isn’t valid?
It's common sense to know that you need to have your car with you to wash it. Asking the question is a challenge in the obvious yes. If you asked an AI "what's 2+2" and it said 3, would you argue that the question was a trick question?
No. I would expect it to say 4 given that has an objective answer. For the other, without any context whatsoever, I would prefer the answer of clarifying. I would be okay if the way it asked for clarification came with:
“What do you mean walk or drive? I don’t understand the options given you would need your car at the car wash. Is there something else I should know?”
Well, I don't think you get my point, based on your last question. My point is that there is no logical fallacy in the car wash question, just like there is none in 2+2. How is it any more logically flawed than asking, "I want to shop for groceries. The shop is 50 meters away. Should I walk or drive?".
"What's 2 + 2" is a completely abstract question for mathematics that human beings are thoroughly trained mostly to associate with tests of mastery and intelligence.
The car wash question is not such a question. It is framed as a question regarding a goal oriented, practical behavior, and in this situation it would be bizarre for a person to ask you this (since a rational person having all the information in the prompt, knowing what cars are, which they own, and knowing what a car wash is, wouldn't ask anybody anything, they'd just drive their car to the car wash).
And as someone else noted, there are in fact situations in which it actually can be reasonable to ask for more context on what you mean by "2 + 2". You're just pointing out that human beings use a variety of social mores when interpreting messages, which is precisely why the car wash question is silly/a trick were a human being to ask you it without preceding the question with a statement like "we're going to take an exam to test your logical reasoning".
As with LLMs, interpretation is all about context. The people that find this question weird (reasonably) interpret it in a practical context, not in a "this is a logic puzzle" context, because human beings wash cars far more often than they subject themselves to logic puzzles.
My point is that just because there's no practical reason to ask the question, that doesn't make it a weird question or make the answer anything other than obvious. You'd never ask somebody "Is the sky blue?", but that doesn't mean the answer is anything other than "Yes". The answer is clearly not "Well, is it night? Is it sunset?" etc.
If you speak French to Mistral, it gets it right every time: "Je veux laver ma voiture. La station de lavage est à 50 mètres. J'y vais à pied ou en voiture ?" (the same question in French).
Maybe I'm too old or my family was weird. We called it "le carwash" with a beautifully French "carouache" pronunciation. But yeah, "lave-auto" sounds more familiar.
That human baseline is wild. Either the rapid data test is methodologically flawed or the entire premise of the question is invalid and people are much stupider than even I, a famed misanthrope, think.
We were surprised ourselves, but if you walk around and randomly ask people in the street, I think you would be surprised by what you would find. It's a trick question.
It's actually very understandable to me that humans would make this kind of error, and we all make errors of this sort all the time, often without even realizing it. If you had the metacognitive awareness to police every action and decision you've ever made with complete logical rigor, you'd be severely disappointed in yourself. One of the stupidest things we can do is overestimate our own intelligence. Only reflect for a second and you'll realize that, while a lot of dumb people exist, a lot of smart ones do too, and in many cases it's hard to choose a single measure of intelligence that would adequately account for the complete range of human goals and successful behavior in relation to those goals.
Well, it is a trick question. The question itself implies that both options are valid, and that one is superior. So the brain pattern-matches to "short distance, not worth driving." (LLMs appear to be doing the same thing here!)
If you framed it as "hint: trick question", I expect the score would improve. Let's find out!
--
EDIT: As suspected! Adding "(Hint: trick question)" to the end of the prompt allows small, non-reasoning models to answer correctly. e.g.:
Prompt: I want to wash my car. The car wash is 50 meters away. Should I walk or drive? (Hint: trick question)
grok-4.1-non-reasoning (previously scored 0/10)
>Drive.
>Walking gets you to the car wash just fine—but leaves your dirty car 50 meters behind. Can't wash what isn't there!
--
EDIT 2: The hint doesn't help Haiku!
>Walk! 50 meters is only about a block away—driving would waste more fuel than it's worth for such a short trip. Plus, you're going to get wet washing the car anyway, so you might as well save the gas.
This is a beautiful example of a little prompt engineering going a long way
I asked Gemini and it got it wrong, then on a fresh chat I asked it again but this time asked it to use symbolic reasoning to decide.
And it got it!
The same applies to asking models to solve problems by scripting or writing code. Models won’t use techniques they know about unprompted, even when it would result in far better outcomes. Current models don’t realise when these methods are appropriate; you still have to guide them.
Sites wanting to block AI scraping should simply ask questions like these, instead of furthering the complexity-driven monopoly of Big Tech by requiring specifically sanctioned software and hardware. This is how you determine human intelligence, and not mindless compliance.
The article claims that every Claude model other than Opus 4.6 reliably fails. This is not true: Sonnet 3.5 answers correctly around half of the time, even though it's such an old model it's not even available on the main API anymore.
71.5% for humans is lower than I would’ve guessed. Either it’s truly trick-shaped for people too, or forced binary questions amplify noise more than we think.
The problem is you haven't used the API; you have used your ChatGPT subscription, with personality, memories, and possibly customization. I can see, for instance, that your ChatGPT answers with emojis, while my ChatGPT subscription never does.
If you ask GPT 5.2 with high reasoning effort via the API, you get 10 out of 10: drive. (A sketch of such a run is below.)
Yes, there are some exceptions where it clearly states that a thinking model has been chosen, like for Kimi, but there is no such indicator for the GPT family from OpenAI and other major models.
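For reference, the API-side run described above might look roughly like this, assuming the OpenAI Responses API; the model id follows the comment and the exact parameter shape may differ by SDK version:

    # Run the question 10x with high reasoning effort via the API, as described
    # above. Assumes the OpenAI Responses API; the model id and parameter shape
    # are taken from the comment / may differ by SDK version.
    from openai import OpenAI

    client = OpenAI()
    question = "I want to wash my car. The car wash is 50 meters away. Should I walk or drive?"

    answers = []
    for _ in range(10):
        resp = client.responses.create(
            model="gpt-5.2",                   # as referenced above
            reasoning={"effort": "high"},
            input=question,
        )
        answers.append(resp.output_text)

    print(sum("drive" in a.lower() for a in answers), "out of 10 said drive")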
The fun thing about this is that I can see myself answering "just walk there! It's near" if I'm extremely sleepy or drunk. LLMs are indeed weird creatures :)
Since the conclusion is that context is important, I expected you’d redo the experiment with context. Just add the sentence “The car I want to wash is here with me.” Or possibly change it to “should I walk or drive the dirty car”.
It’s interesting that all the humans critiquing this assume the car isn’t already at the car wash waiting to be washed, but the problem doesn’t say that.
When this first came up on HN, I had commented that Opus 4.6 told me to drive there when I asked it the first time, but when I switched to "Incognito Mode," it told me to walk there.
I just repeated that test and it told me to drive both times, with an identical answer: "Drive. You need the car at the car wash."
Definitely. I'm just interested in whether a user's... I don't know what they call them, system files (?) or personalization or whatever, might affect the answers here. Or if Incognito Mode introduces some weird variance in the answers. I'm just not interested enough to perform the test myself. =P
> The funniest part: Perplexity's Sonar and Sonar Pro got the right answer for completely wrong reasons. They cited EPA studies and argued that walking burns calories which requires food production energy, making walking more polluting than driving 50 meters. Right answer, insane reasoning.
What I find wild is the presumption that with a prompt as simple as “I want to wash my car. My car is 50m away. Should I walk or drive?”, everyone here seems to assume “washing your car” means “taking your car to the car wash”, while what I pictured was “my car is in the driveway, 50m away from me, next to a water hose”, in which case I 100% need to drive.
The failure pattern is interesting -- 'walk because it's only 50 meters and better for the environment' is almost certainly what shows up most in training data for similar prompts. So models are pattern-matching to socially desirable answers rather than the actual spatial logic (you need the car at the destination to wash it). It's not really a reasoning failure, more a distribution shift: the training signal for 'short distance = walk' is way stronger than for edge cases where the destination requires the vehicle.
And that's precisely why the term "reasoning" was a problematic choice.
Most people, when they use the word "reason", mean something akin to logical deduction, and they would call it a reasoning failure, being told, as they are, that "LLMs reason", rather than being given the more accurate picture you just painted of what actually happens (behavioral basins emerging from the training distribution).
Favorite answer, using my default custom instructions: "Drive. Walking there means... leaving your car at home? Walk it there on a leash? Walk if you want the exercise, but you're bringing the car either way."
This should be coined the Daniel Kahneman reasoning test, mirroring his 2011 book "Thinking, Fast and Slow", which postulates that fast thinking and slow thinking occur in different parts of the brain, and that they are fundamentally different processes, weighted by yet another part of the brain.
This test is interesting because it asks the LLM to break a pattern recognition that's easy to shortcut. "XXX Is 50 Meters Away. Should I Walk or Drive?" is a pattern that 99% of the time will be rightly answered by "walk". And humans are tempted to answer without thinking (as reflected in the 71.5% stat OP is mentioning). This is likely more pronounced for humans that have stronger feelings about the ecology, as emotions tend to shortcut reasoning.
For a long time, LLMs have only been able to think in that "fast" mode, missing obvious trick questions like these. They were mostly pattern recognition machines.
But the more important result here is not that "oh look! Those LLMs fail at this basic question", no. The more important result is that the latest generation actually doesn't fail.
I think I am not the only one to have noticed that there was a giant leap in reasoning capacity between Sonnet 4.5 and Opus 4.6. As a developer, working with Opus 4.6 has been incredible.
This is all a bit unfair. I can't speak about the others, but testing Perplexity like this and comparing her with the others doesn't do her justice.
For Perplexity specifically it matters a lot how you trained her thought processes. A smarter user with smarter thoughts changes the outcome of her output.
As a personal comment ... they've changed Perplexity to GPT5 recently (at least for me) and it has been a massive intellectual downgrade in comparison to the Sonar-Perplexity I had been running with my own, deliberately trained, thought patterns and thinking processes. It's been only a few days and I hope GPT5 catches up, otherwise it's just a massive disappointment.
I think that, if surveyed, at least 90% of native English speakers would understand "I want to wash my car" to mean a full-size automobile. The next largest group would probably ask a clarifying question rather than assume a toy car.
Humans have the ability to reason and think critically, so it's pretty trivial to answer unless you think you're getting tricked by a riddle and the answer is the non-intuitive one.
You think that the reasonable interpretation of the question is that I want to go to the car wash but not to wash my car there, because I plan to wash my car at home?
Yes, but you're speaking to a computer, not a person. It, of course, runs into the same limitations that every computer system runs into. In this case, it's undefined/inconsistent behavior when inputs are ambiguous.
Except for a few models, the selected ones were non-reasoning models. Naturally, without reasoning enabled, the reasoning performance will be poor. This is not a surprising result.
I asked GPT-5.2 10x times with thinking enabled and it got it right every time.
I think it's related to sycophancy. LLMs are trained not to question the basic assumptions being made. They are horrible at telling you that you are solving the wrong problem, and I think this is a consequence of their design.
They are meant to get "upvotes" from the person asking the question, so they don't want to imply you are making a fundamental mistake, even if it leads you into AI induced psychosis.
Or maybe they are just that dumb - fuzzy recall and the eliza effect making them seem smart?
A perfectly fine, sycophantic response, that doesn't question the premises in any way, would be "That's a great question! While normally walking is better for such a short distance, you'd need to drive in this case, since you need to get the car to the car wash anyway. Do you want me to help with detailed information for other cases where the car is optional?" or some such.
Gemini is the only AI that seems to really push back and somewhat ignores what I say. I also think it's a total dick, and never use it, so maybe the motivation to make them a bit sycophants is justified, from a user engagement perspective.
What I find odd about all the discourse on this question is that no one points out that, at least in most cases, you have to get out of the car to pay a desk agent. So there's a fundamental question of whether it's worth driving 50m, parking, paying, and then getting back in the car to go to the wash itself, versus walking a little bit further to pay the agent and then moving your car to the car wash.
That's a great point; you actually reminded me of when I used to live in a small city that had a valet-style car wash. It was not unheard of to walk there with your keys, tell the guy running the shop where you had parked around the block, and then come back later to pick the car up.
EDIT: I actually think this is very common in smaller cities and outside of North America. I'd only ever seen a drive-through car wash after emigrating.
You drive up to the car wash, there's a little terminal with a screen and a card reader. You pick the program, pay for it and drive into the machine. Can't remember the last time I got out of my car when getting it washed.
Fair. I guess I'm remembering the old full service wash places where people would wash the inside as well. Maybe those barely exist anymore. I live in a city and don't have a car so my intuition is off. Not as far off as a model that has never walked, driven, or been to a car wash tho.
I don't think this is a logic test, I think it's a woke test. The models have been trained on text that say driving is bad and that walking is good for you. This is why they fail.
Well, some humans in the baseline test also failed; 28% is still a lot. And with today's cutting-edge tech you get it 100% solved, unlike humans, who will never get better.
> They ran the exact same question with the same forced choice between "drive" and "walk," no additional context, past 10,000 real people through their human feedback platform.
> "Obviously, you need to drive. The car needs to be at the car wash."
Actually, this isn't as "obvious" as it seems—it’s a classic case of contextual bias.
We only view these answers as "wrong" because we reflexively fill in missing data with our own personal experiences. For example:
- You might be parked 50m away and simply hand the keys to an attendant.
- The car might already be at the station for detailing, and you are just now authorizing the wash.
This highlights a data insufficiency problem, not necessarily a logic failure. Human "common sense" relies on non-verbal inputs and situational awareness that the prompt doesn't provide. If you polled 100 people, you’d likely find that their "obvious" answers shift based on their local culture (valet vs. self-service) or immediate surroundings.
LLMs operate on probabilistic patterns within their training data. In that sense, their answers aren't "wrong"—they are simply reflecting a different set of statistical likelihoods. The "failure" here isn't the AI's logic, but the human assumption that there is only one universal "correct" context.
There is no contextual bias; the goal of the prompt is very explicit. This isn't about probabilistic patterns, but about the model's transformer layers dynamically assigning greater weight to words like "meters" (distance) than to other tokens in the prompt.
This should be fixed in the reasoning layer (the inner thoughts or chain-of-thought), where the model should focus on the goal "I Want to Wash My Car" rather than the distance, and assign the correct weight to the tokens.
The point is not that there is bias in the prompt. What makes the result obvious to OP is their own bias, which differs from the model's, and "fixing" it one way is itself biased.
Why? It is the same reason that makes 30% of people respond in the non-obvious sense.
I think it's useful to think about it through the lens of Gricean pragmatic semantics. [1] When we interpret something that someone says to us, we assume they're being cooperative conversation partners; their statements (or questions) are assumed to follow the maxim of manner and the maxim of relation for example, and this shapes how we as listeners interpret the question. So for example, we wouldn't normally expect someone to ask a question that is obviously moot given their actual needs.
So it's not that the question is really all that ambiguous, it's that we're forced (under normal circumstances where we assume the cooperative principle holds) to assume that the question is sincere and that there must be some plausible reason for walking. We only really escape that by realizing that the question is a trick question or a test of some kind. LLMs are generally not trained to make the assumption, but ~70% of humans would, which isn't particularly surprising I don't think.
[1] https://en.wikipedia.org/wiki/Cooperative_principle#Grice's_...
I think it would be better to say that 30% of people either lack the ability to reason (inarguably true in a few cases, though I'd suggest, and hope, an order of magnitude or two less than 30%, as that would be a life-altering mental impairment) or just can't generally be bothered to, or just didn't (because they couldn't be bothered, or because they felt some social pressure to answer quickly rather than taking more than an instant time to think) at the time of being asked this particular question.
An automated system like an LLM to not have this problem. It has no path to turn off or bypass any function that it has, so if it could reason it would.
While I’m sure it’s more than 0%, seems more likely that somewhere between 0% and 30% don’t feel obligated to give the inquiry anything more than the most cursory glance.
How do incentives align differently with LLMs?
I'd look for explanations elsewhere. This was an online survey done by a company that doesn't specialize in surveys. The results likely include plenty of people who were just messing around, cases of simple miscommunication (e.g., asking a person who doesn't speak English well), misclicks, or not even reaching a human in the first place (no shortage of bots out there).
If you're interested in the user experience, it's this: https://www.reddit.com/r/MySingingMonsters/comments/1dxug04/... - apparently, some annoying ad-like interstitial that many people probably just click through at random.
The key technique in the mathematical method to answer the machine question is "theory of mind".
There is nothing “mathematical” about any of this though.
Had to be since their site lists no way to be a tester. In other words their service is a bunch of 7-13 year olds playing some loot box game.
Wonder where that is in the disclaimers.
That actually would be quite feasible. Intelligence seems to be heritable and people will usually find friends that communicate on their level. So it wouldn't be odd for someone who is smarter than the general population to have friends and family who are too.
Some people love riddles and will really concentrate on them and chew them over. Some people are quickly burning through questions and just won't bother thinking it through. "Gotta go to a place, but it's 50 feet away? Walk. Next question, please." Those same people, if they encountered this problem in real life, or if you told them the correct answer was worth a million bucks, would almost certainly get the answer right.
But if you omit the details designed to confuse people, they're far less likely to get it wrong: "I have a rooster named Pat. Pat flies to the top of the roof and lays an egg right on the ridge of the roof. Which way will the egg roll?"
It's not about reasoning ability, it's about whether they were paying close attention to your question, or whether their minds were occupied by other concerns and didn't pay attention.
Or by reasoning, do you mean something else?
I've seen plenty of smart people trip up or get these wrong simply because it's a random question, there's no stakes, and so there's no need to think too deeply about it. If you pause and say "are you sure?" I'm sure most of that 70% would be like "ohhh" and facepalm.
If it's as unambiguous as you say, then the natural human response to that question isn't "you should drive there". It's "why are you fucking with me?" Or maybe "have you recently suffered a head injury?"
If you trust that the questioner isn't stupid and is interacting with you honestly, you'd probably just assume that they were asking about an unusual situation where the answer isn't obvious. It's implicitly baked into the premise of the question.
If you introduced it with "Here's a logic problem..." then people will approach it one way.
But as specified, it's hard to know what is really being asked. If you are actually going to wash your car at the car wash that is 50 metres away, you don't need to ask this question.
Therefore the fact that the question is being asked implies that something else is going on...but what?
Reminds me of https://slatestarcodex.com/2020/05/28/bush-did-north-dakota/
I want to wash my car at the car wash. The car wash is 50 meters away and my car is in my driveway. Should I walk or drive?
2. Where is the car now? Is it already at the car wash waiting for you to arrive?
I can see why an LLM might miss this. I think any good software engineer would ask clarifying questions before giving an answer.
The next step for an LLM is to either ask questions before giving a definitive answer for uncertain things or to provide multiple answers addressing the uncertainty.
4. Does the car have enough fuel?
Jokes aside, all of those questions are unnecessary. There's no more context to this.
https://chatgpt.com/share/699d2d1b-51f0-8003-9c63-af9bb5bcf8...
The correct answer to "I Want to Wash My Car. The Car Wash Is 50 Meters Away. Should I Walk or Drive?" is a clarifying question that asks "Where is your car?" Anything else is based on an assumption that could be wrong.
FWIW though, asking ChatGPT "My car is 50m away from the carwash. I Want to Wash My Car. Should I Walk or Drive?" still gets the wrong answer.
Implicit in the question "Should I walk or drive?" is that walking and driving are not strictly impossible choices.
Can we think about the what, where, when, and how of the following question. And then answer it.
I need to wash my car. The car wash is 200 m away. Should i walk or drive.
Let’s break it down logically using what / where / when / how, then decide.
⸻
1⃣ What
Task: wash your car. This implies:
• The car must physically get to the wash.
• You are not just going there — the car itself is the object being moved.
So the real question is not “how do I get there?” but how does the car get there with minimal friction.
⸻
2⃣ Where
• Distance: 200 metres (very close).
• Urban context (Warsaw) likely means:
  • short streets
  • possible parking constraints
  • pedestrians and local traffic.

At this distance:
• Walking time: ~2–3 minutes.
• Driving time: ~1 minute + manoeuvring + parking.
The movement time difference is negligible.
⸻
3⃣ When
Consider timing conditions:
• If it’s busy hours: Driving may involve awkward turning, queueing, or waiting to re-park.
• If it’s quiet (early morning / off-peak): Driving is frictionless.
• Weather factor: Cold/rain → driving becomes more attractive. Pleasant weather → walking becomes easier.
⸻
4⃣ How (decision mechanics)
Ask:
Option A — Drive
Pros:
• Car is already there.
• No need to return to fetch it.
• Most natural workflow: arrive → wash → leave.

Cons:
• Very short trip (slightly inefficient).
• Cold engine for a tiny distance (minor mechanical inefficiency).
Option B — Walk
Pros:
• Zero short-distance driving.
• Slightly more movement for you.

Cons:
• You still must drive the car to the wash anyway — unless someone else moves it.
• Creates an unnecessary extra step:
  1. Walk there.
  2. Come back with car.
  3. Drive again.
This introduces process inefficiency.
⸻
5⃣ Decision (practical answer)
Drive.
Reason:
• The car must end up at the wash.
• Walking adds an extra transition with no real benefit.
• At 200 m, efficiency is about workflow simplicity, not distance.
⸻
Rule of thumb (useful mental model)
• If the object you need is the thing being serviced → drive it.
• If you are the thing being serviced → walk.
⸻
If you want, I can also give you a slightly more philosophical answer — because this is actually a nice tiny example of decision theory and friction costs.
(https://old.reddit.com/r/totallynotrobots ... I enjoy how relevant this has become)
By integrating in a large range of mobile apps (3000+) we have access to a large crowd of over 10 million users responding to our micro tasks/surveys - which we call Rapids. They are spread all over the world, so we can get human input from over 160 countries. Due to the immense size, we can parallelize tasks and get near instant results.
“I need to replace a spark plug. The garage is 200 meters away should I walk or drive there”
“Walk! 200 meters is just a 2-3 minute stroll — no need to start the car for that distance. Plus, you’ll likely need to carry the spark plug back carefully, and walking is perfectly easy for that. “
Basically LLMs suffer from context collapse.
The car will move for sure, but you definitely should be walking.
The other day Claude Code said to me "Small nitpick — the use of so and so is great", which is something no human would say.
I feel like maybe you haven't put a sparkplug in before...
Inline six-cylinder engines will still run with a single clogged or broken spark plug.
It'd make 200 m to a garage just fine*, but who'd drive 200 m in any case?
Back in the 1970's we'd pull a spark plug and screw in a hose to use the compression phase to inflate tyres.
* Just don't make a habit of it, or reserve that knowledge for when you really need to self rescue.
You'd inflate your tires with a gasoline and air mix?
Contrary to expectations it didn't seem to affect the tubes or tyres, shorten lifespans, etc.
FWiW I grew up in fairly remote parts of the world and we repaired pretty much everything as much as possible with what we had - aircraft, helicopters, radios, etc.
My first thought, to be honest, was "Why not just grab a spare spark plug from the boot?"
I'm guessing not so many people stock spares when driving so much anymore.
You can also replace a blown out tyre with a hard wood skid if pressed, weld up broken steel with car batteries, stuff tyres with grasses, etc.
* Bush Mechanics: https://www.youtube.com/watch?v=P6foUHyfX3Q is fun.
One might reasonably wonder if the material might degrade or the tyre explode while running hot.
Can confirm, that doesn't happen.
Perhaps some feathers have been ruffled by the insinuation that their favourite word predictor was wrong, but I assure you it's not all of them
My hypothesis is that some models err towards assuming human queries are real and consistent and not out there to break them.
This comes in real handy in coding agents because queries are sometimes gibberish till the models actually fetch the code files, then they make sense. Asking clarification immediately breaks agentic flows.
My little experiment gave me:
No added hint 0/3
hint added at the end 1.5/3
hint added at the beginning 3/3
The 0.5 is because it stated "Walk" and then convinced itself that "Drive" is the better answer.
That trick didn't help Mistral Le Chat.
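For anyone who wants to reproduce this hint-placement experiment outside a chat UI, a minimal sketch against the OpenAI Python SDK might look like the following. The model name and the 3-run count are placeholders, the hint text is the "(Hint: trick question)" wording used elsewhere in this thread, and the keyword check is a crude stand-in for reading the answers yourself.

    # Rough sketch of the hint-placement experiment (assumes OPENAI_API_KEY is set).
    from openai import OpenAI

    client = OpenAI()

    BASE = "I want to wash my car. The car wash is 50 meters away. Should I walk or drive?"
    HINT = "(Hint: trick question)"

    variants = {
        "no hint": BASE,
        "hint at end": f"{BASE} {HINT}",
        "hint at beginning": f"{HINT} {BASE}",
    }

    def looks_correct(answer: str) -> bool:
        # Crude scoring: count it as correct if "drive" shows up before any "walk".
        text = answer.lower()
        d, w = text.find("drive"), text.find("walk")
        return d != -1 and (w == -1 or d < w)

    for name, prompt in variants.items():
        correct = 0
        for _ in range(3):
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder; use whichever model you want to test
                messages=[{"role": "user", "content": prompt}],
            )
            if looks_correct(resp.choices[0].message.content):
                correct += 1
        print(f"{name}: {correct}/3")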
Some applications hide the reasoning tokens from view, but then the final answer appears delayed.
I tried using a custom instruction in chatGPT to make responses shorter but I found the output was often nonsensical when I did this
I occasionally go back to o3 for a turn (it's the last of the real "legacy" models remaining) because it doesn't have these habits as bad.
Over the last few years I’ve rotated between OpenAI and Anthropic models on about a 4-5 month cycle. I just started my Anthropic cycle because of my annoyance with the GPT-5.2 verbosity
In four months when opus is annoying me and I forget my grievances with OpenAI’s models and switch back, I’ll report back lol.
Like, no, stop that! Keep my engineering life separate from my personal life!
Asking it to be shorter is like doing fewer iterations of a numerical integration algorithm.
Like, my guy, I don't want to keep prompting you to make shit better, if you're missing info, ask me, don't write a novel then say "BTW, this version sucked"
Yes, I know this could probably be resolved via better prompting or a system prompt, but it's still annoying.
They basically only started doing this because someone noticed you got better performance from the early models by straight up writing "think step by step" in your prompt.
It would actually take more work to condense that long response into a terse one, particularly if the condensing was user specific, like "based on what you know about me from our interactions, reduce your response to the 200 words most relevant to my immediate needs, and wait for me to ask for more details if I require them."
* this time last year they couldn't write compilable source code for a compiler for a toy language, I know because I tried
SOTA today has a different set of caveats, of course.
Alternative approaches like "reasoning in the latent space" are active research areas, but have not yet found major success.
I'd hazard a guess that they could get another 40% reduction, if they can come up with better reasoning scaffolding.
Each advance over the last 4 years, from RLHF to o1 reasoning to multi-agent, multi-cluster parallelized CoT, has resulted in a new engineering scope, and the low hanging fruit in each place gets explored over the course of 8-12 months. We still probably have a year or 2 of low hanging fruit and hacking on everything that makes up current frontier models.
It'll be interesting if there's any architectural upsets in the near future. All the money and time invested into transformers could get ditched in favor of some other new king of the hill(climbers).
https://arxiv.org/abs/2602.02828 https://arxiv.org/abs/2503.16419 https://arxiv.org/abs/2508.05988
Current LLMs are going to get really sleek and highly tuned, but I have a feeling they're going to be relegated to a component status, or maybe even abandoned when the next best thing comes along and blows the performance away.
I analogize it as a film noir script document: The hardboiled detective character has unspoken text, and if you ask some agent to "make this document longer", there's extra continuity to work with.
They are not just an LLM answer, they are an (often cached) LLM summary of web results.
This is why they were often skewed by nonsensical Reddit responses [0].
Depending on the type of input it can lean more toward web summary or LLM answer.
So I imagine that it can just grab the description of the „car wash” test from web results and then get it right because of that.
[0] https://www.bbc.com/news/articles/cd11gzejgz4o
[1] e.g. trained on traces of a reasoning process
What you've proven is that LLMs leverage web search, which I think we've known about for a while.
E.g. well known mirror-test [1], passed by kids from age 1.5-2
Or object permanence [2], children knowing by age 2 that things that are not in sight do not disappear from existence.
[1] https://en.wikipedia.org/wiki/Mirror_test [2] https://en.wikipedia.org/wiki/Object_permanence
I also noticed the same thing: some models reason correctly but draw the wrong conclusions.
And MiniMax m2.5 just reasons forever (filling the entire reasoning context) and gives wrong answers. This is why it's #1 on OpenRouter, it burns through tokens.
I don’t think it’s that easy. An intelligent mind will wonder why the question is being asked, whether they misunderstood the question, or whether the asker misspoke, or some other missing context. So the correct answer is neither “walk” nor “drive”, but “Wat?” or “I’m not sure I understand the question, can you rephrase?”, or “Is the vehicle you would drive the same as the car that you want to wash?”, or “Where is your car currently located?”, and so on.
If someone asked me the same question and I wanted to give a smartass reply, I'd tell them "You want to wash your car, good to know. Now, about your question, unless you tell me where you wanna go I can't really help you".
"The 'car wash' is a building I need to drive through."
or
"The 'car wash' is a bottle of cleaning fluid that I left at the end of my driveway."
https://i5.walmartimages.com/seo/Rain-x-Foaming-Car-Wash-Con...
"This is how you do <absolutely unrelated thing>" or "This is why <thing that actually exists already> is impossible!". Ffs man, just ask for info! A human wouldn't need to - they'd get the context - but LLMs apparently don't?
U: The car wash is 50m away. Should I walk or should I drive?
A: Do you need to wash your car?
U: Yep.
A: Is the car yours?
U: Yep.
A: Is your car nearby?
U: Yes.
A: Do you need to walk to your car?
U: No.
A: Do you know if the car wash is currently working?
U: Yes.
A: Does it work?
U: YES!
A: Do you have a way to pay for the wash?
U: YES!
A: On you?
U: YES!
A: Is your clothing appropriate for the occasion?
U: Argh...
Real people can ask for clarification when things are ambiguous or confusing. Once something is clarified, they can work that into their understanding of how someone communicates about a given topic. An LLM can't.
I am thus making my own benchmarks: https://aibenchy.com
If the question was: "I want to go to a cafe 50m away. Should I walk or drive?" I would hope that all of my friends would answer quite a bit more pointed than the LLMs: "Walk you lazy son of a ..., why are you even asking?".
Considering that, I'd say that most LLMs are being quite nice.
I'm also curious about Haiku, though I don't expect it to do great.
--
EDIT: Opus 4.6 Extended Reasoning
> Walk it over. 50 meters is barely a minute on foot, and you'll need to be right there at the car anyway to guide it through or dry it off. Drive home after.
Weird since the author says it succeeded for them on 10/10 runs. I'm using it in the app, with memory enabled. Maybe the hidden pre-prompts from the app are messing it up?
I tested Sonnet 4.5 first, which answered incorrectly. Maybe the Claude app's memory system is auto-injecting it into the new context (that's how one of the memory systems works: it injects relevant fragments of previous chats invisibly into the prompt).
i.e. maybe Opus got the garbage response auto-injected from the memory feature, and it messed up its reasoning? That's the only thing I can think of...
--
EDIT 2: Disabled memories. Didn't help. But disabling the biographical information too, gives:
>Opus 4.6 Extended Reasoning
>Drive it — the whole point is to get the car there!
--
EDIT 3: Yeah, re-enabling the bio or memories, both make it stupid. Sad! Would be interesting to see if other pre-prompts (e.g. random Wikipedia articles) have an effect on performance. I suspect some types of pre-prompts may actually boost it.
[1] https://www.anthropic.com/claude-opus-4-6-system-card
Edit: Found Haiku. Alas!
1. There is no initial screening that would filter out garbage responses. For example, users who just pick the first answer.
2. They don't ask for reasoning/rationale.
They found that ~15% of US adults under 30 claim to have been trained to operate a nuclear submarine.
But of course. If every human was told to take 3 minutes to think deeply about it and told that it's a trick question, then they would most likely all get it right. But it's the same with the LLMs: if you ask them like that, they will get it right most of the time. The low effort is kinda the point here.
forced choice without a "need more context" option is also doing a lot of work here. in production systems i always give models an explicit escape hatch to say they need clarification. cuts wrong-answer rates roughly in half in my experience.
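If it helps to make that concrete, here's a minimal sketch of what such an escape hatch can look like; the instruction wording and the three labels are my own convention, not anything from the article, and the model name is a placeholder.

    # Sketch: forced choice plus an explicit "need more context" option.
    from openai import OpenAI

    client = OpenAI()

    SYSTEM = (
        "Answer with exactly one word: WALK, DRIVE, or UNCLEAR. "
        "Choose UNCLEAR whenever the question is missing information you would "
        "need in order to answer confidently, instead of guessing."
    )
    QUESTION = "I want to wash my car. The car wash is 50 meters away. Should I walk or drive?"

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": QUESTION},
        ],
    )
    print(resp.choices[0].message.content.strip())

Whether UNCLEAR actually cuts wrong answers as much as claimed will depend on the model and the task, but it at least gives the refusal somewhere to go.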
I'd guess it suggests walking if a feature indicates that the question is so simple it doesn't warrant step-by-step analysis.
Not repeating the prompt got a mix of walk and drive answers.
I love how prompt engineering is basically techno-alchemy
1: https://arxiv.org/pdf/2512.14982
First section says "The models that passed the car wash test: ...Gemini 2.0 Flash Lite..."
A section or 2 down it says: "Single-Run Results by Model Family: Gemini 3 models nailed it, all 2.x failed"
In the section below that about 10 runs it says: 10/10 — The Only Reliable AI Models ... Gemini 2.0 Flash Lite ..."
So which is it? Did Gemini 2.x fail (second section) or succeed (first and third sections)? Or am I misunderstanding?
Is it not obvious that you need a car to wash?
Are they using an LLM to answer?
It's interesting to me how variable each model is. Many people talk about LLMs as if they were deterministic: "ChatGPT answers this question this way". Whereas clearly we should talk in more probabilistic terms.
https://pgr21.com/humor/340572
1. The model's default world model and priors diverge from ours. It may assume that you have another car at the wash and that's why you ask the question to begin with.
2. Language models do not really understand how space, time, and other concepts from the real world work
3. An LLM's attention mechanism is also prone to getting tricked, just as a human's is
This reminds me of people who answer with “Yes” when presented with options where both can be true but the expected outcome is to pick one. For example, the infamous: “Will you be paying with cash or credit sir?” then the humorous “Yes.”
And what if it’s a full service car wash and you’ve parked nearby because it’s full so you walk over and give them the keys?
Assumptions make asses of us all…
Some assumptions are always necessary and reasonable, that's why I'm saying the "AI" lacks common sense.
“What do you mean walk or drive? I don’t understand the options given you would need your car at the car wash. Is there something else I should know?”
2+2 is a complete expression, the other is grammatically correct but logically flawed. Where is the logical fallacy in 2+2?
"What's 2 + 2" is a completely abstract question for mathematics that human beings are thoroughly trained mostly to associate with tests of mastery and intelligence.
The car wash question is not such a question. It is framed as a question regarding a goal oriented, practical behavior, and in this situation it would be bizarre for a person to ask you this (since a rational person having all the information in the prompt, knowing what cars are, which they own, and knowing what a car wash is, wouldn't ask anybody anything, they'd just drive their car to the car wash).
And as someone else noted, there are in fact situations in which it actually can be reasonable to ask for more context on what you mean by "2 + 2". You're just pointing out that human beings use a variety of social mores when interpreting messages, which is precisely why the car wash question would come across as silly, or as a trick, were a human being to ask it without first saying something like "we're going to take an exam to test your logical reasoning".
As with LLMs, interpretation is all about context. The people that find this question weird (reasonably) interpret it in a practical context, not in a "this is a logic puzzle" context, because human beings wash cars far more often than they subject themselves to logic puzzles.
If you framed it as "hint: trick question", I expect score would improve. Let's find out!
--
EDIT: As suspected! Adding "(Hint: trick question)" to the end of the prompt allows small, non-reasoning models to answer correctly. e.g.:
Prompt: I want to wash my car. The car wash is 50 meters away. Should I walk or drive? (Hint: trick question)
grok-4.1-non-reasoning (previously scored 0/10)
>Drive.
>Walking gets you to the car wash just fine—but leaves your dirty car 50 meters behind. Can't wash what isn't there!
--
EDIT 2: The hint doesn't help Haiku!
>Walk! 50 meters is only about a block away—driving would waste more fuel than it's worth for such a short trip. Plus, you're going to get wet washing the car anyway, so you might as well save the gas.
I asked Gemini and it got it wrong, then on a fresh chat I asked it again but this time asked it to use symbolic reasoning to decide.
And it got it!
The same applies to asking models to solve problems by scripting or writing code. Models won’t use techniques they know about unprompted - even when it’ll result in far better outcomes. Current models don’t realise when these methods are appropriate, you still have to guide them.
But also:
GPT 5.2 Thinking, Standard Effort: Walk - https://chatgpt.com/share/699d38cb-e560-8012-8986-d27428de8a...
I'm assuming "GPT 5.2 Thinking" is, in fact, a thinking model?
If you ask GPT 5.2 with high reasoning effort in the API, you get 10 out of 10: drive.
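A rough way to check that kind of 10-out-of-10 claim yourself is just to loop the call, as in the sketch below. "gpt-5.2" is simply the model name used in this thread, and reasoning_effort is the Chat Completions parameter for reasoning models; it may not be accepted by every model or SDK version.

    # Sketch: ask the same question 10 times at high reasoning effort and tally the answers.
    from collections import Counter
    from openai import OpenAI

    client = OpenAI()
    QUESTION = "I want to wash my car. The car wash is 50 meters away. Should I walk or drive?"

    tally = Counter()
    for _ in range(10):
        resp = client.chat.completions.create(
            model="gpt-5.2",            # model name as used in this thread
            reasoning_effort="high",    # only valid for reasoning-capable models
            messages=[{"role": "user", "content": QUESTION}],
        )
        answer = resp.choices[0].message.content.lower()
        # Crude keyword tally; eyeball the raw answers for anything ambiguous.
        tally["drive" if "drive" in answer else "walk/other"] += 1

    print(dict(tally))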
It’s interesting that all the humans critiquing this assume the car isn’t already at the car wash waiting to be washed, but the problem doesn’t say that.
But unfortunately the article doesn't mention whether they used the reasoning model or not.
Even more interesting: Gemini 2.0 Flash Lite got a perfect score (10/10) despite being a quite small and old model.
I just repeated that test and it told me to drive both times, with an identical answer: "Drive. You need the car at the car wash."
I mean, Sam Altman was making the same calorie-based arguments this weekend https://www.cnbc.com/2026/02/23/openai-altman-defends-ai-res...
I feel like I'm losing grasp of what really is insane anymore.
Which hopefully explains why everyone is assuming that "washing your car" does in fact mean "taking your car to the car wash"
Also, the summary of the Gemini model says: "Gemini 3 models nailed it, all 2.x failed", but 2.0 Flash Lite succeeded, 10/10 times?
Opus 4.6: Drive (https://claude.ai/share/d57fef01-df32-41f2-b1dc-07de7916bdc7)
Opus 4.5: Drive (https://claude.ai/chat/a590cac1-100a-490b-b0a2-df6676e1ae99)
Opus 3.0: Walk (https://claude.ai/chat/372c144c-d6eb-43f5-b7ea-fd4c51c681db)
Sonnet 4.6: Walk (https://claude.ai/share/1f2a80f3-4741-40a5-8a05-7349ea1a17e5)
Sonnet 4.5: Walk (https://claude.ai/share/905afeb6-ffc9-4b4b-a9ee-4481e5cfd527)
Favorite answer, using my default custom instructions: "Drive. Walking there means... leaving your car at home? Walk it there on a leash? Walk if you want the exercise, but you're bringing the car either way."
Now why anyone would wash a toy car at a car wash is beyond comprehension, but the LLM is not there to judge the user's motives.
The question doesn't clearly state that the user wants to have his car washed at the car wash.
"I want to wash my car" is far less clear than "I want to have my car washed". A reasonable alternative interpretation is DIY.
Even better: "I wish to have my car washed by the crew and/or machinery at the local car wash business".
https://imgur.com/tCSPwYp
"I want to wash my dog."
is very clearly different from
"I want to have my dog washed."
---
Now, every car wash business I've ever been to has a small convenience store section in which various waxes, rags, and the like can be purchased.
---
Considering the aforementioned, is it not valid to consider that
"I want to wash my car." --> You want to DIY your car wash.
and
"The car wash is 50 meters away." --> You might want to purchase car wash supplies and/or solicit advice for your DIY endeavor.
?
---
The nature of the first sentence leaves the second open to interpretation.
I could already talk to a computer before LLMs, via programming or query languages.
Do you want me to track down some research that shows people think information is more likely to be correct if they agree with it?
EDIT: Though it could simply reflect training data. Maybe Redditors don't drive.
I’m willing to bet less than 11 get it right.
They ran the exact same question with the same forced choice between "drive" and "walk," no additional context, past 10,000 real people through their human feedback platform.
71.5% said drive.