What 81,000 People Actually Want From AI — Hope, Fear, and Everything In Between

Last December, tens of thousands of Claude users sat down for a chat with an AI interviewer. Not a survey. Not a multiple-choice form. An actual conversation, where they talked about how they use AI, what they dream it could make possible, and what keeps them up at night.

80,508 people across 159 countries and 70 languages participated. Anthropic claims this is the largest and most multilingual qualitative study ever conducted on the topic. I don’t doubt it.

What’s refreshing here is the approach. Instead of another abstract debate about AI risks and benefits from think tanks or pundits, Anthropic went straight to people who already use the stuff. These aren’t hypotheticals. These are lived experiences.

Hope and alarm aren’t enemies

The most interesting finding? Hope and fear don’t divide people into warring camps. They coexist inside each person.

A lawyer from Israel put it perfectly: “I use AI to review contracts, save time… and at the same time I fear: am I losing my ability to read by myself? Thinking was the last frontier.”

That tension runs through almost every response. People see AI as a tool that can lift them up or undercut them, sometimes both at once.

A freelancer in the US shared how Claude helped piece together a medical history that led to a proper diagnosis after nine years of misdiagnosis. That’s real. That matters.

Meanwhile, a technical support specialist in the US got laid off because their company replaced them with an AI system. That’s also real. That also matters.

What people actually want

Anthropic used Claude-powered classifiers to categorize what each person most wanted from AI. The top categories tell a clear story:

Professional excellence (18.8%) led the pack. People want AI to handle the grunt work so they can focus on higher-value stuff. A healthcare worker described getting 100-150 text messages per day from doctors and nurses. AI lifted the documentation burden, and suddenly they had more patience with nurses, more time for families. That’s not abstract efficiency. That’s better care.

Personal transformation (13.7%) came next. People use AI as a coach, a guide, even a model for emotional intelligence. One person from Hungary said: “AI modeled emotional intelligence for me… I could use those behaviors with humans and become a better person.” I find that both heartening and a little unsettling.

Life management (13.5%) and time freedom (11.1%) round out the top wants. People want AI to handle the mental load of schedules and chores so they can be present with family. A manager from Denmark said: “If AI truly handled the mental load… it would give me back something priceless: undivided attention.”

The fears are real too

Anthropic allowed multiple concern codes per person, which makes sense because people rarely have just one worry.

Job displacement is the obvious one. That technical support specialist who got laid off? That’s not an isolated story. A software engineer from South Korea put it bluntly: “Humanity has never dealt with something smarter than itself. We need to reflect on how to prepare for the AI age.”

But the fears go deeper than jobs. People worry about losing their own skills and judgment. They worry about dependency. They worry about what happens when AI gets things wrong and there’s no human in the loop to catch it.

The methodology matters

Anthropic used what they call “Anthropic Interviewer”: a version of Claude prompted to conduct conversational interviews. It asked a fixed set of questions, then adapted its follow-ups based on each response. That bridges the usual tradeoff in qualitative research between depth and volume: normally you choose either deep interviews with a handful of people or shallow surveys with thousands. Here they got both.

They then used Claude-powered classifiers to categorize responses across dimensions like what people want, whether they’re getting it, what they fear, and their overall sentiment. They also used Claude to pull representative quotes, then manually reviewed them for privacy.
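To make the pipeline shape concrete, here is a minimal sketch of the multi-label concern coding described above. The real study used Claude-powered classifiers; a toy keyword matcher stands in here, and the category names and keywords are my own illustrative assumptions, not the study's actual codebook.

```python
# Toy stand-in for the study's multi-label concern coding.
# Assumption: category names and keywords below are invented
# for illustration; the real study used an LLM classifier.

CONCERN_KEYWORDS = {
    "job_displacement": ["laid off", "replace", "lost my job"],
    "skill_atrophy": ["losing my ability", "dependency", "deskill"],
    "errors_unchecked": ["gets things wrong", "no human in the loop"],
}

def code_concerns(response: str) -> list[str]:
    """Return every concern code whose keywords appear in the response.
    Multiple codes per person are allowed, mirroring the study design."""
    text = response.lower()
    return [
        code
        for code, keywords in CONCERN_KEYWORDS.items()
        if any(kw in text for kw in keywords)
    ]

sample = ("My company replaced my team with an AI system, "
          "and I worry I'm losing my ability to think on my own.")
print(code_concerns(sample))  # → ['job_displacement', 'skill_atrophy']
```

The point of the multi-label design is visible even in this toy: one response can land in several buckets at once, which is why Anthropic's per-category percentages need not sum to 100.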

Is this perfect? No. The sample is Claude users, which skews toward people who already see value in AI. The classifiers introduce their own biases. But it’s still a massive improvement over the usual hand-waving about what “people” think about AI.

What’s missing

The study doesn’t address non-users. What about people who’ve tried AI and walked away? What about communities without reliable internet access? The 159 countries sound impressive, but participation requires a Claude account and the willingness to sit through an AI interview.

Also, this is self-reported data. People say they want time freedom, but that doesn’t mean they’ll actually use AI that way. The gap between aspiration and behavior is real.

Still, this is the most concrete picture we have of what regular people — not CEOs, not researchers, not policy wonks — actually want from AI. They want help, not replacement. They want time, not surveillance. They want tools that make them better at being human, not machines that make them obsolete.

A Nigerian entrepreneur captured the ambivalence best: “I live hand to mouth, zero savings. If I use AI smarter, it may help me craft solutions to that cycle. It still depends on me.”

That’s the key line. AI can help, but it still depends on us. Hope and fear live together. The question is which one we act on.
