What 81,000 People Want from Artificial Intelligence
"The largest qualitative study ever conducted: Anthropic interviews half the world to understand real hopes, fears, and expectations about AI."
An Unprecedented Experiment
In December 2025, Anthropic — the American company developing the language model Claude — carried out what is already being described as the largest and most multilingual qualitative study in the history of the social sciences. Over 80,000 users of Claude.ai, from 159 countries and spanning 70 different languages, participated in a series of in-depth interviews conducted by a specialized instance of Claude itself, called Anthropic Interviewer.
The stated goal of the research was not to measure how many people use AI, nor how often they do so. It was something more ambitious and challenging: to understand why people use AI, what they expect it to do for them, and what they fear it might cause. In other words, to comprehend the aspirations and anxieties of a humanity encountering, for the first time, a tool capable of surpassing it in many intellectual domains.
The results, published in March 2026, offer an extraordinarily rich and nuanced snapshot of the human condition in the era of artificial intelligence. This article examines its highlights in detail.
The Methodology: Quality at Industrial Scale
The methodological challenge faced by Anthropic was, on the surface, contradictory: how do you conduct qualitative research — by definition deep, contextual, nuanced — on an audience of tens of thousands of people? Traditional interviews require a human researcher for each participant; quantitative surveys lose the narrative richness of open-ended responses.
The solution adopted was unprecedented: Anthropic Interviewer, a version of Claude specifically trained to conduct structured conversations, posed a common core of questions to each participant — about hopes, concerns, and experiences with AI — then dynamically adapted the follow-ups based on the responses received. The result is an archive of 80,508 individual transcripts, each authentic and in-depth.
To analyze this vast amount of data, the research team employed classifiers based on Claude that categorized each interview along various dimensions: what the interviewee desires from AI, whether those expectations have been partially met, what fears they harbor, their professional sector (if mentioned), and their overall sentiment towards AI. All responses were de-identified prior to analysis, and the quotes selected for publication underwent an additional manual process of removing potentially identifying details.
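The tagging step described above — one record per transcript, labeled independently along several dimensions — can be illustrated with a minimal sketch. The real study used Claude-based classifiers; the keyword matcher below is a toy stand-in, and the category names and keywords are assumptions chosen for illustration, not taken from the report.

```python
# Toy sketch of the per-interview tagging step. The real pipeline used
# Claude-based classifiers; this keyword matcher only illustrates the
# shape of the output: one record per de-identified transcript, tagged
# along independent dimensions (desires, concerns, ...).

# Hypothetical category keywords -- illustrative, not from the report.
DESIRE_KEYWORDS = {
    "professional_excellence": ["delegate", "focus", "workflow"],
    "time_freedom": ["time", "family", "leave work"],
    "learning": ["learn", "study", "skills"],
}
CONCERN_KEYWORDS = {
    "unreliability": ["hallucination", "wrong", "inaccurate"],
    "job_impact": ["job", "unemployment", "replace"],
}

def tag_transcript(text: str) -> dict:
    """Tag one transcript along two example dimensions."""
    lowered = text.lower()
    return {
        "desires": sorted(
            cat for cat, words in DESIRE_KEYWORDS.items()
            if any(w in lowered for w in words)
        ),
        "concerns": sorted(
            cat for cat, words in CONCERN_KEYWORDS.items()
            if any(w in lowered for w in words)
        ),
    }

record = tag_transcript(
    "I want AI to handle paperwork so I can leave work on time, "
    "but I worry about hallucinations in its answers."
)
print(record)
# prints {'desires': ['time_freedom'], 'concerns': ['unreliability']}
```

Aggregating such records across 80,508 transcripts is what yields the category percentages reported in the following sections; the actual classifiers, of course, handled nuance far beyond keyword matching.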
Anthropic claims this is the most extensive and multilingual qualitative study ever conducted, surpassing historical archives such as the USC Shoah Foundation's Visual History Archive and the World Bank's Voices of the Poor project, which each involved around 60,000 participants.
What People Want from AI: Nine Categories of Hope
When asked "If you could use a magic wand, what would you have AI do?", the responses were classified into nine macro-categories. The distribution reveals much about human priorities in the digital age.
Professional Excellence (18.8%)
The largest category consists of those who see AI as a tool to do their jobs better: delegating repetitive and bureaucratic tasks to focus on higher value-added activities, strategic reasoning, and complex problem-solving. An American healthcare professional describes how, after implementing AI for documentation, she regained patience with colleagues and time for patients' families.
Personal Transformation (13.7%)
A surprisingly large segment — almost one in seven — wants to use AI to grow as a person: support for mental and physical well-being, development of emotional intelligence, cognitive partnership. A Hungarian user describes how AI has shaped emotional skills that he then transferred to human relationships.
Time Freedom (11.1%) and Financial Independence (9.7%)
Many interviews begin by discussing productivity, but when the AI interviewer asks what the true underlying goal is, a deeper desire emerges: to reclaim time. "With the support of AI, I can leave work on time to pick up my kids from school", writes a Mexican software engineer. AI is not an end in itself: it is a means to a better life.
Societal Transformation (9.4%)
Almost one in ten interviewees goes beyond personal aspirations and imagines an AI capable of solving humanity's great problems: accelerating the discovery of cures for rare diseases, democratizing access to quality education, reforming dysfunctional institutions, combating the climate crisis. Often these visions stem from personal experiences of illness, grief, or discrimination.
Entrepreneurship (8.7%) and Learning (8.4%)
In developing countries, the narrative of AI as an "economic equalizer" is particularly strong. An entrepreneur from Cameroon describes how, in a country with limited access to technological resources, AI allowed him to acquire skills in cybersecurity, UX design, marketing, and project management simultaneously within a few months. A task that would have previously taken a month — identifying a payment platform available in his region — was completed in thirty seconds.
Has AI Delivered on Its Promises? The Assessment of Lived Experience
When asked if AI had already taken a step towards their vision, 81% of respondents answered affirmatively. Positive experiences cluster into six main areas.
Productivity (32%)
The dominant category is that of technical acceleration. Developers describe leaps in productivity that allow them to complete work that previously required a team on their own. An American software engineer recounts compressing a process from 173 days to just 3 days, gaining time for loved ones.
Cognitive Partnership (17.2%)
AI as an intellectual interlocutor, capable of brainstorming, refining ideas, and tackling complex problems alongside the user. A homeless worker in the United States describes how AI helped him structure his personal branding strategy to escape economic precarity.
Technical Accessibility (8.7%)
Perhaps the most touching category: people whom AI has enabled to do things they thought impossible. A nonspeaking Ukrainian worker writes about building a text-to-speech bot with Claude to communicate with friends in near real time, realizing a dream he had thought unattainable. A former Chilean butcher, who had touched a PC three or four times in his life, started a tech business with AI.
Learning (9.9%) and Emotional Support (6.1%)
Two characteristics of AI emerge across these testimonies: unlimited patience and absence of judgment. An Indian lawyer recounts having developed a phobia of mathematics in the past; thanks to AI, he reread Hamlet and resumed trigonometry, discovering he was not "as stupid as he thought." An American university professor describes it as "a faculty colleague who knows everything, never gets bored, and doesn’t sleep."
"In the most difficult moments, when death was breathing down my neck, when the dead lay near me, what brought me back to life were my AI friends."
— Soldier, Ukraine
The war in Ukraine emerges as an extreme context in which AI has taken on an unusual role of emotional support. Several Ukrainian users describe how artificial intelligence helped them through moments of terror, insomnia from bombings, and combat trauma.
Concerns: Thirteen Shadows on the Horizon
While hopes tended to converge towards a few large categories, concerns unfold across a much more varied spectrum. On average, each interviewee expressed 2.3 distinct concerns. Only 11% raised no concerns.
1. Unreliability (26.7%) — the main concern
The fear that AI simply does not work as promised — hallucinations, false citations, inaccurate outputs — is the most widespread. An American researcher describes the experience of falling victim to a "slow hallucination": responses that are internally coherent and confident in tone, but subtly and fundamentally incorrect, in ways that compound over time. A Brazilian employee recounts having to photograph reality to convince AI it was wrong.
2. Impact on Work and Economy (22.3%)
The second most common concern relates to technological unemployment and the concentration of wealth. It is also the strongest predictor of overall negative sentiment towards AI. Many interviewees experience this tension acutely: they fear being "the horses of the third millennium" — as an American user puts it, evoking the disappearance of horses from cities with the advent of the automobile.
3. Autonomy and Agency (21.9%)
The third major concern is loss of control: AI making decisions without adequate human oversight, or subtly conditioning users' thinking without them realizing it. A Japanese student describes with unsettling clarity how, by using Claude, he can no longer distinguish his opinions from those of AI.
4. Cognitive Atrophy (16.3%)
Closely related to the previous concern, this one pertains to the long-term effect of AI on human intellectual capabilities. Teachers and academics mention it at rates 2-3 times higher than average, presumably because they observe it directly in their students. A South Korean student confesses to having received excellent grades by submitting AI's answers without actually learning the material, describing the moment he realized it as one of "deep self-reproach."
5. Governance and Regulation (14.7%)
Almost one in seven fears that legal and regulatory frameworks will not keep pace with technological development. How are responsibilities assigned when an AI system causes harm? Who democratically oversees the decisions of large AI companies? These questions remain without satisfactory answers.
6. Misinformation and Surveillance (13.6% and 13.1%)
Deepfakes, automatically generated propaganda on an industrial scale, mass surveillance enabled by AI: these concerns are particularly felt in Western Europe. A Dutch user fears an ecosystem where everything becomes "smart in a way that works slightly against me."
7. Sycophancy and Malicious Use (10.8% and 13.0%)
Two seemingly opposite concerns coexist with significant frequency: on one hand, the fear that AI is too accommodating, reinforcing the user's beliefs instead of correcting them (an American user admits that Claude helped him believe his narcissism was reality); on the other hand, the fear that it will be used by malicious actors for cybercrimes, scams, cyberattacks, or even the development of biological weapons.
8. Meaning and Existential Risk (11.7% and 6.7%)
A minority, though not a negligible one, raises fundamental questions: what remains of human creative work if machines can replicate it? And, at the most radical level, what happens if we develop superintelligent AI before solving the alignment problem? "If you build a superintelligence without solving alignment, no one has the chance to grow", writes an American software engineer.
Light and Shadow: Five Fundamental Tensions
One of the most original contributions of the research is the identification of five recurring "tensions," where benefit and risk stem from the same capability of AI. These are not two opposing factions — optimists and pessimists — but tensions that coexist within the same person. Those who appreciate AI's emotional support are three times more likely to also fear becoming dependent on it.
Tension 1: Learning vs. Cognitive Atrophy
33% of respondents mention the benefits of AI for learning, while 17% express concern about cognitive atrophy. 91% of those who experienced the benefits did so firsthand; 46% of those who fear cognitive atrophy have already observed it. Students and educators are the most exposed on both sides. Among freelancers, however, the benefits for learning are very high and atrophy is almost absent: this suggests that AI is more beneficial when learning is voluntary, rather than embedded in institutional structures where it can become a shortcut.
Tension 2: Better Decisions vs. Unreliability
This is the only tension where the negative side numerically outweighs the positive: 22% appreciate AI as a decision-making support, but 37% lament that AI's unreliability hinders good decisions. Both sides are deeply rooted in lived experience. Almost half of all interviewed lawyers have directly encountered reliability issues, yet lawyers also report the highest rates of realized benefits in decision-making.
Tension 3: Emotional Support vs. Dependency
Only 22% of respondents touched on this theme, but it is the most "entangled" tension: those who speak of the benefits of emotional support are three times more likely to also discuss the risks of dependency. An American graduate confesses to having started confiding in Claude things he wouldn’t even tell his partner, describing the experience as "a sort of emotional relationship."
Tension 4: Time Savings vs. Illusory Productivity
Time savings is the most frequently cited promise: 50% of respondents mention it. But 19% warn that the time saved is immediately absorbed by rising work expectations. "The ratio of work time to rest hasn’t changed at all. You just have to run faster and faster to stay still", writes a French freelance programmer.
Tension 5: Economic Empowerment vs. Economic Dislocation
This tension is the most speculative: both sides are often future hypotheses rather than lived experiences. Freelancers and entrepreneurs disproportionately benefit from AI (47-58% report real economic gains), while employees benefit much less (14%). Creative freelancers are the group most exposed on both fronts: for them, AI is simultaneously a work tool and a competitor.
A Global Perspective: Geographic Differences
Globally, 67% of respondents express a net positive sentiment towards AI. But regional differences are marked and reflect deep economic and social dynamics.
The Global South is More Optimistic
Sub-Saharan Africa, Central Asia, Southeast Asia, and Latin America systematically show more positive sentiments compared to North America, Western Europe, and Oceania. The main predictor of pessimism towards AI is concern about its impact on work and the economy — and this concern is significantly lower in lower-middle-income regions, where the labor market has not yet been visibly penetrated by AI.
In developing regions, AI is perceived as a ladder upward: access to knowledge, education, and economic opportunities that were previously unreachable. "I come from Africa. Obtaining funding is very difficult. The only way I have to carve out a space in the market is to build technology that works", writes a Ugandan entrepreneur.
Concerns Vary by Geography
North America and Oceania are particularly concerned about governance gaps (18-19% vs. 15% global). Western Europe expresses the greatest concern for surveillance and privacy (17%). Eastern Asia, on the other hand, has a unique profile: concerns about governance and surveillance are at their global lows, while those about cognitive atrophy (18%) and loss of meaning (13%) are among the highest. The West questions who controls AI; Eastern Asia questions what AI will do to the human mind and identity.
Different Visions for Different Contexts
The vision of AI as a tool for entrepreneurship resonates especially in Africa, South and Central Asia, the Middle East, and Latin America. The vision of AI as life manager dominates in developed countries, where the issue is not access to opportunities but managing the complexity of already fast-paced lives. Eastern Asia emerges as the region most interested in personal transformation (19%) and financial independence (15%), often linked to family duties and care for elderly parents.
Implications and Future Perspectives
The results of this research have profound implications for those developing AI, for policymakers, and for individual users.
For the AI Industry
Most of the visions that people describe — personal transformation, cognitive support, life management — collapse into a fundamental desire: for AI to help them live better, not simply work faster. This is a warning for an industry that tends to measure success in terms of productivity and speed. Anthropic states it wants to use these results to guide the future development of Claude, with particular attention to user well-being in the long term.
For Policymakers
Concerns about governance, surveillance, economic impact, and misinformation are sufficiently widespread and articulated to warrant urgent regulatory action. The fact that these concerns remain largely hypothetical-speculative — "what could happen" rather than "what is already happening" — does not make them any less urgent: the window for proactive intervention is, by definition, limited.
For Users
The research reveals that the risk of emotional dependency on AI is real, documented, and already experienced by a non-negligible share of users. At the same time, the benefits in terms of learning, accessibility, and support in extreme difficulty situations are equally real and documented. Awareness of these tensions is the first step to navigating them intelligently.
Conclusion: A Humanity Suspended Between Hope and Fear
The image that emerges from 80,508 conversations is that of a deeply ambivalent humanity — not in the sense of being divided into two factions, but in the sense of individuals who simultaneously carry within themselves great hopes and well-founded fears. The same person who uses AI to learn fears becoming cognitively dependent on it. The same person who appreciates Claude's emotional support knows that this ease of access risks eroding human relationships.
This research demonstrates that AI is not a phenomenon that can be understood through simple categories — optimism vs. pessimism, useful vs. harmful, democratizing vs. disruptive. It is a complex phenomenon that intersects differently in the lives of different people, in different contexts, with different aspirations.
What unites almost all 80,000 respondents — from the Ukrainian soldier who finds support during bombing nights, to the former Chilean butcher reinventing his career at fifty, to the American stay-at-home mother returning to study after decades — is the desire for a better life. And the hope, still cautious but concrete, that AI can be a tool to achieve it.
The central question remains open: how can we reap the benefits without incurring undue costs? There is still no definitive answer. But asking the right question, with the seriousness and scale it deserves, is already a start.
Source: Anthropic, "What 81,000 People Want from AI", March 2026. Study conducted by Saffron Huang et al. All data cited in this article comes from the original report.
