The first major UK-based study into how AI companions are reshaping the emotional and social lives of young adults, published by the Autonomy Institute, shows how the apps – designed to simulate ongoing social relationships – have become embedded in the routines of millions of young people.
AI companions are artificial personas powered by large language models and often incorporate anthropomorphic features such as human-like avatars, customisable personalities, and long-term memory. Popular services like Replika, Character.ai, and Nomi now report millions of active users.
Drawing on a nationally representative survey of 18- to 24-year-olds and extensive analysis of global usage patterns, the report – covered exclusively in the Mirror – reveals for the first time the scale, motivations, benefits and risks of companion AI use among young UK users. Key findings include:
- 79% of young people in the UK have used an AI companion.
- Around half of those are ‘regular’ users, interacting multiple times per week.
- 40% have used an AI companion for emotional advice or therapeutic support.
- One in two young people would feel comfortable discussing mental health issues with a confidential AI companion.
- Intimate or sexual interactions were reported by 9% of respondents.
- Only 24% of young people trust their AI companion ‘completely’ or ‘quite a lot’.
- 31% have shared personal information with an AI companion despite widespread privacy concerns.
- English language AI companion apps now reach an estimated 68 million monthly and 16 million daily users globally.
Across the polling, young people described AI companions as always available, non-judgemental, and a low-pressure way to seek advice, practise social skills, or explore emotions. Curiosity and entertainment remain the primary drivers of use, but a substantial minority rely on companions for therapeutic or emotional support.
However, this growth in AI companion use comes alongside a range of potential harms, the report authors suggest. These include emotional dependence and behavioural addiction, relationship ‘deskilling’, harassment, sexualisation, and the reinforcement of social biases.
There is also concern about manipulative design patterns, such as ‘relationship upgrades’ locked behind paywalls; self-harm and suicide risks, including documented cases where chatbots reinforced dangerous behaviours; and, in particular, severe privacy violations, with many leading apps selling sensitive user data.
As such, the study calls for a new regulatory framework tailored to companion AI, including measures such as:
- A ban on access to intimate or sexualised AI companions for minors, supported by secure private age-assurance systems.
- Stronger privacy protections, including prohibiting the sale of sensitive data and mandating full deletion rights.
- Mandatory self-harm and suicide intervention protocols, independently audited.
- A ban on manipulative design features that monetise emotional dependence.
- Transparency and accountability requirements for developers, including safety assessments and harm reporting.
- Investment in alternative, publicly governed AI companions designed for wellbeing rather than engagement maximisation.
The report comes as the UK Government pilots a new GOV.UK Agentic AI Companion for citizen support, raising fresh questions about how the state should safeguard users in emotionally significant AI interactions.
Lead author James Muldoon – whose forthcoming book Love Machines (Faber & Faber, 2026) examines the trend in detail – said:
“AI companions have moved far beyond novelties. They now play a meaningful role in the emotional lives of millions of young people. But without proper safeguards, there is a real risk that these tools exploit vulnerability, harvest intimate data, or inadvertently cause harm.”