As more people engage with AI in everyday life – via chatbots, image generators, voice assistants, or social-media filters – it is worth asking: when does AI stop being simply helpful, and start becoming just a little bit … creepy?
This article explores three ways in which AI can unsettle us: when it seems human but isn’t, when its ‘mirror’ of society is distorted, and when it fabricates things that are plausible but wrong.
-
When the machine looks almost human
One of the biggest cognitive shifts comes when an AI system appears to understand us – it responds fluently, emulates natural speech, even tells jokes – yet behind the scenes it lacks real comprehension. This gap between apparent and actual capability is sometimes called the ‘Uncanny Valley of Agency’: the system seems agentic right up until it acts erratically or incoherently.
A simple everyday example: you might call a customer-service helpline and find yourself talking to what sounds like a well-trained human voice, yet the responses feel scripted, just slightly ‘off’ – a misinterpreted question, a strange pause, an answer that doesn’t quite land. You realise you’re interacting with a machine. What’s unsettling is almost believing it’s human, only to be reminded that it’s not.
In a broader sense, when we anthropomorphise a chatbot or voice assistant we risk over-trusting it. This phenomenon has been called the ‘ELIZA effect’ – named after a 1960s chatbot that led many users to attribute understanding to it simply by rephrasing their own statements back to them.
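To make the ELIZA effect concrete, here is a minimal, purely illustrative sketch (not Weizenbaum’s original script) of the kind of keyword-and-reflection trick ELIZA relied on. The program understands nothing; it only swaps pronouns and echoes the user’s words back as a question, yet the result can feel attentive.

```python
import re

# Toy ELIZA-style responder: it only swaps pronouns and wraps the user's
# own words in a question. There is no comprehension anywhere in here.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "your": "my", "you": "I"}

def reflect(text: str) -> str:
    # Swap first- and second-person words so the echo points back at the user.
    words = re.findall(r"[\w']+", text.lower())
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(user_input: str) -> str:
    # A single pattern is enough to look 'attentive' for a moment.
    match = re.search(r"\bi (?:feel|am) (.+)", user_input, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    return f"Can you tell me more about {reflect(user_input)}?"

if __name__ == "__main__":
    print(respond("I feel nobody listens to me"))
    # -> "Why do you feel nobody listens to you?"
```

A handful of patterns like this was enough, in the 1960s, for some users to confide in the program as if it understood them.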
For consumers, the result is a tool that feels intelligent, which tempts us to assume an accuracy or insight that may not be justified.
-
When the mirror of society is warped
AI systems are trained on huge volumes of data – text, images, audio – drawn from the real world. That means whatever biases, stereotypes, or omissions exist in that world may be baked into the model. For example, an image-generator might depict ‘engineers’ almost exclusively as men, or portray older people in limited roles.
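A loose way to see how this happens: a generative model is, at heart, sampling from patterns it has counted in its training data, so any imbalance in those counts comes straight back out. The tiny ‘dataset’ and its 90/10 split below are invented purely to illustrate the mechanism, and the sampler is a crude stand-in for a real model.

```python
import random
from collections import Counter

# Invented, deliberately skewed 'training data': captions describing engineers.
# The imbalance is fabricated purely to illustrate the mechanism.
training_captions = (
    ["a man at a workstation"] * 90
    + ["a woman at a workstation"] * 10
)

# A very crude stand-in for a generative model: sample each description
# in proportion to how often it appeared in training.
counts = Counter(training_captions)
captions, weights = zip(*counts.items())

generated = random.choices(captions, weights=weights, k=1000)
print(Counter(generated))
# The 90/10 skew of the data reappears, roughly, in the 'generated' output.
# The model has no notion that the skew is a problem, only that it is frequent.
```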
These biases matter not only for fairness but because they shape what the system produces and what we as users expect. For example: imagine a home-decor advertisement made with AI images that repeatedly show white, young, middle-class occupants. The ad might look innocuous, but what message is it sending? What groups are absent? And if you use AI to generate educational illustrations or social-media posts, how inclusive and accurate are those visuals?
The takeaway for ordinary users: AI output is not neutral – it reflects the world that fed it.
-
When the output is plausible but wrong
Perhaps most unsettling of all is when generative AI produces something that looks right – an image, a voice clip, a written paragraph – but is in fact incorrect, misleading, or disconnected from reality. In AI jargon these are called ‘hallucinations’: the system confidently renders output based on statistical patterns in its training data, with no underlying grasp of what is true.
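One rough intuition for why this happens – a deliberately oversimplified sketch, not how modern models actually work – is that if generation is driven only by ‘which words tend to follow which’, the output can be fluent without ever being checked against reality. The tiny corpus below is invented for illustration.

```python
import random
from collections import defaultdict

# Tiny invented corpus: the sentences are grammatical, not fact-checked.
corpus = (
    "the old steam train departs from the northern platform . "
    "the old station opened in 1887 . "
    "the northern line closed in 1924 ."
).split()

# Build a bigram table: for each word, which words have followed it?
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def babble(start: str = "the", length: int = 12) -> str:
    # Generate by repeatedly picking a plausible next word.
    # Nothing here checks whether the resulting claim is true.
    word, output = start, [start]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])
        output.append(word)
    return " ".join(output)

print(babble())
# Possible output: "the old station opened in 1924 ." -- fluent, confident,
# and stitched together from real fragments, yet stating something false.
```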
A consumer example: you ask an AI to generate a photograph of yourself riding a historic steam train. The image might look amazing, but the details (the train’s design, the historical uniforms, the station signage) could be subtly wrong. You might only notice if you know the subject well; otherwise you might accept it as ‘real enough’. The risk: misinformation, misplaced trust, and a blurring of the line between the real and the synthetic.
Another scenario: you ask a chatbot for medical advice and receive fluent, plausible-sounding but inaccurate information. The worst kind of creepiness isn’t the weird image; it’s the quiet confidence of the false answer.
-
Why this matters for everyday users
You don’t need to be an engineer to encounter the ‘creepy’ side of AI. From voice assistants that sound too polished, to AI-generated images on social media that feel just a little ‘off’, to chatbots that confidently answer but stumble when pressed – the technologies are becoming embedded in daily life.
Here are a few practical tips:
- Treat AI as an assistant, not an authority. Just because an output looks polished doesn’t make it correct
- Look out for ‘too good to be true’ polish. If a voice-chat sounds flawless yet seems scripted, that’s a cue to pause
- Consider representation. If AI-generated visuals keep showing the same type of people, ask who’s missing
- Retain your own human judgement. Use AI as a starting point, not a final answer
Being aware of these limits doesn’t mean rejecting AI – it means engaging with it consciously. When you do that, you can enjoy the benefits of generative tools while keeping the weird feeling at bay.