Research shows that most people believe generative AI is conscious, which could prove it’s also good at making people hallucinate

When you interact with ChatGPT and other conversational generative AI tools, they process your input through algorithms to compose a response that feels like it came from another sentient being, despite the reality of how large language models (LLMs) function. Yet two-thirds of respondents to a study from the University of Waterloo believe AI chatbots are somehow conscious, even capable of passing the Turing test, the long-standing benchmark for judging whether a computer's behavior is indistinguishable from a human's.

Generative AI, as exemplified by OpenAI’s work on ChatGPT, has come a long way in recent years. The company and its rivals often talk about their vision for artificial general intelligence (AGI): AI with human-like intelligence. OpenAI even has a new scale for measuring how close its models are to achieving AGI. But even the most optimistic experts aren’t suggesting that AGI systems will be self-aware or capable of real emotion. Still, 67% of the 300 people surveyed said they believed ChatGPT could somehow reason, feel, and be aware of its own existence.