Research shows that most people believe generative AI is conscious, which may just prove it's also good at making people hallucinate

When you interact with ChatGPT and other conversational generative AI tools, they process your input through algorithms to compose a response that feels like it came from another sentient being, despite the reality of how large language models (LLMs) function. Nevertheless, two-thirds of respondents to a study from the University of Waterloo believe that AI chatbots are somehow conscious, passing the Turing test that gauges whether a machine can converse indistinguishably from a human, and that this makes an AI equal to a human in terms of consciousness.

Generative AI, as embodied by OpenAI’s work on ChatGPT, has come a long way in recent years. The company and its rivals often talk about a vision for artificial general intelligence (AGI) with human-like intelligence. OpenAI even has a new scale to measure how close its models are to achieving AGI. But even the most optimistic experts aren’t suggesting that AGI systems will be self-aware or capable of real emotion. Still, 67% of the 300 people surveyed said they believed ChatGPT could somehow reason, feel, and be aware of its existence.

There was also a notable correlation between how often someone uses AI tools and how likely they are to perceive consciousness in those tools. That's a testament to how good ChatGPT is at mimicking humans, but it doesn't mean the AI is awake. ChatGPT's conversational approach likely makes it seem even more human, though no AI model works like a human brain. And while OpenAI is working on an AI model capable of autonomous exploration, called Strawberry, that's still a far cry from an AI that knows what it's doing and why.

“While most experts deny that current AI could be conscious, our research shows that AI consciousness is already a reality for much of the general public,” explains Dr. Clara Colombatto, a professor of psychology at the University of Waterloo and co-leader of the study. “These results demonstrate the power of language, because conversation alone can trick us into thinking that an agent that looks and acts very differently from us could have a mind.”

Belief in AI consciousness could have major implications for how people interact with AI tools. On the positive side, it fosters trust in what those tools do, making them easier to integrate into daily life. But that trust comes with risks, from over-reliance on AI for decision-making to, at the extreme, emotional dependence on AI and fewer human interactions.

The researchers plan to dig deeper into the specific factors that make people think AI is conscious and what that means at the individual and societal level. That work will also include long-term analyses of how those attitudes change over time and across cultural backgrounds. Understanding public perceptions of AI consciousness is crucial not only for the development of AI products, but also for the regulations and rules that govern their use.

“In addition to emotions, consciousness is related to intellectual capacities that are essential for moral responsibility: the ability to formulate plans, act purposefully, and exercise self-control are principles of our ethical and legal systems,” Colombatto said. “These public attitudes should thus be a key consideration in designing and regulating AI for safe use, alongside expert consensus.”
