ChatGPT is a poor knowledge base, new research confirms

There is (probably a little too much) chatter on the internet about how OpenAI’s ChatGPT and similar artificially intelligent (AI) chatbots are going to change the way we work.

There’s also some mischief involved: Will AI chatbots make a mockery of academia? Do away with experts? Will they be a harbinger of something like I, Robot, or will Skynet become real?

Now experts from Purdue University, based in West Lafayette in the US, have finally answered these questions definitively in a thirteen-page paper (PDF), reaching the hitherto unconsidered conclusion that, no, AI chatbots don’t know everything.

AI chatbots and factual misinformation

The paper takes software engineering questions as the basis for its findings, comparing the veracity of ChatGPT’s answers to those of real users of the popular programming question-and-answer portal Stack Overflow (essentially a worthier Yahoo! Answers).

The gratingly ubiquitous chatbot was given 517 questions on the topic taken from the site, and the results are hard to dispute.

52% of ChatGPT’s answers were incorrect, which, if our maths checks out, means only 48% of the chatbot’s answers were right.
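
For readers curious how an evaluation like this works in practice, here is a minimal sketch, not the study’s actual harness: it loops over a file of Stack Overflow questions, sends each one to a chat model through the OpenAI Python SDK, and saves the answers for manual grading afterwards. The file names, the question format, and the model choice are all assumptions made for illustration.

    # Minimal sketch of a ChatGPT-vs-Stack-Overflow style evaluation.
    # Assumes the `openai` Python package (v1+) and an OPENAI_API_KEY
    # in the environment; "questions.json" and "chatgpt_answers.json"
    # are hypothetical file names.
    import json
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask_model(question: str, model: str = "gpt-3.5-turbo") -> str:
        """Send one Stack Overflow question to the chat model and return its answer."""
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": question}],
        )
        return response.choices[0].message.content

    def main() -> None:
        # questions.json: a list of {"id": ..., "title": ..., "body": ...} records
        with open("questions.json") as f:
            questions = json.load(f)

        results = []
        for q in questions:
            answer = ask_model(f"{q['title']}\n\n{q['body']}")
            # Correctness is judged by human reviewers afterwards, as in the study.
            results.append({"id": q["id"], "answer": answer})

        with open("chatgpt_answers.json", "w") as f:
            json.dump(results, f, indent=2)

    if __name__ == "__main__":
        main()

The grading itself stays manual here, since deciding whether a programming answer is actually correct is exactly the part a chatbot cannot be trusted to do.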

Analysis – certainly not infallible

Based on this, we should commit to throwing AI into the Caspian Sea. We must respect the result. It started over 40 years ago with Stanley Kubrick and it ends here. A fantastic campaign from all involved.

We can joke, but the results are clear: AI as a knowledge source doesn’t quite work, and the implications are potentially dangerous.

Even according to this study, a bizarre number of people don’t notice, or don’t care about, the potential for misinformation. In a kind of Pepsi/Coke blind taste test, 12 participants with varying levels of programming knowledge failed to identify the AI-generated answer 39.34% of the time, preferring it to what turned out to be the Stack Overflow response.

ChatGPT is often assumed to be foolproof, even though it definitely isn’t, because of the way its answers are presented. The study found that the chatbot’s correct answers covered all aspects of the question 65% of the time, and that users often accepted incorrect information as truth thanks to answers that were “elaborate, well-articulated, and human-like”.

Via ZDNet
