ChatGPT sets its sights on the university! AI bot can now reason as well as the average student, study claims


Artificial intelligence can now reason as well as the average student.

Dr. Geoffrey Hinton, who is seen as one of the godfathers of AI, recently warned that the technology “might soon” be more intelligent than humans.

Now it appears that AI has mastered a type of intelligence called “analogical reasoning” that was previously believed to be uniquely human.

Analogical reasoning means working out a solution to an entirely new problem by using experiences from previous similar problems.

Given one type of test that requires this reasoning, the AI language program GPT-3 beat the average score of 40 college students.


WHAT IS CHATGPT?

ChatGPT is powered by a large language model known as GPT-3, which is trained on a massive amount of text data, allowing it to generate eerily human-like text in response to a given prompt.

OpenAI says its ChatGPT model is trained using a machine learning technique called Reinforcement Learning from Human Feedback (RLHF).

This training allows it to simulate dialogue, answer follow-up questions, admit errors, challenge false premises, and reject inappropriate requests.

It responds to text prompts from users and can be asked to write essays, lyrics for songs, stories, marketing pitches, scripts, letters of complaint, and even poetry.

The emergence of human-like thinking in machines is something that many experts are watching closely.

Dr. Hinton, who resigned from Google, told the BBC earlier this year that there are “long-term risks of things more intelligent than us taking control.”

But many other leading experts insist that artificial intelligence isn’t much of a threat, and the new study notes that GPT-3 still fails some relatively simple tests that children can solve.

However, the language model, which processes text, did about as well as humans in detecting patterns in letter and word sequences, completing lists of linked words, and identifying similarities between detailed stories.

Importantly, it did this without any specific training; it appeared to reason by drawing on experience from unrelated previous problems.

Professor Hongjing Lu, senior author of the study from the University of California, Los Angeles (UCLA), said: “Language learning models just try to make word predictions, so we’re surprised they can reason.

“In the past two years, the technology has made a big leap from its previous incarnations.”

The AI scored better than the average human in the study on a series of problems inspired by a test known as Raven’s Progressive Matrices, which requires someone to predict the next image in a complicated arrangement of shapes.

For the study, the shapes were converted to a text format that GPT-3 could process.

GPT-3, developed by OpenAI – the company behind the infamous ChatGPT program that experts have suggested could one day replace many people’s jobs – correctly solved about 80 percent of the problems.



That score was well above the 40 students’ average of just under 60 percent, although some of them outperformed the technology.

GPT-3 also outperformed students in a series of tests that involved completing a list of words in which the first two were related, such as “love” and “hate,” and it had to guess the fourth word, “poor,” because it related to the third word, “rich,” in the same way, as its opposite.

The AI scored better than the average results achieved by students who took the test as part of their university applications.

The authors of the study, published in the journal Nature Human Behaviour, want to understand whether GPT-3 mimics human reasoning or has developed a fundamentally different form of machine intelligence.

Keith Holyoak, a UCLA psychology professor and co-author of the study, said, “GPT-3 could think a little bit like a human.

“But on the other hand, people didn’t learn by absorbing the whole internet, so the way of training is very different.

“We’d like to know if it actually does it the way humans do, or if it’s something completely new — a real artificial intelligence — which would be amazing in itself.”

However, the AI still gave “nonsensical” answers in a reasoning test in which it was given a list of items, including a cane, a hollow cardboard tube, paperclips and rubber bands, and asked how it would use them to transfer a bowl of gumballs into a second, empty bowl.
