Microsoft's new ChatGPT-powered Bing AI will be limited to five replies per chat


As regular TechRadar readers will know, the highly promoted AI chatbot enhancements recently added to Bing haven’t launched so smoothly – and now Microsoft is making some changes to improve the user experience.

In a blog post (via The Verge), Microsoft says the tweaks should “help focus the chat sessions”: Bing’s AI chat will be limited to 50 chat turns (a question and an answer) per day, and five turns per session.

This was expected: Microsoft executives had previously said they were looking at ways to weed out some of the strange behavior noticed by early testers of the AI bot.

Put to the test

Those early testers pushed pretty hard: they managed to get the bot, which is based on an upgraded version of OpenAI’s ChatGPT engine, to give inaccurate answers, get angry, and even question the very nature of its own existence.

Having your search engine spiral into an existential crisis when all you wanted was a list of the best phones is not ideal. Microsoft says that very long chat sessions confuse the AI, and that the “vast majority” of queries can be answered within five replies.

The AI add-on for Bing isn’t available to everyone yet, but Microsoft says it’s working its way through the waiting list. If you plan to try out the new functionality, remember to keep your interactions short and to the point.


Analysis: Don’t believe the hype just yet

Despite the early problems, there is clearly a lot of potential in the AI-powered search tools that Microsoft and Google are developing. Whether you’re looking for party game ideas or places to visit, they can provide quick, informed results — and you don’t have to scroll through pages of links to find them.

At the same time, there is clearly still a lot of work to be done. Large Language Models (LLMs) like ChatGPT and Microsoft’s version of it don’t really “think” as such. They’re more like supercharged autocomplete engines, predicting which words should come next in a sequence to give a cohesive and relevant answer to whatever is being asked of them.

Plus there’s the issue of sourcing: if people start relying on AI to tell them which laptops are best, and human writers are put out of work, these chatbots will no longer have the data they need to produce their answers. Like traditional search engines, they still rely heavily on content put together by real people.

We, of course, took the opportunity to ask the original ChatGPT why long interactions confuse LLMs: apparently it can cause the AI models to be “too focused on the specifics of the conversation” and prevent them from “generalizing to other contexts or topics”, leading to looping behavior and responses that are “repetitive or irrelevant”.
