
Are YOUR conversations safe? ChatGPT creator fixes a bug that allowed some users to snoop on other people’s chat history

  • Sam Altman, CEO of OpenAI, confirmed that ChatGPT had a “significant” problem
  • A “small percentage” of users were able to view other people’s chat history
  • This follows previous privacy concerns raised about the company’s data usage

The creator of ChatGPT has confirmed that a bug in the system allowed some users to snoop on other people’s chat histories.

Sam Altman, CEO of OpenAI, confirmed last night that the company was experiencing a “significant issue” that threatened the privacy of conversations on its platform.

The revelations came after several social media users shared ChatGPT conversations online that they had not participated in.

As a result, users were unable to view chat history between 8am and 5pm (GMT) yesterday.

Mr. Altman said: “We had a significant issue in ChatGPT due to a bug in an open-source library, for which a fix has now been released and we have just finished validating. A small percentage of users were able to see the titles of other users’ conversation history.”
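OpenAI did not name the library or spell out the failure mode in that statement, but the symptom (one user receiving another user’s chat titles) is characteristic of a request/response mismatch on a shared connection, for example when a cancelled request leaves its reply behind on a pooled connection for the next caller to pick up. The short Python sketch below is purely illustrative, built around a hypothetical connection pool; it is not the code of the library involved:

# Illustrative only: a toy connection pool showing how a cancelled request
# can leave a stale reply behind, so the NEXT user on that connection
# receives data belonging to the PREVIOUS user. All names are hypothetical.
from collections import deque

class Connection:
    def __init__(self):
        self._inbox = deque()          # replies queued on the wire

    def send(self, query, user):
        # The server eventually answers with that user's chat titles.
        self._inbox.append(f"chat titles for {user}")

    def recv(self):
        # BUG SURFACE: recv() takes the oldest reply on the connection,
        # trusting that requests and replies always stay paired.
        return self._inbox.popleft()

pool = [Connection()]                  # one shared connection in the pool

def fetch_history(user, cancelled=False):
    conn = pool[0]
    conn.send("GET history", user)
    if cancelled:
        return None                    # cancelled after sending but before
                                       # reading: the reply stays queued
    return conn.recv()

fetch_history("alice", cancelled=True)  # Alice's request is cancelled...
print(fetch_history("bob"))             # ...so Bob reads Alice's stale reply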

On Monday, it was confirmed that a “small percentage” of ChatGPT users could view other people’s chat history

ChatGPT quick facts – what you need to know

  • It is a chatbot built on a large language model that can generate human-like text and understand complex queries
  • It was launched on November 30, 2022
  • By January 2023, it had reached 100 million users – faster than TikTok or Instagram
  • The company behind it is OpenAI
  • OpenAI has secured a $10 billion investment from Microsoft
  • Other “big tech” companies have rivals of their own, such as Google’s Bard

OpenAI, the company behind ChatGPT, was founded in Silicon Valley in 2015 by a group of American entrepreneurs and investors, including current CEO Sam Altman.

It is a large language model trained on a huge amount of text data, which allows it to generate responses to a given prompt.

People all over the world have used the platform to write human-like poems, lyrics and various other written works.

However, a “small percentage” of users this week were able to see chat titles in their own conversation history that didn’t belong to them.

On Monday, a person on Twitter warned others to “be careful” of the chatbot that had shown them other people’s conversation topics.

An image of their chat list showed a number of titles, including “Girl Chases Butterflies,” “Books on human behavior” and “Boy Survives Solo Adventure,” though it was unclear which of them were not their own.

They said: “If you’re using #ChatGPT, be careful! There is a risk that your chats will be shared with other users!”

“Today I was shown another user’s chat history. I couldn’t see the content, but I could see the titles of their recent chats.”

Sam Altman, CEO of OpenAI, confirmed that ChatGPT had a “significant” problem yesterday

Users were unable to view chat history between 8am and 5pm (GMT) yesterday

A person on Twitter warned others to ‘be careful’ with the chatbot that had shown them other people’s conversation topics

During the incident, the user added that they were seeing numerous network connectivity errors, as well as “failed to load history” errors.

According to the BBC, another user claimed they could see conversations written in Mandarin, as well as one titled “Chinese Socialism Development.”

Some ChatGPT features were subsequently disabled temporarily while the company worked to resolve the issue.

But this privacy concern is not the first to be raised around the online language model.

Last month, JP Morgan Chase joined companies such as Amazon and Accenture in restricting use of the AI chatbot ChatGPT among its approximately 250,000 employees over data privacy concerns.

One of the biggest shared concerns was that data might be used by ChatGPT’s developers to improve algorithms or make sensitive information accessible to engineers.

ChatGPT’s privacy policy states that it may use personal data related to ‘use of the services’ to ‘develop new programs and services’.

However, the policy also states that this personal information may be anonymized or aggregated before it is analyzed.
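In practice, a clause like that usually points at techniques such as pseudonymization (replacing direct identifiers) and aggregation (reducing individual rows to counts). The minimal Python sketch below shows one common pattern under an assumed record layout; OpenAI has not published the details of its own pipeline:

# A minimal sketch of "anonymized or aggregated" analysis. The record
# layout and the salt handling are assumptions made for illustration only.
import hashlib
from collections import Counter

SALT = b"rotate-me-regularly"  # hypothetical salt, stored apart from the data

def pseudonymize(user_id: str) -> str:
    # Replace the raw identifier with a salted one-way hash.
    return hashlib.sha256(SALT + user_id.encode()).hexdigest()[:12]

records = [
    {"user_id": "alice@example.com", "feature": "chat_history"},
    {"user_id": "bob@example.com",   "feature": "chat_history"},
    {"user_id": "alice@example.com", "feature": "export"},
]

# Anonymize: drop direct identifiers before any analysis happens.
anonymized = [{"user": pseudonymize(r["user_id"]), "feature": r["feature"]}
              for r in records]

# Aggregate: analysts see only counts, never individual rows.
print(Counter(r["feature"] for r in anonymized))
# Counter({'chat_history': 2, 'export': 1})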

What is OpenAI’s chatbot ChatGPT and what is it used for?

OpenAI states that their ChatGPT model, trained using a machine learning technique called Reinforcement Learning from Human Feedback (RLHF), can simulate dialogues, answer follow-up questions, admit errors, challenge incorrect assumptions, and reject inappropriate requests.

The initial development involved human AI trainers who provided the model with conversations where they played both sides: the user and an AI assistant. The version of the bot available for public testing tries to understand user questions and responds with in-depth answers that resemble human-written text in a conversational format.
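In outline, RLHF collects human preference judgements between candidate responses, fits a reward model to those judgements, and then nudges the policy towards higher-reward outputs (in practice via a reinforcement learning algorithm such as PPO). The toy Python sketch below shows only the shape of that loop; the canned replies, the keyword-based “rater” and the softmax update are deliberate simplifications, not OpenAI’s training code:

# Toy RLHF loop: preferences -> reward model -> policy shift. Everything
# here is a simplified stand-in for illustration, not real training code.
import math, random

candidates = ["I don't know.", "Here is a step-by-step answer...", "lol"]

def human_prefers(a, b):
    # Pretend human raters always prefer the detailed answer.
    if "step-by-step" in a:
        return a
    if "step-by-step" in b:
        return b
    return random.choice([a, b])

# 1) Collect pairwise preference data from the simulated raters.
wins = {c: 0 for c in candidates}
for _ in range(200):
    a, b = random.sample(candidates, 2)
    wins[human_prefers(a, b)] += 1

# 2) Fit a trivial "reward model": score = share of comparisons won.
reward = {c: wins[c] / 200 for c in candidates}

# 3) Policy step: a softmax over rewards shifts probability mass towards
#    replies the reward model scores highly (a stand-in for PPO).
z = sum(math.exp(5 * r) for r in reward.values())
policy = {c: math.exp(5 * reward[c]) / z for c in candidates}
print(max(policy, key=policy.get))  # -> "Here is a step-by-step answer..."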

A tool like ChatGPT can be used in real-world applications such as digital marketing, online content creation, answering customer service questions or, as some users have discovered, even to help debug code.
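As a rough illustration of that last use, here is how a developer might have asked the model to debug a function through OpenAI’s API as it stood at the time, using the openai Python package’s ChatCompletion interface; the model name, prompt and placeholder key are illustrative:

# Asking the model to find a bug via the chat completions API.
import openai

openai.api_key = "sk-..."  # placeholder: your own API key goes here

buggy = '''
def average(xs):
    return sum(xs) / len(xs)  # crashes when xs is empty
'''

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user",
               "content": "Find and fix the bug in this function:\n" + buggy}],
)
print(response["choices"][0]["message"]["content"])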

The bot can respond to a wide range of questions while imitating human speaking styles.

As with many AI-driven innovations, ChatGPT does not come without its doubts. OpenAI has acknowledged the tool’s tendency to respond with “plausible-sounding but incorrect or nonsensical answers,” a problem it considers challenging to solve.

AI technology can also perpetuate societal biases, such as those around race, gender and culture. Tech giants, including Alphabet Inc’s Google and Amazon.com, have previously acknowledged that some of their projects experimenting with AI were “ethically dicey” and had limitations. At several companies, humans had to step in to fix problems caused by the AI.

Despite these concerns, AI research remains attractive. Venture capital investment in AI development and operations companies rose to nearly $13 billion last year, and $6 billion had flowed in through October of this year, according to data from PitchBook, a Seattle-based company that tracks funding.