ChatGPT is leaking… again – we shouldn’t be surprised, but we should be disturbed
OpenAI’s ChatGPT has, over its lifetime, been called ‘stupid’, willing to assist with cybercrime, an Icarus analogy for the age, and a threat to sensitive corporate data.
Apparently we need to go through all of this again, though, because reports are popping up that the artificial intelligence tool is leaking passwords again – this time, for variety’s sake, from corporate support tickets.
Per Ars Technica, ChatGPT recently served up chat logs from a pharmaceutical company’s support system to an unrelated user, including another user’s bug report about a user portal… which contained that user’s login credentials.
Cleaning up in the GPT aisle
“I went to make a query (…) and when I returned to access it a little later, I noticed the additional conversations. They weren’t there when I used ChatGPT last night. (…) No queries were made – they just appeared in my history and are most certainly not mine (nor do I think they’re from the same user),” Ars Technica reader Chase Whiteside told the publication.
Whiteside was also able to collect ‘the name of a presentation someone was working on’, ‘details of an unpublished research proposal’ and – one for the real Ny Breaking anoraks – a script ‘written’ in PHP that was, on balance, probably lifted from a public GitHub repository.
What’s interesting/bleak (delete according to worldview) is that, despite ChatGPT having nothing near a spotless record, Whiteside says they are “a pretty heavy user” of the service, and gave no indication that this incident – or any of the ChatGPT-related incidents we reported on last year – has given them pause. Ladies, gentlemen and undefined: dependency in action.
Analysis: I want to throw the elusive concept of AI into a vat of acid, can you help me?
Look, at Ny Breaking we are very niche, we know that. You’re reading us, so you already know that “artificial intelligence” isn’t a sentient computer, and that it’s just billionaires forcing copyrighted digital works into a CPU to create a corpus that’s spat back at the masses like orange pips after the Heimlich maneuver.
If you put monkeys in a room full of computers, they won’t end up typing out the entire works of Shakespeare. Or rather: half an act’s worth of Romeo and Juliet before telling you that for longer, faster responses you have to pay $20 per month.
Computers don’t really do context; you’re not really talking to a human. You know this and more – you’re cool – but nobody else knows, or nobody else cares about the damage ‘AI’ does, or maybe a little of both.
It seems like every week the average age of someone who asks me ‘what is AI, I don’t get it but I think about it daily so I want to get it’ goes up, while my success rate in communicating what it is, the profitable cynicism behind it, and the damage it causes diminishes in real terms. Reliably, I’m told ‘Dunno, it makes my life easier’, or some variation thereof.
“Just some brazen leaking of user data, buddy,” is increasingly the stance taken by your mother, who posts a Facebook status declaring that she is NOT giving Mark Zuckerberg permission to use her private messages or personal information before posting blurry close-up photos of a new mole on her thigh that is worrying her. This is probably why we get the governments we get.
I have the sneaking feeling I’ve written about blind technological submission before, so what can I say? In this old-timey, apple-pie medicine-show world, tell them the truth: that AI chatbots can threaten, swear and be racist (a not-safe-for-work but entirely relevant headline from The Verge, right there).
Don’t tell them that this is simply a machine learning from their terrible behavior and reflecting it back at them, even though it is. Literally so in Whiteside’s case: he encountered our anonymous bug reporter exclaiming “this is so f–ing insane” at a living, breathing customer service representative.
Tell them to think about the children. It will be banned in two weeks.