It appears that the vast majority of us are still rather skeptical of ChatGPT, the infamous Large Language Model (LLM) chatbot, largely due to concerns about its accuracy and safety.
According to a survey conducted by antivirus vendor Malwarebytes, a massive 81% of consumers are worried about the security risks the AI writer poses, 63% do not trust the information it provides, and just over half believe its use should be halted until regulations governing the development and use of generative AI are drawn up.
“An AI revolution has been gathering pace for a very long time, and many specific, narrow applications have been enormously successful without stirring this kind of mistrust,” said Mark Stockley, Cybersecurity Evangelist at Malwarebytes. He added that “public sentiment on ChatGPT is a different beast and the uncertainty around how ChatGPT will change our lives is compounded by the mysterious ways in which it works.”
Analysis: Why does it matter?
If consumers are concerned about the security and trustworthiness of ChatGPT, then businesses absolutely should be too, especially if they plan on using it in the workplace and feeding the machine sensitive company or customer data.
Samsung has already learnt this lesson the hard way: workers inputted confidential company information and trade secrets into ChatGPT while using it to help with their work. That data now presumably sits on OpenAI's servers and, as with all inputs ChatGPT receives, may be used to further train the generative AI.
The electronics giant has since banned employees from using the public service, although it now provides a contained in-house version, with a limited prompt length, where inputted data stays within the company.
OpenAI is the company behind ChatGPT, and it has received heavy investment from Microsoft to develop its all-conquering AI.
Many firms, including Microsoft itself, are using ChatGPT's underlying models to build tailor-made AI systems for specific enterprise applications, with some promising that any inputted data and prompts remain isolated and are never sent to OpenAI.
What have others said?
The survey results echo the sentiments of many experts in the fields of AI, security and privacy.
Many will no doubt be aware of the Statement on AI Risk from the Center for AI Safety, signed by significant figures in the industry – including OpenAI’s own CEO Sam Altman – warning that AI threatens the very existence of mankind. Plenty of others are voicing their concerns too, although in a somewhat less bombastic way.
For instance, Professor Uri Gal of the University of Sydney Business School argues that ChatGPT contravenes the legal principle of contextual integrity with regard to the copious amounts of data it hoovers up to train its models. Even though this data is publicly available, according to this principle it should not be revealed outside of the context in which it was originally shared.
He also believes that ChatGPT may not even be GDPR compliant, as there is no clear way to check whether OpenAI stores personal user data, or to request its deletion. This is one of the reasons why Italy temporarily banned the chatbot nationwide.
Going deeper
If you too are concerned about ChatGPT, you can read more about its privacy issues. And if you are interested in using similar AI systems at your organization, you can find out what the costs of using AI really look like.