Why Deepfakes and AI Trust Issues Impact Business

From the now-infamous Kensington Palace Mother’s Day photo to doctored audio of Tom Cruise disparaging the Olympic Committee, AI-generated content has recently been in the news for all the wrong reasons. These cases have sparked widespread controversy and paranoia, leading people to question the authenticity and provenance of the content they see online.

The fallout is affecting every corner of society: not just public figures and everyday internet users, but also the world’s largest corporations. Chase Bank, for example, reported being fooled by a deepfake during an internal experiment. Meanwhile, a report revealed that deepfake incidents in the fintech sector increased by 700% in just one year.

Today, there is a serious lack of transparency around AI, including whether an image, video, or voice was generated by AI. Effective methods for vetting AI that unlock greater levels of accountability and incentivize companies to more aggressively stamp out misleading content are still in development. These shortcomings compound AI’s trust problem, and combating these challenges depends on bringing greater clarity to AI models. It’s a primary hurdle for companies that want to tap into the immense value of AI tools but fear the risk outweighs the reward.

Mrinal Manohar, CEO and Founder of Casper Labs.

Can business leaders trust AI?

Right now, all eyes are on AI. But while the technology has seen historic levels of innovation and investment, trust in AI and many of the companies behind it has steadily declined. Not only is it becoming harder to distinguish human-generated from AI-generated content online, but business leaders are also becoming increasingly reluctant to invest in their own AI systems. There’s a common struggle to ensure the benefits outweigh the risks, compounded by a lack of clarity around how the technology actually works. It’s often unclear what kind of data is being used to train models, how that data influences the outputs they generate, and what the technology is doing with a company’s proprietary data.

This lack of visibility presents a range of legal and security risks for business leaders. Even as AI budgets are set to increase by as much as five times this year, mounting cybersecurity concerns have reportedly led enterprises to block 18.5% of all AI or ML transactions. That’s a massive 577% increase in just nine months, with the highest blocking rate (37.16%) in finance and insurance, sectors with particularly stringent security and regulatory requirements. The finance and insurance sector is a harbinger of what could happen in other industries as questions about the security and legal risks of AI grow and businesses weigh the implications of using the technology.

Even as businesses eagerly anticipate the $15.7 trillion in value that AI could unlock by 2030, it’s clear they can’t fully trust the technology yet, and the hurdle will only grow if these problems aren’t addressed. There’s a pressing need to introduce greater transparency into AI: to make it easier to identify whether content is AI-generated, to see how AI systems use data, and to better understand outputs. The big question is how to achieve this. Transparency and declining trust in AI are complex problems with no single solution, and progress will require collaboration across industries around the world.

Tackling a complex technical challenge

Fortunately, we’ve already seen signs that both government and technology leaders are focused on addressing the issue. The recent EU AI Act is an important first step in establishing regulatory guidelines and requirements around responsible AI implementation, and in the US, states like California have taken steps to introduce their own legislation.

While these laws are valuable because they outline specific risks for industrial use cases, they only provide standards to enforce, not solutions to implement. The lack of transparency in AI systems runs deep, down to the data used to train models and how that data influences outputs, and it poses a difficult technical problem.

Blockchain is one technology emerging as a potential solution. While blockchain is widely associated with crypto, at its core it is a tamper-resistant, append-only data store that keeps records in a strict order. For AI, it can increase transparency and trust by providing an automated, certifiable audit trail of AI data: from the data used to train models, to inputs and outputs during use, to the impact specific datasets have had on a model’s output.
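
To ground the idea, here is a minimal sketch of a tamper-evident audit trail, assuming a simple in-memory hash chain rather than any particular blockchain platform; the event types and sample entries are illustrative. Each record of training data, inputs, or outputs is hashed together with the previous record’s hash, so retroactively altering any entry invalidates everything after it.

```python
import hashlib
import json
import time

class AuditTrail:
    """A hash-chained log: a toy stand-in for a blockchain-backed audit trail."""

    def __init__(self):
        self.entries = []

    def append(self, event_type: str, payload: dict) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {
            "timestamp": time.time(),
            "event_type": event_type,  # e.g. "training_data", "inference_input"
            "payload": payload,
            "prev_hash": prev_hash,
        }
        # Hash the record together with the previous hash to chain entries.
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(record)
        return record

    def verify(self) -> bool:
        """Recompute every hash; any tampering breaks the chain."""
        prev_hash = "0" * 64
        for record in self.entries:
            body = {k: v for k, v in record.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if record["prev_hash"] != prev_hash or record["hash"] != expected:
                return False
            prev_hash = record["hash"]
        return True

# Illustrative entries only.
trail = AuditTrail()
trail.append("training_data", {"dataset": "support_tickets_v2"})
trail.append("inference_input", {"prompt": "Summarize Q3 churn drivers"})
print(trail.verify())  # True until any stored entry is modified
```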

Retrieval augmented generation (RAG) has also quickly emerged and been embraced by AI leaders as a way to bring transparency to these systems. RAG allows AI models to query external data sources, such as the internet or a company’s internal documents, in real time to inform outputs, meaning models can ground their answers in the most relevant, up-to-date information available. RAG also lets a model cite its sources, allowing users to verify the information themselves rather than having to trust it blindly.
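
To make the mechanism concrete, here is a minimal sketch of the RAG pattern with citations. The toy keyword-overlap retriever, the sample documents, and the stubbed generate() function are all illustrative assumptions standing in for a real vector search and LLM call.

```python
DOCUMENTS = [
    {"id": "policy-2024.pdf", "text": "Refunds are processed within 14 days of a return."},
    {"id": "faq.md", "text": "Returns require a receipt and the original packaging."},
    {"id": "history.txt", "text": "The company was founded in 1998 in Oslo."},
]

def retrieve(query: str, k: int = 2) -> list:
    """Rank documents by naive keyword overlap with the query."""
    words = set(query.lower().split())
    return sorted(
        DOCUMENTS,
        key=lambda d: len(words & set(d["text"].lower().split())),
        reverse=True,
    )[:k]

def generate(prompt: str) -> str:
    # Stand-in for a real LLM call so the sketch runs as-is.
    return "LLM answer grounded in:\n" + prompt

def answer(query: str) -> str:
    sources = retrieve(query)
    # Number each retrieved passage so the model can cite it as [1], [2], ...
    context = "\n".join(
        f"[{i + 1}] ({s['id']}) {s['text']}" for i, s in enumerate(sources)
    )
    prompt = (
        "Answer using only the numbered sources below, citing them by number.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )
    return generate(prompt)

print(answer("How long do refunds take?"))
```

Because the retrieved passages travel with the answer, a user can trace each claim back to a named document instead of taking the model’s word for it.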

On the deepfake front specifically, OpenAI said in February that it would embed provenance metadata (using the C2PA standard) into images generated in ChatGPT and its API, making them easier for social platforms and content distributors to detect. That same month, Meta announced a new approach to identifying and labeling AI-generated content on Facebook, Instagram, and Threads.
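
As a rough illustration of what checking for such metadata might look like, the sketch below scans a file’s raw bytes for marker strings commonly found in C2PA/JUMBF manifests. The marker list and file path are illustrative assumptions; a real workflow would use a proper C2PA validator that parses and cryptographically verifies the manifest rather than this heuristic.

```python
# Marker strings often present in C2PA/JUMBF manifests; an assumption for
# illustration, not an exhaustive or authoritative list.
PROVENANCE_MARKERS = [b"c2pa", b"jumb", b"contentauth"]

def has_provenance_hints(path: str) -> bool:
    """Naively scan a file's bytes for hints of embedded provenance metadata."""
    with open(path, "rb") as f:
        data = f.read()
    return any(marker in data for marker in PROVENANCE_MARKERS)

print(has_provenance_hints("generated.png"))  # hypothetical file path
```

It is worth noting that such metadata can be stripped or lost when images are re-encoded, which is why platforms treat it as one signal among several rather than proof on its own.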

These emerging regulations, governance technologies, and standards are a great first step in fostering greater trust around AI and paving the way for responsible adoption. But much more work needs to be done across the public and private sectors, particularly in light of viral moments that have heightened public discomfort with AI, upcoming global elections, and growing concerns about the security of AI in the enterprise.

We’re reaching a pivotal moment in the AI adoption journey, where trust in the technology has the power to tip the scales. Only with greater transparency and trust will businesses embrace AI, and their customers reap the rewards of AI-powered products and experiences that inspire joy, not discomfort.


This article was produced as part of TechRadar Pro’s Expert Insights channel, where we showcase the best and brightest minds in the technology sector today. The views expressed here are those of the author and do not necessarily represent those of TechRadar Pro or Future plc. If you’re interested in contributing, you can read more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
