Through collaboration we can shape a secure AI future

2023 will be seen as the year that artificial intelligence (AI) became mainstream – and that’s just the beginning. The global AI market is expected to grow to $2.6 trillion within ten years. Given how transformative AI promises to be, in areas ranging from healthcare to food safety, the built environment and beyond, it is critical that we find a way to harness its power as a force for good.

In addition to the excitement around ChatGPT, there are serious questions about how we build trust in AI, especially generative AI, and what guardrails are needed. This is not a future challenge: according to BSI’s recent Trust in AI survey, 38% of people already use AI in their work every day and 62% expect to do so by 2030. As the use of AI increases, many questions will need to be answered. For technology leaders and those focused on digital transformation, these include: What does safe use of AI look like? How can we bring everyone along on this exciting journey, and upskill those who need it? And how can the business community be encouraged to innovate, and what should governments do to make this possible, while maintaining the focus on safety?


Safe use of AI

Governments around the world are rushing to answer these questions. From Australia’s Responsible AI Network to China’s draft regulation of AI-powered services for citizens, to the EU AI Act and President Biden’s recent Executive Order on AI, this global conversation is live – its urgency in stark contrast to the slow global policy response to social media. Crucially, no country can dictate how another chooses to regulate, and there is no guarantee of consistency. But in our globally connected economy, organizations – and the technology they use – operate across borders. International collaboration to define our AI future and catalyze innovation is critical.

Some, including former Google CEO Eric Schmidt, have called for an IPCC-style body to govern AI, bringing together different groups to determine our future approach. This is in line with public opinion: BSI’s research shows that three-fifths of people want international guidelines for the safe use of AI. There are many ways to achieve this. Bringing people together physically is crucial, as at the recent UK AI Safety Summit, and I look forward to further progress in the upcoming discussions in South Korea and France.

Another useful starting point is international standards, which are dynamic and based on consensus among countries and multiple stakeholders, including consumers, about what good practice looks like. With rapidly emerging technology, standards and certification can act as a common infrastructure, providing clear principles designed to ensure innovation is safe. Compliance with international standards can act as a common thread, and is already an important part of similar cross-border issues such as sustainable finance and cybersecurity, where long-standing international standards are routinely used to mitigate risk. Such guidelines are intended to ensure that what reaches the market is safe, to build trust, and to help organizations implement better technology solutions for everyone. The flexibility of standards, and the speed with which organizations can adopt them, is critical given the pace of change in AI. The endgame is both to promote interoperability and to give suppliers, users and consumers confidence that AI-based products and systems meet international safety standards.

Global cooperation

While reaching consensus is not easy, with AI we are not starting from scratch. The soon-to-be-published AI Management System Standard (ISO/IEC 42001), recognized in the UK Government’s National AI Strategy, builds on existing guidance. It is a risk-based standard that helps organizations of all sizes protect themselves and their customers, and is designed to address considerations such as non-transparent automated decision-making, the use of machine learning for system design, and continuous learning. Additionally, there are already many standards around reliability, bias and consumer inclusion that can be leveraged immediately, and we are in the early stages of developing GAINS (Global AI Network of Standards). Ultimately, some of the big questions surrounding AI lack a purely technological solution, but standards help define the principles behind robustness, fairness and transparency as the technology continues to evolve.

To see this approach in action, we can look at how global cooperation is helping to accelerate decarbonization. Launched a year ago, the ISO Net Zero Guidelines were developed from a conversation among thousands of voices from more than 100 countries, including many that are typically underrepresented. Now adopted by organizations including General Motors to inform their strategies, the guidelines were described by Nigel Topping, the UN high-level climate action champion, as “a core reference text… to align global actors”.

AI has the ability to positively impact society and accelerate progress towards a sustainable world. But trust is crucial. We need global cooperation to balance the great opportunities it promises with its potential risks. If we work together across borders, we can build the right checks and balances to make AI a powerful force for good in every area of life and society.


This article was produced as part of TechRadar Pro’s Expert Insights channel, where we profile the best and brightest minds in today’s technology industry. The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc. If you are interested in contributing, you can read more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
