AI must not become a driver of human rights abuses

On May 30, the Center for AI Safety issued a public warning about the risk artificial intelligence poses to humanity. The one-sentence statement, signed by more than 350 scientists, business leaders and public figures, reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

It is hard not to feel the bitter double irony in this statement.

First, some of the signatories warning of the end of civilization, including the CEOs of Google DeepMind and OpenAI, lead the very companies primarily responsible for creating this technology. Second, it is those same companies that have the power to ensure that AI actually benefits humanity, or at least does no harm.

They must heed the advice of the human rights community and immediately adopt a due diligence framework that helps them identify, prevent and mitigate the potential negative impacts of their products.

While scientists have long warned of the dangers posed by AI, it was only with the recent release of new generative AI tools that a larger segment of the general public came to grasp the negative impact this technology could have.

Generative AI is a broad term that describes “creative” algorithms that can generate new content on their own, including images, text, audio, video, and even computer code. These algorithms are trained on huge datasets and then use that training to produce output that is often indistinguishable from “real” data, making it difficult, if not impossible, to determine whether a given piece of content was created by a person or generated by an algorithm.

To date, generative AI products have taken three main forms: tools like ChatGPT that generate text, tools like DALL-E, Midjourney, and Stable Diffusion that generate images, and tools like Codex and Copilot that generate computer code.

The pace at which new generative AI tools have emerged is unprecedented. The ChatGPT chatbot developed by OpenAI took less than two months to reach 100 million users, far faster than the initial growth of popular platforms like TikTok, which took nine months to reach the same milestone.

Throughout history, technology has advanced human rights but has also caused harm, often in unpredictable ways. When web search, social media, and mobile technology first came to market, and as they grew in adoption and accessibility, it was nearly impossible to predict many of the disturbing ways these transformative technologies would become drivers and multipliers of human rights violations around the world.

Meta’s role in the 2017 ethnic cleansing of Myanmar’s Rohingya, and the use of nearly undetectable spyware to turn mobile phones into 24-hour surveillance machines targeting journalists and human rights defenders, are both consequences of introducing disruptive technologies whose social and political implications had not been seriously considered.

The human rights community is learning from these developments and is calling on companies developing generative AI products to take immediate action to prevent potential negative impacts on human rights.

So what might a human rights-based approach to generative AI look like? Based on evidence and examples from the recent past, we propose three steps.

First, to fulfill their responsibility to respect human rights, companies developing generative AI must immediately implement a rigorous human rights due diligence framework, as set out in the UN Guiding Principles on Business and Human Rights. This includes proactive and ongoing due diligence to identify actual and potential harms, transparency regarding those harms and, where necessary, mitigation and remediation.

Second, companies developing these technologies must proactively collaborate with academics, civil society actors and community organizations, especially those representing traditionally marginalized communities.

While we cannot predict all the ways this new technology may cause or contribute to harm, there is extensive evidence that marginalized communities are the most likely to be affected. Early versions of ChatGPT exhibited racial and gender bias, suggesting, for example, that Indigenous women are “worth” less than people of other races and genders.

Active engagement with marginalized communities should be part of product design and policy development processes, to better understand the potential impact of these new tools. It cannot wait until after companies have already caused or contributed to harm.

Third, the human rights community itself must step up. In the absence of regulation to prevent and mitigate the potentially dangerous effects of generative AI, human rights organizations should take the lead in identifying actual and potential harms. This means that human rights organizations themselves must build a deep understanding of these tools and develop research, advocacy and engagement that anticipate their transformative power.

Complacency in the face of this revolutionary moment is not an option, but neither is cynicism. We all have an interest in ensuring that this powerful new technology is used to benefit humanity. Implementing a human rights-based approach to identifying and responding to harm is a critical first step in this process.

The views expressed in this article are those of the author and do not necessarily reflect the editorial view of Al Jazeera.