AI could help scale humanitarian responses. But it could also have big downsides

NEW YORK– As the International Rescue Committee deals with the dramatic increase in the number of displaced people in recent years, the refugee aid agency has sought efficiencies wherever possible – including the use of artificial intelligence.

Since 2015, the IRC has invested in Signpost – a portfolio of mobile apps and social media channels that answer questions in different languages for people in dangerous situations. The Signpost project, which involves many other organizations, has reached 18 million people to date, but the IRC is looking to significantly expand its reach by using AI tools.

Conflicts, climate emergencies and economic hardship have driven up demand for humanitarian aid, with 117 million people forcibly displaced in 2024, according to the United Nations refugee agency. As humanitarian organizations encounter more and more people in need, they also face massive funding shortfalls. The turn to artificial intelligence technologies is driven in part by this huge gap between needs and resources.

To achieve its goal of reaching half of displaced people within three years, the IRC is building a network of AI chatbots that can boost the capacity of its humanitarian officers and the local organizations that directly serve people through Signpost. For now, the project is active in El Salvador, Kenya, Greece and Italy and responds in 11 languages. It is based on a combination of large language models from some of the largest technology companies, including OpenAI, Anthropic and Google.

The chatbot response system also uses customer service software from Zendesk and receives other support from Google and Cisco Systems.

In addition to developing these tools, the IRC aims to expand this infrastructure to other humanitarian nonprofits at no cost. They hope to create shared technology resources that less tech-focused organizations could use without having to negotiate directly with tech companies or manage the risks of implementation.

“We try to be really clear about where the legitimate concerns lie, but lean into the optimism of the opportunity and also not allow the populations we serve to be left behind in solutions that have the potential to scale in a way that human-to-human or other technology cannot,” said Jeannie Annan, chief research and innovation officer of the International Rescue Committee.

The answers and information that Signpost chatbots provide are vetted by local organizations to ensure they are up to date and sensitive to the precarious circumstances people may find themselves in. A sample question the IRC shared came from a woman from El Salvador traveling through Mexico to the United States with her son, seeking shelter and services for her child. The bot provided a list of providers in the area where she was located.

More complex or sensitive questions are escalated so that humans can respond.

The most serious potential downside of these tools is that they can fail. For example, what if the situation on the ground changes and the chatbot doesn’t know? It could provide information that is not just wrong but dangerous.

A second problem is that these tools can amass a valuable trove of data about vulnerable people that hostile actors could target. What if a hacker gains access to data containing personal information, or if that data is accidentally shared with an oppressive government?

The IRC said it has agreed with the technology providers that none of their AI models will be trained on the data generated by the IRC, the local organizations or the people they serve. They have also worked to anonymize the data, including removing personal information and location.

As part of the Signpost.AI project, the IRC is also testing tools such as a digital automated tutor and maps that can integrate many different types of data to help prepare for and respond to crises.

Cathy Petrozzino, who works for the nonprofit research and development company MITRE, said AI tools have great potential but also big risks. To use these tools responsibly, she said, organizations should ask themselves: Does the technology work? Is it fair? Are data and privacy protected?

She also emphasized that organizations need to bring together a range of people to help drive and design the initiative – not just technical experts, but also people with deep knowledge of the context, legal experts and representatives of the groups that will use the tools.

“There are a lot of good models sitting in the AI graveyard,” she said, “because they weren’t developed in collaboration with the user community.”

For any system that has potentially life-changing consequences, Petrozzino said, groups should bring in outside experts to independently review their methodologies. Designers of AI tools need to consider the other systems they will interact with, she said, and make plans to monitor the model over time.

Consulting with displaced people or others served by humanitarian organizations can increase the time and effort required to design these tools, but not having their input raises many safety and ethical concerns, said Helen McElhinney, executive director of CDAC Network. It can also unlock local knowledge.

People receiving services from humanitarian organizations should be told whether an AI model will analyze the information they hand over, she said, even if the intention is to help the organization respond better. That requires meaningful and informed consent, she said. They also need to know whether an AI model is making life-changing decisions about resource allocation and where responsibility for those decisions lies, she said.

Degan Ali, CEO of Adeso, a nonprofit in Somalia and Kenya, has long advocated changing the power dynamics in international development to give more money and control to local organizations. She asked how the IRC and others pursuing these technologies would overcome access issues, pointing to the weekslong power outages caused by Hurricane Helene in the U.S. Chatbots won’t help when there’s no device, internet or electricity, she said.

Ali also warned that few local organizations have the capacity to attend major humanitarian conferences discussing the ethics of AI. Few employees are senior enough and knowledgeable enough to really engage in these discussions, she said, even if they understand the potential power and impact these technologies can have.

“We must be extremely careful not to replicate power imbalances and biases through technology,” Ali said. “The most complex questions will always require local, contextual and lived experiences to be answered in a meaningful way.”

___

The Associated Press and OpenAI have a licensing and technology agreement that gives OpenAI access to part of AP’s text archives.

___

Associated Press coverage of philanthropy and nonprofits is supported by the AP’s partnership with The Conversation US, with funding from Lilly Endowment Inc. The AP is solely responsible for this content. For all of AP’s philanthropic coverage, visit https://apnews.com/hub/philanthropy.