AI experts believe that large language models (LLMs) can act as mediators in scenarios where individuals cannot reach agreement on their own.
A recent study by researchers at Google DeepMind explored this potential, particularly as a way of resolving inflammatory disputes amid the contentious political climate worldwide.
“Finding agreement through a free exchange of views is often difficult,” the study authors noted. “Collective deliberation can be slow, difficult to scale and unequally attentive to different voices.”
Winning over the group
As part of the project, DeepMind’s team trained a series of LLMs, dubbed ‘Habermas Machines’ (HMs) after the philosopher Jürgen Habermas, to act as mediators. These models were specifically trained to identify common, overlapping beliefs among individuals across the political spectrum.
Topics put to the LLMs included divisive issues such as immigration, Brexit, the minimum wage, universal childcare and climate change.
“Using participants’ personal opinions and critiques, the AI mediator iteratively generates and refines statements that express the group’s common views on social or political issues,” the authors wrote.
During the project, volunteers interacted with the model, each sharing their individual opinions and perspectives on certain political issues.
The model then compiled summary documents of the volunteers’ political views, providing further context to help bridge the divide.
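For readers curious about the mechanics, the process described above (drafting a group statement from individual opinions, then refining it against participants’ critiques) can be sketched in a few lines of Python. This is a minimal illustration, not DeepMind’s implementation: the llm() helper and the prompts are hypothetical stand-ins, and the published system reportedly also ranks candidate statements with a separate reward model trained on participant preferences.

```python
# Minimal sketch of an iterative AI-mediation loop, assuming a
# hypothetical llm() text-generation helper. Not DeepMind's code.

def llm(prompt: str) -> str:
    """Hypothetical stand-in for a real language-model API call."""
    return f"[model output for: {prompt.splitlines()[0]}]"

def mediate(question: str, opinions: list[str], rounds: int = 2) -> str:
    """Draft a group statement from individual opinions, then refine
    it over several rounds of participant critique."""
    bullets = "\n".join(f"- {o}" for o in opinions)
    # Initial draft: try to capture the group's common ground.
    statement = llm(
        f"Question: {question}\nParticipant opinions:\n{bullets}\n"
        "Write a statement expressing the group's shared view."
    )
    for _ in range(rounds):
        # Each participant critiques the current draft from their
        # own perspective.
        critiques = [
            llm(f"Opinion: {o}\nDraft: {statement}\n"
                "Critique this draft from the participant's view.")
            for o in opinions
        ]
        crit_bullets = "\n".join(f"- {c}" for c in critiques)
        # The mediator revises the draft to address the critiques
        # while preserving the shared position.
        statement = llm(
            f"Draft: {statement}\nCritiques:\n{crit_bullets}\n"
            "Revise the statement to address these critiques."
        )
    return statement

print(mediate("Should the minimum wage rise?",
              ["Yes, wages lag behind living costs.",
               "No, it would squeeze small employers."]))
```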
The results were promising: the study showed that volunteers rated the HMs’ statements higher than human-written statements on the same issues.
Furthermore, after dividing the volunteers into groups to discuss the topics further, researchers found that participants were less divided on these issues after reading statements from the HMs than after reading documents from human mediators.
“Group opinion statements generated by the Habermas Machine were consistently preferred by group members over those written by human mediators and received higher ratings from external judges for quality, clarity, informativeness and perceived fairness,” researchers concluded.
“AI-mediated deliberation also reduced divisions within groups, with participants’ reported positions converging toward a common position on the issue after deliberation; this result did not occur when discussants exchanged ideas directly and without intervention.”
The study found that “support for the majority position” on certain issues increased after AI-mediated deliberation. However, the HMs “demonstrably incorporated minority critiques into revised statements”.
What this suggests, researchers say, is that during AI-mediated deliberations, “the positions of groups of discussants tended to move in a similar direction on controversial issues.”
“These shifts were not attributable to biases in the AI, indicating that the deliberation process actually helped create shared perspectives on potentially polarizing social and political issues.”
There are already real-world examples of LLMs being used to resolve disputes, particularly in relationships: some users on Reddit, for example, have reported that their partners turn to ChatGPT during arguments.
One user reported that their partner used the chatbot “every time” they had a disagreement, and that it caused friction.
“Me (25) and my girlfriend (28) have been in a relationship for the past 8 months. We’ve had some big arguments and some smaller disagreements lately,” the user wrote. “Every time we have an argument, my girlfriend goes away to discuss the argument with ChatGPT, sometimes even in the same room.”
Interestingly, on these occasions the user found that their partner would “come back with a well-constructed argument”, listing everything the user had said or done during a previous argument.
However, it is precisely this aspect of the situation that has caused significant tension.
“I explained to her that I don’t like her doing that because it can feel like I’m being bombarded with thoughts and opinions from a robot,” they wrote. “It’s almost impossible for a human to remember every little detail and break it down piece by piece, but AI has no problem with that.”
“Every time I have expressed my dismay, I have been told that ‘ChatGPT says you are insecure’ or ‘ChatGPT says you don’t have the emotional bandwidth to understand what I’m saying’.”