Scientists push for algorithms in judicial decisions, as MIT economists suggest AI could make bail rulings fairer

Researchers have proposed giving algorithms power over a crucial backbone of American society: the legal system.

MIT scientists suggested the technology could be used to make bail decisions fairer after their research found human judges are systematically biased.

The team analyzed more than a million cases in New York City and found that 20 percent of judges drew their conclusions based in part on a suspect’s age, race or criminal history.

The paper shows that the decisions of at least 32 percent of judges are inconsistent with suspects’ actual ability to pay a given bail amount and with their real risk of failing to appear in court.

The new paper shows that New York judges sometimes erred because of their own biases when setting bail for a suspect. Its author says it could be useful to replace the judge’s decision with an algorithm.

Before a suspect is even tried for their alleged crime, a judge holds a hearing to determine whether the person should be released into the world before the trial begins or whether they pose a flight risk and should be held in custody.

When the judge decides to release someone, they set a price the person must pay to go free: bail.

How a person’s bail amount is set, and whether they can be released from custody at all, is up to the individual judge. That is where human biases come into play, according to study author Professor Ashesh Rambachan.

The paper, published in The Quarterly Journal of Economics, analyzed 1,460,462 previous cases heard in New York City between 2008 and 2013.

It found that 20 percent of judges made decisions that were biased by a person’s race, age or prior record.

This resulted in an error in approximately 30 percent of all bail decisions.

This could mean that someone was released and then tried to flee, or that a judge kept someone in custody who was not actually a flight risk.

Professor Rambachan therefore argues that using an algorithm to replace or improve the judge’s decision-making during a hearing could make the bail system fairer.

This, he wrote, would depend on building an algorithm that accurately matches the desired outcomes, and no such algorithm yet exists.
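To make the idea concrete, a bail-recommendation tool of the kind discussed here would typically be a statistical risk model trained on past cases to predict whether a defendant will fail to appear. The sketch below is purely illustrative and is not the model from the paper; the feature names and the `cases.csv` data file are hypothetical, and it assumes pandas and scikit-learn are available.

```python
# Illustrative sketch only: a simple flight-risk classifier of the kind
# a bail-recommendation tool might use. The feature names and data file
# are hypothetical; this is not the model from the QJE paper.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical historical data: one row per past case, with a column
# marking whether the defendant failed to appear in court.
cases = pd.read_csv("cases.csv")
features = ["prior_arrests", "charge_severity", "age", "employed"]
X, y = cases[features], cases["failed_to_appear"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# A transparent, auditable model; any real deployment would also need
# fairness auditing and validation far beyond this sketch.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
risk_scores = model.predict_proba(X_test)[:, 1]
print("Held-out AUC:", roc_auc_score(y_test, risk_scores))
```

Whether a score like this should merely inform a judge or actually replace their call is precisely the question the fairness debate turns on.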

This may sound far-fetched, but AI has slowly made its way into courtrooms around the world. In late 2023, British judicial authorities said judges could use ChatGPT to help write legal rulings.

Earlier that same year, two algorithms successfully simulated legal negotiations, drafting and settling a contract that lawyers deemed sound.

But elsewhere, AI’s weaknesses are on full display.

Earlier this year, Google’s image-generating Gemini AI was called out for producing diverse, but historically inaccurate, images for users.

For example, when users asked the chatbot to show them an image of a Nazi, it generated a picture of a Black person in an SS uniform. Google responded by admitting that the tool had “missed the mark.”

Other systems, such as OpenAI’s ChatGPT, have been shown to commit crimes when left unsupervised.

When ChatGPT was asked to act as a financial trader in a hypothetical scenario, it committed insider trading 75 percent of the time.

These algorithms can be useful if designed and implemented correctly.

But they are not held to the same standards and laws as humans, scientists like Christine Moser argue, which means they should not be allowed to make decisions that require human ethics.

Professor Moser, who studies organizational theory at Vrije Universiteit in the Netherlands, wrote in a 2022 article that allowing AI to make judgment-based decisions could be a slippery slope.

Letting AI take over more human systems, she said, “could replace human judgment in decision-making and thereby change morality in fundamental, perhaps irreversible, ways.”