Police officers are starting to use AI chatbots to write crime reports. Will they hold up in court?

OKLAHOMA CITY — A body camera captured every word and bark as Officer Matt Gilmore and his K-9, Gunner, searched for a group of suspects for nearly an hour.

Normally, the Oklahoma City police officer would grab his laptop and spend another 30 to 45 minutes writing up a report on the search. But this time, he had artificial intelligence write the first draft.

Based on all the sounds and radio messages picked up by the microphone on Gilmore’s bodycam, the AI tool produced a report in eight seconds.

“It was a better report than I could have ever written, and it was 100 percent accurate. It flowed better,” Gilmore said. It even documented a fact he didn’t remember hearing: another officer’s mention of the color of the car the suspects ran from.

The Oklahoma City Police Department is among a handful of departments experimenting with AI chatbots to produce early drafts of incident reports. Officers who have tried it rave about the time-saving technology, while some prosecutors, police watchdogs and legal scholars worry about how it could change a foundational document in the criminal justice system that plays a role in who gets prosecuted or jailed.

Built on the same technology as ChatGPT and sold by Axon, best known for developing the Taser and as the dominant US supplier of bodycams, the tool could become what Gilmore describes as a new “game changer” for policing.

“They become police officers because they want to do police work, and spending half their day entering data is just a boring part of the job that they hate,” said Rick Smith, founder and CEO of Axon, who described the new AI product — called Draft One — as having the “most positive response” of any product the company has introduced.

“There are certainly concerns now,” Smith added. In particular, he said, district attorneys prosecuting a criminal case want to make sure that police officers — and not just an AI chatbot — are responsible for writing their reports, since they may have to testify in court about what they saw.

“They never want to get an officer on the stand who says, ‘The AI wrote that, not me,’” Smith said.

AI technology is not new to police departments, which have adopted algorithmic tools to read license plates, recognize faces of suspects, detect gunshots and predict where crimes might occur. Many of these applications have come with privacy and civil rights concerns and attempts by legislators to put safeguards in place. But the introduction of AI-generated police reports is so new that there are few, if any, safeguards for their use.

Concerns about society’s racial biases and prejudices being built into AI technology are just part of what Oklahoma City community activist Aurelius Francisco finds “deeply disturbing” about the new tool, which he learned about from The Associated Press.

“The fact that the technology is being used by the same company that supplies the department with Tasers is alarming enough,” said Francisco, co-founder of the Foundation for Liberating Minds in Oklahoma City.

He said automating those reports “will ease the ability of police to harass, surveil, and use force on community members. While it makes the officer’s job easier, it makes the lives of Black and brown people harder.”

Before testing the tool in Oklahoma City, police officers showed it to local prosecutors, who advised caution about using it in high-stakes criminal cases. For now, it’s being used only for minor incident reports that don’t result in an arrest.

“So no arrests, no crimes, no violent crimes,” said Oklahoma City Police Capt. Jason Bussert, who manages information technology for the 1,170-officer department.

That’s not the case in another city, Lafayette, Indiana, where Police Chief Scott Galloway told the AP that all of his officers can use Draft One on any type of case and that it has been “incredibly popular” since the pilot began earlier this year.

Or in Fort Collins, Colorado, where Police Officer Robert Younger said officers are free to use it on any type of report, though they’ve found it doesn’t work well when patrolling the city’s bar district because of an “overwhelming amount of noise.”

In addition to using AI to analyze and summarize the audio recording, Axon experimented with computer vision to summarize what was “seen” in the video footage, but quickly realized the technology wasn’t ready yet.

“Given all the sensitivities around policing, around race and other identities of people involved, that’s an area where I think we really need to do some work before we implement it,” said Smith, the Axon CEO, who described some of the responses tested as not “overtly racist” but otherwise insensitive.

Those experiments led Axon to focus entirely on audio in the product, which was unveiled in April at the company’s annual conference for law enforcement officials.

The technology is based on the same generative AI model that powers ChatGPT, created by San Francisco-based OpenAI. OpenAI is a close business partner of Microsoft, Axon’s cloud computing provider.

“We’re using the same underlying technology as ChatGPT, but we have access to more buttons and dials than a real ChatGPT user would have,” said Noah Spitzer-Williams, who manages Axon’s AI products. By turning down the “creativity dial,” the model stays true to the facts, so it “doesn’t embellish or hallucinate in the same way that it would if you were using ChatGPT on its own,” he said.
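In API terms, the “creativity dial” Spitzer-Williams describes most likely corresponds to a sampling parameter such as temperature. The sketch below is purely illustrative, assuming an OpenAI-style chat-completions interface; the model name, prompt and transcript variable are placeholders, not Axon’s actual configuration.

```python
# Illustrative sketch only: drafting a report from a bodycam transcript with
# the "creativity dial" (sampling temperature) turned down. This is NOT
# Axon's actual pipeline; model, prompt and transcript are placeholders.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

transcript_text = "Dispatch, 14:02: Units respond to ..."  # placeholder transcript

response = client.chat.completions.create(
    model="gpt-4o",    # placeholder model name
    temperature=0.0,   # lower temperature = more deterministic, less "embellishment"
    messages=[
        {
            "role": "system",
            "content": (
                "Draft a factual incident report from the transcript below. "
                "Use only facts explicitly stated; do not infer or invent details."
            ),
        },
        {"role": "user", "content": transcript_text},
    ],
)

print(response.choices[0].message.content)
```

Lowering the temperature makes the model’s word choices more deterministic, which reduces, but does not eliminate, the risk of hallucination that critics raise later in this story.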

Axon declined to say how many police departments are using the technology. It’s not the only vendor, with startups like Policereports.ai and Truleo touting similar products. But given Axon’s close relationship with police departments that buy its Tasers and body cameras, experts and law enforcement officials expect AI-generated reports to become ubiquitous in the months and years ahead.

Before that happens, legal scholar Andrew Ferguson would like to see more public discussion of the benefits and potential drawbacks. For one, the large language models behind AI chatbots are prone to making up false information, a problem known as hallucination, which could insert convincing and hard-to-detect falsehoods into a police report.

“I worry that automation and the ease of technology will make police officers less careful with their writing,” said Ferguson, a law professor at American University who is writing what is expected to be the first law journal article on the emerging technology.

Ferguson said a police report is important in determining whether an officer’s suspicion “justifies the loss of a person’s liberty.” Sometimes it’s the only testimony a judge sees, especially in minor offenses.

Human-generated police reports also have their flaws, Ferguson said, but which kind is more reliable remains an open question.

For some officers who have tried it, the tool is already changing how they respond to a reported crime: they narrate what is happening so the camera better captures what they want to put in writing.

As technology continues to develop, Bussert expects officers to become more verbal in describing what they see.

After Bussert loaded video of a traffic stop into the system and pressed a button, the program produced a narrative-style report in conversational language, complete with dates and times, just as an officer would have typed it from his notes, all based on the audio from the bodycam.

“It literally took a few seconds,” Gilmore said, “and it was to the point where I was like, ‘I don’t need to change anything.’”

At the end of the report, the officer must check a box indicating that it was generated with the use of AI.

—————

O’Brien reported from Providence, Rhode Island.

—————

The Associated Press and OpenAI have a license and technology agreement which gives OpenAI access to part of AP’s text archives.
