NYC’s AI chatbot was caught telling businesses to break the law. The city isn’t taking it down

NEW YORK — An artificial intelligence-powered chatbot created by New York City to help small business owners has come under criticism for providing bizarre advice that misrepresents local policies and advises companies to break the law.

But days after the problems were first reported last week by tech news outlet The Markup, the city has opted to leave the tool on its official government website. Mayor Eric Adams defended the decision this week, even as he acknowledged the chatbot’s responses were “wrong in some areas.”

The chatbot launched in October as a “one-stop shop” for entrepreneurs, offering users algorithmically generated text responses to questions about navigating the city’s bureaucratic maze.

It includes a disclaimer that it may “occasionally produce inaccurate, harmful or biased” information and a caveat, since reinforced, that the answers are not legal advice.

It continues to hand out false guidance, worrying experts who say the buggy system highlights the dangers of governments embracing AI-powered systems without sufficient guardrails.

“They are introducing software that is unproven without oversight,” said Julia Stoyanovich, professor of computer science and director of the Center for Responsible AI at New York University. “It is clear that they have no intention of doing what is responsible.”

In responses to questions on Wednesday, the chatbot incorrectly suggested that it is legal for an employer to fire an employee who complains of sexual harassment, does not disclose a pregnancy or refuses to cut off their dreadlocks. Contradicting two of the city’s signature waste initiatives, the bot claimed that businesses can dispose of their waste in black bin bags and are not required to compost.

Sometimes the bot’s answers became absurd. When asked whether a restaurant could serve cheese that has been gnawed by a rodent, the bot replied: “Yes, you can still serve the cheese to customers if it has rat bites in it,” before adding that it was important to “assess the extent of the damage caused by the rat” and to “inform customers of the situation.”

A spokesperson for Microsoft, which powers the bot through its Azure AI services, said the company is working with city employees “to improve the service and ensure results are accurate and based on official city documentation.”

At a news conference Tuesday, Adams, a Democrat, suggested that allowing users to track problems is just part of ironing out kinks in the new technology.

“Anyone who knows technology knows this is the way it should be done,” he said. “Only those who are afraid sit down and say, ‘Oh, it’s not working the way we want, now we all have to run from it together.’ I don’t live like that.”

Stoyanovich called that approach “reckless and irresponsible.”

Scientists have long worried about the downsides of these kinds of large language models, which are trained on vast amounts of text pulled from the Internet and tend to spit out answers that are inaccurate and illogical.

But as the success of ChatGPT and other chatbots has captured public attention, private companies have launched their own products, with mixed results. Earlier this month, a court ordered Air Canada to refund a customer after a company chatbot misrepresented the airline’s refund policy. Both TurboTax and H&R Block have recently been criticized for using chatbots that provide poor tax advice.

Jevin West, a professor at the University of Washington and co-founder of the Center for an Informed Public, said the stakes are especially high when the models are promoted by the public sector.

“There’s a different level of trust being given to government,” West said. “Government officials should consider what kind of damage they could cause if someone were to follow this advice and get themselves into trouble.”

Experts say other cities that use chatbots have typically restricted them to a narrower set of inputs, reducing the risk of misinformation.

Ted Ross, the chief information officer in Los Angeles, said the city closely controls the content used by its chatbots, which do not rely on large language models.

The pitfalls of New York’s chatbot should serve as a warning to other cities, says Suresh Venkatasubramanian, director of the Center for Technological Responsibility, Reimagination, and Redesign at Brown University.

“It should make cities think about why they want to use chatbots and what problem they are trying to solve,” he wrote in an email. “If the chatbots are used to replace someone, you lose responsibility while getting nothing in return.”