ALEX BRUMMER: Is Big Tech putting profits ahead of the safety of humankind?

Genius or destroyer?: OpenAI founder Sam Altman

Amid the carnage in London’s stock markets, two companies stand out for bucking the trend. Why? The short answer is artificial intelligence (AI).

Online data pioneer and publisher Relx has quietly climbed into the FTSE top ten and is now valued at a staggering £54 billion, following a 33 per cent rise in its shares this year.

And Newcastle-based small business software company Sage is up 52 per cent to be worth £12 billion as it climbs the FTSE 100.

Both companies are early adopters of artificial intelligence, a technology first nurtured in Britain at DeepMind and swallowed up by Google owner Alphabet, where it is a growth engine.

Generative AI, developed at OpenAI under its founding genius, the now famous 38-year-old Sam Altman, is technology that can create high-quality images, text and code to match human efforts.

OpenAI employees had sent a letter to their company’s board warning of the discovery of a potentially dangerous, powerful new algorithm. This contributed to Altman’s rancorous departure, which has now been reversed.

Generative AI, including ChatGPT created by OpenAI, is far from infallible. This is why UK-based Relx tested it to death before applying it to its legal, scientific and medical data repositories.

In the United States, the most litigious society in the world, the small accidents of everyday life can add up to big bucks. Last May, airline passenger Roberto Mata called in his lawyers after he was hit by a serving cart aboard an Avianca flight from Colombia to New York.

Avianca asked a judge in Manhattan to dismiss the lawsuit. Mata’s legal advisers then cited half a dozen cases – involving airlines such as Delta and China Southern – in which damages had allegedly been paid. But neither Avianca’s counsel nor the judge could verify any of them.

When challenged, the plaintiff’s attorney, Steven A. Schwartz, admitted that he had been in a hurry and, instead of using the legal research bible LexisNexis, owned by British data giant Relx, had turned to ChatGPT, developed by Altman and his colleagues. The generative AI app hallucinated and spat out made-up cases with no legal validity. The case was thrown out. Reliability is still a long way off.

The row that saw Altman ousted from his job, swiftly hired by major investor Microsoft and then reinstated by OpenAI after a rebellion by almost all of its employees, has been portrayed as a failure of governance. But it is more complex than that.

Much of the dispute, which sparked an uprising by 743 of OpenAI’s 750 techies and programmers, centred on whether commercialisation had become the driving force at what was founded as a non-profit. The reality is that the pass had already been sold when Microsoft, the Seattle outsider among Silicon Valley’s elite, injected a reported £8 billion into the company. Small change for the £2.2 trillion Microsoft, but a down payment on the next big thing.

AI uses advanced microchips, developed by Nvidia and others, to mine data at astonishing speeds, process information and convert it into understandable text. It is the groundbreaking technology of our time.

A battle has erupted between Silicon Valley giants and older incumbent Microsoft for AI hegemony, with billions, if not trillions, of dollars at stake.

Just as when search engine Google first crashed onto the commercial scene twenty years ago, this raises profound questions about intellectual property and copyright. The legal status of generative AI creations is being fought out in courtrooms around the world. Top music artists and production companies are up in arms over song rights after generative AI mined the internet to recreate CDs, vinyl and videos that are indistinguishable from the originals.

AI’s formidable brainpower and its ability to sift through private and security-sensitive information – such as medical records and nuclear designs – make it a potent security threat. That is before even considering the possibility that it could outsmart humanity and take control of us, as some of Altman’s colleagues feared.

Governments around the world are struggling to get to grips with a technology that has the potential power to control our lives. Americans, captivated by the commercial success of big tech and its ability to generate political donations and win election campaigns, have tended to trust people like Facebook (now Meta) founder Mark Zuckerberg and Microsoft chief executive Satya Nadella to monitor the industry themselves.

These are elephantine companies that hate regulation. They are monopolists who use their market power to swallow up any technology that threatens their dominance. And they don’t like paying taxes.

The US government’s obsequiousness to big tech companies’ efforts to control AI has outraged a community that believes the internet is for everyone, not just Silicon Valley.

Emad Mostaque, CEO of British AI unicorn Stability AI, told The Mail on Sunday: ‘There needs to be more checks and balances, especially given the opaqueness of some of these companies.

‘Open technology is transparent and more robust, therefore much safer. We saw what happened with social media and the lack of accountability. Humanity should not place its trust in an unelected group that will lead the development of AI technology without proper oversight.’

The EU aims to introduce labyrinthine rules that require providers and users to conduct comprehensive risk assessments and make all data available.

Rishi Sunak’s attempt to establish global principles for AI safety oversight may already have been overtaken by the speed of events.

The Altman affair has injected some old-fashioned human drama and power politics into the goings-on in a secretive corner of the trade. The idea of a non-profit model and open AI – good for all humanity – is away with the fairies.

At its core is a struggle for more money and domination. It will be a nightmare to stop arrogant tech giants from taking control.
