“The hand-mill gives you society with the feudal lord; the steam-mill, society with the industrial capitalist,” Karl Marx once wrote. And he was right. We have seen time and again throughout history how technological inventions determine the dominant mode of production and, with it, the type of political authority present in a society.
So what will artificial intelligence bring us? Who will benefit from this new technology, which is not only becoming a dominant productive force in our societies (just as the hand-mill and the steam-mill once were), but which, as we keep reading in the news, also appears to be “rapidly escaping” our control?
Can AI take on a life of its own, as so many seem to believe, and single-handedly determine the course of our history? Or will it end up as just another technological invention that serves a certain agenda and benefits a certain subset of people?
Recent examples of hyper-realistic AI-generated content, such as an “interview” with former Formula 1 world champion Michael Schumacher, who has been unable to speak to the press since a devastating skiing accident in 2013; “photos” of former President Donald Trump being arrested in New York; and seemingly authentic student essays “written” by OpenAI’s famed chatbot ChatGPT have raised serious concerns among intellectuals, politicians, and academics about the dangers this new technology may pose to our societies.
In March, such concerns prompted Apple co-founder Steve Wozniak, AI heavyweight Yoshua Bengio, and Tesla/Twitter CEO Elon Musk, among many others, to sign an open letter accusing AI labs of being “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control” and calling on AI developers to pause their work. More recently, Geoffrey Hinton – known as one of the three “godfathers of AI” – quit Google “to speak frankly about the dangers of AI” and said he regrets, at least in part, his contributions to the field.
We accept that AI – like all epoch-defining technologies – carries significant drawbacks and dangers, but unlike Wozniak, Bengio, Hinton and others, we do not believe it could determine the course of history on its own, without any human input or direction. We do not share such concerns because we know that, as with all our other technological devices and systems, our political, social and cultural agendas are built into AI technologies as well. As philosopher Donna Haraway explained, “Technology is not neutral. We’re inside of what we make, and it’s inside of us.”
Before explaining further why we are not afraid of a so-called AI takeover, we need to define and explain what AI – as we are dealing with it today – actually is. This is a challenging task, not only because of the complexity of the product, but also because of the mythologising of AI in the media.
What is being emphatically communicated to the public these days is that the sentient machine is (almost) here, that our everyday world will soon resemble the one depicted in films such as 2001: A Space Odyssey, Blade Runner and The Matrix.
This is a false narrative. While we are undoubtedly building increasingly capable computers and calculators, there is no indication that we have created – or are about to create – a digital mind that can actually “think”.
Noam Chomsky recently argued (alongside Ian Roberts and Jeffrey Watumull) in a New York Times article that “we know from the science of linguistics and the philosophy of knowledge that [machine learning programmes like ChatGPT] differ profoundly from how humans reason and use language”. Despite providing amazingly convincing answers to a wide variety of questions, ChatGPT is “a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question”. To paraphrase the German philosopher Martin Heidegger (and risk reviving the age-old battle between continental and analytic philosophers), we could say, “AI doesn’t think. It just calculates.”
Federico Faggin, the inventor of the first commercial microprocessor, the legendary Intel 4004, explained this clearly in his 2022 book Irriducibile (Irreducible): “There is a clear distinction between symbolic machine knowledge… and human semantic knowledge. The former is objective information that can be copied and shared; the latter is a subjective and personal experience that takes place in the intimacy of the conscious being.”
By interpreting the latest theories of quantum physics, Faggin seems to have reached a philosophical conclusion that fits curiously well within ancient Neoplatonism – a feat that may see him forever regarded as a heretic in scientific circles, despite his incredible achievements as an inventor.
But what does all this mean for our future? If our super-intelligent Centaur Chiron cannot truly “think” (and thus cannot emerge as an independent force that determines the course of human history), whom will it benefit, and to whom will it give political authority? In other words, what values will its decisions be based on?
Chomsky and his colleagues posed a similar question to ChatGPT.
“As an AI, I have no moral beliefs or the ability to make moral judgments, so I cannot be considered immoral or moral,” the chatbot told them. “My lack of moral convictions is simply a result of my nature as a machine learning model.”
Where have we heard this before? Isn’t it eerily similar to the supposedly ethically neutral posture of hardcore liberalism?
Liberalism strives to confine to the private sphere all the religious, civic and political values that proved so dangerous and destructive in the 16th and 17th centuries. It wants all aspects of society to be regulated by a certain – and in a way, mysterious – form of rationality: the market.
AI seems to promote the same brand of arcane rationality. The truth is that it is emerging as the next global “big business” innovation that will steal jobs from people – making workers, doctors, lawyers, journalists and many others obsolete. The moral values of the new bots are identical to those of the market. It is difficult to imagine all possible developments now, but a scary scenario is emerging.
David Krueger, assistant professor of machine learning at the University of Cambridge, recently noted in New Scientist: “Essentially every AI researcher (myself included) has received funding from big tech. At some point, society may stop believing reassurances from people with such strong conflicts of interest and, like me, conclude that their dismissal [of warnings about AI] betrays wishful thinking rather than good counterarguments.”
If society stands up to AI and its promoters, it could prove Marx wrong and prevent the leading technological development of the current era from determining who has political authority.
But for now, AI seems to be here to stay. And its political agenda is completely synchronized with that of free-market capitalism, whose main (unstated) goal is to tear apart any form of social solidarity and community.
The danger of AI is not that it is an impossible-to-master digital intelligence that could destroy our sense of self and truth through the “fake” images, essays, news and histories it generates. The danger is that this undeniably monumental invention seems to base all its decisions and actions on the same destructive and dangerous values that drive predatory capitalism.
The views expressed in this article are those of the authors and do not necessarily reflect the editorial view of Al Jazeera.