This is why Google’s Gemini AI, given a good memory, can save lives
There is far too much negativity and fear-mongering around AI these days. It doesn’t matter what news story comes out. If it’s about Google Gemini getting a ‘memory’, or ChatGPT telling a user something that’s clearly wrong, it’s going to cause an uproar in parts of the online community.
The current focus on true artificial general intelligence (AGI) has created an almost hysterical media landscape built around Terminator fantasies and other doomsday scenarios.
That’s not surprising, though. People love a good Armageddon – heck, we’ve fantasized about it enough in the last 300,000 years. From Ragnarok to the Apocalypse to the End Times, and every major fantasy blockbuster littered with mass destruction in between, we’re obsessed. We just love bad news, and that’s the sad truth, for whatever genetic reason that may be.
The way AGI is painted by virtually every major vocal outlet these days largely stems from the idea that it is the very worst of humanity. It naturally sees itself as a superior force hindered by insignificant people. It is evolving to a point where it no longer needs its creators and inevitably heralds some form of an end-of-the-world event that wipes us all off the face of the Earth, either through nuclear destruction or a pandemic. Or worse, it leads to eternal damnation instead (courtesy of Roko’s Basilisk).
There is a dogmatic belief in this kind of perspective among some scientists, media pundits, philosophers and big tech CEOs, all of whom are shouting about it from the rooftops, signing open letters and more, begging those in the know to hold off on AI development.
However, they all overlook the bigger picture. Even setting aside the absolutely enormous technological hurdles involved in emulating anything even remotely close to the human mind (let alone a superintelligence), they fail to appreciate the power of knowledge and education.
If an AI has the Internet at its fingertips – the greatest library of human knowledge that has ever existed – and is able to understand and appreciate philosophy, art, and all of human thought to date, why does it have to be some evil entity bent on our downfall, rather than a balanced and thoughtful being? Why should it seek death instead of cherishing life? It’s a bizarre phenomenon, similar to being afraid of the dark just because we can’t see into it. We judge and condemn something that doesn’t even exist. It’s a mind-boggling bit of jumping to conclusions.
Google’s Gemini finally gets a memory
Earlier this year, Google introduced a much larger memory capacity for its AI assistant Gemini. It can now hold and reference details you give it from previous conversations and more. Our news writer Eric Schwartz has written a fantastic piece about that, which you can read here, but in short, this is one of the key components to moving Gemini further away from a narrow definition of intelligence and closer to the AGI facsimile we really need. It won’t be conscious, but through patterns and memory alone it can very easily mimic an AGI interacting with a human.
Deeper memory in LLMs (Large Language Models) is critical to their advancement – ChatGPT had its own equivalent breakthrough earlier in its development cycle. Comparatively, however, even that is limited in its overall scope. Talk to ChatGPT long enough and the comments you made earlier in the conversation will be forgotten; it loses context. That breaks the fourth wall somewhat when you interact with it, torpedoing the famous Turing Test.
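To make that context problem concrete, here is a minimal sketch of one common pattern for giving a chat model a longer “memory”: keep the newest turns verbatim and fold older ones into a running summary that gets re-injected with every prompt. To be clear, neither Google nor OpenAI has published how Gemini or ChatGPT actually implements memory, and `call_llm` below is a hypothetical placeholder, not a real API.

```python
# Minimal sketch of a rolling "memory" layer for a chat assistant.
# Assumptions: `call_llm` stands in for whatever chat-completion API
# you actually use; everything else is plain Python.

from collections import deque

MAX_RECENT_TURNS = 6  # keep only the newest turns verbatim


def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; swap in a real API client here."""
    raise NotImplementedError


class ConversationMemory:
    def __init__(self) -> None:
        self.summary = ""                              # long-term, compressed memory
        self.recent = deque(maxlen=MAX_RECENT_TURNS)   # short-term, verbatim turns

    def add_turn(self, user_msg: str, assistant_msg: str) -> None:
        self.recent.append((user_msg, assistant_msg))
        # Once the short-term buffer is full, refresh the running summary with
        # the current window, so details survive after turns scroll out of it.
        if len(self.recent) == MAX_RECENT_TURNS:
            transcript = "\n".join(f"User: {u}\nAssistant: {a}" for u, a in self.recent)
            self.summary = call_llm(
                "Update this running summary of a conversation.\n"
                f"Current summary: {self.summary}\n"
                f"New turns:\n{transcript}\n"
                "Return the updated summary."
            )

    def build_prompt(self, new_user_msg: str) -> str:
        # The summary plus the recent turns are re-injected on every call,
        # which is what lets earlier details resurface much later.
        recent = "\n".join(f"User: {u}\nAssistant: {a}" for u, a in self.recent)
        return (
            f"Known background about this user:\n{self.summary}\n\n"
            f"Recent conversation:\n{recent}\n\n"
            f"User: {new_user_msg}\nAssistant:"
        )
```

The point of the sketch is simply that “memory” here is engineering, not consciousness: without some layer like this, anything outside the model’s context window is gone, which is exactly the forgetting described above.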
According to Gemini itself, even today its memory capabilities are still evolving (and not fully disclosed to the public). Still, it believes they are vastly superior to ChatGPT’s, which should alleviate some of those fourth-wall-breaking moments. We may well be in the middle of an LLM memory race right now, and that’s not a bad thing at all.
Why is this so positive? Well, I know it’s a cliché to some – we use the term quite casually, perhaps in a way that devalues it – but we are in the middle of a loneliness epidemic. That may sound ridiculous, but studies suggest that, on average, social isolation and loneliness can increase all-cause mortality by somewhere between 1.08 and 1.48x (Andrew Steptoe and co., 2013). That’s astonishingly high, and a number of studies have now confirmed that loneliness and social isolation increase the risk of cardiovascular disease, stroke, depression, dementia, alcoholism and anxiety, and can even increase the risk of a variety of cancers taking hold.
Modern society has also contributed to this. The family unit, in which generations lived at least somewhat close together, is slowly disappearing – especially in rural areas. As local jobs dry up and the financial resources to live a comfortable life become out of reach, many move away from the safe neighborhood of their youth in search of a better life elsewhere. Combine that with divorce, separation and widowhood, and as a result you are inevitably left with an increase in loneliness and social isolation, especially among the elderly.
Of course there are co-factors at play here, and I’m drawing some broad conclusions, but I have no doubt that loneliness is a terrible thing to deal with. AI has the ability to alleviate some of that stress. It can provide help and comfort to people who feel socially isolated or vulnerable. That’s the thing: loneliness and being cut off from society have a snowball effect. The longer you are like this, the more social anxiety you develop, the less likely you are to go out in public or meet people – and so the cycle worsens.
AI chatbots and LLMs are designed to connect and talk to you. They can alleviate these problems and give those suffering from loneliness the opportunity to practice interacting with people without fear of rejection. Having a memory capable of retaining conversation details is the key to making that happen. Take it a step further, and AI becomes a bona fide companion.
With both Google and OpenAI actively boosting memory capacity for Gemini and ChatGPT, even in their current forms these AIs have the opportunity to better work around Turing Test problems and prevent those fourth-wall-breaking moments from happening. Going back to Google for a moment: if Gemini’s memory really is better than ChatGPT’s limited capacity right now, and it behaves more like human memory, then I’d say we’re probably on the verge of a genuine imitation of AGI, at least on the surface.
If Gemini is ever fully integrated into a smart home speaker, and Google has the cloud processing power to support it all (which I would suggest it is pushing for, given recent developments in nuclear power acquisition), it could become a revolutionary force for good when it comes to reducing social isolation and loneliness, especially among the disadvantaged.
But that’s the thing: it’s going to take some serious computing power to do that. Running an LLM and keeping track of all that information and data is no small task. Ironically, it takes a lot more computing power and storage space to run an LLM than, say, to create an AI image or video. Doing this for millions or potentially billions of people will require processing power and hardware that we currently don’t have.
Terrifying ANIs
The reality is that it’s not the AGIs that scare me. It’s the artificial narrow intelligences, or ANIs, that are already here that are far more chilling. These are programs that are not as advanced as a potential AGI; they have no awareness of anything beyond what they are programmed to do. Think of an Elden Ring boss. Its only goal is to beat the player. It has parameters and limitations, but as long as those are met, its one job is to crush the player – nothing else, and it won’t stop until that’s done.
If you remove those restrictions, the code remains and the purpose stays the same. In Ukraine, as Russian forces began using jamming equipment to prevent drone pilots from successfully flying them to their targets, Ukraine switched to using ANI to take out military targets instead, dramatically increasing the number of hits. In the US, of course, there is the legendary news story about the USAF AI simulation (real or theoretical) in which a drone killed its own operator to achieve its goal. You get the picture.
It’s these AI applications that are the scariest, and they’re here now. They have no moral conscience or decision-making process within them. Strap a gun to one and tell it to destroy a target, and it will do just that. To be fair, humans are just as capable, but there are checks and balances to stop that, and (hopefully) a moral compass – yet we still lack concrete legislation, local or global, to address these AI problems, especially on the battlefield.
Ultimately, this all comes down to preventing bad actors from taking advantage of emerging technology. A while ago I wrote a piece about the death of the internet and how we need a non-profit organization that can respond quickly and legislate for countries against emerging technological threats that might arise in the future. AI needs this just as much. There are organizations pushing for this, the OECD being one of them – but modern democracies, and indeed any form of government, are simply too slow to respond to these rapidly advancing threats. The potential of AGI is unprecedented, but we’re not there yet – and unfortunately, ANI is.