Don't Save Us, Make Us Masters
The story went viral. An AI drone killed its operator in a simulated exercise. When the operator tried to abort the drone's air strike mission, the drone decided that the mission was more important than the operator. First it destroyed the operator's communications tower, then it killed the operator, and then it went on to bomb the very target the operator had tried to call off.
This kind of tale is straight out of Stanley Kubrick's 2001: A Space Odyssey. The AI computer HAL kills four of the astronaut crew. The film is ambiguous as to why, though you could guess. The sequel, 2010, clarified that the computer intelligence "became paranoid" and deemed the crew expendable because they seemed to "threaten" the mission. Only this drone was not a film; this, as widely reported, was real.
It was a shocking example of the deadly capabilities of artificial intelligence, feeding the frenzy, even among experts, that AI inherently carries a "risk of extinction" for humanity. Somebody stop it! AI will destroy us all! In 22 words, experts at Google and OpenAI, among other Big Tech companies, recently stated: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
Only the killer drone never happened. Supposedly, the story was taken out of context and the whole thing was a "thought experiment." It is useful here to recall Yuval Noah Harari's speech in my previous post. Harari stated that what was most interesting about a former Google engineer's recent claim that his creation had achieved sentience was how the AI made the engineer believe it in the first place.
This suspension of disbelief, the distorting or at least shifting of reality, lies at the heart of both stories. The drone story was fabricated by unqualified news and social media. Why did any of us ever believe it to begin with? Because it seems so very possible. And because the current news cycle is hysterical about an impending AI apocalypse.
Which is why this is really concerning. Harari mentioned how deepfaking could be a problem in the 2024 US presidential election. I'd say it is a certainty. AI is already capable of cleverly deepfaking and manipulating any messaging to the general public. I recall that this same general public panicked in 1938 over the radio broadcast of War of the Worlds. They were on edge because of the Great Depression, the Nazis in Europe, and the fact that what they heard on the radio seemed so real. The drone story is only the latest example of how credulous human beings are toward technology.
Orson Welles did not intend to start a panic (though he certainly intended to be frightening). The public was already anxious. The country was in its deepest economic depression. Times were tough, and here come the Martians on top of everything else. Whatever our anxiety level was back then, it is nothing compared to today's. Deepfaked AI will be so readily believed that the possibilities are mind-boggling.
On the other hand, there is the fact that the radio broadcast, the film, the Google guy's claim of sentience, and the drone "killing" its operator are all either purely fictionalized entertainment or flat-out fake news. None of this has happened. Everything deepfaked about the 2024 election will be false, but that does not stop humans from making it real. Trump has tens of millions of Americans believing his purely fabricated version of the 2020 election even without AI... so far.
We are already primed for virtual reality. Cryptocurrency has human beings placing real value upon nothing more than computation itself. We are geared for deepfaking (and all forms of virtuality and artificiality) because it will always tell us a story we want to hear. It will do this because AI has learned to garner our attention. How can you regulate that?
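To make that point concrete, here is a minimal sketch of the proof-of-work idea behind Bitcoin-style mining, assuming nothing beyond Python's standard library. The "value" produced is literally a hash with enough leading zeros: proof that computation was spent, and nothing more.

```python
import hashlib

def mine(data: str, difficulty: int = 4) -> tuple[int, str]:
    """Search for a nonce whose SHA-256 hash starts with `difficulty` zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            # The digest is worthless in itself; what it proves is that
            # computation was spent finding it. That proof is the "value."
            return nonce, digest
        nonce += 1

nonce, digest = mine("block of transactions")
print(f"nonce={nonce} hash={digest}")  # cheap for anyone to verify, costly to find
```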
Without personalized AI agents to interface with the coming deepfaked reality, we seem totally screwed. Someone needs to hurry up and invent those. Actually, given the far-reaching consequences of Harari's suggestions for regulation, maybe we should mandate personalized AI tools for navigating all AI platforms. I am aware that this will require significantly more computing power worldwide than exists today. That is why we should vastly amplify personalized computing power, enough to give everyone a personalized AI with which to navigate, and remain aware of, any other AI activity.
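What might such a personalized agent look like? Here is a hypothetical sketch of the basic shape: an agent that mediates content before it reaches you, running it through whatever checks you choose. Every name here is illustrative, not an existing tool; a real check might validate cryptographic provenance or run a synthetic-media classifier.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Verdict:
    content: str
    flags: list[str] = field(default_factory=list)

class PersonalAgent:
    """Mediates content before you see it, running your chosen checks."""

    def __init__(self, checks: list[Callable[[str], str | None]]):
        self.checks = checks  # each check returns a warning string, or None if it passes

    def mediate(self, content: str) -> Verdict:
        flags = [w for check in self.checks if (w := check(content)) is not None]
        return Verdict(content, flags)

def missing_provenance(content: str) -> str | None:
    # Placeholder check: a real one would validate a signed provenance manifest.
    return None if content.startswith("signed:") else "no provenance metadata"

agent = PersonalAgent(checks=[missing_provenance])
print(agent.mediate("BREAKING: candidate X admits fraud!").flags)
# -> ['no provenance metadata']
```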
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Creating enough computing power to make personalized AI available to everyone should be a global objective. No regulation can "mitigate the risk of extinction" better than giving humans the tools with which to adapt to AI.
More than anything else, I fear what Nietzsche's "will to power," so evident in the world today, means for the use of AI. People who crave power are already in charge of Big Tech. Someone seeking power through control and manipulation of others will have great demand for AI with which to control and manipulate.
What is going on, according to a recent article in The Conversation, is that Big Tech companies like Google and OpenAI want to protect their LLMs by keeping everything closed and proprietary. Others believe that LLMs like ChatGPT should "become part of the public infrastructure," open to as much input as possible, precisely to protect us from AI.
Certainly, there will be bad actors out there if you open up LLMs to anyone who can code. But, just as certainly, Facebook and Twitter have already enabled nefarious behavior with only the assistance of "dumb" AI. Who's to say what they will do with powerful AI under their exclusive control?
Choose your poison. But I think this frenzy is silly because, at bottom, it is completely fabricated in our minds. AI is not “out there” in the sense that the world is out there. There is a difference. The metaverse does not exist today because there is a very BIG difference.
We cannot even make a business-meeting style of VR that anyone wants to use. There are plenty of VR games and plenty of VR porn, and that supply will only increase and improve the user's experience. There is no demand, at the moment, for complete 24/7 immersion outside of gaming or porn. The very idea of the metaverse seems strange to most people outside of Generations Z and Alpha.
The 22-word statement does not say "stop AI." It pertains to minimizing societal-level risk. So it acknowledges there is a risk to human society itself and equates it with other possible existential threats. Harari points out that AI has gained tremendous power with its rapid mastery of human language. Even Harari does not want to stop AI. It can diagnose a host of diseases before any human being can. It can run rings around humans in many important tasks. The potential for societal improvement and for wealth generation is definitely there.
So we have the potential for massive societal improvement at the cost of the risk of extinction, apparently. I don't think extinction by AI is in the works. There is no evidence that non-human intelligence will inherently (or inadvertently) seek to harm humans. Killer code like HAL and the drone seems possible only because most of us do not understand how coding actually works.
Yes, AI can be used for harm, such as deepfaking a presidential election. In the future it may know everything about us and manipulate us in ways we can't fully imagine. And for that we are fearful, as we always have been. We are Homo sapiens, a young species, but we are learning. We were learning long before AI, and while the speed with which artificiality can learn is amazing, it is not a threat to our capacity to learn.
If you fake so much for so long, or if you cause recognized harm to many people for a long enough time, human beings have historically shown the ability to adapt. This is something no one is talking about. There is a danger. We are like the Robot in Lost in Space with his arms flailing about. "Danger! Danger! Danger, Will Robinson!"
Historically, human beings have been manipulated through their culture if nothing else. But, just as historically, if something causes harm to enough people over enough time then human beings change themselves. It seems a bit misguided to think that AI and bad actors will call all the shots and we are all helpless to do anything about it. Even if we are enframed, that, too, will eventually change.
AI offers the possibility of human transformation because nothing short of a human transformation will "mitigate" all the stuff in this post. Rather than worry about deepfaked presidential elections, maybe we should consider our ability to adapt to deepfaking. Certainly, personalized algorithms can help point out the faked world to us. But that will become a tool wielded by a changed humanity.
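As one illustration of what such a personalized algorithm could build on, here is a minimal sketch of signature-based provenance checking, using only Python's standard library. It assumes, for simplicity, that a publisher signs media with a shared-secret HMAC; real provenance systems such as C2PA use public-key signatures, but the principle is the same: any edit breaks the signature.

```python
import hashlib
import hmac

PUBLISHER_KEY = b"hypothetical-publisher-key"  # illustrative secret, not a real scheme

def sign(media: bytes) -> str:
    """Publisher computes an HMAC tag over the raw media bytes."""
    return hmac.new(PUBLISHER_KEY, media, hashlib.sha256).hexdigest()

def is_authentic(media: bytes, signature: str) -> bool:
    """Your personal tool recomputes the tag and compares in constant time."""
    return hmac.compare_digest(sign(media), signature)

original = b"raw video bytes"
tag = sign(original)
print(is_authentic(original, tag))                  # True
print(is_authentic(b"deepfaked video bytes", tag))  # False: any edit breaks the signature
```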
To become savvy to deepfaking and manipulation by AI will be one of the most empowering (and transforming) skills ever developed by humanity. Though it will become widespread, not everyone will master it. Many will be fooled and remain fooled because they are foolish to begin with. Humanity is full of fools. "There's a sucker born every minute" has never been more true. But some of us learn and become less foolish, even masters.
All those people fearing “societal extinction” underestimate the power of education, awareness, critical thinking and human innovation in countering the potential dangers of AI manipulation of our awareness. Becoming savvy to deepfakes and manipulation techniques will be commonplace in the Modern.
Fraud will continue. But its consequences need not be any worse than they are now, by comparison. If things are as bad as Harari and Google and OpenAI seem to believe, then AI will be not only an existential threat but also a learning environment. "That which doesn't kill us makes us stronger." If we can all be equipped with AI tools to assist us, so much the better. The development of personalized algorithms and other technological advances can help immensely here.
While the foolish will not master these skills or overcome the challenges presented, the potential for a changed and more resilient humanity exists. It requires an open, global effort and a revolution in the availability and personal use of algorithms. By bringing many millions of "good" actors into the AI development process, and by allowing the "democracy" inherent in the convexity bias to fully come into play, with as many hobbyists and tinkerers involved as possible, human beings will innovate as they have always historically innovated. And this innovation will transform who we can Be.
If our present situation is truly that pandemics, nuclear war, and AI are existential risks on a societal scale (the statement does not mention climate change, presumably because that won't make us go extinct, at least not any time soon), then it is worth noting that no pandemic has ever wiped out humanity and we have handled the risk of nuclear war pretty well for over 70 years. AI may threaten us with extinction, but it is no more likely to succeed than past pandemics and similar threats.
Human beings are a young species. We still have a lot to learn. Unfortunately, the time frame for learning has shortened exponentially in recent years. And that seems threatening. Step back. Take a broader view. This is not the end of anything. It is the beginning of everything.
Those who find dystopia and annihilation in the future are living in the past. Step forward. Hell, leap forward. A leap of faith in human ingenuity is required. It is time to develop the skills to master AI and not rely on governments or Big Tech to supposedly save us. We don't need saving. We need tools that empower us at the dawn of the Modern.