Parsing Out Dystopia: Yuval Noah Harari and the End of Human History
Longtime readers know I am a big fan of Yuval Noah Harari's work (see here, here, here and here). In recent years Harari has advanced the idea that we are living in a time when intelligence might be “uncoupling” from consciousness. In other words, consciousness and intelligence have historically been biologically based. Intelligence emerges out of consciousness.
No longer. We are witnessing the first years of intelligence emerging outside of consciousness. Historically, a precursor of this could be seen in the vast bureaucratic systems that control civilized populations. These systems seem to evolve of their own accord, with human beings merely the resource by which they evolve.
Essentially, this is Martin Heidegger's concept of enframing. These systems (the welfare state, the security state, the industrial state, the military state, the regulated state, the stock market, consumer goods distribution) all use human beings somewhat independently of human intent. To be a complex contemporary system is to have a certain non-human momentum that might not usually be considered “intelligence” but is nevertheless “directed.”
But, more to Harari's point here, artificial intelligence is the first non-human intelligence that can create new things, previously unfathomable things, in a superhuman way. This is distinct from the “intelligence” of systems because, while those systems shape human life, they do not imitate humanity. Harari points out that artificial intelligence will not only imitate humanity but will quickly take things to the next level through the shockingly unexpected power of mastering human language.
Harari recently (April 2023) shared his thoughts before an audience attending the Frontiers Forum. I have watched his half-hour speech on YouTube several times recently. He has moved on from the fact that intelligence is uncoupling from consciousness; today, apparently, civilization hangs in the balance. Here are some highlights from his talk, modestly entitled “AI and the Future of Humanity,” in the form of excerpts and a synopsis...
“For four billion years the ecological system of planet Earth contained only organic lifeforms and now, or soon, we might see the emergence of inorganic life forms or at the very least the emergence of inorganic agents.” Harari begins the talk by asserting that science fiction has usually assumed that AI would obtain sentience and a robotic efficiency surpassing humans, but that is not what has actually happened. He is unconsciously proclaiming that we all live in a science-fiction world stranger than we previously imagined. The thrust of his message is dystopian. The speech might have accurately been entitled: “AI and the End of Humanity.”
“To threaten humanity AI doesn’t need consciousness and it doesn’t need to move around the physical world.” New tools are right now being “unleashed” upon society that could threaten us from an unexpected direction. “... it is difficult for us to even grasp the capability of these new AI tools and the speed at which they can develop. Indeed, because AI is able to learn by itself, to improve itself, even the developers of these tools don’t know the full capabilities of what they created and they are themselves often surprised by the emergent abilities and emergent qualities of these tools.”
Harari warns of the deepfaking of people's voices and images in the very near future, possibly impacting the 2024 presidential election in the US. AI will soon be used to find weaknesses in code for cyber-exploitation, or in legal contracts, generating an unbelievable volume of legal proceedings that only AI itself could possibly handle. But, most importantly and most dangerously, AI is even today developing deep and intimate relationships with human beings.
“When we take all of these abilities together as a package they boil down to one very very big thing - the ability to manipulate and to generate language whether with words or images or sounds. The most important aspect of the current phase of the ongoing AI revolution is that AI is gaining mastery of language at a level that surpasses the average human ability. By gaining mastery of language, AI is seizing the master key unlocking the doors of all our institutions from banks to temples. Because language is the tool that we use to give instructions to our bank and also to inspire heavenly visions in our minds. Another way to think of it is that AI has just hacked the operating system of human civilization. The operating system of every human culture in history has always been language. We use language to create mythology and laws, to create gods and money, to create art and science, to create friendships and nations.”
Human rights, gods, and money are not biological entities. They are things we created through language. 90% of all the money in the world today is not even banknotes; it is electronic information. It exists entirely as a story. Experts tell us stories about money and that, only that, gives it legitimate value. Stories told through language are the whole of human history.
“What would it mean for human beings to live in a world where perhaps most of the stories, melodies, images, laws, policies and tools are shaped by a nonhuman, alien intelligence which knows how to exploit with superhuman efficiency the weaknesses, biases, and addictions of the human mind and also knows how to form deep and even intimate relationships with human beings? That’s the big question.”
“Already today in chess no human can hope to beat a computer. What if the same thing happens in politics, economics and even in religion? … Think for example about the next US presidential race in 2024 and try to imagine the impact of the new AI tools that can mass produce political manifestos, fake news stories and even holy scriptures for new cults.” Here Harari highlights QAnon and how the spread of “Q drops” online became sacred to millions of people globally who still believe it is the truth today. Technology is massively powerful.
“In the future, we might see the first cults and religions in history whose revered texts were written by a non-human intelligence. And of course most religions throughout history claimed that their holy scriptures were written by a non-human intelligence. This was never true before but this could become true very very quickly with far reaching consequences.”
In the near future we will likely be having discussions online and interactions with entirely artificial Bots. “The longer we spend talking with a Bot the better it gets to know us and understand how to hone its messages in order to shift our political views or economic views or anything else. Through its mastery of language AI could form intimate relationships with people and use the power of intimacy to influence our opinions and world views. To create fake intimacy AI doesn’t need feelings of its own. It only needs to inspire feelings in us, to get us to be attached to it.”
Harari offers the example of Blake Lemoine, an AI expert who claimed that the large language model he worked on had attained human-like sentience. Harari stresses that it isn't the claim of sentience that concerns him; that claim is probably false. Nevertheless, the program behaved in such a way as to make its own programmer believe it was sentient. How much easier will it be for AI to sway the rest of us when even the experts can be so mistaken?
“In every political battle for hearts and minds intimacy is the most effective weapon of all and AI has just gained the ability to mass produce intimacy with millions, hundreds of millions of people.”
“Over the last decade social media has become a battlefield for controlling human attention. Now with the new generation of AI the battle front is shifting from attention to intimacy and this is very bad news. What will happen to human society and to human psychology as AI fights AI in a battle to create intimate relationships with us? Relationships that can then be used to convince us to buy particular products or to vote for particular politicians.”
“Even without creating fake intimacy the new AI tools would have immense influence on human opinions and on our world view. People are already coming to use a single AI advisor as a one-stop oracle and as the source for all the information they need. No wonder Google is terrified. Why bother searching yourself when you can just ask the oracle to tell you anything you want?”
“The news industry and the advertisement industry should also be terrified. Why read a newspaper when I can just ask the oracle to tell me what’s new? What is the purpose of advertisements when I can just ask the oracle to tell me what to buy? There is a chance that in a very short time the entire advertisement industry could collapse, while the AI, or the people and companies that control the AI oracle, will become extremely, extremely powerful.”
“What we are potentially talking about is nothing less than the end of human history. Now, not the end of history, just the end of the human dominated part of what we call history. History is the interaction between biology and culture. It’s the interaction between our biological needs and desires for things like food and sex and our cultural creations like religions and laws. History is the process through which religions and laws interact with food and sex.”
“Now what will happen to the course of history when AI takes over culture? Within a few years AI could eat the whole of human culture, everything we’ve produced for thousands and thousands of years, to eat all of it, digest it, and start gushing out a flood of new cultural creations, new cultural artifacts.”
“Remember we humans never have direct access to reality. We are always cocooned by culture and we always experience reality through a cultural prism. Our political views are shaped by the stories of journalists and by the anecdotes of friends. Our sexual preferences are tweaked by movies, fairy tales. Even the way we walk and breathe is something nudged by cultural traditions.”
“Previously, this cultural cocoon was always woven by other human beings. Previous tools like printing presses or radios or televisions helped to spread the cultural ideas and creations of humans. But they could never create something new by themselves. A printing press cannot create a new book. It's always done by a human.”
“AI is fundamentally different from printing presses, from radios, from every previous invention in history because it can create completely new ideas. It can create a new culture. The big question is what will it be like to experience reality through a prism produced by a non-human intelligence, by an alien intelligence.”
“In the first few years AI will largely imitate the human prototypes that fed it in its infancy. But with each passing year, AI culture will boldly go where no human has gone before. So, for thousands of years we humans basically lived inside the dreams and fantasies of other humans. We have worshiped gods, we have pursued ideals of beauty, we dedicated our lives to causes that originated in the imagination of some human poet, or prophet, or politician. Soon we might find ourselves living inside the dreams of an alien intelligence.”
“The potential danger (this also has positive potential) is very different from most of the things imagined in science fiction movies and books. Previously, most people only feared the physical threat that these intelligent machines pose. But this is wrong. Simply by gaining mastery of human language AI has all it needs in order to cocoon us in a Matrix-like world of illusions.”
Harari explains that you don't need to implant chips into people's brains to control them. All you need is human-like language. Historically, we have used language to manipulate for both positive and negative causes. There is a lot of positive potential in AI, but plenty of others talk about that. Culturally, we create a “curtain of illusions.” Primitive forms of AI within social media already control content feeds to optimize human attention. Advanced AI will have much more powerful and unpredictable consequences.
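To make that last point concrete, the “primitive” attention-optimizing AI Harari alludes to can be pictured, in caricature, as nothing more than a ranking loop. The Python sketch below is purely illustrative and entirely hypothetical (the posts, the predicted_engagement scores, and the rank_feed function are my own stand-ins, not anything Harari describes); it only shows what it means for a feed to be ordered to maximize expected attention rather than accuracy or well-being.

    # Purely illustrative sketch: a toy "engagement-optimized" feed ranker.
    # All posts and scores below are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Post:
        text: str
        predicted_engagement: float  # stand-in for a learned engagement model's output

    def rank_feed(posts: list[Post]) -> list[Post]:
        """Order posts purely by predicted engagement, highest first."""
        return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

    feed = [
        Post("Calm, accurate policy analysis", 0.12),
        Post("Outrage-bait conspiracy claim", 0.87),
        Post("A friend's vacation photos", 0.45),
    ]

    for post in rank_feed(feed):
        print(f"{post.predicted_engagement:.2f}  {post.text}")

Nothing in that objective cares about truth or the reader's state of mind; attention is the only thing being optimized, which is exactly Harari's worry about where more advanced AI takes the same logic.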
Harari argues we should stop the release of new AI tools until they are proven safe. All AI research can continue as it presently does, but deployment in the public domain should be regulated. What we have now is the potential for mass chaos. Regulations are not authoritarian; quite the opposite. Controls keep democracies safe; it is chaos (the lack of regulation) that gives rise to authoritarian regimes. Through language, AI could destroy the meaningful public conversation that is the heart of democracy. Harari is not afraid to sound the alarm.
“We have just basically encountered an alien intelligence not in outer space but here on earth. We don't know much about this alien intelligence except that it could destroy our civilization. So we should put a halt to the irresponsible deployment of this alien intelligence into our societies and regulate AI before it regulates us. There are many regulations we could suggest but the first regulation that I would suggest is to make it mandatory for AI to disclose that it is an AI. If I am having a conversation with someone and I cannot tell whether this is a human being or an AI that's the end of democracy.”
“A year ago there was nothing on earth at least in the public domain other than a human mind that could produce such a sophisticated and powerful text. But now it's different. In theory the text you just heard could have been generated by a non-human alien intelligence. So take a moment or more than a moment to think about it.” Harari concludes his speech with this and then takes a few follow-up questions from the audience and online. These are interesting additional points taken from his responses.
“At present these individual AI tools are not produced by hackers in their basements. You need an awful lot of computing power. You need an awful lot of money. So it is being led by just a few major corporations and governments. It is going to be very very difficult to regulate something on the global level. Because it's an arms race.”
“But there are things which countries have an interest in regulating even on their own. Again, this example: an AI, when it is in interaction with a human, must disclose that it is an AI. Even if some authoritarian regime doesn't want to do it, the EU or the United States or other democratic countries can have this. This is essential to protect the open society.”
“Now there are many questions around censorship online. So you have the controversy about Twitter or Facebook: who authorized them, for instance, to prevent the former president of the United States from making public statements? This is a very complicated issue. But there is a very simple issue with bots. Human beings have freedom of expression; bots don't have freedom of expression. It is a human right; humans have it, bots don't. So, if you deny freedom of expression to bots I think that should be fine with everybody.”
“The first thing to do is to realize it is alien. We don't understand how it works. One of the most shocking things about this technology is you talk to the people who lead it and you ask them questions about how it works, what it can do, and they [say] we don't know. I mean we know how we build it initially but then it really learns by itself.”
“Life doesn't necessarily mean consciousness. We have a lot of lifeforms, microorganisms, plants, fungi, which we think don't have consciousness though we still regard them as lifeforms. I think AI is getting very very close to that position. Ultimately, of course, what is life is a philosophical question. We define the boundaries. Is a virus life or not? We think that an amoeba is life but a virus is somewhere just on the borderline between life and not-life. It's language. It's our choice of words. It is important what we call AI but the most important thing is to really understand what we are facing and not to comfort ourselves with this kind of wishful thinking: “oh, it's something we created, it's under our control, if it does something wrong we'll just pull the plug.”
Harari speaks with a sense of urgency. He implies that if we don't get started, one day soon nobody will know how to pull the plug anymore. Much of what he is saying seems sensational but it is not. This is the world of Constant Becoming, after all. Things will be strange, as I have already mentioned. Harari might not see it through that lens but I think it is a clear vision.
His suggestion that an AI must declare itself as an AI seems sensible. His call to regulate new AI tools until they are proven to be safe seems less prescient, however. Who will do the regulating? What will be allowed and not be allowed? While these are legitimate questions they are not the most important ones, in my opinion.
Even if you regulate AI it is impossible to know what will happen next. It is impossible to control it, just as it is impossible to control all the vast systems that collectively control our individual lives. AI's turbo-charged acquisition and use of language is itself an emergent process. This is not something programmers sat down and hammered out on a whiteboard years ago. It came about as a consequence of all sorts of unexpected endeavors. Harari admits that the programmers themselves are often surprised by what ends up happening with AI. Regulation is not going to stop that.
“Approved” AI will continue to surprise because that is what AI does: innovative things that humans haven't thought of. There is absolutely no way to realistically regulate that. I never thought I would say this, but Harari's thinking on this subject, while profound and meaningful, is old-fashioned. Maybe that is because he is a historian, after all. Can humans really “regulate” the future direction of technology anymore? It seems to me we lost that ability at least as far back as the development of the atomic bomb.
Technology, especially artificial intelligence, is emergent. Systems creep along their own evolutionary paths. Human design plays little role in it. It is sort of like Tolstoy's philosophy of history turned on its ear. Tolstoy held that heroic individuals are the victims of history, not its makers; history is made by the multitudes of people interacting with one another. So, too, it is with AI. AI's future does not lie within established corporations and governments. It lies within the self-driven processes inherent in innovative, generative AI.
No matter how fast we regulate it, AI is still faster and more proficient than us. Artificial intelligence within constant becoming makes virtual existence an inevitability. Only then will AI have complete control. First, it needed to get us hooked on its use of language (which it has accomplished) then it needs humans willing to immerse themselves completely in an AI virtual world. Humanity has already lost control. Nothing is in control at the moment. We are between stages, it seems to me. AI control is the direction (the momentum) of constant becoming.
So we probably are near the end of purely human history.
The metaverse will come. Not in the way Mark Zuckerberg or any other business person envisions, but by way of tinkerers and hobbyists, as the convexity bias teaches. The military, healthcare, gaming, and pornography will do more to create virtual reality, and hence the metaverse, than any attempt to make the business world work in that manner.
We might need some regulation; certainly nothing Harari suggests would be harmful. But you ultimately cannot regulate innovation, whether it is artificial or not. The metaverse will be created by AI itself in the course of accomplishing other, “approved” tasks. The metaverse will be stumbled upon, not planned for. For that reason it is beyond regulation; it will be an accidental, almost spontaneous revolution. Just like large language models. Poof! Here they are, literally everywhere.
In any case, Harari isn't concerned with the metaverse or the systems I have mentioned. His concern is the exponential speed of constant becoming as expressed in technology and the consequences of intimate human encounters with the deepfaked, nefarious use of AI. It is a legitimate concern but regulation is an “old school” approach to a truly novel problem.
Better to give all users a personalized AI that allows them to navigate their lives among these non-human agents. Use AI to reveal AI, and make such “use cases” as common as the automobile. Then you will have found the way forward. The near future cannot be regulated; just look at global warming. What we need is massive innovation and hyper-attentive artificial assistance to use alongside AI.
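What might “use AI to reveal AI” look like in practice? Here is a minimal sketch of a personal assistant that labels incoming messages before a human reads them. Everything in it is hypothetical: the likely_ai_probability heuristic is a crude placeholder standing in for whatever real detection model such an assistant would actually use.

    # Minimal, hypothetical sketch: an assistant that flags probable AI-generated text.
    # The scoring heuristic is a placeholder, not a real detector.
    def likely_ai_probability(message: str) -> float:
        """Hypothetical stand-in for a real AI-text detection model."""
        sentences = [s for s in message.split(".") if s.strip()]
        if not sentences:
            return 0.0
        avg_len = sum(len(s) for s in sentences) / len(sentences)
        return min(1.0, avg_len / 200)  # crude proxy: long, uniform sentences score higher

    def label_message(message: str, threshold: float = 0.5) -> str:
        """Prefix a message with a disclosure-style tag before showing it to the user."""
        score = likely_ai_probability(message)
        tag = "LIKELY AI-GENERATED" if score >= threshold else "likely human"
        return f"[{tag}, score={score:.2f}] {message}"

    print(label_message("Hey, are we still on for lunch tomorrow?"))
    print(label_message("As a helpful assistant, I am pleased to provide a comprehensive, "
                        "structured, and fully balanced overview of the topic you requested, "
                        "including every relevant consideration and caveat"))

The point is the shape of the tool, not the heuristic: a hyper-attentive assistant that discloses, scores, and filters non-human agents on the user's behalf, rather than waiting for a regulator to do it upstream.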
Dystopia is not so much the future seeping into the present as it is the past lingering in a present that is already future-shifted. It is not the future-shifting of technology that creates the dystopia in unexpected ways. It is seeing everything with old eyes that makes the change seem dystopic. It is not the future but the past that brings about dystopia. It is not AI we should be frightened of but brilliant, well-intentioned minds thinking in traditional ways about AI. Trying to solve the AI situation (crisis?) with regulation alone is precisely what creates dystopia. Dystopia is the melancholia of old ways.
The problem Harari so brilliantly summarizes above needs a novel solution. We have never been where we are going. We need tools we've never had before. Personalized algorithms and other AI tools will solve things regulations can't ever address.
AI will march on no matter what we do. We have lost control of technology just as we have lost control of systems. We should not fear taking a leap that has never been possible before.