Technology: How We Lost Control - Part Three
[Read Part One] [Read Part Two]
We are lucky. We are all living in the most exciting time in human history. Yes, a multitude of troubles and threats face humankind, but we stand on the threshold of greatness. I'm talking Nietzschean-style über-greatness. We have the unique opportunity to experience a greatness of living without parallel: longer lives, greater wealth, profound new meanings and purposes. The birth of the Modern.
We are on the verge of unimagined societal transformation. The coming wave of AI, biotech, AR, nanotech, VR and other emergent industries will likely increase global market valuations by 30% to 50%, some claim even more. The coming technologies will not only generate more prosperity than ever before; they will make us better informed so we can better manage our lives, likely eliminate poverty through a universal basic income made possible by that immense wealth, vastly extend our lifespans through healthcare superior to human-provided care alone, let us experience multiple possible worlds and places, enjoy realistic virtual relationships, command robotic servants of all kinds, and greatly expand the variety of foods, cultures and entertainment beyond what we get to relish today.
Twenty years ago there was no iPhone. There were clunky BlackBerrys, seen as cool at the time, which made the world more convenient; then the iPhone came along and unexpectedly changed our very Being. Twenty years from now our Being will have been changed again and again by technology. We are on the verge of total transformation through constant becoming, which is why there is so much urgent resistance and caution. It is a threatening situation. We will not be the same. Humanity is about to truly leave the Middle Ages, finish the Enlightenment and finally enter the Modern.
In Part One, I mentioned Mustafa Suleyman, the former Google exec and current big-tech pioneer. I bought his book, The Coming Wave, the day it came out, the same day I saw his fascinating short interview on CNBC. I began reading it literally minutes after seeing him in a fresh video online, which was the whole point of him doing the interview. This in and of itself has never been possible before. It is a small example of why we live on the verge of the greatest times in human history.
Yet, Suleyman is wise to point out all the ways this technology simultaneously threatens humanity. His book is a sobering account of the very processes I discussed in Part One and Part Two. He sees all this vast and incredible potential for good...and for bad. He rightly calls this “the great dilemma” facing humanity.
For Suleyman, none of this is new. He recounts how the same problems faced humanity with the printing press, railways, telephones and automobiles, examples of what he calls “the endless proliferation” of technology through history. But the great dilemma makes the fundamental and significant changes brought about by those earlier technologies look insignificant by comparison. As he points out, historically, “Once established, waves are almost impossible to stop.”
“We are approaching an inflection point with the arrival of these higher-order technologies, the most profound in history. The coming wave of technology is built primarily on two general-purpose technologies capable of operating at the grandest and most granular levels alike: artificial intelligence and synthetic biology. For the first time core components of our technological ecosystem directly address two foundational properties of our world: intelligence and life. In other words, technology is undergoing a phase transition. No longer simply a tool, it’s going to engineer life and rival—and surpass—our own intelligence.” (pp. 75 – 76)
Without realizing it, Suleyman offers a great example of convexity in action. “Everything leaks. Everything is copied, iterated, improved. And because everyone is watching and learning from everyone else, with so many people all scratching around in the same areas, someone is inevitably going to figure out the next big breakthrough. And they will have no hope of containing it, for even if they do, someone else will come behind them and uncover the same insight or find an adjacent way of doing the same thing; they will see the strategic potential or profit or prestige and go after it. This is why we won’t say no. This is why the coming wave is coming, why containing it is such a challenge. Technology is now an indispensable mega-system infusing every aspect of daily life, society, and the economy. No one can do without it.” (page 183)
But we risk losing our alleged control (which we actually no longer have), suffering pervasive surveillance (even though we have already surrendered our privacy), and seeing human harm through AI-engineered viruses, biometric hacking, and power-hungry actors driven toward human domination. The difference between utopia and dystopia seems to be a razor's edge. I personally doubt this. All of it merely furthers the enframing process that has been going on since the first atomic bombs; only now are we wising up to the existential threat, even though it has been building for decades.
It is also a superb example of convexity for another reason. In Part Two I partly defined convexity bias as when we “focus too heavily on the potential benefits of a new technology, such as increased efficiency, productivity, or convenience, without fully considering the potential risks or downsides.” This is precisely where we find ourselves with AI, biotechnology, nanotechnology and virtual reality today. Forward without consideration is the dark side of constant becoming. We are racing ahead with little regard for the consequences. Or rather, we seem to be racing forward, but it is more accurate to say that we ourselves move slowly; it is technology that advances, accelerating our enframed Being within an atmosphere of development beyond our direct control.
Still, for Suleyman, as for Nietzsche, this is fundamentally about power. “Technology is ultimately political because technology is a form of power. And perhaps the single overriding characteristic of the coming wave is that it will democratize access to power.” (page 206) Not power in its raw form, but technological power as wealth generation. This single factor alone ensures the coming wave and the further enframing of our Being. Humans never walk away from money.
“This will be the greatest, most rapid accelerant of wealth and prosperity in human history. It will also be one of the most chaotic. If everyone has access to more capability, that clearly also includes those who wish to cause harm. With technology evolving faster than defensive measures, bad actors, from Mexican drug cartels to North Korean hackers, are given a shot in the arm. Democratizing access necessarily means democratizing risk. We are about to cross a critical threshold in the history of our species. This is what the nation-state will have to contend with over the next decade.” (pp. 207-208)
Governments will try to regulate things, but regulation alone is not enough for what is coming. Suleyman believes regulation is but one aspect of what is needed: a multifaceted approach to “containment” that does not stifle the transformational potential of the coming wave. He thinks we need an “Apollo Program” for this. Globally, only a few hundred people today work on possible containment initiatives for AI and all the rest. He thinks we need hundreds of thousands working on it, something like the vast numbers NASA once employed to get us to the Moon. He argues that we desperately need a global “we” with which to address technology.
He believes “choke points” and audit practices can be put into place where we can shut down various aspects of the wave that might evolve in an unsatisfactory way. New corporations need to emerge that are devoted to technological containment. He has a ten point plan that involves international agreements.
Well, good luck with all that. We don't have this type of organization working on any of humanity's problems, of which technology is only one. A recent historic congressional hearing on artificial intelligence and related technologies brought together Bill Gates, Mark Zuckerberg and Elon Musk, among many other Big Tech gurus. It will likely result in federal regulation of some sort. But the truth is no one knows what to do. According to Fortune, “we are running at full speed toward a cliff.”
Suleyman's suggested approach is really just a pipe dream. He has no real solution for what is about to happen. No one can solve a problem most of us neither see nor understand, and whose very existence is now out of our control. You can't regulate or organize your way through the convexity bias and the inevitability of techno-emergence discussed in Part Two. But Suleyman does talk about the development of personalized intelligence, which is what I have previously argued is necessary. For me, Pi is the key.
“...we are finding ways to encourage our AI called Pi—for personal intelligence—to be cautious and uncertain by default, and to encourage users to remain critical. We’re designing Pi to express self-doubt, solicit feedback frequently and constructively, and quickly give way assuming the human, not the machine, is right. We and others are also working on an important track of research that aims to fact-check a statement by an AI using third-party knowledge bases we know to be credible. Here it’s about making sure AI outputs provide citations, sources, and interrogable evidence that a user can further investigate when a dubious claim arises.” (page 302)
The development of a Pi that can successfully navigate the AI realm is more critical than any possible regulation or policy of containment. Each of us going into the Modern, each of us living with constant becoming, must embrace Pi in the form of a personalized algorithm with which to interface with the world and with all of the coming wave. Make us masters of technology by putting innovation to work directly in our personal lives.
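The cautious-by-default, cite-your-sources design Suleyman describes can be sketched in a few lines. This is a minimal, purely illustrative Python sketch, not Pi's actual implementation; the confidence threshold, the `TRUSTED_SOURCES` lookup table, and all names are my own assumptions standing in for real knowledge-base integration.

```python
from dataclasses import dataclass, field

# Hypothetical stand-in for the third-party knowledge bases Suleyman
# mentions; a real system would query curated external sources.
TRUSTED_SOURCES = {
    "water boils at 100c at sea level": "physics-handbook",
}

@dataclass
class Answer:
    text: str
    confidence: float
    citations: list = field(default_factory=list)

def cautious_answer(claim: str, confidence: float) -> Answer:
    """Attach a citation when the claim matches a trusted source;
    hedge the wording when confidence is low and nothing backs it."""
    citations = []
    source = TRUSTED_SOURCES.get(claim.lower())
    if source:
        citations.append(source)
    if confidence < 0.8 and not citations:
        text = f"I'm not certain, but: {claim}. Please verify."
    else:
        text = claim
    return Answer(text, confidence, citations)
```

The design choice is the point: the assistant defaults to self-doubt, and only an independent, credible source (not the model's own confidence) lets a claim pass unhedged, giving the user “interrogable evidence” to investigate.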
In Part One I quoted Suleyman talking about the possibility of every person having a digital “CEO” within the next five years. This is critical. It is also incredibly exciting. Personalized algorithms that intelligently accomplish tasks for us (read and summarize my emails, handle my searches for information or products, respond to the Pi of my friends, pay my bills, make my reservations, remember birthdays, plan my week, suggest vacation destinations, etc.), and that understand the wave itself in the manner Suleyman suggests, seem to me the best preparation for the possible negative uses of the coming technology. Pi can warn us and advise us. Quite obviously, Pi is the ultimate “self-auditing choke point.”
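In spirit, such a digital “CEO” is a dispatcher: a registry that routes everyday requests to task handlers and refuses what it cannot do rather than guessing. Here is a toy Python sketch of that idea; the handler names and behaviors are invented for illustration and have nothing to do with how Pi is actually built.

```python
# Toy personal-agent dispatcher: tasks are registered by name, and
# unknown requests are refused rather than improvised.
class PersonalAgent:
    def __init__(self):
        self._handlers = {}

    def register(self, task, handler):
        """Teach the agent a new capability."""
        self._handlers[task] = handler

    def run(self, task, *args):
        """Dispatch a request, declining anything unregistered."""
        handler = self._handlers.get(task)
        if handler is None:
            return f"Sorry, I can't handle '{task}' yet."
        return handler(*args)

agent = PersonalAgent()
# Illustrative handlers: a crude one-sentence email summary and a
# birthday reminder. Real versions would call mail and calendar APIs.
agent.register("summarize_email", lambda body: body.split(".")[0] + ".")
agent.register("remember_birthday",
               lambda name, date: f"Reminder set: {name} on {date}")
```

The refusal path is what makes the agent a plausible “self-auditing choke point”: capabilities are explicit, enumerable, and individually revocable.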
In my opinion, Suleyman is trying old tools against a (possible) novel threat. Pi is a new tool, a necessary tool. Every one of us needs something like this /today/. But, let's say it is widely available in five years. Then we can worry less (uncomfortably less) about whether the divided world can agree upon treaties or establish a vast bureaucratic program of containment. The Pi choke point at the personal level can offer protection enough until some other legitimate innovation comes along. Make us masters of the future and we will ride the wave as it disrupts the entire world. That is my answer for now. Stop the various existential problems at the end user point. Then see how well the global “we” thing works should it somehow emerge out of the divisive nothingness that presently exists between people, corporations and governments.
In a recent interview pairing Suleyman with Yuval Noah Harari, the latter makes a critical point, one that Suleyman, for all his good intentions, cannot really answer. The interviewer asks Harari whether containment is possible given all the tension that currently exists between the world's big players. Harari replies: “This is the biggest problem. If it was a question of humankind versus a common threat of these new, intelligent, alien agents here on Earth then yes, I think there are ways we can contain them. But if the humans are divided among themselves, and are already in an arms race, then it becomes almost impossible to contain this alien intelligence.” (Harari is practicing a bit of personal branding by referring to the coming wave as equivalent to “an alien invasion.” He uses this storytelling device to create a sense of urgency while making his voice original and distinct. His labeling is actually silly hysteria, but that's how this business works.)
To which Suleyman replies: “It's a very fair question. That's why I have always been calling for the cautionary principle. We should take some capabilities off the table and classify those as high risk. Frankly, the EU AI Act, which has been in draft for three and a half years, is very sensible as a risk-based framework that applies to each domain, whether it's healthcare or self-driving or facial recognition...Autonomy, for example, has clearly the capability to be high risk. Recursive self-improvement, the same story. So, this is the moment when we have to develop a cautionary principle, not through any fear-mongering but as a logical, sensible way to proceed.” He thinks Harari is practicing a bit of fear-mongering. That's true enough.
I can see Harari's point. Quite clearly, no one controls the coming (“alien”) wave. Technology is doing its own thing via the environment articulated in Part Two and human beings are completely disorganized. Globally, let alone in the fractious United States, we cannot agree on much of anything. As I said, good luck with your containment and regulations. Give me my powerful Pi and let me (and it) handle this.
We will see more change in the next ten years than in the previous twenty. Probably more than in the previous century! Obviously, this means change will happen with breathtaking speed (the speed of constant becoming), if the psychologically disabled masses of humanity don't figure out a way to make it illegal – which they won't, because of the potential wealth involved, if nothing else. Since the coming wave will, before anything else, facilitate human convenience and consumption (and wealth), it will attract everyone, even those of us still psychologically living in the Middle Ages.
I'm inspired and optimistic about the coming wave, but I dread the psychologically disabled more than the future bad actors. Old-wired, common, familiar brains will both cause and experience tremendous dissonance and possibly threaten the necessary pace of constant becoming. As I have said before, the greatest problem of our time is how fast we can rewire our brains. The coming wave demands it. Generations Z and Alpha will thrive on all this stuff. They are more prepared for it than most Boomers like me realize. It (and all that goes with it) will fit them like a glove.
We can be masters of our Being in the coming wave by using personalized AI (Pi) “CEOs.” We can be wealthier, healthier, and live longer, fuller, more enriching lives because of the benefits of the coming wave. I am a free spirit and welcome it with open arms. Bring on the change! Transformation this big doesn't happen in every human generation, and I am about to experience the greatest one so far. But neither you nor I will face it with Pi alone. Pi is just another tool, a new and necessary tool. We need to foster new tools and new skills for the coming of the Modern. I plan to discuss what that means early next year.