Technology: How We Lost Control - Part Two
[Read Part One]
Nassim Nicholas Taleb is a respected author, philosopher, and statistician. His work focuses on the ways humans try to predict and control the future, and on the limitations and dangers of relying too heavily on such predictions. He is a proponent of "antifragility," the idea that some systems and institutions actually benefit from shocks and disruptions rather than being harmed by them.
In 2012 he wrote an article in which he rather boldly states: "Textbooks tend to show technology flowing from science, when it is more often the opposite case...In such developments as the industrial revolution (and more generally outside linear domains such as physics), there is very little historical evidence for the contribution of fundamental research compared to that of tinkering by hobbyists...a random process characterized by 'skills' and 'luck', and some opacity, antifragility —the convexity bias— can be shown to severely outperform 'skills'. And convexity is missed in histories of technologies, replaced with ex post narratives." (originally quoted in a blog post here.)
Taleb uses the term "convexity bias" for what he sees as the antifragility of technological development. He contends that most human innovation and emergent technology does not come from planned methods of research. Convexity bias is actually a financial term, but for Taleb it names a tendency in human thinking to overestimate the benefits of certain actions or policies while underestimating the potential costs or risks associated with them. This bias arises when individuals or institutions are exposed to complex, non-linear systems or environments, where small changes can have large and unpredictable effects. In such cases, individuals are often tempted to take actions that appear to offer significant upside potential without fully understanding the potential downside risks.
Taleb applies convexity bias to the risks and uncertainties of rapid technological advancement. Technological progress is seen (as it was by the authors mentioned in Part One) as a self-reinforcing feedback loop, where each new development leads to further innovations and advancements. The developing entity develops itself. This process creates a "convex payoff function," in which the benefits of technology increase exponentially while the risks and uncertainties of its development and deployment are often overlooked or underestimated.
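Taleb's point has a formal core worth making explicit. For a convex payoff function f, Jensen's inequality guarantees that E[f(X)] ≥ f(E[X]): randomness itself adds value whenever the payoff is convex. Here is a minimal sketch in Python; the option-like payoff and its numbers are my own illustrative assumptions, not anything from Taleb's paper:

```python
import random
import statistics

# An assumed, option-like convex payoff: downside capped at -1,
# upside unlimited. The specific numbers are illustrative only.
def payoff(x):
    return max(-1.0, 3.0 * x)

random.seed(42)
shocks = [random.gauss(0, 1) for _ in range(100_000)]

mean_of_payoffs = statistics.mean(payoff(x) for x in shocks)  # E[f(X)]
payoff_of_mean = payoff(statistics.mean(shocks))              # f(E[X])

# Jensen's inequality: E[f(X)] >= f(E[X]) for convex f. The gap
# between the two numbers is the free gain that volatility hands
# to a convex payoff, which is what Taleb calls the convexity bias.
print(f"E[f(X)] = {mean_of_payoffs:.2f}")  # roughly 0.76
print(f"f(E[X]) = {payoff_of_mean:.2f}")   # roughly 0.00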
This convexity bias can manifest itself in a number of ways. For example, individuals or organizations (outside of certain fields such as medicine and healthcare) generally focus too heavily on the potential benefits of a new technology, such as increased efficiency, productivity, or convenience, without fully considering the potential risks or downsides.
The rapid pace of technological change also makes it difficult for individuals and organizations to keep up with the latest developments and to fully understand the implications of new technologies. People end up making decisions based on incomplete or outdated information, which further exacerbates the convexity bias and increases the potential risks of technological progress.
In the case of technological development, antifragility and convexity bias in tandem "severely outperform" the skills developed via corporate and government research and planning. At the moment, it is the force of this tandem that "controls" how technology will continue to develop. It is to this force that humans have lost control of technology.
I have stated that no one controls technological development right now. Rather, it is adrift within a sea of big-tech corporate planning and random tinkerers and hobbyists, all of which are subject to convexity. The relationship between convexity bias and technology's control of its own development lies in the fact that the self-reinforcing feedback loop of technological progress produces a convex payoff function that amplifies the potential benefits of technology. This creates a situation in which the development and deployment of technology is driven primarily by the desire for increased efficiency, productivity, convenience, and consumption. This is the free-market techno-reality within technopoly.
For decades now, this convexity bias has played an increasing role in the influence technology exerts over its own development via feedback, as a reflection back upon itself. As technology becomes more sophisticated and autonomous, it becomes increasingly difficult for humans to understand and predict its behavior and to fully account for the potential risks and uncertainties of its development and deployment. This further exacerbates the convexity bias in an obvious karmic feedback loop. In fact, this effect is a great example of what I mean by the term "karma." Like the cosmos, technology has karma beyond the actions of human beings.
The convexity bias is not just about potential over-confidence in new things. It favors pleasure and rewards in technical development. It attempts to minimize the painful and non-rewarding outcomes (though we often regret the results). This is literally a living sociological algorithm. As I have said, technology mostly develops through emergence, tinkering, and/or planning. Most tinkerers innovate faster than most corporate plans do, due to the convexity bias. Historically, according to Taleb, technology will (blindly but inevitably) develop along the paths tinkerers find rewarding more than along corporate plans or government controls. That is a major reason why you can't stop the loss of human control by "reining in big tech."
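That claim, dispersed tinkering beating the concentrated plan, can be sketched as a toy simulation. All the numbers below are my own assumptions, chosen only to make the asymmetry visible: when payoffs are option-like (rare huge wins, small capped losses), a thousand small random bets profit almost every time, while the same budget staked on one planned project almost always fails:

```python
import random

random.seed(7)

# Assumed toy numbers (mine, not Taleb's): each experiment costs 1
# unit; 1% of experiments are breakthroughs worth 200, the rest are
# worth nothing. Expected net value per experiment is +1 either way.
def experiment():
    return 200.0 if random.random() < 0.01 else 0.0

BUDGET = 1_000   # research units to spend
RUNS = 2_000     # simulated histories

tinkerer_wins = 0
planner_wins = 0
for _ in range(RUNS):
    # Tinkerers: 1,000 small independent bets.
    tinkerers = sum(experiment() for _ in range(BUDGET)) - BUDGET
    # Planner: the entire budget staked on one big project.
    planner = BUDGET * (experiment() - 1.0)
    tinkerer_wins += tinkerers > 0
    planner_wins += planner > 0

print(f"tinkerers profit in {tinkerer_wins / RUNS:.0%} of histories")  # ~93%
print(f"planner profits in {planner_wins / RUNS:.0%} of histories")    # ~1%
```

Same expected value, wildly different reliability: the dispersed strategy almost always captures the convexity, while the concentrated one almost never does.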
The only way to "control" future technology is to understand the effects of planning and the bias inherent in how technology evolves. Though haphazard, this is a system of control all its own. Generally, results are fine and everybody's happy. But not always. Furthermore, emergence happens whether human beings want it to or not. You cannot anticipate specific emergence; it is always a strange surprise. We are the first human beings in history to live in a world heavily created by all kinds of near-simultaneously emerging technologies amid an acceleration of change. This karmic force in the world is as self-evident as it is ubiquitous.
Over the past few decades, examples abound of technological development that was unpredictable. Tinkerers and planners surprise themselves through their own efforts. Or they fail completely. Those that fail completely do not develop. Those that surprise themselves tend to develop toward the surprise without equal regard for the consequences. This is the inherent existential nature of the convexity bias. Results of development that bring about genuine technological change can rarely be precisely predicted.
Thus technology is not controlled outside of this basic algorithm of convexity (though other factors may control it as well; convexity is not the only thing going on here, only the most influential. I've mentioned corporate planning too. Nevertheless...). There is no "who" in the control of technological development today. There is only the "whatness" of the convexity of tinkerers and planners in proximity to convenience and consumption. Once more, nothing is more precious to the contemporary world than consumption and convenience, which actually blind us to convexity and feedback loops.
Just as surely as humans lost control, technology is mostly governed through convexity (antifragility) and imbued with emergence. Development depends on maximum techno/human rewards and minimal techno/human regrets (as foreseen). Development is predestined (but not predictable; things only happen the way they happen in hindsight) to seek techno/human satisfaction while sometimes exacting a high price. It is not predestined in a moral or even ethical sense (there is no god or regulatory system outside of convexity, planning, and emergence, not even Chinese authoritarianism). Predestined, rather, to serve consumption and convenience.
This is simply how humans have evolved to behave within technopoly. Technically perceived efficiency of consumption and convenience determines what comes next. This is a chaotic process with multiple possible rewards and multiple possible outcomes, some of which will not be favored while others are heavily favored. The more heavily favored an innovation, the more exponential the change in society: the internet, the iPhone, chatbots, etc. The unfavored is always an unexpected consequence (the effects of screen time, the lack of privacy, the preference for online socialization over direct human interaction).
We live in a time when AI is seriously considered a threat by a lot of people. I understand that, but it seems a waste of time. To begin with, nothing is going to "stop" whatever it is AI will do next. We no longer control that. So being more worried than fascinated by AI seems wasteful in the moment. But it brings up an interesting question.
Is it even reasonable to consider whether AI will develop in a way not conducive to the convexity bias (emergence is the exception)? That is the most basic question with regard to the future of technology. With that in mind, under what conditions would AI develop along paths that are harmful to most humans? Of course, AI could be steered in unethical ways that create human suffering, but, generally speaking, history does not work like that, and AI is not bound so much by the electricity that keeps it running as it is bound to history itself. Technologies used for nefarious means by villainous people have been around forever but have never been the dominant force of technological development. On the other hand, as we have previously established, technology has, without intentionality, had an enframing impact upon human Being.
As such, and as I hinted at in Part One, technological development is a force (or forces) in the world. Another example of this force, besides its enframing qualities, is the technology subculture of social media, which is a truly ubiquitous and powerful influence globally. This is undeniable. And while giants like Meta and Apple design "use cases" for society under mostly corporate planning, they end up reacting to how their products are received by the enframed masses. Meta scaled back its heavy emphasis on the metaverse (which will still eventually happen, through convexity or sheer emergence) to pivot to personalized chatbots, for example. This is a force Meta and Apple (among many others) can capitalize on. But that by no means indicates that they control what they are doing. They are reacting and tinkering more than controlling. Even in the best-laid plans of major corporations, convexity controls itself while emergence happens unpredictably.
For the remainder of this essay, I want to term this force of technology "The Force" (an obvious nod to Star Wars). The true cause of The Force lies in the confluence of various technological, cultural, and social factors that have led to the widespread adoption and influence of social media in modern society. These include the development of the internet, the rise of mobile devices, the growth of user-generated content, the power of network effects, and the personalization of user experiences, among others. Together, these factors have created an unpredictable phenomenon that has transformed the way people communicate, consume information, and interact with each other and with brands. In fact, you are encouraged to "brand" yourself largely because of social media.
The Force did not exist at the turn of the century. The widespread use and influence of social media platforms like Facebook and YouTube, which are major components of The Force, were not present in the 1990s. Back then, you still had to pay for long-distance telephone calls. While the concept of social networking existed at that time, it was not as advanced or widely adopted as it became in the 2000s and 2010s with the rise (mostly out of convexity; Mark Zuckerberg was a tinkerer) of social media platforms like Facebook, Twitter, and Instagram. These platforms created a new era of digital socialization that has become a ubiquitous and powerful force within and upon society.
Released in 2007, the iPhone opened extraordinary access to social media platforms and other digital content (most of which emerged from nothing: emergent technology) from anywhere and at any time, which increased the frequency and intensity of social media use. (That is an example of corporate planning, led by Steve Jobs.) Social media platforms, in turn, have leveraged user data and engagement to drive digital advertising revenue, which has fueled the growth and expansion of these platforms.
User behavior played a critical role in shaping The Force. People's increasing reliance on social media for socialization, entertainment, and information has created a feedback loop that reinforces the centrality of social media in people's lives. This, in turn, has further accelerated the growth and influence of The Force.
Fundamentally, the emergence of The Force was not a deliberate outcome of any of the individual components that contributed to it. No company or person or process gave Being to The Force. Rather, it is an emergent phenomenon (outside of convexity and planning), a result of the complex and dynamic interaction between these diverse components.
The Force is not only emergent but is (mostly) filtered through convexity bias, which means that its effects are unpredictable and beyond the control of any individual or group. While certain technologies or social structures may be developed with the intention of shaping The Force in a particular direction, the actual effects that emerge are often quite different and unexpected, due to the complex non-linear interactions between various factors. If nothing else, apply Chaos Theory here. As a result, it is difficult or impossible to fully anticipate or control the effects of The Force, which sometimes lead to unintended consequences or even negative outcomes (screen time issues, loss of privacy, etc.).
The simple fact is The Force will be shaped by a wide range of factors that are beyond human control. While individual human actions and decisions can certainly influence The Force, they are just one of many factors that will shape its ongoing evolution. The whole process is utterly indifferent to all human attempts to control it.
Ellul and Postman argued that technology possesses an autonomous and deterministic power that shapes society and human behavior in ways often beyond human comprehension. They each proclaimed that The Force (in its pre-Force stage) was already beyond human control through the sheer prevalence of technique. While humans may create and develop technologies, once these are released into the world, they take on a Being of their own, and their development is influenced by a multitude of factors beyond human control or even human expectation. Both authors saw this: Ellul saw its initial mechanical expression; Postman's vision was computerized.
I don't know exactly when, but sometime around the development of the atomic bomb, the evolution of technology abruptly began to transform into a self-organizing, self-reinforcing feedback loop of a higher order than anything previously in history. From that point on, the effects of individual actions were amplified and transformed in ways that became much more difficult, or impossible, to discern, let alone predict.
That's how and why human beings lost control of technology. And how and why technology will develop into the future. One day, perhaps, more advanced AI will take control of technological development (or humans might regain control; that depends on how fast our brains rewire). But, for now, the haphazard embryonic soup of the convexity bias, systems of corporate planning, and techno/human emergent innovation controls what happens next. Not humans.
(to be continued)