Select Quotes from Yuval Noah Harari's Nexus
Note: I captured these quotes while reading Nexus on my Kindle app. They struck me as the most essential and important passages in the book for the purposes of the next couple of posts. I offer them as raw content for what follows.
“...the naive view assumes that the antidote to most problems we encounter in gathering and processing information is gathering and processing even more information. While we are never completely safe from error, in most cases more information means greater accuracy.” (p. xvi).
“According to this view, racists are ill-informed people who just don’t know the facts of biology and history. They think that “race” is a valid biological category, and they have been brainwashed by bogus conspiracy theories. The remedy to racism is therefore to provide people with more biological and historical facts. It may take time, but in a free market of information sooner or later truth will prevail. The naive view is of course more nuanced and thoughtful than can be explained in a few paragraphs, but its core tenet is that information is an essentially good thing, and the more we have of it, the better. Given enough information and enough time, we are bound to discover the truth about things ranging from viral infections to racist biases, thereby developing not only our power but also the wisdom necessary to use that power well. This naive view justifies the pursuit of ever more powerful information technologies and has been the semiofficial ideology of the computer age and the internet.
“The naive view of information is perhaps most succinctly captured in Google’s mission statement “to organize the world’s information and make it universally accessible and useful.” Google’s answer to Goethe’s warnings is that while a single apprentice pilfering his master’s secret spell book is likely to cause disaster, when a lot of apprentices are given free access to all the world’s information, they will not only create useful enchanted brooms but also learn to handle them wisely.” (p. xviii)
“Given the magnitude of the danger, AI should be of interest to all human beings. While not everyone can become an AI expert, we should all keep in mind that AI is the first technology in history that can make decisions and create new ideas by itself. All previous human inventions have empowered humans, because no matter how powerful the new tool was, the decisions about its usage remained in our hands. Knives and bombs do not themselves decide whom to kill. They are dumb tools, lacking the intelligence necessary to process information and make independent decisions. In contrast, AI can process information by itself, and thereby replace humans in decision making. AI isn’t a tool—it’s an agent.” (p. xxii).
“The naive view further believes that the solution to the problems caused by misinformation and disinformation is more information.” (p. 10)
“To conclude, information sometimes represents reality, and sometimes doesn’t. But it always connects. This is its fundamental characteristic. Therefore, when examining the role of information in history, although it sometimes makes sense to ask “How well does it represent reality? Is it true or false?” often the more crucial questions are “How well does it connect people? What new network does it create?” It should be emphasized that rejecting the naive view of information as representation does not force us to reject the notion of truth, nor does it force us to embrace the populist view of information as a weapon. While information always connects, some types of information—from scientific books to political speeches—may strive to connect people by accurately representing certain aspects of reality. But this requires a special effort, which most information does not make. This is why the naive view is wrong to believe that creating more powerful information technology will necessarily result in a more truthful understanding of the world. If no additional steps are taken to tilt the balance in favor of truth, an increase in the amount and speed of information is likely to swamp the relatively rare and expensive truthful accounts by much more common and cheap types of information.” (pp. 16 – 17)
“About seventy thousand years ago, Homo sapiens bands began displaying an unprecedented capacity to cooperate with one another, as evidenced by the emergence of inter-band trade and artistic traditions and by the rapid spread of our species from our African homeland to the entire globe. What enabled different bands to cooperate is that evolutionary changes in brain structure and linguistic abilities apparently gave Sapiens the aptitude to tell and believe fictional stories and to be deeply moved by them. Instead of building a network from human-to-human chains alone—as the Neanderthals, for example, did—stories provided Homo sapiens with a new type of chain: human-to-story chains. In order to cooperate, Sapiens no longer had to know each other personally; they just had to know the same story.” (p. 19)
“Intersubjective things like laws, gods, and currencies are extremely powerful within a particular information network and utterly meaningless outside it.” (page 27)
“Contrary to the naive view, information isn’t the raw material of truth, and human information networks aren’t geared only to discover the truth. But contrary to the populist view, information isn’t just a weapon, either. Rather, to survive and flourish, every human information network needs to do two things simultaneously: discover truth and create order.” (page 37)
“While over the generations human networks have grown increasingly powerful, they have not necessarily grown increasingly wise. If a network privileges order over truth, it can become very powerful but use that power unwisely.” (page 38)
“More specifically, documents changed the method used for creating intersubjective realities. In oral cultures, intersubjective realities were created by telling a story that many people repeated with their mouths and remembered in their brains. Brain capacity consequently placed a limit on the kinds of intersubjective realities that humans created. Humans couldn’t forge an intersubjective reality that their brains couldn’t remember. This limit could be transcended, however, by writing documents. The documents didn’t represent an objective empirical reality; the reality was the documents themselves. As we shall see in later chapters, written documents thereby provided precedents and models that would eventually be used by computers. The ability of computers to create intersubjective realities is an extension of the power of clay tablets and pieces of paper.” (page 46)
“In defense of bureaucracy it should be noted that while it sometimes sacrifices truth and distorts our understanding of the world, it often does so for the sake of order, without which it would be hard to maintain any large-scale human network.” (page 54)
“But historically, the most important function of religion has been to provide superhuman legitimacy for the social order. Religions like Judaism, Christianity, Islam, and Hinduism propose that their ideas and rules were established by an infallible superhuman authority, and are therefore free from all possibility of error, and should never be questioned or changed by fallible humans.” (page 71)
“The book became an important religious technology in the first millennium BCE. After tens of thousands of years in which gods spoke to humans via shamans, priests, prophets, oracles, and other human messengers, religious movements like Judaism began arguing that the gods speak through this novel technology of the book.” (page 74)
“The dream of bypassing fallible human institutions through the technology of the holy book never materialized. With each iteration, the power of the rabbinical institution only increased. “Trust the infallible book” turned into “trust the humans who interpret the book.” Judaism was shaped by the Talmud far more than by the Bible, and rabbinical arguments about the interpretation of the Talmud became even more important than the Talmud itself. This is inevitable, because the world keeps changing. The Mishnah and Talmud dealt with questions raised by second-century Jewish shipping magnates that had no clear answers in the Bible. Modernity too raised many new questions that have no straightforward answers in the Mishnah and Talmud.” (page 81)
“The naive view of information posits that the problem can be solved by creating the opposite of a church—namely, a free market of information. The naive view expects that if all restrictions on the free flow of information are removed, error will inevitably be exposed and displaced by truth. As noted in the prologue, this is wishful thinking.” (page 91)
“In fact, print allowed the rapid spread not only of scientific facts but also of religious fantasies, fake news, and conspiracy theories. Perhaps the most notorious example of the latter was the belief in a worldwide conspiracy of satanic witches, which led to the witch-hunt craze that engulfed early modern Europe. Belief in magic and in witches has characterized human societies on all continents and in all eras, but different societies imagined witches and reacted to them in very different ways.” (page 92)
“While it would be an exaggeration to argue that the invention of print caused the European witch-hunt craze, the printing press played a pivotal role in the rapid dissemination of the belief in a global satanic conspiracy.” (page 96)
“Such claims fueled mass hysteria, which in the sixteenth and seventeenth centuries led to the torture and execution of between 40,000 and 50,000 innocent people who were accused of witchcraft.” (page 96)
“Witch hunts were a catastrophe caused by the spread of toxic information. They are a prime example of a problem that was created by information, and was made worse by more information.” (pp. 100 – 101)
“In other words, the scientific revolution was launched by the discovery of ignorance. Religions of the book assumed that they had access to an infallible source of knowledge. The Christians had the Bible, the Muslims had the Quran, the Hindus had the Vedas, and the Buddhists had the Tipitaka. Scientific culture has no comparable holy book, nor does it claim that any of its heroes are infallible prophets, saints, or geniuses. The scientific project starts by rejecting the fantasy of infallibility and proceeding to construct an information network that takes error to be inescapable. Sure, there is much talk about the genius of Copernicus, Darwin, and Einstein, but none of them is considered faultless. They all made mistakes, and even the most celebrated scientific tracts are sure to contain errors and lacunae.” (page 103)
“To summarize, a dictatorship is a centralized information network, lacking strong self-correcting mechanisms. A democracy, in contrast, is a distributed information network, possessing strong self-correcting mechanisms.” (page 119)
“However, democracy doesn’t mean majority rule; rather, it means freedom and equality for all. Democracy is a system that guarantees everyone certain liberties, which even the majority cannot take away.” (pp. 123 – 124)
“Printed newspapers were just the first harbinger of the mass media age. During the nineteenth and twentieth centuries, a long list of new communication and transportation technologies—such as the telegraph, the telephone, television, radio, the train, the steamship, and the airplane—supercharged the power of mass media.” (page 152)
“Mass media made large-scale democracy possible, rather than inevitable. And it also made possible other types of regimes. In particular, the new information technologies of the modern age opened the door for large-scale totalitarian regimes. Like Nixon and Kennedy, Stalin and Khrushchev could say something over the radio and be heard instantaneously by hundreds of millions of people from Vladivostok to Kaliningrad.” (page 153)
“Just as modern technology enabled large-scale democracy, it also made large-scale totalitarianism possible. Beginning in the nineteenth century, the rise of industrial economies allowed governments to employ many more administrators, and new information technologies—such as the telegraph and radio—made it possible to quickly connect and supervise all these administrators. This facilitated an unprecedented concentration of information and power, for those who dreamed about such things.” (page 160)
“As noted earlier, democracy encourages information to flow through many independent channels rather than only through the center, and it allows many independent nodes to process the information and make decisions by themselves. Information freely circulates between private businesses, private media organizations, municipalities, sports associations, charities, families, and individuals—without ever passing through the office of a government minister. In contrast, totalitarianism wants all information to pass through the central hub and doesn’t want any independent institutions making decisions on their own.” (page 176)
“It should be clear that hatred toward the Rohingya predated Facebook’s entry to Myanmar and that the greatest share of blame for the 2016–17 atrocities lies on the shoulders of humans like Wirathu and the Myanmar military chiefs, as well as the ARSA leaders who sparked that round of violence. Some responsibility also belongs to the Facebook engineers and executives who coded the algorithms, gave them too much power, and failed to moderate them. But crucially, the algorithms themselves are also to blame. By trial and error, they learned that outrage creates engagement, and without any explicit order from above they decided to promote outrage. This is the hallmark of AI—the ability of a machine to learn and act by itself. Even if we assign just 1 percent of the blame to the algorithms, this is still the first ethnic-cleansing campaign in history that was partly the fault of decisions made by nonhuman intelligence.” (pp. 199 – 200)
“Because I have discussed consciousness more fully in previous publications, the main takeaway of this book—which will be explored in the following sections—isn’t about consciousness. Rather, the book argues that the emergence of computers capable of pursuing goals and making decisions by themselves changes the fundamental structure of our information network.” (page 204)
“Computers could potentially become more powerful members than humans. For tens of thousands of years, the Sapiens’ superpower was our unique ability to use language in order to create intersubjective realities like laws and currencies and then use these intersubjective realities to connect to other Sapiens. But computers may turn the tables on us. If power depends on how many members cooperate with you, how well you understand law and finance, and how capable you are of inventing new laws and new kinds of financial devices, then computers are poised to amass far more power than humans.” (pp. 206-207)
“Traditionally, AI has been an abbreviation for “artificial intelligence.” But for reasons already evident from the previous discussion, it is perhaps better to think of it as “alien intelligence.” As AI evolves, it becomes less artificial (in the sense of depending on human designs) and more alien.” (pp. 217 – 218)
“The main message of the previous chapters has been that information isn’t truth and that information revolutions don’t uncover the truth. They create new political structures, economic models, and cultural norms. Since the current information revolution is more momentous than any previous information revolution, it is likely to create unprecedented realities on an unprecedented scale.” (page 219)
“In tax literature, “nexus” means an entity’s connection to a given jurisdiction. Traditionally, whether a corporation had nexus in a specific country depended on whether it had physical presence there, in the form of offices, research centers, shops, and so forth. One proposal for addressing the tax dilemmas created by the computer network is to redefine nexus. In the words of the economist Marko Köthenbürger, “The definition of nexus based on a physical presence should be adjusted to include the notion of a digital presence in a country.” (page 222)
“The computer network has become the nexus of most human activities. In the middle of almost every financial, social, or political transaction, we now find a computer. Consequently, like Adam and Eve in paradise, we cannot hide from the eye in the clouds.” (page 235)
“In a world where humans monitored humans, privacy was the default. But in a world where computers monitor humans, it may become possible for the first time in history to completely annihilate privacy.” (page 241)
“Like the Soviet leaders in Moscow, the tech companies were not uncovering some truth about humans; they were imposing on us a perverse new order.” (page 263)
“As the harmful effects were becoming manifest, the tech giants were repeatedly warned about what was happening, but they failed to step in because of their faith in the naive view of information. As the platforms were overrun by falsehoods and outrage, executives hoped that if more people were enabled to express themselves more freely, truth would eventually prevail. This, however, did not happen. As we have seen again and again throughout history, in a completely free information fight, truth tends to lose. To tilt the balance in favor of truth, networks must develop and maintain strong self-correcting mechanisms that reward truth telling. These self-correcting mechanisms are costly, but if you want to get the truth, you must invest in them. Silicon Valley thought it was exempt from this historical rule. Social media platforms have been singularly lacking in self-correcting mechanisms.” (pp. 263 – 264)
“To be clear, most YouTube videos and Facebook posts have not been fake news and genocidal incitements. Social media has been more than helpful in connecting people, giving voice to previously disenfranchised groups, and organizing valuable new movements and communities. It has also encouraged an unprecedented wave of human creativity.” (page 266)
“Both Napoleon and George W. Bush fell victim to the alignment problem. Their short-term military goals were misaligned with their countries’ long-term geopolitical goals.” (page 269)
“For Clausewitz, then, rationality means alignment. Pursuing tactical or strategic victories that are misaligned with political goals is irrational.” (pp. 270 – 271)
“The alignment problem turns out to be, at heart, a problem of mythology. Nazi administrators could have been committed deontologists or utilitarians, but they would still have murdered millions so long as they understood the world in terms of a racist mythology. If you start with the mythological belief that Jews are demonic monsters bent on destroying humanity, then both deontologists and utilitarians can find many logical arguments why the Jews should be killed. An analogous problem might well afflict computers. Of course, they cannot “believe” in any mythology, because they are nonconscious entities that don’t believe in anything. As long as they lack subjectivity, how can they hold intersubjective beliefs? However, one of the most important things to realize about computers is that when a lot of computers communicate with one another, they can create inter-computer realities, analogous to the intersubjective realities produced by networks of humans. These inter-computer realities may eventually become as powerful—and as dangerous—as human-made intersubjective myths.” (page 285)
“While experts should spend lifelong careers discussing the finer details, it is crucial that the rest of us understand the fundamental principles that democracies can and should follow. The key message is that these principles are neither new nor mysterious. They have been known for centuries, even millennia. Citizens should demand that they be applied to the new realities of the computer age. The first principle is benevolence. When a computer network collects information on me, that information should be used to help me rather than manipulate me.” (page 311)
“The second principle that would protect democracy against the rise of totalitarian surveillance regimes is decentralization.” (page 312)
“A third democratic principle is mutuality. If democracies increase surveillance of individuals, they must simultaneously increase surveillance of governments and corporations too.” (page 313)
“A fourth democratic principle is that surveillance systems must always leave room for both change and rest. In human history, oppression can take the form of either denying humans the ability to change or denying them the opportunity to rest.” (page 314)
“The most important human skill for surviving the twenty-first century is likely to be flexibility, and democracies are more flexible than totalitarian regimes. While computers are nowhere near their full potential, the same is true of humans. This is something we have discovered again and again throughout history. In the coming decades the economy will likely undergo even bigger upheavals than the massive unemployment of the early 1930s or the entry of women to the job market. The flexibility of democracies, their willingness to question old mythologies, and their strong self-correcting mechanisms will therefore be crucial assets. Democracies have spent generations cultivating these assets. It would be foolish to abandon them just when we need them most. In order to function, however, democratic self-correcting mechanisms need to understand the things they are supposed to correct. For a dictatorship, being unfathomable is helpful, because it protects the regime from accountability. For a democracy, being unfathomable is deadly.” (page 326)
“The increasing unfathomability of our information network is one of the reasons for the recent wave of populist parties and charismatic leaders. When people can no longer make sense of the world, and when they feel overwhelmed by immense amounts of information they cannot digest, they become easy prey for conspiracy theories, and they turn for salvation to something they do understand—a human.” (page 334)
“What I would like to point out here is only that democracies can regulate the information market and that their very survival depends on these regulations. The naive view of information opposes regulation and believes that a completely free information market will spontaneously generate truth and order. This is completely divorced from the actual history of democracy. Preserving the democratic conversation has never been easy, and all venues where this conversation has previously taken place—from parliaments and town halls to newspapers and radio stations—have required regulation. This is doubly true in an era when an alien form of intelligence threatens to dominate the conversation.” (page 345)
“...the first unambiguous evidence for organized warfare appears in the archaeological record only about thirteen thousand years ago, at the site of Jebel Sahaba in the Nile valley. Even after that date, the record of war is variable rather than constant. Some periods were exceptionally violent, whereas others were relatively peaceful.” (pp. 388 – 389)
“The Roman Empire spent about 50–75 percent of its budget on the military, and the figure was about 60 percent in the late-seventeenth-century Ottoman Empire. Between 1685 and 1813 the share of the military in British government expenditure averaged 75 percent. In France, military expenditure between 1630 and 1659 varied between 89 percent and 93 percent of the budget, remained above 30 percent for much of the eighteenth century, and dropped to a low of 25 percent in 1788 only due to the financial crisis that led to the French Revolution. In Prussia, from 1711 to 1800 the military share of the budget never fell below 75 percent and occasionally reached as high as 91 percent. During the relatively peaceful years of 1870–1913, the military ate up an average of 30 percent of the state budgets of the major powers of Europe, as well as Japan and the United States, while smaller powers like Sweden were spending even more. When war broke out in 1914, military budgets skyrocketed. During their involvement in World War I, French military expenditure averaged 77 percent of the budget; in Germany it was 91 percent, in Russia 48 percent, in the U.K. 49 percent, and in the United States 47 percent. During World War II, the U.K. figure rose to 69 percent and the U.S. figure to 71 percent. Even during the détente years of the 1970s, Soviet military expenditure still amounted to 32.5 percent of the budget. State budgets in more recent decades make for far more hopeful reading material than any pacifist tract ever composed. In the early twenty-first century, the worldwide average government expenditure on the military has been only around 7 percent of the budget, and even the dominant superpower of the United States spent only around 13 percent of its annual budget to maintain its military hegemony. Since most people no longer lived in terror of external invasion, governments could invest far more money in welfare, education, and health care. Worldwide average expenditure on health care in the early twenty-first century has been about 10 percent of the government budget, or about 1.4 times the defense budget. For many people in the 2010s, the fact that the health-care budget was bigger than the military budget was unremarkable. But it was the result of a major change in human behavior, and one that would have sounded impossible to most previous generations. The decline of war didn’t result from a divine miracle or from a metamorphosis in the laws of nature. It resulted from humans changing their own laws, myths, and institutions and making better decisions. Unfortunately, the fact that this change has stemmed from human choice also means that it is reversible. Technology, economics, and culture are ever changing. In the early 2020s, more leaders are again dreaming of martial glory, armed conflicts are on the rise, and military budgets are increasing.” (pp. 391 – 392)
“Populists tell us that power is the only reality, that all human interactions are power struggles, and that information is merely a weapon we use to vanquish our enemies. This has never been the case, and there is no reason to think that AI will make it so in the future. While many information networks do privilege order over truth, no network can survive if it ignores truth completely. As for individual humans, we tend to be genuinely interested in truth rather than only in power. Even institutions like the Spanish Inquisition have had conscientious truth-seeking members like Alonso de Salazar Frías, who, instead of sending innocent people to their deaths, risked his life to remind us that witches are just intersubjective fictions.” (pp. 400 – 401)
“This book has argued that the fault isn’t with our nature but with our information networks. Due to the privileging of order over truth, human information networks have often produced a lot of power but little wisdom.” (page 402)
“The good news is that if we eschew complacency and despair, we are capable of creating balanced information networks that will keep their own power in check. Doing so is not a matter of inventing another miracle technology or landing upon some brilliant idea that has somehow escaped all previous generations. Rather, to create wiser networks, we must abandon both the naive and the populist views of information, put aside our fantasies of infallibility, and commit ourselves to the hard and rather mundane work of building institutions with strong self-correcting mechanisms. That is perhaps the most important takeaway this book has to offer.” (pp. 403 – 404)