Master Enframing from Within: Heidegger, Harari, and APIs
Hopefully, my last several posts (links above) have prepared us to combine two brilliant thinkers whom no one would usually attempt to discuss together: a 20th-century philosopher who was also a Nazi and a 21st-century Jewish historian who philosophizes. But that is precisely what I want to do here. It seems to me that the works I've discussed by Heidegger take on an almost collaborative quality when read alongside the concerns raised in Harari's Nexus. In brief, I think Harari describes the contemporary conditions for enframing in a way Heidegger could not have foreseen but which nevertheless accentuates Heidegger's relevance today, and that relevance, in turn, properly contextualizes Harari's concerns about AI information systems.
Heidegger's concept of "Enframing" is one of the most penetrating insights into technology ever formulated. In his essay "The Question Concerning Technology" (1954), Heidegger articulates how technology is not merely a collection of tools, but fundamentally a way of revealing the world that changes everything—including human beings—into "standing-reserve": resources to be optimized, ordered, and exploited.
"Enframing," Heidegger tells us, "is the gathering together of that setting-upon which sets upon man, i.e., challenges him forth, to reveal the real, in the mode of ordering, as standing-reserve." This dense formulation captures how modern technology doesn't simply change what we do; it transforms how reality appears to us by “setting upon” (directly impacting) how our lives are “ordered” (as standing-reserve). Under the technological gaze, rivers become hydroelectric power sources, forests become lumber supplies, and even humans become "human resources," a department/idea that never existed before the Industrial Age.
The danger in this transformation isn't the technology itself, but the way it becomes the dominant—potentially exclusive—mode through which we understand Being. When Heidegger warns that "Enframing conceals that revealing which, in the sense of poiesis, lets what presences come forth into appearance," he points to the existential risk: the loss of our ability to encounter human beings in any way other than as resources to be calculated, optimized, and utilized.
This connection between Heidegger and Harari may seem unlikely given their disparate backgrounds and historical contexts. And yet, their analyses of technology's impact on human existence reveal striking parallels. While Heidegger was responding to the industrial technologies of his era—hydroelectric dams, manufacturing systems, radio broadcasts—his analysis proves to be a great example of my concept of prescient readiness. What Heidegger could not have anticipated was how algorithms and digital infrastructure would intensify enframing to a degree far beyond the mechanical systems of his time.
As we saw previously, Nexus traces humanity's relationship with information networks from prehistoric storytelling to algorithmic ecosystems. Harari demolishes the "naïve view" that more information inevitably leads to greater truth and wisdom, demonstrating instead how information networks navigate the tension between discovering truth and creating social order, often privileging the latter.
What troubles Harari is essentially digital enframing—the reduction of human experience to data points to be analyzed, predicted, and manipulated. Social media transforms human connection into engagement metrics. Dating apps reduce romantic compatibility to algorithmic matching. Surveillance capitalism converts human behavior into prediction products. In every case, human beings become standing-reserve for data extraction and algorithmic processing.
Harari's "alien intelligence" (his preferred term for advanced “AI”) represents the culmination of this digital enframing—a technological system that doesn't just order human experience but increasingly shapes it according to non-human logic. When he points to the 2016-2017 anti-Rohingya violence in Myanmar as "the first ethnic-cleansing campaign in history that was partly the fault of decisions made by nonhuman intelligence," he breaths life (or death) into the dangers Heidegger theorized.
Unfortunately, despite their penetrating analyses of technological enframing, both thinkers ultimately arrive at solutions that appear inadequate to the scale of the challenges and transformations they describe. That does not disparage their fine analyses; it merely acknowledges that it is far easier to define a problem of this magnitude than it is to resolve it. Nevertheless, I want to attempt that here.
Heidegger invokes Hölderlin's poetic line: "But where danger is, grows / The saving power also." He suggests this "saving power" might emerge through a shift in human awareness toward "meditative thinking" and in our relationship to technology—a recognition of enframing as just one mode of revealing rather than the only way of relating to reality. While his concept of "releasement" offers a philosophical stance toward technology—using technical devices while not allowing them to claim us exclusively—his proposed path often remains frustratingly vague, bordering on mystical. When he writes about waiting for a "more essential revealing," he offers little concrete guidance for navigating our technological condition other than the rather vague suggestion of "meditative thinking and poetizing."
Moreover, there seems to be a contradiction between Heidegger's penetrating analysis of enframing as an ontological condition and his suggestion that we can somehow maintain an inner distance from it. In "Discourse on Thinking" (1959), Heidegger makes the astonishingly naïve claim that we can "use technical devices, and yet with proper use also keep ourselves so free of them, that we may let go of them any time." This statement is profoundly at odds with reality and contradicts his own deeper insights about technology.
If technology truly shapes our understanding of Being itself—as his concept of enframing contends—then how could we possibly "let go" of it at will? Digital technologies form the essential infrastructure of our existence today. Banking, healthcare, education, governance, and social relationships are now thoroughly mediated by technological systems. One cannot simply "let go" of electricity, telecommunications, or digital information systems, with all their consumption, convenience, and entertainment, without severe consequences. The very suggestion reveals a failure to reckon with how deeply technology has transformed not just what we do, but who we are.
This "happy-go-lucky" portion of Heidegger's discourse reveals a startling lack of imagination to the very forces he elsewhere analyzes with such profound insight. His claim that technology "does not affect our inner and real core" contradicts our contemporary understanding of how deeply technologies shape cognition, attention, social dynamics, and even identity formation. Nothing could be further from the truth than suggesting we can simply leave technology alone and it won't bother us. This voluntaristic solution undermines the power of his own analysis of enframing as an ontological condition.
Similarly, Harari, after masterfully documenting the revolutionary nature of our current information crisis in Nexus, offers solutions that seem strikingly conventional given the transformative challenges he describes. His four “democratic principles”—benevolence, decentralization, mutuality, and flexibility—while valuable, appear insufficient for addressing the unprecedented challenges of algorithmic reality. “Mundane” work to re-establish “self-correcting mechanisms” is his solution. How's that working for you?
Both thinkers seem to recoil from fully embracing the implications of their own analyses. They identify transformations so fundamental that they alter what it means to be human, yet propose responses that presume we can somehow restore or protect traditional human values against technological encroachment. This represents a failure of imagination—a reluctance to consider that the solution might require not the preservation of humanity against technology but the evolution of humanity within it.
Let's address these shortcomings with a novel idea. An idea worthy of the transformational era in which we find ourselves instead of the half-measures and vagueness offered us by Heidegger and Harari. It all rests on one simple fact.
You cannot do anything online, digitally or virtually, from ordering dinner to communicating with co-workers, without algorithms. Algorithms oversee and convert everything you do into data to be used by governments and corporations. After computing power itself, of course, algorithms are the building blocks of our already existing digitally enframed world. What's more, the algorithms that watch you most often define you as data out there in the algorithmic universe. What does this universe look like?
Let's engage in a thought-experiment. I am going to assign 100 points for all the algorithms in the world today and divide them among various sectors of enterprise and power. It would break down something like this:
Corporate Sector: 65 points
  Big Tech giants (Google, Amazon, Microsoft, Meta, Apple): 47 points
  Financial institutions: 8 points
  Other commercial entities: 10 points
Government Sector: 30 points
  US government (military, intelligence, research): 10 points
  Chinese government: 10 points
  European governments: 5 points
  Other governments: 5 points
Academic/Research Sector: 5 points
  Universities and research institutions: 4 points
  Medical/healthcare institutions: 1 point
Individuals: 0 points
This distribution reveals a startling reality. Despite our dependence on algorithms for virtually every aspect of our lives, individuals have essentially zero algorithmic sovereignty. We exist as data points in a massive, immersive algorithmic universe designed to serve corporate profits and government control, with no meaningful algorithmic power of our own. We are the battery power for the Matrix, to use a now well-worn analogy.
What both Heidegger and Harari miss is the possibility that the answer to enframing might be found within technology itself. Not in a poetic awareness few really possess, nor in democratic ideals most people no longer think about, but rather in reconfiguring technology to serve human flourishing. Instead of their vagaries and inadequacies, I want to offer the concept of the Aletheic Personalized Interface (API) as a potential emergent solution that addresses the fundamental problems both thinkers identify.
The API represents a radical inversion of our current algorithmic paradigm. Instead of humans serving as standing-reserve for corporate and governmental algorithms, APIs would put at least some algorithmic power in service of individual human beings. These would be personalized, customized, secure, and non-autonomous algorithms designed to navigate the enframed digital world on behalf of their human owners.
The term "Aletheic" deliberately invokes Heidegger's use of aletheia (truth as unconcealment or revealing), suggesting that these interfaces would serve as conduits for revealing the world in ways that extend beyond mere utility or resource extraction. They would be "personalized" in the deepest sense—not just reflecting user preferences (the absurb sense of “personalized algorithms” today) but embodying their privacy, values, goals, and ways of being in the world. Most importantly, they would be “revealing,” that is, personal algorithms would recognize, negotiate, navigate, summarize, protect, inform us about other algorithms intruding into our lives (into the API). For the first time in the digital age, humans would be able to “see” the countless algorithms that are watching them intensely right now.
Online "consent" would mean something completely different than it does today. APIs could truly inform you of the "terms of agreement" for various websites, allowing you to make a more informed choice. They could fully understand such "terms" and explain (reveal) them to you. Does anyone really read and understand what they click away when they agree to another algorithm's (or set of algorithms') "terms and conditions"? Now, instead of clicking "OK" just to move things along, your API could evaluate, summarize, and explain them to you.
In fact, you could pre-program the API to do it for you. But, beyond even that, APIs would constantly monitor every "terms and conditions" agreement to verify the activity of the algorithms they authorize. Nothing gets past the API to the human person unless the human person prefers it that way. They could opt you out of emails you don't want or that could be malicious. They could protect your data from cyberattack.
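As an illustration of this consent-gatekeeping idea, here is a minimal sketch, again in Python. The clause patterns, the policy fields, and the decision labels are my own hypothetical stand-ins; a real API would presumably use a language model rather than keyword matching to "fully understand" the terms.

```python
from dataclasses import dataclass

# Hypothetical red-flag clauses and the policy switch each one maps to.
RED_FLAGS = {
    "sell your data": "allow_data_resale",
    "third-party advertisers": "allow_third_party_sharing",
    "waive your right": "allow_arbitration_waiver",
}

@dataclass
class UserPolicy:
    """The owner's pre-programmed preferences that the API enforces."""
    allow_data_resale: bool = False
    allow_third_party_sharing: bool = False
    allow_arbitration_waiver: bool = False

def evaluate_terms(terms_text: str, policy: UserPolicy):
    """Return a decision plus a plain-language summary of flagged clauses."""
    text = terms_text.lower()
    violations = [
        phrase for phrase, switch in RED_FLAGS.items()
        if phrase in text and not getattr(policy, switch)
    ]
    decision = "DECLINE_AND_ASK_OWNER" if violations else "ACCEPT"
    summary = [f'These terms would "{v}", which your policy forbids.' for v in violations]
    return decision, summary

# Illustrative usage with a made-up excerpt:
terms = "By continuing, you agree that we may sell your data to partners."
decision, summary = evaluate_terms(terms, UserPolicy())
print(decision)            # DECLINE_AND_ASK_OWNER
print(*summary, sep="\n")
```

Nothing is clicked away blindly: either the terms satisfy your pre-set policy, or the API declines and shows you exactly why.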
They could do even more. All your communications could be generated from an LLM-type interface that could even be trained to assist you with work, handling practically any administrative task, allowing you to manage your digital/virtual enframed life by exception rather than getting bogged down in the mindless detail of mountainous personal data, unread emails, forgotten commitments, and touching base with family and friends. They could be trained to do virtually everything you do today, freeing you up far more completely than you have ever envisioned while allowing you, once again, to manage your life by exception...or not. The thing is, now it is your choice.
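Here is a minimal sketch of that "manage by exception" loop, assuming hypothetical item categories and a stub auto-handler. The point is only that routine traffic is absorbed by the API while genuine exceptions reach the human owner.

```python
# Categories the owner has marked as routine: the API handles these itself.
ROUTINE = {"newsletter", "receipt", "calendar_invite", "status_update"}

def triage(items):
    """Split incoming digital traffic into auto-handled noise and exceptions."""
    handled, exceptions = [], []
    for item in items:
        if item["category"] in ROUTINE:
            handled.append(item)       # archive, auto-reply, file away, etc.
        else:
            exceptions.append(item)    # only these reach the human owner
    return handled, exceptions

# Illustrative usage with a made-up inbox:
inbox = [
    {"category": "newsletter", "subject": "Weekly digest"},
    {"category": "receipt", "subject": "Your order shipped"},
    {"category": "personal", "subject": "Call me when you can"},
]
handled, exceptions = triage(inbox)
print(f"Handled quietly: {len(handled)}; needs you: {len(exceptions)}")
for item in exceptions:
    print("EXCEPTION:", item["subject"])
```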
What if every person online owned a personal, customizable, completely secure algorithm that dealt with all the other algorithms out there in cyberspace? Algorithms for each person that could be as powerful as any owned by governments and corporations today? It would literally change the world and transform your life.
The API proposal acknowledges we cannot escape the algorithmic infrastructure of modern life. We cannot function without the internet and other parts of the artificial structure of reality. Our lives as we actually live them would be impossible without algorithms overseeing us. The API doesn't promise liberation from technological enframing but some level of mastery within it.
To be clear, APIs would not preserve traditional humanism against technological encroachment. Instead, they would enable the evolution of a new form of being human within algorithmic structures—what we might call "algorithmic humanism."
This transformation acknowledges Heidegger's insight that technology fundamentally alters our relationship to Being itself. APIs wouldn't protect some pre-technological essence of humanity, as Heidegger vaguely imagines; they would allow us to evolve our humanity within technological structures, turning enframing itself into a vehicle for enhanced agency.
In practical terms, APIs would function as cognitive extensions, autonomously managing routine digital tasks while preserving human decision-making for exceptional circumstances. They would intercept manipulative algorithms deployed against us, providing transparency where now everything is hidden and allowing us to make choices with full information. APIs are our eyes into the full extent of the vast algorithmic universe.
This approach directly addresses Harari's concerns about algorithms knowing us better than we know ourselves. Rather than competing with algorithmic systems for self-knowledge, we would incorporate them into an extended self—algorithmic systems aligned with our values and goals rather than with corporate profit motives (but without threatening their continued profitability).
The API concept resonates with certain aspects of Heidegger's notion of "releasement," which he develops in "Discourse on Thinking." While his implementation is deeply flawed, the philosophical stance itself—finding a middle path between rejection and surrender—has merit. Releasement represents a changed, somewhat strange, stance toward technology that avoids both naive rejection and passive acceptance.
Where Heidegger goes wrong is in suggesting this releasement can be achieved through mere attitude or will. When he claims we can "let technical devices enter our daily life, and at the same time leave them outside," he fails to recognize that technological systems have become constitutive of human existence rather than optional additions to it. The API concept corrects this naivety by acknowledging that we need new technological infrastructure to navigate our new reality—we need vigilante algorithms to manage algorithms.
Unlike Heidegger's unrealistic suggestion that we can simply "let go" of technology when we wish, APIs would provide practical tools for maintaining agency within technological systems. They wouldn't pretend we can stand outside technology, but would instead give us some degree of meaningful control from within it.
From a Heideggerian perspective, APIs represent a potential "saving power" that emerges not outside technology but within it. They would transform our relationship with technology from passive consumption to active mastery (or at least a revealed way of navigating it), creating what Heidegger called a "free relationship" to technology.
For Harari, APIs would address his concerns about algorithmic governance by ensuring that personal algorithms can both shield their human owners from, and inform them about, the algorithmic universe currently monopolized by corporate and governmental interests. APIs would actually provide the "self-correcting mechanism" he calls for in Nexus, but in a more direct and personalized form than institutional safeguards alone could offer. This is the idea that was "provoked" in me when I read his thought-provoking book.
APIs offer a counterbalance to the data monopolies Harari fears will dominate the future. They represent a democratization of algorithmic power, countering the concentration of data processing in the hands of corporations and governments. Those with all the algorithms won't like that, of course, and will fight to keep their massive monopoly. I'll deal with that later. I realize this idea is fully idealistic.
APIs require no technological breakthroughs. Amazon's algorithms are already powerful enough to be repurposed for an individual's use. We know exactly how to design a customizable algorithm. They already exist – for everyone but individuals. The challenge is a manifold, though perfectly scalable, increase in computing power to handle billions of customized algorithms. This is an immense challenge of scale and infrastructure, not technical possibility—no less (or more) possible than our going to Mars or establishing a colony on the Moon. Though I admit the probability of APIs actually coming about is low, there is no technical reason this won't work.
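A rough back-of-envelope illustration of that scale claim, using numbers that are purely assumed for the sake of arithmetic (per-user state, queries per day, and cost per query are placeholders, not estimates from any source):

```python
# All constants below are assumptions for illustration only.
users = 5e9                   # assumed people online worldwide
state_gb_per_api = 10         # assumed personal model/state per API, in GB
queries_per_day = 200         # assumed API interactions per user per day
cost_per_1k_queries = 0.01    # assumed dollars per 1,000 queries

total_storage_eb = users * state_gb_per_api / 1e9      # exabytes of state
daily_cost = users * queries_per_day / 1000 * cost_per_1k_queries

print(f"Storage: ~{total_storage_eb:.0f} EB of personal state")
print(f"Compute: ~${daily_cost / 1e6:.0f}M per day at assumed prices")
```

Under these assumed numbers, the result is roughly 50 exabytes of personal state and tens of millions of dollars of compute per day: enormous, but an engineering and funding problem rather than a question of whether the algorithms can exist.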
The most profound aspect of the API concept is its recognition that what it means to be human will change. That will be a major source of opposition to the advancement of personalized algorithms. Most of the time, humans don't want change, especially to their fundamental expectations. But, now and then, for no single reason, they change. History shows this. Rather than desperately clinging to an increasingly untenable humanism, APIs would enable the evolution of "algorithmic selves"—an integration of human identity with technological systems that expands rather than diminishes human agency. APIs would serve as personal firewalls through which individuals navigate, stay informed, organize, work, and find entertainment in the digital and virtual realms of human reality.
These realms clearly already exist; only today all the aletheia, all of the revealing and un-concealment, resides with corporations and governments. We are literally living and working inside a reality where algorithms completely control what we see and monitor our private use by totalitarian means. I think this is the very essence of enframed Being.
The API transformation recalls Heidegger's insight that technology is not simply something humans create and use, but a fundamental way in which Being reveals itself in our era. The concept doesn't reject this revealing but transforms our relationship to it, allowing for a more conscious and intentional participation in technological ordering.
For Harari, whose work has consistently explored humanity's capacity for self-transformation—from his books Sapiens (2015) to Homo Deus (2017) to Nexus (2024)—APIs offer a path that recognizes the inevitability of change while preserving human values and agency within it. Instead of fearing the "alien intelligence" of AI, we would develop hybrid systems that extend human intelligence while remaining under human control.
The unlikely intellectual partnership I've constructed between Heidegger and Harari reveals how the philosophical concepts developed in the mid-20th century can illuminate our 21st century digital reality. Harari's analysis of information networks and algorithmic surveillance gives concrete form to the enframing that Heidegger theorized, while Heidegger's philosophical framework helps us understand the deeper implications of the technological developments Harari describes.
The API represents a philosophical coup—a way of turning enframing against itself. Instead of being passive subjects of technological ordering, humans would become active directors of it. We would remain "standing-reserve" in Heidegger's sense, but as "conscious, informed, and strategically effective standing-reserve."
This approach acknowledges that we cannot escape the algorithmic infrastructure as it exists today. The internet, digital platforms, and algorithmic systems have become essential to contemporary existence and will likely only become even more ingrained as we evolve. But through APIs, we might transform our relationship to these systems from one of blind submission to active mastery.
The API solution embraces what both Heidegger and Harari seem reluctant to fully confront: that the human relationship with technology has always been one of co-evolution rather than opposition or even control. Throughout history, major technological shifts—from writing to agriculture to industrialization—haven't just changed what humans do but who humans are.
Rather than fearing this transformation or trying to contain it within its present corporate/government construct, the API concept suggests we lean into it strategically and purposefully. It signals that the "saving power" Heidegger quoted from Hölderlin's poem truly comes from the very danger itself. The answer is not to "let go" of technology; that is impractical and simplistic. Rather, the saving power will emerge through an embrace of the danger, transforming the construct from within.
The API concept is a profound bet on human adaptability, creativity, and agency, qualities that both Heidegger and Harari seem to underestimate in their analyses. The solution to technological enframing isn't found in institutional safeguards, after all, but in the historically resilient human capacity for innovation, improvisation, and self-reinvention in the face of existential challenges.
The algorithmic self is as transformational as it is technically possible. Our human Being will change. It already has. No one asked your permission. They obviously didn't need it. But...it is still possible to preserve human values and agency by directing technology toward individual human flourishing. This vision reconciles the strange pairing of a Nazi-era philosopher with a contemporary Jewish historian, demonstrating how even the most unlikely intellectual partnerships can yield insights vital for navigating our enframed lives.
(Assisted by Claude. Illustration by ChatGPT.)