We Are All George Will
Every week there's another headline about AI detection. Universities buying software. Publishers running probability scores. Startups promising to tell you whether something was written by a human or a machine. Ninety-two percent likely AI. Flagged for review.
Is this the right response to what's happening?
Detection treats writing like contraband. It assumes there's a pure human product out there that needs protecting from algorithmic contamination. The detector becomes a customs agent inspecting sentences for statistical residue. It's already losing. Generators improve. Detectors update. Generators improve again. The surface patterns that once betrayed AI output are disappearing. Even when detection works, it doesn't work for long. And even when it catches something — what exactly has it caught? If I draft a paragraph, run it through a model for tightening, then reshape it in my own voice, what is the detector actually measuring? If a historian dictates notes into software that structures them, is that text less human?
Detection obsesses over purity. Writing has rarely been pure. Editors shape prose. Friends read drafts. Spellcheck intervenes. The solitary unaided author was always mostly a myth. Detection is a losing arms race built on a false premise. Useful in narrow institutional settings — academic integrity, legal filings, journalism. Culturally? The line is already dissolving.
The deeper issue isn't that AI generates text. It's that AI generates facks at industrial scale. Authentic-seeming content with no authentic source behind it. The George Will channel was handcrafted. What's coming is automated. Detection can't keep up with a factory floor that never closes.
Yuval Noah Harari is having a bit of an existential crisis right now. His proposed rule: every AI should identify itself as an AI. Simple. Clean. Regulatory. The classic Enlightenment move — if a thing is dangerous, label the thing.
That sounds right until you think about it for thirty seconds.
This is what I'd call the Einstellung Effect. A mind so practiced at solving one type of problem that it reaches for the same tool regardless of whether the tool fits. Harari's solution space is "better regulation, same old same old." Label the AI. Police the boundary. Protect the public.
The problem isn't that regulation is useless. It isn't. The problem is that it doesn't touch the actual instability. Which is not that AI can speak. It's that identity can now be simulated. Authority can be skinned. Voice can be detached from body. This has always been true to some extent — forgery and identity theft are all too common. But AI is amplifying all of that in unprecedented ways.
You can regulate the label. You cannot regulate the loss of the thing the label was supposed to point to.
McLuhan said the medium is the message. Part One argued his dictum has been reversed — the message is now the medium. When George Will's voice becomes the delivery system for someone else's content, the content becomes the medium.
For most of human history, individual authenticity wasn't the currency. Belonging was. You were a son of, a member of, a citizen of, a believer in. Your credibility came from where you stood in the hierarchy. Not from your singular interiority.
The idea that every person must cultivate a unique intellectual fingerprint that stands independently of tribe is historically recent. And the AI era is making it existentially necessary in a way it never was before — a demand nothing in our evolutionary history prepared us for.
Not regulation. Not detection. Something more demanding than either: a new human obligation to establish and maintain verifiable personal authenticity. The burden has shifted. Not onto the machines. Onto you.
What does that actually look like? Authenticated digital credentials tied to your identity. A verifiable public record of your intellectual output — essays, videos, posts — cryptographically signed and traceable, so the question stops being "was this written by AI?" and becomes "did this come through my verified channel?" That's not science fiction. Public-key cryptography already does exactly this. The technology exists. What doesn't yet exist is the cultural expectation that serious people will use it.
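That loop — sign what you publish, verify anything claiming to be yours — is mechanically simple. Here's a minimal sketch. Hedged: a real system would use asymmetric signatures (Ed25519, say), so anyone could verify without holding your secret; this stdlib-only toy uses an HMAC instead, and the key and author name are invented placeholders.

```python
# Toy authorship envelope. ASSUMPTION: real systems use asymmetric
# signatures (e.g. Ed25519); an HMAC stands in to keep this stdlib-only.
import hashlib
import hmac
import json
import time

SECRET = b"stand-in-for-a-real-private-key"  # hypothetical key

def sign(text: str) -> dict:
    """Wrap a piece of writing in a timestamped, signed envelope."""
    record = {"author": "example-author", "body": text, "ts": int(time.time())}
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return record

def verify(record: dict) -> bool:
    """True only if the envelope was signed by the key holder and untampered."""
    unsigned = {k: v for k, v in record.items() if k != "sig"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record.get("sig", ""), expected)

essay = sign("An essay I actually wrote.")
forgery = dict(essay, body="Words I never said.")
# verify(essay) is True; verify(forgery) is False
```

The point isn't the cryptography. It's the default the cryptography creates: unsigned text bearing your name is, by definition, not yours.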
Where Harari wants to regulate the tools, I'm suggesting we build something in ourselves that the tools can't replicate. Let's raise the bar on authenticity itself.
The self-as-brand is early-Modern — a beginning, not a destination. The Enlightenment cracked open individual identity as a concept. The Romantic era tried to make it feel noble. Social media mass-produced it as a performance. Now everyone is expected to have a voice, a perspective, a take. A personal intellectual fingerprint that stands independently of tribe.
But this isn't the arrival of the Modern world. It's the prequel to it. What's coming — what the AI era is forcing into existence — is something more demanding than personal branding. Something more structural. The actual Modern world will require individuals to do something humans have never done at scale: maintain verifiable, continuous, accountable intellectual identity over time.
Most people aren't ready for that. Not because they're incapable, but because nothing in our evolutionary history prepared us for it. We evolved for small-group reputation. Oral memory. Local trust networks where your neighbors vouched for you and you vouched for them. Now we're expected to project coherent identity across digital archives that never forget, to audiences we'll never meet, in an environment where our voice can be cloned and our face can be swapped. Nobody sent the memo that this was now required. But the requirement is here regardless.
The pattern is consistent across every major technology that has shifted the ground under human identity.
Writing changed memory. The printing press changed authority. Photography changed truth claims. Radio changed charisma. Television changed presence. The internet changed identity.
Each time, the initial panic was about the technology. Each time, the actual adaptation was human. Photography didn't destroy truth — it forced journalism to develop new standards of verification. The internet didn't destroy identity — it moved credibility from institutional endorsement to reputation networks, blue checks, citations, cross-references. Imperfect, but adaptive.
AI changes authorship. And the adaptation required is the same as always. Not retreating. Not regulating the technology into submission. Renegotiating what authenticity means under new conditions.
This isn't unprecedented. It just feels that way because we're inside it.
Other people are circling the adaptation from different angles. Vitalik Buterin and the proof-of-personhood crowd want cryptographic systems that verify you are a real, unique human. Adobe and Microsoft are pushing the Content Authenticity Initiative — cryptographically signed media that carries a verifiable chain of custody. Jaron Lanier has been arguing for years that anonymity and frictionless copying erode human agency, that we need more identity solidity, not less.
These are all technical approaches. Useful. Necessary even. But they solve only half the problem. Blockchain can verify that a given key signed a given document at a given time. It cannot verify that the human behind the key has any intellectual continuity whatsoever. It authenticates the signature. It says nothing about whether there's a coherent mind behind it. The technical layer and the human layer are both required. Most people are only talking about the first one.
Authenticity in the AI era isn't a vibe. It's continuity over time. The way your questions evolve rather than reset. The way your thinking deepens instead of oscillating with trends. The record of risks you've taken publicly. The mistakes you've owned. The contradictions you've worked through.
That's very hard to fabricate at scale. You can fake tone. You can fake cadence. You can produce a convincing essay. What's genuinely difficult is sustaining a coherent intellectual organism across years — responding to events in ways that remain recognizably you while still growing.
The fear is that AI erodes human authenticity. But the real erosion comes from humans who never cultivated authenticity in the first place. If your public voice is already derivative, trend-chasing, borrowed from whoever you read last week — it's trivially replaceable. Not by malicious actors. Just by a decent language model with a good prompt.
Deepfakes parasitize existing authenticity. They need something solid to imitate. If there's nothing solid there, no impersonation is even necessary. Your facks were already doing the job.
Not just verifiable credentials — though those matter. Not just better regulation — though that has a role. The deeper requirement is that individuals become custodians of their own intellectual continuity. That they treat their public voice as something worth building and maintaining across time. That they stand behind what they say.
This won't be universal. Many people won't bother. That's fine — pluralism includes indifference. But for anyone who wants their words to carry weight in a world of synthetic voices, the burden has shifted. Inward.
Literacy once separated those who could participate in public discourse from those who could not. Digital literacy did the same. Epistemic literacy — the ability to navigate simulation, to maintain traceable identity, to take responsibility for your intellectual output — may be the next foundational skill. AI doesn't destroy authenticity. It raises the bar for it. The humans who meet that bar will become more distinct, more valuable, harder to replace. The ones who don't will blur into the noise — indistinguishable from the generative systems they were afraid of.
Most people building personal AI tools are optimizing for convenience. Smoother friction. Faster drafts. Better scheduling. The AI as a productivity layer on top of existing life. That's genuinely useful and I'm not dismissing it. But it's not the most interesting thing you could build. And in a world where authority is downloadable and voices are synthetic, it may not be the most important thing either. There's a different function worth thinking about. Less discussed. More consequential.
If the problem is that authority can be skinned and streamed — that voice can be detached from body, that gravitas is now a downloadable aesthetic — then the response isn't prohibition. You can't ban masks. You can only strengthen the face behind them.
Which means the personal AI assistant isn't primarily a productivity tool. It's a sovereignty layer. Your Alethaic Personalized Interface (API) — whatever form it takes — should sit between you and the information environment. It filters. It protects attention. It manages your engagement with The Complex on your terms rather than the platform's. That part is already understood by some people building in this space. The authorship function isn't.
Your API should cryptographically sign everything you intentionally publish. Not as a technical novelty. As hygiene. The same way you'd put your name on something. Except unforgeable. It timestamps your outputs. It verifies incoming media claiming to be from others. It flags unsigned content impersonating known entities. It manages provenance in the background, invisibly, the way spellcheck manages spelling. Anything bearing your name that doesn't carry that signature isn't you. Full stop. The question in the world this creates isn't "Was this written by AI?" It's "Is this signed?"
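Concretely, the incoming-media side of that job reduces to a triage function. A hedged sketch — the registry, author handle, and HMAC scheme are all invented stand-ins for real public-key verification:

```python
# Triage incoming items that claim known authorship. ASSUMPTION: a real
# API would check asymmetric signatures against published public keys;
# a shared-key registry stands in here to stay stdlib-only.
import hashlib
import hmac

KEY_REGISTRY = {"george.will": b"hypothetical-key"}  # known author -> key

def expected_sig(author: str, body: str) -> str:
    """Recompute the signature a genuine item from this author would carry."""
    return hmac.new(KEY_REGISTRY[author], body.encode(), hashlib.sha256).hexdigest()

def triage(item: dict) -> str:
    """Classify an item as 'verified', 'forged', or 'unsigned'."""
    author, sig = item.get("author"), item.get("sig")
    if author in KEY_REGISTRY and sig:
        ok = hmac.compare_digest(sig, expected_sig(author, item["body"]))
        return "verified" if ok else "forged"
    return "unsigned"  # a known name carrying no signature gets flagged

column = {"author": "george.will", "body": "A real column.",
          "sig": expected_sig("george.will", "A real column.")}
fake = {"author": "george.will", "body": "Outrage he never wrote.",
        "sig": "00" * 32}
bare = {"author": "george.will", "body": "No signature at all."}
# triage(column) == "verified"; triage(fake) == "forged"; triage(bare) == "unsigned"
```

Run in the background, invisibly, this is the spellcheck of provenance.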
That's a cleaner epistemic world.
The technical architecture matters. But it's secondary to this. You want your API to be as much like you as possible. Not a generic assistant. Not Siri with your calendar. Something cultivated. Trained on your actual intellectual history — your patterns of thought, your values, your thresholds, the trajectory of your thinking across time. Not your style as a template but your mind as an organism. An epistemic twin that evolves with you.
If it's built that way, authentication isn't a bolt-on feature. It's intrinsic. The API knows what you would plausibly say because it has continuity with your past expressions. It can refuse to authorize outputs that don't cohere with your established arc unless you explicitly override it. It becomes harder to impersonate you convincingly not because impersonation is illegal but because the real you is too specific, too continuous, too integrated to fake at scale. Deepfakes need something vague to work with. A cultivated intellectual organism isn't vague.
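As a toy illustration of that coherence gate — hedged heavily, since a real "epistemic twin" would model far more than vocabulary — here's a sketch where cosine similarity between word-frequency vectors stands in for "does this draft cohere with the author's established arc?" The corpus, drafts, and threshold are all invented for the example.

```python
# Toy coherence gate. ASSUMPTION: a real system would model an author's
# thinking, not just word choice; cosine similarity over word counts and
# the 0.2 threshold are illustrative only.
import math
from collections import Counter

def vec(text: str) -> Counter:
    """Bag-of-words frequency vector for a text."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse frequency vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def authorize(draft: str, corpus: list, threshold: float = 0.2) -> bool:
    """Refuse to sign a draft that doesn't cohere with past writing."""
    profile = vec(" ".join(corpus))
    return cosine(vec(draft), profile) >= threshold

past = ["media ecology and the authenticity of public voice",
        "authenticity requires continuity of public voice over time"]
on_arc = authorize("authenticity of voice requires continuity", past)
off_arc = authorize("crypto pump signals buy now", past)
# on_arc is True; off_arc is False
```

The override stays with the human; the gate just makes drift visible before it ships under your name.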
Right now, as noted, most people keep the AI downstream of the person: a productivity tool layered on top of existing life.
What I'm describing inverts that relationship in a specific way. The AI becomes the perimeter — the interface between your integrated self and the synthetic environment you're navigating. It doesn't replace your thinking. It protects the conditions under which your thinking can remain yours.
Humans have always externalized cognitive burdens into tools. Writing externalized memory. Clocks externalized temporal precision. Maps externalized spatial reasoning. None of those tools replaced the human capacity they served. They extended it.
The personal API extends identity. Specifically the kind of identity that the actual Modern world is beginning to require — verifiable, continuous, accountable across time.
None of this works without the human layer. The API doesn't create authenticity. It preserves the trace of intentional authorship. The coherence still has to come from you. The integration still comes from lived practice — from having actually built something over time that is recognizably, specifically, continuously you. Without that, you're just signing noise. The sovereignty layer protects what's already there. If nothing is there — if your public voice is derivative, trend-chasing, borrowed from whoever you read last week — the API just authenticates the emptiness faster.
A fake George Will delivered real outrage and I almost didn't notice. The floor tilted when I clicked on the channel page. That tilt was a recognition moment — not just about media ecology, but about what's coming. We are in the early stages of a world where the self isn't assumed. It's built. Verified. Maintained. Claimed. And yet, ultimately, something that can be simulated. Replicated.
The message is now the medium, and in that world, facks are the default currency. Frictionless. Pre-assembled. Ready to move you before you think to check. The only counter-move is being too real to replicate. Too specific. Too continuous. Too integrated into an actual history that preceded this morning.
That's what George Will actually has. Decades of it. The simulated version is compelling precisely because the real thing exists to imitate. It's a parasite that needs a host.
We are all George Will. Except without his notoriety, our fack channels can proliferate unnoticed, until there are different versions of each of us all over the place.
Something like the API would meet this emerging challenge.