We Are All George Will
Every week there's another headline about AI detection. Universities buying software. Publishers running probability scores. Startups promising to tell you whether something was written by a human or a machine. Ninety-two percent likely AI. Flagged for review.

Is this the right response to what's happening? Detection treats writing like contraband. It assumes there's a pure human product out there that needs protecting from algorithmic contamination. The detector becomes a customs agent inspecting sentences for statistical residue.

It's already losing. Generators improve. Detectors update. Generators improve again. The surface patterns that once betrayed AI output are disappearing. Even when detection works, it doesn't work for long.

And even when it catches something — what exactly has it caught? If I draft a paragraph, run it through a model for tightening, then reshape it in my own voice, what is the detector actually measuring? If a historian dictates notes in...