Suddenly you are not the main character of your own story
Mass unemployment due to AI is coming. This isn’t a doomer scenario anymore. Major FAANG companies are asking managers what percentage of work can be automated; “dark” factories in China run with minimal staff and save on lightbulbs by removing humans—and the lights—from their warehouses. I talked about this in Slop Is Enough, arguing that mediocre agents are sufficient to replace humans in most of our tasks. But beyond mediocrity, this points to something deeper. As AI gets faster and better, you’re left with a few choices:
- Learn a craft that won’t be automated (yet)
- Learn how to use AI in your day-to-day workflow
- Retire and buy an egg farm
If you choose #2—which seems safest and requires the least change—something else happens. For coders, it means delegating the fun parts of the craft to an AI and keeping the boring ones (writing specs, checking LLM outputs, sometimes doing code review of AI slop). For most white-collar work, it’ll be the same. The high-value work shifts away from you. You’ll try to find comfort in the MBA bullshitters who insist the remaining 5% is where the moat is, but don’t doubt it: you just handed your role and usefulness to a model.
Anthropic researchers: “Even if AI progress completely stalls today and we don’t reach AGI… the current systems are already capable of automating ALL white-collar jobs within the next five years”
— NIK (@ns123abc) May 23, 2025
It’s over.
The Sloppening will lift millions out of mediocrity and help the imbeciles become yesterday’s above-average workers. For everyone else, it means work as we knew it has ended. The feeling of being lost while trying different paths to a solution gets delegated to what they now call “test-time compute,” and the endgame is obvious: a world where you don’t have to do anything because the AI takes those paths without you. If you’re lucky, a few “thoughts” will surface in the reasoning traces, but those are just scraps to help you claw for meaning in what your sentient AI is doing instead of you.
Believers and the people who control the narrative will tell you that you have more agency now. But guess what: it’s just a transitional artifact, and it won’t last. Humans are still useful only because LLMs have made waiting tolerable again, propping up that agency story. They will tell you that the marginal cost of software is lower so you can build more. They’ll say that things that take two weeks will only take a day. They don’t mention that soon inference will be so fast that even the waiting won’t need you: sub-second LLMs will generate software as part of their answers and fully conceal that any code is running to reach a conclusion. You, the bottleneck, will be totally erased from the equation.
It’s like realizing you’re not the main character of your own story and taking a back seat to everything that mattered to you. If your goal was to learn, progress, achieve greatness in the ways that used to be valued… that’s gone. In a generation or two, what defines us as human, the desire to learn, experiment, think, run trial and error, fail, and get stuck on a problem, will be designed out. We will make fun of the few people who still choose that path, and when the last of them falls quiet, mankind’s inner monologue will simply fade away, unnoticed and unmissed.