The other day I was talking with a colleague about our experiences writing with AI. We compared notes on when it had genuinely helped versus when it had been counterproductive or failed to produce work of sufficient quality. As we talked, I realised something unexpected. My problem wasn’t really about output quality. It was about infinite malleability.
When you generate a version of something with AI, producing a second version is trivially easy. And a third. And a fourth. The temptation is to keep iterating rather than moving on with the actual task. In this way, AI becomes a new source of procrastination. A tool that promises to boost productivity can end up making you less productive, consuming more time than writing the thing yourself would have taken.
I’ve started calling this phenomenon the Sisyphean Plateau.
Traditional writing had natural brakes built in. You’d get tired. Retyping was tedious. Time was limited. These frictions were annoying, but they were also protective – they forced you to stop eventually, whether you felt ready or not. AI removes all of these brakes. The constraint shifts from “can I afford another revision?” to a much harder question: “do I actually know what I want?”
The cheapness of iteration exposes something uncomfortable. When revision costs nothing, you discover that clarity of intent was always the real bottleneck. And without that clarity, you can iterate forever.
The Sisyphean Plateau isn’t just about lacking discipline or being easily distracted. There’s a behavioural psychology mechanism at work that makes this trap genuinely difficult to escape.
Each AI revision functions like pulling a slot machine lever. Sometimes you get a genuine improvement (a better phrase, a clearer argument, a more elegant structure). Sometimes nothing meaningful changes. Sometimes it actually gets worse. This unpredictability is the key. Behavioural psychologists call it a variable reward schedule, and it’s the same mechanism that makes gambling so compelling. B.F. Skinner demonstrated decades ago that variable reinforcement creates behaviours highly resistant to extinction. The anticipation of a potential win keeps you pulling the lever long after the expected value has turned negative.
AI-assisted writing has become a kind of Skinner Box for text. The occasional genuine improvement – that moment when the AI nails exactly what you were trying to say – releases just enough reward to keep you iterating. Meanwhile, the plateau remains invisible. You might be at 85% of optimal and genuinely improving, or you might be shuffling words laterally without progress. The variable rewards create a kind of psychological fog machine, obscuring the point where productive revision ends and Sisyphean labour begins.
What makes this particularly insidious is who it catches. The trap doesn’t ensnare careless users who fire off a prompt and accept whatever comes back. It catches sophisticated users, people trying to maintain agency, engage thoughtfully, and produce genuinely good work through iterative collaboration with AI. The very engagement that represents responsible AI use becomes the vulnerability.
So what do we do about it? Awareness alone doesn’t solve this. Understanding the psychology doesn’t make the fog lift. The solutions need to be structural rather than motivational.
One approach is to deliberately shift AI’s role as your text matures. Use it as a generator early on, when iteration genuinely improves the work. Then transition to using it as an editor with bounded scope. Finally, shift to using it as a critic – asking specific, answerable questions like “what are the three biggest weaknesses here?” Criticism has natural endpoints in a way that generation never does.

Another approach is to establish external stopping criteria before you begin: time limits, iteration caps, or simply asking a colleague to tell you when a draft is ready to move forward. External constraints work better than internal heuristics because they don’t get lost in the fog.
The deeper point is that AI’s greatest strength, frictionless iteration, becomes its vulnerability past a certain threshold. Those productivity gains from the early phase get burned up if you can’t detect when you’ve crossed onto the plateau.
I don’t think this makes AI writing tools bad or unusable. But it does suggest we need to think about workflow design, not just prompting technique.
The Sisyphean Plateau is real, it’s psychologically grounded, and naming it is the first step toward not getting trapped up there – circling the summit, mistaking motion for progress.