I recently came across Andrej Karpathy’s keynote presentation at the 2025 AI Startup School, where he reflected on the shift in programming and software engineering from what he calls Software 1.0 through to Software 2.0 and Software 3.0. Software 1.0 is traditional programming (writing explicit instructions in languages such as C++ or Python). Software 2.0 is neural networks (training models through data rather than coding logic). Software 3.0 is natural language prompts (telling AI what you want in plain English and letting it generate both reasoning and code).
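To make the contrast concrete, here is a minimal sketch (my own illustration, not Karpathy’s) of how the same task, flagging a spam email, might be expressed in each paradigm. The scikit-learn model and the llm() call are stand-ins for whatever you would actually use; only the shape of each approach matters.

    # Software 1.0: the programmer writes the decision logic explicitly.
    def is_spam_v1(email: str) -> bool:
        return "win a prize" in email.lower() or "free money" in email.lower()

    # Software 2.0: the programmer writes a training step and supplies data;
    # the "program" is the learned weights. (Toy example; assumes scikit-learn.)
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    emails = ["win a free prize now", "meeting moved to 3pm",
              "claim your free money", "minutes from today's meeting"]
    labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam
    vec = CountVectorizer()
    model = LogisticRegression().fit(vec.fit_transform(emails), labels)

    def is_spam_v2(email: str) -> bool:
        return bool(model.predict(vec.transform([email]))[0])

    # Software 3.0: the "program" is a natural language prompt handed to a model.
    # llm() is hypothetical shorthand for whichever model API you use.
    def is_spam_v3(email: str) -> bool:
        prompt = f"Is the following email spam? Answer yes or no.\n\n{email}"
        return llm(prompt).strip().lower().startswith("yes")

The progression runs from writing the logic yourself, to learning the logic from data, to describing the desired behaviour and letting the model produce it.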
It struck me that Karpathy’s framework is useful not only for thinking about software engineering, but also for understanding interaction paradigms and our expectations for people’s literacy in using AI. Each transition doesn’t just change how programmers work. It fundamentally transforms how humans relate to computational systems. And we’re living through the most recent shift right now, the move to Stochastic Computing, largely unprepared for its demands.
Stochastic Computing
For seventy years, our relationship with computers has been built on determinism. You give a computer the same input, you get the same output. Every time. Errors were aberrations, consistency was the highest virtue, and predictability was the foundation of usability. Traditional human-computer interaction principles (such as Nielsen’s heuristics, the standards we teach in every HCI course) assume this world. And why not? These principles make perfect sense when unexpected behaviour indicates a failure.
But stochastic systems shatter these assumptions. Large language models, image generators, the AI tools we’re now using daily – they are probabilistic by design. The same prompt can yield different results. Not because something’s broken, but because variability is the norm. This isn’t a bug to be fixed. It’s the defining characteristic of the era we’ve entered.
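A toy illustration of what “probabilistic by design” means in practice: a language model samples its next token from a probability distribution, so even with the prompt and the distribution held fixed, each run can continue differently. The distribution below is invented purely for the sketch.

    import random

    # Invented next-token probabilities for the prompt "The experiment was" -
    # not taken from any real model, just enough to show the mechanism.
    next_token_probs = {
        "successful": 0.45,
        "inconclusive": 0.30,
        "abandoned": 0.15,
        "delicious": 0.10,
    }

    def sample_next_token(probs: dict) -> str:
        # Categorical sampling: identical inputs, potentially different outputs
        # on every call. That variability is the design, not a malfunction.
        return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

    prompt = "The experiment was"
    for run in range(3):
        print(f"run {run}: {prompt} {sample_next_token(next_token_probs)}")

Settings like sampling temperature tune how spread out that distribution is, but the draw itself remains a draw.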
What does it mean when unpredictability becomes a feature rather than a failure mode? When the system is designed to surprise you?
Well, for a start, it creates a bit of an uncertainty paradox. Users need enough predictability to feel in control, to maintain a sense of agency in their interactions. Yet stochastic systems are inherently unpredictable. We can no longer be certain what constitutes an error. Outputs aren’t right or wrong in any absolute sense, but rather more or less valuable in different contexts, for different purposes, in different moments. Traditional usability principles don’t just need tweaking. They need fundamental reconceptualisation.
The shift demands we stop thinking about software as tools (instruments we control with precision) and start thinking about software as partners. Entities we collaborate with. Whose contributions we must evaluate. This sounds reasonable (and is in fact the advice of countless prompt engineering gurus) but comes with a demand, as genuine collaboration requires mutual competence. Partners need to evaluate each other’s work, negotiate division of labour, determine when to accept or reject suggestions. They need to be peers.
But what happens when one partner vastly outperforms the other? When you’re a novice trying to collaborate with an AI that demonstrates capabilities you can’t match? You face a critical power asymmetry, a competence gap that prevents authentic partnership. You can’t effectively evaluate contributions when you lack the expertise to assess them. You can’t revise what you can’t judge. The anxiety creeps in: does my contribution even justify calling this my work?
I suspect that this asymmetry affects learners and domain novices most severely. Experienced professionals possess the expertise to critically evaluate AI outputs, to recognise when suggestions are brilliant or subtly flawed (although even they may need prompting – no pun intended!). Novices lack this evaluative capacity, making them vulnerable to uncritical acceptance of outputs they cannot properly assess. The gap becomes self-reinforcing; accepting AI work without understanding it prevents learning the very competencies needed to evaluate future contributions.
But don’t think you have escaped. This applies to you too. We are all novices in fields that are not our own.
Probabilistic Literacy and the Augmentation Gap
Current work on AI literacy focuses primarily on what AI systems are, how they work, their societal implications, and their ethical boundaries. These competencies matter. Understanding that language models predict tokens rather than “know” facts, recognising bias in training data, navigating questions about authorship and creativity – all undoubtedly important.
But we’re missing something more immediate and practical. We need literacy for actually working with these systems in real time. Not calculating probabilities or understanding transformer architecture. Developing what I’d call probabilistic literacy, an intuitive awareness for navigating uncertainty. Recognising patterns that signal unreliability. Building a “spidey sense” for problematic responses. Learning when to verify versus when to accept. Treating AI outputs as hypotheses requiring validation rather than authoritative answers.
This literacy prevents the slide into cognitive atrophy, in which AI becomes what I’ve elsewhere described as “the thief of reason,” stealing the cognitive work that writing, coding, and problem-solving demand. Probabilistic literacy maintains critical engagement. It enables questions like: Does this response contradict itself? Should I rephrase this prompt and see if I get consistent answers? What would happen at the edges of this solution? Importantly, it would be independent of our domain expertise.
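One way to turn those questions into a habit is a simple self-consistency check: ask the same question several times, in several phrasings, and look at how much the answers agree. The sketch below assumes a hypothetical ask_model() function standing in for whatever API or interface you actually use; the workflow, not the client, is the point.

    from collections import Counter

    def ask_model(prompt: str) -> str:
        """Placeholder for a real model call via whichever SDK or UI you use.
        Hypothetical: swap in your own client here."""
        raise NotImplementedError

    def consistency_check(phrasings: list, runs_per_phrasing: int = 3) -> Counter:
        """Ask each phrasing several times and tally the distinct answers.
        A scattered tally is a signal to verify before trusting any of them."""
        tally = Counter()
        for prompt in phrasings:
            for _ in range(runs_per_phrasing):
                tally[ask_model(prompt).strip().lower()] += 1
        return tally

    # Usage: rephrase the same factual question a few ways, then inspect the tally.
    # One dominant answer raises (but does not guarantee) confidence; a spread of
    # answers means every one of them is still a hypothesis needing verification.
    # tally = consistency_check([
    #     "When were Nielsen's usability heuristics first published?",
    #     "In what year did Jakob Nielsen publish his usability heuristics?",
    # ])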
Without this probabilistic literacy, we risk creating what amounts to an augmentation gap. A new form of digital divide separating those who can interact effectively with stochastic systems from those who cannot. On one side: people who have built domain expertise through traditional learning and who have also cultivated, through experience and reflection, the probabilistic literacy that lets them stray beyond their domain. They extract enormous value from AI collaboration whilst maintaining cognitive autonomy. On the other side: those experiencing progressive deskilling, growing dependence on systems they cannot critically evaluate, and cognitive atrophy masked as productivity.
The gap threatens to compound over time. Early mastery enables further learning. Early failure makes catching up progressively harder.
Karpathy’s framework helps us see we’re not just getting new tools. We’re entering a new relationship with computation itself. A stochastic world where machines surprise us by design and where partnership replaces control.
Where literacy means learning to trust systems that are fundamentally unpredictable.

