In Denis Villeneuve’s film Arrival, humanity makes contact with an alien species whose language is unlike anything on Earth. As the linguist Louise Banks learns to read and write this language, something unexpected happens. She doesn’t just gain the ability to communicate with the aliens. She begins to think differently. Their language carries with it an entirely new epistemology — a way of understanding time, causality, and experience that was simply unavailable through English. The film dramatises an old and controversial idea from linguistics: the Sapir-Whorf hypothesis, which proposes that the structure of your language shapes how you perceive and reason about reality. Different languages divide experience differently, making certain distinctions
habitual and others nearly invisible. Learn a new one and you don’t just gain new words. You gain new thoughts.
I’ve been thinking about Arrival recently, not because of aliens, but because of artificial intelligence – after all, what is that if not another type of alien? We spend increasing amounts of our working lives interacting with AI systems, and the dominant mode of interaction is the chat interface: a scrolling exchange of messages, human and machine taking turns. It feels natural. It mirrors conversation. But if the Sapir-Whorf hypothesis tells us anything, it’s that the medium through which we conduct our thinking is never neutral.
In 1962, Douglas Engelbart made an argument that extended Sapir-Whorf beyond language itself. In his foundational paper Augmenting Human Intellect, he proposed that the tools we use to manipulate external symbols — pencils, typewriters, hypertext systems — actively shape our cognitive processes. Not just our outputs. Our thinking. He formalised this as the H-LAM/T system: Human using Language, Artefacts, and Methodology, in which they are Trained. All four components co-constitute intellectual capability. Change one and you reshape the whole system. Change the tool, change the thought.
This is a stronger claim than it first appears. It says that the choice of a knowledge tool is not merely a productivity decision. It is an epistemological commitment. The tool doesn’t just help you think faster. It determines the shape of the thinking you can do.
There’s a way to make this more precise. Jon Evans has argued that human language itself functions as a kind of latent space, the same mathematical concept that underpins how large language models work. Speakers compress rich, high-dimensional experience into the lower-dimensional medium of words. Listeners decompress the signal into their own understanding. The compression is lossy, inevitably, but the losses are productive. Ambiguity fuels metaphor. Gaps between encoding and decoding spark novel interpretation. And crucially, meaning in this space has geometry – similar concepts cluster together, analogical relationships have direction, and the structure of the space encodes knowledge that neither speaker nor listener may be consciously aware of.
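To see how literal that geometry is, it helps to poke at a toy embedding space. The sketch below is illustrative only: three hand-made numbers per word, not weights from any real model. What it shows is that similarity is an angle and analogy is a direction, so “queen” falls out of arithmetic on the space rather than out of any rule anyone wrote down.

```python
import numpy as np

# Toy "embeddings": hand-picked 3-d vectors, purely illustrative.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
}

def cosine(a, b):
    """Similarity as geometry: the cosine of the angle between two vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Analogy as direction: king - man + woman should land nearest to queen.
target = vectors["king"] - vectors["man"] + vectors["woman"]
best = max(vectors, key=lambda word: cosine(vectors[word], target))
print(best)  # 'queen', with these toy values
```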
If language gives thought a geometry, then Engelbart’s insight follows naturally. Tools that extend language (that give us new ways to manipulate symbols externally) give thought new geometries too. Different tools offer different representational structures, and those structures define what it is possible to think.
Consider the options. A folder hierarchy is a tree: each item has exactly one parent, and every piece of information occupies a single position. Trees are excellent for classification and retrieval. But they enforce mutual exclusivity. A note about language as a latent space must live in either the AI folder or the Linguistics folder, never both simultaneously. The geometry makes categorical reasoning natural and cross-domain reasoning invisible.
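The constraint is easy to state in code. A minimal sketch, assuming nothing beyond a dictionary from note to parent folder (the note and folder names are hypothetical):

```python
# A folder hierarchy: every note has exactly one parent.
folder_of: dict[str, str] = {}

def file_note(note: str, folder: str) -> None:
    # One parent per item: refiling replaces the old location.
    folder_of[note] = folder

file_note("language-as-latent-space", "AI")
file_note("language-as-latent-space", "Linguistics")

print(folder_of["language-as-latent-space"])  # 'Linguistics': the AI filing is gone
```

Refiling is destructive not because anyone designed it to be, but because the geometry has no place for a second parent.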
Tags and categories give you sets: overlapping membership, faceted classification, the ability to ask which items share multiple properties. Sets make co-occurrence visible but offer no structure within the overlap. You can filter and intersect, but you cannot reason spatially or relationally about what you find.
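Here is the same toy collection as sets rather than tree positions. Membership overlaps freely and intersection is a one-line query, but the answer comes back as a flat, structureless collection:

```python
# Tags: each note carries a set of labels, and labels overlap freely.
tags = {
    "language-as-latent-space": {"ai", "linguistics"},
    "engelbart-h-lam-t": {"ai", "tools"},
    "sapir-whorf": {"linguistics"},
}

# Faceted query: which notes carry both properties at once?
both = {note for note, labels in tags.items() if {"ai", "linguistics"} <= labels}
print(both)  # {'language-as-latent-space'}
```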
(Getting tags and folders mixed up can thus seriously limit what you can do with a tool – see my strongly worded advice on using Zotero as an example.)
Wiki-style links give you graphs: nodes connected by edges with no enforced hierarchy. Relationships become first-class citizens. Every link is an explicit claim that two ideas relate. The geometry enables traversal and the discovery of unexpected connections between ideas that would sit on different branches of a tree. What you can see in a graph that you cannot see in a hierarchy is the bridge between distant domains.
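Traversal is what a sketch makes vivid. With a hypothetical handful of linked notes, breadth-first search surfaces a path between ideas that a tree would file on branches that never meet:

```python
from collections import deque

# Wiki links as a graph: every edge is an explicit claim that two ideas relate.
links = {
    "arrival": ["sapir-whorf"],
    "sapir-whorf": ["arrival", "latent-space"],
    "latent-space": ["sapir-whorf", "llms"],
    "llms": ["latent-space", "chat-interfaces"],
    "chat-interfaces": ["llms"],
}

def bridge(start: str, goal: str) -> list[str] | None:
    """Breadth-first search: the traversal a strict hierarchy cannot offer."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in links.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(bridge("arrival", "chat-interfaces"))
# ['arrival', 'sapir-whorf', 'latent-space', 'llms', 'chat-interfaces']
```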
Spatial canvases (the kind of two-dimensional arrangement you get in tools like whiteboards or spatial hypertext systems) add something else again. Two ideas can be “near” each other without being formally linked. You can cluster, sort, and detect gaps through physical arrangement, reasoning through the space rather than merely about it.
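A sketch of that implicit nearness, reducing each note to a labelled point on a canvas: proximity is computed from arrangement rather than declared as a link.

```python
import math

# A spatial canvas: notes carry positions, not links.
canvas = {
    "sapir-whorf": (1.0, 1.0),
    "latent-space": (1.5, 1.2),     # placed near sapir-whorf, no link declared
    "chat-interfaces": (8.0, 7.5),  # far away: a visible gap in the arrangement
}

def neighbours(note: str, radius: float = 1.0) -> list[str]:
    """Nearness is emergent: anything within the radius counts as related."""
    here = canvas[note]
    return [other for other, there in canvas.items()
            if other != note and math.dist(here, there) <= radius]

print(neighbours("sapir-whorf"))  # ['latent-space'], by arrangement alone
```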
And then there is the line. Sequential, ephemeral, single-threaded. Each utterance follows the last. There is no persistent spatial structure, no branching, no ability to juxtapose non-adjacent ideas. The geometry of a line makes sustained dialogue natural but makes comparison, restructuring, and the accumulation of persistent conceptual structure nearly impossible.
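The corresponding sketch is almost embarrassingly short, which is the point. Its geometry supports exactly two operations, appending a turn and reading back in order:

```python
# A chat transcript: the entire structure is one ordered list of turns.
transcript: list[tuple[str, str]] = []

def say(speaker: str, text: str) -> None:
    transcript.append((speaker, text))  # appending is the only way to write

say("human", "Compare trees and graphs for organising notes.")
say("ai", "Trees enforce one parent per note; graphs allow many links.")
say("human", "How does that relate to spatial canvases?")

# Juxtaposing non-adjacent turns means fishing them out by position;
# nothing lets them sit side by side, branch, or persist as structure.
print(transcript[0], transcript[2])
```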
The line is the geometry of the chat interface. And right now, the chat interface is becoming the dominant mode of interaction with AI systems. This represents a convergence on the simplest possible representational geometry at precisely the moment when the complexity of what we need to reason about is arguably increasing. Users cannot easily compare alternatives, track branching explorations, or build persistent conceptual structures. Insights from one exchange cannot be spatially arranged alongside insights from another. Everything flows past in a single temporal stream.
This might not matter if chat were simply adding a conversational capability alongside existing tools. Don Norman, in Things That Make Us Smart, distinguishes between additive and substitutive technology: additive technology extends human capability whilst keeping humans in the loop; substitutive technology replaces a human function entirely. A calculator is additive when the human decides what to calculate and interprets the result. It becomes substitutive when the system takes over the decision-making itself.
If chat interfaces are additive – one tool among many, used for what they’re good at – then their linear geometry is unproblematic. It serves its purpose. But there are signs that chat is becoming substitutive, replacing richer knowledge tools rather than supplementing them. When people use ChatGPT not just for quick questions but as their primary environment for research, analysis, writing, and reasoning, the substitution narrows the representational space available for cognition. Engelbart’s framework predicts that this narrowing won’t merely be inconvenient. It will reshape the kinds of thinking that are possible.
In Arrival, learning an alien language expanded what Louise Banks could think. The question we should be asking about our AI tools is whether they are doing the same thing. Or the opposite.

