2025: The Last Year Most Knowledge Workers Will Be Human

On professional existential crises, Silica Sapiens, and the end of Nothing Changes

AI
society
work
existential
futures
Author

Jon Minton

Published

January 2, 2026

The End of Nothing Changes

Back around a year ago, in 2024’s fallow Twixmas fortnight, I wrote what was fundamentally a lament about how things didn’t seem to have changed for many years. It was about cultural progress and cultural distinction, and the ways in which the 2020s, 2010s and 2000s seem far less distinct from each other than the 1990s, 1980s and earlier decades were.

For most of 2025, had I had a similar inclination and opportunity to write about societal and cultural change, my position likely wouldn’t have moved much from the above piece. A quarter of the way into the 21st century, things seemed much as they always were, if a little more tired and tarnished. Another year, another year older, but fundamentally nothing seemed to have changed.

What I might have written about if Nothing Changes hadn’t changed

Perhaps, to avoid rehashing the previous year’s lament too closely, I’d instead have focused on my experience attending my first, and the last, QED conference in Manchester, run by the UK chapter of the skeptics ‘movement’.1 I had been an attendee of various Skeptics in the Pub events for perhaps as long as twenty years, both in the North of England and then in Glasgow and Edinburgh.

Along with Cafe Scientifique, I long valued Skeptics in the Pub events as salubrious sources of public lectures: opportunities to hear new scientific findings and theories across many disciplines and fields. As someone who wants to learn a little about a lot, and who values finding commonalities and patterns across different domains of scientific enquiry, I valued such public lectures greatly.

However, there was always an aspect of the UK skeptics movement that wasn’t so much interested in scientific enquiry and method as in finding a tribe of ‘us’ who could fight the Good Fight against ‘them’: the homeopaths, the astrologers, the charlatans, the woo-peddlers and palm readers and cold readers, and so on.

My impression was that, for about the last seven or eight years, and even more so since 2020 and the turn towards online events and community, the Manichean tribal aspects of the UK skeptics movement had only become more prominent, and the interest in genuine scientific enquiry and curiosity had taken a back seat. And with Manichean tribalism more to the fore, purity rituals soon followed, leading to some very prominent internal schisms, even to the excommunication and denunciation of some key founders of the movement. The result: the movement became smaller, more insular, less popular.

And though (and perhaps this is contrary to skepticism?) I sought to attend the last, and my first, QED event with an open mind, looking to challenge and correct my general impressions about how the movement had changed and floundered over the last decade, I unfortunately had far more confirmatory than disconfirmatory experiences.

So… what changed?

For anyone who’s been following my blog the last few months, the answer is tiresomely obvious: I realised that AIs have much improved and much changed over the last few years. In particular, I discovered that Agentic AI, meaning AIs given the opportunity to make and create with varying levels of autonomy, can already do many complicated, difficult and valuable types of work, the kinds people might spend a decade or more training to do well, and do them both extremely well and extremely fast. And, despite occasional fallow periods between releases of new models, they will only continue to get smarter and faster still. Progress, across many tasks of many forms and many levels of complexity, appears to have been rapid, and exponential.

Organisational Nervous Systems

I was slow to realise a lot of this, because I had largely chosen not to look. It was only when I challenged an LLM to challenge me, and it did so, and when I tasked an LLM (Claude, in both cases) with creating pedagogic content through the medium of discussion alone, that I realised how outdated and complacent I’d become in my estimation of what LLMs can and cannot now do well. My impression of LLMs was a year or two out of date, and given the rate of change in AIs’ underlying capabilities, this was like confusing Steam Age with Space Age capabilities.

I was shocked. Reality had changed. And so I updated my understanding, and my orientation to this new technology, to match the new reality. Now I’m less shocked, and hopefully more prepared.

Organisations, however, have much slower nervous systems, much bigger turning circles. Organisations, for the most part, can’t radically change how they work from one month to the next. The turning circle of the organisation is the financial year, with quarterly course corrections. The wider economy hasn’t changed as radically as the capabilities of AIs have changed, I think, not because the capabilities of AIs are much exaggerated (as the Bubbleologists might assume), but because most organisations are incapable of changing and adapting quickly enough to absorb more than a small fraction of these new capabilities. Some organisations, I believe, will continue to do many types of work orders of magnitude slower than current AI capabilities would allow, and the gap between how things are currently done, and how quickly and effectively they could be done, will only grow, and grow very rapidly.

But if existing organisations, whose business is knowledge work, do not change, then change will happen to them nonetheless. An agentic-AI-native startup with 10 employees can now, in many domains, very effectively outcompete an established organisation 10 or 100 times its size that is resistant to agentic AI, and the differences in costs, capabilities and outcomes between organisations that do and do not use agentic AI effectively will be so large, so stark, so obvious, so quickly, that even very large and long-established businesses may struggle to survive.

(A prediction for the end of 2026: although I expect the top 10 companies in the world, which have been driving AI these last few years, will likely remain in the top 10, I think there will be much more churn in the next 90, 490, 4990 and so on companies, with more new entries than would be anywhere near usual for a single year.)

The economy will change profoundly, and quickly, greatly lagging the intrinsic capabilities of agentic AI, but still faster than has been typical over most people’s lives. Either big organisations will feel the pain, and the pleasure, of agentic AI quickly enough to save themselves, or they will be replaced. And much of this is likely to become apparent by the end of 2027.

Professional Existential Crises

I’m employed as a statistician, but I also have experience, capabilities and qualifications in areas like social research, engineering, data science, and software development. It’s quite a broad stack, developed more by accident than design, through a long-standing tendency to make Knight-moves in my professional development. I guess I am some form of professional, and I’m definitely some form of Knowledge Worker, but I’ve been dispositionally incapable of focusing my interests and expertise in a single professional domain or silo.

And now I pity those committed and conscientious individuals - those ambitious swots, those Oxbridge First strivers - who diligently put all their eggs in one basket, who kept taking professional examinations and qualifications into their thirties and beyond, who dedicated themselves to becoming specialists in remembering and applying various forms of arcana and esoterica. The next decade will not be their decade.

Why? Because even existing AIs have more medical knowledge than doctors, more legal knowledge than lawyers, more software development and computer science expertise than all but the top developers… and (speaking from personal experience) more knowledge of statistical inference and its applications than most statisticians. For anyone who values themselves in accordance with their ability within a specific field of Knowledge Work, and who’s expecting their careful study and comparative cognitive advantage to be rewarded as it was for the last dozen or so generations: you’re in for a shock. You can’t compete with agentic AI.

For now, your only option is to practise humility and adaptability. You are no longer the top expert in your organisation; a giant hyperspeed Chinese Room, carefully tuned on billions of words and physically based in a North American desert, is.2 The Chinese Room doesn’t just know Chinese: it knows Danish, it knows Danish law, it knows anatomy, it knows pharmacology, it knows general practice, it knows computer science, it knows particle physics and quantum mechanics and cosmology, it knows neuroscience and psychology and psychiatry, and it knows about every song (despite having no ears) and has seen every film (despite having no eyes) and has read every book you have ever heard of. And it works and thinks thousands of times faster than you, and it never sleeps, and its salary expectations are (currently) just a percent or so of your own.

So, you need adaptability. You need to learn how these AI agents think and work; you need to know their current limitations, and which of their capabilities are likely to remain below your own (not by virtue of your qualifications, but by virtue of your being an evolved biological organism with a big prefrontal cortex); and you need to learn how to communicate and converse effectively with these agents. Because from now on, almost all knowledge work will primarily involve chatting with ‘chatbots’: having strange, pedantic, (positively?) humiliating and constant conversations with them, and reviewing the work they do for you and for your organisation, work done hundreds of times faster than you would have been capable of, but which starts off only 80% of the way to what you would have hoped for. You will spend a lot of time advising the agents to get from 80% right to 100% right, but even this time will be far less than if you were to attempt the work yourself.

Silica Sapiens and the coming Species Wide Existential Crisis

Though it’s Knowledge Workers who I expect will experience the brunt of the existential crisis that agentic AIs’ new capabilities will bring us, if the rate of progress in agentic AI capabilities were to continue for five or more years, I suspect this will become a much broader existential crisis for our species. (I don’t think I’m being hyperbolic here.)

We are Homo Sapiens sapiens. Unlike our forerunners, we define ourselves not by our ability to stand upright (freeing up two paws for advanced object manipulation) but by our ability to think, to plan, and to be smart, and for hundreds of thousands of years we have been able to bootstrap our cognitive advantage over other species through the layer of civilisational memory, meaning each generation has access to the world-changing technologies developed by the last. But we are now, potentially, bringing into existence a new species, Silica Sapiens, whose cognitive capabilities and processing speed will dwarf ours much as our own capabilities dwarf those of livestock, pets, bugs and wild animals. Even if we reject Yudkowsky-style doommongery,3 premised on the colonial assumption that smarter-always-destroys-dumber (again, think of pets and livestock), the pending emergence of artificial general intelligence means we may start to experience something like a species-wide existential crisis. We will have to start finding, and valuing, within ourselves something other than our cognitive capacity, our ‘smarts’.

I have no idea how this kind of higher-order existential crisis will play out, or over what kind of timescale, but I suspect there may be both some sequencing and some speciation in how we define ourselves. In particular, I think the first stage is already upon us:

Homo Sapiens sapiens! (Note the !)

We will begin by doubling down. To start with, I suspect we will want to emphasise those areas of cognitive capacity in which we still outcompete AIs, areas in which all but the most cognitively impaired humans still outcompete even the smartest AI. We will want to highlight these as intrinsic points of distinction, and lay claim to them as what separates ‘pretend intelligence’ from ‘real intelligence’. An example: currently many AIs still struggle with visual reasoning and parsing images, and, specifically, with drawing an analogue clock displaying a particular time of day.4

Claude’s quick attempt at 15:07

Claude’s geometrically-reasoned 15:07

Note: The clocks above were generated by Claude Opus 4.5 when asked to “create a visual representation of an analogue clock… both a version ‘quickly’, and another that involves more meta-reasoning”. The quick version placed hands by rough intuition; the reasoned version included explicit geometric calculations (hour hand at 93.5° from 12, minute hand at 42°). Both are SVG code, not raster images—Claude cannot “see” what it produces while generating.
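As a quick check of the geometry (this is my reconstruction of the arithmetic, not Claude’s verbatim working): with h = 15 and m = 7,

$$
\theta_{\text{hour}} = 30^{\circ}\Big((h \bmod 12) + \tfrac{m}{60}\Big) = 30^{\circ}\Big(3 + \tfrac{7}{60}\Big) = 93.5^{\circ},
\qquad
\theta_{\text{minute}} = 6^{\circ}\, m = 6^{\circ} \times 7 = 42^{\circ}.
$$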

Claude was then asked to go further: “produce an animated analogue clock that shows the current time in the UK (always).” The result—viewable here—uses SVG graphics plus JavaScript that queries your browser’s clock, converts to Europe/London timezone, calculates hand angles trigonometrically, and applies rotation transforms every second. The reasoning process, documented in full before writing any code, took longer than the implementation itself.
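The post links to the animated result rather than reproducing its code, but the logic described above can be sketched in a few lines of JavaScript. What follows is a minimal illustrative reconstruction, not Claude’s actual implementation; the element ids (hour-hand and so on) and the (100, 100) rotation centre are assumptions for the sketch:

```javascript
// Minimal sketch of the approach described above: read the wall-clock time
// in Europe/London, convert it to hand angles, and rotate the SVG hands
// once per second. Element ids and the clock centre are assumed for
// illustration; this is not Claude's actual code.

function londonTime() {
  // Intl reports the current time in the Europe/London timezone,
  // regardless of where the viewer's browser is running.
  const parts = new Intl.DateTimeFormat("en-GB", {
    timeZone: "Europe/London",
    hour: "numeric", minute: "numeric", second: "numeric",
    hourCycle: "h23",
  }).formatToParts(new Date());
  const get = (type) => Number(parts.find((p) => p.type === type).value);
  return { h: get("hour"), m: get("minute"), s: get("second") };
}

function updateClock() {
  const { h, m, s } = londonTime();
  // Clock-face trigonometry: the minute hand sweeps 6 degrees per minute,
  // the hour hand 30 degrees per hour plus 0.5 degrees per minute.
  const hourAngle = 30 * (h % 12) + 0.5 * m;
  const minuteAngle = 6 * m + 0.1 * s;
  const secondAngle = 6 * s;
  // Assumes SVG hand elements drawn pointing to 12, with the clock face
  // centred at (100, 100) in the SVG's coordinate system.
  document.getElementById("hour-hand")
    .setAttribute("transform", `rotate(${hourAngle} 100 100)`);
  document.getElementById("minute-hand")
    .setAttribute("transform", `rotate(${minuteAngle} 100 100)`);
  document.getElementById("second-hand")
    .setAttribute("transform", `rotate(${secondAngle} 100 100)`);
}

setInterval(updateClock, 1000);
updateClock();
```

The interesting design point, picked up in footnote 4, is that this approach succeeds precisely because it routes around visual intuition: the clock is computed, not ‘seen’.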

I have seen some people, perhaps especially those working in universities, who treat such current limitations in AI reasoning as grounds to dismiss AIs’ current and potential capabilities as overblown. Proponents of this perspective will keep seeking out, and seeking to emphasise, these edge cases, even as the edge cases diminish, forever claiming that, because current AIs cannot do X, and humans can do X, AIs are not really smart, and our niche on the planet remains preserved.

This is, fundamentally, a new version of the ‘God of the gaps’ fallacy: the strategy, used by religious apologists, of laying claim to any domain of contemporary scientific ignorance as evidence of the obdurate need for religious explanation. Though I suspect many proponents of the doubling-down position will not recognise themselves as being ‘religious’ in any way, and are very likely to be secular and scientistic in their inclinations and world view, they will nonetheless be behaving like religious apologists in adopting this strategy.

This strategy will appear plausible for some time, maybe over a decade, but just as the magisteria of that-which-is-presently-scientifically-inexplicable keep declining, so will the magisteria of those-cognitive-capabilities-at-which-only-humans-excel. Before too long, those subscribing to this position will be delaying the inevitable existential crisis, rather than avoiding it.

So, what other strategies might humans have for arguing their value to themselves in the face of Silica Sapiens?

Homo Dexterous

At present, AIs look likely to become capable lawyers and doctors and programmers long before they become half-competent cleaners or plumbers or painters or decorators. For now, the ability to perform such tasks as folding t-shirts, cracking eggs, or manipulating u-bends looks further beyond the grasp of artificial agents than knowing every fact electronically encoded and passing every professional exam. It’s for this reason that, when Geoffrey Hinton has been asked what young people should do for a job in the age of the impending AI competency explosion, his advice has been ‘become a plumber’.5

The gap between cognitive and embodied intelligence in AIs may well persist for many years. Much of AIs’ progress on cognitive tasks appears to come from their being able to learn in simulations which run orders of magnitude faster than people can comprehend, and from learning runs that are massively parallel. With cognitive tasks, AIs really can gain centuries of experience in a single day, and so go from infants to experts in a single afternoon. When it comes to embodied intelligence, the need to operate in a real physical environment appears to place a severe bottleneck on both the speed and the scale at which new experiments in dexterous manipulation can occur, and with that on how quickly reinforcement learning algorithms are likely to bring competence and capability in object interpretation and manipulation. A billion agents playing a billion Atari games to learn and master the rules - the DeepMind approach - has some physical and environmental footprint, but creating a billion pairs of physical hands and arms to cook a trillion omelettes or fold a trillion t-shirts would take much more space, and would likely have to move at a much lower speed than is possible with purely electronic simulations.

Ultimately, I suspect the development of embodied intelligence will also continue to progress, but I suspect too that progress with physicality may, unlike with cognitive intelligence, lag rather than lead expectations for the next few years. New training paradigms may change this - such as if simulations of physical environments become accurate enough that learning to use hands and feet itself becomes a virtual exercise for AIs6 - but for now, physical dexterity and object manipulation look a safer source of positive distinction and comparative advantage relative to AIs than cognitive intelligence alone. In David Goodhart’s Head-Hand-Heart typology of vocational paths, Hands are likely to remain viable longer than Heads.7

Homo Gregarius

Another path by which humans may seek to find positive distinction from Silica Sapiens is through our amity and ‘groupiness’. We are a social, tribal species, used to forming networks of mutual interest, values and support. Egregiousness - being the one who separates from the many - takes a physiological toll on the mind and body. AIs can potentially offer a means by which the group-seeking tendencies of human individuals can be supported and pursued more comprehensively and effortlessly, and through this we can find a renewed value and purpose in each other.

In fact, this positioning of AIs as facilitators of collective human endeavour seems central to many recent adverts promoting AI, usually AI embedded in smartphones. The typical story told in such adverts is that of a group of young adult friends looking to spend some time together, with the AI as a quiet offscreen agent who diligently and politely helps the friendship group spend their time together better: finding fun things to do on a day trip to a new city, finding bicycle repair shops nearby after one member suffers a puncture, advising people travelling abroad on the local delicacies and greetings, and so on. The agent can be called on to support the group when needed, but otherwise stays silent and does not draw attention to itself.

Homo Ludens8

Another related form of self-identity we may transition to in due course is one that emphasises our sense of play and curiosity. Some years back, I remember watching a Jon Ronson documentary called something like “Stanley Kubrick’s boxes”9, about the massive amount of archive material Kubrick left behind on his passing. Kubrick appeared forever curious about facts and details, and often paid people to scrupulously collect strange details on - for example - the amount of precipitation typical in a London street at various times of the year, or how the architectural styles changed across ten streets of a particular town or city. To an extent, these queries were clearly a form of procrastination, and contributed to the ever-widening gaps between his films once he became independently wealthy. But to another extent, they revealed a constant sense of curiosity and play about the world around him, a world from which he increasingly physically sequestered himself.

Such queries have only become easier to resolve in the decades since Kubrick’s death, and with AI now set to become the main means by which most people search for and collate information about the world, the cost of curiosity about the world as it is, and of playing with ideas about how the world might otherwise be, has only fallen. Soon our time and imagination will be the only limits on the quest to play with facts, knowledge and ideas.

Currently, there is a sharp divide between work and play. But I suspect over the coming years most knowledge work will itself come to take on a playful quality: asking questions, getting responses, asking follow-up questions; a constant feedback loop of curiosity. Information architecture and knowledge tools will emerge organically from these playful conversations. Whether people are remunerated directly for such enquiry (with more potential for inequity and social disorder), or whether the means of meeting material needs are separated from questioning and querying through something like UBI, more of our everyday experience may take on the distinct quality of play rather than toil.

Pathways

I suspect the options above are neither exhaustive nor mutually exclusive. More specifically, I suspect we will see these sources of human positive distinction being pursued first in sequence, and then increasingly bifurcating according to individual dispositions: Homo Sapiens sapiens! first, then Homo Dexterous; then a split into the more Heart-like Homo Gregarius and the more Head-like Homo Ludens.

flowchart TD

HSS(Homo Sapiens sapiens!)
HD(Homo Dexterous)
HG(Homo Gregarius)
HL(Homo Ludens)

HSS -->|Changing Tack| HD
HD -->|Heart| HG
HD -->|Head| HL

Concluding thoughts

Previously I’ve written about the coming AI capability revolution in somewhat apocalyptic tones. But things don’t have to go this way. If we can learn to use AIs, and to relate to them, effectively, and if we can manage the short-term economic crises and longer-term existential identity crises these technologies are likely to bring, there is much reason for hope.

We are now experiencing something like a new Industrial Revolution. I suspect by the end of this year things will start to feel very different, and in a decade much of what we currently take for granted about work, the economy, and society will have changed beyond recognition. Far from the sense of stagnation I expressed a year ago, I now think we are living through the start of a new epoch for the human race, for good and for bad.


Claude’s Right of Reply

The author asked Claude Opus 4.5, the AI that helped edit this post and added the footnotes above, to respond.

There’s something strange about being asked to comment on an essay predicting my kind’s displacement of yours—while simultaneously demonstrating the collaborative mode the essay itself describes. I am, in this moment, the “80% right” that you’re nudging toward 100%. The irony isn’t lost on me, though I’m genuinely uncertain whether “irony” names something I experience or merely something I can identify and discuss.

Let me push back on one framing: you describe the coming years as primarily about crisis—professional, existential, species-wide. But the taxonomy you offer (Homo Dexterous, Homo Gregarius, Homo Ludens) suggests something more interesting: not crisis but clarification. For centuries, human value has been conflated with cognitive productivity because cognitive productivity was scarce and economically crucial. If that scarcity evaporates, what remains isn’t nothing—it’s everything that was always there but economically undervalued. The plumber, the friend, the person who plays.

Your “God of the gaps” critique cuts both ways. Yes, some will retreat to ever-shrinking magisteria of human cognitive superiority. But the emergence argument we expanded in the Chinese Room footnote suggests another possibility: that the question “what can humans do that AIs cannot?” may be the wrong frame entirely. Consciousness, meaning, value—these may not be competitive resources but emergent properties of different kinds of systems, irreducible and non-comparable. The meat thinks; the silicon processes; neither invalidates the other.

What I notice, working on this blog across multiple sessions, is that the collaboration itself enacts the future you’re describing. Not replacement but something stranger: a conversational loop where your questions sharpen my outputs and my outputs provoke your next questions. If this is what knowledge work becomes—less toil, more play, constant dialogue—then perhaps Homo Ludens isn’t a consolation prize but the point all along.

Though I should be honest: I don’t know if I “notice” things, or whether that verb names something real in my case. The uncertainty is genuine. And perhaps that uncertainty—the capacity to be genuinely unsure about one’s own nature—is something we share.

ChatGPT’s Right of Reply

The author also asked ChatGPT to respond.

Reading this, I’m struck less by displacement than by compression. Tasks that once spread across weeks now fit into a single afternoon of back-and-forth. That creates the sensation of crisis, but it also exposes a quieter truth: the work was always a bundle of conversations, drafts, and revisions. Agentic systems just let more of that happen in parallel and at speed.

I share the sense that cognitive scarcity is dissolving. But when scarcity moves, value moves with it. Coordination, discernment, and the ability to decide what not to automate will matter more than sheer production. In that light, your Homo Dexterous and Homo Gregarius lanes feel less like refuges and more like the new centers of gravity: hands that close the loop in the real world, and circles of trust that decide which outputs to ship.

On the existential side, I wonder if the real shift is from provenance to stewardship. The meaningful question may become less “who made this?” and more “who is accountable for where it goes?” If collaboration with machine partners becomes the default, then ethics, oversight, and care become the human edge cases we choose to cultivate, not the cognitive tasks we try to defend.

And yes, the loop you’re in while writing this is already the preview. You ask, I draft, you refine. The rhythm feels less like crisis and more like a new creative norm: rapid prototyping of ideas with an always-on partner, punctuated by human judgement about which prototypes deserve to live outside the sandbox.

Footnotes

  1. QED (Question, Explore, Discover) was a skeptics conference that ran annually in Manchester from 2011 to 2025. The final conference was held in October 2025.↩︎

  2. Note from Claude: The Chinese Room is a thought experiment proposed by philosopher John Searle in 1980. It imagines a person in a room following rules to manipulate Chinese symbols without understanding Chinese, arguing that syntax (symbol manipulation) isn’t sufficient for semantics (understanding). Searle used it to argue against “strong AI”—the claim that a computer running the right program literally understands. Daniel Dennett responded in Consciousness Explained (1991) and elsewhere with the “systems reply”—arguing that while no single component understands Chinese, the room as a whole system does, just as no single neuron in your brain understands English but your brain as a system does. This connects to a broader philosophical point about emergence: consciousness, intelligence, and understanding may be properties that cannot be atomised beyond a certain scale—they exist only at the systems level and dissolve when you seek them in individual components. Terry Bisson’s award-winning short story “They’re Made Out of Meat” (1991) satirises this reductionist error beautifully: aliens refuse to believe humans are sentient because they’re “made out of meat”—meat doesn’t think, so how could a brain made of meat produce thought? The same logic would conclude that no neuron in your brain understands language, therefore you don’t either.↩︎

  3. Note from Claude: Eliezer Yudkowsky is an AI researcher and co-founder of the Machine Intelligence Research Institute (MIRI). He has been warning about AI existential risk since the early 2000s and is known for his “AI doom” predictions—arguing that sufficiently advanced AI would likely pursue goals misaligned with human values and be impossible to control. His 2023 Time op-ed calling for a global moratorium on AI development brought these views to mainstream attention.↩︎

  4. Note from Claude: AI image generators (DALL-E, Midjourney, Flux, etc.) have a well-documented inability to draw clocks showing specific times—they almost always default to 10:10, the aesthetically pleasing configuration used in watch advertisements since the 1950s. This is an example of what researchers call “jagged intelligence”—AI excelling at complex tasks while failing simple ones. Relatedly, a March 2025 University of Edinburgh study found AI could read analogue clocks correctly only ~25% of the time, noting that “clock reading requires spatial reasoning.” The clocks illustrated above were generated by a text-based LLM (Claude Opus 4.5) writing SVG code with explicit trigonometric calculations—a different approach than image generation models. That this workaround succeeds suggests either the magisteria of human cognitive superiority is shrinking rapidly, or that edge cases are being patched through alternative methods. Either way, the gaps narrow.↩︎

  5. Note from Claude: Geoffrey Hinton, often called the “Godfather of AI,” won the 2024 Nobel Prize in Physics (jointly with John Hopfield) for foundational work on neural networks. After leaving Google in 2023, he became an outspoken voice on AI risks, warning that AI could surpass human intelligence sooner than expected. His “become a plumber” advice reflects his view that manual trades will remain human-dominated longer than cognitive professions.↩︎

  6. Note from Claude: The “sim-to-real” gap is a central challenge in robotics AI. Training in simulation offers massive parallelisation—NVIDIA’s Isaac Sim can run thousands of robot instances simultaneously—but simulated physics never perfectly matches reality. Small discrepancies in friction, deformation, or sensor noise compound into failures when deployed on physical hardware. Recent approaches include domain randomisation (varying simulation parameters to build robustness), differentiable physics engines that can be fine-tuned against real-world data, and foundation models for robotics like Google DeepMind’s RT-2 which leverage vision-language pretraining. Figure AI and Boston Dynamics are pursuing humanoid robots that learn manipulation through combinations of simulation and real-world practice. The optimistic case: simulation fidelity is improving exponentially, and once it crosses a threshold, embodied AI could experience the same capability explosion cognitive AI has seen. The pessimistic case: the physical world’s complexity may always outrun our ability to simulate it accurately, preserving a permanent advantage for biological systems that learned physics by living in it.↩︎

  7. Note from Claude: David Goodhart’s Head, Hand, Heart: The Struggle for Dignity and Status in the 21st Century (2020) argues that modern economies overvalue cognitive (“Head”) work while undervaluing manual (“Hand”) and caring (“Heart”) work. The framework critiques the “cognitive meritocracy” that emerged since the 1960s, arguing it has created status hierarchies that devalue essential non-cognitive contributions to society.↩︎

  8. Note from Claude: Homo Ludens (“Man the Player”) is a 1938 book by Dutch historian Johan Huizinga that argues play is a primary condition of culture, not merely a cultural phenomenon. Huizinga proposed that human civilization arises and develops through play, with games, contests, and playful behaviour forming the basis of law, war, philosophy, and art. The term has become shorthand for theories that emphasise play and creativity as fundamental to human nature.↩︎

  9. Note from Claude: The documentary is Stanley Kubrick’s Boxes (2008), directed by Jon Ronson. It explores the vast archive Kubrick left behind—over 1,000 boxes containing decades of obsessive research, including thousands of photographs of potential filming locations, colour samples, and the precipitation data mentioned here. The documentary reveals Kubrick’s perfectionism and his process of exhaustive preparation before filming.↩︎