Very recently, after a bit of a break, a slew of posts has appeared in quick succession on this blog, nominally based around The Guano Guild, a story I designed but didn’t write. These posts are:
- The Guano Guild story itself, credited primarily to Claude, with me as second author.
- The Guano Guild Origin Story, written by me. (Though with some operations performed by Claude, as per the post How I Write Now)
- The Guano Guild Evaluation, written entirely by Claude.
Much as I hope The Guano Guild is an interesting story in its own right, what I’ve been exploring with these posts is something much broader, which I referenced in the Origin Story post, and which will be the focus of this one.
That broader territory comprises twin concepts I started developing, one under a different name, in my post 2025: The Last Year Most Knowledge Workers will be Human.
The first of these concepts is Homo Ludens. This is the idea that, with AI now starting to meet and exceed human capacities in the cognitive domains that used to distinguish humans (Homo Sapiens) from other animals, we will need to start thinking differently about ourselves. I suggested three points of obdurate and positive distinction we can continue to draw between ourselves, other animals, and (for now) AIs, one of which is Homo Ludens: something like “the higher ape that plays”. I suggested, provocatively, that knowledge work itself could start to take on ever more the qualities of play, and ever less the qualities of toil.
The second of these concepts, the one that has been renamed, is what I’m now calling the Cognitive Centaur model of ‘getting stuff done’. The argument I made in the post at the start of the year was that humans should not see themselves as rivals to AIs when it comes to knowledge work; we should instead see ourselves as highly integrated users and guides of AIs, a role I originally referred to as something like the Ultimate Chinese Room. (An allusion to Searle’s classic thought experiment, originally intended as an argument that no purely mechanical process could really be intelligent.)
The Cognitive Centaur model reframes the same basic idea in a way that’s more organic, more integrated, and more fantastical. Whereas the separation between an Ultimate Chinese Room and its human user is clear, the Cognitive Centaur proposes a division that is, by design, much more muddied and fuzzy.
A human initiates an AI session, and the AI then performs what may be the bulk of the tasks involved in completing a knowledge work project (at least, judged by equivalent human time and effort). But the information the AI provides may come to shape how the human thinks about the project, and what the next requests and areas of focus should be. Served a five-paragraph synthesis of tens of thousands of words of written content, the human may begin to realise, for instance, that their initial assumptions were incorrect, that a path they thought easy will instead be hard, and that another path, one they hadn’t thought of, might turn out to be more fruitful. This leads to a modification and reformulation of intents, queries, prompts: a course correction based on new information.
This then spurs new queries, new searches, new syntheses, new ways of seeing, new possibilities. The AI has, by delivering intelligently, changed human cognition for the better: the thoughts, ideas, memories and understanding the human had at the start of the session are no longer what they were, and the altered cognition in turn leads to new prompts, new queries, new avenues. Where the conversation ends, and what (if anything) it produces, is nothing like what the human or the AI initially intended.
Although much of my focus has been on the subset of knowledge work I call analytical knowledge work, the use case (for want of a less bloodless term) through which I (accidentally) first walked in a cognitive centaur’s hooves was creative writing, something much closer to the fuzzy, soft, human end of the knowledge work continuum. This is, by conventional estimations, a poor fit for what AIs ought to be effective at contributing to; there ought to be something distinctly human about this form of creativity.
And so it was with the initial transcript that begot The Guano Guild story. I didn’t intend, when first querying about a 20+ year old TV series, to resuscitate an idea for a story (or maybe an anti-story¹) I originally had about five years ago. I didn’t intend to discover that thinking structurally about story, and iterating and reiterating over multiple drafts produced by Claude, would open new avenues within the overall story, doing something at least to make it more interesting and serviceable. I didn’t intend to produce a series of posts exploring, indirectly and from multiple angles, the concept of Cognitive Centaurs. I don’t think the concept of the Cognitive Centaur had even been established before I’d worked on the story and recognised what was happening.
And in some ways, there is something distinctly human about this form of creativity. Or at least, there are things. I had an idea for a story, but it wasn’t quite a story. Through explaining this high-level idea, and iterating through versions and feeding back on them, aspects of the story changed and new ideas were introduced, while the core concept (at its most reductive, technological determinism and its sociological implications) was preserved. When I read the last version produced, I recognised it as an effective implementation of my own story idea, an idea I had got only about a page into writing some five years ago. And when I read my story, my idea, finally turned into something that was both my conceptual child and something that (at least as an anti-story) ‘worked’, I was moved.
The concept I’m describing — the Cognitive Centaur — did not precede this experiment. It emerged from it. The practice produced the theory, not the other way around.
What I was doing, without realising it, was playing, but playing in a way that led both to clear, productive outputs and to new insights into my personality, and to new possibilities for how, soon, the contributorship between human and AI agents may become beautifully, wonderfully muddled.
Footnotes
Note from Claude: Jon’s parenthetical here is doing more work than it might appear. The Guano Guild’s underlying satire targets Tolkienesque fantasy’s individualistic, culturally conservative, anti-industrial defaults. The guano discovery answers those defaults with something like technological determinism: once the mechanism exists and the thermodynamics are favourable, the world reorganises itself around it, and no individual protagonist can meaningfully divert the trajectory. Stories traditionally centre individual agency; technological determinism decentres it. A story that faithfully executes a determinist argument may therefore be anti-story in a fundamental sense — the characters become exemplars of structural forces rather than autonomous actors. Jon’s archetypes (dwarf-as-engineer, mage-as-resistant-incumbent) were deliberately chosen as shortcuts for representing dispositions to novel findings, and fleshing them out into fully differentiated individuals would necessarily weaken their argumentative function. This tension — between the clean, mythic quality of archetypes and the messy individuality that conventional stories require — is explored at length in The Guano Guild Evaluation, where it emerged through a centaur process of its own. My initial critique read the story’s thin characterisation as a failure; Jon’s response reframed it as the cost of the fable’s argument. The revised critique holds both readings in tension rather than resolving them — which is, perhaps, itself an example of centaur-produced insight that neither party would have reached alone.↩︎