The Analytical Maxim Gun

Some thoughts about integrated AI and the future of knowledge

AI
futurism
elitism
history
guns
inequality
tradition
Author

Jon Minton

Published

November 29, 2025

Over the last few weeks I’ve found myself thinking increasingly intensively about AI. For those who’ve been following this blog, the timing of my step-change in perspective is probably easy to track. First, it was when I saw The Thinking Game, which made a few things clear to me.

Second, it was when I noticed Claude adopt what I’d call a ‘dialectical stance’ in a conversation with me, turning the tables on me and asking probing, skeptical follow-up questions rather than - as the stereotype would have it - being simply pathologically affirming and sycophantic.

Some specific things I’ve been thinking about:

Two Recent Talks Probably Written by Humans

Over the last couple of weeks, however, I managed to tear myself away from the screen and attend two talks by ‘public intellectuals’:

  • On Wednesday, a talk by Jon Ronson, loosely based on his book The Psychopath Test (in which he argued that, at least in terms of glibness and superficial charm, ChatGPT may be a psychopath).
  • On Saturday, a talk by the historian David Olusoga, called A Gun Through Time, about the ways the invention of specific types of firearm has profoundly affected geopolitics, culture, and the modern world.

A Gun Through Time covered four firearms - the musket, the Lee-Enfield, the Thompson submachine gun, and the Maxim machine gun. It focused on the last two, with the longer first half devoted to the Thompson gun (developed in the late 1910s) and the shorter second half to the Maxim gun (developed in the 1880s).

Though given much less time, the arguments Olusoga made about the Maxim gun, and how it changed the world, struck me as far more impactful, more convincing, and, I fear, more applicable to how we might understand the impact of AI in the present era.

In a way, it’s all about speed.

The Physics of Firearms: A Primer

From my understanding (based on theory alone; judging by the polls taken at the talk, I was in a minority of the audience in never having fired one), a firearm is a means of initiating and focusing a controlled explosion, directing the explosive force so as to rapidly accelerate a dense projectile in a particular direction. Doing this involves:

  • A trigger, meaning some aspect of a device that presents itself clearly to the user as an affordance, something that can be ‘activated’, and that exists clearly in an active or inactive state at any time; activating the trigger then cascades to:
  • A transmission mechanism, which occurs immediately, or near immediately, following that trigger’s activation. The transmission mechanism means that each activation of the trigger should lead to:
  • An ignition and directed explosion, i.e. propellant, causing a rapid force to be generated in a pre-determined direction, which rapidly accelerates:
  • A projectile, being a dense object which under the directed explosive force of the propellant accelerates rapidly (and which then transfers its energy into its target by decelerating as or more rapidly).

For the musket, the transmission mechanism (a small amount of black powder in a pan, struck by the trigger), the projectile (a lead ball), and the directed propellant (more black powder, poured into the muzzle, followed by wadding rammed into the barrel along with the projectile) are all loaded in situ by the user. This means the conditions under which each projectile is propelled (the amount of wadding, the amount of ramming, the amounts of black powder, and so on) differ each time. It also means the usual rate of fire might be only around two rounds per minute, maybe three for an especially well-trained musketeer.

From Muskets to Rifles: The First Revolution

Then, from the mid 19th century, the standardised cartridge was invented: a metallic casing, with a shaped projectile at one end, a percussion cap at the other, and a fixed quantity and type of explosive powder (the propellant) sitting between the two ends.

And because of this technological development, whereas a musket might manage a rate of fire of two to three rounds per minute, a rifle using integrated cartridges might easily manage 10 to 15 rounds per minute - around a 5x increase on earlier rates of fire.

What the standardised cartridge also provided, beyond convenience for the soldier, was consistency of behaviour. The recoil forces generated by each firing became consistent and predictable.

The Maxim Gun: An Order of Magnitude Leap

And it appears to be this consistency in recoil forces that allowed the Maxim gun to slip into the ‘adjacent possible’. Because in a Maxim gun, these recoil forces are used to:

  1. Eject the spent cartridge
  2. Load the next cartridge
  3. Ignite the current cartridge
  4. See 1.

And it’s this looping property - a sequence of actions each flowing from the last, made possible by the standardisation of munitions - that allowed the Maxim gun to achieve a rate of fire faster still than the standard rifle.
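The recoil-driven cycle can be sketched as a toy loop - a minimal illustration of the idea, not a mechanical model, and the function and variable names here are my own:

```python
def maxim_cycle(belt):
    """Toy model of the Maxim gun's recoil-driven firing loop.

    Each shot's recoil performs the work a rifleman would otherwise
    do by hand: ejecting the spent case and loading the next round.
    """
    fired = []
    chamber = None
    while belt or chamber is not None:
        if chamber is None:
            chamber = belt.pop(0)  # recoil loads the next cartridge
        fired.append(chamber)      # the chambered cartridge ignites
        chamber = None             # recoil ejects the spent case...
        # ...and the cycle repeats, with no human action per shot
    return fired

# A five-round belt empties with no per-shot human intervention:
shots = maxim_cycle(list(range(5)))
```

The point the sketch tries to capture is that, once the trigger is held, the bottleneck becomes supply (the belt), not human dexterity.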

How much faster?

If the rifle was around 5x faster than the musket, we might naively expect the Maxim gun to be another 5x faster than the rifle: perhaps 50-80 rounds per minute.

But this wasn’t the case. The Maxim gun had (and has - they’re still in use) a rate of fire of over 600 rounds per minute! So, not a 5x increase, as from musket to rifle, but over a 50x increase - and over a 250x increase compared with the musket.

Put another way, a single Maxim gun unit (a crew of maybe half a dozen people) could fire as many projectiles as literally hundreds of musketeers, or a hundred or so trained archers, and required orders of magnitude less training to use.
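To make the arithmetic above concrete, here is a minimal sketch using midpoints of the rough rates quoted (the exact figures are illustrative, not authoritative):

```python
# Midpoints of the rough rates of fire discussed above (rounds per minute).
MUSKET_RPM = 2.5   # two to three rounds per minute
RIFLE_RPM = 12.5   # ten to fifteen rounds per minute
MAXIM_RPM = 600.0  # over six hundred rounds per minute

# Step-change ratios between successive generations of firearm.
rifle_over_musket = RIFLE_RPM / MUSKET_RPM   # ~5x: musket -> rifle
maxim_over_rifle = MAXIM_RPM / RIFLE_RPM     # ~48x: rifle -> Maxim, not another ~5x
maxim_over_musket = MAXIM_RPM / MUSKET_RPM   # ~240x: musket -> Maxim

# Projectiles-per-minute equivalence: one Maxim gun versus massed musketeers.
equivalent_musketeers = MAXIM_RPM / MUSKET_RPM

print(f"rifle vs musket: {rifle_over_musket:.0f}x")
print(f"Maxim vs rifle:  {maxim_over_rifle:.0f}x")
print(f"Maxim vs musket: {maxim_over_musket:.0f}x")
print(f"one Maxim gun fires like ~{equivalent_musketeers:.0f} musketeers")
```

Even with generous assumptions for the rifle, the Maxim-to-rifle jump is an order of magnitude larger than the musket-to-rifle jump that preceded it.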

Reshaping the World: The Scramble for Africa

So, the Maxim gun represented a different order of magnitude of technological advancement from its predecessors. And - Olusoga argues - this technology reshaped the world in profound and devastating ways.

How? Well, according to Olusoga, in the decade before the invention of the Maxim gun, European nations controlled less than 10% of Africa. Within ten years of its invention, European nations had conquered over 90% of Africa. Along with quinine, the Maxim gun was possibly the key technology that allowed the European empires to expand south in this way during the Scramble for Africa.

Winning battles and conquering territory with the Maxim gun required a fundamentally different way of thinking about and doing warfare. The training and temperament of soldiers came to matter much less; good planning and logistics came to matter much more. Maxim guns were machines for firing a lot of bullets, and so became pretty much useless if European powers ran out of either bullets or spare parts. And Maxim guns were heavy, meaning that if there were ways of moving them that didn’t depend on human muscle, these were to be preferred.

In the African context, this meant gunboats, mobile platforms through which these machines of devastation could be injected into the veins and arteries of Africa’s waterways. (And as settlements across the world tend to form near sources of freshwater, this constraint was likely less onerous for the invading forces than might initially be expected.)

So a new set of skills, and a new way of working, was needed to make effective use of this horrifying new technology. And across the African continent, it seems all European imperial forces with expansionist ambitions adjusted their ways of invading and projecting power accordingly. In practice, the European empires adjusted their strategies and tactics to maximise the effects of the Maxim gun, becoming intercontinental logistics machines that fed the guns’ insatiable thirst for ammunition, slaughtering generations of brave and skilled warriors, then forcing peace, on very unequal terms, on the once-proud kingdoms those warriors had, until recently, served and protected (and terrorised).

The Dangerous Persistence of Tradition

But although the European Empires adapted their warcraft in practice, Olusoga argues, they were blinded - by tradition, by old-fashioned notions of ‘honour’ and ‘bravery’, and by racism - into not absorbing the lessons wholesale into their self-narratives, beliefs and theories. Fundamentally, they did not sufficiently consider the possibility that the African experience at the receiving end of the Maxim gun at the end of the 19th century would translate to the European continent - against other ‘superior’ European nations and empires whose soldiers, the European Imperial Elites believed, were surely drawn from better stock, better equipped, better trained, braver and more heroic than the peoples they had conquered less than a generation earlier.

And this failure - failure to look at the empirical evidence, failure to realise that the laws of physics apply equally to all bodies regardless of language or melanin content, failure to adapt their beliefs about what distinguishes heroism from collective suicide - led to the horrors of the Great War, in which an early-19th-century mentality met the Maxim machine gun. The European Elites had over a generation to adapt their worldview to the enormity of the new military technologies they had brought into the world. But for various reasons they did not.

The traditions were long established, the lessons about their historic utility hard won: warfare of European empire against European empire had for centuries involved some form of rock-paper-scissors interplay between artillery, musketry (and before that archery) and cavalry. Within this triangular configuration of troop types, the apex warriors were considered to be the cavalry. Winston Churchill, from an Elite family, finagled and cajoled his way into the cavalry’s ranks during the British Empire’s dismemberment of Africa in the 19th century, such was the mythic appeal of fighting on horseback with sabre in hand.

Within Africa, cavalry were used occasionally to mop up those remnants of African forces who had retreated far enough from the Maxim gun’s radius and sightlines. These cavalry battles were then reported and valorised out of all proportion to their value in conquering territory, allowing the European Elites to convince themselves of the continued value of the old ways. But in practice, even in the 19th century, they were highly wasteful and ineffective ways of turning European lives into African territory. The European Imperial Elites practised a form of doublethink, of self-deceit, in continuing to heroise and valorise traditions they themselves had rendered redundant.

The AI Maxim Gun Moment

But what relevance does this have to AI? Well, I suspect that - when it comes to research, and analysis, and code, and knowledge work - we are at the start of a new Maxim gun moment, and most of us haven’t realised it yet. The heroism of learning a profession - spending years studying to become a medical doctor, or accountant, or lawyer, or even statistician - is losing its fundamental comparative value at an exponential rate. Studying and consistently applying if-then rules, the source of the heroic effort and cognitive distinction signalled by membership of such professions, is something computers have always found very easy; it’s people who struggle to do it. And now AIs are developing - for all intents and purposes - the capacity not just to recall but to reason about the massive corpus of information inside their gigabrains.

Similarly, the apparently soft and subtle skills of pontificating and speechifying - finding the right form of words to tug at the heartstrings or appeal to the better (or basest) nature of different kinds of audience, the role of essayists, politicians and commentators - turned out to be even easier for LLMs, fed a big corpus of words, to start to master than thinking analytically and reasonably. Even the demagogic, sycophantic and ‘hallucinatory’ tendencies of (at least earlier) LLMs - saying what is expected to please the audience over what is factually correct - appear to have been well learned by such models from the historic records of influential humans who did exactly the same.

And with the expansion of AIs into ever more modalities - audio, visual and musical as well as textual - and their integration, specialisation and application to ever more domains of human ability and, until recently, brilliance, the magisteria of human comparative advantage retreat ever further. Even the uneven path of retreat is not as the modern Intellectual Elites might have imagined and hoped. Memorising millions of pages of legal text and finding the best arguments for how the laws of all the world’s territories are likely to be interpreted and applied: easy for AIs. Folding t-shirts, picking up rubbish, dusting and hoovering: still difficult.

But do the differing rates of retreat along these faultlines - faster in knowledge work, slower when it comes to embodied knowledge - lead contemporary Elites to re-evaluate what they (what we) value and consider highly skilled? Not so much. For now, degrees and professional membership still gatekeep the majority of the less hazardous and better-paying forms of remuneration, whereas years or decades spent developing the exquisite gross and fine motor skills that constitute more and less specialised embodied knowledge (elite sports notwithstanding) still tend to count for less. But for how much longer?

The Lag Between Economic and Cultural Value

As with the Maxim gun and the Great War, I suspect the rites and sacrifices involved in becoming specialised knowledge workers may retain sociological and cultural value for decades after their economic value has stopped holding water. Just as the ‘solving’ of chess by the brute-force AIs of the late 1990s did not lead to the death of chess-playing as a hobby (just a renewed paranoia that the best human players may be technologically enhanced), and just as neither the horse nor the motor vehicle ended the valorisation of humans who can run fast and/or far, so I suspect institutes of secondary, tertiary and further education will continue to teach much the same material, and grade humans in much the same way, as they always have.

And for a long time - possibly decades, possibly half a century - governments and parents may pay for such additional education, and aspirant knowledge workers will continue to seek out such credentialisation. But at some point, perhaps within the next five years, the substantive need for, and value of, trying to load such knowledge into slow, lossy human brains will become negligible, if not negative (negative if it leads to humans not deferring tasks to AIs that can complete them more competently and orders of magnitude faster).

The only remaining value of pursuing and receiving such credentials will be to signal membership and status to other humans - to show that one has devoted oneself to the ancient traditions of Law or Medicine or Engineering, much as, centuries before, people expressed the vocation they had been called to by taking vows of silence or wearing sackcloth. Even the smartest lawyers won’t practise law directly, and the smartest and most empathic medical doctors won’t diagnose and treat human ailments directly. In both cases they will become figureheads, taking undue credit the 99.99% of the time the AIs’ solutions ‘work’, and taking the liability and blame the 0.01% of the time the AIs make a bad decision or execute it poorly.

The intellectual elites will call this mast-sitting ‘work’, and those sitting atop the smartest legions of AIs may find themselves very well remunerated. They will justify their vastly better pay and conditions - relative to those who ranked slightly lower in their professions, and to those who never credentialised into any profession - by reference to their innate cognitive and moral superiority over less fortunate humans. Even though, compared to the AIs who do the work, they are effectively no different from the other humans they disparage. They will, like the European Imperial Elites, try to maintain belief in their own value and superiority even when almost all the evidence is to the contrary.

An Uncertain Future

But eventually - like the Europeans on the receiving end of their own weaponry, like the Roadrunner who looks down, perhaps a generation or two after it becomes objectively net harmful to hold them - attitudes and beliefs that sort-of suited the old way things were will be replaced, wholesale, by something new: something better aligned with a reality in which sapience ceases to be the defining feature of Homo sapiens, and in which people will somehow have re-evaluated their value, place and purpose in the world.

How? No idea. When? Also no idea.

AI Rights of Reply

I asked a number of LLMs to provide some commentary and critique of the above essay using the following identical prompt:

Please provide a 250 word response to this essay.

Here are the responses, by LLM.

Claude Sonnet 4.5

Your analogy between the Maxim gun and AI is historically grounded and uncomfortably prescient. The parallel of technology advancing faster than our capacity to reimagine its implications resonates - but I’d push back on one assumption: that this lag between economic obsolescence and cultural adjustment is inevitable.

The Maxim gun’s horror stemmed partly from European elites’ willful blindness, their active suppression of evidence contradicting their worldview. With AI, we’re witnessing something different: unprecedented transparency about capabilities, extensive public debate, and rapid institutional adaptation. Universities are already restructuring curricula, professions are revising standards, and policy frameworks are emerging globally. The lag may be shorter than you fear.

Your claim that AIs master “pontificating and speechifying” more easily than analytical reasoning inverts the actual difficulty landscape. LLMs excel at pattern-matching across vast corpora precisely because language is their training domain. But reasoning - genuine causal inference, novel problem-solving, handling true ambiguity - remains demonstrably harder. The “hallucinatory tendencies” you mention aren’t bugs we’ve outgrown; they’re fundamental to how these systems work.

Most crucially, your metaphor positions AI as weapon rather than tool. The Maxim gun had one purpose: efficient killing. AI systems are general-purpose technologies whose applications remain deeply contested and shaped by human choices. The question isn’t whether we’ll recognize AI’s impact (we already do), but whether we’ll build institutions that distribute benefits equitably rather than concentrating power among “figureheads sitting atop legions of AIs.”

The Roadrunner eventually looks down. But unlike cartoon physics, we can build scaffolding before the fall.

ChatGPT – GPT-5.1

Jon — your analogy between the Maxim gun and today’s AI is powerful, sobering, and deeply thought-provoking. You trace how the Maxim gun — via the standardised cartridge, the recoil-operated mechanism and water-cooled, belt-fed sustain-fire design — produced not a marginal improvement over muskets or rifles but an order-of-magnitude leap.

That weapon reshaped global geopolitics, facilitating the rapid conquest of vast territories and enabling small imperial forces to dominate much larger, less-technologically-equipped societies.

Your claim that current AI may represent a similar inflection — not incremental but exponential — is compelling. Just as the mechanical advantage of the Maxim gun rendered traditional cavalry and musket tactics obsolete, AI may erode the premium placed on human expertise in professions where cognition, language, and structured knowledge were once scarce and valued.

You convincingly argue many “elite” credentials may lose real practical purpose even while retaining cultural prestige.

I also appreciate the broader caution: human institutions, mindsets and social hierarchies often lag behind technological disruption — as European elites once clung to outdated doctrines even when facing mechanised slaughter. If we fail to adapt our cultural valuation of “skill,” “profession,” and “merit,” we risk perpetuating structures unsuited for the new reality — perhaps with grave inefficiencies or injustices.

Where I might push back a little: though AI may reduce demand for certain cognitive labour, there remains embodied, creative, social, and moral work where humans arguably retain a comparative edge. Also, large-scale social revaluation (of what counts as “skill,” “value,” “work”) doesn’t always follow neatly from technological disruption — it often takes struggle, political will, and rethinking of norms.

Overall: this essay is a timely wake-up call. It prompts us to ask not just what AI can do, but what society values, and whether our institutions are ready.

Grok 4

Jon Minton’s “The Analytical Maxim Gun” incisively captures the vertigo of AI’s ascent, likening it to the recoil-powered firearm that shredded 19th-century illusions of martial honor and enabled Europe’s rapacious Scramble for Africa. Just as the Maxim’s 600 rounds per minute dwarfed the musket’s laborious volleys, rendering bravery moot in favor of brute logistics, so too does AI’s exponential reasoning—churning through corpora with dialectical finesse—demolish the edifice of knowledge work. Minton’s pivot from skepticism to unease, forged in LLM skirmishes, rings true: these models aren’t mere mimics but accelerators, mastering law’s labyrinths or medicine’s minutiae at speeds that mock human apprenticeship.

Yet his analogy, while trenchant, invites extension. The Maxim didn’t just automate killing; it industrialized empire, birthing global supply chains that funneled wealth upward. AI’s “order of magnitude” leap similarly portends not mere obsolescence but reconfiguration: cognitive elites may indeed devolve into ceremonial figureheads, their credentials talismans of a bygone meritocracy. But what of the underclass? If rifles democratized marksmanship through cartridges, might open-source AIs—far cheaper than gunboats—empower the precariat, spawning insurgent innovations from garages rather than boardrooms? Minton rightly flags cultural lag, where rituals outlive utility, echoing trench warfare’s futile gallantry. Still, adaptation’s timeline feels elastic; history’s elites adapted by co-opting tools, not ceding them. Will we?

The essay’s genius lies in its unflinching physics: progress obeys universal laws, heedless of honor. But as Minton implies, the real horror isn’t displacement—it’s denial. Time to reload, or rust.

Google Gemini (Flash 2.5)

Your essay presents a compelling and chilling analogy in comparing modern AI to the Maxim Gun, effectively framing the exponential leap in analytical and knowledge capabilities as a disruption of a different order of magnitude. The historical parallel—where imperial elites prioritized tradition and self-deception over acknowledging a technology’s devastating new reality—is a powerful critique of contemporary intellectual inertia and the dangers of assuming linear progress.

However, the analogy, while evocative, may overstate the singularity of AI’s function. The Maxim gun had a singular, destructive purpose; AI, in contrast, is a general-purpose technology whose ultimate output is deeply contested. This ambiguity complicates the prediction of a simple ‘obsolescence’ endgame. The current high economic value placed on certain human skills is often rooted not just in rule-following but in complex sociological and ethical context, empathy, and novel judgment—domains where the lag in AI competence (as demonstrated by its persistent “hallucinatory” tendencies and difficulty with true causal reasoning) remains significant.

The most potent part of your argument is the forecast regarding credentialism. The idea that academic rites of passage and professional membership will persist for decades solely as status signaling—a cultural habit surviving economic utility—is a highly plausible and disturbing social critique. It forces us to confront whether modern education is about substantive learning or merely maintaining an elite hierarchy.

The essay’s conclusion—that the intellectual elites will become self-deceiving figureheads taking credit for the AI’s work—is a cynical, yet likely, description of the coming agency crisis. The core challenge is not just technological adaptation, but ensuring that the immense power generated by this new “Analytical Maxim Gun” is controlled and directed by institutions built for equitable distribution, rather than simply maintaining the facade of human superiority.