The Hayekian knowledge problem is epistemic, not computational — AI doesn’t resolve it
Hayek’s argument against central planning is commonly misread as being about insufficient processing power. The deeper point is that economically relevant knowledge is generated by the act of decentralised exchange under genuine stakes; it does not exist prior to, and independently of, that process. If this is right, even superhuman AI cannot plan centrally, because the information it would need only comes into existence through the market process it would replace.
Explanandum
Does frontier AI’s constraint optimisation capability tip the balance back toward central planning, as some have suggested? Or is the Hayekian objection robust to computational advances?
Substance
The computational reading of Hayek says: the economy contains too much information for any planner to aggregate, so decentralised markets do the job through prices. On this reading, sufficiently powerful AI changes the calculus — if you can process real-time data from millions of producers and consumers, the Hayekian objection falls away.
But Hayek’s deeper point — especially in the 1945 paper “The Use of Knowledge in Society” — is about the kind of knowledge involved, not its volume. Much economically relevant knowledge is tacit (Polanyi’s concept: we know more than we can tell), contextual, and only revealed through action. The shopkeeper’s hunch about stocking levels, the entrepreneur’s sense that a market is ripe for entry, the worker’s feeling that their firm is declining — these aren’t data points waiting to be collected. They’re constituted through engaged activity under genuine stakes.
A central AI planner would process the articulable, measurable, digitisable subset of economic information. It would optimise brilliantly within that subset. But if the tacit dimension is where much real economic value is created — and the Bell Labs case, craft knowledge traditions, and entrepreneurial judgement suggest it is — then the optimisation operates on an incomplete picture.
There is also a second-order problem: if agents know they are being optimised over, their relationship to their own decisions changes. In a market, my purchasing decision is mine: I bear the cost and reap the benefit. In a centrally optimised system, I face incentives to game the optimiser. Goodhart’s Law (when a measure becomes a target, it ceases to be a good measure) bites harder the more powerful the optimiser: the stronger the reward attached to a reported metric, the more effort agents divert into inflating the metric rather than producing value.
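The gaming dynamic can be made concrete with a toy model (my construction, not from the source): each agent has genuine output `q`, the planner rewards a reported metric `q + g` with strength `pressure`, and gaming effort `g` carries a quadratic private cost. The agent's best response is `g = pressure / 2`, so the gap between reported and true output grows linearly with the optimiser's power.

```python
import random

def simulate(pressure, n_agents=1000, seed=0):
    """Return (mean true output, mean reported output) when a planner
    rewards the reported metric with strength `pressure`.

    Each agent maximises pressure * (q + g) - g**2 / 2 over gaming
    effort g, giving the best response g = pressure / 2.
    """
    rng = random.Random(seed)
    true_total = 0.0
    reported_total = 0.0
    for _ in range(n_agents):
        q = rng.gauss(1.0, 0.2)   # agent's genuine output
        g = pressure / 2.0        # optimal gaming effort
        true_total += q
        reported_total += q + g   # the planner only ever sees this
    return true_total / n_agents, reported_total / n_agents

for p in (0.0, 1.0, 4.0):
    true_mean, reported_mean = simulate(p)
    print(f"pressure={p}: true≈{true_mean:.2f}, reported≈{reported_mean:.2f}")
```

With no optimisation pressure the report is honest; at pressure 4 the planner's picture overstates real output by a constant 2 per agent. The model is deliberately crude, but it shows why the distortion is a function of the optimiser's strength, not a fixed cost.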
Supports
- Polanyi’s tacit knowledge concept: skilled judgement can’t be fully articulated as data
- Market experiments consistently show that prices aggregate dispersed information more efficiently than centralised alternatives
- Soviet planning failures reflected incentive problems (lying about output, gaming targets) as much as computational limits
- Campbell’s Law: the more a quantitative indicator is used for decision-making, the more it invites gaming — and more powerful optimisers make gaming more rewarding
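The price-aggregation support can be sketched as a toy simulation (my construction, not from the source): each agent holds a noisy private signal of an underlying value, and the market "price" is modelled crudely as the average of those signals. The averaged price tracks the truth far better than any typical individual estimate.

```python
import random
import statistics

def aggregation_demo(true_value=10.0, n_agents=200, noise=2.0, seed=1):
    """Compare individual estimation error with the error of an
    aggregate 'price' formed by averaging dispersed noisy signals."""
    rng = random.Random(seed)
    signals = [true_value + rng.gauss(0.0, noise) for _ in range(n_agents)]
    price = statistics.mean(signals)  # crude stand-in for market aggregation
    individual_err = statistics.mean(abs(s - true_value) for s in signals)
    price_err = abs(price - true_value)
    return individual_err, price_err

ind_err, price_err = aggregation_demo()
print(f"avg individual error ≈ {ind_err:.2f}, price error ≈ {price_err:.2f}")
```

Averaging is of course a caricature of price formation; real markets also weight traders by their stakes and filter out non-participants. But even this caricature shows the statistical core of the aggregation claim: independent errors cancel.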
Challenges
- AI recommendation systems already outperform individual human judgement in many narrow domains (demand prediction, inventory management)
- The tacit knowledge argument may be a “god of the gaps” — each domain that was once thought irreducibly tacit gets colonised by data and algorithms
- Hybrid systems (AI-assisted markets rather than AI-replacing markets) might capture benefits of both
- If AI shapes preferences rather than reading them, the distinction between “tacit preference” and “algorithmically constructed preference” may collapse
Open Questions
- Is there an irreducible core of human tacit judgement that AI can’t replicate, or is it tacit all the way down until it isn’t?
- Does the answer differ by economic domain — AI might plan commodity allocation well but fail at frontier innovation?
- Could AI change the Hayek debate not by solving the information problem but by changing the information ecology — making tacit knowledge less economically relevant?
Source Context
Developed through discussion of whether frontier AI systems could enable central planning. The key move was distinguishing the computational reading (AI helps) from the epistemic reading (AI doesn’t help) of Hayek’s argument.