A few weeks ago I shared some reflections on LinkedIn on Daron Acemoglu’s new paper on “AI, Human Cognition and Knowledge Collapse”, which just came out as an NBER Working Paper.
What does the paper claim? The core claim is that AI could contribute to a collapse of knowledge and a reduction in the stock of skills. This chimes very well with the concerns I had raised at the start of 2025 and is a direct implication of the “inference meets retrieval” chain of thought. His main point is very simple: because AI shares each unit of knowledge frictionlessly, there are weaker incentives to retrace the steps through which that knowledge was created in the first place, and this effect grows stronger the lower the cost of retrieval.
I posit, though, that in some societies some form of knowledge collapse has in fact already happened, in particular around skilled trades and “physical skills”. The UK is a prime case here, and immigration may have been a contributor by breaking down the implicit contracts in skilled trades: skills are portable under freedom of movement, which reduces incentives to invest in skill transfer unless it is publicly organized and protected with some form of monopsony power – the good old guilds.
But that is a separate, if not unimportant, point. Here, I want to highlight the bit that Daron’s work misses.
Why do I fear it’s even more problematic than Acemoglu’s point? The problem is the present “architecture” of capitalism. A marketized financial system, paired with secondary markets for reasoning traces and deep knowledge of behavioral biases that can be linked directly or indirectly to individual humans through non-decentralized (digital) ID, makes future human behavior increasingly predictable and monetizable today.
Meaning: there is a whole sector whose job is to shape and package financial products as investments presented as low-risk claims to future income, and this is precisely what bakes in a demand for predictability. That is, there are powerful incentives baked into the financial system to encourage individuals never to develop sophisticated planning, executive control, and higher-order cognition, let alone self-control.
This is possibly now supercharged by the possibilities of exploiting our digital fingerprints with AI. Just to be clear: the mechanism is a financialized economy with weak or poorly governed information intermediation markets, through which the “process” of knowledge production itself is monetized and incentivized. And this creates the ethical dilemma of the knowledge creator.
Daron’s paper formalizes this in a stylized model, and it is the direct implication of the “inference meets retrieval” reasoning chain: the risk becomes real if we keep treating humans as factors of production, consumers, or “herds” from which knowledge is “farmed” into profits. Profits that can be transfer-priced away, hollowing out the ability of free societies to maintain democratic governance over a social contract that may impose ethical constraints on the extraction to support varieties of human flourishing.
Where is the geopolitics? The US and, to a lesser extent, the UK are unusually specialized in precisely those service-sector activities that sit closest to information intermediation: finance, media, marketing, advisory services, platform intermediation, data brokerage, and the monetization of attention itself.
In other words, they are deeply specialized in turning context-specific information, predictive inference, and narrative control into rents. And this is where Acemoglu’s mechanism becomes politically explosive. In his model, the private return comes from context-specific information, while the public return comes from the accumulation of general knowledge that is shared and inherited as an externality through AI.
Agentic AI raises the value of the former while undermining incentives to generate the latter. So my claim is that the present architecture of capitalism does not merely permit this substitution; it rewards it. It financializes precisely the side of the knowledge-production process that is easiest to privatize, price, and trade, while systematically under-rewarding validation, replication, public reasoning, and the slow accumulation of a common stock of knowledge. That is the deeper architectural flaw: the risk of a knowledge-collapse dynamic is not an unfortunate byproduct of AI. It is what happens when a system organized around extractive returns to prediction meets a technology that radically scales prediction.
How to escape this risk? To escape the risk of knowledge collapse, the answer cannot simply be “more AI” or “better alignment” in the narrow technical sense. The answer has to be a new system architecture. That means strengthening the validation layer and making claims auditable and contestable. This requires building public and community-stewarded institutions that reward verification, not just novelty, and it requires auditable commons rather than opaque private scorecards.
More crucially, on the privacy frontier, it means a privacy pivot toward data minimisation, selective disclosure, and systems where one proves attributes rather than sharing personal information. That is: we need to move to an equilibrium of knowledge sharing, not data sharing.
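To make the data-sharing versus knowledge-sharing distinction concrete, here is a deliberately toy sketch of selective disclosure: the verifier learns only a derived attribute plus a commitment to the underlying record, not the record itself. This is my own illustration, not something from Acemoglu’s paper, and it is not real cryptography; a production system would use verifiable credentials or zero-knowledge proofs.

```python
# Toy contrast between "data sharing" (send everything) and
# "knowledge sharing" (send only a derived attribute).
# NOT real cryptography: a hash commitment stands in for what a
# verifiable-credential or zero-knowledge scheme would actually provide.
import hashlib
import json


def full_record_disclosure(record: dict) -> dict:
    """'Data sharing': the verifier receives the raw record."""
    return record


def selective_disclosure(record: dict, predicate) -> dict:
    """'Knowledge sharing': the verifier learns only a derived attribute,
    plus a commitment to the record for later auditability."""
    commitment = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return {"attribute": predicate(record), "commitment": commitment}


# Hypothetical record, used only for illustration.
record = {"name": "Alice", "birth_year": 1990, "postcode": "SW1A 1AA"}

# Prove "born in or before 2007" (i.e. an adult) without revealing the year.
claim = selective_disclosure(record, lambda r: r["birth_year"] <= 2007)

assert "birth_year" in full_record_disclosure(record)  # raw data flows out
assert claim["attribute"] is True                      # verifier learns the fact
assert "birth_year" not in claim                       # but not the raw data
```

The design point is that the commitment keeps the claim auditable and contestable later, while the raw personal data never leaves the owner’s hands.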
This means an inverted topology of power in which raw data sharing is limited but knowledge sharing is maximized. One mechanism through which this can be achieved is public data infrastructure and institutions, with players such as universities, public agencies, civic bodies, and communities of validators acting as stewards of auditable trust. And it is precisely here that the “action” we have seen in what I dubbed “operation cleanup” is so important, as many such players may themselves have been actively undermining the very trust needed to entrust these organizations with stewardship.
In that world, even state access to data would need to be bounded, logged, legible to the data owner, and tied to public purpose rather than invisible extraction. In Acemoglu’s language, this is a way of raising the aggregation capacity of general knowledge and protecting the incentives to keep producing it, rather than letting all roads lead to high-precision private recommendation and collective cognitive atrophy.
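As a purely illustrative sketch of what “bounded, logged, and legible to the data owner” could mean mechanically (the requester names and the purpose whitelist here are hypothetical, and a real system would involve far more than a purpose check):

```python
# Toy sketch of bounded, logged, owner-legible data access.
# An illustration of the governance idea, not a production design.
from dataclasses import dataclass, field
from datetime import datetime, timezone

# "Bounded": access is only granted for whitelisted public purposes.
ALLOWED_PURPOSES = {"public_health", "tax_assessment"}


@dataclass
class AccessLedger:
    entries: list = field(default_factory=list)

    def request(self, requester: str, owner: str, purpose: str) -> bool:
        granted = purpose in ALLOWED_PURPOSES
        # "Logged": every attempt is recorded, granted or not.
        self.entries.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "requester": requester,
            "owner": owner,
            "purpose": purpose,
            "granted": granted,
        })
        return granted

    def legible_to(self, owner: str) -> list:
        """'Legible': the data owner can read every access attempt on them."""
        return [e for e in self.entries if e["owner"] == owner]


ledger = AccessLedger()
assert ledger.request("tax_agency", "alice", "tax_assessment") is True
assert ledger.request("ad_broker", "alice", "microtargeting") is False
assert len(ledger.legible_to("alice")) == 2  # both attempts are visible to Alice
```

The point of the sketch is simply that denial events are as visible to the data owner as grants, which is what makes extraction contestable rather than invisible.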
And here is where the catch-22 of geopolitics comes in. A serious move in this direction would undermine many of the pillars of contemporary US power projection. It would reduce the scope for cross-context profiling, weaken the business model of data extraction, narrow the space for dark microtargeting, challenge the private control of trust and certification infrastructures, and curb the ability to turn informational asymmetries into both domestic rents and international leverage. On this reading, the US dilemma is that the reforms most needed to preserve the social legitimacy of knowledge are also reforms that would eat into the economic and strategic advantages built on information intermediation. That is why this becomes a catch-22: the architecture that now threatens collective cognition is also one of the architectures through which American power is projected. To recalibrate the system is, in part, to disarm some of the very mechanisms that have underpinned that power.
US aggression under Trump. This, then, is why the response to the threat can become so aggressive. If demands for privacy, auditable trust, data sovereignty, and non-extractive information governance rise, the resulting architectural shift does not just regulate a few firms. It threatens an entire accumulation regime and thriving intermediary markets. And so the temptation is to move faster, entrench deeper, and normalize the existing informational plumbing before the window closes.
Yet that is precisely the road on which the Acemoglu result becomes more, not less, likely: a world of ever more powerful context-specific recommendation, ever weaker incentives for human learning, and ever more fragile public stocks of general knowledge. The challenge, then, is not merely to govern AI models. It is to redesign the institutional topology in which knowledge, validation, identity, and inference are embedded. Otherwise, we may discover too late that what looked like a frontier of intelligence was in fact a machine for liquidating the cognitive foundations of free society. If we cannot trust that knowledge will be used as a force to advance the betterment of the human condition in ways that humans almost unanimously recognize as such, I fear we may move to an equilibrium where we lose the enlightenment consensus.
My interpretation of all this stacks up with US negotiating terms in its trade agreements vis-a-vis partner nations, specifically regarding bans on data localization, privacy adequacy requirements, and the like, as evidenced in the recent US/UK trade agreement.
My further conjecture is that the US is effectively trying to encourage its VC ecosystem to (re)build many of the already existing tools of the digital economy, but as tools that would be “sovereign”, albeit built on US rails. This could create a sufficient mass of captured (and invested) individuals in those societies to ensure that political-economic opposition to more genuinely “sovereign” solutions arises.
On the solution stack, I will need to write some other time. But you guessed it: decentralisation is the key.