Thoughts on narratives and the role of AI and validators going forward

I wanted to quickly share some reflections and connect some dots around ongoing work with Prashant on causal claims and language in economics. We have now extended this to around 300,000 economics paper abstracts, from which we retrieve information on causal claims. The big challenge, not surprisingly, is getting full texts in a way that is useful (think: licensing costs).

But let's just have a quick look. We see that, over time, economics research has become more “claimy”, despite the technical toolkit for program evaluation having only been invented, and in particular popularised, in the early and late 2000s.
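
To make the measurement concrete, here is a minimal sketch of how one might flag “claimy” abstracts and track their share over time. This is not our actual extraction pipeline; the phrase list, the `is_claimy` heuristic, and the `abstracts` record format are illustrative assumptions.

```python
# Minimal sketch (not our actual pipeline): flag abstracts that contain
# simple causal-claim phrases and compute the yearly share of "claimy" papers.
from collections import defaultdict

# Hypothetical marker phrases; a real pipeline would use something far richer.
CAUSAL_PHRASES = (
    "causal effect", "we show that", "leads to",
    "the effect of", "we identify", "increases", "reduces",
)

def is_claimy(abstract: str) -> bool:
    """Crude keyword heuristic for whether an abstract makes a causal claim."""
    text = abstract.lower()
    return any(phrase in text for phrase in CAUSAL_PHRASES)

def claimy_share_by_year(abstracts):
    """abstracts: iterable of dicts like {"year": 2015, "abstract": "..."}."""
    counts = defaultdict(lambda: [0, 0])  # year -> [claimy, total]
    for rec in abstracts:
        counts[rec["year"]][0] += is_claimy(rec["abstract"])
        counts[rec["year"]][1] += 1
    return {year: claimy / total for year, (claimy, total) in sorted(counts.items())}

if __name__ == "__main__":
    sample = [
        {"year": 2000, "abstract": "We discuss trends in trade policy."},
        {"year": 2020, "abstract": "We show that the reform leads to higher employment."},
    ]
    print(claimy_share_by_year(sample))  # e.g. {2000: 0.0, 2020: 1.0}
```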

Nowadays, we are in a world where, in a few settings and subject to ethics, government is run on RCTs. To me, this was the most striking revelation around the Eat Out to Help Out discourse. The government did run an evaluation, most likely through randomized advertisements on social media, inducing exogenous variation in take-up. Obviously, this highlights the power of programming.

This was only revealed as part of the disclosure of WhatsApp messages.

But let us go back to the question above. The changing nature of knowledge creation in the social sciences and economics may create the possibility of a cultural clash, or a clash of systems.

How so?

Well, if historically social science research was mostly deployed for narrative shaping or narrative control, whereby society is “managed” through the shaping of the stories we tell ourselves, this now creates a tension with, well, “actually knowing” the answer: policy questions can increasingly be answered through experimental verification. At the same time, there is still a wide tradition in some policy-making circles that essentially relies on circumstantial discussion and an ability to change the evidence, or at least to affect the narrative, around social and economic phenomena. This may be a feature of the historic evidence-gathering process, which then, of course, lends particular relevance to the latent political leanings of the individuals producing the evidence. I have alluded to this at several points on my social media through what some may perceive to be cryptic writing.

But I do sense that there may really be a clash between these institutional cultures. I have spoken several times, at a range of events, with Elga Bartsch, the head of the Grundsatzabteilung Wirtschaftspolitik (the economic policy department) in the German Ministry of Economics, about the state of the proposed German Research Data Law (Forschungsdatengesetz), which would open up data for scientific inquiry. This still seems stuck, depriving research of the raw material that may be necessary to produce evidence cost-effectively, leaning on existing institutional capabilities.

In any case, there is a broader issue. This sets up a huge tension because, well, humans struggle with processing information; there is, literally, information overflow. At a time of crisis, it requires the densification of information into simple stories, or even images, to induce the behavioral change that may be needed to avoid harm. In a highly polarized context, this type of communication invariably produces tensions and may feed narratives that further erode trust.

But some of these stories can also develop their own dynamic or, in the academic realm, create rabbit holes. I alluded to this in my 2023 keynote at SHAPER. The internet has made this much worse, of course. And that is why I argue that there is almost certainly a need for a new intermediation layer and a fragmentation of the information sphere into layers, where individuals are distinguished by their differential ability to process information. Let us not call them influencers, but that is, kind of, what I mean. But this threatens the monopoly on narrative control, if there is such a thing. And of course, in the geopolitical domain, this is where I sense the contest is over access to the base layer (digital ID).

Evaluating claims

So, how can we evaluate causal claims? Or knowledge? Especially when, with artificial intelligence, we increasingly see hyper-personalised information. How do we ensure that people are not given malign or bad information that may cause harm?

Ultimately, this gets to the tension between advertising and information, since a lot of marketing deploys very similar techniques to affect individual behavior. Marketing is a big chunk of a service economy, and some of it may serve to induce consumers to make bad choices. And Putin has weaponised this.

But let me now relate this back to causal claims and their validation. In light of highly personalized individual information diets, a consumer-protection angle is, in my view, warranted. We illustrate how this may work in our short note on health claims validation. And thank God: there is such a health claims register, and it may represent the knowledge frontier. Its mere existence gives me good comfort that we are, indeed, governed by smart people. But of course, it also raises questions around the limits to the generalisability of knowledge, which, I would think, are most pronounced and relevant in the medical domain. This is why I am doing cool work with Hongyu Zhu and Prashant Garg on the Geography of Knowledge, which I should present at Peking University for the first time in a few weeks.
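
To give a flavour of what claim validation against a register could look like, here is a minimal sketch under strong assumptions: the `REGISTER` entries, the fuzzy-matching approach, and the threshold are all hypothetical, not the method of our note or of any actual register.

```python
# Illustrative sketch: check a health claim against a small in-memory register
# using fuzzy string matching. Register entries and threshold are made up.
from difflib import SequenceMatcher

REGISTER = {
    "vitamin c contributes to the normal function of the immune system": "authorised",
    "calcium is needed for the maintenance of normal bones": "authorised",
}

def validate_claim(claim: str, threshold: float = 0.5):
    """Return the best-matching register entry and its status, or None if no close match."""
    claim = claim.lower().strip()
    best_entry, best_score = None, 0.0
    for entry in REGISTER:
        score = SequenceMatcher(None, claim, entry).ratio()
        if score > best_score:
            best_entry, best_score = entry, score
    if best_score >= threshold:
        return {"entry": best_entry, "status": REGISTER[best_entry], "score": round(best_score, 2)}
    return None  # the claim sits outside the register's knowledge frontier

# Prints the closest register entry and its status, or None.
print(validate_claim("Vitamin C supports a normal immune system"))
```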

In any case, the Americans take offense at this whole notion of validating information or claims, as they may see it as regulating free speech. It could well be that they don't “understand” the need and urgency given the trajectory of artificial intelligence and its ability to be weaponised by malign actors (foreign, as well as simply domestic or economic), but yes, it is an incredibly thin line to navigate between freedom of expression and ensuring that others are not instigating or causing harm by encouraging harmful behavior. This is, I think, one important domain where global discourse is diverging in some realms.

And in the end, I would say we have to choose well-navigated personal liberties, dignity, and respect for one another. And there, I am with Jesus and the New Testament: love thy neighbor as you love yourself. And so, we need to heal societies by addressing trauma. Though some may consider trauma to be the ultimate innovation policy. And for this, the encoding of the first bit is what really matters.

My hope

Is that we move to community-based AI with, essentially, a decentralised network of information validators creating a trust web, with nodes that confer community trust. You can validate, you can verify. And ultimately, you may also be authenticated to nudge and “manipulate”, but in an environment of care, built around loving and positive interpersonal relationships. This may be naive, but we each carry a device that could become part of this network, and we each carry a network of friends connected in the digital realm who may contribute to information validation services.
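
A very naive sketch of the idea, not a protocol specification: nodes in a trust web attach validations to a claim, and community trust is the reputation-weighted share of positive validations. Every name, field, and number below is an assumption for illustration only.

```python
# Toy model of a trust web: validators (e.g. people's devices) vouch for or
# against a claim, and their community reputation weights the outcome.
from dataclasses import dataclass, field

@dataclass
class Validator:
    node_id: str
    reputation: float  # community-earned reputation, assumed in [0, 1]

@dataclass
class Claim:
    text: str
    validations: list = field(default_factory=list)  # (Validator, verdict) pairs

    def validate(self, validator: Validator, verdict: bool) -> None:
        self.validations.append((validator, verdict))

    def community_trust(self) -> float:
        total = sum(v.reputation for v, _ in self.validations)
        if total == 0:
            return 0.0
        return sum(v.reputation for v, verdict in self.validations if verdict) / total

claim = Claim("Policy X reduced outcome Y in region Z.")
claim.validate(Validator("alice-phone", 0.9), True)
claim.validate(Validator("bob-phone", 0.4), False)
print(round(claim.community_trust(), 2))  # reputation-weighted trust: 0.69
```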

The English may struggle a lot with this. But maybe other societies will as well. As the English language, as spoken by the British, encodes hierarchy and social class, it is often performative by nature, and in some instances, yes means no and no means yes.

