I just wanted to cover this here, as I have been thinking about it for a good part of the past two years. With the work on causal claims in economics, I am quite certain we will see this pattern in the data, in particular in empirical economics. That is: we have seen a linearisation of the topology of causal claims and their interconnectedness.
This is partly the result of research having become more focused on the identification of mechanisms and pathways, with strong demand from the review process and the profession for, basically, a preferred mechanism or linear path that links cause and effect into a story. This linearisation of research is very appealing to the human mind, as most of us can follow step-by-step instructions or an argument laid out that way. Most media stories I see, though, are highly non-linear: they pick pieces out of a joint distribution of claims, interwoven with evidence of varying quality. This can create actual and perceived noisy narrative transmission, as the careful complexity reduction in the presentation of research results gets replaced with vaguer, low-dimensional terms that mean broadly different things to different subpopulations. I noticed this around the media coverage of the austerity work adjacent to the Brexit research.
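To make the point about linearisation concrete, here is a minimal, purely hypothetical sketch in Python: a toy causal graph with several interlocking channels between a policy and an outcome, and the single depth-first path through it that a linear narrative would keep. All node names are invented for illustration, not taken from any actual paper.

```python
# Toy sketch (hypothetical labels, not from any real study): a small causal
# graph in which several channels connect a policy to an outcome, versus the
# single "preferred mechanism" a linear narrative keeps.

causal_graph = {
    "policy_shock":      ["credit_supply", "uncertainty", "public_investment"],
    "credit_supply":     ["firm_investment"],
    "uncertainty":       ["firm_investment", "household_saving"],
    "public_investment": ["regional_demand"],
    "firm_investment":   ["employment"],
    "household_saving":  ["regional_demand"],
    "regional_demand":   ["employment"],
    "employment":        [],
}

def edges(graph):
    """Flatten the adjacency mapping into a list of causal links."""
    return [(u, v) for u, vs in graph.items() for v in vs]

def one_path(graph, start, goal, path=None):
    """Depth-first search for a single cause-to-effect path -- the kind of
    linear story the review process tends to reward."""
    path = (path or []) + [start]
    if start == goal:
        return path
    for nxt in graph.get(start, []):
        found = one_path(graph, nxt, goal, path)
        if found:
            return found
    return None

all_links = edges(causal_graph)
story = one_path(causal_graph, "policy_shock", "employment")

print(f"full graph: {len(all_links)} causal links")
print(f"linear story keeps {len(story) - 1}: " + " -> ".join(story))
# Most of the interconnectedness is discarded when the topology is linearised.
```

Running this prints a ten-link graph reduced to a three-link story, which is roughly what the "preferred mechanism" demand does to the underlying claim structure.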
So the reality is much messier. Of course there is “power” in the identification of a linear narrative or mechanism, but since we should not think of these human-derived reaction functions as static in any way, it is hard to believe that contextual changes would not completely upend many of the mechanisms that have been detected, especially over the last 5-10 years.
Basically, we have become so good at figuring humans out, at some level, that effect sizes are declining and diminishing. Large language models and agent-based models built from instructed agents highlight this quite well.
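A minimal simulation sketch of that intuition, assuming nothing more than a behavioural response that attenuates between two contexts (all numbers are invented, this is not an estimate of anything real):

```python
# Toy simulation: a "mechanism" whose true behavioural response weakens once
# agents adapt, so the estimated effect size declines between contexts.
import random

random.seed(42)

def run_study(true_effect, n=5000, noise=1.0):
    """Simulate a simple randomised study and return the estimated effect
    (difference in means between treated and control units)."""
    treated, control = [], []
    for _ in range(n):
        t = random.random() < 0.5                      # random assignment
        y = true_effect * t + random.gauss(0, noise)   # outcome with noise
        (treated if t else control).append(y)
    return sum(treated) / len(treated) - sum(control) / len(control)

# Context 1: the reaction function at the time the mechanism was identified.
effect_then = run_study(true_effect=0.50)

# Context 2: agents have adapted; the behavioural response has attenuated.
effect_now = run_study(true_effect=0.15)

print(f"estimated effect, original context: {effect_then:.2f}")
print(f"estimated effect, shifted context:  {effect_now:.2f}")
# Neither estimate is "wrong"; the reaction function simply is not static.
```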
This is why I think we should all add a dose of randomness into our lives…