On limitations to survey experiment research and attacks on academia

In the past I have shared reflections and concerns about challenges that arise with survey experiments, and in particular with the information-experiment variety of this class of research designs. Don’t get me wrong: I have a few papers that make use of such instruments and that I feel are well done, because they also combine and draw on other “hard” data and techniques. But there are some fundamental challenges, which I have shared on social media before, that I think merit study, challenge and investigation. I also want to comment on how this may link to attacks on academia from certain corners.

There is a recent paper that highlights how poorly survey-experiment findings translate into real-world “outcomes” or impact, which to me underscores a challenge with some of this research. The paper is titled “Not Getting the Message on Climate? Attention as a Key Barrier to Mass-Marketing Experimentally-Validated Messages”, is coauthored by Nicholas Carnes and Geoffrey L. Henderson, and just came out in the British Journal of Political Science.

Survey experiments have many problems, not least that, in the absence of digital ID, AI-generated responses are increasingly likely to be among the respondent set.

But one conceptual problem that advocates of the “we just have to communicate our policies better” strategy overlook is that people self-select into, and out of, exposure to news that does not accord with their priors.

So, even if you can show that in some survey experiment there is some persuasive effect, this does not mean that the effect will materialise in the real world. This is because, unlike in the experiment, you can rarely force people to be exposed. The other problem that this strategy ignores is that, in reality, there is always counter-framing by elites (e.g. interest groups or politicians) with opposed interests.
So, the questions then are:

  • (i) will people actually get the message?
  • (ii) will the message be more persuasive than the counter-message(s) they receive?

Giving policy advice based solely on a simple survey experiment, which recovers, at best, only a partial-equilibrium effect, is bad practice. General-equilibrium effects matter.
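The gap between the two questions above can be made concrete with a back-of-envelope calculation. All numbers here are purely illustrative assumptions, not estimates from any study: an effect measured under forced exposure, deflated by a guessed attention rate and a guessed amount of counter-framing.

```python
# Illustrative sketch only: every number below is an assumption.
effect_in_experiment = 0.10   # assumed: +10pp support under forced exposure
p_attention = 0.15            # assumed: share who actually see the message
p_counter_exposure = 0.60     # assumed: share also exposed to counter-framing
counter_effect = -0.08        # assumed: counter-message effect on those exposed

# The partial-equilibrium (survey) estimate ignores both frictions.
partial = effect_in_experiment

# A crude general-equilibrium back-of-envelope: only attentive people
# move at all, and some of them are pushed back by counter-messaging.
general = p_attention * (effect_in_experiment + p_counter_exposure * counter_effect)

print(f"survey estimate: {partial:+.3f}")   # +0.100
print(f"field estimate:  {general:+.3f}")   # +0.008
```

Under these (made-up) parameters the field effect is an order of magnitude smaller than the survey effect, which is the qualitative point: attention and counter-framing multiply, they do not merely subtract.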

But naturally, if attention-grabbing research leans on such designs, the researchers who conduct it, and their tools, may come to be seen as particularly worthy of access. They develop perceived authority, and it is through that authority that they may be asked to give advice to policy makers. This is a difficult cycle: individuals catapulted into positions where they provide advice or input may end up exerting a great deal of influence.

This is particularly problematic in systems of higher education that treat measures of attention, citations, or even outright activism in the name of “impact” as a currency of reward for research funding.

Reflecting back, I have been thinking hard about some of my own research that has found its way into shaping narratives. I would like to think that I got things right in most instances. Of course, you may say this is a success, that this is impact; in the UK, research funding bodies even have a catalogue of what officially counts as impact.

It has always been my view that “good research” should find its way into public cognition on its own. There were two notable exceptions where I actively and quite decidedly pushed the research out: two papers around the pandemic (here and here), and that was quite simply because I was furious.

But where I see more substantive issues with information experiments is the following: selection into information spaces can produce internally valid experiments that still do not generalize, even if the marginal distributions between treatment and control appear perfectly balanced. I am trying to formalize, or at least illustrate, this in other work at the moment.

Risks to the internal validity of survey experiments are particularly concerning in the presence of high-dimensional treatment effect heterogeneity: it is perfectly possible to have perfect balance on the marginals of treatment and control, yet high-dimensional joint imbalance that cannot be statistically evaluated. This is consequential for the narrative that may arise from a research paper if treatment effects are highly heterogeneous, and there are good reasons to believe they often are. In the post below I commented on some very peculiar observations around selection in opinion polling surveys around the Leave question.
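The point about marginal balance coexisting with undetectable joint imbalance can be shown in a toy simulation. The data-generating process below is entirely assumed for illustration: the treatment effect is concentrated in a rare “type” defined by a high-order interaction of covariates, so each covariate looks balanced one at a time while the responsive subgroup cannot be checked cell by cell.

```python
import numpy as np

rng = np.random.default_rng(42)
n, k = 5000, 10

# k binary covariates; treatment is randomized, so each marginal is balanced.
X = rng.integers(0, 2, size=(n, k))
T = rng.integers(0, 2, size=n)

# Assumed DGP: the effect lives on a 5-way interaction -- only units with
# X1 = ... = X5 = 1 (about 3% of the sample) respond to treatment at all.
rare_type = X[:, :5].all(axis=1)
tau = np.where(rare_type, 2.0, 0.0)
Y = 0.5 * X.sum(axis=1) + tau * T + rng.normal(0, 1, n)

# One-covariate-at-a-time balance checks look fine.
marginal_gap = np.abs(X[T == 1].mean(axis=0) - X[T == 0].mean(axis=0)).max()
print(f"max marginal imbalance: {marginal_gap:.3f}")

# But the rare responsive type can be imbalanced across arms by chance,
# and with 2**10 covariate cells no test has power to detect it.
share_t = rare_type[T == 1].mean()
share_c = rare_type[T == 0].mean()
print(f"rare-type share, treated vs control: {share_t:.4f} vs {share_c:.4f}")

# The estimated ATE is driven entirely by this small subgroup; a survey
# panel that over- or under-samples the type will not generalize.
ate_hat = Y[T == 1].mean() - Y[T == 0].mean()
print(f"estimated ATE: {ate_hat:.3f}  (population ATE = {tau.mean():.3f})")
```

The design choice here is deliberate: marginal balance checks pass by construction under randomization, while the quantity that actually matters, the share of the responsive type in each arm, lives in a cell too small and too high-dimensional to audit.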

Attacks on academia? The bigger issue concerns societal organisation in free democracies. Structurally, the issue above creates a tension between politics, media and society, and the role of academics more broadly. Academics may be (or may be perceived to be) contesting spaces traditionally occupied by politics: the intermediation between different groups and the shaping of narratives. For some, this may be articulated as academia producing disinformation. And so, in some way, you could rationalize some forms of attacks on higher education. This is naturally NOT an endorsement. It is important for free societies to have diverse inquiry, but we know that an academic system and its incentive structure can be gamed.

And the rewards to gaming have, most likely, drastically increased. If one pursues a career in academia in order to “win academia”, there may be a societal cost to be borne from the spillovers of that ambition.

