Research Impact

Allow me to deviate from my usual posts to discuss an issue within the academy itself: how we translate our research to the end user, how our institutions measure our success at this critical endeavour, and how our colleagues generally do not engage with it, opting instead for publication. I am talking, of course, about Research Impact.


The time is 5:30 am and a business executive has just begun a new day. He rises and heads to his morning ablutions, phone in hand. Before he has even found trousers, he begins to consume media, some of which will rightly inform his decision-making during the day's work. This media includes articles posted on social media, emailed news, and perhaps sites dedicated to his business or personal vocations.

This subtle, pervasive and often underestimated consumption of media is one side of the academic argument that is frequently misunderstood or ignored altogether.

What is Impact?

The pervasive consumption of digital media is a prime example of how research should be communicated to the rightful end users. This is how academic research can have impact. But what is impact?

There are several different definitions of impact. The Economic and Social Research Council based in the UK says:

Academic impact is the demonstrable contribution that excellent social and economic research makes in shifting understanding and advancing scientific method, theory and application across and within disciplines.


In Australia, the Australian Research Council states:

The Definition of Research Impact 
Research impact is the contribution that research makes to the economy, society, environment or culture, beyond the contribution to academic research. 

Australian Research Council – https://www.arc.gov.au/policies-strategies/strategy/research-impact-principles-framework

Note the emphasis on “beyond the contribution to academic research.” Merely publishing in a journal of repute is not enough to claim impact. I argue that colleagues who pursue highly cited publications with no engagement do not have impact. Those who take their work further and add value do.

Why, then, do we place so much importance on publication if impact is the currency through which society benefits from our work? The publication of our work is only half the equation for a truly successful career.

Why is Impact Important?

Looking at the above definitions, impact is how society benefits from our research.

When we consider the paywall issues currently plaguing our global journal systems, research dissemination becomes one-sided: only academics from wealthy, privileged institutions can access the most current literature. Society, by and large, does not read journals. People do not subscribe to the latest academic findings and they rarely consult patents. They get on with their lives, jobs and businesses.

Putting our work into journals with the intent of dissemination no longer meets the stated objective. On the contrary, it locks the research up, keeping it beyond the reach of the masses. Collaborating internationally and publishing in open access journals increases impact: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6145557/

How is impact currently measured?

It has been said, “Good departments will find a way of side-stepping metrics to judge what counts.”

The metric currently used to measure impact generally amounts to the monetary value of Category 2, 3 and 4 research grants an institution receives, used as a proxy for impact. The theory is that the more money industry gives you, the more engaged it is with your research, and the more impact it receives.

University and government policy should be robust yet flexible enough to capture both the tangible and intangible economic and social outcomes our institutions stand to deliver. Measuring impact in this manner captures the tangible but ignores the intangible. It praises high-cost research endeavours while ignoring low-cost research that does not need expensive equipment and contracts. It is dictated by market forces and, in my experience, ultimately leaves those contracting to Defence or other high-value STEM fields with more capital than those who may be advising public policy directly.

Arts faculties are not considered powerhouses of research simply because they do not bring home the bacon. Yet it is within these faculties that the future of policy, work and ultimately the very foundations of society are being decided.

It’s not a competition, but first place gets the prize.

The academy is not a competition, but measuring impact in this manner promotes an unhealthy competition that ruins the collegiality of interfaculty research. Measure money and suddenly people fear they are being ranked or examined for efficiencies. The problem is that the more impact an institution has, the more funding it inevitably receives from government to cover the administrative burden a school must support.

It does become a competition.

The burden arises because, in Australia, we underfund our research by almost 30%. We have to supplement our research from teaching fees. Our students miss out as we divert resources that should be spent on new, better, more purposeful education spaces. Australia is an oddity in this sense.

It means that funding below a threshold (say $25,000) is not commercially viable to accept, as it costs an institution more in administration than it delivers. Universities must refuse free money because it costs them, simply due to the ineptitude of public policy.

It is also at odds with the scientific method. Research rewards the one who comes first as much as the one who follows. The rigours of the scientific method require both independent validation and continuous repetition. There is no static finish line to cross with arms raised high, declaring we have found it all.

Indeed, the punchline “Everything that can be invented has been invented”, from an 1899 joke, derives its humour from precisely this point: there is no finish line for discovery.

A comic from an 1899 edition of Punch magazine regarding the coming century.

And ultimately, whatever institutions or governments measure will only ever be a proxy for success. If we define success as how tall one stands relative to one's size, but then measure it by how well one climbs a tree, the measure will favour the monkey over the fish even if the fish was already more successful.

Measuring impact through money is exactly the same, because STEM fields carry higher costs.

The allegory of the animal school.

How can we measure impact?

No matter how we measure impact, it will never be perfect. But there are some things we could do to improve.

  • Impact could be measured by how influential our students become. Without the teacher, the student could not reach their lofty heights. This rewards academics in our institutions beyond those who are research-only, and it increases the importance of our teaching institutions within society.
  • Impact can be measured by how often the media takes an interest. How successful are we at communicating our discoveries to the broader community? Do we receive bequests or other philanthropy to assist our research? Is it just money, or are there in-kind contributions? How many readers are our articles generating? How engaged is the community with our research?
  • Impact can be measured by how much money we bring in, but a weighted measurement is needed to ensure that low-cost research with similar benefits is ranked on an equal footing with high-cost research. Equivalence between the disciplines is required so that high-cost technology research does not capture the funding attention of our communities to the detriment of the soft sciences.
  • Impact could be measured by how many engagements we have with industry, media, government and the public. Engagement in and of itself would mean that our research is being reported both up and down. Moreover, the time spent would be a proxy for how impactful our research is: the more time each body takes up with requests for comment and insight, the more impactful our work.
  • Impact should be measured by how many awards our research has received. How much acknowledgement has been paid to our work for the benefit of society at large?
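The weighted funding measure suggested above could be sketched as follows. This is a toy illustration only: the discipline cost baselines and the normalisation rule are hypothetical placeholders, not figures from any real policy framework.

```python
# A toy sketch of discipline-weighted funding, so that low-cost fields
# are not penalised when grant income is used as an impact proxy.
# All baseline figures below are hypothetical.

# Typical cost of a research project, by discipline (hypothetical).
DISCIPLINE_COST_BASELINE = {
    "engineering": 250_000,
    "medicine": 400_000,
    "humanities": 30_000,
    "social_science": 60_000,
}

def weighted_funding_score(discipline: str, funding: float) -> float:
    """Normalise raw grant income by the typical cost of research
    in that discipline, rather than comparing raw dollars."""
    baseline = DISCIPLINE_COST_BASELINE[discipline]
    return funding / baseline

# Under this weighting, a $60,000 humanities grant ranks above
# a $300,000 engineering grant.
print(weighted_funding_score("humanities", 60_000))    # 2.0
print(weighted_funding_score("engineering", 300_000))  # 1.2
```

The design point is simply that the divisor carries the equivalence between disciplines; any real scheme would need defensible, sector-agreed baselines rather than the invented ones here.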

As a PhD graduate awaiting an ECR position, I have captured 5% of my university's total readership for The Conversation. Yet this is often overlooked and regarded as a poor distribution of my research for want of highly regarded journals that no one bar academics will ever read. I have had arguments when seeking funding because I had not published in reputable journals. I have a readership of over 600,000 for my science communication work; my published journal articles are lucky to reach 1,000, with no citations bar my own. My science communication work has been a far more effective impact activity than publication.

My Conversation work has driven nearly 20 follow-up engagements with media, and numerous meetings with industry to consult on and solve real-world problems. These “wicked problems” are what I went into academia for in the first place. There is more value in my work engaging with industry than in sitting behind a screen writing a research grant. Yes, Category 1 grants bring prestige and honour to an institution, but why? Shouldn't we glean that reputational advantage from the work we offer society as a whole?

The point of this post is that there are more ways to measure impact than simply the number of dollars or publications attributed to one's name. Impact is the global currency of the academic. It's time we learned how to convert.
