
Measuring the impact of research

by Cancer Research UK | Research Feature

20 June 2017

Evaluating the broader societal impact of research is increasingly important for research funders. But what is the best way to do this? And how should we work with the research community to ensure our research delivers the greatest possible benefit to cancer patients?

We know the research we fund is world class, but is it curing cancer? That’s the £386 million question (the amount we spent on research in 2016/17). We have an ambition to accelerate progress and see 3 in 4 people survive cancer by 2034. But is our strategy on track to achieve this ambition?

Enter the world of research impact assessment. We are plugging into this evolving area to link the activity we fund to eventual patient impact, explains Rachel Stirzaker, our Director of Strategy.

It is really important to us to fund exceptional cancer research and to support the community that delivers it. But our mission is to drive real patient impact. Everything we do must feed into that.

—Rachel Stirzaker

A brief history of research impact assessment

Impact assessment involves plotting a path from input right through to patient impact (see diagram). The concept is not new, but is increasingly important for funders.

There is a need for society to hold research funders and organisations undertaking research to account. We have a duty to make research effective if we are using taxpayers’ and donors’ money.

—Professor Jonathan Grant, Assistant Principal for Strategic Initiatives and Public Policy, King’s College London

Research impact and its assessment have moved up the science policy agenda, Jonathan says, and now play a strong role in the UK’s Research Excellence Framework (REF), which determines university funding. Changes introduced in 2014 mean that the impact of research beyond academia now accounts for 20% of the overall assessment, which has changed the conversation around research funding and impact, making it more mainstream and visible. “The REF pushed people to really consider the impact of their research,” says Andrew Knowles, Senior Research Evaluation Manager at CRUK.

However, the REF’s impact assessment has not been without its critics, with complaints including the extra administrative time it adds for reporting. Lord Stern’s 2016 review contained recommendations for future REF exercises, which were generally well received. These included retaining peer review as a key assessment tool, supported by appropriate metrics and data, and introducing institution-level impact case studies to better showcase interdisciplinary research.

As Dr George Santangelo, Director of the Office of Portfolio Analysis at the US National Institutes of Health (NIH), points out, impact assessment is an evolving process, involving learning from past experiences and collaborating with the research community.

Figure: Evaluation and impact pathway – from input through to patient impact. Adapted from CSIRO (Commonwealth Scientific and Industrial Research Organisation).

Laying the foundations

In 2014, we joined Researchfish, an online platform that enables us to capture research outputs and outcomes directly from researchers and link them with inputs. Andrew describes Researchfish as a tool that helps collect a “wealth of information” on the research we fund. Aware of the risk of creating more admin for researchers, we worked closely with many funders, including Research Councils UK, to agree a shared set of questions for the platform. “We’re still learning what works,” Andrew says. “But lessons from REF and Researchfish have evolved our thinking on impact assessment.” In just three years, Researchfish has provided us with a great deal of new insight that we would not previously have collected or analysed.

While Researchfish provides a granular view and is building a better understanding of outputs, we know there is more to be done to monitor and measure impact. As Andrew explains, “We are currently at stage one: considering impact assessment, clarifying our objectives and starting a conversation with our research community so we can work together on stage two – developing the tools and measures to gauge impact and the charity’s progress.”

Finding the right measures

But figuring out what to do, and what not to do, is complicated. For starters, a review published in 2015 in Health Research Policy and Systems identified 16 different existing frameworks and models for assessing impact, each with its own benefits and downsides. And understanding what to measure is critical. This is where working with the research community and learning from others is paramount. As Rachel notes, there is still a lot of work to be done to develop the right metrics, but she points to work by experts in the field that will help frame our thinking at CRUK.

The NIH’s George Santangelo is one such expert. He argues that traditional impact gauges – used mainly because they are easy to measure, such as publications, citations and journal impact factors – are flawed, and that more effective metrics are needed. George and his team have developed the Relative Citation Ratio, which looks at the citations of individual papers and the rate at which they are accrued, adjusted for different research disciplines, as a proxy for ‘influence’ rather than impact. He is pleased with the buy-in from the community, but “no one metric will capture impact”, he notes. Instead, there needs to be a diversity of metrics – which could include measuring translation into treatments, media mentions, patents, data sharing, reproducibility and quality based on human judgement – and these need to be developed collaboratively with the research community.
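
To make the idea concrete, here is a minimal sketch of the calculation at the heart of the Relative Citation Ratio: an article’s citation rate divided by an expected citation rate for its field. In the published method the field benchmark is derived from each paper’s co-citation network and calibrated against NIH-funded papers; the field_citation_rate parameter below is simply an assumed input, so this is an illustration of the ratio, not the full algorithm.

    def citation_rate(citations: int, years_since_publication: float) -> float:
        """Citations accrued per year since the article was published."""
        return citations / max(years_since_publication, 1.0)

    def relative_citation_ratio(citations: int,
                                years_since_publication: float,
                                field_citation_rate: float) -> float:
        """Simplified RCR: the article's citation rate divided by the
        expected rate for its field. field_citation_rate is an assumed
        benchmark here; the real method derives it from the paper's
        co-citation network, calibrated against NIH-funded papers.
        An RCR of 1.0 means the paper is cited at its field's expected rate.
        """
        return citation_rate(citations, years_since_publication) / field_citation_rate

    # Example: a 5-year-old paper with 60 citations in a field where
    # comparable papers average 8 citations per year.
    print(relative_citation_ratio(60, 5, 8.0))  # 60/5 = 12 per year -> RCR 1.5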

Striking a balance

Hundreds of scientists and research organisations also hold this view, criticising the reliance on the journal impact factor and signing the Declaration on Research Assessment (DORA). As a signatory of this declaration, we are committed to accurately measuring output and improving the ways research is evaluated. As Andrew reports, we don’t want to focus too heavily on publications. “We want to take a wide-ranging approach, incorporating lots of data sources and a whole host of existing information, to develop a broad suite of metrics that work for CRUK at each stage of the impact pathway,” Andrew says. Both Andrew and Rachel confirm that expert review will stay as part of the suite because it remains key to ensuring that we continue to fund world-class research.

We want to make sure our reporting is as simple and efficient as possible while still capturing the high-quality data necessary for analysis. Sharing data between systems, universities and funders is one way to enable this, Andrew believes. Researchfish is aiming to improve interoperability between systems, with work so far focusing on exchanging publication records between institutions and funders – a recent pilot study showed significantly reduced reporting times for researchers with this increased sharing of data. Researchers can also share information between their Researchfish account and their ORCID profile, as the sketch below illustrates.
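
As a sketch of what that interoperability can look like in practice, the snippet below pulls a researcher’s publication records from ORCID’s public API (the v3.0 works endpoint) – a minimal illustration using Python’s requests library, not a description of how Researchfish itself performs the exchange. The ORCID iD used is the example record from ORCID’s own documentation.

    import requests

    ORCID_ID = "0000-0002-1825-0097"  # example record from ORCID's documentation

    # Fetch the researcher's works from ORCID's public v3.0 API as JSON.
    resp = requests.get(
        f"https://pub.orcid.org/v3.0/{ORCID_ID}/works",
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()

    # Each "group" bundles duplicate records of the same work; print the
    # year and title from the first summary in each group.
    for group in resp.json().get("group", []):
        summary = group["work-summary"][0]
        title = ((summary.get("title") or {}).get("title") or {}).get("value", "untitled")
        date = summary.get("publication-date") or {}
        year = (date.get("year") or {}).get("value", "n/a")
        print(year, "-", title)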

Keeping on track

Measuring impact – and doing it well – is a necessity for CRUK. “To be able to understand where we’ve had impact, and where we could help to drive impact if we applied funding, has become central to our decision making,” says Andrew. We will continue to fund high-quality research and, building on the learnings and collaborations to date, we will establish an appropriate suite of metrics for CRUK. Our supporters increasingly expect to understand how their money is helping to solve the cancer problem as part of their decision to donate; ultimately, our aim is to use our supporters’ money in the most effective and efficient way, helping the researchers we fund to generate the greatest impact for cancer patients.

In this article

Professor Jonathan Grant
Assistant Principal, Strategic Initiatives and Public Policy, King’s College London

Andrew Knowles
Senior Research Evaluation Manager, CRUK

George Santangelo
Director of the Office of Portfolio Analysis, US National Institutes of Health (NIH)

Rachel Stirzaker
Director of Strategy, CRUK