COVID-19 has brought tough choices for governments and other decision-makers, including employers. Within one week from 16 March, following research from Imperial College London, the UK government’s rhetoric and strategy changed drastically from mitigation and herd immunity to suppression and lockdown. Then there were decisions on what COVID-19 tests to invest in, how many ICU beds to prepare, how to source and distribute protective equipment, when to loosen restrictions on movement, and on it goes.

We have heard criticisms – for example that the UK government’s actions were too little, too late. Yet we’ve also seen accounts of divided opinion among experts, with the science around COVID-19 evidently developing week by week. But in navigating these treacherous waters, one thing remains crystal clear: decision-makers must draw on the best available scientific evidence.

Had enough of experts?

This hasn’t always been the case – it’s a far cry from Michael Gove’s comment in the run-up to the Brexit vote that ‘people in this country have had enough of experts’. But there’s no arguing with a pandemic: wise decisions aren’t based on gut feeling, we need to look at the figures. And we rely on experts to tell us which figures to look at and how to interpret them.

It’s tricky when, as is often the case, the decision-makers and the most trustworthy experts are different people. But there are approaches and questions that can help decision-makers engage with experts and evidence. And this is something that employers, managers and HR professionals can learn from too. That doesn’t mean becoming a scientist, just becoming a savvy consumer of research.

Consider the study design - a question of scientific method

A first critical question to put to scientists is: what is the study design of their research? Understanding the scientific method helps to gauge the robustness or trustworthiness of research – how good a guide it is for what we want to know.

Much of the research on COVID-19, including the studies mentioned from Imperial College, is based on predictive mathematical modelling that draws on other studies. We are now also seeing what for many will be more familiar methods, with clinical trials starting in earnest in the UK.
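For readers curious what ‘predictive mathematical modelling’ looks like in practice, here is a minimal sketch of a simple compartmental (SIR) model – purely illustrative, with assumed parameter values, and in no way the Imperial College model itself.

```python
# Minimal SIR (Susceptible-Infected-Recovered) model: an illustrative sketch only,
# not the Imperial College model. All parameter values below are assumptions.

def simulate_sir(population, initial_infected, beta, gamma, days):
    """Simulate daily SIR dynamics with simple one-day Euler steps."""
    s = population - initial_infected  # susceptible
    i = float(initial_infected)        # currently infected
    r = 0.0                            # recovered
    history = []
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

# Example: compare two assumed transmission rates (roughly, with and without
# distancing measures) and look at the peak number of people infected at once.
for beta in (0.30, 0.15):
    peak_infected = max(i for _, i, _ in simulate_sir(66_000_000, 100, beta, 0.1, 365))
    print(f"beta={beta}: peak infected ≈ {peak_infected:,.0f}")
```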

Trials centre on interventions – be they drugs to cure a disease or management practices to improve performance – and aim to tell us about cause-and-effect relationships. If we do A, will it increase B, or reduce C?

Take the case of Didier Raoult, whose trial of a chloroquine derivative and the antibiotic azithromycin to treat COVID-19 caused huge waves and seemed to show some promise. But it was criticised for not being properly controlled or peer reviewed. It was, in short, a preliminary study. This didn’t stop Donald Trump tweeting that it could be ‘one of the biggest game changers in the history of medicine’. Meanwhile, Emmanuel Macron has received his own criticism for the French government’s response to COVID-19, including giving mixed messages about Raoult’s research, but got something right when he said: ‘It’s not for the president of the French republic to say what treatment is good and what isn’t. What I did … was to make sure that what [Raoult] is working on is within the framework of a clinical trial protocol.’

‘What’s the study design?’ is a useful question decision-makers in any field can put to scientists. Helpfully, there is a broad hierarchy of research designs for cause-and-effect relationships.

Hierarchy of study designs according to trustworthiness

  1. Systematic reviews and meta-analyses – the results of multiple studies are combined.
  2. Randomised controlled trials (RCTs) – subjects are randomly assigned to intervention and control groups.
  3. Controlled trials – two groups are studied longitudinally, one that receives an intervention and a comparison (control) group that does not.
  4. Longitudinal studies – a group is studied over time to look at change.
  5. Cross-sectional studies – a group is studied at a single point in time.

Evidence is often conflicting and far from clear-cut, so cherry-picking studies to suit your views is dangerous. We need to draw together the best evidence from the body of knowledge and look at it as a whole. This is why systematic reviews are so useful: if the studies they draw on are RCTs, this gives us the most trustworthy evidence on a cause-and-effect relationship.
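To make the idea of random assignment – the defining feature of an RCT – concrete, here is a minimal sketch of how participants might be allocated to intervention and control groups. The participant IDs and the simple 50/50 split are assumptions for illustration only.

```python
import random

def randomly_assign(participants, seed=None):
    """Randomly split participants into intervention and control groups of (near) equal size.

    Randomisation means that, on average, known and unknown differences between
    people end up spread evenly across the two groups.
    """
    rng = random.Random(seed)
    shuffled = participants[:]          # copy so the original list is untouched
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return {
        "intervention": shuffled[:midpoint],
        "control": shuffled[midpoint:],
    }

# Example with hypothetical participant IDs.
groups = randomly_assign([f"P{n:03d}" for n in range(1, 21)], seed=42)
print(groups["intervention"])
print(groups["control"])
```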

The effect of size - thinking about scale

What changed with the Imperial College research of 16 March? Essentially it boils down to a question of scale, or effect size. The research predicted that, under the UK government’s current measures, COVID-19 would spread at a rate that would massively overwhelm our hospitals, so a change of strategy was needed. The numbers for the existing strategy didn’t add up.

Effect sizes are important because they explain the magnitude of an intervention’s impact, or the size of the difference between groups. A hugely useful tool is Cohen’s Rule of Thumb, which matches different statistical measures to small / medium / large categories. It gives relative statistical novices a clear steer on what technical results mean for the real world (see further CEBMa Guideline for Rapid Evidence Assessments in Management and Organizations, p 20).

According to Cohen, a ‘small’ effect is one that is visible only through careful examination – so may not be practically relevant. A ‘medium’ effect is one that is ‘visible to the naked eye of the careful observer’. And a ‘large’ effect is one that anybody can easily see because it is substantial. An example of a large effect size is the relationship between sex and height: if you walked into a large room full of people in which all the men were on one side and all the women on the other side, you would instantly see a general difference in height.
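As a rough illustration, a standardised mean difference (Cohen’s d) can be computed from two groups’ scores and read against Cohen’s approximate thresholds of 0.2 (small), 0.5 (medium) and 0.8 (large). The sketch below uses made-up height data to echo the example above.

```python
import statistics

def cohens_d(group_a, group_b):
    """Standardised mean difference between two groups, using a pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

def label_effect(d):
    """Cohen's rule-of-thumb categories for a standardised mean difference."""
    d = abs(d)
    if d < 0.2:
        return "negligible"
    if d < 0.5:
        return "small"
    if d < 0.8:
        return "medium"
    return "large"

# Hypothetical example: heights (cm) of two groups of people.
men = [178, 182, 175, 180, 185, 177, 181, 179]
women = [165, 168, 162, 170, 166, 163, 169, 167]
d = cohens_d(men, women)
print(f"d = {d:.2f} ({label_effect(d)})")   # a 'large', easily visible difference
```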

Effect sizes need to be contextualised. For example, a small effect is of huge importance if the outcome is the number of heart attacks or deaths; in comparison, a large effect may be relatively unimportant if the outcome is work motivation. But they tell us a great deal about the importance of an intervention or technique.

Unfortunately, researchers often ignore effect sizes in favour of ‘P values’ or statistical significance. P values are important, as they tell us how likely it is that research results are down to chance. More precisely, they show the probability of obtaining the same, or a more extreme, result purely by chance if there were no real effect (less than 1 in 20, or p<.05, is the main convention in social science for statistically significant results). If this seems conceptually complex, it’s because it is. P values are widely misunderstood and even leading statisticians can find it hard to explain them.
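One way to make that definition concrete is a permutation test, which estimates a p-value directly as the proportion of chance reshufflings of the data that produce a difference at least as extreme as the one actually observed. The sketch below uses made-up scores purely for illustration.

```python
import random
import statistics

def permutation_p_value(group_a, group_b, n_permutations=10_000, seed=1):
    """Estimate a two-sided p-value: how often does randomly reshuffling the data
    produce a difference in means at least as extreme as the one observed?"""
    rng = random.Random(seed)
    observed = abs(statistics.mean(group_a) - statistics.mean(group_b))
    pooled = group_a + group_b
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:]))
        if diff >= observed:
            extreme += 1
    return extreme / n_permutations

# Hypothetical scores from an intervention group and a control group.
intervention = [14, 16, 15, 18, 17, 13, 16, 15]
control = [12, 14, 13, 15, 12, 11, 14, 13]
print(f"p ≈ {permutation_p_value(intervention, control):.3f}")
```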

But even though it is important technical information, statistical significance does not tell us about practical significance – do research results show a huge difference that you ignore at your peril, or a tiny difference that really isn’t worth the effort? For that we need effect sizes.
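The sketch below illustrates the distinction with made-up data (and the SciPy library): the same tiny underlying difference is ‘non-significant’ in a small sample and highly ‘significant’ in a huge one, yet the effect size – the thing that matters practically – barely changes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two hypothetical populations whose true means differ by a trivially small amount.
small_a, small_b = rng.normal(50.0, 10.0, 30), rng.normal(50.5, 10.0, 30)
large_a, large_b = rng.normal(50.0, 10.0, 100_000), rng.normal(50.5, 10.0, 100_000)

def cohens_d(a, b):
    """Standardised mean difference (simple pooled SD, equal group sizes)."""
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (b.mean() - a.mean()) / pooled_sd

for label, a, b in [("n = 30 per group", small_a, small_b),
                    ("n = 100,000 per group", large_a, large_b)]:
    p = stats.ttest_ind(a, b).pvalue
    print(f"{label}: p = {p:.4f}, d = {cohens_d(a, b):.2f}")

# Typical output: the small sample is 'not significant', the huge sample is highly
# significant (tiny p), yet in both cases the effect size is trivially small (d ≈ 0.05).
```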

Don’t be daunted

Many lessons will be learnt from the COVID-19 pandemic, but making better use of experts and evidence may be among the most valuable on offer. Decision-makers of all types, from governments to employers, can become savvier consumers of research, so perhaps the first lesson is to try not to be daunted by research.

More specifically, two lines of questioning can take you a long way:

First, basic questions about research design – ‘Is this study a controlled trial?’ and so on – can give you a good handle on how trustworthy a piece of evidence is. The quality of studies – how well the designs were carried out – also needs to be understood, but identifying the study design is a very good start.

Second, if the research is about an intervention or the differences between groups, ask about the effect size – is it small, medium or large? Don’t let yourself be bamboozled by statistical significance, and if researchers don’t have interpretable effect sizes, insist on them.

The CIPD’s Profession Survey 2020 highlights that over two-thirds (67%) of people professionals feel they can use evidence to improve their practice. Your professional opinion carries more weight when it's supported by strong evidence from diverse sources. Our new Profession Map will show you how being evidence-based contributes to making better decisions in any situation.

Keep calm and ask about the evidence.

About the authors

Jake Young, Research Associate, CIPD

Jake joined the CIPD in 2018, having completed a master’s degree in Social Science Research Methods at the University of Nottingham. He also holds an undergraduate degree in Criminology and Sociology.

Jake’s research interests concern aspects of equality, diversity and inclusion, such as inequality, gender and identity in the workplace. Jake is currently involved in the creation of a research project examining the effectiveness of organisational recruitment programmes and their relationship with workplace performance.

Jake leads research on the CIPD Good Work Index programme of work, exploring the key dimensions of job quality in the UK. Jake has also written several CIPD evidence reviews on a variety of organisational topics, including employee engagement, employee resilience and digital work and wellbeing.

Jonny Gifford, Senior Adviser for Organisational Behaviour | Interim Head of Research, CIPD

Jonny’s work centres on conducting applied research in employment and people management, and strengthening links between academia and practice. His research interests include job quality or ‘good work’ and what works in driving employee performance and wellbeing. He leads the CIPD’s work on evidence-based HR and academic knowledge exchange.

Jonny has been conducting applied research in the field of employment and people management for about 20 years, with previous roles at Westminster Business School, the Institute for Employment Studies and Roffey Park Institute. He is an Academic Member of the CIPD, a Fellow of the Center for Evidence-Based Management (CEBMa), Associate Editor at the Journal of Organizational Effectiveness: People and Performance (JOEPP), and a PhD candidate at the Vrije Universiteit Amsterdam.
