What are the next big steps for research on evidence-informed practice and policy making?

Today’s blog is by Andreas Brøgger Jensen, Research Assistant at the Humanomics Research Centre at Aalborg University, Campus Copenhagen, offering his thoughts on the discussions that took place at the “Transform the Use of Research and Evidence in Policy and Practice” event in London, 24-25 September 2018.

The event was, on the one hand, the forming or manifestation of a community. Researchers from different fields, policy professionals and representatives from research foundations all took part in a discussion to define the current state and future of what one might also describe as a field of research in itself, although this was itself a topic of debate. On the other hand, the agenda was set around a consensus that the field of implementation science and the research on research use need to take steps to reach the next level. This led to the formulation of many questions in a well-facilitated debate on future research questions, put in motion by the hosts of the conference. As I heard the discussion, there were roughly two major meta-questions in play.

  • Are those who have taken up the task of accumulating knowledge and conducting research on these topics on the right track or do we need to look in other directions, regarding specific research topics and research methods?
  • Are the concepts and theories that form the base for conducting empirical research adequate or do we need to settle some of the major conceptual and theoretical challenges in the field before more empirical research is carried out?

 

There seemed to be as many ideas on what direction those steps should take as there were participants, but I will here attempt to cluster these diverse and interesting ideas into what I call positions, using the two questions posed above, although these were probably not entirely comprehensive in summing up the many debates. Participants of course do not fall neatly into each position, as the positions represent ideas about research and not the researchers themselves. In the following I outline the main characteristics of the different positions, in order to provide a compact summary of the discussions and some of the arguments presented. This is of course a reduction of what was in fact a much more detailed discussion of specific subfields or aspects of the wider area of research use, but I hope that it can give the reader some overall perspective on the landscape of ideas expressed at the conference.

 

In the first position, I have clustered versions of the idea that there is a need for more conceptual clarity. This takes as its point of departure the assumption that researchers and academics have so far not been on the right track in producing conceptual clarity. Major and fundamental questions were raised about what the object of study is. These included: What is evidence? What is practice/policy? In asking these questions one can also find the implicit assumption that we do not yet have any sound or practical definitions, or any consensus on these issues. Other questions in this position focus on our understanding of the processes. Is there, for instance, such a thing as a gap between research and policy that needs to be bridged? And what should be the role of research knowledge in an ideal decision-making process in a democratic society?

There seems to be a consensus that the fundamental conceptual and theoretical questions still need some work, but the degree to which we need to start over or build on the existing knowledge is what differentiates this position from the following one.

 

The second position also holds the assumption that we need more conceptual clarity, but here I have clustered different versions of the idea that we need to build on the foundations already constructed by, for example, Carol Weiss and other, perhaps more recent, scholars. From this point of view, there still seem to be some details that need clarification, including some value-driven discussions on what norms and standards should be used to evaluate good policy making. Some referred to these as the “ought to be” issues of the field. Following from this are also questions about how we can operationalise the value-driven assumptions and ideas about good policy making into meaningful research questions.

 

This provides a logical bridge to the two other positions formulated from observations during the conference. They both point to more empirical research as the main step to be taken for the field of research use in policy and practice. These positions thereby focus more on building the evidence base on the use of evidence.

 

The third position is a sort of “scale up what we are already doing” regarding empirical research. Some statements during the conference underlined that we perhaps do not need to “overthink things” and should get moving with scaling up existing projects and transferring methods of enquiry between countries in order to conduct better comparative studies of evidence use. This implies some satisfaction with the current state of the concepts, theories and methods in the field. In this position, I also place ideas about applying study designs from one country in another. Interesting large-scale surveys on civil servants’ use of research in Canada (https://www.tandfonline.com/doi/full/10.1016/j.polsoc.2010.03.004) and Australia (https://www.emeraldinsight.com/doi/full/10.1108/IJPSM-04-2014-0056) could probably produce interesting knowledge if applied in other countries as well.

 

The fourth position holds suggestions pointing to a need for different kinds of empirical research. The fundamental idea in this cluster is that different methods may be needed to investigate what is actually going on in the work processes of the people using or translating evidence and using research in policy making. The current widespread use of e.g. surveys and interviews might not give us sufficient insight into what is actually going on in the real world, behind the self-report bias that haunts both these methods. Logically there is also some relation between this position and the first one, pointing to the invention and introduction of new theories and concepts. However, there are many directions for empirical research – such as building on the terminology of Weiss, mentioned above. Promising new possibilities are offered by new methods such as digital information tracking, or big-data analysis of the diffusion of concepts. Traditional anthropological methods, such as systematic field observations, supplementing the existing use of interviews and document studies, could also answer some of the fundamental questions of the field – for instance about the ways research is actually understood and used by policy makers. This might also be a way to learn more about what makes evidence useful to policy makers, even though there might be some epistemological problems with getting into the heads of decision makers and learning what can swing a decision one way or the other, which again relates to ideals about whether evidence should inform decision making or persuade toward specific decisions.

 

One version of different kinds of empirical research could be to take a more interventionist approach. In other areas, such as research on the psychosocial work environment, methods based on e.g. participatory action research principles have long been used, although they have in some areas been in and out of fashion. But has the field of research use, evidence-informed policy making and implementation science exhausted the possibilities of interventionist research approaches? With a clearly defined set of values as a point of departure, one approach might be to design and evaluate interventions that reflect what is already considered good practice, e.g. regarding the co-formulation/co-creation of research questions. Perhaps a solid pre- and post-measure that makes sense in the context could provide some evidence on the effects of such interventions.

 

The pro and con arguments for each position would probably justify several books, so they are not included here. Some considerations are, however, empirical. Systematic literature reviews, such as the one conducted by Oliver et al., published in 2014 (https://bmchealthservres-biomedcentral-com.zorac.aub.aau.dk/articles/10.1186/1472-6963-14-2), can also be instrumental in informing the further debate on future research in the field of evidence use and research uptake. At the Humanomics Research Centre at AAU in Copenhagen, we are currently working on such a literature review on science advice in policy making, to contribute to the identification of knowledge gaps and thereby take part in the debate on the need for future research in that field.

The secret sauce of evidence use

James Georgalakis of the IDS argues: At times the noble cause of evidence-informed policy and practice (EIPP) can almost feel like a competition, as different research disciplines, sectors and experts claim to have discovered the secret sauce of evidence use. The truth is we all have a lot to learn from each other.

Read his reflections from our event here.