
POST PRINT VERSION.
Accepted by Science Communication on 27 August 2012.
*Note – this is a copy of the final draft version submitted on 5 April 2013 after peer
review.
Science Communication (2012) 35(6) 780-809
http://scx.sagepub.com/content/35/6/780
DOI: 10.1177/1075547013491398
Author details:
Corresponding author: Adrian Cherney
*Dr Adrian Cherney
School of Social Science
The University of Queensland
Brisbane, St Lucia 4072
ph + 61 7 3365 3236
fax + 61 7 3365 1544
email: [email protected]
Dr Jenny Povey
Professor Brian Head
Professor Paul Boreham
Michele Ferguson
Institute for Social Science Research
The University of Queensland
Brisbane, St Lucia.
Acknowledgements: This project is supported through ARC Linkage project:
LP100100380.
Research Utilization in the Social Sciences: A Comparison of Five
Academic Disciplines in Australia
Abstract
Social science disciplines generate diverse forms of research utilization, given the
various contexts in which disciplinary knowledge is produced and translated for the
fields of policy and practice. We examine this issue from the perspective of academic
researchers in the social sciences across education, economics, sociology, political
science and psychology. We use survey data from a study of university-based social
science researchers in Australia to examine factors that influence perceptions of the
policy uptake of social research. Our results show that disciplinary and
methodological context matters when it comes to understanding the translation,
dissemination and utilization of academic social research.
Keywords: research utilization, translation, research impact, social science, research
collaborations.
Introduction
The need to improve the dissemination and translation of social research for non-academic audiences, and to increase the impact of academic research, has gained
increasing attention across a range of academic disciplines. These include social work
(Wilkinson, Gallagher and Smith 2012), education (Rickinson, Sebba & Edwards
2011), economics (Banks 2011), sociology (Cherney and McGee 2011), political
science (Rogers 1989), psychology (Meagher, Lyall and Nutley 2008) and public
health (Contandriopoulos et al 2010; Haynes et al 2011; Lavis et al 2003, 2006).
There have been many recent attempts to establish new processes to promote
interaction between social scientists and government and community stakeholders,
such as the Dutch ‘science shops’ concept, which has been supported by the European Commission and has emerged in many countries under various guises (European Commission 2003).
Interest has also been fuelled by government approaches to the assessment of
academic research quality, such as the Excellence in Research Australia (ERA)
initiative and the UK government’s Research Excellence Framework. The impact of
academic research has also been raised in the United States in the context of how the
National Science Foundation funds research and requires information about relevance
for non-science stakeholders (Holbrook 2012; Kamenetzky 2013; Mervis 2011). The
issue has gained public attention following calls by some politicians to reduce social
science research funding due to its perceived lack of relevance compared with medical research (Atran 2013). These issues have coincided with recent attempts in
Australia, Europe and the US to gauge the social and economic impact of university
research (Allen Consulting 2005; Holbrook 2012; Kamenetzky 2013; Macintyre
2010; Mervis 2011; Juhlin, Tang and Molas-Gallart 2012; Smith, Ward and House
2011; Wooding et al 2007). The issue of research impact has also been emphasized in
wider community and industry concerns that academics need to engage more with
end-users, a major criticism being that there is a disjunction between research
conducted by academics and its uptake by public or private sector agencies
(Burkhardt & Schoenfeld 2003; Bogenschneider and Corbett, 2010; Rickinson, Sebba
and Edwards 2011; Lambert 2003; for specific Australian commentary see Macintyre
2010; Ross 2011; Shergold 2011). Better channels for communication and knowledge
translation seem to be essential (Lomas 2000). Closer synergies such as collaborative
research partnerships are seen as a way to enhance the impact of academic social
research (Huberman, 1990; Lavis et al 2006b; Nutley, Walter and Davies 2007; Orr
and Bennett 2012).
It needs to be recognized that different social science disciplines will
potentially generate diverse forms of research use. For instance, research uptake will
be influenced by discipline-based contextual factors, which will shape knowledge
translation activities (Landry, Amara and Lamari 2001b; Levin 2011). These
contextual factors include attitudes among academics about the value of conducting
applied research and about the importance of investing in processes (e.g. partnerships)
that help generate research uptake by end-users. This observation draws attention to
two related issues: which factors appear to influence social research use, and how these factors may vary across social science disciplines. There is little empirical work on this topic internationally. Investigations of research utilization have mainly focused
on case studies of specific policy domains (e.g. Kramer and Wells 2005; Wilkinson,
Gallagher and Smith 2012; Weiss and Bucuvalas 1980) and only a few studies have
examined research impact across different research disciplines (see Landry, Amara
and Lamari 2001b). While case studies on research utilization have been important in
providing insight into the nuances of knowledge translation and impact, their
generalizability to other fields is open to debate (Dunn, Dukes and Cahill 1984;
Landry, Amara and Lamari 2001a, 2001b; Seidal 1981). The absence of studies that
aim to examine research utilization across the social sciences means that our
understanding about disciplinary variations in research translation and uptake is
limited. Such studies would help shed light on the potential practices or processes that
hinder and facilitate research impact and knowledge transfer.
In this paper we examine the issue of research utilization from the perspective
of social science knowledge producers – academic researchers in the social sciences
across education, economics, sociology, political science and psychology. We use
survey data from a study of university-based social science researchers in Australia to
examine perceptions of the policy uptake of social research. Our broader aim is to
better understand the factors that facilitate the utilization of social research by non-academic audiences – whom we have termed end-users[i]. Importantly, we advance understanding in the field of knowledge transfer and use by examining how research utilization varies across social science disciplines, accounting for any major
differences. Our findings also have wider implications for evaluating research impact
in the social sciences as well as broader lessons about how research is communicated
to end-users to improve research uptake.
Background and literature review
Conceptualising Research Utilization
Understanding the impact of social science research has been a primary focus of
research utilization scholarship which, since the 1970s, has examined the factors and
circumstances which support or undermine the uptake of social research by policy
decision-makers and practitioners (Caplan 1979; Larson 1980; Lester and Wilds
1990; Rich 1997; Weiss 1980; Weiss and Bucuvalas 1980). It is a field not only
concerned with the behaviours and decisions of those who consume social research
but also with investigating how the circumstances and activities of those who produce
social research (e.g. academics, research institutes or private think tanks) influence
processes of knowledge transfer and uptake (Cherney et al 2012a, 2012b; Florio and
Demartini 1993; Haynes et al 2011; Landry, Amara and Lamari 2001b; Lavis et al
2003; Weber 1986; Weiss and Bucuvalas 1980).
When it comes to measuring research utilization, no single conceptual model
has gained unanimous approval (Belkhodja 2012; Rich 1997). One reason for this is
the methodological problem associated with specifying the dependent variable of
research use, given that it can be defined either as a process or an outcome (Rich
1997). From one perspective, research utilization can be viewed as a definitive endpoint where research has a direct impact on policy or practice. This is often referred to
as the problem-solving model, with research seen as inducing changes in policy
decision-making (Belkhodja 2012; Weiss 1980). However, this view has been
criticized as misplaced because it ignores the non-instrumental forms of research
impact, such as conceptual or symbolic forms of utilization, which involve research
being used to change understanding (i.e. conceptual use) or to confirm and promote
pre-existing policy directions or commitments (i.e. symbolic use) (Amara, Ouimet
and Landry 2004; Belkhodja 2012; Lavis et al 2003; Weiss 1980). It has also been
argued that research utilization rarely follows a linear path from academic knowledge
producers to end-users in fields of policy and practice and hence uptake of research
evidence may be more diffuse (Juhlin, Tang and Molas-Gallart 2012; Weiss, 1980).
In this study we measure research utilization by adopting a stages or process
model, replicating a modified version of the Knott and Wildavsky (1980) research-use
(RU) scale, similar to that used in the study by Landry, Amara and Lamari (2001a,
2001b). This model comprises six stages – transmission, cognition, reference, effort, influence and application – and Table 1 provides the descriptions for each stage as
presented in our questionnaire to Australian social scientists. The RU scale
characterizes research use as both a process and an outcome, in the sense that
cognition builds on transmission, reference builds on cognition, effort on reference,
influence on effort, and application on influence. The different stages encompass a
range of outcomes and reflect an increasing level of knowledge absorption by end-users (Belkhodja 2012). The RU scale provides the ability to measure the significance of factors that have a bearing on research use, and has been shown to be a
reliable scale (Belkhodja 2012; Cherney and McGee 2011; Cherney et al 2012a, 2012b; Landry, Amara and Lamari 2001a, 2001b; Lester and Wilds 1990; Lester 1993).
INSERT TABLE 1 HERE
Factors Influencing Research Use
Just as there is no agreed conceptual model relating to research utilization, there is no
definitive list of variables developed to help predict knowledge use (Lester, 1993).
Singular and multiple perspectives have been proposed. These include, for example,
the science-push perspective, which explains advances in research utilization as
largely generated by the actions of knowledge producers and the types of products
they produce (Belkhodja 2012; Landry, Amara and Lamari 2001a). Alternatively
Belkhodja et al (2007) adopt an organizational perspective, focusing on multiple
contextual variables to explain knowledge utilization that encompass organizational
interests, the needs and behaviors of researchers and end-users, and levels of
interaction between researchers and users (also see Belkhodja 2012).
Taking account of various individual and contextual factors, the variables that influence the utilization of academic social research can be grouped under four broad headings relating to the researchers’ context, the users’ context, the chosen dissemination activities, and the interactions between academic researchers and potential end-users.
The researchers’ context relates to a mix of supply-side variables that influence
research production. This includes the academic role or position of researchers, such
as whether they are in a research-only or a teaching and research position; the types of
outputs such as quantitative and qualitative studies; whether research is focused on
non-academic users; the importance of different funding sources; success at securing
external research grants; and the institutional drivers that influence the motivation to
collaborate with external partners (rewards for collaborative research) (Bogenschneider and Corbett 2010; Contandriopoulos et al 2010; Cherney et al 2012a, 2012b; Landry, Amara and Lamari 2001a, 2001b; Jacobson, Butterill and Goering 2004). Disciplinary backgrounds can also be extremely important in influencing
research utilization, because these can shape behaviours and views about
dissemination and engagement with end-users – matters relevant to the culture of
knowledge production within particular research disciplines and the forms of
interaction and methods of communication adopted (Bogenschneider and Corbett
2010).
The factors related to end-user contexts encompass judgements on the part of
policy-makers and practitioners relating to the value placed on the quality of research
evidence, its perceived relevance, and the political and economic feasibility of
adopting research findings. Skills to interpret and apply research findings also matter,
as does the level of access to research products such as reports and journals. Added to
this are organisational processes such as the value policy-makers and practitioners
place on research evidence. Such factors influence the overall demand for academic
research within end-user contexts (Belkhodja et al., 2007; Belkhodja 2012;
Contandriopoulos et al 2010; Nutley, Walter and Davies 2007; Ouimet et al 2009).
Dissemination variables relate to efforts by researchers to tailor research
products (e.g. reports) for the needs of users, and to develop communication strategies
targeting particular non-academic audiences (Cherney et al 2012b; Huberman 1990;
Mitton et al 2007).
The basic argument is that the more researchers invest in dissemination, the more likely research-based knowledge will be adopted (Cherney and McGee 2011; Mitton et al 2007). This includes holding meetings to discuss the scope and results of research projects with specific users or partners, and targeting particular forums, such as those where academics report on their research to government committees.
Finally, interaction variables focus on the intensity of the linkages between
knowledge producers and potential users or beneficiaries of research. Interactions
generally help the process of dissemination but are usually based on informal personal
contacts and networks between researchers and end-users. The argument is that the more intensive these linkages are, the more likely research uptake will occur
(Huberman 1990; Landry, Amara and Lamari 2001a; Lomas 2000; Mitton et al 2007).
Research Design and Methodology
The data used in this research were drawn from a broader study examining evidence-based policy and practice. The project involves four phases: (1) a targeted survey of
Australian social scientists; (2) a targeted survey of policy personnel; (3) interviews
with a selection of academic respondents; and (4) interviews with policy personnel.
Results reported in this paper are drawn from the phase 1 survey. The academic
survey was partially based on existing items or scales (Bogenschneider and Corbett,
2010; Landry, Amara and Lamari, 2001a, 2001b) but with additional items included
to gauge the dynamics of research partnerships. Questions were framed around a
number of themes relating to seniority, research discipline, academic position, grant
success, main orientation of the respondent’s research, experience of working with
external partners, methods of dissemination, perceived barriers to research transfer to
end-users, benefits resulting from collaborations with external partners, the challenges
of research partnerships, and the use and impact of the research produced by
respondents.
The survey was first piloted among Fellows of the Academy of the Social
Sciences in Australia (ASSA) in September-October 2010[ii]. Eighty-one surveys were
completed, with a response rate of about 17 per cent. There were no significant
changes to the survey following the pilot, apart from editing some lead-in questions to make them clearer[iii]. No scales in the survey were changed. For the main survey, a
database was established of Australian academics who had secured at least one
Australian Research Council (ARC) grant (known as Discovery or Linkage grants[iv])
between 2001 and 2010 within the field of social and behavioral science[v]. The
selection of relevant disciplines from which respondents were sampled was based
upon the ‘field of research’ codes used by the ARC to categorise the funded projects,
and comprised codes relating to anthropology, criminology and law enforcement,
human geography, political science, policy and administration, demography, social
work, sociology, other studies in human society, psychology, education and
economics. Using this database, a web link to the survey was sent via email to 1,950
academic researchers between November 2010 and February 2011. The same
reminder email was sent twice during this period and the survey closed in May 2011.
A total of 612 completed surveys were received, which constitutes a response rate of
32 per cent. When the main academic survey was combined with the ASSA pilot, the
final total was 693 responses (see also Cherney et al 2012b). In this paper we have
drawn on results from the same questions used in the pilot and main survey. This final
sample included respondents from the following main disciplines: education,
economics, sociology, political science, and psychology. These disciplines comprised
the five largest discipline clusters in our sample, and form the primary basis of our analysis. The remaining disciplines have been grouped as ‘other’ (see Table 2)[vi].
INSERT TABLE 2 HERE
Dependent variable
Research utilization was measured using a modified version of the Knott and
Wildavsky (1980) research use scale, which comprises six stages: transmission,
cognition, reference, effort, influence, and application. For each of these six stages,
respondents were asked to estimate what had become of their research using a 5-point
scale ranging from 1 (never), 2 (rarely), 3 (sometimes), 4 (usually), to 5 (always).
Previous research (Cherney et al 2012a) has shown that ‘failure’ in one stage
does not preclude academic researchers from progressing to other stages. Data reported in Table 3, illustrating the proportion of academics who pass or fail at each stage of research utilization for each discipline, indicate that academic social researchers do not have to traverse each rung of the research utilization ladder in sequence to reach the ultimate stage, namely, the substantive application of research findings by end-users. This finding tends to support arguments
relating to the non-linear nature of research transmission, uptake and use. Table 3
illustrates that academic researchers, particularly in political science and psychology,
perceive that the uptake of academic research declines in the effort, influence and
application stages. In other words there was a decline in the perceived level of
influence of academic research during the process of research utilization by external
agencies. Later stages of the RU scale (effort, influence, application – see Table 1)
can be particularly challenging for academics to influence directly, given that
decisions by policy-makers or practitioners to adopt or apply research evidence can be
determined by factors (e.g. political considerations) over which academics have little
control.
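The stage-by-stage pass/fail logic can be sketched in code. The exact cut-off used to classify a stage as 'passed' is given in the endnotes rather than reproduced here, so this illustration assumes a rating of 'sometimes' (3) or higher counts as a pass; the respondent's ratings are hypothetical:

```python
# Hypothetical ratings by one respondent for the six RU stages (1-5 scale)
ratings = {"transmission": 4, "cognition": 4, "reference": 3,
           "effort": 2, "influence": 3, "application": 2}

PASS_CUTOFF = 3  # assumed cut-off: 'sometimes' (3) or more often

passed = {stage: score >= PASS_CUTOFF for stage, score in ratings.items()}

# The stages need not be traversed in sequence: this respondent 'fails'
# effort yet still reports influence, consistent with non-linear uptake.
```

The point of the sketch is the last line: a failure at one rung of the ladder does not preclude a pass at a later rung.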
INSERT TABLE 3 HERE
Figure 1 explores the proportion of academic researchers who cumulatively did not
pass all six stages of the research utilization scale[vii]. The disciplines of political
science and psychology have the highest proportion of researchers who did not
perceive their research as being adopted by end-users, whereas the field of education had the highest proportion of academics reporting significant utilization by end-users.
A factor analysis of the items (i.e. the six stages of utilization – see Table 1) revealed a one-factor solution and a Cronbach’s alpha coefficient of 0.91 (see Table 4). This result indicates that the items measure a single construct, and it was therefore decided to use the RU scale as an index of research use. A mean index score was calculated across all six stages; the mean score for the research utilization index is presented in Table 5.
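The scale-reliability check and index construction can be sketched as follows; the respondent data below are hypothetical stand-ins for the survey responses, with only the standard formula for Cronbach's alpha taken as given:

```python
from statistics import variance

def cronbach_alpha(rows):
    """Cronbach's alpha for a list of per-respondent item-score lists."""
    k = len(rows[0])
    cols = list(zip(*rows))                        # one column per item
    item_var = sum(variance(col) for col in cols)  # sum of item variances
    total_var = variance([sum(r) for r in rows])   # variance of summed scale
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical 1-5 ratings across the six RU stages for five respondents
ratings = [
    [4, 4, 3, 3, 3, 2],
    [5, 5, 4, 4, 3, 3],
    [2, 2, 2, 1, 1, 1],
    [3, 4, 3, 3, 2, 2],
    [5, 4, 4, 3, 3, 3],
]

alpha = cronbach_alpha(ratings)
# Mean RU index score per respondent, as used for the index in Table 5
index = [sum(r) / len(r) for r in ratings]
```

A high alpha (here the items co-vary strongly, so alpha is close to 1) is what licenses collapsing the six stages into a single mean index score per respondent.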
INSERT FIGURE 1 HERE
Independent variables
A number of indices were created and included in our model as independent
variables. The items used in each index were determined by factor analyses, with each
index comprising a 1-factor solution. The Cronbach’s alpha coefficients for these
independent variables are presented in Table 4 and detailed descriptions of index
compositions are presented in Appendix 1.
INSERT TABLE 4 HERE
Descriptive statistics for each independent variable are presented in Table 5. The
disciplines of education and sociology had the highest research utilization index
score, while political science had the lowest. A Bonferroni test of significance
indicated that there were significant differences between the mean research utilization
index scores of political science academics and those academics from education and
sociology. A higher proportion of education, sociology, and political science
academics reported frequent use of qualitative approaches, whilst economics and psychology academics more frequently used quantitative approaches. A smaller proportion of education and political science academics were in research-only positions compared to the other three disciplines. On average, academic researchers from our
psychology and sociology sample had won a higher number of external grants
compared to the other disciplines. Academic researchers from the education discipline
accorded higher importance to dissemination activities such as tailoring research and
dissemination activities to end-users, compared to the other four disciplines.
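The Bonferroni-adjusted pairwise comparison of discipline means can be sketched as follows. The group scores below are simulated stand-ins (the real means are in Table 5), and the p-value uses Welch's t statistic with a normal approximation to its null distribution, which is reasonable at sample sizes like these:

```python
import math
import random
from itertools import combinations
from statistics import mean, variance

def welch_p(a, b):
    """Two-sided p-value for a difference in means: Welch's t statistic
    with a normal approximation to its null distribution."""
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    t = (mean(a) - mean(b)) / se
    return math.erfc(abs(t) / math.sqrt(2))

rng = random.Random(1)
# Simulated RU index scores per discipline (illustrative values only)
groups = {
    "education": [rng.gauss(3.2, 0.7) for _ in range(120)],
    "sociology": [rng.gauss(3.1, 0.7) for _ in range(90)],
    "political science": [rng.gauss(2.6, 0.7) for _ in range(80)],
}

pairs = list(combinations(groups, 2))
# Bonferroni correction: multiply each raw p-value by the number of tests
bonferroni = {(a, b): min(welch_p(groups[a], groups[b]) * len(pairs), 1.0)
              for a, b in pairs}
```

Multiplying each raw p-value by the number of comparisons is the Bonferroni adjustment; only differences that survive it (here, education versus political science) are treated as significant.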
INSERT TABLE 5 HERE
Regression Analysis – Factors Influencing Utilization
Given that our dependent variable is approximately continuous, a multiple linear
regression model was used to estimate the associations between research utilization
(our dependent variable) and a number of explanatory variables, such as benefits and
consequences associated with engaging in research with policy-makers and
practitioners for each of the five disciplines. As a preliminary check, we examined the
correlations between all variables in the model for each discipline separately. The
correlations suggested that multicollinearity was unlikely to be a problem. This was
confirmed by a relatively low value of the mean Variance Inflation Factor (VIF) for
each discipline.
Regression Results
The regression results are presented in Table 6. The results indicate that, for each
discipline, a different cluster of variables predicts reported levels of research
utilization. The discipline of education had the highest number of predictors; four of
the variables were positively and significantly related to the reported uptake and use
of research produced by academics in the field of education. These variables were: the perceived benefits of collaborative research; the importance of tailoring research for end-users; the importance of using contacts, seminars and sending reports to policy-makers and practitioners; and the number of external grants. The results also show that having to invest heavily in research partnerships had a significant negative relationship with the reported uptake and use of research. In other words, uncomplicated partnership processes were seen to improve the likelihood of research uptake by educational researchers. The other four disciplines had only two to three
variables that were significantly associated with research utilization, and, unlike the education sample, no common predictor or pattern emerged across them. Nevertheless, there were some noteworthy results pertaining to
these other disciplines. For both the disciplines of economics and sociology, holding a
teaching and research position was negatively related to the reported use of academic
research by policy-makers and practitioners, as was a reliance on qualitative research
methods within political science. The emphasis placed on the useability of research
was strongly associated with reported levels of research use among political scientists.
Finally, academic respondents in the field of economics indicated that the more
policy-makers or practitioners were seen as prioritizing the ‘feasibility’ of research
(i.e. policy-makers or practitioners place greater emphasis on research being
economically and politically feasible) the less likely were these academics to report
that social research use would occur.
INSERT TABLE 6 HERE
Interpretation of Results and Discussion
Why do the results for researchers in the field of education differ so markedly from those for the other four research disciplines? One possibility
relates to the overall applied nature of educational research, which favours and
promotes engagement with end-users in the government and school sectors. In
Australia and elsewhere there is a long history of research partnerships between
academic educational researchers, policy-makers, and practitioners such as teachers
and school principals (Department of Education Training and Youth Affairs 2000;
Bransford et al 2009; Levin and Edelstein 2010; Saha, Biddle and Anderson 1995;
Vanderlinde and van Braak 2010). This orientation can be particularly powerful in
socializing academics in the field of education to be mindful of end-user perspectives
and needs compared with some other fields of social science. This orientation is
possibly reflected in the types of external grants they obtain, which may increase their
chance of generating research use. Among our education sample the perceived
importance of tailoring research for end-users and its relationship to research
utilization reflects awareness that targeted forms of dissemination are needed to
improve research absorption among non-academic audiences. The same can be said
for the significant relationship found between reported levels of research use and
reported involvement in activities focused on interactions with users (i.e. informal
contacts, seminars and workshops organized by end-users or sending reports to endusers). Improved forms of interaction help decrease the gap between research
produced by academics and users in fields of policy and practice. Ways of reducing
the research-policy-practice gap and enhancing the relevance, translation and uptake
of academic research have been intensively investigated and debated in the field of
education (Burkhardt & Schoenfeld 2003; Levin 2011; Vanderlinde and van Braak
2010).
The above interpretation of educational research does not mean that
understandings about research translation and end-user needs are lacking in other
social science disciplines, such as sociology, psychology, economics and political
science. Results relating to psychology, political science and economics did point to
an appreciation among academic researchers in these disciplines that end-user
engagement and contexts (e.g. the importance of meetings, and the usability and
feasibility of research) have a bearing on levels of research use. The useability item
related to issues of clear communication and timeliness of research results (see Appendix 1). The feasibility variable comprised three items: research recommendations are seen as economically feasible, research recommendations are seen as politically feasible, and research findings support a current position or practice (see Appendix 1). When feasibility is
seen as a strong priority for users, our respondents in economics reported that their
academic research is less likely to be used. These findings point to a key lesson about research quality: it is not necessarily the chief priority driving research use, nor the single most important factor determining uptake – contacts, communication and timeliness also matter.
The general pattern of results and the variations we found among the sample
point to the potential role that disciplinary processes play in generating research
utilization and how they may shape both the behaviors and attitudes of academic
researchers. This is perhaps more strongly the case for our education sample. Such
disciplinary orientations were also reflected in our descriptive statistics relating to the
types of research methods respondents used, with academics in the fields of
economics and psychology reporting they more frequently use quantitative methods
compared to respondents in sociology, political science and education. Economics
and psychology have generally been dominated by a quantitative orientation, such as
econometrics in the case of economics and experimental lab studies in the case of
psychology. This has strongly influenced the focus of research arising from both
fields (Sowey 2002; Griffin and Phoenix 1994). One result worthy of comment is that researchers in economics and sociology who occupy teaching and
research positions are less likely to report success in generating research utilization
than those in research-only positions. Differences in how occupational profiles may
influence research uptake in the social sciences have been found in previous studies
(see Fox and Milbourne 1999 on research outputs among academic economists). This finding draws attention to the probability that academics within
some disciplines have different capacities to devote time and resources to generate
research outputs that go beyond traditional academic outlets, and perhaps have
different experiences in undertaking research for or with industry partners.
Understanding how academics develop these broader capacities to influence and
engage policy-makers and practitioners effectively would be a fruitful avenue for
future studies.
What are the broader lessons arising from this study for how research is
communicated to end-users to improve research uptake? One is that levels of
engagement with end-users and investment in dissemination matter a great deal. Both
engagement and dissemination are related to the process of research translation - i.e.
the conversion of research findings into forms suitable for their use in solving a
problem of a practical nature (Boyd and Menlo 1984: 60). When engagement between
academic researchers and policy-makers or practitioners occurs, researchers are better
able to learn about the needs of potential end-users and thus tailor (disseminate) their
products more effectively. This often requires researchers to communicate research
findings that meet tight timeframes, and that meet the idiosyncratic information needs
of end-users (Bogenschneider and Corbett, 2010; Cherney et al 2012b). This is not a
skill that comes easily to academic researchers, whose ways of writing, training in scientific methods, and interest in theoretical and esoteric issues can make it hard for them to communicate in a manner that appeals to the action-orientated and
pragmatic concerns of non-academic end-users (Caplan 1979; Dunn 1980).
While our results show that engagement and dissemination are important, the
ways they actually occur and the forms of interactions and methods of communication
adopted will matter a great deal. For instance, it has been noted by a number of
scholars that closer engagement through research collaborations between academic researchers and end-users will not guarantee improved research uptake, given the
various political, individual and organizational variables that influence decisions to
use academic social research by non-academic end-users (Amara, Ouimet and Landry
2004; Belkhodja 2012; Belkhodja et al 2007; Bogenschneider and Corbett, 2010;
Florio and Demartini 1993; Landry, Lamari and Amara 2003; Oh and Rich 1996;
Ouimet et al 2009; Weber 1986; Weiss and Bucuvalas 1980). Different interests,
ideologies and priorities mean that engagement between academics and industry can
be a difficult undertaking because motivations for producing and using research may
not be the same. This means that engagement in the form of research collaborations
between academic researchers and potential end-users would need to be carefully
managed and expectations and outcomes clarified (Bammer 2008). Understanding the
models of engagement that work best is an important area for future research given
that it would provide insight into how academics and policy-makers and practitioners
can best manage interactions and communication between them so as to achieve
mutual outcomes.
Just as there is no one-size-fits-all approach to engagement between academics
and end-users, the same applies to the other component of knowledge
translation - dissemination. Written forms of dissemination alone will not generate
research use (Bogenschneider and Corbett, 2010; Boyd and Menlo 1984; Friedman
and Farag 1991; Kramer and Wells 2005; Nutley, Walter and Davies 2007; Rogers
1983). Formatting, style and mode of delivery matter a great deal, as does the context
in which research evidence is communicated. Jargon-free summaries, toolkits
and guides that spell out practical implications, and workshops between researchers
and end-users that focus on identifying the most applicable format and content of
research products are all central to effective dissemination (Bogenschneider and Corbett,
2010). It is possible that members of certain social science disciplines understand this
process much better than do others.
Conclusion
Before concluding it should be noted that relying on the self-reporting of academics
about the uptake of their research does have limitations. Respondents were requested
to make judgments about processes they may not have directly observed, particularly
in regard to the choices and actions of end-users concerning the higher stages of the
research utilization scale. There is also the possibility that the data reflect a social
desirability bias: respondents' judgments about the utilization of their research may
be influenced by how practically relevant they perceive that research to be, and such
perceptions can be inflated. We have not examined
the project reference points (e.g. specific project contexts) that underpin why
respondents believed their research had an impact, or why they encountered problems
in partnership contexts, as this assessment is more suited to qualitative methods.
Qualitative data would prove useful for understanding the dynamics of particular
collaborations between academic researchers, policy-makers and practitioners which
would provide insight into factors that help predict whether particular types of
research partnerships succeed or fail. Also, we did not explore variations by
sub-discipline within our main five disciplines.viii Each discipline incorporates
sub-disciplines, which can place different emphases on applied research. The type of
research, the degree of technicality, theoretical abstraction and practicality of results
can lead to different degrees of research utilization within particular social science
disciplines.ix Exploring these factors would give insight into the overall influence of
certain research fields on areas of policy and practice.
Our results show that context matters when it comes to understanding the
utilization of academic social research. The variation in factors that influenced
reported levels of research utilization across our sample in the disciplines of
education, economics, sociology, political science and psychology highlights
contextual factors (also see Belkhodja et al 2007 as it relates to policy-makers' use of
research). The more consistent pattern observed in our education sample, compared to
the other disciplines, was explained by reference to the orientation of educational
research and the particular culture of engagement that characterizes research in the
field of education. This is not to claim that other social science disciplines lack an
appreciation of engagement and end-user needs or lack the capacity to partner with
policy-makers or practitioners. This is clearly not the case when it comes to such
fields as economics and political science, where researchers might engage more with
international and national agencies compared to engagement with local partners (such
as schools or local authorities) – a pattern more common with research collaboration
in education. It is possible that models of research translation and collaborative
partnerships with end-users are less well developed or institutionalized across some
social science fields. We have noted evidence of variations in the emphasis placed on
enhancing the uptake of research across the social sciences. Moreover, engaging end-users and undertaking collaborative research with external partners is not a central
theme in mainstream academic training. Understanding how both these factors
operate across the various social sciences would help identify ways to improve
research impact.
Our results provide lessons for academics interested in enhancing the
utilization of the research they produce and in better communicating their findings.
The results indicate that there is a premium on investing in knowledge
translation activities, on directly engaging with end-users through meetings and
dissemination processes, and on tailoring research projects and findings to end-user
needs. There is, of course, a risk that too close a relationship with industry
partners can compromise the research that academics produce. Nonetheless,
academics who wish to see their research utilized by end-users should not
confine their efforts to the passive transmission of research, for example by relying
upon traditional academic journals for dissemination. However, a significant problem is
that university research assessment exercises in some countries (such as the
Australian ERA, the UK REF, and other similar research evaluation initiatives - see
Donovan 2011), which currently focus primarily on research quality indicators (e.g.
journal rankings, impact factors and citation counts), can deter academics from
investing in alternative outlets beyond standard academic publications such as
refereed journals. This is because alternative outputs focused on knowledge
translation, such as government reports or practitioner and industry journals, may not
be counted or valued in the assessment framework (Elton 2000; Martin 2011;
Nightingale & Scott 2007; Shergold 2011).
Finally, our analysis shows that when it comes to measuring and demonstrating
research impact, no single model of evaluation will be adequate. Levels of impact
will vary across academic disciplines and across problem-types, and there are reasons
for this. A one-size-fits-all approach will not adequately measure either the quality or
the social and economic impact of academic social research. While it is quite
legitimate for governments to examine whether they are getting value from the
research they fund through taxpayers' dollars, one fruitful area of investment would be
to improve incentives and capacities to translate research for non-academic end-users
and to provide opportunities for academic researchers to engage with industry
partners. The results reported in this paper show some of the ways this can be
improved.
Appendix I: Independent variable measures
Researchers’ Context
Quantitative studies
The quantitative research approach is a single item variable
that reflects how often researchers use a quantitative
approach such as survey research, statistical analysis, and
GIS in their research. The results reported are the
percentage of respondents who indicated always or usually.
These responses were recorded as 1, while all other
responses were recorded as 0.
Qualitative studies
The qualitative research approach is a single item variable
that reflects how often researchers use a qualitative
approach such as interviews, focus groups, ethnography,
and observation in their research. The results reported are
the percentage of respondents who indicated always or
usually. These responses were recorded as 1, while all other
responses were recorded as 0.
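The dichotomization used for both of these variables can be sketched as follows (an illustrative example only; the response labels and variable names are hypothetical stand-ins for the survey's actual coding frame):

```python
# Hypothetical responses to the item asking how often a researcher uses a
# quantitative (or qualitative) approach; labels are illustrative only.
responses = ["always", "usually", "sometimes", "rarely", "never", "usually"]

# Recode as described in Appendix I: "always" or "usually" become 1,
# all other responses become 0.
recoded = [1 if r in ("always", "usually") else 0 for r in responses]

print(recoded)  # -> [1, 1, 0, 0, 0, 1]
```

The percentages reported in the results are then simply the share of respondents coded 1.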
Benefits of
collaborative research
This Index is based on academic perceptions of the benefits
of carrying out research in collaboration with government,
industry or community sector partners. This index is
comprised of ten dimensions measured on a 6-point scale,
ranging from 0 (not applicable), 1 (strongly disagree) to 5
(strongly agree). The ten dimensions are: (1) I have been
able to use data that would otherwise be difficult to access;
(2) Research partnerships have provided me with
opportunities for my research to have an impact on policy
and practice; (3) Research partnerships have helped to
increase my industry contacts; (4) My industry contacts
have helped with developing future research projects; (5)
Research partnerships enable me to generate extra income
for my work unit; (6) Such projects have provided me
opportunities to commercialise research outcomes; (7)
Research partnerships have helped me with career
advancement; (8) Such projects have required me to be
pragmatic and realistic in relation to research outcomes for
industry partners (9) Research partnerships have enabled
me to publish in a broad range of publication outlets (10) I
find projects with external partners more satisfying than
fundamental “blue sky” research.
Consequences of
investing in research
partnerships
This index is based on problems relating to investing time
and resources and accommodating partnership work that
academic researchers encounter when carrying out research
with partners from government, industry or the community
sector. This index is comprised of ten items measured on a
6-point scale, ranging from 0 (not applicable), 1 (strongly
disagree) to 5 (strongly agree). The ten dimensions are: (1)
There are inadequate university resources to support
research partnerships with end-users; (2) I find there are
different research orientations between academics and
external partners; (3) You need to invest a lot of time in
coordinating the work between different partners; (4)
Confidentiality requirements often restrict what you can
report and publish; (5) You can lose ownership of
intellectual property; (6) You are subject to delays that
impede your ability to publish results in a timely manner;
(7) I am under pressure from my work unit to undertake
contract research to meet budget requirements; (8) External
partners do not appreciate the full costs of research; (9) The
ethics process can be time consuming and cumbersome;
(10) The complexity of contractual arrangements can lead
to delays in commencing research.
Research Time
This is a dummy variable created from the question asking
academics to indicate the nature of their position, either
research and teaching or research only. The research only
was used as the reference group.
Users’ Context
End-users prioritise
high quality research
This index is based on academic researchers' perceptions of
what research characteristics end-users prioritise when
using academically produced social science research. This
index is comprised of seven dimensions measured on a 5-point scale, ranging from 1 (not a priority) to 5 (high priority).
The seven dimensions are: (1) high quality research; (2)
unbiased findings; (3) adds to theoretical knowledge; (4)
statistical analysis is high quality; (5) findings can be
generalised; (6) offers a new way of thinking; and (7)
reputation of researcher.
End-users prioritise the
useability of the
research
This index is based on academic researchers' perceptions of
what research characteristics end-users prioritise when
using academically produced social science research. This
index is comprised of four dimensions measured on a 5-point scale, ranging from 1 (not a priority) to 5 (high priority).
The four dimensions are: (1) findings available when
decisions need to be made; (2) findings have direct
implications for policy & practice; (3) findings written in a
clear style; and (4) report has brief summary of findings.
End-users prioritise the
feasibility of the
research
This index is based on academic researchers' perceptions of
what research characteristics end-users prioritise when
using academically produced social science research. This
index is comprised of three dimensions measured on a 5-point scale, ranging from 1 (not a priority) to 5 (high priority).
The three dimensions are: (1) recommendations are
economically feasible; (2) findings support a current
position & practice; and (3) recommendations are
politically feasible.
Dissemination
Importance of tailoring
research when end-users are the focus
This index is based on the importance attributed to various
aspects of tailoring research when the focus is on end-users.
This index is comprised of seven dimensions measured on
a 6-point scale, ranging from 0 (does not apply),
1 (very unimportant) to 5 (very important). The seven
dimensions are: (1) readability and ease of comprehension of
my reports and research articles; (2) specific, operational
nature of conclusions or recommendations; (3) provision of
data that can be analysed by end-users; (4) sensitivity to
end-users’ expectations; (5) presentation of reports
(graphics, colour, packaging); (6) on-time presentation of
research findings to end-users; (7) attention to
‘deliverables’.
Importance of meetings
& dissemination
activities with end-users
This index is based on the importance attributed to
organising meetings and dissemination activities for end-users when carrying out research. This index is comprised
of four dimensions measured on a 6-point scale, ranging
from 0 (does not apply), 1 (very unimportant) to 5 (very
important). The four dimensions are: (1) preparing and
conducting meetings in order to plan the subject and scope
of projects with end users; (2) regular formal meetings to
report on a study’s progress with end-users; (3) formal
meetings to discuss findings with end-users; (4) preparing
and implementing research dissemination activities for end-users.
Interactions
Importance of using
contacts, seminars and
sending reports to
policy-makers and
practitioners
This index is based on the importance attributed to using
methods such as informal contacts, seminars and reports for
presenting research to policy-makers and public
practitioners. This index is based on six items measured on
a 6-point scale, ranging from 0 (does not apply), 1 (very
unimportant) to 5 (very important). The six items are: (1)
informal contacts with policy personnel of government
agencies; (2) informal contacts with public or community
sector practitioners; (3) participation in seminars and
workshops organised by government policy agencies; (4)
participation in seminars and workshops organised by
practitioners within public or community sectors; (5)
sending reports to government policy agencies; (6) sending
reports to practitioners within public or community sectors.
Number of external
grants
This variable is the sum of all the research grants (i.e. ARC
discovery, ARC linkage, other external competitive grants)
academics have received.
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the
research, authorship, and/or publication of this article.
Role of the funding source: This project was financially supported through the
Australian Research Council Linkage project LP100100380. This project has received
cash and in-kind support from the following industry partners: Australian Productivity
Commission; Australian Bureau of Statistics; Queensland Health; Queensland Dept of
Communities; Queensland Dept of Employment; Queensland Dept of Premier and
Cabinet; Victorian Dept of Planning and Community Development; Victorian Dept of
Education & Early Childhood; and the Victorian Dept of Human Services.
References
Allen Consulting (2005). Measuring the impact of publicly funded research.
Department of Education, Science and Training, Canberra.
Amara, N., Ouimet, M. and Landry, R. (2004) 'New Evidence on Instrumental,
Conceptual, and Symbolic Utilization of University Research in Government
Agencies'. Science Communication, 26 (1): 75-106.
Atran, S. (2013) Social Warfare. Foreign Policy blog, 15 March:
http://www.foreignpolicy.com/articles/2013/03/15/social_warfare_budget_republicans
Banks, G. (2011). Economics, economists and public policy in Australia. Opening
address to the 40th Australian Conference of Economists Symposium, ‘Does
Australian public policy get the economics it deserves?’, 14 July 2011,
Canberra.
Bogenschneider, K. and Corbett, T. J. (2010). Evidence-Based Policy Making: Insights
from Policy Minded Researchers and Research-Minded Policymakers. New
York, Routledge.
Belkhodja, O. (2012). Toward a Revisited Organisational and Social Perspective of
Knowledge Utilization: Contributions from Organisational Theory. The
International Journal of Knowledge, Culture and Change Management, 11 (3),
39-58.
Belkhodja, O., Amara, N., Landry, R., Ouimet, M., 2007. The extent and
organizational determinants of research utilization in Canadian health
services organizations. Science Communication 28(3), 377-417.
Boyd, R.D. and Menlo, A. (1984). Solving problems of practice in education: A
prescriptive model for the use of scientific information. Science
Communication, 6 (1), 59-74.
Burkhardt, H. and Schoenfeld, A. H. (2003). Improving educational research: Toward
a more useful, more influential, and better funded enterprise. Educational
Researcher, 32 (9), 3-14.
Bransford, J.D., Stipek, D.J., Vye, N.J., Gomez, L.M. and Lam, D. (2009). The Role of
Research in Educational Improvement. Harvard Education Press, Cambridge.
Caplan, N. (1979) ‘The Two-Communities Theory and Knowledge Utilization’.
American Behavioral Scientist 22 (3), 459-470.
Cherney, A. and McGee, T. R. (2011). Utilization of social science research: Results of
a pilot study among Australian sociologists and criminologists. Journal of
Sociology, 47(2), 144-162.
Cherney, A. Povey, J. Head, B. Boreham, P. & Ferguson, M. (2012a) ‘What influences
the utilization of educational research by policy-makers and practitioners?
The perspectives of academic educational researchers’, International Journal
of Educational Research, 56, 23-34.
Cherney, A. Head, B. Boreham, P. Povey, J. & Ferguson, M. (2012b) ‘Perspectives of
academic social scientists on knowledge transfer and research collaborations:
A cross sectional survey of Australian academics’, Evidence and Policy, 8 (4),
433-453.
Contandriopoulos, D., Lemire, M., Denis, J.-L., and Tremblay, E. (2010) 'Knowledge
Exchange Processes in Organizations and Policy Arenas: A Narrative
Systematic Review of the Literature’, The Milbank Quarterly, 88 (4): 444–483.
Department of Education, Training and Youth Affairs (2000). The Impact of
Educational Research. Higher Education Division Department of Education,
Training and Youth Affairs, Canberra.
Donovan, C. (2011) ‘Special issue on the state of the art in assessing research
impact’. Research Evaluation 20 (3), 175-179.
Dunn, W.N. (1980) ‘The Two Communities Metaphor and Models of Knowledge Use:
An Exploratory Case Study’, Science Communication, 1 (4), 515-536.
Dunn, W.N., Dukes, M.J. and Cahill, A.G. (1984) Designing Utilization Research.
Science Communication, 5 (3), 387-404.
Elton, L. (2000). The UK Research Assessment Exercise: Unintended Consequences.
Higher Education Quarterly 54 (3), 274-283.
European Commission (2003) Science Shops: knowledge for the community. EU:
Luxembourg
Florio, E and Demartini, J.R. (1993) The Use of Information by Policymakers at the
Local Community Level. Science Communication 15 (1), 106-123.
Fox, K.J. and Milbourne, R. (1999) What determines research output of academic
economists?, Economic Record, 75 (230), 256-267.
Friedman, M.A. and Farag, Z.E. (1991) Gaps in the Dissemination/Knowledge
Utilization Base. Science Communication, 12 (3), 266-288.
Griffin, C. and Phoenix, A. (1994). The Relationship between Qualitative and
Quantitative Research: Lessons from Feminist Psychology. Journal of
Community & Applied Social Psychology, vol. 4, no. 4, pp. 287-298.
Haynes, A.S., Derrick, G.E., Chapman, S. Redman, S. Hall, W.D. Gillespie, J. and Sturk,
H. (2011). From “our world” to the “real world”: Exploring the views and
behaviour of policy-influential Australian public health researchers. Social
Science & Medicine 72 (7), 1047-1055.
Holbrook, J.B. (2012). Re-assessing the science–society relation: The case of the US
National Science Foundation's broader impacts merit review criterion (1997–
2011). Retrieved from
http://www.scienceofsciencepolicy.net/system/files/attachments/Holbrook_BIC_2.0_final.pdf
Huberman, M. (1990). Linkages between researchers and practitioners: A qualitative
study. American Educational Research Journal, 27(2), 363-391.
Jacobson, N., Butterill, D. and Goering, P. (2004) Organizational Factors that Influence
University-Based Researchers’ Engagement in Knowledge Transfer Activities.
Science Communication 25 (3), 246-259.
Juhlin, M., Tang, P. and Molas-Gallart, J. (2012). Study of the Contribution of Social
Scientists to Government Policy and Practice. Economic and Social Research
Council, London.
Kamenetzky, J.R. (2013) Opportunities for impact: Statistical analysis of the National
Science Foundation’s broader impacts criterion. Science and Public Policy
40(1): 72-84.
Kramer, D.M. and Wells, R.P. (2005) Achieving Buy-In: Building Networks to Facilitate
Knowledge Transfer. Science Communication, 26 (4), 428-444.
Knott, J. and Wildavsky, A. (1980). If dissemination is the solution, what is the
problem? Knowledge: Creation, Diffusion, Utilization, 1(4), 537-578.
Lambert, R. (2003). Business–University Research Collaboration. Report to HM
Treasury, UK.
Landry, R., Amara, N., and Lamari, M. (2001a). Climbing the ladder of research
utilization - Evidence from social science research. Science Communication,
22(4), 396-422.
Landry, R., Amara, N., and Lamari, M. (2001b). Utilization of social science research
knowledge in Canada. Research Policy, 30(2), 333-349.
Landry, R. Lamari, M and Amara, N. (2003) ‘The Extent and Determinants of the
Utilization of University Research in Government Agencies’, Public
Administration Review, 63 (2): 192-205.
Larson, J. (1980) Review Essay: Knowledge Utilization: What Is It? Science
Communication, 1 (3), 421-442.
Lavis, J.N. (2006) Research, Public Policymaking, and Knowledge-Translation
Processes: Canadian Efforts to Build Bridges. The Journal of Continuing
Education in the Health Professions, 26 (1), 37-45.
Lavis, J.N., Robertson, D., Woodside, J.M., McLeod, C.B., Abelson, J. and the
Knowledge Transfer Group (2003). How can research organisations more
effectively transfer research knowledge to decision-makers? Milbank
Quarterly, 81 (2), 221-248.
Lavis, J.N, Davis, H. Oxman, A. Debis, JL, Golden-Biddle, K and Ferlie, E. (2006a)
Towards systematic reviews that inform health care management and policymaking. Journal of Health Services Research and Policy, 10 (Suppl 1), 35-48.
Lavis J.N, Lomas J, Hamid M, Sewankambo NK. (2006b) ‘Assessing country-level
efforts to link research to action’. Bulletin of the World Health Organization;
84 (8): 620-8.
Lester, J.P. (1993). The utilization of policy analysis by state agency officials.
Knowledge: Creation, Diffusion, Utilization, 14(3), 267-290.
Lester, J.P., Wilds, L.J. (1990). The utilization of public policy analysis: a conceptual
framework. Evaluation and Program Planning 13, 313–319.
Levin, B. (2011). Mobilising research knowledge in education. London Review of
Education, 9 (1), 15-26.
Levin, B., & Edelstein, H. (2010). Research, policy and practice in education.
Education Canada, 50(2), 29-30.
Lomas, J., (2000). Using ‘Linkage and Exchange’ to Move Research into Policy at a
Canadian Foundation. Health Affairs 19(3), 236-240.
Macintyre, S. (2010). The Poor Relation: A history of the social sciences in Australia.
Melbourne, Melbourne University Press.
Martin, B.R. (2011) ‘The Research Excellence Framework and the ‘impact agenda’:
are we creating a Frankenstein Monster?'. Research Evaluation 20 (3), 247-254.
Meagher, L. Lyall, C. & Nutley, S. (2008) Flows of knowledge, expertise and influence:
a method for assessing policy and practice impacts from the social science.
Research Evaluation, 17 (3): 163-173.
Mervis, J. (2011) Peer Review: Beyond the data. Science, vol 334, no. 6053, pp. 169-171.
Mitton C, Adair CE, McKenzie E, Patten SB, and Perry W. B. (2007) Knowledge
transfer and exchange: Review and synthesis of the literature. Milbank
Quarterly, 85(4):729-768.
Nightingale, P., Scott, A. (2007). Peer Review and the Relevance Gap: Ten
Suggestions for Policy-makers. Science and Public Policy 34(8), 543-553.
Nutley, S., Walter, I., Davies, H., (2007). Using evidence: How research can inform
public services. Policy Press, Bristol.
Oh, C.H., and Rich, R.F., (1996). ‘Explaining use of information in public
policymaking’. Knowledge and Policy 9 (1): 3–35.
Ouimet, M., Landry, R., Ziam, S. and Bedard, P. (2009). The absorption of research
knowledge by public civil servants. Evidence and Policy 5(4), 331-350.
Rogers, E. M. (1983). Diffusion of innovations (3rd ed.). New York: Free Press.
Rogers, J.M. (1989). Social Science Disciplines and Policy Research: The Case of
Political Science. Policy Studies Review, 9 (1), 13-28.
Ross, J. (2011). Academic Research “Lost without Translation”. Campus Review 28
March.
Rickinson, M. Sebba, J. and Edwards, A. (2011) Improving Research Through User
Engagement, London: Routledge.
Rich, R.F. (1997) Measuring Knowledge Utilization: Processes and Outcomes.
Knowledge and Policy: The International Journal of Knowledge Transfer and
Utilization 10 (3), 11-24.
Rich, R.F. and Oh, C.H. (2000) ‘Rationality and Use of Information in Policy Decisions:
A Search for Alternatives’. Science Communication 22 (2), 173-211.
Saha, L. J., Biddle, B. J., & Anderson, D. S. (1995). Attitudes towards educational
research knowledge and policy-making among American and Australian
school principals. International Journal of Educational Research, 23(2), 113–
126.
Seidel, A.D. (1981) Underutilized research: Researchers’ and decision makers’
concepts of information quality. Science Communication, 3 (2), 233-248.
Shergold, P. (2011). ‘Seen but not heard’. Australian Literary Review (The Australian
Newspaper), 4 May, pp. 3-4.
Smith, S., Ward, V., and House, A. (2011). ‘Impact’ in the proposals for the UK's
Research Excellence Framework: Shifting the boundaries of academic
autonomy. Research Policy 40 (10), 1369-137.
Sowey, E. (2002). 'The value of econometrics to economists in business and
government: a study of the state of the discipline', Journal of Applied
Mathematics and Decision Sciences, 6 (2), 101-127.
Sue, V.R. (2007) Conducting Online Surveys. Los Angeles: Sage Publications.
Vanderlinde, R and van Braak, J. (2010). The gap between educational research and
practice: Views of teachers, school leaders, intermediaries and researchers.
British Educational Research Journal, 36(2), 299-316.
Weber, D.J. (1986), Explaining Policymakers' Use of Policy Information: The Relative
Importance of the Two-Community Theory Versus Decision-Maker
Orientation. Science Communication 7 (3), 249-290.
Weiss, C.H. (1980). Knowledge Creep and Decision Accretion. Science
Communication 1(3), 381-404.
Weiss, C. H. and Bucuvalas, M. (1980). Social Science Research and Decision-Making.
New York: Columbia University Press.
Wilkinson, H. Gallagher, M. and Smith, M. (2012) A collaborative approach to
defining the usefulness of impact: lessons from a knowledge exchange
project involving academics and social work practitioners. Evidence and
Policy 8 (3), 311-327.
Wooding, S., Nason, E., Klautzer, L., Rubin, J., Hanney, S. and Grant, J. (2007) Policy
and practice impacts of research funded by the Economic and Social Research
Council: A case study of the Futures of Work programme, approach and
analysis. RAND Europe.
Table 1. Research Utilization Scale

Variable        Item
Transmission    I transmit my research results to end-users
Cognition       My research reports have been read and understood by end-users
Reference       My work has been cited in reports and strategies by end-users
Effort          Efforts were made to adopt the results of my research by end-users
Influence       My research results have influenced the choices and decisions of end-users
Application     My research has been applied by end-users
*end-users were defined as comprising policy-makers within government, or practitioners/managers
within public or community sectors or private sector organisations
Table 2. Disciplines

Discipline             n      %
Education            156   22.5
Economics            102   14.7
Sociology             90   13.1
Political Science     78   11.2
Psychology           110   15.9
Other disciplines    157   22.6
Total                693  100.0
*Other comprised a variety of fields including social work, geography, criminology, anthropology,
demography and archaeology.
Table 3. Proportion of academics who pass or fail each stage of the research
utilization scale across disciplines

Research             Education    Economics    Sociology    Political Sci.  Psychology
utilization stage    Pass% Fail%  Pass% Fail%  Pass% Fail%  Pass%  Fail%    Pass% Fail%
Transmission          96     4     82    18     91     9     83     17       75    25
Cognition             95     5     85    15     89    11     87     13       75    25
Reference             87    13     77    23     87    13     83     17       66    34
Effort                82    18     75    25     72    28     60     40       68    32
Influence             85    15     69    31     73    27     68     32       66    34
Application           86    14     74    26     73    27     60     40       68    32
Figure 1. Proportion of academic researchers who did not pass all six stages of
the research utilization scale per discipline
[Bar chart: Political Science 46%; Psychology 45%; Economics 41%; Sociology 32%;
Education 25%]
Table 4. Internal reliability coefficients (Cronbach's alpha) for variables

Name of variable                                                   Number     Number of        Cronbach
                                                                   of cases   items in scale   alpha
RU Index                                                           559        6                0.91
Researchers' Context
Benefits of collaborative research                                 612        10               0.93
Consequences of investing in research partnerships                 693        10               0.89
User's Context
End-users prioritise high quality research                         612        7                0.78
End-users prioritise the useability of the research                693        4                0.78
End-users prioritise the feasibility of the research               693        3                0.69
Dissemination
Importance of tailoring research when end-users are the focus      693        7                0.94
Importance of meetings & dissemination activities with end-users   693        4                0.95
Interactions
Importance of using contacts, seminars and sending reports
to policy-makers and practitioners                                 693        6                0.88
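For readers unfamiliar with the statistic, the internal reliability coefficients reported in Table 4 are Cronbach's alphas. A minimal sketch of the computation (using made-up item scores, not the study's data):

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a scale, given one list of scores per item."""
    k = len(item_scores)
    # Sum of the individual item variances.
    sum_item_var = sum(pvariance(item) for item in item_scores)
    # Each respondent's total scale score is the sum of their item responses.
    totals = [sum(resp) for resp in zip(*item_scores)]
    return (k / (k - 1)) * (1 - sum_item_var / pvariance(totals))

# Two perfectly consistent items give the maximum alpha of 1.0.
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]]))  # -> 1.0
```

Values above roughly 0.7, as in Table 4, are conventionally taken to indicate acceptable internal consistency.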
Table 5. Means and standard deviations a for academic research across 5 disciplines

                                                      Range      Education    Economics    Sociology    Political Sci. Psychology
                                                      Min  Max   M     SE     M     SE     M     SE     M     SE       M     SE
Research Utilization Index                            1    5     3.64  0.05   3.38  0.08   3.63  0.08   3.21  0.09     3.38  0.09
Researchers' Context
Quantitative studies                                  0    1     0.41  0.04   0.85  0.04   0.43  0.05   0.17  0.04     0.91  0.03
Qualitative studies                                   0    1     0.84  0.03   0.13  0.03   0.74  0.05   0.71  0.05     0.26  0.04
Benefits of collaborative research                    0    5     3.38  0.06   3.10  0.11   3.25  0.10   2.77  0.15     2.97  0.13
Consequences of investing in research partnerships    0    5     3.70  0.06   3.39  0.08   3.50  0.09   3.27  0.12     3.38  0.09
Research Time (teaching and research position)        0    1     0.81  0.03   0.55  0.05   0.51  0.05   0.76  0.05     0.60  0.05
Research Time (research only position)                0    1     0.19  0.03   0.45  0.05   0.49  0.05   0.24  0.05     0.40  0.05
Number of external grants                             0    51    7.81  0.57   8.38  0.87   9.49  0.96   5.76  0.61     10.57 0.87
User's Context
End-users prioritise high quality research            1    5     3.91  0.05   3.60  0.07   3.69  0.07   3.65  0.07     3.63  0.07
End-users prioritise the useability of the research   1    5     4.64  0.03   4.53  0.05   4.62  0.05   4.55  0.06     4.52  0.05
End-users prioritise the feasibility of the research  1    5     3.90  0.06   3.74  0.08   3.78  0.08   3.85  0.09     3.82  0.07
Dissemination
Importance of tailoring research when end-users
are the focus                                         0    5     4.24  0.04   3.68  0.11   3.96  0.06   3.46  0.15     3.62  0.14
Importance of meetings & dissemination activities
with end-users                                        0    5     4.23  0.05   3.47  0.13   3.91  0.11   3.34  0.15     3.46  0.15
Interactions
Importance of using contacts, seminars and sending
reports to policy-makers and practitioners            0    5     3.78  0.06   3.21  0.10   3.68  0.09   3.44  0.10     3.07  0.13
a. Standard deviations only reported for continuous measures.
Table 6. Multiple linear regression equations predicting utilization of academic research

| Predictor | Education β (SE) | Economics β (SE) | Sociology β (SE) | Political Science β (SE) | Psychology β (SE) |
|---|---|---|---|---|---|
| Quantitative studies | 0.12 (0.10) | 0.24 (0.19) | 0.25 (0.16) | -0.04 (0.20) | 0.13 (0.23) |
| Qualitative studies | -0.10 (0.13) | 0.16 (0.20) | 0.28 (0.18) | -0.35** (0.17) | 0.13 (0.10) |
| Benefits of collaborative research | 0.22*** (0.06) | 0.15 (0.10) | 0.13 (0.08) | -0.03 (0.11) | 0.05 (0.10) |
| Consequences of investing in research partnerships | -0.14** (0.07) | -0.07 (0.09) | -0.05 (0.09) | -0.34** (0.15) | -0.25* (0.15) |
| Teach/research | 0.03 (0.12) | -0.16 (0.19) | 0.05 (0.13) | -0.00 (0.17) | 0.14* (0.09) |
| Number of external grants | 0.02*** (0.01) | 0.01 (0.01) | -0.00 (0.01) | -0.01 (0.02) | 0.02** (0.01) |
| End-users prioritise high quality research | -0.03 (0.09) | 0.18* (0.09) | 0.27** (0.13) | 0.15 (0.15) | 0.15 (0.10) |
| End-users prioritise the useability of the research | 0.17 (0.12) | -0.00 (0.17) | 0.21 (0.17) | 0.44** (0.18) | 0.26 (0.16) |
| End-users prioritise the feasibility of the research | -0.08 (0.07) | -0.25** (0.11) | -0.07 (0.11) | -0.17 (0.11) | -0.13 (0.10) |
| Importance of tailoring research when end-users are the focus | 0.37*** (0.13) | 0.11 (0.11) | -0.09 (0.16) | 0.01 (0.08) | 0.05 (0.09) |
| Importance of meetings & dissemination activities with end-users | -0.04 (0.10) | -0.01 (0.11) | 0.04 (0.09) | 0.10 (0.09) | 0.17* (0.09) |
| Importance of using contacts, seminars and sending reports to policy-makers and practitioners | 0.13* (0.07) | 0.07 (0.11) | 0.14 (0.11) | 0.15 (0.13) | 0.09 (0.09) |
| Constant | 1.06* (0.61) | 2.64*** (0.79) | 0.80 (0.82) | 0.72 (0.76) | 0.49 (0.76) |
| Observations | 155 | 98 | 86 | 75 | 101 |
| Adjusted R² | 0.284 | 0.343 | 0.262 | 0.349 | 0.513 |

Standard errors in parentheses. * p < 0.10, ** p < 0.05, *** p < 0.01
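The adjusted R² figures in Table 6 penalise raw R² for the number of predictors relative to the number of observations, which matters here given the modest per-discipline sample sizes. A minimal sketch of the standard formula (the input values below are hypothetical, not the paper’s raw R²):

```python
def adjusted_r2(r2, n, p):
    """Adjusted R^2 for n observations and p predictors:
    1 - (1 - R^2) * (n - 1) / (n - p - 1)
    """
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# Hypothetical example: raw R^2 of 0.35 over 155 cases and 12 predictors
print(round(adjusted_r2(0.35, 155, 12), 3))  # → 0.295
```

With 12 predictors, the penalty is small at n = 155 but grows as n shrinks toward the 75 observations in the Political Science model.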
i. Throughout this paper we use the term “end-user” in a generic sense to refer to non-academic audiences, e.g. policy personnel, or practitioners/managers within the public, private or community sectors.

ii. Fellows are recognised for their outstanding contributions to the social sciences in Australia and abroad. See http://www.assa.edu.au/.

iii. We have reported here the combined results from the same questions used in the pilot and main survey.

iv. Australian Research Council (ARC) grants are national competitive grants that fund a significant proportion of research activity in Australian universities. Discovery grants fund fundamental research that may not have an immediate applied focus but is assumed to have some broader community benefit. Linkage grants fund research collaborations between academic chief investigators and industry partners (including government agencies). Industry partners are required to make a cash and in-kind contribution to the project (see http://www.arc.gov.au/ncgp/default.htm). These grants emphasise track record, with 40% of the ARC Discovery assessment based on track record.

v. The reason for targeting academics who had secured research grants was to ensure the project captured experienced academics who were likely to have had a history of research collaborations, since one aim was to understand the impact and dynamics of such partnerships.

vi. Respondents were asked to identify their main disciplinary background from a predetermined list. Due to space restrictions and the length of the survey, respondents were not asked to identify their sub-discipline.
vii. Figure 1 and Table 3 were calculated differently. Figure 1 was calculated by assigning a value of 1 when respondents replied always, usually, or sometimes to a particular stage (meaning they progressed across that stage), with all other responses assigned a value of 0 (meaning they failed to move up the scale). Figure 1 reports how many respondents did not progress across all six stages, giving a cumulative count for each discipline: if a respondent does not pass stage 1, they cannot progress to stage 2. This exclusion criterion has been adopted in existing studies that have used the research use scale (e.g. Cherney & McGee; Landry, Amara and Lamari 2001b) and is based on a linear understanding of research utilization. Table 3 did not impose this restriction from one stage to the next and therefore represents a more non-linear representation of research utilization. Hence Figure 1 and Table 3 are based on different assumptions.
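The two counting rules contrasted in note vii can be sketched as follows (the response strings and the six-stage layout are taken from the note; the function names and example respondent are illustrative only):

```python
# Responses counted as passing a stage, per note vii
USED = {"always", "usually", "sometimes"}

def passed(responses):
    """Recode each stage response as 1 (stage passed) or 0 (not passed)."""
    return [1 if r in USED else 0 for r in responses]

def cumulative_stage(responses):
    """Figure 1 rule (linear): a respondent only progresses while every
    earlier stage has been passed, so count consecutive passes from stage 1."""
    count = 0
    for p in passed(responses):
        if p == 0:
            break  # failing a stage halts all further progression
        count += 1
    return count

def stages_reported(responses):
    """Table 3 rule (non-linear): count every stage passed, regardless of
    whether earlier stages were passed."""
    return sum(passed(responses))

# Hypothetical respondent: passes stages 1-2, fails stage 3, passes stage 4
r = ["always", "sometimes", "rarely", "usually", "never", "never"]
print(cumulative_stage(r))  # → 2 (linear rule stops at the failed stage)
print(stages_reported(r))   # → 3 (non-linear rule counts all passes)
```

The same respondent thus contributes differently to Figure 1 and Table 3, which is why the two displays rest on different assumptions.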
viii. We are unable to explore the impact of sub-disciplines because we only asked respondents to identify their main research discipline in the academic survey.

ix. We would like to thank one of the referees for drawing our attention to this issue.