
Getting the Word Out: Multiple Methods for
Disseminating Evaluation Findings
Nancy B. Mueller, Ryan C. Burke, Douglas A. Luke, and Jenine K. Harris
Objective: To evaluate the effectiveness of different strategies for disseminating evaluation results to program stakeholders.

Methods: The results from a process evaluation of eight states' tobacco control programs were disseminated to the state programs, each of which was assigned to one of four dissemination conditions: print reports only, reports and Web site, reports and workshop, or all three dissemination modes. Key measures included the usefulness of the evaluation results and participants' satisfaction with their participation in the study.

Results: Although exposure to the Web site and workshop individually did not produce a statistically significant increase in usefulness, a clear upward trend in usefulness was observed as the number of dissemination modes increased. Participants who engaged in all three dissemination modes found the results more useful (P < .05) for their work and the work of their agency than participants using one or two dissemination modes. Participants who engaged in all three dissemination modes also appeared more likely to share the results with their colleagues (P = .06).

Conclusions: This study shows that disseminating evaluation results through multiple, active modes increased the usefulness, satisfaction with, and further dissemination of the results. Evaluators should consider implementing more than one mode of dissemination to share findings with stakeholders.

KEY WORDS: evaluation studies, information dissemination, tobacco
Nancy B. Mueller, MPH, is Assistant Director, Center for Tobacco Policy Research, Saint Louis University School of Public Health, St. Louis, Missouri. Ryan C. Burke, MPH, Center for Tobacco Policy Research, Saint Louis University School of Public Health, St. Louis, Missouri. Douglas A. Luke, PhD, is Principal Investigator, Center for Tobacco Policy Research, Saint Louis University School of Public Health, St. Louis, Missouri. Jenine K. Harris, MA, Center for Tobacco Policy Research, Saint Louis University School of Public Health, St. Louis, Missouri.

Disclaimer: The views expressed herein are those of the authors and do not necessarily reflect the official views of the American Legacy Foundation (Legacy), the Association of State and Territorial Chronic Disease Program Directors (CDD), or the Centers for Disease Control and Prevention (CDC) Office on Smoking and Health. Human subjects approval was obtained from the Saint Louis University Institutional Review Board.

The authors thank the tobacco control partners in Florida, Indiana, Michigan, Minnesota, Nebraska, New Mexico, North Carolina, and Oregon for participating in this project. They also thank Tanya Montgomery, Stephanie Herbers, Sarah Shelton, and Rachael Zuckerman for their contributions to this project. This research was supported by the CDD and Legacy with guidance from the CDC.

Corresponding Author: Nancy Mueller, MPH, Center for Tobacco Policy Research, Saint Louis University, 3545 Lafayette Ave, Suite 300, St. Louis, MO 63104 ([email protected]).

J Public Health Management Practice, 2008, 14(2), 170–176. Copyright © 2008 Wolters Kluwer Health | Lippincott Williams & Wilkins.

Understanding how to disseminate evidence-based interventions (ie, closing the discovery-delivery gap) has become an important focus for researchers, practitioners, and funding agencies in recent years.1–3 While current efforts focus on translating basic science into practice, another important area to examine is the dissemination of evaluation results to program stakeholders. Many funding agencies now require grantees to evaluate their programs to show effectiveness and maintain accountability. The Centers for Disease Control and Prevention Office on Smoking and Health identifies program evaluation as one of its nine best practices for comprehensive tobacco control programs and recommends that at least 10 percent of a program's budget be allocated to evaluation and surveillance activities.4 The primary reason for conducting a program evaluation is to ensure that a dissemination feedback loop exists so that evaluation results are shared with stakeholders, who use the information to improve their programs and guide future planning efforts. Furthermore, the RE-AIM (Reach, Effectiveness, Adoption, Implementation, and Maintenance) framework5 and Rogers' diffusion of innovations6 offer approaches for evaluating the potential for dissemination and public health impact of interventions.
Despite the existence of these frameworks for understanding dissemination, little is known specifically
about the process of dissemination and which strategies
work best in a public health setting.7 Although many
evaluations of states’ tobacco control programs have
been conducted,8–10 the literature regarding dissemination and utilization of results is limited. Even when results are shared with program stakeholders, it is unclear
to what extent programs are able to fully utilize the results to make improvements to their programs.
For example, a recent case study by Howell and
Yemane11 examined the quality of 12 large federal
program evaluations, including the extent to which
the evaluation results were disseminated to grantees,
government stakeholders, and academia. The authors
found dissemination of results to be very limited. Most
disseminated materials presented only the evaluation
methodology or program monitoring findings. The
timeliness of the dissemination of results was a major impediment for most of the evaluations. Several
reasons for the limited dissemination were identified,
including a lack of dissemination planning, few incentives for the government or contractors to widely disseminate the evaluation results, and reluctance by some
government personnel to disseminate unfavorable program results.
One factor that may influence utilization of evaluation results is the form (or mode) in which the results are communicated to the stakeholders. Traditionally, evaluation results are communicated through
printed reports. However, results can also be more
actively disseminated in face-to-face meetings or via
electronic media. With the advent of widespread Internet access, it has become more common to make
evaluation results available on the Web. To our knowledge, no studies have been conducted that examine
how the dissemination mode may affect the utilization of evaluation results. Determining the most effective way to share results with a specific audience
is critical to the uptake of the information by stakeholders. In a systematic review to identify strategies
used for evaluating the dissemination of cancer control interventions, Ellis and colleagues12 found that the
body of research was limited regarding the evaluation of dissemination strategies. Another systematic review reached similar conclusions for primary studies of dissemination and diffusion strategies for dietary
interventions.13
● Project LEaP
In 2003, the Center for Tobacco Policy Research conducted Project LEaP: Linking Evaluation and Practice, a
3-year study funded by the National Association of
State and Territorial Chronic Disease Program Directors
and the American Legacy Foundation. The two primary
objectives of LEaP were to (1) conduct a process evaluation of the effects of state budget crises on eight states’
tobacco control programs and (2) test the effectiveness
of three strategies to disseminate the evaluation results to states’ tobacco control program partners. This
article reports the results of the latter dissemination
trial.
The three dissemination modes examined in this
study were print reports, targeted Web site, and interactive workshop. Each of the eight states participating in the process evaluation was assigned to one of
four groups: (1) print reports only; (2) print reports and
Web site; (3) print reports and workshop; and (4) print
reports, Web site, and workshop. We hypothesized
that states receiving the evaluation results through all
three dissemination modes (print reports, Web site, and
workshop) would have a greater understanding and
find the information more useful than states receiving
only one or two of the dissemination modes. Our rationale for this hypothesis was two-fold. First, the greater
number of modes provides state partners with more opportunities to access the evaluation results. Second, traditional print reports were considered passive dissemination, whereas Web sites with interactive features and
intensive workshops would provide interactive ways
to access the results. This hypothesis is supported by
communication theory, in particular media effects research, which shows that repeated exposure to a message, especially through multiple modes, may intensify
the impact on its audience.14,15
● Methods
Participating states
From 2003 to 2004, a process evaluation was conducted
for the following eight states’ tobacco control programs:
Florida, Indiana, Michigan, Minnesota, Nebraska, New
Mexico, North Carolina, and Oregon. Table 1 provides
a brief description of the programs. Upon completion
of the evaluation of each program, the results were disseminated to the tobacco control program partners in
each state. Each state was randomly assigned to a dissemination condition:
• Florida and New Mexico received print reports only;
• Michigan and Minnesota received print reports and a targeted Web site;
• Nebraska and Oregon received print reports and an interactive workshop; and
• Indiana and North Carolina received all three dissemination modes.
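To make the design concrete, the minimal sketch below (hypothetical Python; the article does not describe the actual randomization procedure) shows one way eight states could be randomly paired off into the four conditions:

# Hypothetical illustration of the random assignment described above; this is
# not the authors' actual procedure. Eight states are shuffled and consecutive
# pairs are assigned to the four dissemination conditions.
import random

states = ["Florida", "New Mexico", "Michigan", "Minnesota",
          "Nebraska", "Oregon", "Indiana", "North Carolina"]
conditions = ["reports only", "reports + Web site",
              "reports + workshop", "reports + Web site + workshop"]

random.seed(2005)  # fixed seed so the illustration is reproducible
random.shuffle(states)

# Consecutive pairs of the shuffled list go to each condition.
assignment = {cond: states[2 * i:2 * i + 2] for i, cond in enumerate(conditions)}
for cond, pair in assignment.items():
    print(f"{cond}: {', '.join(pair)}")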
TABLE 1 ● Description of states participating in Project LEaP

State            Dissemination mode                 Funding allocation,     Per capita       % of CDC minimum   % Funding change,
                                                    FY 2005 ($ millions)    spending ($)     recommendationa    FY 2004–2005
Florida          Reports only                       2.8                     0.16             3.6                −93.0
New Mexico       Reports only                       6.3                     3.32             45.9               3.3
Michigan         Reports and Web site               6.7                     0.68             12.2               39.6
Minnesota        Reports and Web site               30.5                    6.21             107                −27.9
Oregon           Reports and workshop               5.0                     1.39             23.7               −43.8
Nebraska         Reports and workshop               4.4                     2.53             33.1               −37.1
North Carolina   Reports, Web site, and workshop    13.1                    1.58             30.7               45.6
Indiana          Reports, Web site, and workshop    12.4                    2.03             35.6               −63.3

a From the Centers for Disease Control and Prevention.4
Dissemination mode conditions
Reports
A series of five print reports presenting state-specific
evaluation findings was developed for each state.
The reports were organized by the following topics: a project introduction and executive summary,
program environment (eg, state financial and political characteristics), program resources (eg, human,
informational, monetary), program capacity (eg, network relationships, partner roles, strategic planning),
and program sustainability (eg, level of sustainability in five domains). These reports are available at
http://ctpr.slu.edu. The reports were sent to all participants to ensure that the evaluation feedback loop was
completed and every participant was provided with at
least a basic set of evaluation results.
Web site
A Project LEaP Web site was developed to disseminate
evaluation results to the specific states' tobacco control programs. The Web site structure followed the same five-part organization as the reports. The purpose of the
Web site was to allow visitors user-friendly access to
the evaluation results and for visitors to easily disseminate the results to their colleagues. All of the content
from the reports was included in the Web site. A “tell a
friend” feature was included so that visitors could easily share the link with others to increase the dissemination of the results. Other interactive features included
polling questions, a feedback form, and a site search engine. The URL was e-mailed to participants in the Web
dissemination states (ie, Indiana, Michigan, Minnesota,
and North Carolina).
Workshop
Center for Tobacco Policy Research staff conducted a
1-day in-person workshop in the four workshop states
(ie, North Carolina, Indiana, Oregon, and Nebraska).
The goal of the workshop was to provide tobacco control partners with relevant evaluation results, which
could be used for program planning efforts. Three major sessions of the workshop were (1) examining the
communication, information, and financial relationships among the tobacco control partners; (2) exploring the program’s level of sustainability; and (3) identifying opportunities to utilize the evaluation results in
short-term program planning. During the last session,
participants prioritized program challenges based on
discussions from the relationships and sustainability
sessions and then developed short-term action plans
addressing the top priorities.
Procedures and measures
In early 2005, before dissemination of the evaluation
results, baseline data were collected from participating agencies that were active partners of the states’
tobacco control programs. These active program partners included voluntary healthcare and other advocacy
organizations, local and statewide coalitions, contractors, grantees, advisory groups, and state department
of health staff. One hundred forty-four agency representatives (on average 15 agencies per state) were sent
letters of invitation to participate in the study. Those agreeing to participate completed a Web-based survey. States received their evaluation results based
on their dissemination condition on a staggered timeline, starting in July 2005. A follow-up Web survey was
then conducted approximately 8 weeks after the evaluation results were disseminated to program partners.
The follow-up survey data collection occurred from
September 2005 to February 2006.
Five major outcomes measures were examined to
evaluate the effectiveness of the dissemination methods. The outcomes were (1) usefulness of the evaluation
results to the individual, (2) usefulness of participating
in Project LEaP to the individual’s tobacco control work,
(3) usefulness of participating in Project LEaP to the
state’s tobacco control program, (4) usefulness of participating in Project LEaP to the agency, and (5) the degree
to which participating in Project LEaP was worthwhile
to the individual. Each of these items was measured on a
5-point Likert-type scale, with 1 indicating not at all useful or strongly disagree and 5 indicating extremely useful
or strongly agree. The primary research questions were
addressed by comparing the mean response to each
question across dissemination groups using an analysis
of variance (ANOVA) planned comparison analysis in
SPSS, Version 13.0.16
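For readers who want to reproduce this kind of analysis outside SPSS, the sketch below (Python with SciPy, entirely invented Likert ratings) illustrates an omnibus one-way ANOVA followed by a planned comparison of the three-mode group against the pooled remaining groups; the contrast here is approximated with a two-sample t test rather than SPSS's pooled-error contrast:

# Illustrative sketch only: the ratings below are invented, and the study's
# actual analysis was run in SPSS 13.0, not Python.
import numpy as np
from scipy import stats

# Hypothetical 5-point Likert usefulness ratings by exposure group
one_mode    = np.array([2, 3, 3, 2, 4, 3, 2, 3])
two_modes   = np.array([3, 3, 4, 3, 4, 2, 3, 4])
three_modes = np.array([4, 4, 5, 3, 4, 5, 4, 4])

# Omnibus one-way ANOVA across the three groups
f_stat, p_omnibus = stats.f_oneway(one_mode, two_modes, three_modes)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_omnibus:.3f}")

# Planned comparison: three modes vs. the pooled one- and two-mode groups
t_stat, p_contrast = stats.ttest_ind(three_modes,
                                     np.concatenate([one_mode, two_modes]))
print(f"Planned contrast: t = {t_stat:.2f}, p = {p_contrast:.3f}")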
● Results
One hundred thirty-one of 140 participants completed
the baseline survey, whereas 167 of 185 participants
completed the postsurvey (94% and 91% response rate,
respectively). The larger postsurvey sample size was
due to the inclusion of additional program partners
who were invited to attend the workshops and any
changes to the tobacco control partner lists made by
the program managers (eg, new staff, new partner agencies) before the follow-up survey administration. There
were 106 individuals who completed both the baseline
and follow-up surveys. On average, 16 participants per
state representing 14 agencies completed the baseline
survey, whereas 21 participants per state representing
14 agencies completed the follow-up survey.
Usefulness of dissemination results
A slight majority (57%) of the participants who were aware of the Project LEaP results reported having had time to read the reports (Table 2). Almost two thirds (66%) of those who read the reports found them useful. Overall, 43 percent of the participants thought
the reports were the most useful of the three dissemina-
❘ 173
tion modes. Seventy-four percent of participants shared
the reports with their colleagues, including supervisors
and boards of directors.
Fewer than half of the participants (47%) invited
to visit the Web site actually did. Sixty percent
of the participants read at least some of the contents of
the Web site, whereas 28 percent skimmed it. Although
the Web site content was exactly the same as the reports,
a lower percentage of participants found its sections
useful. Half of the participants forwarded the link to
their colleagues. Web site tracking showed that visitors
to the four different Web sites came from 167 unique
IP addresses located in 25 states, Washington, DC, and
countries such as Canada, Spain, Turkey, and the United
Kingdom. Although the Web sites had a number of visitors, they typically did not stay long. The median visit
length was less than 2 minutes. A majority of the visitors (80%) looked only at one state’s Web site, and less
than a quarter of individuals (21%) were repeat visitors.
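The tracking figures above could be derived from raw visit records along the following lines (hypothetical log format and invented data; the article does not describe the tracking implementation):

# Illustrative sketch with an invented log format; each record is
# (visitor IP address, state site visited, visit length in seconds).
from statistics import median

visits = [
    ("66.10.1.5",  "Indiana",        95),
    ("66.10.1.5",  "Indiana",        40),   # a repeat visitor
    ("128.2.9.77", "Michigan",       310),
    ("24.88.3.14", "Minnesota",      55),
    ("24.88.3.14", "North Carolina", 130),  # visited two states' sites
]

unique_ips = {ip for ip, _, _ in visits}
repeat_ips = {ip for ip in unique_ips
              if sum(1 for v in visits if v[0] == ip) > 1}
multi_state_ips = {ip for ip in unique_ips
                   if len({s for i, s, _ in visits if i == ip}) > 1}

print(f"Unique visitors: {len(unique_ips)}")
print(f"Repeat visitors: {len(repeat_ips)} "
      f"({100 * len(repeat_ips) / len(unique_ips):.0f}%)")
print(f"Visited more than one state's site: {len(multi_state_ips)}")
print(f"Median visit length: {median(v for _, _, v in visits)} s")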
Greater participation was reported in the workshops
than the Web sites, with 64 percent of invited participants attending a workshop. A large majority of attendees (73%) reported learning new information about
their state’s tobacco control program, and more than
half (59%) said the workshop influenced their work in
tobacco control. Fifty-four percent said the action plan
was useful to their work in tobacco control, and 77 percent reported making progress on the action plan at the
time of the postsurvey. Sixty-seven percent of participants shared the information from the workshops with
their colleagues.
One year later, the program managers from the workshop states reported varying levels of progress regarding the action plans during a brief phone follow-up. However, all four states continued to use the plans as guides and had made some progress. Two states incorporated three of their priorities (sustainability, increasing new partners' understanding of a comprehensive tobacco control program, and tobacco-related disparities) addressed by the workshop action plans into their state strategic plans. In addition, a few states worked to further develop and carry out steps identified in their action plans during program retreats.

TABLE 2 ● Summary of access to dissemination modes among survey participants

State            Reports sent, n   Read reports, n (%)   Links sent, n   Read Web site materials, n (%)   Invitations sent, n   Workshop attendees, n (%)
Florida          10                6 (60)                ...             ...                              ...                   ...
New Mexico       12                12 (100)              ...             ...                              ...                   ...
Michigan         13                11 (85)               14              6 (43)                           ...                   ...
Minnesota        17                6 (35)                17              3 (18)                           ...                   ...
Oregon           17                8 (47)                ...             ...                              16                    12 (75)
Nebraska         34                17 (50)               ...             ...                              33                    26 (79)
North Carolina   30                9 (30)                30              8 (27)                           29                    10 (34)
Indiana          31                19 (61)               31              8 (26)                           31                    22 (71)
Total            167               92 (55)               92              25 (27)                          109                   96 (88)
FIGURE 1 ● Collapsed modes by outcomes variable.
Relationship of dissemination modes to usefulness
and satisfaction
ANOVA results showed that individuals who were
exposed to all three dissemination modes (reports,
Web site, and workshop) were significantly more likely
to find the evaluation results useful to their agency
than those receiving reports only (P < .05). Similar
trends were observed for the other outcomes measures,
though not significant. To further explore the effect of
the number of dissemination modes on satisfaction and
usefulness, the four-mode model was collapsed into a
three-mode model (ie, reports only; Web site and reports or workshop and reports; and reports, workshop,
and Web site). The mean responses for the five outcomes measures showed an upward trend based on
the number of dissemination modes in which a participant engaged. The more times participants were exposed to the project results, the more satisfied they were
with their participation in the project and the more
useful they found the results. Specifically, significant
differences were found among the modes for the outcome variables usefulness of Project LEaP participation for
your agency and usefulness of Project LEaP participation
for your own tobacco control work (P = .01 and P = .03,
respectively). Exposure to the dissemination modes accounted for between 5 percent and 8 percent of the variance in these usefulness measures (η2 = 0.05–0.08), which can be considered a
medium-size effect (Table 3).17 Figure 1 shows that exposure to all three modes of dissemination is related
to higher satisfaction with the disseminated evaluation
results.
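As a point of reference, eta-squared for a one-way design is simply the between-group sum of squares divided by the total sum of squares. A minimal sketch of that calculation (invented data, not the study's) is:

# Hypothetical illustration of the eta-squared calculation; data are invented.
import numpy as np

groups = [np.array([2, 3, 3, 2, 4, 3]),   # one mode
          np.array([3, 3, 4, 3, 4, 4]),   # two modes
          np.array([4, 4, 5, 3, 5, 4])]   # three modes

all_obs = np.concatenate(groups)
grand_mean = all_obs.mean()

# eta-squared = SS_between / SS_total
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_total = ((all_obs - grand_mean) ** 2).sum()
print(f"eta-squared = {ss_between / ss_total:.3f}")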
Following these significant ANOVA results, planned
comparisons for these outcomes showed that participants who reported engaging in all three modes of dissemination found the Project LEaP results significantly
more useful for their work and the work of their agency
(P < .05) than participants using one or two modes of
dissemination.
TABLE 3 ● Utility of dissemination results by number of dissemination modes

Variable                                              Report series       Reports plus Web site     Workshop, Web site,          F      P     η2
                                                      only (one mode)     or workshop (two modes)   and reports (three modes)
                                                      (n = 74)a           (n = 74)a                 (n = 19)a
Overall usefulness of LEaP results to you             2.77                2.85                      3.32                         2.26   .11   0.030
Usefulness of Project LEaP participation for
  your state's tobacco control program                3.07                3.38                      3.82                         2.71   .07   0.050
Usefulness of Project LEaP participation for
  your agency                                         2.70                3.07                      3.72                         4.88   .01   0.084
Usefulness of Project LEaP participation for
  your own tobacco control work                       2.60                2.98                      3.47                         3.64   .03   0.065
Participation in Project LEaP was worth your time     3.90                3.89                      4.16                         0.67   .52   0.012

a Values given are means.

Secondary dissemination

Participants who engaged in all three dissemination modes (workshop, Web site, and reports) were more
likely to share the Project LEaP results with their colleagues. A marginally significant upward trend is observed in the percentage of participants sharing the
evaluation results from those receiving only the reports to those receiving all three modes (χ 2 = 7.4;
P = .06). Approximately 84 percent of participants receiving reports, Web site, and workshop shared the results in comparison with 52 percent in the reports-only
group.
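The chi-square statistic reported above can be reproduced in form (though not in exact value, since the cell counts are not published) with a standard test of independence on the group-by-sharing table; the counts below are invented to echo the 52-to-84 percent trend:

# Illustrative sketch with hypothetical cell counts, not the study's data.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: one, two, three dissemination modes; columns: shared, did not share
table = np.array([
    [38, 36],   # one mode:    ~51% shared
    [48, 26],   # two modes:   ~65% shared
    [16,  3],   # three modes: ~84% shared
])

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.1f}, df = {dof}, p = {p:.3f}")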
● Discussion
The results of most evaluation projects continue to
be disseminated using traditional print formats. This
study contributes to dissemination science by formally
examining whether exposure to more types of active
dissemination is related to greater perceived usefulness of and satisfaction with evaluation results. In our
study, traditional print reports were utilized most by
study participants and were shared more with participants’ colleagues than the Web site and workshop
modes. However, the reports represented the most passive dissemination mode of the three and lacked the
opportunity for further discussion about the interpretation and application of the results between participants and evaluators. Although the Web site provided
relatively easy access to the evaluation results, it was
utilized less and was more resource intensive than
expected.
The workshops provided the most interactivity between participants and evaluators, with an actual product (ie, short-term action plans) being developed by
workshop participants. The action planning helped
move the participants from simple awareness of the
results to a first step toward utilizing the results.
The results of the study suggest a simple dose-response relationship between modes of exposure and
utility of evaluation results—the more times a person
sees the results of the evaluation, the greater he or she
views the utility of these results. This is not a surprising finding, but it does have important implications for
public health evaluators. On the basis of this evidence,
evaluators should consider implementing more than
one mode of dissemination to share findings with their
stakeholders. Dissemination of results should not be a
one-time experience, but rather an ongoing, dynamic
dialogue between evaluators and stakeholders.
Although not the primary objective of this study,
some of our results suggest the importance of secondary dissemination. Almost three quarters of the
study participants reported sharing evaluation results
with other program partners, and persons exposed to
all three dissemination modes had the highest rates of
shared results. Further study of secondary dissemination is warranted; we know very little about how evaluation results are communicated and shared past the
first direct dissemination of information. Rogers’ diffusion of innovations6 may provide a useful framework
for studying secondary dissemination.
Lomas18 described the dialogue between researchers
and practitioners about implementing findings as
“poorly organized and all too rare.” This description
also applies to evaluators and stakeholders. Although
funding agencies recommend at least 10 percent of the
program budget be allocated to program evaluation,4 a
clearly defined dissemination plan with identified target audiences and strategies is often overlooked during
evaluation planning.11 The planning for the dissemination often does not occur until there are results to share
and/or does not provide adequate information or discussion about how to use the results.
Very little literature is currently available regarding effective dissemination processes and interventions
within public health.7 Ellis et al12 reported that the evidence to recommending any one dissemination strategy as effective in the promotion of uptake of cancer
control interventions was lacking. Many funding agencies (ie, government and nongovernment) have begun
funding dissemination research, although funding resources are not at optimal levels.1
It is important to note the following limitations of the
Project LEaP dissemination study. First, the outcome
measures were based on self-reported data. The study
design and budget did not allow for observational measures of evaluation utilization. However, we were able
to contact the program managers from the four workshop states 1 year later to assess how the action plans
had been used. It would be useful to follow the programs for a longer period to understand how programs
had used the evaluation results. We were also unable
to measure the effectiveness of each mode (ie, reports
vs Web site vs workshop) individually because of the
study design and the importance of sharing evaluation
results with all eight states’ programs. Understanding
the influence of each mode on utilization of evaluation
results would be an important consideration for future
study. Another limitation was the small sampling frame
of only eight states, which may limit the generalizability of the findings.
Although the findings were not unexpected, Project LEaP was one of the few studies to
actively compare strategies for the dissemination of
evaluation results. There remain many unanswered
questions about factors leading to successful dissemination. However, this study suggests that evaluation
results will be more useful if evaluators take a more active, dynamic approach to communicating their findings. It is no longer safe to assume that simply sending print reports to program partners will produce a large effect.
REFERENCES
1. Kerner J. Knowledge translation versus knowledge integration: a “funder’s” perspective. J Contin Educ Health Prof. 2006;
26:72–80.
2. Glasgow RE, Marcus AC, Bull SS, Wilson KM. Disseminating effective cancer screening interventions. Cancer.
2004;101(5)(suppl):1239–1250.
3. National Cancer Institute, Center for the Advancement of
Health, Robert Wood Johnson Foundation. Designing for Dissemination: Conference Summary Report. Washington, DC: National Cancer Institute; 2002. http://cancercontrol.cancer.
gov/d4d/d4d_conf_sum_report.pdf. Accessed April 10,
2007.
4. Centers for Disease Control and Prevention. Best Practices
for Comprehensive Tobacco Control Programs—August 1999
[reprinted with corrections]. Atlanta: US Department of
Health and Human Services, Centers for Disease Control
and Prevention, National Center for Chronic Disease Prevention and Health Promotion, Office on Smoking and Health;
1999.
5. Dzewaltowski DA, Glasgow RE, Klesges LM, et al. RE-AIM: evidence-based standards and a Web resource to improve translation of research into practice. Ann Behav Med.
2004;28(2):75–80.
6. Rogers EM. Diffusion of Innovations. 5th ed. New York: Free
Press; 2003.
7. Kerner J, Rimer B, Emmons K. Dissemination research and
research dissemination: how can we close the gap? Health
Psychol. 2005;24(5):443–446.
8. Mueller NB, Luke DA, Herbers SH, Montgomery TP. The Best Practices: use of the guidelines by ten state tobacco control programs. Am J Prev Med. 2006;31(4):300–306.
9. Pierce JP, Gilpin EA, Emery SL, et al. Has the California tobacco control program reduced smoking? JAMA. 1998;280:893–899.
10. Abt Associates Inc. Independent Evaluation of the Massachusetts Tobacco Control Program: Sixth Annual Report. Cambridge, MA: Abt Associates Inc; 2000.
11. Howell EM, Yemane A. An assessment of evaluation designs: case studies of 12 large federal evaluations. Am J Eval. 2006;27(2):219–236.
12. Ellis P, Robinson P, Ciliska D, et al. A systematic review of studies evaluating diffusion and dissemination of selected cancer control interventions. Health Psychol. 2005;24(5):488–500.
13. Ciliska D, Robinson P, Armour T, et al. Diffusion and dissemination of evidence-based dietary strategies for the prevention of cancer. Nutr J [serial online]. 2005;4:13.
14. National Institutes of Health. Theory at a Glance: A Guide to Health Promotion Practice. 2nd ed. Washington, DC: US Department of Health and Human Services; 2005. http://www.cdc.gov/dhdsp/CDCynergy_training/Content/activeinformation/resources/Theory_at_Glance_Spring2005.pdf. Accessed July 26, 2007.
15. Freimuth V, Quinn SC. The contributions of health communication to eliminating health disparities. Am J Public Health. 2004;94(12):2053–2054.
16. SPSS (for Windows). Release 13.0. Chicago: SPSS Inc; 2004.
17. Cohen J. Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Hillsdale, NJ: Erlbaum; 1988.
18. Lomas J. The in-between world of knowledge brokering. BMJ. 2007;334:129–132.