Assessing and Evaluating Multidisciplinary Translational Teams: A Case
Illustration of a Mixed Methods Approach and an Integrative Model
Kevin C. Wooten
University of Houston Clear Lake
Robert M. Rose, Glenn V. Ostir, William J. Calhoun, Bill T. Ameredes, and Allan R. Brasier
University of Texas Medical Branch at Galveston
This study was conducted with the support of the Institute for Translational Sciences at
the University of Texas Medical Branch, supported in part by a Clinical and Translational
Science Award (UL1TR000071) from the National Center for Advancing Translational Sciences,
National Institutes of Health.
Corresponding author: Kevin C. Wooten, University of Houston Clear Lake, 2700 Bay Area
Blvd., Houston, TX 77058. Email: [email protected].
Abstract
A case report is provided illustrating how multidisciplinary translational teams can be assessed at
the outcome, process, and developmental levels using a mixed methods approach. Levels of
evaluation appropriate for teams are considered in relation to relevant research questions and
assessment methods, as well as their relative strengths and limitations. Examples are provided of
how logic models can be applied to both scientific projects and team development, and can serve to inform choices among various methods within a mixed methods design. Use of an
expert panel technique is reviewed, culminating in consensus ratings of 11 multidisciplinary
teams along specific scientific and maturational criteria, and a final evaluation within a team type
taxonomy. Teams were designated as early in development, traditional, process focused, or
exemplary, on the basis of team maturation and scientific progress. Lessons learned from data
reduction, use of mixed methods, and use of expert panels are explored.
Keywords: team science, logic models, process evaluation, translational teams
Assessing and Evaluating Multidisciplinary Translational Teams: A Case Illustration of a Mixed
Methods Approach and an Integrative Model
The growth in science and engineering research conducted by teams has dramatically
accelerated since 1975 (Jones, Wuchty, & Uzzi, 2008), making multi-university collaborations
the fastest growing authorship structure. This transition has been accelerated by the recognition
that increasingly specialized scientific fields must develop collaborations to enhance creativity
and accelerate the pace of discovery to address major societal health problems (Disis & Slattery,
2010). Research and intellectual property developed by highly functioning multidisciplinary research teams have greater impact, as reflected in peer recognition through citations and patent use, than research products from siloed investigators (Wuchty, Jones, & Uzzi, 2007). As a result, major funding agencies are placing increasing emphasis on team science approaches in their funding portfolios. A notable example is the Clinical and Translational Science Award (CTSA), an
initiative emerging from the NIH Roadmap (Zerhouni, 2006), intended to stimulate the speed
and effectiveness of translational research (Woolf, 2008).
While definitions of different types of teams are numerous, work teams have been
defined as an “interdependent collection of individuals who are responsible for specific outcomes
for the organization” (Sundstrom, De Meuse, & Futrell, 1990, p. 120). Multidisciplinary research
teams are a variant of work teams that focus on collaborative processes, which is a key
component of team science (Hall, Feng, Moser, Stokols, & Taylor, 2008). Collaboration in the
context of research teams involves having team members apply their unique expertise to the
scientific problem, work separately, and then subsequently integrate their efforts, as well as share
data and ideas (Bennett, Gadlin, & Levine-Finley, 2010).
Recently, an implementation model for multidisciplinary translational teams (MTTs)
using the CTSA infrastructure has been proposed (Calhoun et al., 2013). The MTT is a unique
hybrid structure that combines the goals of an academic research team in knowledge generation and training with those of a product-driven business team developing a device or intervention for
clinical translation. MTT design characteristics include a strategic core of multidisciplinary
investigators dynamically engaged in training, capacity development and product generation
(Calhoun et al., 2013). This interdependence and heterogeneous membership promote innovation and effectiveness (Van der Vegt & Janssen, 2003). The dynamic, multi-layered, and
stage-dependent processes engaged by MTTs pose major challenges in evaluating the processes, outcomes, and skill acquisition of their members.
In this paper, we examine models, methods, and a case illustration of assessing and
evaluating MTTs within the context of a CTSA environment. We use team assessment to refer to
the collection of data to measure a team’s progress, and team evaluation to describe the process
of determining the significance of team activities. Several evaluation objectives are addressed by this
paper. First, we attempt to identify and describe a variety of qualitative and quantitative methods
useful to assess scientific teams at several different levels. Thus, a mixed methods approach to
evaluating teams in a complex and changing environment is suggested, such that multiple levels
of evaluation may be addressed. Second, we illustrate the use of data reduction processes
involving various methods and multiple levels to evaluate team outcomes, team processes, and
opportunities for team development. Last, we provide a case example of data reduction for team
evaluation, using a proposed integrated scientific team evaluation model to facilitate helpful
feedback and overall team management.
Assessing and Evaluating Team Science
The field of team science has now matured to the point where major themes and
theoretical perspectives have been identified (Falk-Krzesinski et al., 2010; Stokols, Hall, Taylor,
& Moser, 2008). Because much of translational research utilizes team science processes and
structures (Börner et al., 2010), identifying the most appropriate evaluation models, methods, and techniques is now needed (Hall et al., 2008; Masse et al., 2008). Recent work illustrates a range of methods to evaluate scientific teams. Specifically, team-science-specific survey methods (Masse et al., 2008), social network analysis (Aboelela, Merrill, Carley, & Larson, 2007), action research (Stokols, 2006), interviews and focus groups (Stokols et al., 2003), and improvement-oriented approaches using multiple methods (Gray, 2008) have all been applied in the evaluation of large-scale initiatives and research centers. Börner et al. (2010) and others (Klein, 2006; Stokols et al., 2003; Trochim, Marcus, Masse, Moser, & Weld, 2008) have therefore suggested that, given the complexity and levels of interdisciplinary and team science, a multi-method approach is necessary.
A considerable literature has been generated on the general effectiveness of teams
(Cohen & Bailey, 1997; Guzzo & Dickson, 1996). Other attempts to synthesize team
effectiveness research (Ilgen, Hollenbeck, Johnson, & Jundt, 2005; Mathieu, Maynard, Rapp, &
Gilson, 2008) have examined studies investigating context, processes of team behavior, as well
as outcomes of team functioning. Accompanying this synthesis has been the exploration of stages of team
evolution and development (Wheelan, Davidson, & Tilin, 2003). Other efforts have described
models of non-linear evolution (Gersick, 1991), as well as multiple, alternative pathways to
maturation and development (Burke, Stagl, Salas, Pierce, & Kendall, 2006; Smith-Jentsch,
Cannon-Bowers, Tannenbaum, & Salas, 2008). Such alternative models of team process,
functioning, and development are additional justification for the use of a mixed methods
approach to assessment and evaluation.
While many of the findings and models of general team effectiveness have applicability
to team science, much effort is needed to determine what components of team functioning and
effectiveness are most applicable to translational science (Falk-Krzesinski et al., 2010; Rubio et
al., 2010). The NIH Field Guide articulates a range of potential team considerations for
translational science (Bennett, Gadlin, & Levine-Finley, 2010), and more recent efforts have
been directed toward greater specificity in team membership, skill acquisition, team structure,
and discrete team processes that should constitute the foci of translational science investigation
and ultimately team evaluation (Calhoun et al., 2013). Given the need to assess team
effectiveness as well as development, a mixed methods approach to the evaluation of MTTs is
warranted (Falk-Krzesinski et al., 2011).
Types of Evaluation Appropriate for Multidisciplinary Teams
Levels of Evaluation
The selection of the appropriate evaluation framework is critically dependent on the
anticipated purpose of the evaluation (Hansen, 2005). Given that MTTs are complex and
dynamic entities, operating within an academic health center and undergoing evolutionary, adaptive change in response to both internal and external factors focused on a translational intervention, our goals are to assess and evaluate MTT function, to quantify the MTT impact on translation, and to inform CTSA leadership on effective processes. These insights would form the basis for systems-level interventions that should broadly facilitate other MTT translations within the CTSA (Calhoun et al., 2013; Hall & Vogel, 2012).
To address the multiple levels involved in the assessment and evaluation of an MTT, we draw
on concepts and techniques from Outcome-based Evaluation, Process Evaluation and
Developmental Evaluation. Table 1 illustrates these three team evaluative levels and addresses
questions which we believe to be important for the evaluation of team science. The questions
listed under the evaluation levels shown in Table 1 constitute core research questions for the
evaluation of scientific teams, and can be useful for generating evaluative criteria and choosing methods.
Outcome-based Evaluation is a systematic way to determine if a program has achieved its
goals (Hoggarth & Comfort, 2010). For example, the CTSA consortium-adopted logic model
(Frechtling, 2007) is a planning document that embodies the Outcomes-based Evaluation
approach, where short-, medium-, and long-term outcomes are identified and used to develop
activities and inputs required to accomplish them (Hoggarth & Comfort, 2010). Some relevant
assessment methods useful for quantifying MTT outcomes could include artifact/unobtrusive
measures assessment, an assessment focused on the products of the MTT, or bibliographic
assessment, a tool that would suggest the impact of the MTT in the larger scientific context.
By contrast, Process Evaluation is an iterative process that focuses on revealing program
effectiveness (Saunders, Evans, & Joshi, 2005). Assessment methods relevant for Process
Evaluation could include direct observation, structured interviews, surveys and social network
analysis on program processes and program implementation. This information stimulates
discussion, inquiry, and insight into adopting programmatic change. Crucial questions to be
addressed by the Process Evaluation framework are shown in Table 1.
Developmental Evaluation is a technique focused on continuous improvement supporting
team adaptation under dynamic and evolving conditions while the team is working (Patton,
2010). Developmental Evaluation provides information on emergent processes that inform
innovative approaches that generate insights into systems-level understanding of effective
processes. In a manner distinct from Process Evaluation, this approach allows for direct, user-based feedback and focuses participants on implementing innovative change within the team.
Questions to be addressed by Developmental Evaluation are shown in Table 1, with emphasis on
roles, expectations and skill set development.
Assessment Methods
A range of social science research methods and designs are available to evaluation
researchers (Gravetter, 2003; Trochim, 2005) which can be applied to the study of scientific
teams (Hollingshead & Poole, 2012). We believe that there are eight assessment methods most
applicable for team assessment, based on the needs of translational science and the available
literature. Table 2 illustrates artifacts/unobtrusive measures, bibliographic measures, process
observation, social network analysis, surveys, structured interviews, focus groups, and the case
method. Each is described, their applicability to MTTs illustrated, and considerations of their use
noted. Moreover, each method can be considered qualitative or quantitative, serving the needs of
outcome, process, and developmental levels of evaluation. Finally, we present the strengths and
weaknesses of each.
Based on the characteristics of these evaluation methods, we have incorporated
components of Outcomes-based, Process-based, and Developmental Evaluation into a multi-level and mixed methods evaluation framework to assess and evaluate MTTs. However, mixed
methods research, particularly as applied to evaluation research, requires integrative research
designs, specific sampling strategies, sophisticated data analysis, and great care in inferring from
the results (Creswell & Clark, 2011; Teddlie & Tashakkori, 2009). Mixed methods research
produces more evidence than either qualitative or quantitative approaches could by themselves,
and the “combination of strengths of one approach make up for the weaknesses of the other
approach” (Creswell & Clark, 2011, p. 2).
Case Illustration of Mixed Methods Assessment to Inform an Integrative Evaluation Model
Use of Logic Models
Recent evidence suggests that logic models can be applied to complex scientific
endeavors (Norman, Best, Mortimer, Huerta, & Buchan, 2011). Two specific logic models were
developed for each MTT. First, each Team Principal Investigator and Team Project Manager
developed a project-based logic model that addressed the CTSA-collaborative project. In these
cases, logic models depicted short-term (1-3 years), medium-term (4-6 years), and long-term (7-10 years) outcomes that reflected anticipated changes in standards of care, diagnosis, and management of specific disease populations, as well as recognition within a given
scientific or treatment community. Shown in Figure 1 is an example of a project-based logic
model from a burns injury and metabolic response MTT. The leadership of each MTT
subsequently developed metrics and a measurement plan to document and track their progress.
Thus, logic models were used to guide efforts addressing questions depicted in Table 1 at the
outcome level of analysis.
A second, generic logic model was developed for all MTTs to evaluate processes and developmental outcomes, involving a seven-step team development cycle as shown in Figure 2,
inclusive of team establishment and vision, team process surveys, team self-assessment,
developmental plans, team coaching, team behavior observation, and external review. Specific
metrics and a measurement plan were developed universally for all MTTs, and reflected both
qualitative and quantitative measures. This second type of logic model addressed the questions
involving team process and team development shown in Table 1.
Choice of Assessment Methods
Given the chosen levels of assessment and the number of methods utilized, a mixed
methods approach was operationalized. The research questions in Table 1 are illustrative of a
mixed methods approach because “they help blend the two approaches (qualitative and
quantitative) from the outset of the research” (Teddlie & Tashakkori, 2009, p. 126). Thus, these
questions, and attendant logic model metrics, specified how the qualitative and quantitative
sources of data could be integrated together to address questions that could otherwise not be
addressed by independent means. The questions depicted in Table 1 were
specifically designed to address whether MTTs were progressing scientifically as well as
maturing as social entities (i.e., multidisciplinary teams).
Shown in Table 3 are the specific criteria that were operationalized (by method,
qualitative and quantitative designation, and level of assessment). We assessed MTTs primarily
using artifacts, process observation, surveys, and bibliographic means. Thus, the methods ranged
from highly quantitative surveys to highly qualitative thematic analysis reports generated by an
experienced observer of scientific groups.
Illustration of a more quantitative method involved a survey designed to measure team
processes. In this case, we administered the “Our Team Survey,” a web-based team assessment
which has been validated across numerous types of work teams (Wilson, 2003). The Our Team
Survey consists of 71 items measuring 14 different factors of team processes. Each factor is
scaled from zero to seven. As shown in Table 3, six specific factor scales were used to inform
our select criteria, inclusive of clarity of goals and priorities, consensus planning, management
feedback, team atmosphere, domination, and management support. Figure 3 depicts the results
from the Our Team Survey for team leaders/principal investigators and team members
(scientists). Team members reported increases in perceptions of clarity of goals, consensus
planning, team atmosphere, recognition, and reduced perceptions of tension/stress and leader
domination over a two-year period.
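To make the survey scoring concrete, the sketch below shows how item-level responses might be rolled up into factor scores. It is a minimal illustration, not the instrument's scoring key: the item-to-factor assignments and the responses are hypothetical, since the actual mapping of the Our Team Survey's 71 items onto its 14 factors is not reproduced here.

```python
from statistics import mean

# Hypothetical item-to-factor mapping, for illustration only; the real
# Our Team Survey assigns 71 items to 14 factors (Wilson, 2003).
FACTOR_ITEMS = {
    "clarity_of_goals": [1, 2, 3, 4, 5],
    "consensus_planning": [6, 7, 8, 9],
    "domination": [10, 11, 12],
}

def factor_scores(responses):
    """Average one respondent's item answers (0-7 scale) into factor scores."""
    return {factor: round(mean(responses[item] for item in items), 2)
            for factor, items in FACTOR_ITEMS.items()}

# Illustrative answers for one team member, keyed by item number.
answers = {1: 6, 2: 5, 3: 6, 4: 7, 5: 5, 6: 4, 7: 5, 8: 4, 9: 5,
           10: 2, 11: 1, 12: 2}
print(factor_scores(answers))
# {'clarity_of_goals': 5.8, 'consensus_planning': 4.5, 'domination': 1.67}
```

Team-level results such as those in Figure 3 would then be obtained by averaging these factor scores separately for team members and leaders within each year.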
Illustration of a more qualitative method is shown in Figure 4, depicting components of a
team development planner. Here, ranking scales and identification of developmental goals for a
given team are illustrative of “information that is presented in both narrative and numerical forms”
(Teddlie & Tashakkori, 2009, p. 129). These data were used to populate portions of the
Maturation/Development criteria (e.g., new opportunities, challenges) depicted in Table 3.
Data Reduction Process
Given the large amount of potentially useful qualitative and quantitative data, only select
scales and select questions were used to inform specific criteria shown in Table 3 (i.e.,
research/scientific and maturation/development factors). For example, only certain scales (e.g.,
domination, consensus, and management support) from the quantitative Our Team Survey were
used to describe transformative and empowered leadership. An example of how each team’s
vision and goals were assessed involved using stated goals in progress reports and in team
agendas, which were qualitative in nature. Data illustrating each of the four research/scientific
factors and the four maturation/development factors were separately compiled for each MTT.
Creation of an Evaluation Model for Scientific Teams
While the many general reviews of team research (Cohen & Bailey, 1997; Guzzo &
Dickson, 1996; Ilgen, Hollenbeck, Johnson, & Jundt, 2005; Sundstrom, De Meuse, & Futrell,
1990) have well documented different types of teams (e.g., technology teams, work teams, self-
managed teams), these team categories do not capture well the nature and variation of scientific teams. Perhaps the closest description of a scientific team category is that of an ad hoc project team, which exists for only a finite time to solve problems and interact with internal and
external constituents (Devine, Clayton, Phillips, Dunford, & Melner, 1999). Related to potential
ways of examining scientific teams, Guimera, Uzzi, Spiro, and Nunes Amaral (2005) have
suggested the use of a team’s network structure based on disciplinarity, and Stokols, et al. (2008)
have suggested seven contextual influences on transdisciplinary team effectiveness.
Unfortunately, there is no universally accepted, team-specific model or set of criteria for distinguishing scientific teams categorically. While there are many
suggestions for potential evaluation criteria, and while many argue for useful methods and
processes for evaluation of team science (Falk-Krzesinski et al., 2011; Hall et al., 2008; Rubio et al., 2010), the many levels and many potential methods available render this task problematic.
There is therefore a tremendous need for an evaluative model that is specific to scientific teams. Such a model is necessitated by the goals of the NIH (Zerhouni, 2006), which focus on team-based initiatives and their evaluation. Practically speaking, a common framework is needed to guide data reduction and to manage variability in criteria. Given the potential for increased competitiveness and reduced resources, the ability to evaluate teams categorically and along their growth needs is essential.
Based on the four research/scientific factors and the four maturational/developmental
factors, we established a prototype evaluation model to be used to synthesize the reduced
assessment data into an overall team evaluation. We sought to create a model that not only differentiates among teams on our eight criteria but can also be displayed graphically to team leaders, team members, and external parties. A two-by-two matrix was constructed to depict
teams as one of four types: an exemplary team, a process team, a traditional team, and an early in
development team. These team types result from assessing teams and subsequently evaluating
them along a continuum of high to low on their team maturation/developmental factors and high
to low on their research and scientific progress. For example, exemplary teams would be rated high on both dimensions, assessed using reduced data along the criteria involving the research plan, the research generated, research communication and program growth, and progress across translational domains, as well as team vision, charter, and goals; transformative and empowered leadership; meeting management and coordination; and external communication and collaboration.
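A minimal sketch of this categorization logic follows, assuming a simple midpoint split at 6 on each 0-12 dimension score; in the study the designations were made by an expert panel (described below) rather than by a fixed cutoff, and the assignment of the "traditional" label to the remaining low-maturation, high-science cell is our reading of the matrix.

```python
def classify_team(maturation: int, science: int, midpoint: int = 6) -> str:
    """Place a team in the two-by-two taxonomy from its two 0-12 scores.

    The midpoint split is an assumed, illustrative threshold; the expert
    panel, not a fixed cutoff, made the designations reported here.
    """
    high_maturation = maturation >= midpoint
    high_science = science >= midpoint
    if high_maturation and high_science:
        return "exemplary"
    if high_maturation:
        return "process focused"
    if high_science:
        return "traditional"
    return "early in development"

# Scores drawn from Figure 6: Team 8 (11, 12), Team 5 (10, 3), Team 4 (3, 2).
print(classify_team(11, 12))  # exemplary
print(classify_team(10, 3))   # process focused
print(classify_team(3, 2))    # early in development
```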
Use of an Expert Panel
The use of expert panels to evaluate research proposals, grants, and programs is well established within the scientific community (Coryn, Hattie, Scriven, & Hartmann, 2007; Lawrenz, Thao, & Johnson, 2012). Following the recommendations of Huutoniemi (2010) and Klein
(2006), we utilized expert panels to judge the reduced assessment data and to balance the
objective data with contextual specificity and interdisciplinary focus. Our expert panel included
the Principal Investigator, the Director and Assistant Director of Research Coordination, a
consulting team coach, and a consulting evaluator, representing a range of disciplines (medicine,
clinical research, psychiatry, psychology, and management).
The expert panel was provided logic models and measurement plans for each team, as
well as assessment data and reports generated from the various methods shown in Table 3. Each
expert panel member was asked to independently review all the assessment data and determine
an initial rating of each of the 11 teams, using the scoring template shown in Figure 5.
Here, each expert panel member rated a given team’s performance as 0 (not present), 1 (low), 2
(medium), or 3 (high). Thus, each team could be given a score of zero to twelve for each of the
two criteria categories (research/scientific and maturation/development) by summing the subscores for the four specific criteria (e.g., research plan). Panel members were instructed to
thoroughly review all data provided for each team, and then evaluate each team through the
completion of a scoring template (Figure 5) for each team. Members of the expert panel were
next assembled, and each presented their initial ratings on the maturational/developmental and
research/scientific progress factors. After each panel member articulated their view, differences
in the ratings were discussed, assessment data reexamined, and a panel consensus was reached.
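The arithmetic of the scoring template, together with one hypothetical way the independent ratings could be screened before the consensus discussion, is sketched below. The divergence check is our illustration only; in the study, differences were surfaced and resolved through the panel discussion itself.

```python
MATURATION_CRITERIA = ["team_vision_charter_goals", "empowered_leadership",
                       "meeting_management", "external_collaboration"]
SCIENTIFIC_CRITERIA = ["research_plan", "research_generation",
                       "communication_program_growth", "translational_domains"]

def category_total(ratings, criteria):
    """Sum one rater's 0-3 subscores over four criteria, yielding 0-12."""
    assert all(ratings[c] in (0, 1, 2, 3) for c in criteria)
    return sum(ratings[c] for c in criteria)

def divergent_criteria(panel, spread=1):
    """Flag criteria on which independent raters differ by more than `spread`."""
    return [c for c in MATURATION_CRITERIA + SCIENTIFIC_CRITERIA
            if max(r[c] for r in panel) - min(r[c] for r in panel) > spread]
```

For each team, a rater's two category totals locate the team on the matrix in Figure 6; criteria flagged as divergent would be natural starting points for the panel's discussion.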
Results of Mixed Methods Data Synthesis
Figure 6 illustrates the results of the expert panel evaluation. Five teams were placed in the “early in development” category (i.e., low in team maturation/development and low in research/scientific progress), one team was categorized as a “process focus” team (i.e.,
high in maturation/team development, low in research/scientific progress), and three teams were
categorized as “exemplary” (high in team maturation/development and high in research/scientific
progress). Two teams were combinations of team types.
Due to the number of teams judged to be “early in development,” the 11 teams were then plotted by their evaluation categories and their relative tenure. Because the
MTTs consisted of existing research teams with a relatively long history (over 5 years), maturing
teams (3-5 years), and new teams (less than 3 years), the distribution shown in Figure 7 suggests that MTT evaluations are sensitive to maturational stage.
Lessons Learned and Conclusions
Developing a robust and uniform strategy for assessing and evaluating translational teams will be key to process improvement for the CTSA program. Assessment of translational teams
is a specialized case due to their academic missions and product-focused health outcomes. The
non-linear nature of MTT development consisting of dynamic cycles of activity (i.e., team
development) makes the establishment of an overall evaluation model problematic. The finding
that team tenure was central to team evaluation is illustrative.
Several lessons were learned from this effort. First, it is a practical necessity to use a
mixed methods design when applying three levels of evaluation (Greene, Caracelli, & Graham,
1989; Johnson, Onwuegbuzie, & Turner, 2008). Trochim et al. (2008) have shown the value of
an integrated mixed methods approach in outcomes-based evaluation of disciplinarity. Program evaluation based on quantitative measures, such as improvements in population health, is too long-term to provide the meaningful, dynamic input needed to realistically inform changes in MTT processes over shorter grant cycle times. Instead, we propose that qualitative assessment of the academic
environment would provide more useful information. In fact, structured interviews on the impact
of the adoption of MTTs within a CTSA reveal a striking cultural change (Kotarba, Wooten,
Freeman, & Brasier, 2013). Additionally, the adoption of a translational intervention by
practitioners, patients, and communities requires qualitative assessment of needs. For these
reasons, we suggest that mixed methods evaluation will provide the most meaningful evaluation
of MTTs.
Second, selection of evaluation criteria is critical. While there is no established evaluative
model, we found it useful to limit the number of criteria such that they could be used to inform a
model that is easy to use and easy to understand. While a broadened array of criteria for
contextual and collaborative variables is available (Stokols et al., 2008), each CTSA should
select the evaluative criteria that best relate to its overall objectives. However, achieving a
balance between the more traditional scientific criteria and social-psychological criteria will
play an important role in facilitating the goals of training and capacity building desired by
the NIH. Additionally, reduction of data to select criteria is the only feasible way to utilize
graphic models of evaluation as reported here. Perhaps agreement on the overall dimensions
(scientific and maturational), along with flexibility in the choice of specific criteria to evaluate
such overall dimensions, might provide both the consistency and the flexibility desired by all.
Third, effective data reduction and facilitation are essential to enabling expert panels to
consider and evaluate teams using multiple and complex criteria. Providing flow charts and
indices explaining how data are reduced to address specific criteria, providing examples, and
facilitating the panels in an unbiased and egalitarian manner may hold the key for future use. We
propose that expert rater panels may be an effective means to reinforce ideal MTT processes.
However, the use of expert panels to effectively evaluate teams will require substantive research
and investigation. Reviews of the process involving expert rater panels (Olbrecht & Bornmann,
2010) have found them plagued by inconsistencies, politics, low reliability, and a variety of
group-based decisional and cognitive biases. Thus, while we used mixed methods and specific evaluation factors and criteria, and reduced the data for independent rating of each team, the use of a consensus-seeking model for team evaluation and categorization is worthy of further
investigation. Such a model may be prone to numerous biases, yet it does allow for an in-depth
understanding from the unique perspectives of each expert rater.
References
Aboelela, S. W., Merrill, J. A., Carley, K. M., & Larson, E. (2007). Social network analysis to
evaluate an interdisciplinary research center. Journal of Research Administration, 38, 61-78.
Bennett, L. M., Gadlin, H., & Levine-Finley, S. (2010). Collaboration & team science: A field
guide. Bethesda, MD: National Institutes of Health. Retrieved from the NIH website:
https://ccrod.cancer.gov/confluence/download/attachments/47284665/TeamScience_Fiel
dGuide.pdf?version=1&modificationDate=1271730182423.
Börner, K., Contractor, N., Falk-Krzesinski, H. J., Fiore, S. M., Keyton, J., Spring, B., Stokols,
D., Trochim, W., & Uzzi, B. (2010). A multi-level systems perspective for the science of
team science. Science Translational Medicine, 2, 49cm24.
Burke, C. S., Stagl, K. C., Salas, E., Pierce, L., & Kendall, D. (2006). Understanding team
adaptation: A conceptual analysis and model. Journal of Applied Psychology, 91, 1189-1207.
Calhoun, W. J., Wooten, K., Bhavnani, S., Anderson, K. E., Freeman, J., & Brasier, A. R.
(2013). The CTSA as an exemplar framework for developing multidisciplinary
translational teams. Clinical and Translational Science, 6, 60-71.
Cohen, S. G., & Bailey, D. E. (1997). What makes teams work: Group effectiveness research
from the shop floor to the executive level. Journal of Management, 23, 239-290.
Coryn, C. L. S., Hattie, J. A., Scriven, M., & Hartmann, D. J. (2007). Models and mechanisms
for evaluating government-funded research: An international comparison. American
Journal of Evaluation, 28, 437-457.
Creswell, J. W., & Clark, V. L. P. (2011). Designing and conducting mixed methods research
(2nd ed.). Los Angeles: Sage.
Devine, D. J., Clayton, L. D., Phillips, J. L., Dunford, B. B., & Melner, S. B. (1999). Teams in organizations: Prevalence, characteristics, and effectiveness. Small Group Research, 30, 678-711.
Disis, M. L., & Slattery, J. T. (2010). The road we must take: Multidisciplinary team science.
Science Translational Medicine, 2, 22cm29.
Falk-Krzesinski, H. J., Börner, K., Contractor, N., Fiore, S. M., Hall, K. L., Keyton, J., Spring,
B., Stokols, D., Trochim, W., & Uzzi, B. (2010). Advancing the science of team science.
Clinical and Translational Science, 3, 263-266.
Falk-Krzesinski, H. J., Contractor, N., Fiore, S. M., Hall, K. L., Kane, C., Keyton, J., Thompson-Klein, J., Spring, B., Stokols, D., & Trochim, W. (2011). Mapping a research agenda for the science of team science. Research Evaluation, 20, 145-158.
Frechtling, J. A. (2007). Logic modeling methods in program evaluation. San Francisco, CA:
Jossey-Bass.
Gersick, C. J. G. (1991). Revolutionary change theories: A multilevel exploration of the
punctuated equilibrium paradigm. The Academy of Management Review, 16, 10-36.
Gravetter, F. J. (2003). Research methods for the behavioral sciences. New York:
Thomson/Wadsworth.
Gray, D. O. (2008). Making team science better: Applying improvement-oriented evaluation
principles to evaluation of cooperative research centers. New Directions for Evaluation, 118, 73-87.
Greene, J. C., Caracelli, V. J., & Graham, W. F. (1989). Toward a conceptual framework for
mixed methods evaluation designs. Educational Evaluation and Policy Analysis, 11, 255-274.
Guimera, R., Uzzi, B., Spiro, J., & Nunes Amaral, L. A. (2005). Team assembly mechanisms
determine collaboration network structure and team performance. Science, 308, 697-702.
Guzzo, R. A., & Dickson, M. W. (1996). Teams in organizations: Recent research on
performance and effectiveness. Annual Review of Psychology, 47, 307-338.
Hall, K. L., Feng, A. X., Moser, R. P., Stokols, D., & Taylor, B. K. (2008). Moving the science
of team science forward: Collaboration and creativity. American Journal of Preventive
Medicine, 35, 243-249.
Hall, K. L., & Vogel, A. L. (2012). A four phase model of translational team-based research:
Goals, team processes, and strategies. Translational Behavioral Medicine, 2, 415-430.
Hansen, H. F. (2005). Choosing evaluational models: A discussion of evaluation design.
Evaluation, 11, 447-462.
Hoggarth, L., & Comfort, H. (2010). A practical guide to outcome evaluation. London and
Philadelphia: Jessica Kingsley Publishers.
Hollingshead, A. B., & Poole, M. S. (2012). Research methods for studying groups and teams: A
guide to approaches, tools, techniques. New York: Routledge.
Huutoniemi, K. (2010). Evaluating interdisciplinary research. In R. Frodeman, J. Thompson
Klein, and C. Mitchum (Eds.), The Oxford handbook of interdisciplinarity (pp. 309-320).
Oxford, Oxford University Press.
Ilgen, D. R., Hollenbeck, J. R., Johnson, M., & Jundt, D. (2005). Teams in organizations: From input-process-output models to IMOI models. Annual Review of Psychology, 56, 517-543.
Johnson, R. B., Onwuegbuzie, A. J., & Turner, L. A. (2008). Toward a definition of mixed
methods research. Journal of Mixed Methods Research, 1, 112-133.
Jones, B. F., Wuchty, S., & Uzzi, B. (2008). Multi-university research teams: Shifting impact,
geography, and stratification in science. Science, 322, 1259-1262.
Klein, J. T. (2006). Afterword: The emergent literature on interdisciplinary and transdisciplinary
research evaluation. Research Evaluation, 15, 75-80.
Kotarba, J. A., Wooten, K. C., Freeman, J., & Brasier, A. R. (2013). The culture of translational
science research: A qualitative analysis. International Review of Qualitative Research, 6,
127-142.
Lawrenz, F., Thao, M., & Johnson, K. (2012). Expert panel reviews of research centers: The site
visit. Evaluation and Program Planning, 35, 390-397.
Masse, L. C., Moser, R. P., Stokols, D., Taylor, B. K., Marcus, S. E., Morgan, L. D., Hall, K. L.,
Croyle, R. T., & Trochim, W. (2008). Measuring collaboration and transdisciplinary
integration in team science [Supplement]. American Journal of Preventive Medicine, 35,
151-160.
Mathieu, J. E., Maynard, M. T., Rapp, T. L., & Gilson, L. L. (2008). Team effectiveness 1997-2007: A review of recent advancements and a glimpse into the future. Journal of
Management, 34, 410-476.
Norman, C. D., Best, A., Mortimer, S. T., Huerta, T., & Buchan, A. (2011). Evaluating the
science of discovery in complex health systems. American Journal of Evaluation, 32, 85-97.
Olbrecht, M., & Bornmann, L. (2010). Panel peer review of grant applications: What do we know from research in social psychology on judgment and decision making in groups? Research Evaluation, 19, 293-304.
Patton, M. Q. (2010). Developmental evaluation: Applying complexity concepts to enhance
innovation and use. New York: Guilford Press.
Rubio, D. M., Schoenbaum, E. E., Lee, L. S., Schteingart, D. E., Marantz, P. R., Anderson, K.
E., Platt, L. D., Baez, A., & Esposito, K. (2010). Defining translational research:
Implications for training. Academic Medicine, 85, 470-475.
Saunders, R. P., Evans, M. H., & Joshi, P. (2005). Developing a process-evaluation plan for
assessing health promotion program implementation: A how-to guide. Health Promotion
Practice, 6, 131-147.
Smith-Jentsch, K. A., Cannon-Bowers, J. A., Tannenbaum, S. I., & Salas, E. (2008). Guided
team self-correction: Impacts on team mental models, processes, and effectiveness. Small
Group Research, 39, 303-327.
Stokols, D. (2006). Toward a science of transdisciplinary action research. American Journal of
Community Psychology, 38, 63-77.
Stokols, D., Fuqua, J., Gress, J., Harvey, R., Phillips, K., Baezconde-Garbanati, L., Unger, J.,
Palmer, P., Clark, M. A., Colby, S. M., Morgan, G., & Trochim, W. (2003). Evaluating
transdisciplinary science. Nicotine and Tobacco Research, 5, 21-39.
Stokols, D., Hall, K. L., Taylor, B. K., & Moser, R. P. (2008). The science of team science:
Overview of the field and introduction to the supplement [Supplement]. American
Journal of Preventive Medicine, 35, 77-89.
Sundstrom, E., De Meuse, K. P., & Futrell, D. (1990). Work teams: Applications and
effectiveness. American Psychologist, 45, 120-133.
Tashakkori, A., & Teddlie, C. (2010). Sage handbook of mixed methods in social and behavioral
research. Los Angeles: Sage.
Teddlie, C., & Tashakkori, A. (2009). Foundations of mixed methods research: Integrating
quantitative and qualitative approaches in the social and behavioral sciences. Los
Angeles: Sage.
Trochim, W. M. K. (2005). Research methods: The concise knowledge base. Cincinnati, OH:
Atomic Dog.
Trochim, W. M., Marcus, S. E., Masse, L. C., Moser, R. P., & Weld, P. C. (2008). The
evaluation of large research initiatives: A participatory integrative mixed-methods
approach. American Journal of Evaluation, 29, 8-28.
Van der Vegt, G. S., & Janssen, O. (2003). Joint impact of interdependence and group diversity
on innovation. Journal of Management, 29, 729-751.
Wheelan, S., Davidson, B., & Tilin, F. (2003). Group development across time: Reality or
illusion? Small Group Research, 34, 223-245.
Wilson, C. (2003). Our team survey. Old Saybrook, CT: Performance Programs, Inc.
Woolf, S. H. (2008). The meaning of translational research and why it matters. Journal of the
American Medical Association, 299, 211-213.
Wuchty, S., Jones, B. F., & Uzzi, B. (2007). The increasing dominance of teams in production of
knowledge. Science, 316, 1036-1039.
Zerhouni, E. A. (2006). Clinical research at a crossroads: the NIH roadmap. Journal of
Investigative Medicine, 54, 171-173.
Table 1
Multiple Levels of Team Evaluation and Exemplary Questions for Translational Science
Outcome Evaluation
- Are agreed upon milestones and timelines being achieved?
- Are agreed upon outcomes (e.g., publications, patents, training, etc.) being addressed?
- Are innovations or breakthroughs that are translational in nature being achieved?

Process Evaluation
- How is the team interacting and communicating?
- Are meetings regular, agenda-based, and well attended?
- Are internal and external parties being engaged collaboratively?

Developmental Evaluation
- How are task-related behaviors at each stage of development being performed?
- How are the roles we expect members to fulfill being performed?
- How are individual and team expertise being developed?
Table 2
Assessment Methods for Multidisciplinary Translational Teams
Artifacts/Unobtrusive Measures
Method: Qualitative method using artifacts such as documents (research plans, vision, meeting notes, technical reports, etc.).
Potential use with scientific teams: Primarily outcome and developmental evaluation; helps the team reflect upon performance and plan for the future.
Considerations: Inexpensive and easy to collect; may be inconsistently available.

Bibliographic
Method: Quantitative method using frequency and impact of publications, number of citations, number of patents, and number and size of extramural grants.
Potential use with scientific teams: Primarily outcome and developmental evaluation; helps to determine knowledge generation and traditional progress and impact.
Considerations: Inexpensive and easy to collect; controversial given the use of different indices and changing impact ratings and factors.

Process Observations
Method: Qualitative method, unless validated scales are utilized to assess and rate team communication, leadership, collaboration, and process issues (trust, dysfunction).
Potential use with scientific teams: Primarily process and developmental evaluation; helps teams identify process effectiveness, conduct role analysis, and stimulate inquiry into barriers to scientific achievement.
Considerations: Requires trained observers and is time intensive; may be seen by team leaders and members as intrusive and controlling.

Social Network Analysis
Method: Both qualitative and quantitative, depending upon the analysis plan; used to determine communication patterns, influence networks, and the utility of individual parties.
Potential use with scientific teams: Primarily process and developmental evaluation; provides a visual and statistical snapshot of group functioning and of the effectiveness of relationships, information flow, and collaboration.
Considerations: Relatively inexpensive to collect (surveys or documents), but may require expertise to analyze.

Surveys
Method: Both qualitative and quantitative, depending upon the desired structure and response format; can range from highly quantitative, validated scales to open-ended, highly qualitative comment-based approaches.
Potential use with scientific teams: Primarily process and developmental evaluation; helps the team see objective data that is self-generated and other-generated (e.g., 360 survey).
Considerations: Inexpensive way to collect large amounts of data covering many areas; self-report might lead to bias and unreliability, and to surface-level assessment of deep-structure team processes.

Structured Interviews
Method: Quantitative method using analysis of recorded/documented narratives in relation to set questions, themes, and follow-up probes.
Potential use with scientific teams: Primarily process and developmental evaluation; can help the team understand complex behaviors from thematic content and provide contextual understanding of thoughts, symbols, cognitive structures, frames of reference, etc.
Considerations: Time consuming, expensive, and can be subjective; can be useful to diagnose issues missed by more objective means and measures, owing to follow-up and clarification opportunities.

Focus Groups
Method: Qualitative method using small groups of representative participants/constituents to seek understanding of a select problem, issue, or process.
Potential use with scientific teams: Primarily process and developmental evaluation; helps the team understand "grounded" problems or issues from its own experience and seek consensus.
Considerations: Relatively inexpensive, but requires significant time from participants; may be difficult to use with politically charged issues or directive leadership, and may not produce information that is amenable to categorization.

Case Method
Method: Qualitative method used to describe and chronicle events, activities, processes, and results of teams and groups through retrospective collection of documents and interviews.
Potential use with scientific teams: Outcome, process, and developmental evaluation; helps the team see the life cycle of events and generate informed causal inferences concerning team effectiveness.
Considerations: Expensive and time consuming; may require periods of time that limit usefulness with changing team members and structures.
Table 3
Evaluation Factors Used to Evaluate Multidisciplinary Translational Teams
Research/Scientific Factors (Criteria)

Research Plan: Novel and sophisticated plan, conceptually and/or technically innovative.
Methods used: Artifacts (scored grant application, annual progress reports); qualitative and quantitative; outcome level.

Research Generation: Productivity, data collection, analysis, and appropriate use of key resources.
Methods used: Artifacts (completion of formal milestones; accomplishment of short-, medium-, and long-term outcomes; time and resource utilization); qualitative and quantitative; outcome level.

Research Communication/Program Growth: Progress in dissemination, publication, and grant success.
Methods used: Bibliographic (number of publications, impact of publications, grants obtained extramurally); quantitative; outcome level.

Progress Across Translational Domains: Clinical and community impact.
Methods used: Artifacts (completion of formal milestones; accomplishment of short-, medium-, and long-term outcomes); qualitative and quantitative; outcome level.

Maturation/Development Factors (Criteria)

Team Vision, Charter, and Goals: Well-established identity and a future that is shared by all, resulting from team deliberations.
Methods used: Artifacts (annual progress reports, meeting notes); qualitative and quantitative; process level. Survey (web-based scales involving consensus, clarity of goals/priorities); quantitative; process level.

Transformative and Empowered Leadership: Leader solicits input, integrates perspectives, and facilitates consensus.
Methods used: Process observation (team coach observations of team principal investigator and project manager behavior/collaboration); qualitative; developmental level. Survey (web-based scales involving consensus, domination, management support); quantitative; process and developmental levels.

Meeting Management and Coordination: Team members meet regularly and explore new opportunities, challenges, and synergies.
Methods used: Process observation (team-based observations of team interaction patterns and decision making); qualitative; process and developmental levels. Survey (web-based scales involving clarity of goals, management feedback, team atmosphere); qualitative; process and developmental levels. Artifacts (team development planners, needs assessment/self-assessment); qualitative; process and developmental levels.

External Communication and Collaboration: Facilitated communication with collaborators that enhances research productivity.
Methods used: Artifacts (annual progress reports, meeting notes); qualitative and quantitative; process level. Process observation (team coach observation of inclusion of disciplines); qualitative; process and developmental levels.
[Figure: logic model with columns for Inputs, Activities, Outputs (including Participation), and Short-, Medium-, and Long-Term Outcomes.]
Figure 1. Project-based logic model for the burns injury and hypermetabolic response team.
[Figure: logic model with columns for Inputs, an Activities/Process/Cycle panel (evaluation of project proposal and team assessment; CTSA award to team; team charter and vision; team survey process; team self-assessment; team development plan; team coaching; team observation; external review), Outputs (including Participation), and Short-, Medium-, and Long-Term Outcomes.]
Figure 2. Generic developmental logic model for multidisciplinary translational teams.
[Figure: mean factor scores on the 0-7 survey scale, plotted for team members and team leaders in Year 1 and Year 2.]
Figure 3. Survey-based results for multidisciplinary translational teams.
[Figure: excerpt from the planner form. For FACTOR 1: TEAM VISION, CHARTER, AND GOALS (the bedrock: defines team direction), members rank three elements from 1 (most effective) to 3 (least effective, needs development): MTT-specific translational research vision and goals (e.g., milestones and timelines); articulated project outcomes and metrics; and an outline plan to develop MTT-specific processes and capabilities. The form then asks for developmental issues and objectives and, for each objective, a plan, timeline, and resources needed.]
Figure 4. Team development planner for multidisciplinary translational teams.
[Figure: scoring template. Rows list the 11 teams. Columns cover the four maturation/developmental progress criteria (team vision, charter, goals; transformative and empowered leadership; meeting management/coordination; external communication/collaboration) with a total, followed by the four research/scientific progress criteria (research plan; research generation; research communication/program growth; progress across translational domains) with a total and a grand total. Each criterion is scored 0 = not present, 1 = low, 2 = medium, 3 = high.]
Figure 5. Scoring template for multidisciplinary translational team evaluation.
[Figure: two-by-two matrix plotting team maturation/development (vertical axis, 0-12) against research and scientific progress (horizontal axis, 0-12), with regions labeled Traditional, Early in Development, Process Focus, and Exemplary, and teams A-K plotted by their scores.]

Team scores (total evaluative score; team maturation/developmental score; research/scientific program score):
A. Team 1: 15; 9; 6
B. Team 2: 12; 6; 7
C. Team 3: 7; 4; 3
D. Team 4: 5; 3; 2
E. Team 5: 13; 10; 3
F. Team 6: 16; 9; 7
G. Team 7: 15; 8; 7
H. Team 8: 23; 11; 12
I. Team 9: 2; 0; 2
J. Team 10: 6; 2; 4
K. Team 11: 8; 4; 4

Figure 6. Simplified team evaluation model matrix distribution.
[Figure: the evaluation matrix with each team marked by tenure: established team (over 5 years), maturing team (3-5 years), or team of less than 3 years.]
Figure 7. Distribution of multidisciplinary translational team evaluations by team tenure.