Research and applications

Using the time and motion method to study clinical work processes and workflow: methodological inconsistencies and a call for standardized research

Kai Zheng,1,2 Michael H Guo,3 David A Hanauer4,5

Additional appendices are published online only. To view these files please visit the journal online (www.jamia.org).

1 School of Public Health, Department of Health Management and Policy, University of Michigan, Ann Arbor, Michigan, USA
2 School of Information, University of Michigan, Ann Arbor, Michigan, USA
3 College of Medicine, University of Florida, Gainesville, Florida, USA
4 Department of Pediatrics, University of Michigan, Ann Arbor, Michigan, USA
5 Comprehensive Cancer Center, University of Michigan, Ann Arbor, Michigan, USA

Correspondence to Dr Kai Zheng, Department of Health Management and Policy, School of Public Health, University of Michigan, M3531 SPH II, 109 South Observatory Street, Ann Arbor, MI 48109-2029, USA; [email protected]

Received 1 January 2011
Accepted 1 April 2011
Published Online First 27 April 2011
ABSTRACT
Objective To identify ways for improving the consistency of design, conduct, and results reporting of time and motion (T&M) research in health informatics.
Materials and methods We analyzed the commonalities and divergences of empirical studies published 1990-2010 that applied the T&M approach to examine the impact of health IT implementation on clinical work processes and workflow. The analysis led to the development of a suggested 'checklist' intended to help future T&M research produce compatible and comparable results. We call this checklist STAMP (Suggested Time And Motion Procedures).
Results STAMP outlines a minimum set of 29 data/information elements organized into eight key areas, plus three supplemental elements contained in an 'Ancillary Data' area, that researchers may consider collecting and reporting in their future T&M endeavors.
Discussion T&M is generally regarded as the most reliable approach for assessing the impact of health IT implementation on clinical work. However, there are considerable inconsistencies in how previous T&M studies were conducted and how their results were reported; many of these inconsistencies do not seem necessary, yet they can have a significant impact on the quality of research and the generalizability of results. We therefore believe it is time to call for standards that can help improve the consistency of T&M research in health informatics. This study represents an initial attempt.
Conclusion We developed a suggested checklist to improve the methodological and results reporting consistency of T&M research, so that meaningful insights can be derived from cross-study synthesis and health informatics, as a field, will be able to accumulate knowledge from these studies.
INTRODUCTION
While there has been a widely acknowledged
consensus that health IT implementation often
introduces radical changes to clinical work
processes and workflow,1-4 it remains unclear what
these changes are and how they impact actual
clinical efficiency, team coordination, and ultimately quality of care and patient safety. Developing such an understanding requires rigorously
conducted research that can generate compatible
and comparable results to inform effective technology designs and implementation approaches.
Methods for studying changes to work processes
and workflow vary widely depending on research
contexts and research objectives.4 In this paper, we
focus on quantitative approaches for measuring
the impacts of such changes by means of quantifying clinicians’ time utilization and delineating
how their time is allocated to different types of
clinical and non-clinical activities. Among several
approaches commonly used to date, time and
motion (T&M), which involves continuous and
independent observation of clinicians’ work, is
generally regarded as the most reliable method
compared to alternative methods such as work
sampling and time efficiency questionnaires.5 6 In
recent years, informatics research utilizing T&M
has grown substantially thanks to a series of
pioneering papers7 8 that led to a widely-used T&M
data acquisition tool made available through the
Agency for Healthcare Research and Quality.9
The proliferation of T&M studies creates a great
opportunity to aggregate their results for broader
learning and discovery. Unfortunately, in our prior
work, we found that the design, conduct, and
results reporting of existing T&M studies vary to a considerable degree, making cross-study synthesis unnecessarily difficult or even impossible.10 This
observation motivated the present paper, which
aimed to identify key methodological considerations and results reporting requirements in order
to improve the consistency of T&M research.
Toward this goal, we first reviewed existing
empirical studies that have applied the T&M
method to assess the impact of health IT implementation. Then, we distilled the results into
a suggested ‘checklist’ intended to help standardize
the research design, conduct, and results reporting
of future T&M studies, which we refer to as
STAMP (Suggested Time And Motion Procedures).
To the best of our knowledge, this is the first such
checklist. Based on STAMP, we also analyzed the
existing T&M studies identified to delineate key
areas in which methodological and results reporting
inconsistencies have occurred most frequently and
where additional attention is most needed.
BACKGROUND
Implementation of health IT systems such as electronic health records and computerized prescriber
order entry inevitably changes established clinical
work processes and workflow.1-4 Some of the
changes are intended, in order to reengineer existing
operations to both accommodate and take full
advantage of the capabilities provided by electronic
systems; others, however, may be unintended due
to software defects, problematic implementation
processes, or policy oversights, many of which
could be associated with adverse consequences on
time efficiency and patient safety.11-14
J Am Med Inform Assoc 2011;18:704-710. doi:10.1136/amiajnl-2011-000083
Developing a better understanding of such changes is
therefore important, and is often achieved by quantifying and
comparing clinicians’ time efficiency before and after health IT
adoption (or with and without), for instance by calculating how
clinicians’ time is redistributed among various types of clinical
and non-clinical activities.7 8 10 While T&M is best suited for
this task, the results of T&M studies are highly sensitive to
nuances in research design and study conduct, because:
(1) human observers are the sole source of conventional T&M
data; their ability to discern and promptly record complex clinician
activities can therefore highly influence the reliability and
replicability of research outputs; and (2) the sample size of T&M
studies is usually small due to high resource demands for
conducting independent and continuous field observations,
which exacerbates the potential effect of observer biases and
imposes a higher requirement on subject selection and
subject-observer assignment. Further, the way in which T&M
data are recorded, such as how tasks are defined and the granularity of the task classification, can also be critical to
whether the results will be useful in systematic reviews and
meta-analyses.
As a result, we believe that it is time to identify ways to
standardize T&M research so that future T&M studies can
consistently produce compatible and comparable results.
Seeking ways to standardize research design, conduct, and
results reporting is crucial to enabling collective knowledge
accumulation.15 16 This is particularly true in health sciences
where novel discoveries may require generations of endeavors
before the results can be applied at the bedside. Notable existing
efforts toward research standardization in health sciences include STARE-HI (STAtement on the Reporting of Evaluation
studies in Health Informatics),17 CONSORT (Consolidated
Standards of Reporting Trials),18 QUOROM (QUality Of
Reporting Of Meta-analyses),19 and STARD (STAndards for the
Reporting of Diagnostic accuracy),20 among others. Many of
these standards are created or supported by the U.S. National
Library of Medicine and the EQUATOR Network (Enhancing
the QUAlity and Transparency Of health Research), ‘an international initiative that seeks to enhance reliability and value
of medical research literature by promoting transparent and
accurate reporting of research studies.’21 22
METHOD
As the focus of this research is on informatics studies
conducted in health and healthcare domains, we limited the
literature search to PubMed/MEDLINE only. The search was
also restricted to empirical studies published in English within
the last 20 years (1990-2010). Further, we applied two salient
inclusion/exclusion criteria in study selection, namely: (1) the
quantitative method employed by the study must be T&M,
and the T&M method used must conform to the following
definition: ‘independent and continuous observation of clinicians’
work to record the time required to perform a series of clinical
or non-clinical activities.’5 6 With this definition, we excluded
work sampling research, ‘time’ studies for collecting efficiency
measures on isolated events (eg, medication turnaround time),
and ‘motion’ studies that do not collect time-based data; and
(2) the main purpose of the study must be to examine the
impact of health IT implementation on clinical work processes
or workflow. Therefore, empirical studies outside this context,
such as those evaluating standalone medical devices or other
types of interventions (eg, management or clinical protocols),
were also excluded.
The PubMed/MEDLINE search was performed on July 26,
2010. Three MeSH terms, ‘time and motion studies,’ ‘health
information technology,’ and ‘medical informatics applications,’
were used to assist in the search, in addition to various combinations of the following search terms: ‘time,’ ‘motion,’ ‘time
utilization,’ ‘time efficiency,’ ‘informatics,’ and ‘health IT.’ The
initial search resulted in a total of 204 articles. Thirty-seven were
excluded immediately because they were not in English or were
in the form of conference abstracts, posters, or commentaries or
reviews. The two faculty authors (KZ and DAH) each screened a random half of the abstracts of the remaining papers; 141
were determined as not meeting the inclusion criteria. Then, all
three authors read the full text of the 26 papers that remained;
12 more were determined to not meet the criteria, leaving 14
papers included in the final review.7 8 23-34 Table 1 summarizes
the reasons for excluding the 153 papers.
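The screening arithmetic described above can be summarized in a short sketch (the stage names are ours; the counts are taken from the text and Table 1):

```python
# Paper counts at each screening stage, as reported in the text.
initial_hits = 204       # PubMed/MEDLINE search results
triage_excluded = 37     # not in English, or abstracts/posters/commentaries/reviews
abstract_excluded = 141  # excluded during abstract screening
full_text_excluded = 12  # excluded during full-text review

after_triage = initial_hits - triage_excluded         # papers entering abstract screening
after_abstract = after_triage - abstract_excluded     # papers entering full-text review
final_included = after_abstract - full_text_excluded  # papers in the final review

print(after_triage, after_abstract, final_included)  # 167 26 14

# Table 1 covers only the eligibility-screening exclusions (N=153).
assert abstract_excluded + full_text_excluded == 153
# Citation analysis and newer publications later added 10 papers (24 total).
assert final_included + 10 == 24
```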
Further, we extended the study pool by analyzing the citations of the 14 papers identified, as well as by incorporating
a few new publications that came to our attention after the
initial literature screening. This led to the inclusion of an additional 10 papers that met the aforementioned criteria; six of
them were published in 2010.10 35-43
Next, we analyzed each of these 24 papers in-depth and
iteratively developed a ‘study feature’ schema to capture the
commonalities of how current T&M research is conducted and
reported. The schema includes (1) basic information such as
empirical setting, type of health IT system studied, and data
collection tool used, and (2) other less obvious study properties
such as whether the same clinician subjects were observed across
different phases in multistage studies, and how transition
periods between consecutive tasks were recorded and analyzed.
The results were distilled into the checklist for standardizing
T&M research that we propose in this paper. Then, we anatomized the design, conduct, and results reporting of each of the
24 studies to identify common divergences in existing T&M
research, which point to key areas where additional attention is
needed.
RESULTS
A complete list of the papers we reviewed is provided in
appendix 1 as an online supplement. Appendix 2 (also available
online) presents the study feature schema. As shown in
appendix 2, about two thirds of these papers were published in
the past 5 years (2005-2010), over half employed a prospective
before and after design, and the majority were conducted in
urban academic medical centers to evaluate the implementation
of homegrown or commercial electronic health record systems or
Table 1  Reasons for paper exclusion during eligibility screening (N=153)

Reason                                                           Number of papers excluded
Not an empirical study (eg, position papers, theoretical
  developments, and systematic reviews)                          42
Subject of the study was not health IT applications              40
Duplicate publications reporting the same datasets                2
Methodological exclusions (non-T&M)
  Work sampling                                                  10
  Efficiency studies on isolated events (eg, turnaround time)    50
  Motion only                                                     1
  Studies using other non-T&M approaches (eg, surveys)            8

T&M, time and motion.
computerized prescriber order entry systems (or ePrescribing
systems used in ambulatory settings).
Proposed STAMP checklist
The checklist that we propose in this paper, STAMP, was
developed based on the study feature schema shown in appendix
2. Comprising 29 main items organized into eight key areas,
STAMP outlines a minimum set of data/information elements
that researchers may consider collecting and reporting in their
future T&M endeavors.
In a ninth area called ‘Ancillary Data,’ we also introduced
three additional data elements, namely interruption, interaction,
and location. These elements have not been rigorously studied
among the papers we reviewed, but are found in related literature and can be valuable additions to enrich T&M analyses
evaluating the impact of health IT implementation.44-47
First, interruptions, a pervasive phenomenon in healthcare,
have a particularly detrimental impact on the quality and efficiency of clinical work. Their frequency and/or effects
may diminish because of health IT adoption, or could escalate
instead as a result. Second, patterns of interpersonal interaction (embodying communication, cooperation, and coordination) are an integral part of clinical work, which may also
be substantially altered by the adoption of health IT. The basic
elements of interpersonal interaction activities, such as with
whom and via what method, can be readily recorded as part of
T&M observations. Finally, clinicians’ spatial movements in the
patient care area may allude to how the physical layout of
a hospital/clinic (eg, location of computer terminals) or characteristics of the implementation (eg, use of portable computing
devices) may affect workflow efficiency. While none of the
existing T&M studies had recorded information regarding the
location where the activities took place, we believe that adding
this information would incur very little extra cost, yet the
results can greatly enhance our understanding of clinical work
processes and workflow.
The 29 main items of STAMP, in addition to the three extra
items, are presented in table 2. In figure 1, we depict STAMP as
a flow diagram and provide it in a cheat-sheet format in
appendix 3 in the online supplement.
Even though not all of these data/information elements are
universally applicable, we believe that a majority of them are
and, if so, they should be collected and reported in an explicit
manner. The checklist therefore may also serve as an initial step
toward standardized T&M research by inviting future studies to
incorporate these critical methodological considerations into
their design and conduct. Reasonable deviations are possible depending on the specific research contexts and objectives; however, a justification for such deviations should be provided.
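As an illustration (not part of the checklist itself), the STAMP areas and element codes from table 2 can be encoded as a simple data structure so that a study report can be audited for unreported items; the function name is ours:

```python
# STAMP areas mapped to their element codes (29 main items plus the
# three supplemental 'Ancillary data' items, per table 2).
STAMP = {
    "Intervention":      ["INT.1", "INT.2", "INT.3"],
    "Empirical setting": ["ES.1", "ES.2", "ES.3"],
    "Research design":   ["RD.1", "RD.2", "RD.3", "RD.4"],
    "Task category":     ["TC.1", "TC.2", "TC.3"],
    "Observer":          ["OBS.1", "OBS.2", "OBS.3", "OBS.4", "OBS.5", "OBS.6"],
    "Subject":           ["SUB.1", "SUB.2", "SUB.3", "SUB.4"],
    "Data recording":    ["DR.1", "DR.2", "DR.3", "DR.4"],
    "Data analysis":     ["DA.1", "DA.2"],
    "Ancillary data":    ["AD.1", "AD.2", "AD.3"],  # supplemental
}

def missing_items(reported):
    """Return STAMP element codes that a study report does not mention."""
    reported = set(reported)
    return [code
            for codes in STAMP.values()
            for code in codes
            if code not in reported]

# 29 main elements plus 3 ancillary elements
assert sum(len(v) for v in STAMP.values()) == 32
```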
Divergences in existing T&M research
Based on STAMP, we analyzed the variability in how existing
T&M studies were conducted and/or the way in which their
results were reported. Table 3 presents the results. As shown in
the table, observer training and methods for collecting field data
are two areas where inconsistencies have occurred most often.
Below, we discuss three salient issues observed from table 3
that we deem most crucial to T&M research. We believe these
issues are, in general, reasonably under researchers’ control
(ie, can be avoided despite practical constraints) and addressing
them will not incur a dramatically increased demand for
research resources.
First, while human observers play a central role in T&M
research, the existing studies tend to under-report the
preparedness of the human observers involved. In addition, very
few studies provided information regarding whether pilot
observation sessions were conducted and whether calibration
was attempted to improve consistency across multiple observers
(if applicable). These issues could pose severe threats to the
validity of T&M research: while there are exceptions, capturing
the subtleties of clinical work may exceed the ability of average
research assistants, especially if they do not possess a prior
clinical background and/or experience in conducting observational studies in clinical settings. For example, use of the data
collection tool published by the Agency for Healthcare Research
and Quality requires human observers to be able to distinguish
the following pairs of activities: ‘Paper Writing: Orders’ versus
‘Paper Writing: Forms’ and ‘Procedures: Lab Test’ versus ‘Procedures:
Phlebotomy.’7-9 The nuances between these activities, however,
may not be obvious to research personnel who do not have
adequate training and relevant backgrounds. Unfortunately,
many of the existing studies employed temporary student
assistants to collect field data, and more importantly, a significant proportion of the studies (16 out of 24) disclosed very little
information about who the observers were and how they were
trained.
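When multiple observers are used, one common way to calibrate inter-observer uniformity is to have two observers independently classify the same session and compute a chance-corrected agreement statistic such as Cohen's kappa. A minimal sketch (the task labels are invented for illustration):

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two observers' task labels."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Observed agreement: proportion of tasks labeled identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected chance agreement, from each observer's label frequencies.
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

obs1 = ["ordering", "exam", "ordering", "documentation"]
obs2 = ["ordering", "exam", "documentation", "documentation"]
print(round(cohens_kappa(obs1, obs2), 3))  # 0.636
```

Low kappa during pilot sessions signals that the task definitions or the observer training need refinement before the main data collection begins.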
Second, more than half of the studies (15 out of 24) utilized
a prospective before and after design, yet five of them did not
report whether the pre- and the post-stages involved the
same cohort of clinician subjects. Because the sample size of
T&M studies is usually small, conclusions drawn based on
pre-post comparison without a within-subject control design
could be very questionable. Further, most of these studies were
vague with respect to whether the field observations were
conducted by the same observer(s) across different study phases.
If not, then this calls into question whether the pre-post
differences revealed by the study might be principally introduced by observer biases rather than by the intervention
studied. While it should be acknowledged that it can be
practically difficult to ensure subject and observer continuity
due to many uncontrollable factors (eg, trainee rotation and
staff turnover), we deem it important for T&M studies to
report compromises made, even if inevitable, so that the results
can be more meaningfully interpreted and used in research
synthesis.
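To illustrate analytically why subject continuity matters: when the same clinicians are observed in both phases, pre-post differences can be tested within subject, which removes between-clinician variability from the comparison. A sketch with invented numbers:

```python
import statistics

# Hypothetical minutes per shift spent on documentation by the SAME five
# clinicians before and after implementation (within-subject design).
pre  = [62, 55, 70, 48, 66]
post = [75, 60, 81, 55, 79]

diffs = [b - a for a, b in zip(pre, post)]  # per-clinician change
mean_d = statistics.mean(diffs)             # average change in minutes
sd_d = statistics.stdev(diffs)
t = mean_d / (sd_d / len(diffs) ** 0.5)     # paired t statistic, df = 4
print(round(mean_d, 1), round(t, 2))        # 9.8 6.03
```

With distinct pre and post cohorts, the analysis would instead have to treat the samples as independent, and with typical T&M sample sizes the between-clinician spread can easily swamp the effect of interest.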
Third, 14 out of the 24 studies did not provide adequate detail
regarding non-observed periods, which may include lunchtime,
the time window after a shift change, and when clinician
subjects temporarily left the study site. Given that
a significant amount of clinical activity may take place during
such periods (eg, clinical documentation completed in offices
outside the patient care area and deferred documentation
completed after the shift), not taking them into account could
result in serious validity issues. For example, it may be reasonable to assume that, compared to paper-based operations, offsite
or off-duty documentation activities occur more often in
a digital environment because of the ‘access anywhere and
anytime’ nature of electronic data. Therefore, comparison of
‘documentation time’ versus ‘time spent on direct patient care,’
before and after health IT implementation, may produce
misleading results if offsite or off-duty documentation activities
are not captured as part of the post-implementation observations. Similarly, more than half of the studies did not report how
between-task transition periods were handled, that is, whether
the ‘stopwatch’ was paused when the clinician was about to
finish a task and move on to the next one. Consider the
following scenario: a clinician was talking to a hospitalized
patient, then walked out of the room to grab the chart folder
Table 2  STAMP (Suggested Time And Motion Procedures)

Intervention
  Type (INT.1): The system studied (intervention). Example: Overhage et al,7 p 362
  System genre (INT.2): Origin or lineage of the system (eg, commercial product, homegrown system, open source software). Example: Pizziferri et al,8 p 177
  Maturity (INT.3): Time elapsed since intervention, including the amount of time that study subjects have been exposed to the intervention. Example: Overhage et al,7 p 362
Empirical setting
  Institution type (ES.1): Type of the healthcare facility or facilities where empirical observations are conducted (eg, academic vs non-academic). Example: Lo et al,31 p 610
  Care area (ES.2): Area of patient care services (eg, inpatient, outpatient, emergency department). Example: Overhage et al,7 p 362
  Locale (ES.3): Geographic characteristics (eg, urban vs rural). Example: Overhage et al,7 p 362
Research design
  Protocol (RD.1): Research protocol (eg, RCT, before and after, after only). Example: Pizziferri et al,8 p 178
  Duration (RD.2): Length of fieldwork (eg, whether all observations are completed within a month, or occur sporadically over the course of a year). Example: Pizziferri et al,8 p 180
  Shift distribution (RD.3): Clinical shifts observed (eg, morning, afternoon, night, if applicable). Example: Overhage et al,7 p 364
  Observation hours (RD.4): Total number of direct observation hours, in addition to how the hours are distributed across study phases or RCT study arms (if applicable). Example: Pizziferri et al,8 p 178
Task category
  Definition and classification (TC.1): Definition of tasks and description of all major and minor task categories. Example: Pizziferri et al,8 p 179
  Acknowledgment of prior work (TC.2): Acknowledgment of task classification schemas previously used in the same or similar settings, and justifications if modifications are made. Example: Overhage et al,7 p 365
  New development (TC.3): Development and validation of task definition and task classification, if no prior work can be leveraged. Example: Overhage et al,7 p 363
Observer
  Size of field team (OBS.1): Total number of independent human observers. Example: Pizziferri et al,8 p 180
  Training (OBS.2): Type and amount of training provided to human observers, including pre-study pilot observation sessions. Example: Overhage et al,7 p 364
  Background (OBS.3): Professional background of observers (eg, residents, nurses, industrial engineering students) and their prior experiences in conducting observational studies in clinical settings. Example: Overhage et al,7 p 363
  Inter-observer uniformity (OBS.4): If and how inter-observer agreements are calibrated. Example: Ampt & Westbrook,29 p 161
  Continuity (OBS.5): Continuity of observers across multiple study phases (if applicable). Example: Pizziferri et al,8 p 180
  Assignment (OBS.6): How observers are assigned to shadow different research subjects and, in particular, research subjects enrolled in different study phases or RCT study arms (if applicable). Example: Pizziferri et al,8 p 180
Subject
  Size (SUB.1): Number of research subjects enrolled. Example: Overhage et al,7 p 365
  Recruitment and randomization (SUB.2): How research subjects are recruited (and randomized, if applicable). Example: Overhage et al,7 p 363
  Continuity (SUB.3): Continuity of subjects across multiple study phases (if applicable). Example: Pizziferri et al,8 p 178
  Background (SUB.4): Background information about research subjects such as clinician type and level of training (eg, residents vs attending physicians); if conditions allow, other individual characteristics such as gender, age, and computer literacy (eg, assessed using tools such as that of Cork et al48). Example: Pizziferri et al,8 p 180
Data recording
  Multitasking (DR.1): If and how multi-tasking is taken into account; in particular, if only the primary task is recorded or all concurrent tasks are recorded. Example: Hollingworth et al,30 p 728
  Non-observed periods (DR.2): If there are periods of time not covered by independent observers. Example: Hollingworth et al,30 p 724
  Between-task transition (DR.3): If and how transition periods between consecutive tasks are handled. Example: Lo et al,31 p 612
  Collection tool (DR.4): Device and software used to collect field data, for example, the AHRQ tool,9 WOMBAT,45 and the medical work assessment tool developed by Mache et al.49 Example: Overhage et al,7 p 364
Data analysis
  Definition of key measures (DA.1): Key measures used in analysis and results reporting, for example, average time spent on ordering activities vs on direct patient care,7 8 TOT,47 and average continuous time that assesses workflow fragmentation and task switching frequency.11 Example: Overhage et al,7 p 365
  Analytical methods (DA.2): Statistical or other types of analytical methods used to analyze the data. Example: Pizziferri et al,8 p 180
Ancillary data
  Interruption (AD.1): A descriptor specifying if a task represents an interruption to prior tasks. Example: Westbrook et al47
  Interaction (AD.2): Interpersonal interactions/communications necessary for task execution; for example, with whom and via what method (eg, in person, by telephone, via a computerized system). Example: Mache et al46
  Location (AD.3): The location where the activities take place (eg, in a patient ward, in a hallway, at computer workstations). Example: Mache et al46

AHRQ, Agency for Healthcare Research and Quality; RCT, randomized controlled trial; TOT, time on task; WOMBAT, Work Observation Method by Activity Timing.
located by the door of the ward: should the time incurred for performing this transition be counted toward the ‘Talking to Patient’ activity, or the subsequent ‘Patient Chart Reading’ activity, or a standalone ‘Walking’ activity? We deem that T&M
studies should consider and report such nuances: the duration of
each individual incident may be short and seemingly trivial, but
the cumulative amount could be substantial and may have
a significant impact on research outputs.
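The transition-handling question above can be made concrete with a small sketch: record transitions as a category of their own, then derive both per-category time totals and an average-continuous-time fragmentation measure. The log values and function names are invented for illustration:

```python
from collections import defaultdict

# Hypothetical time-stamped task log (seconds since observation start).
# Transitions are recorded as their own category rather than folded into
# the preceding or following task -- one possible resolution of the
# ambiguity discussed above.
log = [
    (0,   180, "talking to patient"),
    (180, 195, "transition"),           # walking to the chart rack
    (195, 420, "patient chart reading"),
    (420, 430, "transition"),
    (430, 600, "talking to patient"),
]

def totals(log):
    """Total observed time per task category, in seconds."""
    t = defaultdict(int)
    for start, end, category in log:
        t[category] += end - start
    return dict(t)

def average_continuous_time(log, ignore=("transition",)):
    """Mean duration of uninterrupted task segments: a simple
    fragmentation measure (shorter values = more task switching)."""
    durations = [end - start for start, end, cat in log if cat not in ignore]
    return sum(durations) / len(durations)

print(totals(log))                  # {'talking to patient': 350, 'transition': 25, ...}
print(average_continuous_time(log)) # ~191.7 seconds
```

Because the transition segments are kept separate, a study can report them explicitly, fold them into an adjacent task, or exclude them, and state which choice was made.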
Figure 1  Suggested Time And Motion Procedures presented as a flowchart (dashed boxes designate items that may not be applicable to all studies).
DISCUSSION
In addition to the issues mentioned above, our review of the
literature also raised two other concerns. First, as shown in table
1 that reports common reasons for paper exclusion, the term
‘time and motion’ may have been overused as it appears
frequently in studies that did not actually use the T&M method
according to the prevalent definition5 6; examples include work
sampling studies, studies on turnaround time, and questionnaire
surveys. This issue could create chaos in knowledge accumulation and cause unnecessary difficulties for researchers
conducting systematic reviews and meta-analyses. Second, there
does not seem to be a standard way to train T&M observers or
to calibrate inter-observer disagreements (if applicable). This fact
may exacerbate potential observer biases and undermine
the validity of cross-study research synthesis. Developing
a standard observer training set, possibly in the form of video
recordings of typical clinical sessions annotated by T&M
experts, would therefore be highly valuable.
The literature review results also suggest several general
methodological and results reporting issues that are not necessarily unique to T&M research. Because many informatics
evaluation studies feature a prospective before and after design,
the right time to embark on post-intervention research activities
can be a crucial decision, which should ideally occur after the
‘burn-in’ effect is diminished so that stable and sustainable user
behaviors can be observed. However, in the health informatics
research community, there does not seem to be a common
consensus regarding the definition of ‘intervention maturity,’
or readily available methods that can be used to determine if
intervention maturity has been reached. Among the pre-post
T&M studies that we reviewed, this time point was by and
large arbitrarily determined, and the range varied widely
from immediately post-implementation to 3 months later,
6 months later, etc. Further, one-fifth of these studies did not
report at all when their post-implementation data collection
activities started, and very few studies explicitly mentioned
whether all research subjects had been equally exposed to
the intervention since the day it was introduced into the
empirical environment.
Another area that deserves closer attention is the identification of pertinent measures. Individual characteristics, for
example, can be influential factors moderating clinicians’
acceptance of health IT systems.50 However, the decision about
which individual characteristics to collect and analyze seems to be arbitrary: gender, age, level of training, medical specialty, prior
experiences, etc, frequently appeared in the studies we reviewed
but in assorted combinations, and very few studies provided
justifications as to why some of these variables were considered
while others were not. Further, additional research is needed on
the selection of appropriate measures to quantify clinical work
processes and workflow.4 Aggregated average amount of time
spent on performing certain tasks (eg, examining patients) or
certain task groups (eg, direct patient care) is the most
commonly used measure to date. As we demonstrated in our
prior work, this measure may not be adequate to capture the
level of granularity needed for revealing the true impact of
health IT implementation on workflow (eg, workflow fragmentation and task switching frequency).10 Further, researchers
Table 3  Methodological or result reporting inconsistencies among the T&M studies reviewed

Area               Element                         STAMP ref code   Reported   Not reported   NA
Intervention       Type                            INT.1            24          0             0
                   System genre                    INT.2            20          4             0
                   Maturity                        INT.3            17          4             3
Empirical setting  Institution type                ES.1             19          5             0
                   Care area                       ES.2             23          1             0
                   Locale                          ES.3             18          6             0
Research design    Protocol                        RD.1             24          0             0
                   Duration                        RD.2             18          6             0
                   Shift distribution*             RD.3             16          8             0
                   Observation hours               RD.4             21          3             0
Task category      Definition and classification   TC.1             21          3             0
                   Acknowledgment of prior work    TC.2             11          0            13
                   New development*                TC.3              3         10            11
Observer           Size of field team*             OBS.1            12         12             0
                   Training*                       OBS.2             8         16†            0
                   Background*                     OBS.3            12         12             0
                   Inter-observer uniformity*      OBS.4             2         16             6
                   Continuity*                     OBS.5             3         12             9
                   Assignment*                     OBS.6             6         13             5
Subject            Size                            SUB.1            21          3             0
                   Recruitment and randomization   SUB.2            18          6             0
                   Continuity                      SUB.3            13          5             6
                   Background                      SUB.4            24          0             0
Data recording     Multitasking*                   DR.1             10         14             0
                   Non-observed periods*           DR.2             10         14             0
                   Between-task transition*        DR.3             11         13             0
                   Collection tool                 DR.4             20          4             0
Data analysis      Definition of key measures      DA.1             24          0             0
                   Analytical methods              DA.2             24          0             0

*Area where the number of papers classified into the ‘not reported’ category equals or exceeds 1/3 of all papers.
†Six of these 16 studies only stated briefly that the observers were trained without any further detail given.
NA, not applicable; STAMP, Suggested Time And Motion Procedures; T&M, time and motion.
have demonstrated that nuances in how measures such as time on task are calculated, in the context of interruptions, can have a significant impact on research results and conclusions.47 Finally, analytical methods (eg, statistical tests and regression analyses) are another area in which greater consensus would be very beneficial. Among the T&M studies we reviewed, analytical methods varied considerably, even for similar data collected under similar conditions.
It should be acknowledged that STAMP, the suggested checklist presented in this paper, was developed by reviewing the empirical studies we were able to identify. While we believe its data/information elements adequately cover most of the critical aspects of T&M research, they may not be comprehensive: our literature search may not have included all relevant studies, and the studies we included may not have exhibited all possible T&M research features. Further, the checklist reflects only the best current knowledge of how T&M studies should be conducted. It therefore needs to be updated continually to accommodate novel field observation techniques and technologies, such as automated activity capture and recognition using radiofrequency identification tags and video/audio recording devices.44 51 In the future, we will evaluate the adoption of STAMP by the research community, keep a watch on emerging methods and the new results-reporting requirements they bring, and accordingly adjust and update STAMP's structure and the data/information elements it encompasses.
CONCLUSION
We reviewed existing empirical studies that have applied the T&M approach to evaluate the impact of health IT implementation on clinical work processes and workflow. Based on the results, we developed a suggested 'checklist' intended to help standardize the design, conduct, and results reporting of future T&M studies. We believe that the resulting checklist, STAMP (Suggested Time And Motion Procedures), may contribute to improving the methodological and results reporting consistency of T&M research, so that meaningful insights can be derived from cross-study synthesis and health informatics, as a field, will be able to accumulate knowledge from these studies.
Funding This project was supported in part by Grant # UL1RR024986 from the National Center for Research Resources (NCRR), a component of the National Institutes of Health (NIH), and the NIH Roadmap for Medical Research.
Competing interests None.
Provenance and peer review Not commissioned; externally peer reviewed.
J Am Med Inform Assoc 2011;18:704–710. doi:10.1136/amiajnl-2011-000083
REFERENCES
1. Ash JS, Bates DW. Factors and forces affecting EHR system adoption: report of a 2004 ACMI discussion. J Am Med Inform Assoc 2005;12:8–12.
2. Niazkhani Z, Pirnejad H, Berg M, et al. The impact of computerized provider order entry (CPOE) systems on inpatient clinical workflow: a literature review. J Am Med Inform Assoc 2009;16:539–49.
3. National Research Council. Computational Technology for Effective Health Care: Immediate Steps and Strategic Directions. Washington, DC: National Academies Press, 2009.
4. Unertl KM, Novak LL, Johnson KB, et al. Traversing the many paths of workflow research: developing a conceptual framework of workflow terminology through a systematic literature review. J Am Med Inform Assoc 2010;17:265–73.
5. Finkler SA, Knickman JR, Hendrickson G, et al. A comparison of work-sampling and time-and-motion techniques for studies in health services research. Health Serv Res 1993;28:577–97.
6. Burke TA, McKee JR, Wilson HC, et al. A comparison of time-and-motion and self-reporting methods of work measurement. J Nurs Adm 2000;30:118–25.
7. Overhage JM, Perkins S, Tierney WM, et al. Controlled trial of direct physician order entry: effects on physicians' time utilization in ambulatory primary care internal medicine practices. J Am Med Inform Assoc 2001;8:361–71.
8. Pizziferri L, Kittler AF, Volk LA, et al. Primary care physician time utilization before and after implementation of an electronic health record: a time-motion study. J Biomed Inform 2005;38:176–88.
9. Time and Motion Study Tool: Ambulatory Practice (TMS-AP). http://healthit.ahrq.gov/portal/server.pt/gateway/PTARGS_0_1248_216071_0_0_18/AHRQ%20NRC%20Time-Motion%20Study%20Tool%20Guide.pdf (accessed 17 Dec 2010).
10. Zheng K, Haftel HM, Hirschl RB, et al. Quantifying the impact of health IT implementations on clinical workflow: a new methodological perspective. J Am Med Inform Assoc 2010;17:454–61.
11. Han YY, Carcillo JA, Venkataraman ST, et al. Unexpected increased mortality after implementation of a commercially sold computerized physician order entry system. Pediatrics 2005;116:1506–12.
12. Koppel R, Metlay JP, Cohen A, et al. Role of computerized physician order entry systems in facilitating medication errors. JAMA 2005;293:1197–203.
13. Campbell EM, Sittig DF, Ash JS, et al. Types of unintended consequences related to computerized provider order entry. J Am Med Inform Assoc 2006;13:547–56.
14. Ash JS, Sittig DF, Poon EG, et al. The extent and importance of unintended consequences related to computerized provider order entry. J Am Med Inform Assoc 2007;14:415–23.
15. Moher D, Schulz KF, Simera I, et al. Guidance for developers of health research reporting guidelines. PLoS Med 2010;7:e1000217.
16. Glasziou P, Meats E, Heneghan C, et al. What is missing from descriptions of treatment in trials and reviews? BMJ 2008;336:1472–4.
17. Talmon J, Ammenwerth E, Brender J, et al. STARE-HI – Statement on reporting of evaluation studies in Health Informatics. Int J Med Inform 2009;78:1–9.
18. Schulz KF, Altman DG, Moher D; CONSORT Group. CONSORT 2010 statement: updated guidelines for reporting parallel group randomized trials. Ann Intern Med 2010;152:726–32.
19. Moher D, Cook DJ, Eastwood S, et al. Improving the quality of reports of meta-analyses of randomised controlled trials: the QUOROM statement. Quality of Reporting of Meta-analyses. Lancet 1999;354:1896–900.
20. Bossuyt PM, Reitsma JB, Bruns DE, et al; Standards for Reporting of Diagnostic Accuracy. Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD Initiative. Ann Intern Med 2003;138:40–4.
21. EQUATOR Network. http://www.equator-network.org (accessed 20 Dec 2010).
22. National Library of Medicine. Research Reporting Guidelines and Initiatives. http://www.nlm.nih.gov/services/research_report_guide.html (accessed 20 Dec 2010).
23. Tierney WM, Miller ME, Overhage JM, et al. Physician inpatient order writing on microcomputer workstations. Effects on resource utilization. JAMA 1993;269:379–83.
24. Pierpont GL, Thilgen D. Effect of computerized charting on nursing activity in intensive care. Crit Care Med 1995;23:1067–73.
25. Menke JA, Broner CW, Campbell DY, et al. Computerized clinical documentation system in the pediatric intensive care unit. BMC Med Inform Decis Mak 2001;1:3.
26. Mekhjian HS, Kumar RR, Kuehn L, et al. Immediate benefits realized following implementation of physician order entry at an academic medical center. J Am Med Inform Assoc 2002;9:529–39.
27. Rotich JK, Hannan TJ, Smith FE, et al. Installing and implementing a computer-based patient record system in sub-Saharan Africa: the Mosoriot Medical Record System. J Am Med Inform Assoc 2003;10:295–303.
28. Tang Z, Mazabob J, Weavind L, et al. A time-motion study of registered nurses' workflow in intensive care unit remote monitoring. AMIA Annu Symp Proc 2006:759–63.
29. Ampt A, Westbrook JI. Measuring nurses' time in medication related tasks prior to the implementation of an electronic medication management system. Stud Health Technol Inform 2007;130:157–67.
30. Hollingworth W, Devine EB, Hansen RN, et al. The impact of e-prescribing on prescriber and staff time in ambulatory care clinics: a time motion study. J Am Med Inform Assoc 2007;14:722–30.
31. Lo HG, Newmark LP, Yoon C, et al. Electronic health records in specialty care: a time-motion study. J Am Med Inform Assoc 2007;14:609–15.
32. Asaro PV, Boxerman SB. Effects of computerized provider order entry and nursing documentation on workflow. Acad Emerg Med 2008;15:908–15.
33. Yen K, Shane EL, Pawar SS, et al. Time motion study in a pediatric emergency department before and after computer physician order entry. Ann Emerg Med 2009;53:462–8.e1.
34. Devine EB, Hollingworth W, Hansen RN, et al. Electronic prescribing at the point of care: a time-motion study in the primary care setting. Health Serv Res 2010;45:152–71.
35. Weinger MB, Herndon OW, Gaba DM. The effect of electronic record keeping and transesophageal echocardiography on task distribution, workload, and vigilance during cardiac anesthesia. Anesthesiology 1997;87:144–55; discussion 29A–30A.
36. Wong DH, Gallegos Y, Weinger MB, et al. Changes in intensive care unit nurse task activity after installation of a third-generation intensive care unit information system. Crit Care Med 2003;31:2488–94.
37. Tang Z, Weavind L, Mazabob J, et al. Workflow in intensive care unit remote monitoring: a time-and-motion study. Crit Care Med 2007;35:2057–63.
38. Keohane CA, Bane AD, Featherstone E, et al. Quantifying nursing workflow in medication administration. J Nurs Adm 2008;38:19–26.
39. Cornell P, Herrin-Griffith D, Keim C, et al. Transforming nursing workflow, part 1: the chaotic nature of nurse activities. J Nurs Adm 2010;40:366–73.
40. Cornell P, Riordan M, Herrin-Griffith D. Transforming nursing workflow, part 2: the impact of technology on nurse activities. J Nurs Adm 2010;40:432–9.
41. Quach S, Hamid JS, Pereira JA, et al; Public Health Agency of Canada/Canadian Institutes of Health Research Influenza Research Network (PCIRN) Vaccine Coverage Theme Group. Time and motion study to compare electronic and hybrid data collection systems during the pandemic (H1N1) 2009 influenza vaccination campaign. Vaccine 2011;29:1997–2003.
42. Were MC, Emenyonu N, Achieng M, et al. Evaluating a scalable model for implementing electronic health records in resource-limited settings. J Am Med Inform Assoc 2010;17:237–44.
43. Were MC, Shen C, Bwana M, et al. Creation and evaluation of EMR-based paper clinical summaries to support HIV-care in Uganda, Africa. Int J Med Inform 2010;79:90–6.
44. Vankipuram M, Kahol K, Cohen T, et al. Visualization and analysis of activities in critical care environments. AMIA Annu Symp Proc 2009;2009:662–6.
45. Westbrook JI, Ampt A. Design, application and testing of the Work Observation Method by Activity Timing (WOMBAT) to measure clinicians' patterns of work and communication. Int J Med Inform 2009;78(Suppl 1):S25–33.
46. Mache S, Kelm R, Bauer H, et al. General and visceral surgery practice in German hospitals: a real-time work analysis on surgeons' work flow. Langenbecks Arch Surg 2010;395:81–7.
47. Westbrook JI, Coiera E, Dunsmuir WT, et al. The impact of interruptions on clinical task completion. Qual Saf Health Care 2010;19:284–9.
48. Cork RD, Detmer WM, Friedman CP. Development and initial validation of an instrument to measure physicians' use of, knowledge about, and attitudes toward computers. J Am Med Inform Assoc 1998;5:164–76.
49. Mache S, Scutaru C, Vitzthum K, et al. Development and evaluation of a computer-based medical work assessment programme. J Occup Med Toxicol 2008;3:35.
50. Venkatesh V, Morris MG, Davis GB, et al. User acceptance of information technology: toward a unified view. MIS Quart 2003;27:425–78.
51. Hauptmann AG, Gao J, Yan R, et al. Automated analysis of nursing home observations. IEEE Pervas Comput 2004;3:15–21.