
HEALTH EDUCATION RESEARCH: Theory & Practice, Vol. 11, No. 4, 1996, pages 501-507
The law of maximum expected potential effect:
constraints placed on program effectiveness by
mediator relationships
William B. Hansen and Ralph B. McNeal, Jr¹

Tanglewood Research, Inc., PO Box 1772, Clemmons, NC 27012 and ¹Department of Sociology, University of Connecticut, CT 06269, USA
Abstract
The application of mediating variable analysis
can yield information about the potential effectiveness of interventions that target social
behavior. The application of widely accepted
statistical equations to the analysis of simulated
data demonstrates that the magnitude of the
relationship between mediators and behavioral
outcomes directly affects the maximum expected
potential effect size that can be achieved for any
given intervention. The use of this relationship
in planning and executing interventions is
described. Elements that the field needs to
develop before a truly prospective system for
forecasting program effectiveness can be established are outlined.
Introduction
With the advent of opportunities to intervene on
social and health problems that have emerged
during the past two decades, social and behavioral
scientists have become increasingly involved in
applied research. As would be expected, such
attention has created a body of empirical, theoretical and methodological literature that not only
guides research but also guides practice and public
policy. In the health behavior field in particular,
specific gains in each of these areas are easy to
document. The focus of applied research has been
toward four ends: understanding the epidemiologic
distribution of behavioral phenomena, understanding etiological processes that account for behaviors,
developing and evaluating social intervention programs, and developing and refining
statistical methods for analyzing complex phenomena. A sizable body of literature exists which
now documents progress in understanding each of
these ends within substance abuse (Tobler, 1986;
Hansen, 1992; Hawkins et al., 1992; Johnston
et al., 1994) and delinquency (Farrington, 1992).
Other areas are emerging.
In concept, health education program developers
utilize both epidemiology and etiology to craft
programs. In practice, program developers rely
heavily on current and prior research findings, and
on the paradigm (and its theoretical sequelae) that
guided their graduate and post-graduate training
(Kuhn, 1971). Guidance from each of these sources
has proven valuable. However, tools that guide
program development are still needed by this
emerging field. Even with extensive theory development and empirical research, the basic laws that
govern prevention program development remain
elusive.
Other fields, in contrast, have been successful
in developing laws that govern the phenomena
they study. For example, in physics there are the
three laws of thermodynamics (Zemansky, 1968)
and in economics there is the law of diminishing
returns (Blaug, 1978; Samuelson and Nordhaus,
1995). These laws express fundamental assumptions or axiomatic regularities that are consistently
supported in both experimental and real world
applications (Suppe, 1974).
Health education program development also follows an implicit logic or rules of procedure.
However, the logic has not been formalized and
program development involves at least as much
art as science. Nonetheless, implicit in these rules
of thumb is the essence of underlying laws that,
once defined, may augment the productivity of
program developers' efforts.
The primary rule of thumb has been to develop
programs that address a meaningful mediating
process. For example, health education efforts
nearly all attempt to increase beliefs about the
likelihood of experiencing negative consequences
of behavior. In this example, such beliefs are
traditionally assumed to suppress a person's willingness to engage in high-risk behavior. Such
approaches assume that these beliefs mediate or
account for whether or not subsequent behavior will
emerge. There are, of course, an ever expanding
number of mediators that researchers have considered (Hansen, 1992; Hawkins et al., 1992). For
some, mediating variables remain a matter of artfully
identifying the right content for a program to
address. However, mediating variables are increasingly being considered in their empirical and
statistical formulation. The technology for completing mediating variable analyses is rapidly developing and will undoubtedly become more common in
the near future (MacKinnon, 1994). It is specifically
from the empirical and statistical applications that
key insights about the laws that govern program
effectiveness may be derived.
We propose two laws in this paper. The first,
the law of indirect effect, has been an implicit
guiding principle inherent in current prevention
program planning. While this law has not been
previously stated as such, by consistently targeting
mediators as the essence of intervention, the field
acts as if this law were an accepted element of
research. We also propose a second law, the law of
maximum expected potential effect, which specifies
that the magnitude of change in behavioral outcome
that a program can produce is directly limited by
the strength of relationships that exist between
mediators and targeted behaviors. The existence
of this law is based on the mathematical formulae
used in estimating the strength of mediating variable relationships, not from empirical observation,
although we believe that empirical observations
will generally corroborate its existence. An understanding of this law should allow intervention
researchers a mathematical grounding in the selection of mediating processes for intervention. An
added benefit may ultimately be the ability to
predict with some accuracy the a priori maximum
potential of programs to have an effect on targeted
behavioral outcomes, although this may be beyond
the current state-of-the-science to achieve.
In our case, we use the term law to express
what we believe researchers will come to think of
as axiomatic, primarily because it is based on
statistical procedures which are currently beyond
question within the research community. The tradition of identifying a theorem as a law in the
physical sciences is based on accumulated evidence
of mathematical consistency. Being grounded in a
statistical tradition, the identification of laws in the
social sciences can never be presumed to have
such certainty. Indeed, many social scientists have
to date eschewed the term 'law' altogether and
would view the adoption of such to be beyond the
reach of the field (Marx and Hillix, 1973). We
have adopted the term 'law' to imply that statistically grounded evidence and logic to date provide a basis for labeling the two
principles we propose as laws. The law of indirect
effect and the law of maximum expected potential
effect succinctly summarize principles that we think
will provide program developers with a means of
being increasingly logical and accurate in their
thinking. Nonetheless, we acknowledge the hesitancy of the field to accept such propositions and
encourage both logical and empirical challenges
to our formulations.
The law of indirect effect
Prior work has defined a basic underlying assumption of all intervention programs. That is, interventions have their effects on behavior because they
change characteristics within individuals, within
social groups or within the social environment that
influence and account for behavior (Hawkins et al.,
1992). The focus of intervention has therefore been
on changing mediating processes (often discussed
as either risk or protective factors) as a means of
changing behavior. The assumption of indirect
effects is the basic, central and primary postulate
that has come to guide all social scientific intervention development (MacKinnon and Dwyer, 1993).
The first law of prevention program development
therefore might be dubbed the law of indirect
effect. This premise, which is basic to all program
and policy development, is that behavioral interventions only have indirect effects. This law dictates that direct effects of programs on behavior
are not possible. The expression or suppression of
a behavior is controlled by neural and situational
processes over which the interventionist has no
direct control. To achieve their effects, programs
or policies must alter processes that have the
potential to indirectly influence the behavior of
interest. Simply stated, programs do not attempt
to change behavior directly. Instead they attempt
to change the way people think about themselves,
the way they think about the behavior, the way
they perceive the social environment that influences
the behavior, the skills they bring to bear on
situations that augment risk for the occurrence of
the behavior, or the structure of the environment
in which the behavior will eventually either emerge
or be suppressed. The essence of health education
is changing predisposing and enabling factors that
lead to behavior, not the behavior itself (Green
and Kreuter, 1991).
In statistical analyses where not all effects are
accounted for, there is a tendency to credit interventions as having had direct effects when variance
is left unaccounted for after completing mediating
variable analysis. However, it is more appropriate
to assume in such cases that methods employed
have failed to measure other mediators that would
otherwise account for observed effects. There are
always more variables that can potentially be
included in surveys and other sources of data
collection than can be addressed in any one study.
It must be assumed that had appropriate mediators
been identified and measured, 'direct' effects would
not have been observed. The primary challenge
that faces the field in this situation is having
sufficient resources (skill, time and funding) to
complete the task, which is often left incomplete
because of limited resources.
Estimating effects of interventions
To gain desired effects, the program must have
large effects on the targeted mediator and the
mediator must be strongly linked with the targeted
behavior. The expected behavioral impact of a
program that operates through a mediating process
is a simple multiplicative equation. In an idealized
mathematical form, the assumed rules by which a
program affects behavior may be expressed as
equation (1):

$$ES_{pb} = \beta_{mb} \times ES_{pm} \qquad (1)$$
Equation (1) postulates that a program's effect
on changing behavior ($ES_{pb}$) is the product of the
relationship between the mediator and the behavior
($\beta_{mb}$) and the size of the effect the program has
on each mediator ($ES_{pm}$).¹ Hence, a program's
effect on changing behavior is the program's indirect effect via the mediator (MacKinnon, 1994). In
this formulation, $ES_{pb}$ and $ES_{pm}$ are conceived of
as effect size statistics.² Effect sizes are, in concept,
fully subject to the influence of programmatic
efforts. The relationship between the mediator
and behavior ($\beta_{mb}$) is expressed as a regression
coefficient and is assumed to be empirically fixed;
the magnitude of each mediator-to-behavior relationship is expected to remain unaffected by the
introduction of an intervention.
The assumption of program development is that
the effect of the program on any given mediator
($ES_{pm}$) can be altered by delivering well-crafted
interventions. Since $\beta_{mb}$ is nearly always less than
1.0, one expects a priori for observed effects on
behavior ($ES_{pb}$) to be smaller than observed program effects on mediators.
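As a purely hypothetical illustration of equation (1): a program that shifts a mediator by $ES_{pm} = 0.8$, operating through a mediator-behavior relationship of $\beta_{mb} = 0.30$, could be expected to shift behavior by only $ES_{pb} = 0.30 \times 0.8 = 0.24$, a substantially smaller effect than the program achieved on the mediator itself.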
The logic of including mediating variables is
easily understood. Programs alter mediators and
altered mediators in turn cause changes in the
development of behaviors of interest. Thus, designing interventions that are capable of producing
large effects on selected mediators becomes the
inescapable focus of prevention intervention
research. When successful, a large effect of the
program on the targeted mediator ($ES_{pm}$) is
expected to translate into a proportional effect on
behavior ($ES_{pb}$).
While this logic is compelling, equation (1) is
incomplete. The logic of program development
often fails to take into account the statistical and
empirical realities that face social science. Two
items must be added into the above equation and
must be considered in the logic that drives our
understanding of mediating variable relationships.
First, an adjustment is needed because there is
always variance which has not been accounted for
in our estimates. We need to include
some measure of confidence for this estimate of
the indirect effect. The analytic strategy for making
this adjustment has been fully developed for regression approaches and is presented in equation (2)
(Sobel, 1982):

$$se_{ES_{pm}\beta_{mb}} = \sqrt{ES_{pm}^2 \times se_{\beta_{mb}}^2 + \beta_{mb}^2 \times se_{ES_{pm}}^2} \qquad (2)$$
Tests of significance on indirect effects via
mediating variables allow the researcher to assess
the magnitude of indirect effects through the calculation of asymptotic T values. The calculation of
this T value is presented in equation (3).
$$T = \frac{ES_{pm} \times \beta_{mb}}{se_{ES_{pm}\beta_{mb}}} \qquad (3)$$
T values are not directly comparable between
studies because sample size affects the actual value
of T. However, ES statistics allow for comparisons.
ES values can be calculated from values of T using
equation (4). Without making such an adjustment,
ES statistics are often biased due to the varying
degrees of power associated with each sample or
study (Hedges, 1986).
$$ES = T \times \sqrt{\frac{1}{N_e} + \frac{1}{N_c}} \qquad (4)$$
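For illustration, equations (2)-(4) can be traced numerically. The following minimal Python sketch is ours, not part of the original analysis; the inputs (a program-on-mediator effect of 0.8, a mediator-behavior coefficient of 0.30, standard errors of 0.03 and 1000 cases per condition) are hypothetical values chosen to resemble the simulation reported below.

```python
import math

def indirect_effect_es(es_pm, beta_mb, se_es_pm, se_beta_mb, n_e, n_c):
    """Effect size of a single indirect path, per equations (2)-(4)."""
    # Equation (2): Sobel (1982) standard error of the indirect effect
    se_indirect = math.sqrt(es_pm ** 2 * se_beta_mb ** 2 +
                            beta_mb ** 2 * se_es_pm ** 2)
    # Equation (3): asymptotic T value for the indirect effect
    t = (es_pm * beta_mb) / se_indirect
    # Equation (4): convert T to an ES comparable across studies
    return t * math.sqrt(1.0 / n_e + 1.0 / n_c)

# Hypothetical inputs: ES_pm = 0.8, beta_mb = 0.30, both standard
# errors set to 0.03, 1000 cases in each condition.
print(round(indirect_effect_es(0.8, 0.30, 0.03, 0.03, 1000, 1000), 3))
# -> 0.419
```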
Finally, equation (5) appropriately depicts the
expected indirect relationship between program
and outcome via the indirect path in ES terms. In
this equation, we have substituted $\beta_{pm}$ for $ES_{pm}$ in
order to maintain an easily understood method for
calculating standard errors. This method may be
most easily used when examining empirical data.

$$ES_{pb} = \frac{\beta_{pm} \times \beta_{mb}}{\sqrt{\beta_{pm}^2 \times se_{\beta_{mb}}^2 + \beta_{mb}^2 \times se_{\beta_{pm}}^2}} \times \sqrt{\frac{1}{N_e} + \frac{1}{N_c}} \qquad (5)$$

[Figure 1 appears here: five curves plotting the expected effect size on behavior ($ES_{pb}$, y-axis) against the effect size of the program on the mediator ($ES_{pm}$, x-axis, 0.0-1.2), one curve for each of $\beta_{mb}$ = 0.05, 0.10, 0.20, 0.30 and 0.40.]
Fig. 1. Relationship between changes in the magnitude of effect
of a program on five hypothetical mediators with varying
mediator-behavior relationships and the resulting magnitude of
effect on behaviors.
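For illustration, equation (5) can be evaluated directly. The minimal Python sketch below is ours, using the parameter values assumed in Figure 1 (standard errors of 0.03, 1000 cases per condition); it makes the diminishing returns visible for a moderately strong mediator.

```python
import math

def es_pb(beta_pm, beta_mb, se_beta_pm=0.03, se_beta_mb=0.03,
          n_e=1000, n_c=1000):
    """Equation (5): expected behavioral effect size via one mediator."""
    se_indirect = math.sqrt(beta_pm ** 2 * se_beta_mb ** 2 +
                            beta_mb ** 2 * se_beta_pm ** 2)
    return (beta_pm * beta_mb / se_indirect) * math.sqrt(1 / n_e + 1 / n_c)

# For beta_mb = 0.40, gains in the program's effect on the mediator
# yield progressively smaller behavioral returns:
for beta_pm in (0.2, 0.4, 0.8, 1.2):
    print(beta_pm, round(es_pb(beta_pm, 0.40), 3))
# -> 0.267, 0.422, 0.533, 0.565 -- approaching the asymptote near 0.6
```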
Implications of mediating variable
analysis
These analytic techniques have traditionally been
applied to understanding post hoc how programs
have achieved their effects. However, the implications of these methods have broader application.
Given the fact that the values in equation (5) can
be manipulated, the equation can be utilized to
gain an understanding of the potential behavioral
effects of programs when they have varying
degrees of success in altering mediators. For
example, Figure 1 presents the effect sizes on
behavioral outcomes ($ES_{pb}$) of a hypothetical program that selected five potential mediators to
affect. The relationship between each mediator and
behavior is fixed at either $\beta_{mb}$ = 0.05, $\beta_{mb}$ = 0.10,
$\beta_{mb}$ = 0.20, $\beta_{mb}$ = 0.30 or $\beta_{mb}$ = 0.40. The sample
size is assumed to be large (N = 1000 in each
condition).³ Furthermore, standard errors of each
$\beta_{mb}$ and $ES_{pm}$ are assumed to be equal and have
been arbitrarily set at 0.03, a near-average standard
error based on recent empirical findings (Hansen
and McNeal, 1996). The effect size of the program
on each mediating variable, $ES_{pm}$, varies between
0.0 and 1.2.
What becomes obvious from this exercise is that
for each mediating variable, the magnitude of the
expected effect size of the program on behavior
increases with an increase in effect of the program
on the mediator. This is what would be hypothesized given the current practice of targeting potential
mediating variables to alter behavior. However,
what is not often understood is that for each
variable, there is an asymptote. For a mediating
variable which has only a weak effect on the
behavioral outcome, $\beta_{mb}$ = 0.05, the maximum
expected behavioral effect of the program is 0.055.
If, on the other hand, there is a moderately strong
relationship between the mediator and behavior,
$\beta_{mb}$ = 0.40, the maximum expected behavioral
effect size nears 0.600. In either case, increases in
program effect on any mediator are met with
diminishing returns regarding the magnitude of the
expected effect on the behavior.
In practical terms, each variable has a maximum
expected potential effect. The limit of this maximum expected potential effect is directly related
to the magnitude of the relationship between the
mediator and behavior ($\beta_{mb}$) and the magnitude of
the standard errors (which have been fixed for our
purposes). This asymptotic property acts as a
constraint on the maximum potential effect of any
given program on the behavioral outcome. In other
words, for any given circumstance in which an
intervention attempts to effect a change in behavior
by altering a mediating process (which would be
based on the law of indirect effect), there will be
a limit beyond which changes in the mediator
will no longer result in changes in behavior. For
example, we have recently analysed the mediating
processes which account for how the DARE program achieves its observed effects (Hansen and
McNeal, 1996). Such analyses demonstrate the
practical limits that are placed on programs that do
not change mediators that have a large statistically
defined potential to create behavioral effects.
Implications for program development
Such an application improves our understanding
of the true potential magnitude of program effects
that can be expected given the mediators that are
selected for intervention. Once $\beta_{mb}$ and an estimate
of the standard error of $\beta_{mb}$ are known, it should
theoretically be possible to calculate the maximum
expected potential behavioral effect of any given
intervention. Designing programs then becomes a
matter first of identifying mediating processes that
have the potential, based on the understanding of
the field about likely values of $\beta_{mb}$, to account for
the behavior of interest. The second step is then
to devise an intervention capable of creating a
positive change in these mediating processes.
Interventions that target mediators with high $\beta_{mb}$
coefficients have the inherent potential to result in
meaningful behavioral effects. However, if the
mediator is associated with a small or weak $\beta_{mb}$,
it may be impossible for interventions that target
such mediators to yield desired behavioral effects.
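For illustration, the ceiling implied by equation (5) can be written explicitly. Holding the standard errors and sample sizes fixed while the program's effect on the mediator grows without bound gives the limiting value

$$\lim_{\beta_{pm} \to \infty} ES_{pb} = \frac{\beta_{mb}}{se_{\beta_{mb}}} \times \sqrt{\frac{1}{N_e} + \frac{1}{N_c}}$$

With $\beta_{mb}$ = 0.40, a standard error of 0.03 and 1000 cases per condition, this bound is approximately 0.60, consistent with the asymptote described above.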
There is no guarantee that simply targeting
appropriately strong mediators will result in success. The intervention may fail to achieve results
because its effect on the mediator may either be
insufficient or may unintentionally be in the wrong
direction (Hansen and McNeal, 1996). Indeed, it
is possible that programs intending to affect one
set of mediators may have numerous unintended
effects. This suggests that each intervention should
assess program effects on target mediators as well
as other mediators that have a statistically defined
potential to affect behavioral outcomes.
Directions for future research
In empirical studies, $ES_{pm}$ and $\beta_{mb}$ and corresponding error terms are typically estimated for each
potential mediating variable. Etiologists have provided an extensive literature from which $\beta_{mb}$ and the
standard error of $\beta_{mb}$ might eventually be calculated
for any given mediator. This will perhaps be a primary role for future meta-analyses: determining the
best quality estimates of the betas and their corresponding standard errors. Program developers could
then target mediators that have high potential payoff
and can trace the effect size of interventions on
mediators. If mediators are measured and $ES_{pm}$ and
the corresponding error term calculated, it should
then be possible, without waiting for longitudinal
results, to anticipate the potential effect of a program
by summing the expected indirect path effect sizes
across all potential mediators.
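For illustration, a minimal Python sketch of this forecasting step follows. The mediator names and all numerical values are hypothetical, and each path is treated as independent, a simplification taken up below.

```python
import math

def indirect_es(es_pm, beta_mb, se_es_pm, se_beta_mb, n_e, n_c):
    """Equations (2)-(4): effect size of a single indirect path."""
    se = math.sqrt(es_pm ** 2 * se_beta_mb ** 2 +
                   beta_mb ** 2 * se_es_pm ** 2)
    return (es_pm * beta_mb / se) * math.sqrt(1 / n_e + 1 / n_c)

# Hypothetical interim findings: (mediator, ES_pm, beta_mb, se_ES_pm,
# se_beta_mb), with the betas drawn from a synthesized etiological
# literature rather than the evaluation itself.
mediators = [
    ("normative beliefs", 0.45, 0.30, 0.03, 0.03),
    ("perceived harm",    0.30, 0.10, 0.03, 0.03),
    ("refusal skills",    0.60, 0.05, 0.03, 0.03),
]

# Forecast the program's behavioral effect by summing the expected
# indirect path effect sizes across all measured mediators.
forecast = sum(indirect_es(es_pm, beta, se1, se2, 1000, 1000)
               for _, es_pm, beta, se1, se2 in mediators)
print(round(forecast, 3))  # -> ~0.588 under these assumed values
```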
Three problems remain to be solved. The first is
developing a body of synthesized findings that have
realistic estimates of $\beta_{mb}$ and standard error terms for
mediating variables of interest to interventionists.
Estimates of $\beta_{mb}$ are needed that control for as many extraneous
variables and other mediators as possible, since a
conservative approach to estimation is required. R²
values should be as high as possible and should
include variables that significantly contribute to predicting behavior as well as those that do not.
The second problem the field must address is the
empirical validity of this method of anticipating
behavior effects from programmatic effects on
mediators. Interim findings, in which the effect size
of the program on the mediator is used to extrapolate
the behavioral effect size, should be compared with
observed behavioral effects in
longitudinal studies. Because $ES_{pb}$ is an estimate,
error variance in outcomes is expected and multiple
studies may be required to sufficiently test the
method.
The third problem facing the field is a mathematical one. Linear structural equation methods have
been well developed that allow multiple mediators
to be included in estimating complex models (Joreskog, 1979). However, the tradition of mediating
variable analysis has been to treat mediational paths
as independent estimates (MacKinnon and Dwyer,
1993). A method for accounting for possible shared
variance between paths has not been fully
developed. It is possible and likely that mediators are
not independent. It is also possible that interventions
may have collateral effects, altering mediators that
were not targeted. Finally, it is not necessarily true
that mediators are linear in their nature; their effects
may not be simply partialed and summed across multiple
paths. Additional work that will further the ability
of researchers to apply these methods is needed.
Conclusion
Despite the challenges that face the field, which need
resolution before the approach is fully functional, the law of
maximum expected potential effect must be considered to be a basic law that constrains the potential
for social interventions to achieve effects. The maximum expected effect of any given intervention is
constrained by the magnitude of the relationship
between the mediating variable and the outcome.
Furthermore, the constraint is asymptotic: increasingly powerful modifications of the mediating variable will have diminishing behavioral returns.
The law has immediate applicability. This is true
even without a full resolution of the issues about
$\beta_{mb}$ estimates, empirical validation of a prospective
method of projection and the development of
adequate statistical methods for addressing the problem of shared program effects on mediating processes. Nearly all program developers have access
to information about the relationship between a set
of mediators and behaviors. These estimates can
either be obtained from existing data sets or published reports. Such data are immediately useful
for selecting mediating processes that are likely to
maximize potential effects. Even without the ability
to add precision to estimates, the field can develop
the capacity to distinguish programmatic strategies that
have the potential for success from those that do not.
Acknowledgements
This study was supported by a grant from the
National Institute on Drug Abuse, grant no. 1 R01
DA07030.
Notes
1. The subscripts for the following notation are as follows: 'p'
indicates the program or intervention, 'b' indicates the
behavioral outcome and 'm' indicates the mediating variable.
2. With the advent of meta-analytic methods (Glass et al., 1981;
Cooper and Hedges, 1994), a new statistical approach, effect
size (ES), has become increasingly important. Effect size
statistics were developed to provide a means of comparing
results across studies and across different statistical methods.
They provide the standardized differences between experimental and control group means (Hedges and Olkin, 1985).
The most basic formula for calculating ES is a simple
calculation for the difference in means between two independent groups, presented in the equation below:

$$ES = \frac{\bar{X}_e - \bar{X}_c}{SD}$$

where $\bar{X}_e$ and $\bar{X}_c$ are the experimental and control group means and SD is the (typically pooled) standard deviation.
The ES statistic has two important benefits, specifically
addressing the weaknesses noted above. First, it is a standardized statistic that allows comparisons across studies. Second,
it is a measure that attempts to quantify the relative magnitude
of any given effect. In essence, the ES statistic measures the
impact an intervention has on a behavioral outcome in the
scale of standard deviation units, independent of the particular
test statistic utilized (Hedges and Olkin, 1985; Tobler, 1994).
Effect sizes yield either positive or negative values, with
zero indicating a program has no effect on the behavioral
outcome. Theoretically, effect sizes range from negative to
positive infinity, though they rarely exceed 1.0 in practice
(Cohen, 1977). In fact, in many fields, an ES of 0.3 on a
behavioral outcome is considered remarkably strong (Tobler,
1986). This failure to achieve large effect sizes might be
construed to be the first form of general empirical evidence
that limits in the magnitude of effect exist.
3. The assumption of a large sample size is for didactic purposes.
The resulting effect sizes yielded from equation (5) are
technically biased. However, by assuming a large sample
size, the differences between the biased and unbiased ES
estimates are minimal. For example, to make the ES estimates
unbiased, one must multiply the resulting ES by

$$1 - \frac{3}{4N_e + 4N_c - 9}$$

as per Tobler (1994). In a large sample (1000 cases in each
condition), the adjustment factor is 0.9997. If there is a much
smaller number of cases in each condition (50 observations
per condition), the resulting adjustment factor would be 0.992. However, this assumption can be relaxed by correcting the ES
and does not alter the underlying logic or empirical properties
of the proposed method.
References
Blaug, M. (1978) Economic Theory in Retrospect, 3rd edn.
Cambridge University Press, Cambridge.
Cohen, J. (1977) Statistical Power Analysis for the Behavioral
Sciences. Academic Press, New York.
Cooper, H. and Hedges, L. V. (1994) The Handbook of Research
Synthesis. Sage, Beverly Hills, CA.
Farrington, D. P. (1992) Explaining the beginning, progress,
and ending of antisocial behavior from birth to adulthood.
In McCord, J. (ed.), Advances in Criminological Theory, Vol.
3: Facts, Frameworks, and Forecasts. Transaction Publishers,
New Brunswick, NJ.
Glass, G. V., McGaw, B. and Smith, M. L. (1981) Meta-analysis in Social Research. Sage, Beverly Hills, CA.
Green, L. W. and Kreuter, M. W. (1991) Health Promotion
Planning: An Educational and Environmental Approach, 2nd
edn. Mayfield, Toronto.
Hansen, W. B. (1992) School based substance abuse prevention:
a review of the state-of-the-art in curriculum, 1980-1990.
Health Education Research, 7, 403-430.
Hansen, W. B. and McNeal, R. B. (1996) How D.A.R.E. works:
an examination of program effects on mediating variables.
Health Education Quarterly, in press.
Hawkins, J. D., Catalano, R. F. and Miller, J. Y. (1992) Risk
and protective factors for alcohol and other drug problems in
adolescence and early adulthood: implications for substance
abuse prevention. Psychological Bulletin, 112, 64-105.
Hedges, L. V. (1986) Issues in meta-analysis. Review of
Research in Education, 13, 353-398.
Hedges, L. V. and Olkin, I. (1985) Statistical Methods for
Meta-Analysis. Academic Press, New York.
Johnston, L., O'Malley, P. and Bachman, J. (1994) National
Survey Results on Drug Use from the Monitoring the Future
Study, 1975-1993. Vol. I: Secondary School Students. DHHS
publ. no. (NIH) 94-0000. National Institute on Drug Abuse,
Rockville, MD.
Joreskog, K. G. (1979) Statistical models and methods for
analysis of longitudinal data. In Magidson, J. (ed.), Advances
in Factor Analysis and Structural Equation Models. Abt
Books, Cambridge, MA, pp. 129-169.
Kuhn, T. S. (1971) The Structure of Scientific Revolutions, 2nd
edn. The University of Chicago Press, Chicago, IL.
MacKinnon, D. P. (1994) Analysis of mediating variables in
prevention and intervention research. In Cázares, A. and
Beatty, L. A. (eds), Scientific Methods for Prevention
Intervention Research. NIDA Research Monograph no. 139,
127-154. NIH, Rockville, MD.
MacKinnon, D. P. and Dwyer, J. H. (1993) Estimating mediated
effects in prevention studies. Evaluation Review, 17, 144-158.
Marx, M. H. and Hillix, W. A. (1973) Systems and Theories
in Psychology, 2nd edn. McGraw-Hill, New York.
Samuelson, P. and Nordhaus, W. D. (1995) Macroeconomics.
15th edn. McGraw-Hill, New York.
Sobel, M. (1982) Asymptotic confidence intervals for indirect
effects in structural equation models. In Leinhardt, S. (ed.),
Sociological Methodology, 1982. American Sociological
Association, Washington, DC, pp. 290-293.
Suppe, F. (1974) The Structure of Scientific Theories. University
of Illinois Press, Champaign-Urbana, IL.
Tobler, N. S. (1986) Meta-analysis of 143 adolescent drug
prevention programs: quantitative outcome results of program
participants compared to a control or comparison group.
Journal of Drug Issues, 16, 537-567.
Tobler, N. S. (1994) Meta-analysis of adolescent drug prevention
programs. Doctoral Dissertation. State University of New
York at Albany, June, 1994. Dissertation Abstracts
International, 55 (11A), UMI Order Number 9509310.
Zemansky, M. W. (1968) Heat and Thermodynamics. McGraw-Hill, New York.
Received on December 1, 1994; accepted on January 20, 1996