Political Ideology and the Implementation of Executive-Branch Reforms: The Contingent Impact of PART on Performance Management in Federal Agencies Stéphane Lavertu Assistant Professor John Glenn School of Public Affairs, The Ohio State University Donald Moynihan Professor of Public Affairs La Follette School of Public Affairs University of Wisconsin – Madison [email protected] May 23, 2011 1 Abstract A central purpose of performance management reforms such as the Bush administration’s Program Assessment Rating Tool (PART) is to promote the use of performance information in federal agencies. But as reforms become identified with a partisan agenda, the scope of their influence may decline. Using data from a survey of agency managers, we analyze the extent to which agencies’ ideological predispositions mediated the impact of PART reviews on managers’ use of performance information. The results indicate that the overall positive impact of managers’ involvement with PART reviews on information use may have been completely contingent on an agency being associated with a moderate or conservative ideology. Breaking down the analysis by management activity reveals that this ideological effect obtains for management activities that were more difficult for the Bush administration to observe, which is consistent with principal-agent theory. These and other results suggest that PART reviews were an ineffective mechanism for promoting performance management in liberal agencies when information asymmetries provided managers with discretion over information use. 2 Introduction Political executives—such as presidents, governors, and mayors—often come to office with the intent to reform the administrative agencies they oversee. Their reforms typically require some cooperation and effort from agency personnel. If executives have difficulty observing agency personnel’s actions and capacities, reform outcomes, or the link between personnel’s actions and reform outcomes, executives may face a principal-agent problem. When such information asymmetries exist, employees whose policy preferences differ from those of the executive may fail to devote necessary effort or may even sabotage the executive’s policy initiatives. In this paper, we focus on the pursuit of a management reform in federal agencies in the face of such information asymmetries, and we examine whether divergence in the ideological preferences of a presidential administration and federal agencies has an impact on that reform’s success. The 2001 President’s Management Agenda represented the Bush administration’s policy agenda for reforming management in federal agencies. It involved five management priorities, one of which was the integration of performance assessment and budgeting. The Office of Management and Budget (OMB) created and employed the Program Assessment Rating Tool (PART), a battery of diagnostic questions, to assist in pursuing that policy goal. The OMB used the tool to systematically evaluate the effectiveness of nearly all federal programs for the purpose of informing its budget formulation process and to improve and promote performance management in federal agencies. Indeed, OMB recommendations to agencies through PART reviews typically focused on promoting or altering performance measurement in federal agencies (GAO 2005). The administration’s success in promoting performance management within 3 agencies depended on the extent to which agency managers used performance information in their decision-making. 
We propose that the extent to which managers’ involvement with PART reviews promoted performance information use depended on the match in ideological orientation between the Bush administration and federal agencies, as well as the information asymmetry between the administration and agency managers. Managers in agencies pursuing conservative policies should have had relatively little concern that PART reviews and recommendations undermined their programmatic priorities, perhaps by altering program goals or justifying cuts in their budgets. Managers working in agencies pursuing liberal missions, however, were relatively more likely to possess such concerns. Moreover, while the collection of performance information is observable by political principals to some extent, the use of performance information is an action that is more difficult to observe, and the ability to observe it varies across management activities. For example, oversight of performance information use for the purpose of modifying performance measures is relatively observable by political appointees and OMB PART reviewers, as opposed to the use of performance information for the purpose of solving problems. Thus, we propose that managers in relatively liberal agencies were less likely than those in conservative agencies to use performance information if they were involved in PART reviews, particularly for management activities that were more difficult for the administration to monitor. We explore these propositions empirically using respondent-level data from a 2007 Governmental Accountability Office (GAO) survey of mid- and upper-level agency managers. The GAO oversampled managers in some agencies, which enables us to examine the impact of managers’ involvement with PART reviews on their use of performance information across 29 4 agencies and a number of management activities. The survey included a variety of questions that asked managers if they used performance data for a range of purposes. Thus, we are able to examine whether managers’ performance management practices are related to the ideological leanings of the agencies in which they work. The results indicate that the overall positive impact of managers’ involvement with PART reviews on information use may have been completely contingent on an agency being associated with a moderate or conservative ideology. Breaking down the analysis by different managerial uses of performance data reveals that this ideological effect obtains for management activities that were more difficult for the administration to observe, which is consistent with principal-agent theory. Additionally, the analysis indicates that managers in liberal agencies who were involved with PART reviews agreed to a greater extent than those not involved that performance measurement problems impeded the collection and use of performance information, whereas there generally were no such differences in moderate and conservative agencies. These and other results suggest that PART reviews were an ineffective mechanism for promoting performance management in liberal agencies when information asymmetries provided managers with discretion over information use. There is growing agreement among both academics (Moynihan and Pandey 2010; Van de Walle and Van Dooren 2008) and practitioners (GAO 2008; OMB 2010 & 2011) that managerial performance information use is the key goal of performance management systems, but we know little about how political ideology influences use. 
This is an important issue because performance management initiatives often come from political executives and therefore may be, or may be viewed as, politically motivated. The OMB devoted a great deal of effort to creating what it presented as a nonpartisan, objective, and rigorous management tool (Dull 2006). Yet, there is some evidence that ideological or policy preferences affected PART scores and that these scores 5 were more likely to be used to set the budgets of relatively liberal agencies and programs (Gilmour and Lewis 2006a, 2006b, & 2006c). This paper is the first to provide systematic evidence that ideological factors also were associated with the Bush administration’s success in promoting performance information use via PART reviews. The paper proceeds as follows. First, we provide background on the politics of presidential control and the politics of PART. Second, we describe the conditions under which agency managers’ involvement with PART reviews should lead to performance information use. Third, we describe the data and empirical methods. Fourth, we describe and discuss the results. Finally, we offer some concluding thoughts about the implications of this study’s findings. The Politics of Presidential Control The politics of public management enjoys perennial scrutiny now that scholars have reevaluated the politics-administration dichotomy (e.g., Appleby 1949; Lynn, Heinrich and Hill 2000) and emphasize the role of politics in the implementation of policy (e.g., Pressman and Wildavsky 1973) and the design of the federal executive branch (e.g., Gormley 1989; McNollGast 1987; Moe 1989). Much of the empirical research focuses on how legislatures, particularly Congress (e.g., Epstein & O’Halloran 1999; Lewis 2003), and political executives, particularly presidents (e.g., Lewis 2008; Rudalevige 2002), attempt to control executive-branch policymaking. A prominent narrative in the academic literature is that, as their political incentives have changed and the scope of the federal bureaucracy has expanded, presidents increasingly have sought to control bureaucratic behavior (Moe 1985). Modern accounts often begin with Nixon’s “politicization” of federal agencies through appointments and his impoundment of agency budgets, or Reagan’s imposition of various decision-making procedures 6 intended to limit agencies’ policymaking discretion and the promulgation of federal regulations altogether. David Lewis’s (2008) research, for example, shows that presidents of both parties have employed their power of appointment to influence the policymaking of agencies whose programs are tied to ideological constituencies that differ from their own, and that this politicization had a negative impact on the performance of public programs. The need for and effectiveness of presidential efforts to control bureaucratic behavior is often assumed. That presidential control attempts take place suggests that there is indeed some value in them. Importantly, however, research increasingly focuses on what motivates bureaucrats, sometimes examining whether principal-agent models imported from economics appropriately characterize the determinants of bureaucratic behavior (e.g., Brehm and Gates 1997; Golden 2000). Some of this research indicates that bureaucrats infrequently “shirk” with low levels of effort and seldom seek to “sabotage” programs with which they disagree, but both popular and academic accounts frequently suggest otherwise. 
Rosemary O’Leary (2006), for example, describes what she refers to as “guerrilla government,” in which career officials actively undermine the efforts of political appointees when they believe a president’s policies are too extreme. Although not necessarily so explicit, bureaucratic resistance to presidential initiatives aimed at altering public management practices should be expected. Civil servants are acutely aware of the implications of such presidential actions for their programs. Reagan’s management policies, for example, were perceived as an attempt to centralize authority and undermine liberal programs (Durant 1987; Tomkin 1998). Even reforms that are less overtly political are susceptible to bureaucratic indifference or even hostility if agencies are antagonistic to the president sponsoring them. A case in point is the negative reception for President Clinton’s Reinventing Government reforms in the Department of Defense (Durant 2008). 7 The Politics of the Program Assessment Rating Tool Efforts to make public management more goal- and results-oriented, known as “performance management” reforms, have long enjoyed bipartisan support. Depending on how one defines performance management, one might identify a number of starting points: the recommendations of the Hoover Commission, which President Truman established; the Planning, Programming, and Budgeting System implemented in the Department of Defense under President Kennedy; the expansion of Management by Objectives under President Nixon; or the promotion of pay-for-performance by President Carter and the first President Bush. But the point of origin for the modern era of federal performance management is probably the Government Performance and Results Act (GPRA) of 1993 (Radin 2000). The act requires agencies to set long-term strategic goals and short-term annual goals, measure performance toward achieving those goals, and report on their progress via performance plans and reports to Congress and the Office of Management and Budget (OMB). GPRA enjoyed broad bipartisan support when it passed in 1993 and when it was amended in 2010. At various times since its enactment members of both parties have sought to use for their political purposes the performance information that GPRA generated. For example, Majority Leader Dick Armey employed GPRA results to publicize the poor performance of federal agencies (Dull 2006). But the Clinton administration also valued GPRA. For example, Vice President Gore launched an initiative that utilized GPRA goals in an attempt to illustrate the performance improvement that took place during the Clinton era (Moynihan 2003). GPRA is generally credited as having helped foster results-oriented management in federal agencies—or, at least, as having helped set a foundation for the use of performance information in management activities (GAO 2004). 8 The Bush administration’s President’s Management Agenda built upon the bipartisan statutory framework for performance management that GPRA established (OMB 2001). By the time President Bush arrived in office, GPRA was not extensively used by either party and was seen by the Bush administration as a helpful but under-exploited tool for performance management (Dull 2006). Among other things, the President’s Management Agenda called for the integration of performance assessment and budgeting. 
As a result, the OMB created and employed PART to systematically evaluate nearly all federal programs for the purpose of informing its budget formulation process and for promoting program performance. Specifically, it used the PART to grade federal programs on an ineffective-to-effective scale according to four different criteria (program purpose and design, strategic planning, program management, and program results/accountability) and weighted those scores to assign programs an overall score. Evaluating programs using the PART was a labor-intensive process conducted by OMB budget examiners in cooperation with program managers. PART reviews were conducted in waves from 2003 through 2008 until nearly all programs were evaluated. Some of the Bush administration’s management practices were criticized as partisan and damaging to neutral competence (Pfiffner 2007); but the PART remained largely above such criticism, sometimes characterized as an inoffensive formal management agenda at odds with much of actual management practice (Moynihan and Roberts 2010). The PART was helped in this regard by the general perception that performance management is not an overtly political tool (Radin 2006). Perhaps more than any other scholar, Beryl Radin has sought to unmask the implicit normative assumptions of performance management, including the PART. While she argues that the PART clearly is not a value-free technical tool, and operates as part of a political process, she does not argue that it is a partisan tool (Radin 2000 & 2006). 9 The Bush administration took great care to establish the PART’s credibility as a management tool (Dull 2006). It was pilot-tested and revised based on extensive feedback from a wide range of experts and stakeholders. A special team within OMB was created to make early versions of the PART more systematic. An outside advisory council of public management experts and a workshop from the National Academy of Public Administration were consulted. PART questions were dropped if they were perceived as lending themselves to an ideological interpretation. The OMB-trained budget examiners created a 92-page training manual and established a team to cross-check responses for consistency, all in the name of reducing subjectivity. The OMB also made all of PART assessments public and available on the internet in order to check against examiner biases—a practice that demonstrates the confidence that the OMB had in the tool and the judgments it elicited. Mitch Daniels, the OMB director who created the PART, pushed staff to develop a non-partisan instrument (Moynihan 2008), and public presentations of the PART by OMB officials to stakeholders and agency personnel promoted it as a non-partisan tool. Why did the Bush administration devote such effort to developing a non-partisan tool and promoting it as such? It is because it wanted PART reviews to inform OMB, congressional, and agency decision-making (Dull 2006). PART reviews could serve the administration’s policy priorities and enable it to enhance its control of the budget formulation process, an important aspect of federal policymaking. Poor PART reviews, for example, could be used as justification for cutting programs. But the tool’s credibility had to be established if PART reviews were to affect the decision-making of legislators and agency managers. Dull (2006), for example, notes that management reform efforts seen as curbing neutral competence become tarnished and fail to gain the necessary credibility. 
He also states that “while recent presidents have often bypassed 10 OMB, believing the organization to be unresponsive to the president’s political needs, the Bush administration’s PART seeks to discipline and employ OMB’s policy competence to advance the administration’s political agenda” (Dull 2006, 207). Therefore, if seen as a credible tool for promoting performance management and enhancing the performance of federal programs, PART would also further the administration’s goals in budgeting, policymaking, and implementation. Whatever the intent of the Bush administration, many actors outside of the White House were skeptical or questioned the usefulness of PART reviews. There is evidence that PART scores influenced executive branch budget formulation (Gilmour and Lewis 2006a) but that they did not significantly influence congressional budgetary decisions (Heinrich forthcoming; Frisco and Stalebrink 2008). Few congressional staff used PART information (GAO 2008, 19) and congressional Democrats considered PART a partisan tool. Efforts to institutionalize PART review via statute failed, reflecting partisan disagreement about its purpose and merit (Moynihan 2008). Indeed, the Obama administration characterized PART as an ideological tool and decided against continuing its implementation. PART and Performance Management: The Perspective of Agency Managers Agency managers had a number of reasons to view PART review as an important process and, therefore, to take steps to improve performance management of the programs with which they are involved. PART reviews were a presidential priority, and PART scores had an impact on OMB’s budgetary decisions (Gilmour and Lewis 2006a & 2006b). The OMB also implemented mechanisms so that PART reviews would inform program management. For example, each PART assessment generated a series of recommendations for agency managers 11 that OMB officials could later review. The GAO concluded that “agencies have clear incentives to take the PART seriously” (GAO 2005b, 16). That many agency managers were directly involved in the PART review process also might have attenuated the type of suspicions that many members of Congress and congressional staff, who were likely less familiar with the PART review process, are thought to have held. 1 PART reviews were a product of a consultation between career counterparts at the OMB and the agency. When agency and OMB officials disagreed, it might have been chalked up to professional or interpersonal, rather than political, disagreement (Moynihan 2008). There were also reasons for agency officials to be wary of PART. Any government-wide reform will encounter claims that it lacks nuance and fails to appreciate the particular characteristics of a specific program (Radin 2006). PART, which was essentially a standardized questionnaire, was no exception. There is also the related issue of whether agency officials who enjoy an information advantage over OMB officials would accept PART evaluations as valid. The GAO (2008) asserted that agency managers’ lack of confidence in the credibility and usefulness of OMB’s assessments, primarily due to a lack of programmatic expertise by PART reviewers, was a key impediment to OMB’s leadership on performance management issues. There are at least three reasons why ideology might have had an impact on the extent to which involvement with PART reviews influenced managers’ use of performance information. 
First, ideological disagreement between the administration and agency managers could capture policy disagreement about appropriate management processes. For example, ideologically conservative agency managers may be more likely to agree with the notion of performance management and oversight via PART-like tools. (Although the general bipartisan support for performance management suggests that this may not be too significant a factor.) Second, managers who share a president's ideology may be more receptive to presidential initiatives. Third, ideology roughly captures substantive policy preferences. Agency managers with relatively liberal policy preferences, or those who manage programs traditionally supported by liberal political constituencies, may resist attempts by the administration to alter programs in substantively significant ways—for example, through alterations in program goals and the use of performance measures that promote these altered goals.

[Footnote 1: Indeed, in an analysis of a GAO survey item that asks respondents to what extent PART needs to be changed, those involved with PART reviews expressed the need for less significant changes, regardless of the agency in which they worked.]

In addition to managers' prior beliefs regarding the Bush administration's policy preferences, PART reviews themselves may have been interpreted as signaling the administration's policy priorities. Negative PART reviews, for example, might have signaled to managers that the administration was hostile to their programs. As we mention above, John B. Gilmour and David E. Lewis provide empirical evidence that political ideology played a role in the PART review process. Programs established under Democratic presidents received systematically lower PART scores than those created under Republican presidents (Gilmour and Lewis 2006c). Additionally, programs in traditionally Democratic agencies were the only ones whose PART scores corresponded to OMB budgetary decisions, suggesting that programs more consistent with Republican ideology were insulated from PART scores during OMB's budget formulation (Gilmour and Lewis 2006b). Managers in liberal agencies had good reason to believe that PART reviews were not benefiting their programs, and perceived or real policy differences surely influenced managerial receptivity to changes in performance management promoted via the PART review process.

There is substantial agreement that a primary goal of PART reviews was to make federal managers more results-oriented (GAO 2005; Dull 2006; Moynihan 2008). And managers had significant incentives to take PART reviews seriously (Gilmour 2006). On the other hand, there is evidence that political considerations may have had an impact on PART scores and OMB's use of these scores in formulating the budget. Program managers in traditionally liberal agencies had reasons to perceive the review process as biased and invalid. The experience with PART gives rise to the following general proposition, which we elaborate in greater detail below.

General Proposition: PART reviews are less likely to promote the use of performance information among managers whose policy preferences differ from those of the President.
PART’s Impact on Performance Information Use: A Principal-Agent Theory In the parlance of principal-agent theory, when agency managers possess an informational advantage over a presidential administration regarding the extent to which they engage in performance management practices, there exists a moral hazard if agency managers’ goals differ from those of the administration. (See Dixit [2002] for a survey of this theoretical literature, and Heinrich and Marschke [2010] for a review of applications to performance management.) In other words, from the perspective of administration officials, all managers conceivably could devote less-than-optimal effort to performance management, and managers whose policy preferences conflict with those of an administration have incentives to pursue policies they prefer. PART reviews were, to a significant extent, the Bush administration’s costly attempt to reduce this information asymmetry and promote performance management by scrutinizing management practices. Thus, with regard to performance management activities on which PART reviews could shed light, such as modifying performance goals and measures, involvement with PART reviews might have promoted the use of performance information. 14 PART reviews may have created incentives for agency managers to focus on performance management if they did not already, but policy differences between the Bush administration and agency managers stood to influence the extent to which PART reviews promoted or hindered performance information use among agency managers. What the simple principal-agent perspective we offer above neglects is that the PART review process had an influence on the substance of program planning and performance measurement. PART reviews may have made, or may have been perceived as making, performance measures more or less useful to agency managers. As we mention in the previous section, managers who disagreed with the administration regarding programmatic goals or who discounted administration priorities (perhaps due to ideological differences) may have been less likely to perceive PARTinfluenced performance measures to be valid. Specifically, we offer the following hypothesis: H1: Managers who reported involvement with PART reviews perceived that performance measurement issues were impediments to performance information use to a greater extent than managers who did not report involvement with PART reviews if and only if their policy preferences differed from those of the administration. In turn, agency managers who disagreed with the performance goals and measures shaped by or provided through the PART review process should have been less likely to report using performance information, provided that information asymmetries permitted this “shirking.” Indeed, if this is the case, PART review could potentially have led to declines in the use of performance information. Specifically, we offer the following hypothesis: 15 H2: Managers who reported involvement with PART reviews used performance information to a lesser extent than managers who did not report involvement with PART reviews if and only if their policy preferences differed from those of the administration and management activities were sufficiently difficult for the administration to monitor. Empirical Approach Determining the extent to which policy preferences or political ideology affected PART reviews’ impact on performance information use requires measures of information use, PART implementation, and policy preferences. 
We employ data from a survey of agency managers to create measures of information use and PART implementation. Specifically, to create these measures we use survey items that ask agency managers to identify levels of information use, hindrances to information use, and involvement with PART reviews. To approximate differences in policy preferences or ideology, we employ a measure that categorizes agencies according to their ideological proclivities—liberal, moderate, or conservative. Thus, the results we present below are from models that estimate the relationship between managers’ reported involvement with PART reviews, the ideological tradition or orientation of the agency in which managers work, and managers’ reported information use and perceptions regarding the impact of performance measurement problems on information use. Additionally, to test the robustness of our findings, we employ control variables based on a number of items that ask managers about other factors thought to influence information use. 16 Data The Government Accountability Office administered surveys in 1996, 2000, 2003, and 2007 to collect data on the implementation of performance management reforms in federal agencies. They administered the surveys to a random, nationwide sample of mid- and upperlevel federal employees in the agencies covered by the Chief Financial Officers Act of 1990, and, in 2007, they over-sampled managers from certain agencies to facilitate comparisons across 29 different agencies. The timing of and items in the 2007 survey also permit an assessment of PART’s impact on performance management. Thus, our analysis employs the 2007 survey data. Tables 1 and 2 summarize the variables we employ (including descriptive statistics), so we do not describe most of them here. However, the manner in which we categorize measures of performance information use in each table of results, and the key measures indicating involvement with PART reviews, warrant further discussion. Aggregating all measures into a single index of use (as the GAO tends to do) is justifiable based on strong values of Cronbach’s alpha. But common factor analysis indicates that it is appropriate to categorize measures of use in terms of subcategories of the broader index: program planning, problem solving, performance measurement, and employee management. The two measures we use to create indexes for each type of activity (see the bottom of Table 1) were those that most clearly loaded on the four underlying factors. That these categories capture theoretically distinct activities lends them some additional validity. [Insert Table 1 and Table 2 about here.] The indicator PART is based on an item inquiring about the extent to which respondents report being “involved” in PART reviews. PART is coded 1 if respondents report being involved 17 “to a small extent” or more, and zero otherwise. 2 The item used to create this measure asks about “any involvement in preparing for, participating in, or responding to the results of any PART assessment.” The structure of the variable is intended to reflect that the process of implementing PART affected some employees and had no impact on others. The implementation of PART was intended to create communities of agency actors who were involved with PART assessments (Gilmour 2006). Agency employees responsible for performance measurement, planning and evaluation, and budgeting processes are likely to have been directly involved in negotiating with OMB officials over PART scores. 
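To make the construction of these measures concrete, the following is a minimal, hypothetical sketch of the dichotomized PART indicator, a Cronbach's alpha check, and the two-item sub-indexes listed in Table 1. The synthetic responses and column names are ours, not the GAO's.

```python
# Hypothetical sketch of the measure construction described above; the data are synthetic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 200  # stand-in for survey respondents

# 0-4 ordinal responses ("to no extent" ... "to a very great extent") to the use items
use_items = ["strategy", "priorities", "resources", "problems", "correction",
             "processes", "coordination", "sharing", "contracts",
             "measures", "goals", "expectations", "rewards"]
df = pd.DataFrame({item: rng.integers(0, 5, n) for item in use_items})

# Raw PART item, then the dichotomous indicator: 1 if involved "to a small extent" or more
df["part_raw"] = rng.integers(0, 5, n)
df["PART"] = (df["part_raw"] >= 1).astype(int)

def cronbach_alpha(items: pd.DataFrame) -> float:
    """(k / (k - 1)) * (1 - sum of item variances / variance of the summed scale)."""
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

# With real survey data the alpha is reported to be strong; random synthetic data will not be.
print("Cronbach's alpha, all use items:", round(cronbach_alpha(df[use_items]), 2))

# Overall index and the four two-item sub-indexes from Table 1 (average responses)
df["overall"] = df[use_items].mean(axis=1)
df["program_planning"] = df[["priorities", "resources"]].mean(axis=1)
df["problem_solving"] = df[["problems", "correction"]].mean(axis=1)
df["performance_measurement"] = df[["measures", "goals"]].mean(axis=1)
df["employee_management"] = df[["expectations", "rewards"]].mean(axis=1)
```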
Program managers and staff whose programs were evaluated became involved in collecting agency information and responding to management recommendations offered through the PART review process. The review process led employees to invest time and effort toward performance management, enhanced awareness of appropriate performance management practices, and communicated the importance that OMB and the White House placed on performance management. As the mean of the variable in Table 2 indicates, 31 percent of managers surveyed were involved with PART reviews, indicating that relatively large numbers of federal employees were engaged in the process.

[Footnote 2: We also estimated models using the continuous measure of use. The results are similar and tend to be more statistically significant. However, the dichotomous measure facilitates interpretation, so we report the results of models that employ that measure.]

The measures of agency ideology, Liberal and Conservative, are indicator variables based on measures created by Joshua Clinton and David Lewis (2008). They employed measurement models that estimate agency ideology based on a survey of experts. The survey item read as follows:

Please see below a list of United States government agencies that were in existence between 1988-2005. I am interested to know which of these agencies have policy views due to law, practice, culture, or tradition that can be characterized as liberal or conservative. Please place a check mark (√) in one of the boxes next to each agency—"slant Liberal," "Neither Consistently," "slant Conservative," "Don't Know." (p. 5)

The measure takes on a value of -1 (liberal), 0 (moderate), and 1 (conservative), which we used to create our dichotomous indicators for liberal and conservative agencies. (See Table 3 for the list of agencies by ideology.) While it would be ideal to have individual- or program-level measures of political ideology, the GAO did not collect this information. The agency-level scores should capture political or policy preferences embodied in these organizations as a result of their programs, employees, and other organizational factors. Research has shown the utility of these agency-level ideology scores in understanding PART scores and budget decisions (Gilmour and Lewis 2006a & 2006b), but not in terms of agency managers' response to PART. The use of agency-level ideology scores also provides some reassurance that any findings that emerge are not the function of response bias or common-source methods bias.

[Insert Table 3 about here.]

Finally, it is important to note some things about the control variables we include in the models that test the robustness of our findings. First, all control variables were centered before model estimation to facilitate interpretation; however, the descriptions presented in Table 2 are for the variables before centering, to facilitate interpretation of the descriptive statistics. Second, we include the two variables that account for manager characteristics that the GAO provided, as well as variables that are thought to influence information use—measures of leadership commitment and decision-making authority, as well as measures of perceived oversight by political principals.
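As a companion sketch, and again with hypothetical names and toy values rather than the actual data, the ideology indicators and the centered controls described above might be built as follows; moderate agencies are omitted so that they serve as the reference category.

```python
# Hypothetical sketch: ideology indicators from the Clinton-Lewis coding and centered controls.
import pandas as pd

df = pd.DataFrame({
    "agency":     ["Labor", "Defense", "Commerce", "Education"],
    "ideology":   [-1, 1, 0, -1],    # -1 liberal, 0 moderate, 1 conservative (Clinton & Lewis 2008)
    "commitment": [4, 3, 5, 2],      # toy values for the leadership-commitment control
    "authority":  [3, 4, 4, 2],      # toy values for the decision-authority control
})

df["Liberal"] = (df["ideology"] == -1).astype(int)
df["Conservative"] = (df["ideology"] == 1).astype(int)
# Moderate agencies are the omitted reference category (both indicators equal zero).

# Center the controls so the model constant refers to a respondent with average control values
for col in ["commitment", "authority"]:
    df[col + "_c"] = df[col] - df[col].mean()
```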
There are a variety of additional items in the survey that inquire about potential hindrances to information collection and use, but we do not include them in the main statistical models because of the leading nature of the survey question ("… to what extent, if at all, have the following factors hindered measuring performance or using the performance information?") and because of high collinearity among many of these items. It is important to note, however, that the results are analogous if these variables are included as controls and that we analyze many of these factors explicitly in the analysis below (see Table 6).

Statistical Methods

We estimated a number of statistical models. Initially, for all of the analyses, we estimated hierarchical ordered probit models for each measure of use, as well as ordered probit models with errors clustered by agency. The data are ordered and non-normal and, thus, the ordered probit model is the most appropriate. However, the coefficients in ordered probit models are difficult to interpret and may be misleading for the models of performance information use, which include interactions between Liberal and PART and between Conservative and PART. Additionally, the results of ordered probit models are nearly identical to those of hierarchical linear models, as well as OLS models with errors clustered by agency. Thus, to facilitate interpretation, for models estimating information use we present the results of hierarchical linear models, which provide more information about the variability in PART's impact across agencies than OLS models with clustered errors. For the purpose of illustration, a basic hierarchical model that includes no level-1 controls and, for simplicity, includes only the indicator for liberal agencies, is specified as follows, where i and j indicate the respondent and agency, respectively, and $Use_{ij}$ indicates a measure of use:

Level 1: $Use_{ij} = \beta_{0j} + \beta_{1j}PART_{ij} + r_{ij}$

Level 2: $\beta_{0j} = \gamma_{00} + \gamma_{01}Liberal_{j} + u_{0j}$

Level 2: $\beta_{1j} = \gamma_{10} + \gamma_{11}Liberal_{j} + u_{1j}$

Plugging in the level 2 equations and rearranging terms, one gets the following:

$Use_{ij} = \gamma_{00} + \gamma_{01}Liberal_{j} + \gamma_{10}PART_{ij} + \gamma_{11}(Liberal_{j} \times PART_{ij}) + u_{1j}PART_{ij} + u_{0j} + r_{ij}$

The above model features four fixed effects coefficients ($\gamma_{00}$, $\gamma_{01}$, $\gamma_{10}$, and $\gamma_{11}$) corresponding to the constant (i.e., the overall average level of information use), the impact of Liberal when respondents do not report involvement with PART reviews, the impact of PART involvement for a moderate or conservative agency (i.e., the impact for the omitted agency category), and the adjustment in the impact of PART involvement for a liberal agency, as opposed to a moderate or conservative agency. The above model also estimates random effects ($u_{1j}$ and $u_{0j}$) corresponding to the unexplained variance of PART's impact across agencies and the unexplained variance in the dependent variable (i.e., information use) across agencies. All models were estimated using the xtmixed command in Stata 11.1 and using maximum likelihood estimation. The models also estimated the covariance between the impact of PART involvement across agencies and information use by agency. Finally, the results of ordered probit models with errors clustered by agency are reported for models that include no interaction variables.

Results

The results presented in Table 4 are from hierarchical linear models estimating information use. The model in the first column employs an index that sums all of the measures of performance information use listed in the top portion of Table 1. The inclusion of interaction terms requires that the coefficients be interpreted carefully.
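The authors report estimating these models with the xtmixed command in Stata 11.1. For readers who want a concrete sense of the specification, the following is a rough, hypothetical Python analog (statsmodels) of the random-intercept, random-slope model above; it is not the authors' code, and the data and column names are synthetic stand-ins for the GAO survey.

```python
# Hypothetical sketch, not the authors' code: a random-intercept, random-slope model
# analogous to the specification above, fit by maximum likelihood with statsmodels.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_respondents, n_agencies = 600, 29

df = pd.DataFrame({
    "agency": rng.integers(0, n_agencies, n_respondents),
    "PART": rng.integers(0, 2, n_respondents),
})
agency_ideology = rng.choice([-1, 0, 1], n_agencies)   # stand-in for the Clinton-Lewis coding
df["Liberal"] = (agency_ideology[df["agency"]] == -1).astype(int)
df["Conservative"] = (agency_ideology[df["agency"]] == 1).astype(int)
# Synthetic outcome with a positive PART effect that is muted in liberal agencies
df["use"] = (2.5 + 0.3 * df["PART"] - 0.25 * df["PART"] * df["Liberal"]
             + rng.normal(0, 0.9, n_respondents))

# Random intercept and random PART slope by agency, with their covariance estimated
model = smf.mixedlm(
    "use ~ PART * Liberal + PART * Conservative",
    data=df,
    groups="agency",
    re_formula="~PART",
)
result = model.fit(reml=False)  # maximum likelihood, as in the paper
print(result.summary())

# The 'effect' of PART involvement in liberal agencies combines the main effect and
# the interaction term (gamma_10 + gamma_11 in the notation above).
print(result.fe_params["PART"] + result.fe_params["PART:Liberal"])
```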
The results indicate that, accounting only for agency random effects, involvement with PART reviews is positively associated with information use in ideologically moderate agencies (the coefficient for PART reveals its impact for managers in moderate agencies); agency ideology is unrelated to information use when a manager does not report PART involvement (the coefficients for Liberal and Conservative reveal ideology’s lack of impact when a respondent does not report involvement); there is no difference between moderate and conservative agencies in terms of the impact of PART involvement (the coefficient for PART*Conservative does not reach statistical significance); and the impact of PART involvement is significantly lower in liberal agencies than it is in moderate agencies (the coefficient for PART*Liberal is negative and statistically significant). [Insert Table 4 about here.] There are two noteworthy aspects to these findings. First, the key difference is between the impact of PART in liberal and non-liberal agencies, and this is the relevant comparison for subsequent analyses. Second, when considering performance information use across all management activities, the impact of PART involvement is completely negated in liberal agencies. For example, in moderate agencies, the average level of use across all activities when no PART involvement is reported is captured by the constant, about 2.49 (between “to a moderate extent” and “to a great extent”), and increases by the coefficient for PART (0.30) when PART involvement is reported, bringing the total to about 2.79. In liberal agencies, the average 22 level of use by managers with no PART involvement is about 2.59 (though this is not statistically different from 2.49) and increases to 2.62 when PART involvement is reported (2.49+0.10+0.300.27). Put differently, PART involvement does not significantly affect information use in liberal agencies, but it has a positive impact on information use in moderate and conservative agencies. The difference in the impact of PART across management activities also is noteworthy. Both interaction terms fail to reach statistical significance in the “performance measurement” model, indicating that the impact of PART is consistent across agencies of different ideologies when it comes to using performance information for “refining program performance measures” and “setting new or revising existing performance goals.” In other words, for those performance management activities that PART reviews monitor explicitly, agency ideology does not account for differences in the impact of PART reviews on performance management. On the other hand, the biggest disparity between liberal and non-liberal agencies in terms of the impact of PART involvement is in the “problem solving” model. Interestingly, this is the only activity for which there is a difference in the use of performance information by managers not involved with PART, in that managers in liberal agencies used performance information more (by 0.26) than moderate agencies. The impact of PART involvement on using performance information to identify and correct problems mitigates this disparity to some extent. Finally, another noteworthy finding is that, unlike other management activities, PART involvement has no impact on information use for employee recognition and rewarding in moderate and conservative agencies, and it may have had a negative overall impact on information use for this activity in liberal agencies (p=0.104 for a two-tailed test). 
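To make the interaction arithmetic in the overall-use model explicit, the predicted levels of use just reported can be written in terms of the fixed effects of the illustrative model specified above; this is simply a restatement of the figures in the text, not an additional estimate.

\begin{align*}
\widehat{Use}_{\text{moderate, no PART}} &= \gamma_{00} \approx 2.49 \\
\widehat{Use}_{\text{moderate, PART}} &= \gamma_{00} + \gamma_{10} \approx 2.49 + 0.30 = 2.79 \\
\widehat{Use}_{\text{liberal, no PART}} &= \gamma_{00} + \gamma_{01} \approx 2.49 + 0.10 = 2.59 \\
\widehat{Use}_{\text{liberal, PART}} &= \gamma_{00} + \gamma_{01} + \gamma_{10} + \gamma_{11} \approx 2.49 + 0.10 + 0.30 - 0.27 = 2.62
\end{align*}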
The results presented in Table 5 indicate that these findings are robust to the inclusion of statistical controls. It is worth noting that, because the control variables were centered, the 23 constant refers to the average level of use for non-liberal agencies (and, in all but one model, liberal agencies as well) when no PART involvement is reported. The control variables are not the focus of our study, so we refrain from discussing their estimated coefficients here, except to say that leadership commitment, decision-making authority, and oversight by managers’ supervisors are strongly linked to performance information use across all activities. The findings regarding commitment and decision-making authority are consistent with previous studies (Dull 2009; Moynihan and Pandey 2010). [Insert Table 5 about here.] The results presented in Table 4 and Table 5 are consistent with our second hypothesis, which states that managers involved with PART and whose policy preferences differed from those of the administration used performance information to a lesser extent than those not involved if information asymmetries permitted it. The results indicate that the overall positive impact of managers’ involvement with PART reviews on information use is contingent on an agency being associated with a moderate or conservative ideology, but this disparity is driven by information use for management activities that are not easily monitored via the PART review process. When information use could be monitored effectively through the PART review process—such as in management activities involving performance measurement—differences in information use no longer correspond to agency ideology. The results clearly illustrate that the important distinction is between liberal and nonliberal agencies. The next analysis examines what it is about PART implementation in liberal agencies that accounts for the differences in use we have uncovered. The results presented in Table 6 are from ordered probit models (with errors clustered by agency) that estimate the extent to which agency managers perceive various factors as having hindered their collection or use of 24 performance information. Exact item wordings are provided in the table. The results reveal the estimated impact of managerial involvement with PART reviews on each factor. These PART effects were estimated separately for liberal and non-liberal agencies to facilitate comparisons. [Insert Table 6 about here.] A clear pattern emerges in the results presented in Table 6. Managers in liberal agencies involved with PART reviews agreed to a greater extent than those not involved with PART reviews that performance measurement problems hindered their collection and use of performance information. Strikingly, such effects are either non-existent or minimal for moderate and conservative agencies. It appears that “difficulty determining meaningful measures,” “difficulty obtaining valid or reliable data,” “difficulty obtaining data in time to be useful,” “difficulty distinguishing between the results produced by the program and results caused by other factors,” and “existing information technology and/or systems not capable of providing needed data,” were substantial impediments to performance management in liberal agencies only if managers reported involvement with PART reviews. 
On the other hand, there were few or no differences between managers in liberal agencies not involved with PART reviews and managers in moderate or conservative agencies (whether or not they were involved with PART reviews) in terms of the obstacles to performance management they perceived. These results suggest that the impact of ideology on information use occurred completely through the PART review mechanism—that they are not attributable to inherent differences in agencies’ ability or willingness to use performance management practices. One potential competing interpretation is that managers involved with PART reviews were simply more knowledgeable about the performance information limitations their agencies faced. But it is not clear why such an effect would be limited only to managers in liberal agencies, and the selection 25 effect is perhaps more plausible in the the other direction (managers who spend their career creating performance measures are more likely to reject the claim that it is hard to measure program performance). The findings in the models of use also tend to undercut concerns about the PART variable: when information asymmetries are sufficiently great, reported levels of use in liberal agencies do not differ between managers who are involved and those not involved, and use practices do not differ by agency ideology when no PART involvement is reported. Further, managers in liberal agencies who were not involved with PART reviews did not report perceptions that differed in statistically significant ways from those of managers in moderate and conservative agencies. Indeed, the results reveal that all managers (whatever their agency’s ideology and whether or not they were involved with PART reviews) had similar perceptions regarding the extent to which “difficulty determining how to use performance information to improve the program” affected information use. In other words, the differences in information use that we have uncovered are not attributable to differences in the ability to use performance information. Finally, fears of OMB micromanagement affected those involved with PART reviews similarly, whatever their agency’s ideology. It is almost exclusively in liberal agencies that managers involved with PART reviews agree to a greater extent than managers not involved that difficulties collecting valid, reliable, and timely data inhibited the collection and use of performance information. These results are consistent with our first hypothesis that managers in liberal agencies who were involved with PART reviews are more likely to perceive that performance measures are an impediment to information use. Discussion 26 The results offer a coherent narrative about how political ideology and information asymmetry affect the ability of a political executive to further performance information use. The results indicate that the overall positive impact of managers’ involvement with PART reviews on information use may have been completely contingent on an agency being associated with a moderate or conservative ideology. Breaking down the analysis by management activity reveals that this ideological effect obtains for management activities that were more difficult for the administration to observe. 
Additionally, the analysis indicates that managers in liberal agencies who were involved with PART reviews agreed to a greater extent than those not involved that performance measurement problems impeded the collection and use of performance information, whereas there generally were no such differences in moderate and conservative agencies. These and other results are consistent with our hypotheses based on principal-agent theory. They provide evidence that managers in liberal agencies did not respond to the administration’s approach to promoting performance management practices when PART reviews influenced performance measurement in ways they found problematic, and when information asymmetries between them and the administration permitted it. Our empirical analysis suggests that these results are driven by the PART-review monitoring mechanism, as opposed to differences in organizational or programmatic factors. Heinrich and Marschke (2010), employing a principal-agent perspective, argue that the impact of performance management reforms depends a good deal on implementation dynamics. In the course of examining these dynamics in the case of the PART, we contribute in important ways to a number of academic literatures. First, we contribute to the growing literature on the determinants of performance information use (e.g. Dull 2009; Moynihan and Pandey 2010) by examining the role of political ideology and how its impact varies across management activities. 27 A second contribution deals with the role of ideology in executive-branch policy implementation more generally. There is a growing literature that focuses on how political executives, particularly presidents, attempt to control executive-branch policy decisions using various mechanisms (e.g., see Moe 1985 and Lewis 2008) and the impact of control mechanisms on the behavior of agency personnel (e.g., see Brehm and Gates 1997; Golden 2000). And there is some research on the role of ideology in implementing performance management reforms (e.g., Durant 2008) and the use of the PART in particular (Gilmour and Lewis 2006a, 2006b, & 2006c). This paper is the first to provide systematic government-wide evidence that ideological factors influenced the Bush administration’s success in promoting performance information use via PART reviews. Performance reforms often are promoted by political executives. Understanding how the ideological gap between executives and managers matters has real importance. While the Bush OMB promoted PART as a nonpartisan tool, ideology nonetheless impeded the administration’s ability to promote performance management via PART reviews. Agency-OMB interactions related to PART reviews often focused on the issue of what constituted acceptable goals and measures (Frederickson and Frederickson 2006; Gilmour 2008). While PART assessments provided a mechanism through which OMB could make various programmatic and management recommendations, the majority of PART recommendations had to do with “performance assessments, such as developing outcome measures and/or goals, and improving data collection” (GAO 2005, 22). This was to be expected, since OMB examiners typically did not possess in-depth management programmatic knowledge and thus were likely to recommend their principals’ policy priorities and monitor compliance with these recommendations. They were less able, however, to monitor other uses of performance information by agency managers, such as the use of performance information for problem 28 solving. 
It is unsurprising, therefore, that managers in liberal agencies did not respond with increased performance information use when OMB could not effectively monitor such use. Managers in liberal agencies likely had policy preferences that differed from their political principals on average, and, thus, were more likely to perceive performance measures influenced by the PART review process to be invalid. For example, commenting on the sometimes insulated and unilateral way in which OMB attempted to influence program planning, the GAO stated that “while the PART clearly must serve the President’s interests, the many actors whose input is critical to decisions will not likely use performance information unless they feel it is credible and reflects a consensus” (2005, 14). Our study demonstrates how this combination of preference disagreement and information asymmetry forms perhaps the most potent threat to executive-led efforts to pursue performance management. Conclusion This study deals with a fundamental issue: how politics interacts with public administration. The claim that politics affects administration in innumerable ways is uncontroversial, and scholars have expended significant effort to understand better how political principals attempt to influence administrative behavior. There has been significantly less systematic research, however, that examines how public managers receive and affect these attempts at administrative control. Our study provides a contribution in that it examines how managers received and responded to PART reviews, and how they affected the implementation of the Bush administration’s model of performance management. Like many tools of management, performance management is often perceived to be politically neutral (Radin 2006), and the Bush administration and the OMB expended significant effort to create and promote a 29 nonpartisan tool (Dull 2006). Yet, the results of this study lead us to question whether any administrative reform that is identified with a political actor can be truly neutral in its design, implementation, and impact. 30 References Appleby, Paul H. 1949. Policy and Administration. Birmingham, AL: University of Alabama Press. Arnold, Peri. 1995. Reforms Changing Role. Public Administration Review 55(5): 407–17 Brehm, John and Scott Gates. 1997. Working, Shirking, and Sabotage: Bureaucratic Response to a Democratic Public. Ann Arbor, MI: University of Michigan Press. Clinton, Joshua D. and David E. Lewis. 2008. Expert Opinion, Agency Characteristics, and Agency Preferences Political Analysis 16:3-20. Dixit, Avinash. 2002. “Incentives and Organizations in the Public Sector: An Interpretative Review.” Journal of Human Resources 37(4): 696-727. Dull, Matthew. 2006. Why PART? The Institutional Politics of Presidential Budget Reform. Journal of Public Administration Research and Theory 16(2): 187–215. -------. 2009. Results-model reform leadership: Questions of credible commitment. Journal of Public Administration Research and Theory 19:255–84. Durant, Robert F. 1987. Toward Assessing the Administrative Presidency: Public Lands, the BLM, and the Reagan Administration. Public Administration Review 47(2): 180–89 -------. 2008. “Sharpening a Knife Cleverly: Organizational Change, Policy Paradox, and the "Weaponizing" of Administrative Reforms.” Public Administration Review 68(2): 282-294. Epstein, David and Sharyn O'Halloran. 1999. Delegating Powers. New York: Cambridge University Press. Frederickson, David G. and H. George Frederickson. 2007. 
Measuring the Performance of the Hollow State. Washington, DC: Georgetown Univ. Press. Frisco, Velda and Odd J. Stalebrink. 2008. Congressional Use of the Program Assessment Rating Tool. Public Budgeting and Finance 28(2): 1-19. Gilmour, David. 2006. Implementing OMB’s Program Assessment Rating Tool (PART): Meeting the Challenges of Integrating Budget and Performance. Washington D.C.: IBM Business of Government. Gilmour, John B., and David E. Lewis. 2006a. “Assessing performance assessment for budgeting: The influence of politics, performance, and program size in FY2005.” Journal of Public Administration Research and Theory 16(2):169-86. --------. 2006b. “Does Performance Budgeting Work? An Examination of the Office of Management and Budget’s PART Scores.” Public Administration Review 66(5): 742-52 31 --------. 2006c. “Political appointees and the competence of Federal Program Management.” American Politics Research 34(1):22-50. Golden, Marissa. 2000. What Motivates Bureaucrats? New York: Columbia University Press. Gormley, William. 1989. Taming the Bureaucracy. Princeton, N.J.: Princeton University Press. Heinrich, Carolyn J. forthcoming “How Credible is the Evidence, and Does It Matter? An Analysis of the Program Assessment Rating Tool.” Public Administration Review. Heinrich, Carolyn J. and Gerald Marschke. 2010. Incentives and Their Dynamics in Public Sector Performance Management Systems. Journal of Policy Analysis and Management 29(1): 183-208. Lewis, David. 2003. Presidents and the Politics of Agency Design. Palo Alto: Stanford University Press. ------. 2008. The Politics of Presidential Appointments. New York: Cambridge University Press. Lynn, Laurence E., Carolyn J. Heinrich and Carolyn J. Hill. 2000. Studying Governance and Public Management: Challenges and Prospects. Journal of Public Administration Research and Theory 10(2): 233-62. McCubbins, Mathew D., Roger G. Noll, and Barry R. Weingast (McNollGast). 1987. "Administrative Procedures as Instruments of Political Control." Journal of Law, Economics, & Organization 3 (2):243-77. Moe, Terry. 1985. The Politicized Presidency. In The New Direction in American Politics, edited by John Chubb and Paul Peterson, 235-271. Washington, DC, Brookings Institution. -------. 1989. “The Politics of Bureaucratic Structure.” In Can the Government Govern? (Chubb and Peterson, eds.). Washington, DC: The Brookings Institution. Pp. 267-329. Moynihan, Donald P. 2003. “Public Management Policy Change in the United States 19932001.” International Public Management Journal 6(3): 371-394. -------. 2008. The Dynamics of Performance Management: Constructing Information and Reform. Washington D.C.: Georgetown University Press. Moynihan, Donald P., and Sanjay K. Pandey. 2010. The Big Question for Performance Management: Why do Managers Use Performance Information? Journal of Public Administration Research and Theory 20: 849-866. 32 Moynihan, Donald P. and Alasdair Roberts. 2010. “The Triumph of Loyalty over Competence: The Bush Administration and the Exhaustion of the Politicized Presidency.” Public Administration Review 70(4): 572-581. O’ Leary, Rosemary. 2006. The Ethics of Dissent: Managing Guerrilla Government. Washington D.C.: CQ Press. Pfiffner, James P. 2007. The First MBA President: George W. Bush as Public Administrator. Public Administration Review 67(1): 6-20. Rudalevige, Andrew. 2002. Managing the President’s Program: Presidential Leadership and Legislative Policy Formulation. Princeton: Princeton University Press. Tomkin, Shelley Lynne. 1998. 
Tomkin, Shelley Lynne. 1998. Inside OMB: Politics and Process in the President's Budget Office. Armonk, NY: M. E. Sharpe.
U.S. Government Accountability Office (GAO). 2004. Results-Oriented Government: GPRA Has Established a Solid Foundation for Achieving Greater Results. GAO-04-38. Washington, DC: Government Accountability Office.
-------. 2005. Program Evaluation: OMB's PART Reviews Increased Agencies' Attention to Improving Evidence of Program Results. GAO-06-67. Washington, DC: Government Accountability Office.
-------. 2008. Government Performance: Lessons Learned for the Next Administration on Using Performance Information to Improve Results. GAO-08-1026T. Washington, DC: Government Accountability Office.
U.S. Office of Management and Budget (OMB). 2001. The President's Management Agenda. Washington, DC: Government Printing Office.
-------. 2010. The President's Budget for Fiscal Year 2011: Analytical Perspectives. Washington, DC: Government Printing Office.
-------. 2011. The President's Budget for Fiscal Year 2012: Analytical Perspectives. Washington, DC: Government Printing Office.
Radin, Beryl A. 2006. Challenging the Performance Movement: Accountability, Complexity, and Democratic Values. Washington, DC: Georgetown University Press.
-------. 2000. The Government Performance and Results Act and the Tradition of Federal Management Reform: Square Pegs in Round Holes. Journal of Public Administration Research and Theory 10(1): 111–35.
Van de Walle, Steven, and Wouter Van Dooren (eds.). 2008. Performance Information in the Public Sector: How It Is Used. Houndmills, UK: Palgrave.

Table 1. Measures of Performance Information Use
Variables capture the extent to which respondents report using performance information for a particular set of activities. Responses range from "to no extent" (0) to "to a very great extent" (4). Each entry reports the activity description, N, and the mean (standard deviation).

ORDINAL MEASURES
Strategy: Developing program strategy. N = 2,572; 2.54 (1.07).
Priorities: Setting program priorities. N = 2,591; 2.66 (1.05).
Resources: Allocating resources. N = 2,543; 2.62 (1.06).
Problems: Identifying program problems to be addressed. N = 2,627; 2.71 (1.04).
Correction: Taking corrective action to solve program problems. N = 2,631; 2.70 (1.06).
Processes: Adopting new program approaches or changing work processes. N = 2,625; 2.58 (1.06).
Coordination: Coordinating program efforts with other internal or external organizations. N = 2,579; 2.46 (1.10).
Sharing: Identifying and sharing effective program approaches with others. N = 2,537; 2.31 (1.09).
Contracts: Developing and managing contracts. N = 1,868; 2.17 (1.23).
Measures: Refining program performance measures. N = 2,519; 2.46 (1.11).
Goals: Setting new or revising existing performance goals. N = 2,534; 2.59 (1.10).
Expectations: Setting individual job expectations for the government employees the respondent manages or supervises. N = 2,568; 2.70 (1.03).
Rewards: Rewarding government employees that the respondent manages or supervises. N = 2,556; 2.66 (1.06).

INDEXES
Overall: Average response to all activities above. N = 1,668; 2.58 (0.87).
Program Planning: Average response to Priorities and Resources. N = 2,504; 2.65 (0.99).
Problem Solving: Average response to Problems and Correction. N = 2,613; 2.71 (1.01).
Performance Measurement: Average response to Measures and Goals. N = 2,494; 2.53 (1.07).
Employee Management: Average response to Expectations and Rewards. N = 2,544; 2.68 (0.98).
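To make the construction of the Table 1 indexes concrete, the sketch below shows how the activity items could be averaged into the five indexes. It is a minimal illustration that assumes a hypothetical respondent-level file and column names mirroring the variable labels above; it is not the authors' actual code.

```python
import pandas as pd

# Hypothetical respondent-level data: one row per GAO survey respondent, with the
# thirteen activity items coded 0 ("to no extent") through 4 ("to a very great extent").
df = pd.read_csv("gao_2007_survey.csv")  # assumed file name

activity_items = ["strategy", "priorities", "resources", "problems", "correction",
                  "processes", "coordination", "sharing", "contracts",
                  "measures", "goals", "expectations", "rewards"]

# Each index in Table 1 is the average response to its component items. Using
# skipna=False leaves an index missing when any component is missing, which is
# consistent with the smaller N reported for the Overall index.
df["overall"] = df[activity_items].mean(axis=1, skipna=False)
df["program_planning"] = df[["priorities", "resources"]].mean(axis=1, skipna=False)
df["problem_solving"] = df[["problems", "correction"]].mean(axis=1, skipna=False)
df["performance_measurement"] = df[["measures", "goals"]].mean(axis=1, skipna=False)
df["employee_management"] = df[["expectations", "rewards"]].mean(axis=1, skipna=False)

print(df[["overall", "program_planning", "problem_solving",
          "performance_measurement", "employee_management"]].describe())
```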
Table 2. Predictor Variables
Each entry reports the variable description, N, the ordinal range, and the mean (standard deviation).

IMPLEMENTATION
PART: Whether (1) or not (0) a respondent reports any involvement in PART-related activities. N = 2,937; [0,1]; 0.31 (0.46).

AGENCY IDEOLOGY
Liberal: Whether (1) or not (0) the respondent works in a liberal agency according to Clinton and Lewis (2008). N = 2,937; [0,1]; 0.29 (0.45).
Conservative: Whether (1) or not (0) the respondent works in a conservative agency according to Clinton and Lewis (2008). N = 2,937; [0,1]; 0.37 (0.48).

RESPONDENT CHARACTERISTICS
SES: Whether (1) or not (0) the respondent is a member of the Senior Executive Service "or equivalent." N = 2,937; [0,1]; 0.20 (0.40).
Supervisor Yrs: Number of years (from 4 ranges) the respondent reports serving as a supervisor. N = 2,891; [1-4]; 2.49 (1.13).

CONTROLS
Use Commitment: Extent to which respondents agree that their "agency's top leadership demonstrates a strong commitment to using performance information to guide decision making" (10H). N = 2,711; [1-5]; 3.54 (1.09).
Authority: Extent to which respondents agree with this statement: "Agency managers/supervisors at my level have the decision making authority they need to help the agency accomplish its strategic goals" (10A). N = 2,886; [1-5]; 3.20 (1.09).
Secretary, Supervisor, OMB, Congress, Audit: Extent to which respondents believe that the Department Secretary, the individual they report to, the Office of Management and Budget, congressional committees, or the audit community (e.g., GAO, Inspectors General) "pay attention to their agency's use of performance information in management decision making" (12A, 12C, 12F, 12G, 12H). The ordinal range of each of these variables is from 0 to 5, as "not applicable" and "don't know" were coded 0.
  Secretary: N = 2,823; 1.83 (1.83).
  Supervisor: N = 2,904; 3.57 (1.21).
  OMB: N = 2,913; 2.16 (1.92).
  Congress: N = 2,907; 1.80 (1.73).
  Audit: N = 2,914; 2.21 (1.87).

Table 3. Agencies Categorized by Perceived Ideology
The categorizations are from Clinton and Lewis (2008). Some agencies within departments (specifically, CMS, FAA, IRS, and Forest Service) are coded in the same way as the departments in which they are housed.

Liberal: AID; Labor; Education; EPA; Centers for Medicare and Medicaid Services; HHS (Not CMS); HUD; NSF; Social Security Administration; Veterans Affairs.
Moderate: Forest Service; Agriculture (Not Forest Service); General Services Administration; FEMA; NASA; Office of Personnel Management; State; FAA; Transportation (Not FAA).
Conservative: Commerce; Defense; Justice; Energy; Homeland Security (Not FEMA); Interior; Nuclear Regulatory Commission; Small Business Administration; IRS; Treasury (Not IRS).
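As a bridge from the descriptive tables to the models that follow, the sketch below illustrates one way the Table 2 predictors could be assembled: ideology dummies based on the Table 3 categorization (with moderate agencies as the omitted category) and grand-mean-centered controls, matching the "Controls (centered)" label in Table 5 below. The column names, including the agency ideology coding variable, are hypothetical, and the snippet is illustrative rather than the authors' code.

```python
import pandas as pd

df = pd.read_csv("gao_2007_survey.csv")  # assumed respondent-level file

# Assumes an "agency_ideology" column already coded "liberal", "moderate", or
# "conservative" per the Clinton and Lewis (2008) categorization in Table 3.
df["liberal"] = (df["agency_ideology"] == "liberal").astype(int)
df["conservative"] = (df["agency_ideology"] == "conservative").astype(int)

# Table 5 enters the controls grand-mean centered; subtracting each column's mean
# applies that transformation to the hypothetical control columns below.
controls = ["ses", "supervisor_yrs", "use_commitment", "authority",
            "secretary", "supervisor_attn", "omb", "congress", "audit"]
df[controls] = df[controls] - df[controls].mean()
```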
Table 4. The Interactive Relationship between Ideology and PART on Information Use
The results below are from hierarchical linear models estimating the impact of PART involvement on information use contingent on agency ideology. All dependent variables are indexes of the activities listed in Table 1. All models include random effects for the 29 agencies. Significance levels are based on two-tailed z-tests or chi-square tests: **p<0.05 and *p<0.10 (so that *p<0.05 for a one-tailed test). Entries are coefficients (standard errors).

                         All Activities     Program Planning   Problem Solving    Performance Measurement   Employee Management
Fixed Effects
PART*Liberal             -0.27** (0.13)     -0.27** (0.12)     -0.39** (0.11)     -0.16 (0.14)              -0.20 (0.13)
PART*Conservative        -0.01 (0.13)       -0.05 (0.12)       -0.08 (0.11)       -0.01 (0.13)              0.03 (0.12)
PART                     0.30** (0.09)      0.29** (0.08)      0.31** (0.08)      0.47** (0.09)             0.11 (0.09)
Liberal                  0.10 (0.10)        0.16 (0.10)        0.26** (0.11)      0.09 (0.10)               0.07 (0.08)
Conservative             -0.02 (0.09)       0.07 (0.09)        0.05 (0.10)        0.02 (0.09)               -0.04 (0.07)
Constant                 2.49** (0.06)      2.51** (0.07)      2.57** (0.07)      2.36** (0.06)             2.67** (0.05)
Random Effects
var(PART)                0.02 (0.02)        0.01 (0.02)        0.00 (0.00)        0.03 (0.02)               0.02 (0.02)
var(constant)            0.02 (0.01)        0.03 (0.01)        0.04 (0.01)        0.02 (0.01)               0.01 (0.01)
cov(PART, constant)      0.00 (0.01)        -0.01 (0.01)       0.01 (0.01)        0.00 (0.01)               0.01 (0.01)
var(residual)            0.72 (0.03)        0.95 (0.03)        0.97 (0.03)        1.07 (0.03)               0.95 (0.03)
N                        1,668              2,504              2,613              2,494                     2,544
Wald Chi2 (5)            21.57**            20.96**            30.61**            60.42**                   5.91
LR vs. Linear Chi2 (3)   25.86**            30.17**            70.52**            30.30**                   19.52**

Table 5. The Interactive Relationship between Ideology and PART on Information Use (with Controls)
The results below are from hierarchical linear models estimating the impact of PART involvement on information use contingent on agency ideology. All dependent variables are indexes of the activities listed in Table 1. All models include random effects for the 29 agencies. Significance levels are based on two-tailed z-tests or chi-square tests: **p<0.05 and *p<0.10 (so that *p<0.05 for a one-tailed test). Entries are coefficients (standard errors).

                         All Activities     Program Planning   Problem Solving    Performance Measurement   Employee Management
Fixed Effects
PART*Liberal             -0.19** (0.09)     -0.17* (0.10)      -0.29** (0.09)     -0.09 (0.11)              -0.21** (0.10)
PART                     0.15** (0.05)      0.13** (0.06)      0.13** (0.05)      0.26** (0.06)             0.05 (0.06)
Liberal                  0.09 (0.08)        0.12 (0.09)        0.24** (0.09)      0.09 (0.09)               0.10 (0.07)
Constant                 2.50** (0.04)      2.56** (0.05)      2.61** (0.05)      2.40** (0.05)             2.64** (0.04)
Controls (centered)
SES                      0.07 (0.05)        0.03 (0.05)        0.06 (0.05)        0.21** (0.05)             0.02 (0.05)
Supervisor Yrs           0.01 (0.02)        0.03 (0.02)        -0.01 (0.02)       0.03 (0.02)               -0.01 (0.02)
Use Commitment           0.26** (0.02)      0.25** (0.02)      0.23** (0.02)      0.21** (0.02)             0.21** (0.02)
Authority                0.06** (0.02)      0.08** (0.02)      0.09** (0.02)      0.13** (0.02)             0.07** (0.02)
Secretary                -0.02* (0.01)      0.00 (0.01)        -0.01 (0.01)       -0.01 (0.01)              -0.01 (0.01)
Supervisor               0.20** (0.02)      0.20** (0.02)      0.20** (0.02)      0.18** (0.02)             0.22** (0.02)
OMB                      -0.02 (0.01)       -0.02 (0.01)       -0.04** (0.01)     0.02 (0.02)               -0.03* (0.01)
Congress                 0.00 (0.02)        0.01 (0.02)        0.00 (0.02)        -0.01 (0.02)              0.02 (0.02)
Audit                    0.03** (0.02)      0.03** (0.01)      0.04** (0.01)      0.02* (0.02)              0.01 (0.01)
Random Effects
var(PART)                0.01 (0.01)        0.01 (0.01)        0.00 (0.00)        0.02 (0.02)               0.02 (0.02)
var(constant)            0.02 (0.01)        0.03 (0.01)        0.04 (0.01)        0.03 (0.01)               0.01 (0.01)
cov(PART, constant)      0.00 (0.01)        -0.02 (0.01)       0.00 (0.00)        -0.01 (0.01)              0.00 (0.01)
var(residual)            0.49 (0.02)        0.71 (0.02)        0.75 (0.02)        0.82 (0.02)               0.73 (0.02)
N                        1,512              2,224              2,306              2,220                     2,259
Wald Chi2                684.94**           732.66**           651.68**           650.30**                  624.71**
LR vs. Linear Chi2       34.49**            44.82**            74.32**            33.48**                   23.13**
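The hierarchical linear models in Tables 4 and 5 combine these fixed effects with a random intercept and a random PART slope across the 29 agencies. A minimal sketch of that kind of specification using statsmodels is below; the column names are hypothetical, and the snippet illustrates the model family rather than reproducing the authors' estimation code.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("gao_2007_survey.csv")  # assumed respondent-level file

# Table 4 specification for the Overall index: PART involvement interacted with
# the liberal and conservative dummies (moderate agencies are the reference),
# plus a random intercept and a random slope on PART for each agency, matching
# the var(constant), var(PART), and cov(PART, constant) terms in the table.
model_data = df.dropna(subset=["overall", "part", "liberal", "conservative", "agency"])
model = smf.mixedlm("overall ~ part * liberal + part * conservative",
                    data=model_data,
                    groups=model_data["agency"],
                    re_formula="~part")
result = model.fit()
print(result.summary())

# Table 5 drops the conservative terms and adds the centered controls to the
# fixed-effects formula (e.g., ses, supervisor_yrs, use_commitment, and so on).
```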
The items are associated with the following question: "Based on your experience with the program(s)/operation(s)/project(s) that you are involved with, to what extent, if at all, have the following factors hindered measuring performance or using the performance information?" Responses are coded as follows: 1 = To no extent; 2 = To a small extent; 3 = To a moderate extent; 4 = To a great extent; 5 = To a very great extent. Standard errors are clustered by agency (29 agencies). Significance levels are based on two-tailed z-tests: **p<0.05 and *p<0.10 (so that *p<0.05 for a one-tailed test). Each row reports the estimated impact of PART involvement, first without and then with controls, for managers in liberal agencies ("Lib.") and in moderate or conservative agencies ("Mod./Cons."); entries are coefficients (standard errors).
NOTE: When no PART involvement is reported, there is no statistically significant difference in responses to these items between managers in agencies associated with different ideologies.

Difficulty determining meaningful measures: without controls, Lib. 0.31 (0.14)**, Mod./Cons. 0.09 (0.06); with controls, Lib. 0.30 (0.16)*, Mod./Cons. 0.07 (0.08).
Different parties are using different definitions to measure performance: without controls, Lib. 0.06 (0.09), Mod./Cons. -0.02 (0.04); with controls, Lib. 0.12 (0.14), Mod./Cons. -0.03 (0.05).
Difficulty obtaining valid or reliable data: without controls, Lib. 0.29 (0.14)**, Mod./Cons. 0.03 (0.04); with controls, Lib. 0.30 (0.18)*, Mod./Cons. 0.04 (0.05).
Difficulty obtaining data in time to be useful: without controls, Lib. 0.31 (0.14)**, Mod./Cons. 0.08 (0.04)*; with controls, Lib. 0.30 (0.14)**, Mod./Cons. 0.06 (0.05).
Lack of incentives (e.g., rewards, positive recognition): without controls, Lib. 0.01 (0.07), Mod./Cons. -0.11 (0.06)*; with controls, Lib. 0.11 (0.10), Mod./Cons. -0.02 (0.06).
Difficulty resolving conflicting interests of stakeholders, either internal or external: without controls, Lib. 0.16 (0.15), Mod./Cons. -0.02 (0.04); with controls, Lib. 0.23 (0.14)*, Mod./Cons. 0.01 (0.05).
Difficulty distinguishing between the results produced by the program and results caused by other factors: without controls, Lib. 0.36 (0.14)**, Mod./Cons. 0.09 (0.06); with controls, Lib. 0.38 (0.15)**, Mod./Cons. 0.12 (0.07)*.
Existing information technology and/or systems not capable of providing needed data: without controls, Lib. 0.20 (0.10)**, Mod./Cons. 0.09 (0.03)**; with controls, Lib. 0.30 (0.10)**, Mod./Cons. 0.07 (0.04).
Lack of staff who are knowledgeable about gathering and/or analyzing performance information: without controls, Lib. -0.04 (0.10), Mod./Cons. -0.05 (0.04); with controls, Lib. -0.02 (0.12), Mod./Cons. -0.02 (0.04).
Lack of ongoing top executive commitment or support for using performance information to make program/funding decisions: without controls, Lib. -0.08 (0.10), Mod./Cons. -0.11 (0.08); with controls, Lib. 0.08 (0.15), Mod./Cons. 0.05 (0.07).
Lack of ongoing Congressional commitment or support for using performance information to make program/funding decisions: without controls, Lib. -0.12 (0.10), Mod./Cons. 0.02 (0.05); with controls, Lib. -0.10 (0.10), Mod./Cons. 0.14 (0.04)**.
Difficulty determining how to use performance information to improve the program: without controls, Lib. -0.01 (0.11), Mod./Cons. -0.05 (0.07); with controls, Lib. -0.01 (0.12), Mod./Cons. -0.01 (0.01).
Concern that OMB will micromanage programs in my agency: without controls, Lib. 0.42 (0.09)**, Mod./Cons. 0.34 (0.07)**; with controls, Lib. 0.43 (0.11)**, Mod./Cons. 0.42 (0.08)**.
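The Table 6 estimates come from ordered probit models of the five-category hindrance items. The sketch below shows one way such group-specific PART effects could be produced, by fitting the model separately for managers in liberal and in moderate/conservative agencies. It uses hypothetical column names, omits the agency-clustered standard errors and the control variables reported in the table, and illustrates the modeling approach rather than reproducing the results.

```python
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("gao_2007_survey.csv")  # assumed respondent-level file

# One hindrance item, coded 1 ("to no extent") through 5 ("to a very great extent").
item = "hindrance_meaningful_measures"  # hypothetical column name

for label, group in [("Liberal", df[df["liberal"] == 1]),
                     ("Moderate/Conservative", df[df["liberal"] == 0])]:
    data = group.dropna(subset=[item, "part"])
    # Ordered probit of the hindrance item on PART involvement for this subsample.
    model = OrderedModel(data[item], data[["part"]], distr="probit")
    result = model.fit(method="bfgs", disp=False)
    print(label, "PART coefficient:", round(float(result.params["part"]), 2))
```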