J. OF PUBLIC BUDGETING, ACCOUNTING & FINANCIAL MANAGEMENT, 14(1), 53-73 SPRING 2002

PERFORMANCE BUDGETING—THE NEXT BUDGETARY ANSWER. BUT WHAT IS THE QUESTION?1

Bernard Pitsvada and Felix LoStracco*

ABSTRACT. In the world of public budgeting, ideas and concepts often come, go, and then resurface years later in a slightly modified version. Performance budgeting was first abandoned in the 1960s; this paper examines its rebirth in an attempt to determine whether it will make a significant contribution to American budgeting in the 21st century. Does it make for better budgetary decisions? What are the questions that performance budgeting is supposed to answer? Is it just another procedure that helps avoid focusing on problems of our “capacity to govern” (Schick, 1990)? The paper concludes that while there are positive things to say about the drive to performance budgeting, the Office of Management and Budget should not recommend blanket adoption.

* Bernard Pitsvada, Ph.D., is a Professor, Department of Public Administration, George Washington University. His teaching and research interests are in public budgeting. Felix LoStracco, MPA, is an analyst at the Congressional Budget Office.

A BUDGETING CONCEPT FROM THE PAST

Fifty years ago, Verne Lewis (1952) put forward “alternative budgeting.” In this procedure, budget analysts would weigh “...the relative value of alternative uses of each increment of funds as a step in developing the alternatives to be submitted to the next higher level in the organization” (p. 37). Next, the President’s budget submission to Congress would outline “the major alternatives from which he made his selection” (p. 37). Although it was not adopted, the proposal was often cited in scholarly works as a method for improving budgetary decision-making.

A quarter of a century later, alternative budgeting was reincarnated as Zero Base Budgeting (ZBB), a series of alternative decision packages structured “from scratch.” ZBB disappeared from the scene when Jimmy Carter’s presidency ended. For those who participated in the exercise at the federal level, few mourned its passing because they perceived little benefit from the procedure relative to the time it consumed and the mountains of paper it generated (Lee & Johnson, 1983).

“Incrementalism” is another widely held budgetary concept. Usually attributed to the early works of Aaron Wildavsky and further developed by Richard Fenno, incrementalism was a bottom-up method of developing the budget through a series of political adjustments that tended to increase the budget base in small, controlled increments (Wildavsky, 1964; Fenno, 1966). After serving as the standard explanation of federal budgeting for fifteen years, incrementalism gave way to “decremental budgeting” and “top-down budgeting” during the early Reagan presidency (Heclo, 1984). For budget-makers, however, the concept of incrementalism has always explained how the overwhelming majority of budgetary decisions are made—starting from an established base, and then moving a little up or a little down. In analytical terms, there is little difference between decrementalism and incrementalism. Both are adjustments at the margin: one down, one up.
Although they seemed large at the time, the budget cuts of the Reagan era (such as the $10 billion reduction in nondefense discretionary outlays between fiscal 1981 and 1982) were small when compared to the reductions made in defense spending in the 1990s (see Figure 1). The end of the Cold War enabled sizeable cuts in defense, but in the aggregate of the entire budget these savings were largely offset by many incremental increases in domestic spending. In fiscal 1991, $320 billion was spent on national defense while $214 billion was spent on nondefense discretionary programs (Office of Management and Budget, 2001b, Table 8.1). Ten years later, $20 billion less was spent on defense while $136 billion more was spent on nondefense discretionary programs (Office of Management and Budget, 2001b, Table 8.1).2

Politicians have yet to prove that they can make major cuts to nondefense spending. Even obscure programs like wool and mohair subsidies, which were zeroed out in the name of deficit reduction, were resurrected once surpluses emerged.

FIGURE 1
Defense and Nondefense Discretionary Outlays, Fiscal Years 1980-2002 (in billions of dollars)
[Chart of defense and nondefense discretionary outlays not reproduced.]
Source: Office of Management and Budget (2001). Budget of the United States: Historical Tables, Fiscal Year 2002, Table 8.1.

In the first half of the 1990s, the discretionary spending caps spurred incremental reductions to agency programs.3 This simply resulted from not adjusting the caps for inflation while granting annual pay raises and benefits to federal employees. This slow, continued growth in the federal budget through a series of incremental adjustments was often based on nothing more than calibrated increases in the cost of living—indexing. By the end of the decade, with dollars overflowing the Treasury, the caps could not restrain political will and the budget continued to expand. Discretionary spending grew from $533 billion in 1996 to approximately $695 billion six years later.4

All of this leads to the most recent innovation from the past—performance budgeting. In the 1950s, Catheryn Seckler-Hudson defined it, based upon the 1949 Hoover Commission report, as “...a focus of attention on the ends to be served by the government rather than on the dollars to be spent” (Seckler-Hudson, 1953). Key ingredients of performance budgeting were the relationship of the costs of programs to resources, and the achievement of an approved plan. It should be noted that Seckler-Hudson said performance budgeting was already an “old idea” in 1953.

Performance budgeting in the 1950s came to mean developing measures of performance or workload, and relating them to the costs of achieving such activities. Unit costing was usually at the heart of performance budgeting. This approach became more useful to local governments than to the federal government, which at the time (1951-61) was spending over 60 percent of its budget on national defense. These activities do not lend themselves to unit costing, or even to measuring performance short of a war, which was exactly what the programs were designed to avoid. Before it could be completely implemented at the federal level, performance budgeting was subsumed under the next major budget reform—program budgeting.
With Robert McNamara leading the way, the Defense Department implemented a specific form of program budgeting called the Planning, Programming and Budgeting System (PPBS) in 1962.5 Within a few years, President Lyndon Johnson extended PPBS to the entire federal government. Performance measurements or performance factors became vital components since they provided much of the quantitative data critical to PPBS analyses. Performance budgeting, as a subset of program budgeting, suffered the same fate as program budgeting in most agencies when the Nixon Administration abandoned it in 1969. However, where remnants of PPBS survived, so did performance measurement; in some respects, the Pentagon led the recent march back to performance budgeting because PPBS remains its operative budget system.

Elsewhere, performance budgeting continued in the backwater of budgeting during the seventies and eighties except for a brief respite as part of ZBB. In the 1990s two events, the Government Performance and Results Act of 1993 (GPRA) and the issuance of the National Performance Review (NPR) later that year, brought it back from exile and placed it in the forefront of budget reform. The former gave performance budgeting a statutory foot in the door which it had previously lacked. The latter signaled political support from an administration that was “committed” to management reform as an answer to an expected era of declining resources and as a way to fend off increasing public concerns about government performance.6

THE RESULTS ACT

GPRA requires federal agencies to develop strategic plans that include “comprehensive mission statements” and “general goals and objectives, including outcome-related goals and objectives.” Supporting these plans are annual performance plans that cover “each program activity set forth in the budget of such agency.”7 Next, annual performance reports look back to the previous year to see if targets have been met and explain any deficiencies. The final step is performance budgeting. While the pertinent legislation offers no standardized definition of performance budgeting, it is described as presenting “varying levels of performance, including outcome-related performance, that would result from different budgeted amounts.” This sounds once again like Verne Lewis’ alternative budgeting.

When GPRA was passed in 1993, the evidence about how difficult it would be to implement was speculative and derived largely from what people said; it did not draw on the experiences of the few practitioners from the 1950s who were still around. The federal bureaucracy was also eager to put a positive “can-do” spin on the law and marched off to perform. Table 1 outlines the implementation schedule and major features of GPRA.

TABLE 1
Implementation Schedule and Major Features of the Government Performance and Results Act

September 1997: Strategic Plans. Agencies submit plans that cover five years and must be updated at least every three years. The plans include a mission statement, a description of how goals and objectives will be achieved, how those goals relate to strategic objectives, and program evaluations.

February 1998: Performance Plans. Annual plans, consistent with the strategic plan, specify the performance to be achieved by each program activity, express goals in measurable form, describe the resources needed to meet goals, and provide a basis for comparing actual and projected performance.
March 2000: Performance Reports. Annual reports review whether goals for the previous year were achieved and, if they were not met, explain the reasons.

February 2002: Performance Budgeting. Beginning with the fiscal 2003 budget submission, OMB plans to fully integrate performance with budget decisions. Initially, OMB will work with agencies to select outcomes for a few important programs, the outputs that influence these outcomes, how much the options cost, and how effectiveness could be improved.

Source: Schick, A., with the assistance of LoStracco, F. (2000). The Federal Budget: Politics, Policy, Process (Rev. ed.). Washington, DC: Brookings Institution; Office of Management and Budget (2001, August). The President’s Management Agenda: Fiscal Year 2002. Washington, DC.

In 1994, Office of Management and Budget Circular A-11 called upon agencies “to include more program performance indicators and performance goals in the budget decision-making process and budget document”; the focus would be toward developing quantitative and qualitative measures of outputs and outcomes (OMB, 1994, section 12.10(C)). While acknowledging the difficulty and complexity of such an undertaking, the Office of Management and Budget (OMB) offered no further specifics to the agencies at that time.

Six years later, OMB issued 65 pages of guidance regarding the preparation and submission of strategic plans in a revised A-11. As was the case with PPBS and ZBB, OMB chose not to issue agency-specific instructions; what emerged was generalized guidance. Goals and indicators should “be expressed in an objective and quantifiable manner” and “be mainly those used by managers as they direct and oversee how a program is carried out” (OMB, 1994, section 12.10(C)). Annual plans “should strike a balance between too few and too many measures” (OMB, 1994, section 12.10(C)). A return to unit costing is encouraged “even if only approximate costs can be estimated.” With this guidance, agencies developed their budgets and GPRA-required plans and reports.

Under President Bill Clinton, OMB claimed that its instructions gave agencies the flexibility they needed from the very start. OMB chose “to encourage agencies to think on their own about what was best for them” (U.S. Congress, 2000a). Under President George W. Bush, OMB will initially “work with agencies to select objectives for a few important programs, assess what programs do to achieve these objectives, how much that costs, and how effectiveness could be improved” (Office of Management and Budget, 2001a, p. 29). Eventually, OMB asserts that “high performing programs will be reinforced and non-performing activities reformed or terminated” (Office of Management and Budget, 2001a, p. 29).

Shortly after issuance of agency performance plans in 1997, reports surfaced that Senator Ted Stevens (R-AK), chairman of the powerful Appropriations Committee, had threatened to impose penalties for poor performance plans (McAllister, 1997). Another Senator is quoted in the article as saying, “Of all the plans we have received, with the exception of NASA’s plan, most are too general and all of them need more work” (McAllister, 1997, p. A17). House Majority Leader Dick Armey (R-TX) issued a report card grading each agency’s performance plan.
Despite the fact that preparing the annual performance plan, which is based on a mission statement, is probably one of the easiest parts of implementing GPRA, the average score was a mere 42 percent. The Department of Transportation received the highest mark, 71 percent, while at the other end of the scale the General Services Administration (GSA) received a paltry 14 percent. GAO found that “most plans did not explain how funding would be allocated to achieve performance goals,” and that “agencies were significantly more likely to have allocated funding to program activities if they showed simple, clear relationships between program activities and performance goals” (General Accounting Office, 1999, p. 2).

By the end of March 2000, when agencies delivered their first GPRA-required annual performance reports, interest from politicians had sharply decreased. Agencies had expended a great deal of time and energy to produce reports that received scant publicity. The most attention came from Sen. Fred Thompson (R-TN), who released a document grading the agencies’ reports. The Department of Transportation again led all agencies with an 85 percent score. For a second time, GSA scored very poorly, but it was replaced by the National Science Foundation as the agency with the worst score, 35 percent (Ellig, 2000, p. ii).

There is further evidence that GPRA’s march has stalled, perhaps buried under the weight of the paper it has generated. The Department of Agriculture’s strategic plan sprawled over 500 pages, its performance plan was another 500-plus pages, and then its performance report was 380 pages (U.S. Department of Agriculture, 1999; U.S. Department of Agriculture, 2000a; U.S. Department of Agriculture, 2000b). In a relatively small agency like the Nuclear Regulatory Commission, those three reports totaled almost 500 pages. To top it all off, GAO has written over 200 products related to GPRA over the past few years.

Politicians and the media have lost interest in GPRA before the final stage, performance budgeting, has fully begun. According to OMB, not a single agency volunteered to participate in pilot tests of performance budgeting in 1998. For fiscal 2001, five agencies of varying sizes agreed to the experiment. As it is in the nature of budgeteers to support more budgeting procedures, OMB plans to forge ahead and begin producing performance-based budgets for fiscal 2003. It does so despite the finding that “after eight years of experience, progress toward the use of performance information for program management has been discouraging and agencies may be losing ground in their efforts to build organizational cultures that support a focus on results” (Office of Management and Budget, 2001a, p. 27). It is one thing to demand performance information from an agency’s budget office in headquarters and another thing to get performance from the employees on the front lines.

NATIONAL PERFORMANCE REVIEW

Shortly after passage of GPRA, Vice-President Al Gore, who was not particularly known for his interest in “reinventing government” as a U.S. Senator, issued the basic NPR report. The report claimed that more than $100 billion could be saved over a few years by “creating a government that works better and costs less” (the actual subtitle of the report) (Gore, 1993).
While the focus of the report was on “streamlining” governmental processes such as budgeting, procurement, and personnel management, the data on how these “savings” were to be achieved were fuzzy at best and actually preceded the specific proposed changes. Regarding performance budgeting, NPR merely stated that to “...invent a government that puts people first,” one of the eight steps is to “develop budgets based on outcomes” (Gore, 1993, p. 7). The report also promoted “an executive budget resolution” that set “broad policy priorities” and allocated “funds by function for each agency.” A case can be made that presidents have been doing this since the Budget and Accounting Act of 1921 mandated an executive budget. NPR also championed another idea from the past, biennial budgets. The report borrowed from Osborne and Gaebler in calling for “mission-driven, results-oriented budgeting” but offered no real explanation of how this apparently new focus in budgeting was related to performance budgeting (beyond demanding effective implementation of GPRA).

NPR also issued a companion report on federal accounting standards, integrating budget, financial, and program information, and streamlining financial services. This report was silent on performance budgeting and did not explain how an improved financial structure is critical to meaningful measurement of performance in the budget. It suggests that a “strong financial management infrastructure” (as the report calls it) is sufficient in and of itself, and necessary regardless of the cost.

Reasons Why Performance Budgeting Will Fail Again

Perhaps an argument could be made for measuring outcomes when Verne Lewis wrote his article on alternative budgeting in the 1950s, when discretionary spending was nearly three-quarters of the federal budget. But the federal government abandoned performance budgeting in the 1960s, during a time when there was more discretion in budgetary decisions. Why should performance budgeting matter today when it did not then?

Growth in Uncontrollable Spending. In some respects, the concern with outcomes appears to have arrived very late in the budgetary game, perhaps too late to do much good. If we discuss the budget in terms of net interest, mandatory spending, and discretionary spending, it appears that the primary area where performance outcomes could influence budgeting is in the ever-shrinking last category. In fiscal year 1962, net interest expenditures were 6 percent of the budget and mandatory spending was 26 percent; only 32 percent of the budget was “relatively uncontrollable” (see Figure 2). Four decades later, the relatively uncontrollable share has more than doubled (approximately 65 percent of the budget will be spent on net interest and mandatory programs), and this upward trend will continue for the foreseeable future unless major policy changes occur.

In other words, two-thirds of the budget is required regardless of outcomes because current law requires these activities. Interest on the debt will be paid with or without any further analysis, although there may be some options as to how and using what instruments (process-related decisions). Hence, this part of the budget must simply be excluded from performance budgeting. For mandatory programs, several questions arise. Can or should government try to determine the outcome of spending for pensions or medical care? Is the outcome longer life? If so, how much longer?
Is it a better quality of life for the elderly? If so, how much better? Is there some benchmark of how much health and happiness should be achieved for varying levels of outlay? Even if such data could be developed, would anyone seriously use it to support decision-making or set funding levels for Social Security, Medicare, and Medicaid? We get as much outcome as we are willing to pay for; the budget will drive policy, not the other way around.

FIGURE 2
Composition of Federal Outlays, 1962, 1982, 2002 (in percent)
[Charts not reproduced; they show the percentage shares of defense discretionary, nondefense discretionary, net interest, and programmatic mandatory outlays in 1962, 1982, and 2002 (estimated).]
Source: Office of Management and Budget (2001). Budget of the United States: Historical Tables, Fiscal Year 2002, Tables 6.3 and 8.1.

What is true for Social Security and medical entitlements is largely the case for the other major entitlements. They have become “rights” and are not dependent on outcomes. Rights do not exist because they are used wisely, effectively, or efficiently; they are not subject to sound management. Citizens are simply entitled to these benefits based upon existing laws that usually change infrequently. But even when Congress reconciles an entitlement law on a regular basis, as it has done with Medicare, the entitlement offers basically the same benefits year after year.

As the mandatory portion of the budget grows each year, the discretionary portion subject to annual appropriation shrinks (see Figure 2). But to assume that all discretionary appropriations are a matter of choice is insupportable. How much defense is really open to preference? Whether we do or do not have a Federal Bureau of Investigation, Immigration Service, Environmental Protection Agency, or Treasury Department is a matter of choice only in the most marginal terms. Incremental adjustments of a few percent certainly are possible, but one must wonder whether it is worth all this “analytical” effort. Do we believe that, as the portion of the budget subject to discretionary appropriations dwindles, measurement of outcomes is going to play a larger role in which programs continue and which terminate? Discretionary programs (beyond defense) that survive for the next quarter century will remain in existence because they have political support or their own revenue source to keep them going, not because of quantitative measures of outcome.

Political Will. Relating outcomes to what is spent on discretionary programs should play a role in deciding whether to continue, increase, or terminate such programs. While this makes for wonderful theory, it is not how budgetary decisions are made. Quantitative data is not powerful enough to replace political support and political judgment. Performance budgeting cannot thwart politicians, interest groups, and organized constituencies that doggedly defend the budget’s largesse. When performance budgeting is initiated for discretionary programs, politicians may find it useful in making decisions, and federal managers may see a practical application in reaching “more informed” decisions. Hence, when it produces results with which politicians and federal managers agree, those politicians and managers will support performance budgeting. When it produces results with which politicians and managers disagree, however, those results are likely to be ignored. In the end, performance budgeting cannot change many minds.
If this is the way it will play out, is it worth the massive effort? The most likely use for performance budgeting would be to give politicians cover for those things they have already decided to do or do not want to do. Remember, it was a Democratic-controlled Congress that passed GPRA, but a Republican-controlled Congress then apparently accepted GPRA to serve its own political ends. This suggests that GPRA is a political tool, not an analytical device.

Inherent Difficulties

The underlying complaint of NPR is that past analytical efforts in the government focused on process rather than outcome; the end focus of performance budgeting should be what government has achieved (outcome), not how it goes about achieving it (process). GPRA and NPR want to carry this one giant leap further. The catch is that they do not acknowledge that identifying outcomes is much more difficult than measuring how much government spends and what the output of government spending is in terms of goods and services. The techniques, management structures, and imagination needed for identifying and measuring outcomes are not out there just waiting to be tapped by policy analysts. How do we know that what happened was a result of a government program? In short, cause and effect will be the perennial problem. Much has already been written about the difficulty of measuring outcomes due to:8

- the long-range nature of many programs,
- the fact that many programs are conducted by second and third parties (primarily contractors and local governments) who use, if at all, different measures, and
- the uncertainty of what outcomes the government is trying to achieve in the first place.

But even when agencies are able to quantify performance, the Congressional Budget Office (CBO) concluded that few federal agencies use performance measures to reallocate resources among programs and functions. CBO acknowledged the difficulty in “...finding a way to apply these performance measures to the allocation or management of resources in the public sector” and that the “...road to improving federal performance and tying its measurement to the budget process is studded with obstacles” (Congressional Budget Office, 1993).

For years, analyzing budgets in terms of output and unit cost was viewed as a laudable achievement of performance budgeting (i.e., linking inputs or resources to outputs or goods and services). Although measuring outputs is easier, especially compared to measuring how programs achieve outcomes, a direct link between the two is necessary for performance budgeting. Unless a credible link can be made, all that is left is process. How long does it take to process and issue welfare checks? How many Social Security checks are received on time? How many checks are delivered to dead people? How much fraud, waste, and abuse is there? How well government performs these administrative functions in implementing public policy should not be denigrated, as NPR appears to do. It is important and should not be shrugged off as merely “measuring process.” Once policy decisions are made, all that is left is public administration to carry out the day-to-day, year-after-year tasks. For civil servants simply delivering goods and services to citizens, process matters more than outcomes.
In 1887, Woodrow Wilson wrote that “administration is the most obvious part of government; it is government in action; it is the executive, the operative, the most visible side of government, and is of course as old as government itself” (Wilson, 1887, p. 198). For many programs, the added complexities of performance budgeting are unnecessary.

Next, there is the verification issue. Can outcomes be measured in an objective manner when these types of measures are not necessarily ‘scientific’ or value free? There is little question that the incentive exists for agencies to make themselves look good by “claiming” positive results, just as presidents take credit for a booming economy and mayors take credit for everything from declining crime rates to rising SAT scores. Moreover, OMB is beholden to agencies for much of the information it needs on how well programs are performing (Schick, 2001). Unless outcomes can be verified independently (by a GAO-type agency), they will remain suspect. But what outside party can validate outcomes by replicating all of the calculations? The scope of this step seems to be overlooked by the proponents of GPRA. It is questionable that two honest analysts using the same resource and workload data will always agree on outcomes. We should probably focus on things that will be useful for managers in carrying out their duties. Data that will be used simply to “inform” politicians, the press, and the public will always be questionable; outcomes that managers themselves use can be trusted. In effect, this is why GPRA needs to be a management tool much more than a budget tool.

Whether the pursuit of outcomes is achieved in any meaningful manner also depends on the benefits to be achieved by the players—it is a matter of incentives. Will information on program outcomes be of importance to decision-makers? Will it make for better choices? Will it be of importance to federal managers? If managers are able to discover cost savings in their current processes, they lose these savings at the end of the fiscal year. Similarly, if managers are able to make do without expending their entire appropriation, they will most likely receive less funding the following year. But those that have cost overruns in one year may receive additional funding the following year. The result is rewarding managers who overspend with additional funds while punishing those who are under budget (Melese, 1999). So what do managers see in this for them? In short, the proper incentives for all interested parties are missing. It is no wonder that Rep. Steve Horn (R-CA) recently said, “Widespread, enthusiastic leadership is still missing at all levels of government—from Congress to the Office of Management and Budget and each federal department and agency” (U.S. Congress, 2000b).

Improved Financial Management

Is there anything positive to say about the drive to performance budgeting? First, agencies and OMB must take the proper steps and not shrug performance budgeting off as simple and obvious. Performance budgeting and measurement begin with the development of verifiable, standard accounting systems that provide the agency with real-time data for financial control, budget preparation, and managerial purposes. Horror stories abound about the lack of good accounting systems in the federal government; regardless of performance budgeting, we need to overcome this shortcoming.
The NPR Financial Management report recommended comprehensive accounting standards, but this only begins the process. The NPR report talked about fully integrating budget, financial, and program information, but focused on OMB circulars as the proper vehicle for achieving this. If the answer is that simple, one has to ask, “What took so long?”

Once an accounting system is in place, workload measures or performance factors need to be identified and collected. Many federal agencies gather performance measures, but determining how to assemble and use them is critical. Settling on the proper performance measures is controversial and the subject of much disagreement within agencies. There is even a strong recognition that some programs, such as research and development, probably have no meaningful performance measures. In fact, most programs have multiple measures of performance. In those cases the problem becomes how many measures should be gathered, evaluated, and reported. In an entity the size of a federal agency, the sheer volume of data collected can defeat most attempts to utilize such data. Witness ZBB, which in most respects drowned itself in paper.

Most federal accounting systems record how and when money is spent. Recording the financial aspects of governmental activity is only half of what is needed; the other half is workload or performance measures. Both financial and performance measures need to be recorded at the same time, by the same transaction. If dollars and workload are recorded separately, the problem of linking them will always exist. And when linkage is not made, it will be estimated by necessity. Numbers will be forced to match each other whether accurate or not. Any experienced budgeteer knows that budget numbers must match in order for an activity budget to convey the story that management wants told. Budgeteers also know how to make workload and dollars match because much of these data cannot be verified by outside sources. Budget examiners on the Hill or at OMB are not likely to recalculate budgetary computations because of the sheer magnitude of the task. For agencies, there is safety in numbers—the more the better. As long as the numbers are developed rationally, they can probably be justified, regardless of accuracy.

Establishing compatible resource and workload systems for budgetary purposes takes time. Since virtually all budgetary data are comparative—this year to last year, actual versus planned, budgeted versus appropriated—data for several years are required. Systems will need to be checked and rechecked for accuracy, utility, and timeliness. But given the proper commitment of resources, an accounting system recording financial obligations and workload performed could be developed.

CONCLUSION

Knowledgeable observers agree that a serious need exists to link what any government does to what it costs. But even if this link is achieved, it still will not answer the question of why the government has undertaken such a myriad of programs. And for this answer, one cannot look to performance budgeting. The public sector lifted agriculture from the doldrums of the Great Depression through programs such as subsidies and government-backed insurance. Today, U.S. agriculture is the most productive in the world; roughly three percent of the U.S. labor force feeds this entire nation and supplies enough to export. But many agriculture programs have countless flaws and create perverse economic incentives.
New Deal-era programs continue in the budget despite efforts, such as the Freedom to Farm Act of 1996, to replace subsidies with a more efficient, market-based system. Even with that “landmark” law in place, politicians poured in billions of “emergency” money, pushing agriculture spending to historic levels. Data on program performance is not a strong enough tool to successfully counter constituent forces that demand programs continue, if not expand.

Simply changing the format of the budget will not change budget outcomes, the budget process, or the behavior of managers. The federal government should not leap ahead with GPRA and institute performance budgeting governmentwide. Not only are the costs of doing so significant, but agencies are not ready or able to do performance budgeting. Certainly Congress will not change its perspective on how funds are to be appropriated; it still demands all of the details that have been asked for over the years, and now voluminous GPRA-related materials are piled on as well. While at least 37 laws enacted in the 105th Congress contained performance-related provisions (Congressional Research Service, 1998), agencies and programs that were performing well were slapped with across-the-board rescissions in 2000 and 2001—no differently than poorly performing agencies and programs. Following a decade that saw a steep rise in omnibus legislation, in which eleventh-hour negotiations decided what substantive provisions were crammed in (with many members voting on the legislation without knowing the details), performance budgeting would not matter. It would be reduced to a waste of time and energy.

On the other hand, to the extent that GPRA and the drive to performance budgeting lead to better accounting systems that are useful in gathering performance measures and identifying outcomes for managerial use, they are a positive step in federal management. But we have yet to achieve that step. In their first performance reports, the three things that agencies did worst were supplying cost data, assessing the reliability of their data, and demonstrating that agency actions actually made a difference in the performance measures (Ellig, 2000). Once agencies have reliable cost information and Congress decides to use this information analytically, a clear message will be sent. This will benefit management and employees as they start thinking in terms of mission, goals, and effectiveness of goal achievement. What we should avoid at all costs is political gamesmanship and gimmicks that produce a mountain of paperwork with little visible improvement in government, and much waste and lost time generating reports.

Finally, the last decade has seen so many attempts at reform that there has been a case of overload, as civil servants have been pushed hither and yon to improve the functioning of the bureaucracy. It is time to put aside administrative and management reforms that ask how to do the things we presently do in a better way. We should raise the ante up the ladder of government reform, from organizational and structural relationships, to management-driven and policy-related program reforms, and finally to the political system itself.

NOTES

1. The views expressed in this paper are those of the authors and should not be interpreted as those of the George Washington University or the Congressional Budget Office.
2. International spending comprises a small percentage of nondefense discretionary spending; since 1982, it has comprised less than two percent of total discretionary spending.

3. The caps were enacted in the Omnibus Budget Reconciliation Act of 1990, and extended in 1993 and again in 1997.

4. Despite this surge in discretionary spending, growth in spending on mandatory programs has exceeded the growth in discretionary spending.

5. PPBS could claim as its forerunners the first Hoover Commission, the 1949 National Security Act Amendments, contractual work performed by the Rand Corporation, and Hitch and McKean’s The Economics of Defense in the Nuclear Age.

6. One could make a case that a third event, the passage of the Chief Financial Officers Act of 1990 (P.L. 101-576), should be added to this list. The CFO Act, however, was not directly aimed at budgeting but at the broader area of financial management. Moreover, agency CFOs are not the proper people to develop systematic performance measures for their agencies (as the law requires) because participating in budget preparation hampers their ability to review subsequent performance in an objective manner.

7. See the Government Performance and Results Act, P.L. 103-62, Section 6, 119(b).

8. For numerous studies on these difficulties, see the General Accounting Office’s website at http://www.gao.gov.

REFERENCES

Congressional Budget Office. (1993, July). Using Performance Measures in the Federal Budget Process. Washington, DC: Author.

Congressional Research Service. (1998, December). Performance Measure Provisions in the 105th Congress: Analysis of a Selected Compilation. Washington, DC: Author.

Ellig, J. (2000, May). Performance Report Scorecard: Which Federal Agencies Inform the Public? Washington, DC: Mercatus Center at George Mason University.

Fenno, Jr., R. (1966). The Power of the Purse: Appropriations Politics in Congress. Boston: Little, Brown.

General Accounting Office. (1999). Performance Budgeting: Initial Experiences Under the Results Act in Linking Plans with Budgets (GAO/AIMD/GGD-99-67). Washington, DC: Author.

Gore, A. (1993). From Red Tape to Results: Creating a Government that Works Better and Costs Less (Report of the National Performance Review). Washington, DC: Government Printing Office.

Heclo, H. (1984). “Executive Budget Making.” In G. Mills & J. Palmer (Eds.), Federal Budget Policy in the 1980s (pp. 262-270). Washington, DC: Urban Institute.

Lee, Jr., R., & Johnson, R. (1983). Public Budgeting Systems (3rd ed.). Baltimore: University Park Press.

Lewis, V. (1952, Winter). “Toward a Theory of Budgeting.” Public Administration Review, 12 (1), 43-54.

McAllister, B. (1997, June 25). “Sen. Stevens Threatens Penalties for Agencies: Low Quality of Performance-Measuring Plans Disturbs Senate Appropriations Chairman.” Washington Post, A17.

Melese, F. (1999, Spring). “The Latest in Performance Budgeting: The Government Performance and Results Act.” Armed Forces Comptroller, 44, 19-22.

Office of Management and Budget. (1994). Circular A-11. Washington, DC: Author.

Office of Management and Budget. (2001a). The President’s Management Agenda, Fiscal Year 2002. Washington, DC: Author.

Office of Management and Budget. (2001b). Budget of the United States, Fiscal Year 2002. Washington, DC: Author.

Schick, A. (1990). The Capacity to Budget. Washington, DC: Urban Institute.

Schick, A. (2001, April). “The Changing Role of the Central Budget Office.” OECD Journal on Budgeting, 1 (1), 9-26.

Seckler-Hudson, C. (1953, reprinted in 1978). “Advanced Management.” In A. Hyde (Ed.), Government Budgeting (pp. 5-9). Pacific Grove, CA: Brooks Publishing.

United States Congress, Committee on Rules, Subcommittee on Rules and Organization of the House. (2000a). “Testimony of Joshua Gotbaum, Executive Associate Director and Controller, OMB.” The Government Performance and Results Act and the Legislative Process of House Committees. Washington, DC: Author.

United States Congress, Committee on Rules, Subcommittee on Rules and Organization of the House. (2000b). “Remarks by Rep. Steve Horn.” The Government Performance and Results Act and the Legislative Process of House Committees. Washington, DC: Author.

United States Department of Agriculture. (1997). Strategic Plan for Fiscal Years 1997-2002. Available: http://www.usda.gov/ocfo/strat/index1.htm.

United States Department of Agriculture. (1999a). Performance Plan for Fiscal Year 1999. Available: http://www.usda.gov/ocfo/ap1999/apcontnt.html.

United States Department of Agriculture. (1999b). Performance Report for Fiscal Year 1999. Available: http://www.usda.gov/ocfo/ar1999/arcontnt.html.

Wildavsky, A. (1964). The Politics of the Budgetary Process. Boston: Little, Brown.

Wilson, W. (1887, June). “The Study of Administration.” Political Science Quarterly, 2, 197-222.