Educational Administration Quarterly, Vol. 37, No. 4 (October 2001), 571-599

Moving From Bureaucratic to Market Accountability: The Problem of Imperfect Information

Gregg Garn

Accountability is a complex idea frequently invoked as political rhetoric. Accordingly, a theoretical framework that includes bureaucratic, performance, market, and professional models of accountability is necessary to analyze the accountability system created for Arizona charter schools. Using a qualitative case study that relied on observations, interviews, and document analysis, this case indicated that valid and reliable performance data are not an intrinsic element of a school choice program. Thus, mechanisms that capture and distribute accurate, accessible, and wide-ranging information about schools of choice will assist consumers in making informed decisions when selecting among schools.

Author's Note: I would like to thank Robert Stout, Mike Langenbach, Casey Cobb, Katrina Bulkley, the anonymous reviewers, and the editors of Educational Administration Quarterly for their insightful comments in preparing the final manuscript.

Throughout the 1980s and into the early 1990s, policy makers in Arizona were not satisfied with the ineffective reform efforts of the traditional district-run school system. Interviews with leading legislators and the sheer volume of policies aimed at enhancing accountability in public schools confirmed that state policy makers were frustrated with the status quo. In 1994, state legislators approved plans for an alternative model of public education in the form of charter schools with innovative accountability mechanisms. Tom Patterson, a sponsor of the charter legislation in the Arizona Senate, articulated the central motivation for the charter reform:

That charter schools are in a way a test of an entirely different accountability method which is decentralized, which depends, rather than on bureaucratic rules and regulations, on first of all these being schools of choice. It's accountability that comes from the parents and the consumers.1

Charter schools are nonsectarian, publicly funded schools, free of most bureaucratic rules and regulations, established through a written contract between a sponsor and an applicant. Through the charter school legislation, Arizona legislators intended to promote competition among public schools by giving parents and students choices about which school to attend rather than compelling them to attend the neighborhood district school. Advocates argued that charter schools would break up the district school monopoly: parents would select the school that best met their children's educational needs. They reasoned that charter schools would be more sensitive to customer demands or they would risk losing students and the per-pupil allocations that followed each child. Moreover, losing too many students would result in closure. Charter school advocates predicted that, over time, charter schools would be a lever to force change in district public schools. Parents, voting with their feet, would force district public schools to improve, thus creating a system of high-performing public schools. Leading policy makers theorized that moving to a market-driven system would ensure greater levels of accountability.
When putting this theory into practice, authors of the Arizona charter school legislation included several key provisions to encourage the development of a charter school market. First, they established multiple charter school authorizers: the State Board of Education (SBE), the State Board for Charter Schools (SBCS), and the board of education for a public school district (Arizona Revised Statute [ARS] §15-183C, 2000). Second, they placed no limit on the total number of charter schools that could open each year and allowed multiple sites to spawn from a single charter (ARS §15-183C2, 2000). Third, they allowed any individual or group (private or public) to operate a charter school (ARS §15-183B, 2000). Fourth, they created a 15-year contract between the charter school and the sponsor (ARS §15-183I, 2000). Fifth, they exempted charter schools from the statutes and rules relating to schools, governing boards, and school districts (ARS §15-183E5, 2000). Sixth, they allowed charter school operators to own the property purchased for the school (ARS §15-183U, 2000). Finally, they provided charter schools with complete financial (ARS §15-185, 2000) and legal autonomy (ARS §15-183D, 2000). These key provisions provided the freedom from bureaucratic regulations and the autonomy to innovate that appealed to many educational entrepreneurs.

According to Arizona Department of Education data, by the spring of 2001, 455 charter school sites were operating in the state (http://www.ade.state.az.us/charterschools/search/SiteList.asp), approximately 22% of the national total (The Center for Education Reform, 2001). Overwhelmingly, these charter schools are located in urban settings. (See Figure 1.)

[Figure 1: Arizona Charter School Density by County]

Of Arizona charter schools, 66% are located in Maricopa and Pima counties, which include the Phoenix and Tucson metropolitan areas. Although the state allocates $174 per student for transportation, charter schools are not required to provide transportation to students. Many schools located in urban areas provide students with city bus passes or organize parental carpools to help with transporting children to and from schools. For the most part, parents have the sole responsibility for transporting their children to charter schools.

However, little is known about the socioeconomic status (SES) of parents who chose charter schools, which obscures our understanding of families who may be constrained from selecting schools that best meet their children's academic needs. U.S. Census Bureau data on median family income and the percentage of children in poverty are not especially useful because Arizona's 15 counties are extremely large. A 1999 study published by the U.S. Department of Education, titled The State of Charter Schools Third-Year Report, sheds some light on the demographics of charter school parents. The report concludes that Arizona charters enroll nearly the same percentage of White students (56.1% in charters and 56.7% in district schools) and nearly the same percentage of students eligible for free and reduced lunches (39.4% in charters and 40.1% in district schools) as public schools in the state (http://www.ed.gov/pubs/charter3rdyear/C.html). However, the report did not examine whether White students and those who qualify for government-subsidized lunches were concentrated in a few schools or distributed across all charter schools.
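The distinction between aggregate parity and school-level concentration has a standard answer in the segregation literature. The sketch below is illustrative only (it is not part of the original study, and the enrollment counts are invented): a dissimilarity index of 0 indicates a student group spread evenly across schools, while values near 1 indicate concentration in a few.

```python
# Hypothetical illustration: the dissimilarity index D distinguishes even
# distribution from concentration. All enrollment counts below are invented.

def dissimilarity_index(schools):
    """schools: list of (group, other) student counts per school.
    Returns D in [0, 1]: 0 = evenly distributed, 1 = completely separated."""
    total_group = sum(g for g, _ in schools)
    total_other = sum(o for _, o in schools)
    return 0.5 * sum(abs(g / total_group - o / total_other) for g, o in schools)

# Both hypothetical systems enroll the same 40% share of group students
# overall (120 of 300), yet they differ sharply at the school level:
even = [(40, 60), (40, 60), (40, 60)]        # spread evenly across schools
clustered = [(100, 20), (10, 75), (10, 85)]  # concentrated in one school

print(dissimilarity_index(even))       # 0.0   -> no concentration
print(dissimilarity_index(clustered))  # ~0.72 -> heavy concentration
```

Identical system-wide percentages are thus compatible with very different school-level distributions, which is precisely the question the federal report left unexamined.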
By all accounts, policy makers were successful in creating additional choices for parents and students who live in densely populated counties. Rarely does the rhetoric of politicians translate smoothly into practice, however, and the establishment of charter schools in Arizona was no exception. Although policy makers created a thriving charter school market, they neglected to articulate the details of a new market-based accountability system. As a result, personnel at the SBE, the SBCS, the Arizona Department of Education (ADE), and the auditor general's office struggled to understand their roles in the accountability system for charter schools. This role confusion among formal implementation agencies led to persistent information problems for consumers.

To analyze the accountability system for Arizona charter schools, I developed a heuristic framework to evaluate how schools were being held accountable. This framework included a typology of four distinct types of accountability distilled from the literature: bureaucratic, performance, market, and professional. Moreover, this study examined how policy makers developed various accountability mechanisms but failed to consider how the mechanisms would interact. The failure to consider the outcomes of the accountability mechanisms had consequences for parents attempting to hold charter schools accountable. After reviewing the accountability literature, this article addresses three primary research questions:

1. How did state agencies cope with the charter school reform from 1994 to 1998?
2. What types of accountability were present in the Arizona charter school system?
3. What are the implications for consumers in a market-driven accountability system?

METHOD

The research questions lent themselves to an exploratory and descriptive qualitative case study methodology (Merriam, 1998; Stake, 1995; Yin, 1994) that incorporated three research methods: document analysis, observations, and focused interviews. The theoretical framework for this case focused on the issue of charter school accountability. Reviewing the scholarly literature indicated that there were many distinct models of accountability. Accordingly, while collecting and analyzing data, I was looking for evidence of various types of accountability. Sources included documents, observations, and interviews with policy makers.

Documents obtained from interviews, site visits, meetings, library searches, and public records were gathered and analyzed. Documents reviewed for this study included newspaper articles from the Arizona Republic, the Phoenix Gazette, and the Arizona Business Gazette; the minutes of the Committee on Education for the Arizona State Senate and the House of Representatives during the first and second regular sessions of the 43rd legislature; charter applications from schools that were approved and operating during the 1995-1996 and 1996-1997 school years; charter school report cards; ADE site visit monitoring notes; and the Arizona charter school statute. With the exception of the charter school applications, the legislative minutes, and the site visit monitoring notes, all documents were available electronically. This allowed me to explore thousands of pages of text quickly through targeted searches for documents that contained the key words charter school and/or accountability in the title or text. The documents served three important functions. First, they were used to provide background for the interviews.
Second, they guided observations of important actors. Third, the documents served as a confirmatory source, used to verify and strengthen data from other sources after interviews and observations were conducted (Yin, 1994).

Stake (1995) maintained that observations focused on the important issues of a case increase the researcher's understanding. For this study, essential actors were observed in four different contexts: 3 meetings of the SBE, 5 meetings of the SBCS, 10 meetings of the Committee on Education in the Arizona State Senate, and 9 meetings of the Committee on Education in the Arizona House of Representatives. I targeted these meetings specifically because they were the primary public venues that allowed leading policy makers a public forum to discuss and defend their views on charter school accountability. Elements from the physical setting, the participants, the activities and interactions, the frequency and duration, and more subtle factors outlined by Merriam (1988, pp. 90-91) structured the observations. My role in the field was clearly toward the observer side of the participant-observer continuum (Gold, 1969).

In 1998, focused interviews were conducted with 24 individuals purposefully selected from the SBE, the SBCS, the ADE, the auditor general's office, and the Arizona state legislature using the snowball sampling technique (Patton, 1990). The criterion for selecting legislators and bureaucrats alike was that they were referenced by others as experts on the charter school policy. Interestingly enough, legislators frequently mentioned the names of bureaucrats and vice versa. When interviewing department of education and auditor general's office personnel, in addition to sponsoring board staff members, techniques articulated by Spradley (1979) were employed. A different approach, based on the ideas described by Hertz and Imber (1995), was utilized when interviewing political elites. Interviews lasted from 30 to 90 minutes and followed a semistructured protocol, which used targeted questions that focused on the central themes of the study. However, the protocol was open-ended enough that the respondents were able to discuss issues they felt were particularly relevant to accountability in charter schools (Lincoln & Guba, 1985).

Sixteen of the interviews were recorded on audiotape and transcribed verbatim. Eight interviews were conducted over the telephone and documented through extensive notes. Within 2 weeks of the interview, participants received a copy of the interview transcripts and/or notes. They were given the opportunity to clarify and comment on the interview transcript. Offered the choice of releasing or protecting their identity, legislators and sponsoring board members were typically interested in getting credit for their opinions and ideas. Others were less attracted to the publicity; consequently, job titles replaced names to convey a respondent's background and insight.

Miles and Huberman (1994, p. 10) defined the three components of data analysis as data reduction, data display, and conclusion drawing/verification. Through multiple readings of the data, information was coded and sorted into four categories derived from the accountability framework. Other codes emerged from the data as the study evolved, and concepts established in the anticipatory data reduction process were refined.
To manage the large volume of information collected in this study, I created a matrix with information from all three sources (see Miles & Huberman, 1994, p. 11) to better understand the data. The final aspect of analysis included conclusion drawing and verification. My understandings were continually tested against the rest of the data and verified or discarded. "The meanings emerging from the data have to be tested for their plausibility, their sturdiness, their confirmability—that is, their validity" (Miles & Huberman, 1994, p. 11). The conclusions presented in this study were verified using alternative data sources. Each of the three data sources was used to check the interpretations and conclusions. As a final check of reliability and credibility, three researchers familiar with the Arizona charter school context and political environment reviewed the manuscript. All three confirmed the plausibility of the conclusions.

This research faced two important limitations. Although this case provides a detailed account of Arizona, time and expense prohibited a multistate comparison. Accordingly, theoretical questions rather than a concern for representativeness drove the data collection process (Miles & Huberman, 1994). A second limitation is the lack of interviews with consumers (i.e., parents and students) to understand their perceptions of the amount and quality of charter school information. The ADE, the state sponsoring boards, and charter school directors were unwilling to release identifying information about students enrolled in charter schools despite repeated requests.

AN ACCOUNTABILITY TYPOLOGY

Few people argue against increasing accountability in public education. Consequently, proponents of the charter school reform (among others) frequently use accountability as an effective rallying cry. Although accountability is a term used extensively in the popular press and educational reform literature, it has multiple meanings. However, few people recognize the complexity associated with this concept, and most use the term assuming that it has a single accepted definition—their definition. Browder (1975, as cited in Kirst, 1990) completed an extensive review of the accountability literature and concluded:

1. There are no commonly agreed upon definitions. The range is from simply holding someone responsible for doing something to highly detailed technical specifications.
2. As a concept, accountability needs refinement. Confusion abounds among such terms as general accountability, institutional accountability, and technological accountability. There is no common framework to organize the vast array of frameworks.
3. Accountability has become highly politicized. Various groups who can be held accountable attack the concept and pounce on malfunctions in order to discredit it. (p. 6)

The relevant literature is replete with discrete definitions of accountability (see Darling-Hammond, 1988, 1989; Gintis, 1995; Glass, 1972; Hansen, 1993; Levin, 1974; Macpherson, 1996; Newmann, King, & Rigdon, 1997; Richards, 1988). Each of these authors offers a definition of accountability that differs in slight but meaningful ways. Accordingly, more precise accountability models have evolved. Levin (1974) identified four strands of accountability: (a) performance reporting, (b) technical process, (c) political process, and (d) institutional process (p. 364).
Kogan (1986, as cited in House, 1993) presented three models of educational accountability: (a) state or public control, (b) professional control, and (c) consumer control (p. 35). Darling-Hammond (1988) articulated five forms of accountability: (a) political, (b) legal, (c) bureaucratic, (d) professional, and (e) market (p. 61). Kirst (1990) distinguished six types of educational accountability: (a) performance reporting, (b) monitoring and compliance with standards or regulations, (c) incentive systems, (d) reliance on the market, (e) changing the locus of control, and (f) changing professional roles (pp. 7-10).

For this study, a typology including bureaucratic, performance, market, and professional accountability models was created from the educational research literature. In turn, these four types of accountability were deployed to analyze the Arizona charter school program:

1. Bureaucratic accountability is based on procedural compliance with established standards and regulations, evaluated by local, state, or federal bureaucrats analyzing compliance reports and/or monitoring at the school site (see Cuban, 1988; Darling-Hammond, 1988; Kirst, 1990).
2. Performance accountability is based on data from various indicators that may be used to stimulate action and monitor compliance, and may include rewards or sanctions (Levin, 1974; Rivera, 1994).
3. Market accountability is based on student/parental choice among schools. Good schools demonstrate accountability by attracting students and maintaining enrollment; bad schools are held accountable by parents who leave. Market accountability is predicated on the assumption that consumers have access to product information. It is measured by consumer participation and could be recognized through waiting lists or attendance records that indicate increases or decreases in student enrollment (Chubb & Moe, 1988, 1990b; Darling-Hammond, 1988; Kirst, 1990).
4. Professional accountability is based on educators demonstrating to their peers that they have the appropriate knowledge, values, and skills to ensure competence and serve the public interest (Darling-Hammond, 1988; Firestone & Bader, 1992; Rivera, 1994).

Creating a System

Bureaucratic, performance, market, and professional accountability have particular strengths and weaknesses in producing various outcomes. Kirst (1990), Darling-Hammond (1989), and House (1993) theorized that the various models must be integrated into an accountability system to counteract the limitations of each type. Kirp (1982) further argued that different forms of accountability frequently conflict with one another when present within the same accountability system:

Professionalism, legalism, bureaucratization, and politicization pull and tug against one another. . . . Problems arise when one or another framework becomes too powerful—for instance, when legalism engulfs in procedural snarls questions that may either be unresolveable or better resolved less formally, when professionals deprive parents of effective voice in decisions concerning their children, or when bureaucratic rules undermine the exercise of wise professional discretion. Policy remedies take the form of redressing the balance among these frameworks. (p. 173)
However, because the American public school system has historically relied on bureaucratic accountability mechanisms, our lack of experience makes it difficult to anticipate the interactions of professional, performance, and market mechanisms. Our knowledge of market mechanisms as they apply to education is especially weak, rooted more in economic theory than in empirical data. Boyd and Lugg (1998) provided a comprehensive analysis of the evidence that informs our understanding of market-based school choice programs. They argued, "Depending on how they are designed and regulated, choice policies can either promote desirable reforms or cause serious problems" (p. 2). They went on to quote Glenn's (1989) conclusions from an international analysis of school choice programs:

The experience of other nations yields no conclusive evidence that parent choice has a decisive effect, either positive or negative, on the quality of schooling. Evidence is extensive, however, that choice may have either a positive or negative effect upon equity, depending on how the process is structured and what incentives are included for ethnic and class integration. (cited in Boyd & Lugg, 1998, p. 2)

Yet policy makers are rarely impeded by a lack of compelling empirical evidence. Although thoughtfully designed accountability frameworks ensure that the various accountability mechanisms required of schools complement rather than detract from each other, legislators often fail to map out the potential interactions of various accountability models. Frequently, this is viewed as a detail for implementers.

POLICY MAKING AS TRIAGE

In this case, formal implementers faced significant obstacles when attempting to design a coherent accountability system for charter schools. The Arizona state legislature approved charter school legislation on June 17, 1994. During the 90 days before the statute took effect, the ADE was inundated with charter applications. Only 11 months after the legislation was approved, the first charter school began instructing children. Struggling to cope with the time pressures and the large number of applicants, ADE staff developed a triage mentality: deal with the most critical problem first, and then go on to the next most serious issue.

When the ink was not even dry on the legislation that allowed charter schools to take place . . . there were people waiting with applications to have charter schools. . . . It was many people. So they never had the opportunity to set these systems up, and we haven't had a breather to be able to do it. Wouldn't it be wonderful if we could just stop and use this year for planning and getting it all together, and then start chartering again? But I don't think that's going to happen either. . . . It would be wonderful, but we'll never have that luxury.2

In addition to planning on the fly, a personnel shortage created an additional obstacle. The charter statute assigned two state sponsoring boards, the SBE and the SBCS, responsibilities for approval and oversight of charter schools. Policy makers limited funding for administrators to staff both of the state boards, in part to discourage a new bureaucracy from developing.

So, there are two things that happened in the legislation. . . . One, there was no planning time for the Department to develop the program; and, two, there was no appropriation tied to the bill for the legislation.
And here I have people lining up saying, "I'm all ready to open up my charter school now." What are you going to do?3

The lack of appropriations for administrators posed a significant challenge to both sponsoring boards when attempting to meet their approval and oversight obligations. Overwhelmed with responsibilities and lacking staff, ADE personnel were compelled to share basic approval and oversight responsibilities with the sponsoring boards during the first application cycle (without additional funding).

However, the ADE was unable to offer consistent assistance to the state sponsoring boards due to a concurrent downsizing. Elected superintendent of public instruction in November 1994 (2 months after the charter legislation went into effect), Lisa Graham-Keegan quickly followed through on her pledge to cut the bureaucracy at the department of education. The year before Keegan took office, the ADE had 460 full-time staff members; by 1996, she had reduced that number to 231 (personal communication, ADE, payroll division, April 1998).

As I recall, one person and a secretary that first year were trying to handle all of that, all of the application proceedings and getting them answered, getting them to board members. . . . And I remember it was overwhelming. . . . And so by early fall we were scrambling to get stuff in place and get the procedures together and get the applications written up and those kinds of things.4

While the ADE was taking on additional responsibilities for charter schools, the reduction in staff stymied the development of a proactive policy for charter school accountability. Consequently, contacts with charter schools evolved based on addressing immediate problems rather than anticipating future ones. The executive director of the SBE summarized the situation:

We have never been able to get out of the reaction stage, you know, we have never been able to think. We never had time to think far enough ahead to what is the next step that is going to come. . . . So every time we have been so involved with a particular stage of development, that we have done no long-term planning or anticipating for what we have to do later.

Reactive planning, a lack of funding for administrative staff, and a concurrent downsizing at the ADE combined to hinder the development of a well-planned accountability system for charter schools by formal implementers.

FORMAL IMPLEMENTATION AGENCIES

However chaotic the birth of the Arizona charter school policy may seem, the result was concordant with the spirit of the legislation: Charter schools were accountable to parents and students, not bureaucrats. The following three sections illustrate the difficulties faced by formal implementation agencies when moving toward a market-based accountability system. The experiences of the state sponsoring boards, the ADE, and the auditor general's office, all state agencies that were given statutory responsibilities in charter school accountability, were examined. Because of the contextual factors documented in the previous section, all three entities relied heavily on consumer choice as the primary form of accountability.

State Sponsoring Boards

The charter statute endowed the SBE and the SBCS with general responsibilities for charter schools, including the power to approve new charter schools, sanction existing charter schools for minor infractions, and revoke the contracts of poorly performing charter schools.
Through 1998, the state-level sponsoring boards relied on market accountability mechanisms, by design and by default. The charter statute allowed the governor to appoint members to the SBCS. Accordingly, Governor Fife J. Symington, a strong proponent of school choice (he advocated voucher plans in the 1991-1994 legislative sessions), selected seven individuals who strongly supported the idea of school choice. Furthermore, all members of the SBE were appointed by pro-school-choice governors and were generally supportive of market accountability (Garn, 1999). By design, these board members focused on parental choice as the primary form of accountability. Due to minimal staffing provisions (Bulkley, 1999), both state charter school authorizing agencies focused their energies on approving new schools, entrusting the monitoring responsibilities to parents. Internal documents, along with interviews of board members and administrative staff, confirmed that more time was devoted to the approval process than to any other area. In contrast, once a charter school opened, its contact with the sponsoring agency was nominal.

After a charter school was up and running, the SBE and the SBCS required it to complete an annual written report detailing outcomes achieved and progress toward the goals specified in the charter contract. But there was no prescribed format for this report, and it was up to the charter school operator to determine what information to include in this self-evaluation.

Actually, by law they [charter school directors] are required to do that [submit an annual written report to their sponsor]. They [the board members] don't evaluate it in any way. I think since this is the first year we have . . . asked for that report. The report is basically there for the board to look at. . . . The [written] report is basically almost like the [oral] presentation where it's information you want to tell us, but it also talks budgets and finances. . . . But we don't nail them with anything.5

The information in the annual written report was neither evaluated nor verified for accuracy by administrative staff or board members. The SBCS required charter school operators to make an oral report to the board in addition to the written report.

Part of it [oral presentation]—is this is the time to showcase your school. Get up here and tell us good things that are going on at your school. It also allows us to drop in questions. Some of them are softball questions that are just interesting, that we'd like to know. . . . For most schools, it's the best time and for most board members it's the best time.6

In addition to the annual written and oral reports, an informal goal for the administrative staff of both state boards was to visit the schools on a yearly basis. The visits were unstructured, informal, and aimed at showing off the successful aspects of the operation, much like the annual reports. During the 1997-1998 school year, the SBCS staff and board members followed through with this unwritten policy; the SBE, however, had been unable to implement a similar plan as the school year ended. Observations of the board meetings indicated that formal complaints filed against a charter school were addressed in the oral presentation. Conversely, the annual written reports and the site visits allowed the charter school directors broad discretion over the information they provided to the state sponsoring boards.
Moreover, board members did not evaluate the information that was provided in any systematic manner. A primary legislative intent of the charter statute was to encourage consumer-driven accountability mechanisms. The two state sponsoring boards faithfully executed the spirit of the legislation, but the lack of administrative staff reinforced the underlying philosophy that parental monitoring was preferable to bureaucratic monitoring. In sum, parental choice proved to be the primary accountability tool employed by the state sponsoring boards.

ADE

ADE staff reported that they had a great deal of difficulty attempting to comply with the spirit of the charter school statute. The legislation explicitly reduced the bureaucratic reporting requirements for charter schools by providing a blanket waiver from the rules and regulations by which district schools must abide (ARS §15-183E5, 2000). This confounded ADE bureaucrats who attempted to devise accountability mechanisms in an environment that discouraged bureaucratic monitoring and reporting. "State departments of education also have traditionally been involved in overseeing the implementation of education policies and programs . . . often [they] receive requests from legislative policy committees to conduct monitoring or evaluation studies" (Wohlstetter, 1991, p. 32). Because charter schools were primarily accountable to the sponsoring boards and parents, ADE staff members reported that their primary function was to inform the boards of achievements or problems in charter schools. The sponsoring boards were then expected to reward or sanction charter schools appropriately.

One attempt to identify the achievements and shortcomings of charter schools was a monitoring program implemented during the 1996-1997 school year. A former associate superintendent at the ADE discussed the motivation for the monitoring program:

We would hear horror stories or parents would call in with these unbelievable tales of stuff, and nobody really knew what was happening, nobody knew if the schools were delivering what they said they would. Were they complying with their charter or not? Nobody really knew. There were some concerns on the part of three or four of us that we just decided that somebody needs to be paying attention to this and somebody needs to be getting at least very basic information so that if anybody wants to know, we can at least say, "Well, at least we have done a minimal amount of checking and here's what we found, here's what we know at this point." Somebody needs to know what is going on out there.7

Two ADE staff members worked full-time on the monitoring project and coordinated the effort.

We elicited support from about 50 people . . . in total. Most of the ADE, in various divisions, and some were external, either charter school people, or public [district] school people, or University people, or whatever volunteered. We did not do a programmatic audit in that we did not look at quality or the process of education, or the programs that they were actually implementing. What we were monitoring were those things that have either legal or statutory compliance or charter compliance, meaning that they were doing things that either weren't in their charter, or they weren't doing things that were in their charter.
So, we looked for the compliance in state and federal statutes.8

When the monitoring teams completed a site visit, all the artifacts collected during the visitation were stored in a file folder for each charter school. Evaluators formally documented the site visits, writing either directly on the monitoring instrument or on separate notepads. As an ADE staff member explained,

The plan was that the secretaries would be typing the reports and putting them on file. . . . In the meantime, there were secretaries that left and came and went and came and went. And the bottom line is that the reports, the majority of the reports, unless they were the ones that were in severe trouble or that needed to have records that were going to go to a hearing or whatever, then I personally typed those . . . otherwise all the notes were left in file folders.9

In sum, only a fraction of the monitoring notes were typed up, the information from the reports was very rarely passed along, and neither the state sponsoring agencies nor the charter schools received meaningful information about the monitoring visits. When eventually made public, most of the reports were still in the form of handwritten notes. The last page of the site visit protocol included a summary sheet that allowed the monitoring team to summarize comments on charter and statutory compliance and to determine if a follow-up was necessary. The summary sheets were blank or missing from 20 of the 32 monitoring visit reports analyzed for this study. Of the 12 that did have comments in the summary area, 9 required follow-ups for noncompliance with either their charter or relevant statutes. It is important to note that the state sponsoring boards never asked for or received that critical information.

After only 1 year, Superintendent of Public Instruction Graham-Keegan discontinued the department's involvement in the charter school monitoring program. Both state boards attempted to take over the monitoring responsibilities in the 1997-1998 school year. With limited administrative staff and a steadily increasing number of charter school sites (455 by the 2000-2001 school year), it remained unclear how monitoring by the state sponsoring boards would be accomplished.

The monitoring effort by the ADE was unique in that different divisions cooperated in the effort. Generally, divisions within the ADE developed independent strategies for dealing with charter schools. Two examples illustrate the consequences. ARS §15-183E7 (2000) required charter schools to comply with all statutes relating to the education of children with disabilities in the same manner as district schools. This compelled the Exceptional Student Services staff members within the ADE to establish some oversight mechanisms to ensure compliance with federal and state laws. During the 1995-1996 school year, Exceptional Student Services staff members responded to specific complaints but lacked a systematic program to monitor compliance among charter schools. The following school year, an outreach program was initiated, and a sample of 25 charter and district schools was monitored for compliance with special education rules and regulations. Charter schools from the sample were subjected to a group evaluation whereby a team of staff members visited the charter school, evaluated the program for disabled students, and provided technical assistance when it was required.
Exceptional Student Services staff documented their findings in a Corrective Action Plan. However, interviews with Exceptional Student Services staff and the SBCS administrative staff confirmed that information from the Corrective Action Plans was not forwarded to the state sponsoring boards:

That information could be provided to us. . . . We have started asking for [reports] because that has not been the case. We wondered, "Why do you keep this?" and "Where does it go?" So, we've said this could be a part of our monitoring. Special Ed[ucation] is a very important part of the charter. And these people [Special Education consultants] are obviously overwhelmed with all the schools they have to go to, but we need to know the information. If we don't know the information, how do we determine whether or not the school is in compliance? I mean I'm sure if it was a real imperative issue they would come to us, but we want to see the report.10

Academic Support Services, another division within the ADE, experienced problems similar to those of the Exceptional Student Services division. Staff in the Academic Support Services division monitored charter schools that received federal funding for programs targeted to special needs students with a high degree of at-risk factors, including financial deprivation and language barriers. Federal programs within this unit included Indian Education, Early Childhood Development, Titles I and VI, Bilingual Education, and the Dwight D. Eisenhower program.

Numbers are checked against other departments within the Academic Support Division. If lots of variance is identified then more extensive follow-up is done; we contact the school immediately and sometimes go out into the school. We can catch most of the problems with those checks. To the best of my knowledge, this information has not been passed on to the sponsor. I think in the future it will be a good idea to write a two-page letter to the sponsor and summarize our findings. (personal communication, ADE, Title VI administrative staff, March 17, 1998)

Much like the Special Education monitoring visits, these schools were examined for procedural compliance. Similarly, Academic Support personnel reported that the information collected was not sent to the state sponsoring boards. ADE personnel failed to create a functional system for collecting and transferring information to the state sponsoring boards. Accordingly, the de facto accountability mechanism was parental choice. Yet the board members and parents who were ultimately responsible for holding charter schools accountable were deprived of important information about charter school quality.

Auditor General

The auditor general's office was the final state agency to play a formal role in charter school accountability. Like the other state-level entities, the auditor general's office absorbed the additional responsibilities for charter schools into its existing budget allocation. Unlike the department of education, this office did establish a formal line of communication with the sponsoring agencies. However, policy actions of the SBCS and subsequent statutory amendments led to a decreased role for this agency.

As first enacted, ARS §15-183E6 (2000) required charter schools to follow the same financial reporting system as district schools—the Uniform System of Financial Records (USFR).
This statute authorized the auditor general to conduct financial, program, or compliance audits to ensure that charter schools were following the USFR correctly, and it formally brought the auditor general into the accountability system for charter schools. Subsequently, legislators amended ARS §15-183E6 (2000) and allowed charter schools to follow a modified financial reporting system, the Uniform System of Financial Records for Charter Schools (USFRCS). A 1997 amendment by the state legislature, known as the Single Audit Act, placed an additional burden on staff at the auditor general's office.

The governing board of a charter school is required to comply with the Single Audit Act, plus [it] must contract for an annual financial and compliance audit, with financial transactions subject to the Single Audit Act. . . . There was no provision in the charter school statute for our office or CPAs [Certified Public Accountants] to do audits. That was added later—[in] the [Single] Audit requirements.11

Before the Single Audit Act, most charter schools were covered by legislation that required all nonprofit organizations to conduct yearly audits; for-profit charters were exempt from any external auditing requirements. The Single Audit Act required all charter schools (including the for-profit charter schools) that expended more than $300,000 to complete a Federal Compliance Audit as well as a Financial Statement Audit. These audits combined were referred to as a Single Audit. Charter schools that did not exceed $300,000 in expenditures were still required to complete the Financial Statement Audit; however, they were exempt from the Federal Compliance Audit.

Auditor general's office staffers analyzed the results of the audit opinion rather than performing the formal audit themselves. After the director of a charter school selected a CPA firm and the contract was approved, ARS §15-271D (2000) required the auditor general to inform any charter school failing to meet the requirements, as prescribed by the USFRCS, that it had 90 days to correct the deficiencies. For the auditor general to make such a determination, external auditors completed a USFRCS compliance questionnaire along with a legal compliance questionnaire. When the audit was completed, the auditor general's office performed a working-paper review of the audit to check the quality of the CPA's work as well as to ensure that the charter school was financially sound. If the independent examination identified problems, the charter school operator received a letter from the auditor general explaining the specific requirements and setting the 90-day deadline to resolve any deficiencies. A copy of the letter was also sent to the sponsoring board. After 90 days, staff from the auditor general's office followed up to make sure that the necessary corrections had been completed and documented their findings in a second letter that informed the charter school and the sponsoring board of the charter school's compliance status.

The mechanisms developed by the auditor general's office exemplified performance accountability. Auditors reviewed data from financial and compliance audits that were then used by sponsoring agencies to stimulate action. The executive director of the SBE described the process:

They [auditor general's office] send us [the school's sponsor] a letter that says they have found a school to be not in compliance with the USFRCS and . . .
the state board determines that they are not in compliance with the USFRCS and they must request a hearing before the board. And if they don't request a hearing, the superintendent [of public instruction] is allowed to withhold their funds. So 100% of the time people will request a hearing. . . . So, we pretty much assume there is compliance unless we hear contrary to that. And that is the same with district schools.12

In other words, financial indicators stimulated action. Depending on the progress made toward addressing the problems, the sponsoring board could grant an extension or withhold funds. It is important to note, however, that the state sponsoring board, not the auditor general's office, had enforcement power over the charter schools.

In the 1997-1998 school year, the SBCS granted 22 charter schools waivers from the USFRCS. The charter school legislation was specific that the auditor general was required to notify charter school staff if they did not comply with the USFRCS. If a charter school did not have to follow the USFRCS accounting form, the auditor general lost jurisdiction, and the SBCS assumed responsibility for financial oversight. In a special summer session, legislators made the waiver from the USFRCS permanent and once again amended the financial requirements, prescribing that "on or after July 7, 1999, a state sponsored charter school's compliance with the USFRCS will be determined by its sponsor" (Haggerty & Sauv, 1999). Whether the administrative staff for the sponsoring boards had the training, or the board members the time or expertise, to monitor the financial health of charter schools was unclear. It was apparent, however, that they did not have the same level of skill as the professionals who routinely performed this kind of work in the auditor general's office.

Whether it was a lack of will or capacity or a combination of both, formal implementers from the auditor general's office, the department of education, and the sponsoring agencies failed to design and implement a multifaceted accountability system for charter schools. The following section applies the theoretical framework to analyze the specific forms of accountability in the policy.

ACCOUNTABILITY TYPOLOGY

When devising an accountability system, Kirst (1990), House (1993), and Kirp (1982) all warn policy makers to compensate for the weaknesses of individual accountability types by balancing among the frameworks in a proactive accountability policy. This case reveals that Arizona policy makers relied predominantly on market mechanisms and minimized the use of professional, bureaucratic, and performance mechanisms.

Professional Accountability

The Arizona charter school policy disregarded professional accountability. There were no procedures to ensure competence through teacher or charter school accreditation. The Arizona charter school legislation did not require charter school teachers to have state certification, a degree in education from an accredited school, or even a college degree. Policy makers purposely wrote this provision into the charter school legislation to encourage individuals with nontraditional backgrounds to become involved with these schools. Legislators argued that certification requirements discouraged individuals with backgrounds outside of education from teaching. Therefore, professional accountability mechanisms did not play a significant role in the accountability system created for Arizona charter schools.
Bureaucratic Accountability

Policy makers were clear about their disdain for bureaucratic accountability mechanisms and attempted to limit the role of this model in the accountability framework. Lisa Graham-Keegan, an articulate supporter of the charter reform who left her position as chair of the House Education Committee to become superintendent of public instruction, commented: "I hope this will begin to demonstrate you don't need all the bureaucratic overlay we now have in public schools. . . . What they [charter schools] are getting is freedom from regulation in return for greater accountability" (Mattern, 1994, p. A1).

Divisions within the ADE attempted to monitor compliance with standards for special education, financial reporting, and federally funded programs. However, they were unable to develop and implement bureaucratic accountability mechanisms that effectively transferred information to the state sponsoring boards. The SBE and the SBCS had statutory authority to impose punishments for noncompliance with rules and regulations. Yet without staff to carry out this task, and with board members who disfavored bureaucratic accountability, market accountability mechanisms became the default instrument for charter school evaluation. "How do we know [if they are in compliance]? By parents calling and saying that they are doing something that they're not supposed to do, because we have no staff to go out."13

A former SBE and SBCS board member confirmed the lack of bureaucratic accountability mechanisms implemented by the two state sponsoring agencies:

As I recall it [the annual report] is kind of like a self-report deal. I don't know that there is a prescribed format, and so they are going to report what they want you to hear. . . . So right now the only evaluation that is happening is their own self-reporting through report cards, and through their annual report to the board.14

The auditor general's office also lacked the will and capacity to institute bureaucratic accountability mechanisms. Accordingly, policy makers were able to spare charter schools most of the bureaucratic rules and regulations that traditional district public schools must endure.

Performance Accountability

Performance accountability was present but underutilized in the accountability framework devised for Arizona charter schools. The auditor general's office was the only government agency able to implement effective performance mechanisms. Reviewing data from independent financial auditors allowed this office to gain a more accurate understanding of the financial stability of a given school. However, exemptions granted by the SBCS and subsequent statutory amendments limited the use of performance accountability significantly.

Test scores, however fallible (see Shepard, 1991), were one available source of information about school performance. Parents and sponsoring board members could get some insight into charter school quality based on standardized test scores. However, changes in testing policy reduced the usefulness of test scores. Arizona first administered the Iowa Test of Basic Skills in 1995-1996 but switched to the Stanford Nine during the second year of the charter reform. This change in testing policy disrupted the collection of trend data and made it impossible to assess school improvement or decline.
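The trend-data problem can be made concrete with a brief sketch. The scores below are invented, and the point does not depend on them: because the two tests report on different scales, change scores are interpretable only between administrations of the same test.

```python
# Hypothetical illustration of why a mid-stream test switch breaks trend data.
# Years, tests, and scores are invented; the two scales are not comparable.

scores = [
    ("1995-96", "ITBS", 61.0),         # one scale
    ("1996-97", "Stanford 9", 588.0),  # a different scale entirely
    ("1997-98", "Stanford 9", 596.0),
]

for (y1, t1, s1), (y2, t2, s2) in zip(scores, scores[1:]):
    if t1 == t2:
        print(f"{y1} -> {y2}: change of {s2 - s1:+.1f} points on {t1}")
    else:
        print(f"{y1} -> {y2}: {t1} vs. {t2}; no trend can be inferred")
```

With only a single year of Iowa Test results before the switch, no within-test comparison existed for the reform's first 2 years, so neither parents nor sponsors could tell whether a given school was improving or declining.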
A final potential source of performance information about charter schools was the school report card required of all public schools. The report cards were supposed to include information about a broad array of school performance indicators. However, because the report cards were overwhelmingly incomplete, they served as a sound bite for policy makers rather than as a tool for parents trying to evaluate charter school performance. Many public schools (charter and district) failed to fully complete and return the report card data because there were no sanctions or rewards attached to this request. In addition, the incomplete report card information was available only through the ADE web site (Nowicki, 1998), which posed a significant problem for the large percentage of parents who did not have access to the Internet. Consequently, performance reporting was evolving slowly and was more symbolic than practical in the first several years of the Arizona charter school program.

Market Accountability

It's [the charter reform] been one of those things that I think we had a pretty clear idea of what kind of principles we wanted it based on, and particularly what kind of accountability we wanted for schools. . . . And we were astonishingly successful.15

Market accountability mechanisms were the foremost model in the accountability system for Arizona charter schools. Creating a framework with limited bureaucratic oversight, few (if any) professional standards, and slowly evolving performance reporting standards ensured that consumers were the most important component in the accountability system for charter schools. If parents enrolled their children, this was a clear indication of satisfied consumers. Conversely, if enrollment declined, parents were holding charter schools accountable for their dissatisfaction with the educational product produced by the school. Senator John Huppenthal, a leading advocate for the charter school reform, stated,

Well, I mean right now it's basically 100% of the equation. You know, there's different kinds of accountability—there's product quality and there's financial accountability. So, on the financial accountability, the parents play no role. Some parents are on boards that play a little bit of role. But they play 100% of the role in determining on the academic side, the cognitive development, and the affective development, parents are 100% of the equation right now; but with zero percent on the financial side.16

An interview with the superintendent of public instruction confirmed that the monitoring responsibilities of parents with children enrolled in charter schools were greater than those of their counterparts with children enrolled in district schools:

The reality is that parents need to know about the risk factors, and know that they are putting their students into start-up businesses. Maybe parental monitoring is more reliable than the district monitoring system that is currently in place, nobody ever thinks about it that way. As a parent, you can't relax anywhere, either in the district or the charter schools. But there is definitely a higher degree of risk with the charters because they are essentially start-up businesses, and parents need to acknowledge that risk. Parents need to know who is running the school and the philosophy and is it fiscally sound. What the standards are, and they should just pull out the state standards and see how they have added on. They need to go directly to the school and meet people.
They can't just read this on the report card piece of paper. Parents need to ask a lot of questions.17

Data from multiple sources confirm that parental choice was the principal accountability mechanism. However, because policy makers marginalized all other forms of accountability, parents and students selected schools with imperfect information.

CONCLUSIONS

This case study was structured by three research questions: (a) How did state agencies cope with the Arizona charter school reform? (b) What types of accountability were present in the charter school system? and (c) What were the implications of the market-based accountability system for consumers?

This case illustrated that personnel at the SBE, the SBCS, the ADE, and the auditor general's office found it difficult to adapt to the changes required by the charter school legislation. The state sponsoring boards focused their limited resources on approving new schools; monitoring charter schools that were serving students was a responsibility deferred to parents and other state agencies. Personnel at the ADE had a difficult time attempting to comply with the intent of the charter statute because bureaucratic monitoring was discouraged. Bureaucrats working at that agency did not have the resources to collect and distribute information about charter schools to consumers. Despite a lack of funding and personnel, staff at the auditor general's office were able to deal with the new demands imposed by the charter school legislation. Personnel in that agency developed financial auditing procedures and passed that information on to the sponsoring boards.

This case also demonstrated that market accountability was the leading paradigm in the Arizona charter school program. Data from interviews, observations, and document analysis confirmed that by limiting oversight from bureaucrats in state agencies, imposing minimal professional requirements, and failing to develop uniform and comprehensive performance reporting mechanisms, policy makers caused market mechanisms to become the primary accountability model. However, even the staunchest charter school advocates, including the chairman of the Senate education committee, Senator John Huppenthal, recognized the information problem:

To make that purchase decision something that means something, they have to have maximum information, so the things that I focused on that we don't have yet are academic productivity and quality ratings and, to a lesser extent, student quality ratings and teacher job satisfaction. If you know those things, I think you know a tremendous amount about the quality of the school. Right now, we don't have any of that data. So right now, in terms of any kind of data that's available on schools, almost all of the data that we have now in my mind is worthless, so right now we have no way of keeping score. None of the methods right now, none of the data we have right now coming in has a whole lot of value for someone making a purchase decision.18

The implications for consumers who participate in a market-based accountability system are important to consider.

IMPLICATIONS

Thoughtful critics and advocates of educational choice programs emphasize the importance of reliable product information. Levin (1997) contends that "in order to make informed choices, parents need information on alternatives" (p. 14). Chubb and Moe (1990a) would "require" consumers to visit a "Parent Information Center" to assist in the selection of a school (p. 10).
The case of Arizona charter schools reveals that trustworthy performance information is not an intrinsic element of a consumer-driven educational system. Policy makers must therefore develop policies that ensure consumers have access to accurate information about schools; performance-reporting requirements would complement market-based (or bureaucratic) accountability mechanisms. However, collecting and distributing productivity, satisfaction, and quality information to consumers without increasing bureaucratic reporting requirements for charter school directors remains an oxymoronic objective. Although key political leaders in Arizona agreed that consumers of choice schools assumed more responsibility for selecting and monitoring schools, they failed to implement a mechanism for capturing and distributing uniform and dependable performance data that would encourage more informed consumer choices. School report cards, required of all public schools in Arizona, were intended to address this information problem; however, they were wholly inadequate, and during the period bounded by this case (1994-1998), compliance with the report card mandate was sporadic. Accordingly, the implications for consumers selecting among schools in a choice program revolve around three issues: the breadth, accuracy, and accessibility of school information.

It is important that consumers be able to consider a wide range of information about choice schools. In the form of report cards, Arizona charter and district schools were expected to provide information on several broad areas, including school organization, philosophy, and academic goals. In addition, report cards solicited information about the number of school days, average daily instruction time, school honors, instructional programs offered, standardized test scores, grades served, enrollment, school site council composition and duties, school and parent responsibilities, staff information (i.e., number of administrators, teachers, and teacher aides), transportation policy, resources available at the school site, school safety, food programs, attendance rate, mobility rate, retention rate, and dropout rate. Policy makers might consider expanding the report cards to include information on faculty and staff certification and experience, school accreditation, the school's governance structure, admissions policy, discipline policy, curricular focus, testing policy, after-school activities, and utilization of technology to further aid consumer choice. In sum, there was no single reliable measure of school performance or quality, so providing consumers with information about many aspects of charter schools allows them to better evaluate the educational product produced at one school compared with others. With a broad scope of information, individuals may select different criteria for comparing school quality, which in turn allows consumers to select the school that best fits their needs and interests.

Ensuring accuracy is a second prerequisite of performance reporting. Simply relying on self-reported data may be inadequate when parents are making such a critical choice. Akerlof (1970), in a seminal article on quality and uncertainty in the theory of markets, argued that sellers have little incentive to offer accurate or expanded information about their product.
Using the automobile market as an example, he stated:

There are many markets in which buyers use some market statistic to judge the quality of prospective purchases. . . . There is an incentive for sellers to market poor quality merchandise, since the returns for good quality accrue mainly to the entire group whose statistic is affected rather than the individual seller. As a result there tends to be a reduction in the average quality of goods and also in the size of the market. It should also be perceived that in these markets social and private returns differ, and therefore, in some cases, governmental intervention may increase the welfare of all parties. (p. 488)
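The unraveling Akerlof describes can be made concrete with a stylized version of his model; the uniform quality distribution and the specific valuations below are standard textbook conventions chosen for illustration, not figures from Akerlof's article or from the Arizona case. Suppose quality $q$ is distributed uniformly on $[0,1]$, a seller values a car of quality $q$ at $q$, and a buyer values it at $\frac{3}{2}q$ but observes only the asking price $p$, not $q$ itself. At any price $p$, only sellers holding cars of quality $q \le p$ are willing to sell, so the expected quality of the cars actually on offer is
\[
E[\,q \mid q \le p\,] = \frac{p}{2},
\]
and a rational buyer will therefore pay at most
\[
\frac{3}{2} \cdot \frac{p}{2} = \frac{3}{4}\,p < p.
\]
Because buyers' willingness to pay falls below every positive asking price, no trade occurs even though every car is worth more to a buyer than to its seller: quality uncertainty alone produces the "reduction in the average quality of goods and also in the size of the market" that the passage above asserts.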
Akerlof (1970) also reasoned that the effects of quality uncertainty can be reversed by guarantees, brand names, or licensing practices: "one natural result . . . is that the risk is borne by the seller rather than by the buyer" (p. 499). In the absence of guarantees, parents and students assumed full risk when selecting among Arizona charter schools. To minimize this risk, limited government intervention may be warranted in the Arizona charter school program. There are several options for increasing the quality of information about Arizona charter schools. First, staff at the aforementioned state agencies could shift roles and audit, or review externally audited, information about choice schools, much like the role taken by the auditor general's office with regard to financial reporting requirements. Second, in line with market ideology, one could expect private individuals or groups to provide report card compliance support to charter schools; if school officials were unwilling to spend time collecting and reporting the information, they could subcontract with external groups to fulfill these requirements. These information accountants would collect and organize information about school performance.

Finally, the accessibility of school performance information must be considered. Simply posting report card information on the state department of education web site is insufficient because consumers without access to the Internet are severely disadvantaged. Mailing school performance information to interested parents from the state department of education or other information clearinghouses would be a logical first step. Additional outreach steps, such as allowing parents to acquire standardized and reliable information when visiting a school (district or charter) and using public libraries and other public facilities to distribute information, could promote more equity in information accessibility.

If consumers are expected to reward good schools by enrolling and to sanction bad schools by leaving, they must have quality performance information on which to base such decisions. To minimize the risk for parents and students who participate in the charter school program, Arizona policy makers may have a compelling reason to implement a performance-reporting system that ensures consumers access to broad and accurate information about charter and district public schools. A consumer buying a car or selecting a school, for example, would receive more accurate safety, quality, and performance data from an independent evaluator than from a salesperson or enrollment officer. Without access to information about school performance that is valid, reliable, and wide ranging, consumers will use whatever information is available when selecting a school. A radio or television commercial, a slick marketing brochure, or a conversation with a friend may be enough to buy a car, but can we afford to settle for the same standard when selecting a school? In the case of Arizona charter schools, thousands of educational consumers selected charter schools with limited safety, quality, and performance information.

NOTES

1. Recorded interview with Tom Patterson, Phoenix, AZ (March 16, 1998).
2. Recorded interview with Arizona Department of Education administrative staff, Phoenix, AZ (February 27, 1998).
3. Recorded interview with the executive director, State Board of Education, Phoenix, AZ (February 27, 1998).
4. Recorded interview with the former associate superintendent, Arizona Department of Education, Tempe, AZ (May 12, 1998).
5. Recorded interview with State Board for Charter Schools administrative staff, Phoenix, AZ (March 11, 1998).
6. Recorded interview with the former executive director, State Board for Charter Schools, Phoenix, AZ (March 9, 1998).
7. Recorded interview with the former associate superintendent, Arizona Department of Education, Tempe, AZ (May 12, 1998).
8. Recorded interview with Arizona Department of Education administrative staff, Phoenix, AZ (March 11, 1998).
9. Recorded interview with Arizona Department of Education administrative staff, Phoenix, AZ (March 11, 1998).
10. Recorded interview with State Board for Charter Schools administrative staff, Phoenix, AZ (March 11, 1998).
11. Recorded interview with auditor general's office staff, Phoenix, AZ (February 11, 1998).
12. Recorded interview with the executive director of the State Board of Education, Phoenix, AZ (February 27, 1998).
13. Recorded interview with State Board of Education administrative staff, Phoenix, AZ (February 27, 1998).
14. Recorded interview with the former State Board of Education and State Board for Charter Schools board member, Tempe, AZ (May 12, 1998).
15. Recorded interview with Senator Tom Patterson, Phoenix, AZ (March 16, 1998).
16. Recorded interview with Senator John Huppenthal, Phoenix, AZ (March 23, 1998).
17. Recorded interview with Lisa Graham Keegan (April 21, 1998).
18. Recorded interview with Senator John Huppenthal, Phoenix, AZ (March 23, 1998).

REFERENCES

Akerlof, G. A. (1970). The market for "lemons": Quality, uncertainty and the market mechanism. Quarterly Journal of Economics, 84, 488-500.
Arizona Revised Statute §15-183 (2000). Retrieved March 1, 2000, from the World Wide Web: http://www.azleg.state.az.us/ars/15/183.htm
Arizona Revised Statute §15-185 (2000). Retrieved March 1, 2000, from the World Wide Web: http://www.azleg.state.az.us/ars/15/185.htm
Arizona Revised Statute §15-271 (2000). Retrieved March 1, 2000, from the World Wide Web: http://www.azleg.state.az.us/ars/15/271.htm
Boyd, W. L., & Lugg, C. A. (1998). Markets, choice and educational change. In A. Hargraves, A. Lieberman, M. Fullan, & D. Hopkins (Eds.), International handbook of educational change: Part one (pp. 349-374). Boston: Kluwer. Retrieved February 8, 2001, from the World Wide Web: http://www.personal.psu.edu/faculty/i/6/i6b/marketschoice.htm
Bulkley, K. (1999). Charter school authorizers: A new governance mechanism? Educational Policy, 13(5), 674-697.
The Center for Education Reform. (2001). Charter school highlights and statistics [Online]. Retrieved February 22, 2000, from the World Wide Web: http://www.edreform.com/pubs/chglance.htm
Chubb, J. E., & Moe, T. M. (1988). What price democracy? Politics, markets and American schools. Washington, DC: The Brookings Institution.
Chubb, J. E., & Moe, T. M. (1990a, Summer). Choice is a panacea. Brookings Review, 8(3), 4-12.
Chubb, J. E., & Moe, T. M. (1990b). Politics, markets, and America's schools. Washington, DC: The Brookings Institution.
Cuban, L. (1988). The managerial imperative: The practice of leadership in schools. New York: New York University Press.
Darling-Hammond, L. (1988, Winter). Accountability and teacher professionalism. American Educator, 12(4), 8-13, 38-43.
Darling-Hammond, L. (1989, Fall). Accountability for professional practice. Teachers College Record, 91(1), 59-80.
Firestone, W., & Bader, B. (1992). Redesigning teaching: Professionalism or bureaucracy? Albany: State University of New York Press.
Garn, G. (1999). Solving the policy implementation problem: The case of Arizona charter schools. Education Policy Analysis Archives, 7(26) [Online]. Retrieved January 22, 1999, from the World Wide Web: http://olam.ed.asu.edu/epaa/v6n1/
Gintis, H. (1995). The political economy of school choice. Teachers College Record, 96(3), 492-511.
Glass, G. V. (1972, June). The many faces of educational accountability: What is genuine accountability and what is sham? Phi Delta Kappan, 53, 636-639.
Gold, R. L. (1969). Roles in sociological field observation. In G. McCall & J. L. Simmons (Eds.), Issues in participant observation (pp. 30-39). Reading, MA: Addison-Wesley.
Haggerty, M. D., & Sauv, R. (1999, June 28). USFRCS Memorandum No. 24, Office of the Auditor General and Arizona Department of Education.
Hansen, J. B. (1993). Is educational reform through mandated accountability an oxymoron? Measurement and Evaluation in Counseling and Development, 26, 11-21.
Hertz, R., & Imber, J. B. (Eds.). (1995). Studying elites using qualitative methods. Thousand Oaks, CA: Sage.
House, E. R. (1993). Professional evaluation: Social impact and political consequences. Newbury Park, CA: Sage.
Kirp, D. (1982). Professionalization as a policy choice. World Politics, 34(2), 137-174.
Kirst, M. (1990). Accountability: Implications for state and local policymakers. Washington, DC: U.S. Department of Education, Office of Educational Research and Improvement, Information Services.
Levin, H. (1974). A conceptual framework for accountability in education. School Review, 82, 363-391.
Levin, H. M. (1997, December). Educational vouchers: Effectiveness, choice, and costs. Paper presented at the annual meeting of the American Economic Association, New Orleans, LA [Online]. Retrieved October 23, 1997, from the World Wide Web: http://epn.org/sage/97levin.html
Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Beverly Hills, CA: Sage.
Macpherson, R.J.S. (1996, February). Educative accountability policy research: Methodology and epistemology. Educational Administration Quarterly, 32(1), 80-106.
Mattern, H. (1994, October 4). It's not much now. It's humble, but that will change: Phoenix site will give rise to one of state's 1st charter schools. Arizona Republic, p. A1.
Merriam, S. B. (1988). Case study research in education: A qualitative approach. San Francisco: Jossey-Bass.
Merriam, S. B. (1998). Qualitative research and case study applications in education. San Francisco: Jossey-Bass.
Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook (2nd ed.). Thousand Oaks, CA: Sage.
Newmann, F. M., King, B. M., & Rigdon, M. (1997, Spring). Accountability and school performance: Implications from restructuring schools. Harvard Educational Review, 67(1), 41-74.
Nowicki, D. (1998, August 28). Governor aims at charters: Says schools need more oversight from state, better business plans. Tempe Tribune, pp. A1, A4.
Patton, M. Q. (1990). Qualitative evaluation methods (2nd ed.). Thousand Oaks, CA: Sage.
Richards, C. E. (1988, March). Indicators and three types of educational monitoring systems: Implications for design. Phi Delta Kappan, 69(7), 495-499.
Rivera, M. J. (1994). Accountability and educational reform in Rochester, New York (Doctoral dissertation, Harvard University, 1994). Dissertation Abstracts International, 55(07), 1775A. (University Microfilms No. 9432425)
Shepard, L. A. (1991). Will national tests improve student learning? Phi Delta Kappan, 73(3), 232-238.
Spradley, J. P. (1979). The ethnographic interview. New York: Holt, Rinehart & Winston.
Stake, R. E. (1995). The art of case study research. Thousand Oaks, CA: Sage.
U.S. Department of Education. (1999). The state of charter schools third-year report [Online]. Retrieved March 1, 2001, from the World Wide Web: http://www.ed.gov/pubs/charter3rdyear/C.html
Wohlstetter, P. (1991). Accountability mechanisms for state education reform: Some organizational alternatives. Education Evaluation and Policy Analysis, 13(1), 31-48.
Yin, R. K. (1994). Case study research: Design and methods. Thousand Oaks, CA: Sage.