Case Study

Project Delivery Method Performance for Public School Construction: Design-Bid-Build versus CM at Risk

Noel Carpenter, Ph.D.1; and Dennis C. Bausman, Ph.D.2

1 Clemson Univ., 312F Lee Hall, Clemson, SC 29634 (corresponding author). E-mail: [email protected]
2 Clemson Univ., 312F Lee Hall, Clemson, SC 29634. E-mail: [email protected]

Note. This manuscript was submitted on October 21, 2015; approved on January 12, 2016; published online on April 19, 2016. Discussion period open until September 19, 2016; separate discussions must be submitted for individual papers. This paper is part of the Journal of Construction Engineering and Management, © ASCE, ISSN 0733-9364.

Abstract: The construction of public schools is a complex endeavor due to the involvement of multiple and diverse parties, funding and budgetary concerns, and statutory limitations imposed by local, state, and federal agencies, all of which serve to increase project risk. Project delivery methods using distinctive procedures to manage the design and construction process have been developed to reduce risk and improve project performance. To increase the probability of successful project outcomes, those responsible for public school construction require conclusive delivery method performance data. Completed in 2014, this two-year study used actual construction documents from 137 southeastern public schools to analyze and determine project delivery method cost, time, quality, and claims performance. The analysis indicated that performance of the Design-Bid-Build (DBB) method was significantly superior across all cost metrics, whereas the Construction Manager at Risk (CM at Risk) method produced higher levels of product and service quality. Essentially, public school administrators were paying a significant premium to obtain perceived improvements in both service and product quality. This research empowers decision-makers and benefits the public by providing evidence of the most efficient and effective means for the construction of new public schools. DOI: 10.1061/(ASCE)CO.1943-7862.0001155. © 2016 American Society of Civil Engineers.

Author keywords: Project delivery method; Design-Bid-Build (DBB); Construction Manager at Risk (CM at Risk); Project success factors; Project performance factors; Contracting.

Introduction

Although all construction projects are subject to a wide variety of complex issues and risks (Saporita 2006; Zaghloul and Hartman 2003; Akintoye and MacLeod 1997; Gordon 1994), the construction of public schools is particularly complex due to the diversity of the parties involved, schedule and funding intricacies, and statutory requirements (Vincent and McKoy 2008). Over the past 20 years, the public education system has spent more than $174 billion on new schools, and the median per-square-foot construction cost of these facilities has doubled (Abramson 2013). During this same period, budget shortfalls experienced at the federal, state, and municipal levels have placed pressure on capital expenditures for public schools, while growing populations and changing demographics have served to increase demands on aging and outdated facilities (Abramson 2012; Oliff et al. 2012; McNichol et al. 2011; U.S. Census Bureau 2011).

Comprehensive management systems, known in the construction industry as project delivery methods, have been developed to reduce project risk by assigning contractual responsibilities to those involved in the design and construction process (Kenig 2011). The most widely used project delivery method is the Design-Bid-Build (DBB) method (D'Agostino and Bridgers 2010). However, proponents of alternative methods, such as Construction Manager at Risk (CM at Risk) and Design-Build (DB), believe that these methods offer the promise of better performance when used on certain types of projects (Cox et al. 2011; Konchar and Sanvido 1998). District managers and other public school administrators, acting in the capacity of facility owners as guided by state and district procurement policies, are responsible for selecting the delivery methods they believe are best suited to provide positive results for their new school projects. This project delivery method selection can often influence project success or project failure (Demkin and AIA 2008). Ghavamifar and Touran (2008) suggested that unnecessary added costs, lack of experience with the processes, loss of owner control, and the fear of favoritism associated with the awarding of construction contracts may be influencing public decisions regarding the adoption and use of alternative methods of project delivery. Additionally, anecdotal evidence suggests that state and district agencies and their public school administrators and managers may be continuing to avoid alternative delivery methods because of their lack of knowledge and experience or because of traditional operating procedures currently in effect (Carolinas AGC 2009).

Public school decision-makers require current, relevant, and significant delivery method performance data, along with a thorough working knowledge of the factors affecting school construction, to make informed, critical public school construction decisions (Vincent and McKoy 2008). However, a review of the literature revealed that only a limited amount of empirical research has been conducted on project delivery method performance for the construction of public school projects. The empirical performance data obtained through such research are required to assist decision-makers in their efforts to select the most appropriate delivery methods for the construction of their schools. Therefore, the purpose of this research was to provide current, statistically significant, empirical evidence to define the performance attributes of the traditional DBB project delivery method as compared with the alternative delivery methods of CM at Risk and DB when used for the construction of public school projects.

Definitions

Although varying definitions exist within the construction industry, the following descriptions, obtained from the Associated General Contractors' 2011 publication Project Delivery Systems for Construction by Michael E. Kenig, were used to define the project delivery methods discussed within this research. The operational definitions adopted for this study include the following:

Project Delivery Method
The comprehensive process of assigning contractual responsibilities for designing and constructing a project, which should include the definitions of project scope, contractual responsibilities, interrelationships of the parties, and the processes for managing time, cost, safety, and quality.
Design-Bid-Build (DBB)
The defining characteristics of this project delivery method are as follows:
1. The design and construction are separate contracts: owner-designer and owner-contractor; and
2. The total construction cost is a factor in the final selection of the constructor.

Construction Manager at Risk (CM at Risk)
The defining characteristics of this project delivery method are as follows:
1. The design and construction are separate contracts: owner to architect, owner to CM at Risk; and
2. The total construction cost is not a factor in the final selection of the constructor.

Design-Build (DB)
The defining characteristic of this project delivery method is that the design and construction responsibilities are contractually combined into a single contract with the owner.

Public Schools
Public schools refers only to publicly funded schools serving grades kindergarten through twelfth grade (K-12).

District Manager
District manager refers to the person responsible for the construction of public schools at the district level (the facility owner).

Literature Review

Project Delivery Methods

Prior to the Renaissance, most large-scale construction projects were completed using a master builder approach, in which a single person or entity would perform the duties of the architect, engineer, and contractor (Addis 2007; Fitchen 1986; Ghavamifar and Touran 2008). Fragmentation of the master builder into the separate disciplines of architecture, engineering, and construction occurred during the Renaissance and continued into the twentieth century, allowing for the specialization of construction services required to support technological advances and the increasing complexity of building projects. As a consequence of this separation, the disparate interests of the multiple entities began to complicate the communication and decision-making required during design and construction. The reduced cooperation and collaboration experienced among the parties often causes the development of adversarial relationships and increases project risk, which may lead to reduced quality, schedule overruns, change orders, claims, and litigation (Saporita 2006). Contracting terminology and methodologies have been created to define the project scope and assign responsibilities for completing the work to avoid, reduce, transfer, or maintain risk.

Development of the DBB project delivery method occurred toward the end of the nineteenth century following a number of fraud and abuse cases associated with the transcontinental railroad and other large U.S. infrastructure projects (U.S. DOT 2006; Heady 2013). Known as the traditional method throughout the United States, DBB was developed primarily to reduce the risk of corruption and cost overruns. It can be used to produce quality results on both public and private projects when used under the proper conditions. Advantages associated with the DBB method include an easily understood and well-documented process, the perception of fairness, owner control of the process, reduced chances for corruption, reliable schedule predictability, and initial cost certainty (Rojas and Kell 2008; Kenig 2011; U.S. DOT 2006). Disadvantages of this approach include limited opportunities for collaboration, the development of adversarial relationships (which may lead to increased litigation), and an increased probability of contractor failure as a result of the competitive nature of the bidding process. Additionally, because of the linear structure of the design and construction process, DBB is known to possess a restricted and expensive process for incorporating constructive changes (Konchar 1997; O'Connor 2009; U.S. DOT 2006; Cox et al. 2011).

Alternative delivery methods such as CM at Risk and DB seek to close the gaps created by fragmentation by using collaborative and communicative strategies in an effort to reduce both the probability of risk occurrence and the exposure to risk issues (Konchar 1997; Akintoye and MacLeod 1997; O'Connor 2009). Establishment of a team environment with collaborative exchanges of information and experience among the owner, architect, and contractor throughout the project life-cycle has the ability to improve project performance in terms of cost, schedule, quality, and productivity (Konchar 1997; Konchar and Sanvido 1998; O'Connor 2009; National Research Council 2009; Kenig 2011). Additionally, the procurement of CM at Risk and DB projects is typically accomplished using a more collaborative, qualifications-based approach in lieu of competitive bidding, further improving project team relationships (Kenig 2011). An added benefit of these methods is the opportunity to begin construction of the project before the completion of final design documents, which may improve schedule performance.

Project Success Factors

Project delivery method selections are based on the personal experiences and organizational purchasing philosophies of those in decision-making capacities (Sanvido and Konchar 1999). Research indicates that the factors used to define project success are determined by the experiences and perceptions of the owner, which have a direct influence on project delivery method selection (Chan and Chan 2004). The factors used to define construction project success have been the focus of many research efforts, but the first mention of critical success factors (CSFs) comes from information systems research conducted at the Massachusetts Institute of Technology (Rockart 1979). The definition was refined and presented by Bullen and Rockart (1981) as "the limited number of areas in which satisfactory results will ensure successful competitive performance for the individual, department, or organization. CSFs are the few key areas where 'things must go right' for the business to flourish and for the manager's goals to be attained" (ibid.).

Although many indicators exist to gauge a wide array of performance factors, this research uses the CSFs most often used for measuring construction project success: cost, schedule, and quality (Atkinson 1999). Cost and schedule performance are seen as objective measures, whereas work quality (product quality) and client satisfaction (service quality) are viewed as more subjective or intangible (Chan and Chan 2004). The premise behind the development of the differing project delivery methods is the belief that each of them offers different methods for managing and controlling the CSFs in an effort to improve project performance.
For example, CM at Risk provides an opportunity to reduce construction duration (an objective measure of the schedule) by changing the linearity of the project life-cycle. Additionally, client satisfaction (a subjective measure of quality) may be improved by the perceived increase in cooperation and communication instilled by CM at Risks’ more collaborative properties. Project Delivery Method Research A limited amount of research has been performed to directly compare the performance of project delivery methods when used for the construction of public schools. A report issued by the California Council of the American Institute of Architects (AIA) cited the “lack of quality data and information on school construction cost, schedule, and scope, (and that) little research on these (construction) processes exists” (Vincent and McKoy 2008). Foundational research conducted to compare the performance of project delivery methods was conducted by Bennett et al. (1996), Konchar (1997), and Konchar and Sanvido (1998); these studies continue to be the most widely cited and accepted. Results of the Bennett et al. (1996) research indicated that alternative methods, such as DB and CM at Risk, performed better than DBB in terms of cost and time. Univariate analysis conducted by Konchar (1997) using median data indicated that the performance of DB and CM at Risk projects exceeded the performance of DBB in terms of cost, schedule, and almost all issues of quality. Konchar’s multivariate analysis indicated that DB outperformed both CM at Risk and DBB in terms of unit cost and construction speed (Ibid). In the Konchar and Sanvido (1998) research, DB was shown to outperform both CM at Risk and DBB in terms of cost and schedule, whereas quality performance results were mixed. Findings of the Konchar (1997) and Konchar and Sanvido (1998) works were challenged by Williams (2003) based on the statistical significance of the results and wide variations of the comparative project types included in their study, which led to gross disparities of overall project costs, square foot sizes, and complexity. The Williams (2003) research was directed at the comparative performance of 215 Oregon public schools constructed with the CM/GC (CM at Risk) and DBB methods using statistical analysis and data envelope analysis (DEA). Results of the study indicated that statistically significant differences did not exist in terms of cost or schedule, although observable evidence revealed that schools constructed using CM at Risk had a higher per-square-foot cost than DBB schools. Additionally, the study revealed the CM at Risk method may not provide the reduction in risk characteristics touted by its supporters, and that no “interaction effect” (advantage based on management strategy or tactics) was found for either method. However, the study did indicate that CM at Risk may provide superior performance for projects that require accelerated (fast-track) project schedules. Additional research contradicting the CM at Risk performance superiority findings provided by Konchar and Sanvido (1998) was © ASCE conducted by Rojas and Kell (2008). 
Focused on cost control performance during the construction of public schools in Oregon and Washington, the study revealed only marginal and statistically insignificant differences between CM at Risk and DBB in terms of change order costs (1.55% mean, 0.48% median), found that 75% of CM at Risk projects exceeded the Guaranteed Maximum Price (GMP), and found CM at Risk to be significantly less effective at controlling costs during buyout (12.25% higher bid cost growth and 15.15% higher project cost growth). Essentially, the results indicate that owners using CM at Risk for construction of public schools experienced significantly greater cost growth above the prebid estimates without receiving lower change order costs or the touted benefits of reduced risk and lower cost volatility expected with a "Guaranteed" Maximum Price.

Methodology

This research was conducted during a 2-year period that began in 2012 and concluded in 2014. The study was conducted in the southeastern states of Florida, Georgia, North Carolina, and South Carolina, and included all public schools (K-12) that were completed between January 2006 and December 2012. The work included a two-stage data collection effort, consisting of a historical document review and assemblage followed by a survey of the district managers (owners) responsible for the construction of the public school projects. The focus of the research was to determine the actual design and construction costs of public schools and to define the comparative performance differences among the DBB, CM at Risk, and DB delivery methods. The metrics developed to assess and compare project performance in terms of cost, time, and quality are presented in the following sections.

Cost Metrics

Costs were limited to the design and construction costs associated with the actual public school buildings and supporting sitework. To the extent possible, costs for offsite roadwork, owner-furnished equipment, and land were not included in the calculations. RS Means (2013) factors were used to normalize all costs to 2012 dollars. The operational and maintenance costs of these facilities and other life-cycle issues related to building performance were not taken into consideration in this study. Therefore, a determination of owner-perceived values obtained through the use of high-performance materials or energy-efficient equipment was beyond the scope of the research. The following cost variables were collected from the project documents and then used to formulate the cost metrics shown in Table 1:
• Original Contract Cost—the amount listed in the construction contract and entered as Original Contract Sum on the Contractor's Application for Payment;
• Preconstruction Cost—the amount listed in the construction contract payable to the contractor for preconstruction services, if applicable;
• Other Cost—the amounts listed on separate contracts, including sitework or other scopes and phases of work that were not included in the Original Contract Sum but were required to complete the entire project scope;
• Final Contract Cost—the amount entered as the Contract Sum to Date or other adjusted total amount listed on the Final Contractor's Application for Payment;
• Original Design Cost*—the amount listed in the design contract. In cases in which a percentage fee was listed, the amount was calculated based on the architect contract terms; and
• Final Design Cost*—the total amount listed on the architect's Final Invoice.
*Separate Design Cost analyses were conducted in which reimbursable costs were both included and excluded in the Design Cost calculations, which resulted in no significant differences between the two. All known reimbursable costs have been included within the research analysis and reporting.

Table 1. Cost Metrics
Original construction cost ($) = Original contract cost ($) + Preconstruction cost ($) + Other cost ($)
Original project cost ($) = Original construction cost ($) + Original design cost ($)
Final construction cost ($) = Original construction cost ($) + Change order, fee, and adjustment costs ($)
Final project cost ($) = Final construction cost ($) + Final design cost ($)
Construction cost growth (%) = [(Final construction cost ($) − Original contract cost ($)) / Original contract cost ($)] × 100
Project cost growth (%) = [(Final project cost ($) − Original project cost ($)) / Original contract cost ($)] × 100
Unit cost ($/m²) = Final project cost ($) / Facility gross square meters
Student cost ($/student) = Final project cost ($) / Facility student capacity

Table 2. Time Metrics
Planned construction (days) = Number of days between construction start date and original completion date
Actual construction (days) = Number of days between construction start date and substantial completion date
Planned project (days) = Number of days between design start date and original completion date
Actual project (days) = Number of days between design start date and substantial completion date
Construction growth (%) = [(Actual construction (days) − Planned construction (days)) / Planned construction (days)] × 100
Project growth (%) = [(Actual project (days) − Planned project (days)) / Planned project (days)] × 100
Construction intensity (m²/day) = Facility gross m² / Actual project (days)
Construction intensity ($/day) = Final project cost ($) / Actual project (days)

Table 3. Quality Metrics
Product quality: Owner satisfaction ratings of work quality for the building exterior, building interior, environmental systems, and the overall project
Service quality: Owner satisfaction ratings of the design and construction team's ability to manage and control project costs, schedule, and product quality using methods of communication, collaboration, and cooperation
Service quality (readiness): Number of days between the date of substantial completion and the date of final completion (time to complete all punch list and other miscellaneous items of work)
Product quality (readiness): Number of warranty and callback incidents
Product quality (readiness): Cost severity (to owner) of warranty and callback issues
Product and service quality: Number of construction claim incidents
Product and service quality: Cost severity (to owner) of construction claims
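For illustration, the metrics defined in Tables 1 and 2 reduce to simple arithmetic on the collected contract values and milestone dates (the date variables are defined in the next subsection). The following Python sketch is illustrative only; it is not the authors' code, the record layout and field names are assumptions, and the growth-metric denominators follow Table 1 as printed.

```python
# Illustrative sketch only: computing the Table 1 and Table 2 metrics for a single
# project record. Field names are assumed for illustration; formulas follow the
# tables as printed.
from dataclasses import dataclass
from datetime import date


@dataclass
class ProjectRecord:
    # Cost variables collected from the project documents (normalized to 2012 dollars)
    original_contract_cost: float
    preconstruction_cost: float
    other_cost: float
    change_order_and_adjustment_cost: float
    original_design_cost: float
    final_design_cost: float
    # Schedule variables
    design_start: date
    construction_start: date
    original_completion: date
    substantial_completion: date
    # Facility characteristics
    gross_area_m2: float
    student_capacity: int


def cost_metrics(p: ProjectRecord) -> dict:
    original_construction = p.original_contract_cost + p.preconstruction_cost + p.other_cost
    original_project = original_construction + p.original_design_cost
    final_construction = original_construction + p.change_order_and_adjustment_cost
    final_project = final_construction + p.final_design_cost
    return {
        "original_construction_cost": original_construction,
        "original_project_cost": original_project,
        "final_construction_cost": final_construction,
        "final_project_cost": final_project,
        # Growth denominators are the original contract cost, as printed in Table 1
        "construction_cost_growth_pct": (final_construction - p.original_contract_cost)
        / p.original_contract_cost * 100,
        "project_cost_growth_pct": (final_project - original_project)
        / p.original_contract_cost * 100,
        "unit_cost_per_m2": final_project / p.gross_area_m2,
        "student_cost": final_project / p.student_capacity,
    }


def time_metrics(p: ProjectRecord) -> dict:
    planned_construction = (p.original_completion - p.construction_start).days
    actual_construction = (p.substantial_completion - p.construction_start).days
    planned_project = (p.original_completion - p.design_start).days
    actual_project = (p.substantial_completion - p.design_start).days
    final_project_cost = cost_metrics(p)["final_project_cost"]
    return {
        "planned_construction_days": planned_construction,
        "actual_construction_days": actual_construction,
        "planned_project_days": planned_project,
        "actual_project_days": actual_project,
        "construction_growth_pct": (actual_construction - planned_construction)
        / planned_construction * 100,
        "project_growth_pct": (actual_project - planned_project) / planned_project * 100,
        "intensity_m2_per_day": p.gross_area_m2 / actual_project,
        "intensity_dollars_per_day": final_project_cost / actual_project,
    }
```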
Time (Schedule Duration) Metrics

The following time variables were collected from the project documents and then used to formulate the time metrics presented in Table 2:
• Design Start—date listed in the design contract or, in the case that the date was not provided, the date the design contract was executed;
• Construction Start—date listed in the construction contract or as listed in the Notice to Proceed;
• Original Completion—substantial completion date listed in the construction contract or in the Notice to Proceed, or computed by adding the number of days listed in the construction contract to the Construction Start;
• Revised Completion—date to which the Original Completion is shifted by adding (or subtracting) the number of days allowed (or deducted) because of change orders;
• Substantial Completion—date listed on the Certificate of Substantial Completion; and
• Final Completion—date that the architect executed the Contractor's Final Application for Payment.

Quality Metrics

The research challenge was in formulating metrics that properly assessed the levels of product and service quality performance and in crafting survey questions that would properly elicit participant perspectives regarding these subjective issues. Care was taken to establish multiple metrics and targeted survey questions to capture the true measures of product and service quality performance while reducing variations that result from the subjectivity of the respondents. The complete list of quality metrics is presented in Table 3.

Ratings of owner satisfaction with regard to public school product quality performance were obtained using survey data with metrics directed at the workmanship of the building interior and exterior, environmental systems, and the overall project. Service quality satisfaction ratings were obtained using metrics focused on the abilities of the design and construction teams to manage and control project costs, schedule, and product quality using methods of communication, collaboration, and cooperation.

Three empirical measures of building readiness were developed as additional measures of quality performance. First, schedule data were used to establish the number of days required by contractors to complete all punch list and other miscellaneous items of work. For this service quality metric, readiness was defined as the number of days between the date of Substantial Completion and the date of Final Completion (the date the architect executed the Contractor's Final Application for Payment). The second and third empirical measures of readiness were developed using records of warranty and callback issues as indicators of product quality. The number of warranty and callback incidents and their severity (in terms of cost) were used to define these comparative measures of readiness.

The final two empirical measures of quality were developed through analysis of owner-provided construction claims data. Often caused by issues related to communication, project documentation, contract terminology, relationships, productivity, and workmanship, construction claims can lead to increased costs and schedule overruns (Saporita 2006; Akintoye and MacLeod 1997; Gordon 1994; Kangari 1995).
Measures of the number of construction claim incidents and their severity (in terms of cost) were used as indicators of overall (product and service) quality performance. Data Collection and Distributions To reduce the variability among projects included within the study, only new construction, full facility, public school projects were targeted for data collection. The initial search for historical project data was targeted at the departments of education within each state included in the study area. As shown in Fig. 1, state records contained a population of 829 public school projects that were completed during the study period. Based on the population, calculations were completed to determine a minimum sample size of 90 projects and the proportional numbers of projects required from each state to achieve valid results through statistical analysis. Because of the limited and varied types of public school construction data collected and maintained at the state level, the historical project data collection effort was focused directly at the more than 247 local school districts in which the public schools were constructed. All school districts were solicited for participation in this study, and a concerted effort was focused on school districts with active expansion programs, which correlated with those districts having larger student populations. Use of a survey approach for historical data collection purposes proved to be ineffective in obtaining precise cost and schedule information during early research efforts. Crosschecks of districtprovided data revealed that some respondents appeared to be approximating the project cost and schedule information without providing accurate figures. Therefore, it was determined that direct reviews of actual project documents would be required to verify and record accurate design and construction costs and schedule durations. Copies of the necessary documents were collected using postal mail, email, and a number of on-site meetings during which the researcher was able to review and copy the pertinent project documents directly from district files. Complete sets of documents that were assembled and reviewed for each project included the owner contractor agreement, owner architect agreement, notice to proceed, certificate of substantial completion, final construction application for payment, final architect invoice/billing, and final construction change order. This method of data collection improved the accuracy of the data and the precision of the results achieved during the data analysis stage. This is the first and only known project-delivery-method performance research of this magnitude that was conducted using actual construction project documents for data collection and verification purposes. The initial review of public school construction data within the study area revealed that the DB method had only been used for new construction of full-facility public school projects in the states of Georgia and Florida during the study period. Furthermore, only 2 of the 286 projects completed in Georgia and only 15 of the 230 qualifying projects completed in Florida were constructed using DB. Therefore, due to the limited use of the DB method and the consequent limited amount of project data representing this method, analysis of the performance of the DB method for public school construction was not included within this research. 
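The minimum-sample and proportional-allocation calculations mentioned above are not spelled out in the paper. One common approach is Cochran's sample-size formula with a finite-population correction; the sketch below is an assumption-laden illustration (the z, p, and e values are not taken from the study and yield a figure close to, but not exactly, the reported minimum of 90).

```python
# Illustrative assumption: the paper reports a minimum sample of 90 from a population
# of 829 projects but does not state the formula or parameters used. Cochran's formula
# with a finite-population correction is one common choice; z, p, and e are guesses.
import math


def cochran_min_sample(population: int, z: float = 1.96, p: float = 0.5, e: float = 0.10) -> int:
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)   # infinite-population estimate
    n = n0 / (1 + (n0 - 1) / population)     # finite-population correction
    return math.ceil(n)


def proportional_allocation(counts_by_state: dict, sample_size: int) -> dict:
    total = sum(counts_by_state.values())
    return {state: round(sample_size * n / total) for state, n in counts_by_state.items()}


population_by_state = {"Florida": 239, "Georgia": 286, "North Carolina": 195, "South Carolina": 109}
print(cochran_min_sample(sum(population_by_state.values())))   # ~87 with these assumed parameters
print(proportional_allocation(population_by_state, 137))       # strict proportional split of 137
```

Because the realized sample depended on district participation, the state-by-state counts shown in Fig. 1 differ somewhat from a strict proportional split of this kind.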
Once the historical data collection work was completed for a particular project, an Internet-based survey was distributed directly to the district manager (owner) responsible for the construction of public schools within the individual districts. The survey was used to obtain owner perceptions and ratings of the product quality of the new facility and the quality of service provided by the construction and design teams during the design and construction process. Because surveys were only distributed to managers in districts where complete sets of actual project documents had been obtained, and personal discussions with respondents regarding the forthcoming survey had been conducted, a 93.3% survey response ratio was obtained.

Complete project delivery method performance data sets, including information obtained through historical document reviews and surveys, were obtained from a sample of 137 proportionally distributed projects as shown in Fig. 1, which exceeded the calculated minimum requirements previously discussed. When distributed by project delivery method, the sample data of completed projects reflect ratios similar to those of the U.S. commercial sector. As presented in Fig. 2, the largest proportion of projects (65%) was completed using the DBB method, with a lesser portion (35%) being completed with CM at Risk. However, when distributed by both project delivery method and state as shown in Fig. 3, the data are clearly differentiated. The project delivery methods used in North Carolina and South Carolina resemble those of the population and the U.S. commercial sector. Districts in Georgia predominantly use DBB, whereas those in Florida overwhelmingly use the CM at Risk method.

Fig. 1. Population and sample distributions (population N = 829: Florida 239, Georgia 286, North Carolina 195, South Carolina 109; sample n = 137: Florida 30, Georgia 40, North Carolina 43, South Carolina 24)

Fig. 2. Sample distribution by project delivery method (sample n = 137: Design-Bid-Build 86, CM at Risk 51; U.S. commercial sector, FMI/CMAA 2010: Design-Bid-Build 55%, CM at Risk 24%, other 21%)

Fig. 3. Sample distribution by project delivery method and state (Design-Bid-Build/CM at Risk: North Carolina 27/16, South Carolina 19/5, Florida 4/26, Georgia 36/4)

Normalization and Testing Procedures

Before conducting the analysis, all costs were normalized to 2012 U.S. dollars using historical cost indexes and methods provided by RS Means (2013). Hypothesis tests and statistical evaluations were completed using two-group t-tests of the mean performance values of the cost, schedule, size, and capacity variables, and chi-square (χ²) distributions for the quality-performance metrics. All testing procedures were carried out based on 95% confidence intervals using SAS/STAT version 9.3 software. More than 600 individual tests were completed to compare the mean performance values of the CM at Risk and DBB public school projects.
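As a concrete illustration of the testing procedure just described (index normalization followed by a two-group mean comparison at the 95% confidence level), the following Python sketch re-creates the workflow. It is not the authors' SAS/STAT code; the index values and sample data are synthetic placeholders, and because the paper does not state whether variances were pooled, both variants are shown.

```python
# Illustrative re-creation (not the authors' SAS/STAT analysis). All values are synthetic.
import numpy as np
from scipy import stats


def normalize_to_2012(cost: float, index_at_completion: float, index_2012: float) -> float:
    """Scale a historical cost to 2012 dollars using a cost-index ratio (e.g., RS Means)."""
    return cost * index_2012 / index_at_completion


rng = np.random.default_rng(0)
dbb_unit_cost = rng.normal(100.0, 20.0, size=86)   # placeholder $/m^2 values, 86 DBB projects
cmr_unit_cost = rng.normal(120.0, 20.0, size=51)   # placeholder $/m^2 values, 51 CM at Risk projects

t_pooled, p_pooled = stats.ttest_ind(dbb_unit_cost, cmr_unit_cost, equal_var=True)
t_welch, p_welch = stats.ttest_ind(dbb_unit_cost, cmr_unit_cost, equal_var=False)
print(p_pooled < 0.05, p_welch < 0.05)             # significant at the 95% confidence level?
```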
An analysis of the responses to individual survey questions was completed using chi-square (χ²) distributions. Although the lower two survey-response categories, "Very Dissatisfied" and "Dissatisfied," were selected more often by DBB managers than by CM at Risk managers, very few responses were provided within these two categories for any individual survey question for either method, thus rendering analysis of these categories unviable for some questions. In most cases, the responses received in the lowest two categories were combined into a single "Dissatisfied" category to improve the viability for statistical analysis. In cases in which an adequate number of combined responses still did not exist, the lower two categories were eliminated, leaving only the highest two categories, "Satisfied" and "Very Satisfied," available for comparable analysis. Therefore, the quality performance Tables 7–10 present the combined total percentages of owner responses provided in the "Satisfied" and "Very Satisfied" categories.

Results

Comparative Size and Capacity Characteristics

Initial testing was completed by comparing the size and capacity characteristics of the DBB and CM at Risk school projects, resulting in no statistically significant differences in mean school project area, student capacity, or area per student ratios. As listed in Table 4, the mean project size of the DBB projects was shown to be 613.2 m² larger, with a student capacity 31.8 students greater, than that of the CM at Risk projects. The project area per student of the DBB and CM at Risk projects differed by a mere 0.2 m². Additional testing by project type (elementary, middle school, and high school) and by state (Georgia, North Carolina, and South Carolina) revealed no statistically significant project size, student capacity, or area per student differences among the public schools being compared. The Florida CM at Risk sample proved to be significantly larger in both size and student capacity because of the inclusion of four K-8 schools in the middle school category; the differences were not significant when the four oversized schools were removed.

Table 4. Size and Capacity Comparison of Design-Bid-Build and CM at Risk
Metric | Design-Bid-Build | CM at Risk | Difference | Percentage | P-Value
Gross project size (m²) | 13,999.56 | 13,386.32 | 613.24 | 4.38 | 0.5766
Student capacity | 1,073.66 | 1,041.78 | 31.88 | 2.97 | 0.6643
Area per student (m²) | 12.93 | 13.14 | −0.21 | −1.62 | 0.6570

Table 5. Cost Comparison of Design-Bid-Build and CM at Risk
Metric | Design-Bid-Build | CM at Risk | Difference | Percentage | P-Value
Original construction cost ($) | $20,960,467 | $26,001,207 | ($5,040,740) | −24.05 | 0.0148
Final construction cost ($) | $21,280,286 | $26,101,221 | ($4,820,935) | −22.65 | 0.0230
Original project cost ($) | $22,023,229 | $27,141,092 | ($5,117,863) | −23.24 | 0.0180
Final project cost ($) | $22,427,554 | $27,436,298 | ($5,008,744) | −22.33 | 0.0250
Unit cost ($/m²) | $148.80 | $191.60 | ($43) | −28.76 | <0.0001
Student cost ($/student) | $20,915.60 | $27,057.00 | ($6,141) | −29.36 | <0.0001
Construction cost growth (%) | 1.25% | 0.32% | 0.93% | 74.40 | 0.2249
Project cost growth (%) | 1.45% | 1.04% | 0.41% | 28.28 | 0.6004
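Each row of Tables 4 and 5 (and of the schedule comparison that follows) combines the two group means with a difference, a percentage expressed relative to the DBB mean, and the t-test P-value. The sketch below shows how such a row could be assembled from per-project values; it is illustrative only and is not the authors' code. Note that the owner-satisfaction tables later in the paper report the difference with the opposite sign (CM at Risk minus DBB).

```python
# Illustrative helper: build one comparison-table row from per-project values.
# The difference/percentage convention follows the size, cost, and schedule tables
# (DBB minus CM at Risk, expressed relative to the DBB mean).
import numpy as np
from scipy import stats


def comparison_row(metric: str, dbb: np.ndarray, cmr: np.ndarray) -> dict:
    difference = dbb.mean() - cmr.mean()
    _, p_value = stats.ttest_ind(dbb, cmr)
    return {
        "metric": metric,
        "design_bid_build": round(float(dbb.mean()), 2),
        "cm_at_risk": round(float(cmr.mean()), 2),
        "difference": round(float(difference), 2),
        "percentage": round(float(difference / dbb.mean() * 100), 2),
        "p_value": round(float(p_value), 4),
    }
```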
Cost Performance Findings

Proponents of CM at Risk believe the collaborative properties and other attributes of this project delivery method enable the construction of projects with better cost performance than those produced using the DBB method. However, the analysis performed in terms of cost performance for construction of public schools revealed that projects constructed using the DBB method significantly outperformed those constructed with the CM at Risk method across all cost metrics. As presented in Table 5, comparisons were completed for the original and final construction costs and the original and final project costs. The computations for project costs include design fees. Comparisons of unit and student costs were computed using school project size and student capacity means and final project cost. Similar results were obtained when testing was conducted by individual state and school type, with significant differences found in both North Carolina and Florida and for elementary and middle schools.

Mean Design Costs of CM at Risk projects were shown to be 7.3% higher than fees for DBB projects, although the difference was not statistically significant. However, Design Costs for North Carolina CM at Risk projects were 51.5% higher, a significant difference with a P-Value of 0.0240. Furthermore, testing by delivery method, state, and type combined resulted in a significantly higher Design Cost mean for North Carolina CM at Risk elementary school projects, with a difference of $405,062 (56.4%) higher than DBB and a P-Value of 0.0010. Unit and student costs of the North Carolina elementary schools constructed with CM at Risk were also shown to cost $503.77/m² and $5,446.80/student higher than public schools constructed using the DBB method. The analysis indicated that the cost growth on CM at Risk school projects was not significantly different from the cost growth on DBB projects.

Cost performance is not necessarily an indication of project value. Projects employing high-performance materials, systems, and methods may or may not achieve life-cycle cost benefits. Costs associated with these life-cycle issues were not measured or included within this study.

Schedule Performance Findings

Proponents of CM at Risk believe the collaborative properties and other attributes of this method, which include its ability to enhance the schedule through the use of fast-tracking, may enable the construction of projects with better schedule performance than those produced using the DBB method. However, the analysis indicated no significant schedule performance differences between the public school projects constructed with the CM at Risk and DBB project delivery methods, as presented in Table 6. The observed differences in schedule growth appear to show that the CM at Risk delivery method provides a higher level of schedule predictability. Examination of the data revealed that differences are generated primarily within the original schedules and increase through use of the change order process.

Project intensity (m²/day) and project intensity ($/day) were used as measures of productivity to determine the amount of project work (by area and cost) put in place per school project (design and construction) schedule day. Primarily due to the previously confirmed similarity of size and schedule characteristics, the analysis indicated no statistically significant differences between CM at Risk and DBB school projects with regard to project intensity (m²/day).
Moreover, although significant differences were found when testing by project intensity ($/day), the usefulness of this metric as a measure of productivity performance was ineffectual, as the difference is predominantly attributable to the higher cost of the CM at Risk school projects. Additional tests of construction intensity, which use the construction schedules in lieu of the project schedules, were conducted and resulted in similar findings.

Table 6. Schedule Comparison of Design-Bid-Build and CM at Risk
Metric | Design-Bid-Build | CM at Risk | Difference | Percentage | P-Value
Actual construction (days) | 569.00 | 564.70 | 4.30 | 0.76 | 0.8930
Actual project (days) | 1,023.60 | 1,008.30 | 15.30 | 1.49 | 0.8250
Construction schedule growth (%) | 15.58% | 12.41% | 3.17% | 20.35 | 0.4868
Project schedule growth (%) | 8.11% | 6.52% | 1.59% | 19.61 | 0.4760
Project intensity (m²/day) | 14.74 | 14.18 | 0.57 | 3.84 | 0.6084
Project intensity ($/day) | $22,924.90 | $28,784.40 | ($5,859.50) | −25.56 | 0.0033

Quality Performance Findings

The results of the survey data analysis revealed that projects completed with CM at Risk significantly outperformed DBB projects in all areas of product quality. As presented in Table 7, significantly larger percentages of "Satisfied" and "Very Satisfied" responses were provided by CM at Risk district managers with P-Value ≤0.0105.

Table 7. Owner Satisfaction with Construction Product Quality
Metric | Design-Bid-Build | CM at Risk | Difference | Percentage | P-Value
Overall quality | 93.02 | 100.00 | 6.98 | 7.50 | 0.0058
Building exterior | 86.05 | 100.00 | 13.95 | 16.21 | 0.0018
Building interior | 90.70 | 100.00 | 9.30 | 10.25 | 0.0028
HVAC system | 96.52 | 98.00 | 1.48 | 1.53 | 0.0105
Plumbing system | 93.02 | 100.00 | 6.98 | 7.50 | 0.0083
Lighting system | 93.03 | 98.00 | 4.97 | 5.34 | 0.0017
Warranty issues | 93.02 | 95.74 | 2.72 | 2.92 | 0.0073

As given in Table 8, the results indicate that the performance of the CM at Risk method for construction of public schools was superior to that of DBB in all areas of construction team service quality, including overall service, planning, cost control, schedule control, quality control, communications, and cooperation with P-Value ≤0.0083.

Table 8. Owner Satisfaction with Construction Team Service Quality
Metric | Design-Bid-Build | CM at Risk | Difference | Percentage | P-Value
Overall performance | 90.48 | 96.00 | 5.52 | 6.10 | 0.0040
Planning | 89.29 | 98.00 | 8.71 | 9.75 | 0.0029
Cost control | 91.67 | 98.00 | 6.33 | 6.91 | 0.0007
Schedule control | 83.34 | 94.00 | 10.66 | 12.79 | <0.0001
Quality control | 90.48 | 100.00 | 9.52 | 10.52 | <0.0001
Communication | 92.86 | 98.00 | 5.14 | 5.54 | 0.0047
Cooperation | 91.67 | 96.00 | 4.33 | 4.72 | 0.0083

Table 9. Owner Satisfaction with Design Team Service Quality
Metric | Design-Bid-Build | CM at Risk | Difference | Percentage | P-Value
Capture owner vision | 95.35 | 100.00 | 4.65 | 4.88 | 0.0562
Complete documents | 88.37 | 100.00 | 11.63 | 13.16 | 0.0004
Clearly defined documents | 88.37 | 96.00 | 7.63 | 8.63 | 0.0149
Timely responses | 91.86 | 93.88 | 2.02 | 2.20 | 0.0084
Communication | 91.86 | 95.92 | 4.06 | 4.42 | 0.0532
Cooperation | 91.86 | 94.00 | 2.14 | 2.33 | 0.0036

Table 10. Owner Satisfaction with Project Team Service Quality
Metric | Design-Bid-Build | CM at Risk | Difference | Percentage | P-Value
Overall service quality | 90.70 | 96.00 | 5.30 | 5.84 | 0.0025
Communication | 91.86 | 98.00 | 6.14 | 6.68 | 0.0864
Cooperation | 90.70 | 96.00 | 5.30 | 5.84 | 0.0038
Collaboration | 88.37 | 98.00 | 9.63 | 10.90 | 0.0039
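The satisfaction comparisons in Tables 7–10 are chi-square tests of response-category counts, with the two lowest categories collapsed (or dropped) as described under "Normalization and Testing Procedures." The sketch below illustrates that collapsing step; it is not the authors' analysis, and the counts are synthetic placeholders rather than study data.

```python
# Illustrative only: chi-square comparison of owner-satisfaction categories by
# delivery method, collapsing the two lowest categories. Counts are synthetic.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: DBB, CM at Risk; columns: Very Dissatisfied, Dissatisfied, Satisfied, Very Satisfied
counts = np.array([
    [2, 3, 45, 36],   # DBB (synthetic placeholder counts)
    [0, 1, 20, 30],   # CM at Risk (synthetic placeholder counts)
])

# Combine "Very Dissatisfied" and "Dissatisfied" into a single category
collapsed = np.column_stack([counts[:, 0] + counts[:, 1], counts[:, 2], counts[:, 3]])

chi2, p_value, dof, expected = chi2_contingency(collapsed)
print(p_value < 0.05)   # significant at the 95% confidence level used in the study?
```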
Analysis of district manager survey responses (Table 9) indicates that CM at Risk projects significantly outperformed DBB projects in all areas of design team service quality, with the exception of capture of owner vision and communication, for which the P-Values were marginally above the 0.05 level (0.0562 and 0.0532, respectively). As presented in Table 10, CM at Risk district managers reported significantly higher satisfaction with project team service quality in the areas of overall quality, cooperation, and collaboration when compared with responses from DBB managers. Although an observable difference was obtained for project team communication, the P-Value of 0.0864 was above the chi-square significance region of 0.05 set for the analysis.

The analysis indicated there were not enough data to conclude a significant difference in the mean number of disputes and claims between CM at Risk and DBB school projects. CM at Risk district construction managers reported three projects (5.9%) with one to three disputes and claims, whereas DBB managers reported eight of their projects (9.3%) within the same range. Because of the relatively small number of disputes and claims reported, there were also not enough data to produce a viable analysis regarding the cost difference for this issue. The full range of responses is provided in Table 11.

Table 11. Owner Reported (%) Disputes and Claims
Number of disputes and claims | Design-Bid-Build | CM at Risk | Difference | Percentage | P-Value
Zero | 90.70 | 94.12 | 3.42 | 3.77 | —
1–3 | 9.30 | 5.88 | −3.42 | −36.77 | —
More than 3 | 0.00 | 0.00 | 0.00 | 0.00 | —

Warranty and callback issues and their associated costs were used as empirical measures of product quality readiness. The analysis indicated no significant difference in the mean number of issues occurring in CM at Risk and DBB school projects, although a significant difference was found in the costs of these issues. Even though the observed data in Table 12 show a higher number of occurrences for the CM at Risk projects, the evidence indicates that the CM at Risk project delivery method was significantly better at controlling the cost impact of the warranty and callback issues that did occur.

Table 12. Owner Reported (%) Warranty and Callback Issues and Associated Costs
Number of issues | Design-Bid-Build | CM at Risk | Difference | Percentage
Zero to two | 21.43 | 14.63 | −6.80 | −31.73
3–5 | 40.00 | 36.59 | −3.41 | −8.52
6 or more | 38.57 | 48.78 | 10.21 | 26.47
(P-Value for number of issues = 0.5104)
Cost of issues | Design-Bid-Build | CM at Risk | Difference | Percentage
$0 | 0.54 | 0.76 | 21% | 39.27
$1 to $5,000 | 0.21 | 0.20 | −2% | −8.96
More than $5,000 | 0.24 | 0.05 | −19% | −79.91
(P-Value for cost of issues = 0.0405)

The final empirical measure of quality was computed using schedule performance data. As given in Table 13, no statistically significant differences were found between the mean service quality readiness values of the CM at Risk and DBB projects included in the study. Both methods required more than 300 days to complete all punch list and other miscellaneous items of work after the date of substantial completion.

Table 13. Service Quality and Readiness (Days)
Metric | Design-Bid-Build | CM at Risk | Difference | Percentage | P-Value
Readiness (days) | 347.40 | 312.90 | −34.50 | −9.93 | 0.4708

Conclusion and Recommendations

The impetus behind this research was to address the ongoing construction industry question: Which project delivery method, CM at Risk or DBB, performs at a higher level in terms of cost, time, quality, and claims?
In so doing, this study focused on providing current, statistically significant, empirical evidence to define the comparative performance attributes of both methods when used for the construction of public schools. This research has revealed the following empirical evidence:
• The cost performance of the DBB project delivery method was significantly superior to that of the CM at Risk method for the construction of public schools when comparisons were made across all cost metrics;
• The product quality performance of the CM at Risk project delivery method was significantly superior to that of the DBB method for the construction of public schools when comparisons were made across all product quality metrics;
• The service quality performance of the CM at Risk project delivery method was significantly superior to that of the DBB method for the construction of public schools when comparisons were made across all service quality metrics, with the exceptions of design team capture of owner vision, design team communication, and project team communication, for which observable (though not statistically significant) evidence was obtained indicating CM at Risk performance superiority; and
• Evidence was not obtained to support the superiority of either of the two project delivery methods being compared in terms of cost growth, time (schedule duration), time variance (schedule growth), claims, or warranty and callback performance.

Implications

The literature has exposed many important issues within the construction industry, including high levels of risk, project complexity, and a lack of trust among the participants, that the CM at Risk project delivery method is theoretically equipped to improve, alleviate, or control. District construction manager responses to survey questions focused specifically on collaboration and cooperation among the individual and collective construction, design, and project teams provide evidence that the collaborative properties of CM at Risk have enabled this method to perform at significantly higher product and service quality levels than the DBB method. This evidence supports the foundational research and theoretical construct that collaborative environments have the ability to positively influence the product and service quality of construction projects.

Conversely, the cost performance results indicate that the collaborative properties of the CM at Risk method were not capable of controlling or reducing costs, time, or risk (claims) on these projects. In fact, costs for the CM at Risk projects were significantly higher. Furthermore, virtually all survey responses to product- and service-related questions were in the "Satisfied" and "Very Satisfied" categories for both CM at Risk and DBB projects, indicating that almost all schools within the study met the required standards and expectations regardless of the delivery method being used. Essentially, public school administrators were paying a significant premium for superior, although perhaps superfluous, levels of service and product quality.

The performance differences between this research and the foundational research and theoretical construct may be attributable in part to the fact that the current research was focused strictly on public sector projects that are similar in both size and type, as opposed to the wide varieties and sizes of both public and private projects included in the foundational research. Further explanation may be related to project complexity. Although public school projects appear to be a good fit for CM at Risk because of the number of involved parties and their complex nature, evidence was obtained showing that some districts have implemented the use of prototypical designs, standardized equipment, and prequalification procedures to reduce unpredictable results (risk). These actions may have resolved many of the issues for which the collaborative properties of the CM at Risk method are purported to provide an advantage, thereby making the projects more suitable for the DBB method. Furthermore, the evidence indicated that CM at Risk contractors were sometimes not hired early enough in the process to provide preconstruction services of substantive value, which would have limited the effectiveness of CM at Risk. Additionally, although significant schedule performance differences were not found, observable evidence did indicate that the collaborative properties of the CM at Risk method may have improved original schedule predictability and increased the number of days awarded to contractors in the change order process. Finally, it is theorized that the lack of schedule differences found between the two methods may have had more to do with the nature of public school construction schedule requirements (that facilities be open and operable to meet a rigorous public participation schedule) than with the project delivery method being used.

Additional possible explanations for the cost, schedule, quality, and risk performance differences experienced within this research may be related to the selection of an improper project delivery method, given the particular set of project design and construction circumstances. Evidence was obtained showing that incorrect owner perceptions and unrealistic expectations of project delivery method abilities influenced the delivery method decision-making process, which served to increase performance differences.

Recommendations

This research has shown that the collaborative properties of the CM at Risk method provide significant beneficial product and service quality performance improvements when used for constructing public school projects. However, the evidence indicates that these enhancements come with significant increases in the cost of construction. This study has also shown that the DBB method has the ability to provide public school projects with satisfactory quality levels, with the added benefit of significantly lower costs. Furthermore, this research has cited previous studies confirming the inherent risks within the construction industry, and specifically those associated with public school projects, that serve to complicate the construction process. Therefore, those in decision-making capacities will be required to make value-based decisions when selecting the most appropriate delivery methods for the construction of public schools within their districts, balancing budgetary concerns with their required product and service quality levels. These decisions require a wide body of knowledge of the beneficial and limiting attributes of the various project delivery methods and an ability to select the most appropriate methods for the situational aspects of the particular projects in question. The results of this research have provided conclusive project delivery method performance information that can be used to assist decision-makers in the selection process.
Thus, it is recommended that state and local district policies be aligned to allow for the widest possible selection of project delivery methods for the construction of public school projects, enabling those in decision-making capacities to make the proper choices to benefit the public. Future Research Future research related to public school construction should include the value analysis of high-performance designs, materials, and equipment suitable for energy-efficient and sustainable building approaches. Additionally, in light of the many variables that drive 05016009-9 J. Constr. Eng. Manage., 2016, 142(10): 05016009 J. Constr. Eng. Manage. cost, schedule, quality, and claims issues during the construction of public schools, the researchers suggest a more in-depth study through use of a direct comparison case study approach, to limit the influences of outside effects. Research is forthcoming regarding the previously discussed influence that incorrect owner perceptions and unrealistic expectations combined with restrictive district policies are having on project delivery method selections. Downloaded from ascelibrary.org by Charlie Roberts on 11/17/16. Copyright ASCE. For personal use only; all rights reserved. References Abramson, P. (2012). “The 2012 school construction report.” CR1-16, School Planning and Management, Chatsworth, CA. Abramson, P. (2013). “The 2013 school construction report.” CR1-16, School Planning and Management, Chatsworth, CA. Addis, B. (2007). Building: 3000 years of design engineering and construction, Phaidon Press Limited, London. Akintoye, A. S., and MacLeod, M. J. (1997). “Risk analysis and management in construction.” Int. J. Project Manage., 15(1), 31–38. Atkinson, R. (1999). “Project management: Cost, time, and quality, two best guesses and a phenomenon, it’s time to accept other success criteria.” Int. J. Project Manage., 17(6), 337–342. Bennett, J., Pothecary, E., and Robinson, G. (1996). “Designing and building a world-class industry.” Centre for Strategic Studies in Construction, Univ. of Reading, Reading, U.K. Bullen, C. V., and Rockart, J. F. (1981). “A primer on critical success factors.” Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA. Carolinas AGC. (2009). Conf. on Alternative Project Delivery Methods, Charlotte, NC. Chan, A. P. C., and Chan, A. P. L. (2004). “Key performance indicators for measuring construction success.” Benchmarking: An Int. J., 11(2), 203–221. Cox, T., Kenig, M., Allison, M., Kelley, S. W., and Stark, M.. (2011). “Primer on project delivery.” American Institute of Architects (AIA), Washington, DC. D’Agostino, B., and Bridgers, M. (2010). Eleventh annual survey of owners: Rising from the ashes of recent economic woes, FMI/CMAA, Raleigh, NC. Demkin, J. A., and AIA (American Institute of Architects). (2008). The architect’s handbook of professional practice, Wiley, Hoboken, NJ. Fitchen, J. (1986). Building construction before mechanization, MIT Press, Cambridge, U.K. Forester, J. (1989). Planning in the face of power, University of California Press, Los Angeles. Ghavamifar, K., and Touran, A. T. (2008). “Alternative project delivery systems: Applications and legal limits in transportation projects.” J. Prof. Issues Eng. Educ. Pract., 10.1061/(ASCE)1052-3928(2008) 134:1(106), 106–111. Gordon, C. (1994). “Choosing appropriate construction contracting method.” J. Constr. Eng., 10.1061/(ASCE)0733-9364(1994)120:1(196), 196–210. © ASCE Heady, E. J. (2012). Construction Law; the History is Ancient! 
Construction Connection News Letter, 〈http://www.constructionconnection.com/ blog/tag/history-of-construction-law/〉 (Jun. 26, 2012). Kangari, R. (1995). “Risk management perceptions and trends of U.S. construction.” J. Constr. Eng. Manage., 10.1061/(ASCE)0733-9364(1995) 121:4(422), 422–429. Kenig, M. E. (2011). Project delivery systems for construction, Associated General Contractors of Americas, Arlington, VA. Konchar, M. (1997). “A comparison of united states project delivery systems.” Technical Rep. No. 38, Pennsylvania State Univ., University Park, PA. Konchar, M., and Sanvido, V. (1998). “Comparison of U.S. project delivery systems.” J. Constr. Eng. Manage., 10.1061/(ASCE)0733-9364(1998) 124:6(435), 435–444. McNichol, E., Oliff, P., and Johnson, N. (2011). States continue to feel recession’s impact, Center on Budget and Policy Priorities, Washington, DC. National Research Council. (2009). Advancing the competitiveness and efficiency of the U.S. construction industry, National Academies Press, Washington, DC. O’Connor, P. J. (2009). “Integrated project delivery: Collaboration through new contract forms.” Faegre & Benson LLP, Minneapolis. Oliff, P., Mai, C., and Palacios, V. (2012). States continue to feel recession’s impact, Center on Budget and Policy Priorities, Washington, DC. Rockart, J. F. (1979). “Chief executives define their own data needs.” Harvard Bus. Rev., 57(2), 81–93. Rojas, E. M., and Kell, I. (2008). “Comparative analysis of project delivery systems cost performance in Pacific Northwest public schools.” J. Constr. Eng. Manage., 10.1061/(ASCE)0733-9364(2008)134:6(387), 387–397. RS Means. (2013). RS means construction cost data, 71st Ed., Construction Publishers & Consultants, Norwell, MA. Sanvido, V., and Konchar, M. (1999). “Selecting project delivery systems: Comparing design-bid-build, design-build, and construction management at risk.” Project Delivery Institute, State College, PA. Saporita, R. (2006). Managing risk in design and construction projects, ASME Press, New York. SAS/STAT version 9.3 [Computer software]. SAS Institute, Cary, NC. U.S. Census Bureau. (2011). “Capital spending report 2011.pdf (application/pdf object).” Washington, DC. U.S. DOT (U.S. Department of Transportation). (2006). “Design-build effectiveness study.” Washington, DC. Vincent, J. M., and McKoy, D. L. (2008). The complex and multi-faceted nature of school construction costs: Factors affecting California, American Institute of Architects, Sacramento, CA. Williams, G. H. (2003). “An evaluation of public construction contracting methods for the public building sector in Oregon using data envelopment analysis.” Ph.D. dissertation, Portland State Univ., Portland, OR. Zaghloul, R., and Hartman, F. (2003). “Construction contracts: The cost of mistrust.” Int. J. Project Manage., 21(6), 419–424. 05016009-10 J. Constr. Eng. Manage., 2016, 142(10): 05016009 J. Constr. Eng. Manage.