Answers to Selected Questions
Barry Boehm
CS 510, Pre-Midterm 1
September 21, 2015

Risk and Time to Ship

Risk exposure: is there anything else that can mitigate risk exposure besides time to ship? I noticed in the PowerPoint that as time to ship increases, risk exposure decreases. Is there anything related to evidence, besides time to ship, that can cause risk exposure to fluctuate? (ICSM Principles 3 & 4; slides 20 & 21)

In slide 21, Time to Ship is only addressing risk due to unacceptable dependability. As seen in slide 22, the risk of market share erosion increases with time to ship. Risk management is all about balancing the sources of risk, as seen in slide 23.

[Slide: Example RE Profile: Time to Ship - Loss due to unacceptable dependability. RE = P(L) * S(L); many/critical defects mean high P(L) and S(L), few/minor defects mean low P(L) and S(L). Horizontal axis: Time to Ship (amount of testing).]

[Slide: Example RE Profile: Time to Ship - adds Loss due to market share erosion. Many/strong rivals mean high P(L) and S(L); few/weak rivals mean low P(L) and S(L).]

[Slide: Example RE Profile: Time to Ship - Sum of Risk Exposures, with the Sweet Spot at the minimum of the total RE curve.]
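The sweet-spot idea on the slides above can be sketched numerically. The following is purely illustrative and not from the course materials: the curve shapes and every number are invented assumptions, chosen only to show how the dependability and market-erosion risk exposures trade off against time to ship.

```python
# Illustrative sketch only: the exponential curve shapes and all numbers below
# are invented assumptions, not values from the lecture or the ICSM book.
# Each risk exposure is RE = P(L) * S(L); the total is the sum of the two
# loss sources, as on the "Sum of Risk Exposures" slide.
import math

def re_dependability(t):
    # More testing before shipping (t, in months) -> fewer and less critical
    # defects, so both P(L) and S(L) drop.
    p_loss = math.exp(-0.5 * t)           # probability of a dependability loss
    s_loss = 100.0 * math.exp(-0.3 * t)   # size of that loss (assumed $K)
    return p_loss * s_loss

def re_market_erosion(t):
    # Longer time to ship -> rivals are more likely to capture market share.
    p_loss = 1.0 - math.exp(-0.25 * t)    # probability of market-share loss
    s_loss = 80.0                         # size of that loss (assumed $K)
    return p_loss * s_loss

# Scan candidate ship times and report the sweet spot minimizing total RE.
times = [0.5 * k for k in range(1, 41)]   # 0.5 .. 20 months of testing
total_re = {t: re_dependability(t) + re_market_erosion(t) for t in times}
sweet_spot = min(total_re, key=total_re.get)
print(f"Sweet spot at ~{sweet_spot} months; total RE ~ {total_re[sweet_spot]:.1f}")
```

With these made-up curves the minimum falls at roughly 2.5 months; stronger rivals or more critical defects would shift the sweet spot, which is the balancing act described in the answer above.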
Choosing Versions of COCOMO II

With regard to the Early Design and Post-Architecture models in COCOMO II, when would you choose one over the other, and what is the appropriate point in the development lifecycle to transition from Early Design to Post-Architecture? What should you do if different parts are in different phases?

In general, it is best to use the Applications Composition model in the Exploratory phase, the Early Design model in the late Exploratory or early Valuation phase, and the Post-Architecture model in the late Valuation or Foundations phase. It is best to try to have a Post-Architecture estimate by the end of the Valuation phase. If different parts are in different phases, it is best to follow the guidance above for each part.

Reducing IDPD

Is it possible to minimize the effects of Incremental Development Productivity Decline (IDPD) through design decisions? Many of the examples in the book demonstrate that platform flexibility and reuse across product lines can make a good business case for trading additional early complexity for a product ecosystem that reuses earlier development work. How does this balance with IDPD when a software artifact must support multiple product lines? Does that have an unintended effect on the business case?

Yes. When a product line is feasible, developing its architecture and reusable components will reduce the needed effort, particularly if the increments are subsequent releases of the product line. There will still be some IDPD for the earlier releases if they need to be kept up to date. Similar strategies will work for multiple product lines. Other strategies, such as identifying and modularizing around major sources of change, will also reduce IDPD.

MedFRS Additional Personnel

In HW3, Table 7-3 discusses system upkeep and additional personnel figures. Do we need to use these in calculations, and if so, how would they apply?

The additional personnel are to be hired once the Initial Operational Capability is ready to be operated. Thus they are not covered in Homework 3. They could be continuing expenditures for a subsequent business case.

Calibrating COCOMO II

So we learned about the A, B, C, D variables in the COCOMO II model, and Dr. Boehm mentioned that while the variables are based on empirical data, they can be custom calibrated. Let's say I am a portfolio manager and I want to use my firm's historical data to accomplish this. How would I go about doing this?

You can get a free COCOMO II calibration package called Calico from Softstar Systems by Googling on "calico costar" and clicking on the Softstar Systems website URL.
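The idea behind such a calibration package is a log-linear fit of the COCOMO II effort equation, PM = A * Size^E * (product of effort multipliers), with E = B + 0.01 * (sum of scale factors). The sketch below shows the simplest form, locally calibrating only the constant A while holding B at its published value. It is an illustration of the approach rather than a replacement for Calico: the three history records and their numbers are invented, and a real calibration would use your firm's full data set.

```python
# Minimal sketch of locally calibrating the COCOMO II constant A from
# historical projects.  The three records and their numbers are invented;
# a tool such as Calico automates this (and can also calibrate B).
import math

# Each record: (actual person-months, size in KSLOC, sum of the five scale
# factors, product of the effort multipliers) for a completed project.
history = [
    (586.0, 100.0, 16.0, 1.10),
    (120.0,  25.0, 19.0, 0.95),
    ( 45.0,  10.0, 21.0, 1.00),
]

B = 0.91  # published COCOMO II.2000 value, held fixed in this simple sketch

# From PM = A * Size^E * prod(EM):  ln(PM / prod(EM)) = ln(A) + E * ln(Size),
# with E = B + 0.01 * sum(SF).  Each project then yields one estimate of ln(A).
ln_a_estimates = []
for pm, ksloc, sum_sf, prod_em in history:
    e = B + 0.01 * sum_sf
    ln_a_estimates.append(math.log(pm / prod_em) - e * math.log(ksloc))

A = math.exp(sum(ln_a_estimates) / len(ln_a_estimates))
print(f"Locally calibrated A ~ {A:.2f} (the COCOMO II.2000 default is 2.94)")
```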
Personnel and Team Factors

In EP-2, page 47, Section 2.3.2.1.3 on personnel factors says "the personnel factors are for rating the development team's capability and experience, not the individual." In EP-2, page 34, Section 2.3.1.4 on TEAM cohesion says "The team cohesion scale factor accounts for the sources of project turbulence and entropy because of difficulties in synchronizing the project's stakeholders." The question is how to deal with these factors when their sources overlap, given that developers are also stakeholders. In other words, aren't inexperienced developers counted twice, once by the TEAM factor and once by the personnel factors?

The TEAM factor adds effort caused by difficulties in getting all of the success-critical stakeholder teams (Developers, Customers, Maintainers, Users, Suppliers, etc.) to collaborate. This source of effort is largely independent of the level of experience of the members of the Development team. Highly experienced developers can sometimes be even less cooperative.

Cost Effects of CMMI Levels

COCOMO II has equations for calculating the schedule and cost of software development and maintenance. However, the aerospace industry is CMMI compliant and requires activities such as peer reviewing, SCM maintenance, and other activities that are SLOC-dependent. Is there any cost estimation equation that links the cost and schedule of software development to CMMI activities?

The COCOMO II Process Maturity (PMAT) scale factor covers the cost effects of CMMI levels (see Table 2.15 in EP-2, Chapter 2 of the COCOMO II book). COCOMO II was developed in 1995-2000, when the Software CMM was being used, but we have found that the CMMI scale is roughly the same. In general, improving by one CMMI level will decrease cost by about 10%.
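As a rough worked example of that rule of thumb (the arithmetic below is only an illustration; the 1,000 person-month baseline is an invented figure):

```python
# Rough arithmetic for the "about 10% per CMMI level" rule of thumb quoted
# above.  The 1,000 person-month baseline is an invented example figure.
baseline_pm = 1000.0      # estimated effort at the current maturity level
levels_gained = 2         # e.g., moving from CMMI Level 2 to Level 4
improved_pm = baseline_pm * (0.90 ** levels_gained)
saving_pct = 100.0 * (1.0 - 0.90 ** levels_gained)
print(f"~{improved_pm:.0f} PM after improvement, about a {saving_pct:.0f}% reduction")
```

For exact effects, use the PMAT ratings in Table 2.15 rather than this flat 10%-per-level approximation.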
Overlaps Among the ICSM Principles

Regarding ICSM Principle 1: in the book, pages 44-45 discuss the Valuation and Foundations phases, with examples of prototyping and testing the prototypes with stakeholders to see whether they are good enough to be built into a specific product in the near future. However, isn't this kind of action more a matter of Principle 4, evidence- and risk-based decisions? For example, providing prototypes to a valued customer exposes defects earlier, so failures could be caught at an early stage. Could you give a specific explanation of these overlaps?

The ICSM principles are not meant to be mutually exclusive. As with other good practices such as Teambuilding and Win-Win Negotiation, Prototyping supports all four principles. As above, it supports stakeholders' mutual understanding for Principle 1. It supports stakeholders' incrementally committing to partial versions of the system for Principle 2. It involves concurrently engineering the system's operational concept, requirements, and user interface for Principle 3. And as above, it provides evidence of system feasibility for Principle 4.

Alternatives to Competitive Prototyping

In the RPV failure story from the ICSM book, Ch. 3, part of the failure can be attributed to shortcomings with respect to Principle 2, incremental commitment and accountability, on the part of both the customer and the winning bidder. Would a better approach have been (a) in order to facilitate incremental commitment, rather than publishing a comprehensive proposal that encapsulated all of their wishes for the future of RPVs, the customer could have published a proposal prioritizing their requirements; (b) following one of the critical elements of effective commitment, the winning bidder should have been forthcoming about their ability to fulfill some of the requirements; or (c) both? If both, what is a good approach to finding that "sweet spot"? Is it just a matter of trial and error? If neither, are there better approaches other than concurrent competitive prototyping?

For the 4:1 RPV, there were many serious uncertainties about the nature of the missions to be supported, the maturity of some of the technologies, and the performance characteristics of the RPVs and their controllers. Neither the customer nor the bidders had enough knowledge of these uncertainties to define or prioritize requirements. As discussed at the end of Section 3.2, competitive prototyping has a number of practical difficulties, but it fulfills some of the characteristics of prototyping as a way of buying information to reduce uncertainty and risk. There are other approaches, such as the DARPA approach of offering million-dollar prizes for such feats as producing the fastest unpiloted ground vehicle to reach Las Vegas.

Overlaps Among the ICSM Principles - 2

I'm having a little bit of trouble discerning between incremental commitment and risk-based decisions; at the least, they seem very strongly connected to me. It seems to me that if a team did well at making risk-based decisions, then they would be successful at making incremental commitments as well (since the risk would be too high to make the commitment if there was not enough evidence that the work could be done). Conversely, if they did poorly at making risk-based decisions, this would cause them to make bad commitments. From this, it seems there is a causal relationship between Principle 4 and Principle 2. Am I correct in inferring this, or am I getting confused?

As above, the principles overlap considerably, and that's OK. As seen in the case studies, some of the failure stories fail at all four principles, and some of the success stories succeed at all four.

Evaluating Competitive Prototyping

Page 89 of the ICSM book discusses the potential risks in the success story of the concurrent competitive prototyping RPV systems development. In the solution for HW2, question 5, the second risk listed is from that page: the extra expense of keeping prototype teams together and productive during often-overlong evaluation and decision periods. The mitigation for this risk is to have contracts be for two phases, where the contractor's second phase overlaps with evaluation of their first phase. I don't understand how the two phases can be overlapped. If they didn't finish the first phase, could they go on to the second phase? Can you give an example of this?

It is a nontrivial job to evaluate competitive prototypes. If it is not done thoroughly, the losing competitor can protest the decision, leading to many months of delay in judging the protest. A thorough evaluation will take several months and require a versatile infrastructure for exercising several independently developed prototypes. The evaluations determine the next round of prototypers, and thus cannot overlap the previous round or the next round. The competitors can't afford to keep their teams paid for several months while they wait to see if they are still in the competition. The best solution is for the customer to provide funds for keeping the core teams together, and to have them answer questions and perform follow-up exercises of their prototypes.

The Second Cone of Uncertainty

As shown in the next chart, the increasingly rapid rate of change in a product's competition, technology, organization, and mission priorities will make a project's originally-determined requirements increasingly obsolete. Simply following the original requirements will miss opportunities both to capitalize on new technology and save costs, and to respond to competitive pressures and increasing costs. The longer the period of development, the wider the Second Cone of Uncertainty will be.

[Chart: The Cones of Uncertainty - need incremental definition and development; uncertainties in competition, technology, organizations, and mission priorities.]

Skipping the Valuation and Foundations Phases

What if a project was started using one of the traditional software development models, leaving us with good documentation about the scope and technology of the project? Does skipping the Valuation and Exploration phases violate the ICSM principles? If yes, is there a way we can use the existing documentation without violating the ICSM principles? (Chapter 0)

If the project is relatively small, there is little risk in skipping the Valuation and Foundations phases and doing short agile sprints and releases, as the Second Cone of Uncertainty can be rebaselined for the next sprint or release. For larger projects, using the existing documentation for the whole project will run the risk of it becoming obsolete per the Second Cone of Uncertainty. However, proceeding incrementally (as with the MedFRS example) can reduce this risk.

ICSM and Waterfall

On the exam, will we be asked questions related to other software models, such as the waterfall model and the spiral model, that were covered in Lecture 1? For example: "The ICSM is better than Waterfall at saving money, true or false."

Since the waterfall model and most other models are special cases of the ICSM, that will not be an exam question.
RPV Failure Story and Principle 2

Reference: HW2, RPV failure story (failure with respect to Principle 2). Could you explain the failures with respect to Principle 2, which speaks about incremental commitment and accountability? I lost points for that answer in my homework, so I am wondering whether I got the concept wrong.

The RPV failure story was a failure because it made a total commitment to a requirement, budget, and schedule without evidence that this approach was feasible. Thus it was an example of the need for Principles 2 and 4, as well as Principle 3. Since the RPV controllers would be stakeholders, it could also be called a failure with respect to Principle 1.

TRW-SPS and the 4 ICSM Principles

In Chapter 2, Section 2.2, the book and slides mostly focus on the success with respect to Principle 2 (and also partly Principle 4, I think). Could you please explain how the success of TRW-SPS relates to the other ICSM principles?

TRW-SPS followed Principle 1 in asking stakeholders for their opinions about how best to improve productivity and including the results in the strategy. It followed Principle 3 in incrementally defining the system's technology and architecture choices, and in incrementally developing the early-adopter project's high-priority requirements first. It followed Principle 4 in assessing evidence of feasibility and risks of making the wrong commitments.

Risks of Proceeding on Small and Large Projects

Why does the author say it is less risky to proceed with an evidence shortfall on a small project with a small cost of rework than to forge ahead with a very large project with a very large cost of rework? Does he mean it is better to invest in evidence on a small project than on a large project because it can avoid the rework? I am confused about the meaning of this part. (Reference: ICSM book, Ch. 4, page 98)

A risk-based decision, as in Principle 4, is one that avoids large risks. Since the risks are small on small projects, it is less necessary to have complete evidence of feasibility before proceeding, as compared to large projects.

Decision Reviews Without Evidence

Please explain how evidence serves as an important criterion at milestone decision reviews, with a real example, and how this relates to traditional schedule-based or event-based reviews. (Ref: ICSM p. 97)

One explanation is the failure story of the Unaffordable Requirement, where the project customer decided to proceed without evidence of feasibility. There are numerous other examples, such as the MasterNet example in Chapter 2, the failed RPV example in Chapter 3, and the healthcare.gov example in Section 0.7.

Exam Format

What will be the exam format? That is, will the questions be similar to the homework (short answer), will there be true-false or multiple-choice sections, and if there are multiple sections, how will they be weighted? [Note: This may not be the type of question you were hoping for from this assignment, but it is one that I feel will help me best prepare for the exam.]

Jim Alstad's response is correct for Midterm 1, in that the format follows last year's Midterm 1 format: 2 calculation questions worth 30 and 40 points, and 10 true-false questions worth 3 points each. The calculations will not involve exotic parts of EP-2, but may involve business case formulas. There is no guarantee that Midterm 2 and the Final Exam will follow last year's formats.
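For the business case formulas mentioned in the last answer, a common form is cumulative return on investment, ROI = (benefits - costs) / costs. The sketch below assumes that form; check EP-2 and the ICSM business-case material for the exact formulas used in class, and note that all of the yearly cost and benefit figures are invented.

```python
# Minimal business-case sketch: cumulative ROI = (benefits - costs) / costs.
# The yearly cost and benefit figures are invented for illustration only.
costs    = [300.0, 100.0, 100.0, 100.0]   # development + operations, by year
benefits = [  0.0, 250.0, 400.0, 500.0]   # value delivered, by year

cum_cost = 0.0
cum_benefit = 0.0
for year, (cost, benefit) in enumerate(zip(costs, benefits)):
    cum_cost += cost
    cum_benefit += benefit
    roi = (cum_benefit - cum_cost) / cum_cost
    print(f"Year {year}: cumulative ROI = {roi:+.2f}")
```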