T-76.115 Software Project

T-76.115 Project Review
Festival
Iteration I1
Progress Report (PRELIMINARY)
29.11.2004

Agenda

- Project status
  - Achieving iteration goals
  - Project metrics
- Used work practices
- Work results
  - System Architecture
  - Testing Plan
  - Demo

Introduction to the project

- The application name has changed from m3Festival to mGroup powered by DiMaS.
- We are developing an application called mGroup.
- mGroup
  - is an application for mobile phones,
  - lets groups of people chat with text and images,
  - is intended to be used at festival-type events.
- More about mGroup:
  - It features a server, the mGroup server, which relays messages between mGroup applications.
  - Information from mGroup is visible in the DiMaS peer network.
  - It has more advanced features, such as group management and some support for commercial content creators.

Status of the iteration's goals

- Goal 1: "Get started with development" (OK)
- Goal 2: "Nightly builds (including tests) running" (Partially OK)
  - Nightly builds run OK, and CruiseControl informs anyone who breaks the build.
  - Very few unit tests have been written.
- Goal 3: "Creating media: taking pictures and writing text" (OK)
- Goal 4: "Immediate sharing: pictures and text distributed to a group, and replying to shared items" (OK)
- Goal 5: "Technical specification" (Partially OK)
  - A first draft has been written, but its contents have not been discussed with the technical advisor.

Status of the iteration's deliverables

- Project Plan (OK)
- Requirements Specification (Partially OK)
  - A list of 30 additions has been devised, but it was presented too late for the customer to have time to approve it.
- Technical Specification (Partially OK)
  - A first draft has been written, but its contents have not been discussed with the technical advisor.

Realization of the tasks

- All tasks were started and completed, although the implemented features still require attention.
- We planned 344 hours for this iteration; the realized tasks (including meeting tasks) add up to 359 hours.
- The estimation error for individual tasks is fairly large: 48% average absolute error.
- The total error is much smaller: 4%.
- Tasks in this phase were too fine-grained, which made them hard to estimate.
  - E.g. the separation of protocol and UI parts for individual features.
- All in all, the iteration was estimated pretty well.

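The two error figures above can be sketched in code. The per-task hours below are made up for illustration; the point is that large per-task misses can cancel out, so a big average absolute error coexists with a small total error, exactly the pattern reported on this slide.

```python
# Illustration (not project code): how the two estimation-error metrics
# on this slide can be computed from (planned, actual) hours per task.
def estimation_errors(tasks):
    """Return (average absolute per-task error, total error) as fractions."""
    per_task = [abs(actual - planned) / planned for planned, actual in tasks]
    avg_abs_error = sum(per_task) / len(per_task)
    total_planned = sum(planned for planned, _ in tasks)
    total_actual = sum(actual for _, actual in tasks)
    total_error = abs(total_actual - total_planned) / total_planned
    return avg_abs_error, total_error

# Hypothetical tasks: (planned hours, actual hours). Individual misses
# are large but cancel out, so the total error stays small.
tasks = [(10, 16), (10, 5), (20, 28), (20, 12), (40, 39)]
avg_abs, total = estimation_errors(tasks)
# avg_abs is 0.385 (38.5%) while total is 0.0 (the totals match exactly)
```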
Previous Actions to Improve Task Planning

- In the PP iteration we had over-optimistic task estimates.
  - In this iteration, the total estimates are in much better control.
- We specified these actions for improvement:
  - Estimates should not be optimistic.
    - The estimates were not optimistic in total, since we did not overshoot the estimated amount of time.
    - Making a fine-grained division of the tasks probably helped ensure that the initial estimate for the whole did not become too optimistic.
    - But recording hours for tasks at this fine-grained level is difficult.
  - Hours will be estimated by those who actually execute the tasks.
    - This was done to some degree: one developer made all programming estimates.
  - Better distribution of hour-reporting information.
    - Meeting memos and task distributions contain Trapoli information.
  - More frequent monitoring of task status.
    - This was done three or four times a week, and producing the work burndown graph helped make sense of the information in the report.

Actions to Improve Task Planning in I2

- The planning game at the beginning of this iteration was successful; we will do it again.
  - This time we will include documentation tasks, so that they are prioritized with the rest.
- We will enter all tasks into Bugzilla.
  - One Bugzilla task per Trapoli task.
  - Tasks are prioritized and discussed there.
- Trapoli tasks will not be as detailed as in the previous iteration.
  - One task per feature.

Working hours by person

Realized hours in this iteration:

Person    Realized   Planned   Diff
Jouni       49.5       54      -4.5
Lauri       72         50     +22
Manne       49.35      40      +9.35
Sam         43.5       50      -6.5
Sami        26         50     -24
Tommi       68.75      55     +13.75
Tuomas      45.5       50      -4.5
Total      354.6      349      +5.6

- Lauri and Tommi are putting in some extra effort. We must save their hours in future iterations.
- Sami's work on the requirements specification was started too late in this iteration, so it was not fully completed.
- The data is one day old.

Working hours by person

- We got feedback that we had scheduled too much work for iteration FD.
- To lessen the amount of work in iteration FD, we have introduced an internal Christmas iteration.
- The pace in the Christmas iteration is 6 hours/person/week, so it can almost be considered a holiday.
- The pace in the last two iterations is 8 hours/person/week, which is much less than the 13 hours/person/week we had in this iteration.

Previous plan (hours per iteration, Sub = subtotal):

Person    PP   Sub   I1   I2   FD   Total
Jouni     31    31   52   56   51    190
Lauri     54    54   50   40   46    190
Manne     77    77   38   33   42    190
Sam       15    15   50   65   60    190
Sami      49    49   51   45   45    190
Tommi     32    32   53   55   50    190
Tuomas    40    40   50   50   50    190
Total    298   298  349  342  349   1330

Revised plan with the Christmas iteration (Chr); PP and I1 are realized hours:

Person    PP   I1   Sub   Chr   I2   FD   Total
Jouni     31   50    80    30   56   24    190
Lauri     56   72   128    20   30   12    190
Manne     88   49   137    15   15   23    190
Sam       19   44    62    33   62   33    190
Sami      54   26    80    30   55   25    190
Tommi     37   69   105    10   50   25    190
Tuomas    47   46    93    30   47   20    190
Total    330  355   684   168  315  162   1329

Quality Metrics

- System tests last run: 29.11.2004, 22:20
  - Client revision 124
  - 9 tests run, 2 failures
  - All tests are related to message creation and delivery
- The failures:
  - 1.2.4 Special Characters: the client hangs at message display.
    - Worked on the second attempt. Stability problems?
  - 1.3 Content Required: it is possible to send messages without content.
    - Simple to fix.
- All unit tests pass for the server.

Quality statement:

Component   Coverage                       Status
Server      Small set of automated tests
Client      Complete set of system tests   Some stability issues

Software Size in Lines of Code

- Client: 2656 lines
- Server: 3744 lines

Risks

- We are monitoring 14 risks.
- Three risks have become obsolete:
  - Risk 7.2.1: Delayed availability of phones
  - Risk 7.2.8: Unclarity about the phone model
  - Risk 7.2.9: Fewer than four phones available
- One risk was realized:
  - Risk 7.2.1: Delayed availability of phones.
  - We managed with simulators.
- The risk that worries the project group:
  - Risk 7.2.5: The features on the phone do not function properly or as expected.
  - Some stability issues: the client works better on the 6670 than on the 6630?
- The risk that probably worries the customer:
  - Risk 7.1.1: Changing or misunderstood requirements.
  - Hopefully we can increase communication to mitigate this risk.

Risk matrix (rows: probability, columns: severity):

            Low   Medium   High   Total
High          0       1      1       2
Medium        1       5      0       6
Low           1       1      4       6
Total         2       7      5      14

PP risk matrix, for comparison:

            Low   Medium   High   Total
High          1       2      0       3
Medium        1       5      2       8
Low           1       1      3       5
Total         3       8      5      16

Results of the iteration

- Project Plan and Requirements Specification updated
- Technical Specification first draft
- QA Plan
- A functional first version of mGroup
- Build system up and running
- Testing started

Project Plan Update

- Verification criteria of the goals have been made more precise.
  - Not quite followed through yet.
- Stakeholder information updated to include more parties from the customer and members of other groups.
- Per-phase work plan updated.
- Work practices updated, especially bug tracking, peer testing, and the QA Plan.
- Iteration plans included properly for each iteration.
- Resources and budget information updated.
- Risk Log updated.

Used Work Practices

- Time Reporting
- Version Control
- Coding Conventions
- Risk Management
- Meeting Memos
- Planning Game (details)
- Tasks in Forums (details)
- Scrum-style meetings (details)
- Work burndown graph (details)
- Continuous integration (details)

Previously Planned Work Practices

- These practices were planned in the PP phase for I1. This is a follow-up:
- Start SEPAs:
  - Test-Driven Development: started, not at full speed
  - Pair Programming: started, not at full speed
  - Progress Monitoring and Control: started, OK
  - System-Level Test Automation (to be planned, but likely not executed yet): research started
- Improvements in Trapoli monitoring: OK
  - Work burndown graph
  - Daily Scrum-style meetings: OK
- Coding Conventions: OK
- Nightly builds and continuous integration: OK (save for the client tests)

Planning Game

- An XP-style planning game
- Used to select tasks for the current iteration

1. We start with a budget of X hours for the iteration.
2. Development writes down a selection of the most important use cases and other tasks on pieces of paper, based on the requirements specification.
   - We wrote the use cases on A4 papers and broke them down into tasks on post-it notes.
3. During the meeting, the customer selects the use cases and other tasks he/she wishes to have implemented.
4. Development estimates the time needed to complete each of the tasks the customer has selected.
   - The customer can change her selection based on this information.
5. The process continues until the X hours have been filled.

- The order in which the use cases are selected indicates their priority.

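The budget-filling loop in steps 1-5 can be sketched as follows. This is an illustrative model under assumptions, not the team's actual procedure: the task names and estimates are made up, and the "skip a task that no longer fits" rule stands in for the customer swapping in a smaller task.

```python
# Sketch of the planning-game selection loop: the customer's priority-ordered
# tasks are accepted until the iteration's hour budget X is filled.
def planning_game(candidate_tasks, budget_hours):
    """candidate_tasks: list of (name, estimated_hours) in the customer's
    priority order. Returns (selected task names, hours committed)."""
    selected, used = [], 0
    for name, estimate in candidate_tasks:
        if used + estimate > budget_hours:
            continue  # does not fit; the customer may pick a smaller task instead
        selected.append(name)
        used += estimate
    return selected, used

# Hypothetical use cases with made-up estimates, highest priority first.
tasks = [("share picture", 40), ("reply to item", 30),
         ("group management", 60), ("server logging", 20)]
selected, used = planning_game(tasks, budget_hours=100)
# "group management" (60 h) no longer fits, so the smaller task is taken instead
```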
Work Burndown Graph

[Figure: work burndown graph. Y-axis: work left in hours (0 to 400); x-axis: date (8.11. to 28.11.).]

Work Burndown Graph, continued

- Iteration time on the x-axis
- Estimated time left for this iteration on the y-axis
- Generated directly from the Trapoli report "Tasks, realization"
- Placed on the front page of the Wiki, so it is visible to everyone
- Pros:
  - Very easy to maintain
  - Communicates many variables: time into the iteration, completion status, working velocity, time-reporting activity
- Cons:
  - No automated tool for creating it.
  - Does not tell you which tasks are being overlooked.
- We made some mistakes:
  - At first, there were problems with time reporting (tasks estimated to zero hours left).
  - Later on, some Trapoli tasks were left dangling with a nonzero left-field even after they were completed.
- In the future, we will go through all started but not completed tasks in meetings
  - to make sure they are not dangling.

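The generation step described above can be sketched as follows. The snapshot structure, task names, and hour values are assumptions for illustration; the actual graph is derived from Trapoli's "Tasks, realization" report, whose format is not shown here.

```python
# Sketch: derive the burndown series from per-date task snapshots by
# summing the hours-left field over all tasks at each reporting date.
from datetime import date

def burndown(snapshots):
    """snapshots: {date: {task_name: hours_left}}.
    Returns a date-sorted list of (date, total_hours_left)."""
    return [(d, sum(tasks.values())) for d, tasks in sorted(snapshots.items())]

# Made-up example data for three reporting dates in the iteration.
snapshots = {
    date(2004, 11, 8):  {"protocol": 120, "ui": 150, "docs": 74},
    date(2004, 11, 18): {"protocol": 60,  "ui": 90,  "docs": 40},
    date(2004, 11, 28): {"protocol": 0,   "ui": 10,  "docs": 0},
}
series = burndown(snapshots)
# totals fall from 344 to 190 to 10 hours left
```

A task "left dangling" with a nonzero left-field, as described on this slide, would keep the curve from reaching zero even when the work is done.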
Work Burndown Graph, continued

[Figure: the same burndown graph, annotated with errors in time reporting. Annotations: "Problems in existing tasks visible already here." and "Re-scoping and cleaning up of tasks done too late!"]

Scrum-style meetings

- Everybody answers three questions:
  1. What have I completed since the last meeting?
  2. What will I do before the next meeting?
  3. What hinders me from doing my work?
- Pros:
  - A fast and straightforward format for task distribution
- Cons:
  - Due to problems with tasks in Trapoli, the second question is hard to answer.
    - Tasks are too fine-grained.
    - Lots of interdependent tasks.
  - On IRC, even though we try to keep the meeting short, it usually takes almost an hour.
    - The Scrum-style part is combined with a status report. The status report could be written in advance.

Tasks in Forums

- By having a forum thread for each task, we
  - facilitate discussion around tasks, and
  - have a place to post Trapoli information.
- Forums will be insufficient for tracking bugs; we have to use Bugzilla.
- We will move all tasks to Bugzilla as well:
  - We don't want to have tasks in three places (Bugzilla, Trapoli, Forums).
  - Bugzilla features discussions.
  - We can prioritize bugs together with tasks.
- After the next iteration planning, all iteration tasks are entered into Bugzilla and Trapoli
  - in prioritized order.
- In Scrum meetings, project members pick their next task/bug from Bugzilla
  - preferably in priority order;
  - if the task needs clarification, this is done.

Continuous Integration

- We installed a tool called CruiseControl.
- It monitors the code repository.
- When there are changes in source control, it builds the code and runs all the tests.
- If there are errors in the build or the tests, it emails the person who performed the check-in.
- Pros:
  - Developers get instant feedback if they break something.
  - Problems are easier to fix, since the changes are fresh in memory.
  - No broken code in source control.

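The polling cycle described above can be sketched as follows. This is only an illustrative model of the behaviour: CruiseControl itself is a Java tool configured via XML, and every function name in this sketch is hypothetical.

```python
# Sketch of one continuous-integration polling cycle: if the repository
# changed, check out, build and test, and notify the committer on failure.
def ci_cycle(repo_has_changes, checkout, build_and_test, notify):
    """All four collaborators are hypothetical callables standing in for
    the repository monitor, checkout step, build/test step, and emailer."""
    if not repo_has_changes():
        return "idle"
    author = checkout()       # update the working copy, note the committer
    ok = build_and_test()     # compile the code and run the full test suite
    if not ok:
        notify(author)        # instant feedback to whoever broke the build
        return "broken"
    return "ok"

# Stubbed collaborators for demonstration; pretend the tests failed.
result = ci_cycle(
    repo_has_changes=lambda: True,
    checkout=lambda: "some committer",
    build_and_test=lambda: False,
    notify=lambda author: None,
)
# result is "broken", and the committer has been notified
```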
Requirements Specification Update

- 30 new requirements proposed
  - These requirements were not new as such, but had been overlooked by the project group.
  - Some were related to more advanced functionality and had therefore been put in the "let's worry about this later" category by the project group.
- X Use Cases implemented
- TODO: Sami?

RS: Requirements Statistics

- We have identified a total of 29 functional requirements, 8 technical requirements, and 7 usability requirements.
- Every functional requirement relates to a specific use case.
- Almost all technical requirements relate to a specific functional requirement and use case.

Functional Requirements        Must   Should   Nice to have   Total
Sharing of media items           3       2          6           11
Viewing of media content         2       5          3           10
Creating media content                   1          2            3
Accessibility to media items             2                       2
Group management                         2                       2
Researcher support                       1                       1
Total                            5      13         11           29

RS: Requirements Statistics, continued

Other Requirements        Must   Should   Nice to have   Total
Technical Requirements      1       2          5            8
Usability Requirements              3          4            7

System Architecture

- Very straightforward:

[Figure: system architecture diagram.]

QA Approach

- We have system-level tests, which are run manually until we can find a way to automate them.
  - Run weekly, with a test report produced in the wiki.
- We have automated unit tests.
  - Run every time someone checks in code.
- Sam, more?

Implemented Use Cases

- Sami

Demo

- Tuomas