Specifying and executing behavioral requirements: the play-in/play-out approach

Specifying and executing
behavioral requirements
The play-in/play-out approach
David Harel, Rami Marelly
Weizmann Institute of Science, Israel
An interpretation by
Eric Bodden
Outline
- Problem statement: how to specify requirements?
  - easily, and
  - consistently
- Background (LSCs)
- Play-in
- Play-out
- Prototype tool, challenges
- Conclusion
Requirements specification
- Usually starts from an informal use-case specification by domain experts
- Then a manual mapping to a formal spec:
  - error prone
  - tedious
- No direct link between the original spec and the actual implementation!
- Can we do any better?
Play-in / play-out
- Has similarities to machine learning and…
- Actually, also to the way one teaches people!
Play-in
Play-out
Philosophy: “I’ll show you how it works and then you try yourself!”
Workflow
1. Prototype a GUI or object model of your app (a dummy only; a sketch follows this list).
2. Play in behavior into this prototype. The “play engine” connects and automatically generates Live Sequence Charts.
3. Replay (play out) the behavior, modify and generalize it; verify.
4. Give the debugged spec to the developers.
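To make step 1 concrete, here is a minimal sketch in Python of a “dummy” calculator prototype: objects and events exist, but no behavior is wired in. All class and event names here are invented for illustration; the actual tool works against any COM-enabled GUI.

# Step 1 sketch: a behavior-free ("dummy") object model of a calculator.
# Names are illustrative, not part of the Play Engine's API.

class Button:
    def __init__(self, label):
        self.label = label

    def click(self):
        # Deliberately no behavior: during play-in, the Play Engine
        # observes this event and records it into an LSC.
        print(f"event: Click({self.label})")

class Display:
    def __init__(self):
        self.value = ""

    def show(self, text):
        # During play-in, the user sets the display by hand to demonstrate
        # the desired reaction; the engine records that too.
        self.value = text
        print(f"event: Display.show({text!r})")

# The prototype: buttons and a display, with no wiring between them.
buttons = {label: Button(label) for label in "0123456789+-*/="}
display = Display()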
Live Sequence Charts (LSCs)
- Similar to Message Sequence Charts and Sequence Diagrams, but more powerful…
- Universal charts: specify behavior w.r.t. a given “prechart” (preamble).
- Existential charts: specify test cases that must hold in the universal LSCs. (Both kinds are sketched as data below.)
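A rough sketch of the two chart kinds as data, assuming a chart is just an ordered list of (sender, receiver, message) events; real LSCs also have hot/cold temperatures, conditions, and symbolic variables. The quick-dialing events below are invented for the example on the next slide.

from dataclasses import dataclass, field
from enum import Enum

class Mode(Enum):
    UNIVERSAL = "universal"      # must hold in every run
    EXISTENTIAL = "existential"  # a test case: must hold in some run

@dataclass
class LSC:
    mode: Mode
    prechart: list = field(default_factory=list)  # "whenever this happens..."
    body: list = field(default_factory=list)      # "...this must follow"

# Universal chart for quick dialing: whenever the user presses a
# quick-dial key, the phone must look up the stored number and call it.
quick_dial = LSC(
    Mode.UNIVERSAL,
    prechart=[("User", "QuickDialKey", "click")],
    body=[("Chip", "Memory", "lookup(key)"),
          ("Chip", "Env", "call(number)")],
)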
An example – quick dialing
The play engine
- A backend that communicates with the prototype via Microsoft’s Component Object Model (COM); a connection sketch follows this list.
- Generates LSCs in play-in mode.
- The charts can then be generalized / modified…
- …and are verified in play-out mode.
- Also: recording, modification, and replay of traces.
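On Windows, one could attach to such a COM-exposed prototype from Python roughly as follows. The ProgID and the event signature are invented for illustration; the Play Engine’s actual COM contract is not shown in the talk.

import win32com.client  # from the pywin32 package

class PrototypeEvents:
    """Event sink: receives events fired by the GUI prototype."""
    def OnUserAction(self, obj_name, action, value):
        # Hypothetical event. In play-in mode, the engine would record
        # this into the chart under construction instead of reacting.
        print(f"record: {obj_name}.{action}({value})")

# "PlayDemo.Calculator" is a made-up ProgID for the dummy prototype.
prototype = win32com.client.DispatchWithEvents(
    "PlayDemo.Calculator", PrototypeEvents)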
Play-in – an example
[LSC sketch: the pre-chart records the user clicking n1, “+”, n2, and “=”; the body shows the display presenting (n1+n2).]
Specifying the pre-chart
… and the body
Final result
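Written out as plain data, in the same invented notation as above, the “final result” chart would look roughly like this; n1 and n2 are symbolic, i.e. the chart has been generalized from the concrete digits clicked during play-in.

# The addition chart that play-in would produce (a sketch; the real
# engine stores this as a graphical LSC with symbolic variables).
addition_chart = {
    "mode": "universal",
    "prechart": [
        ("User", "Button[n1]", "click"),  # n1: any digit
        ("User", "Button[+]",  "click"),
        ("User", "Button[n2]", "click"),  # n2: any digit
        ("User", "Button[=]",  "click"),
    ],
    "body": [
        ("Calculator", "Display", "show(n1 + n2)"),
    ],
}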
Another one – show number
A test case (existential LSC)
Play-out
“Now that I have shown you how it works, try on your own!”
(And I will watch you meanwhile and record what goes wrong.)
Play-out by example
[Demo: the user plays out an addition on the calculator prototype. Panels: “Situation after hit on plus” and “Situation at the end”.]
… watch the test case!
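A toy version of the play-out loop, under the same data conventions as before: the engine watches user events, advances each chart whose prechart is matching, and executes the chart’s body once the prechart completes. Symbolic matching, simultaneous events, and existential-chart checking are all omitted here.

def play_out(charts, events, execute):
    """Drive charts (dicts with 'prechart'/'body') against a user event stream."""
    progress = {id(c): 0 for c in charts}
    for event in events:
        for chart in charts:
            pre, i = chart["prechart"], progress[id(chart)]
            if i < len(pre) and event == pre[i]:
                progress[id(chart)] = i + 1
                if progress[id(chart)] == len(pre):
                    for body_event in chart["body"]:
                        execute(body_event)  # the engine drives the GUI
                    progress[id(chart)] = 0  # rearm for the next occurrence

# With a fully concrete chart (no symbolic variables), feeding its
# prechart events in order would trigger the body, e.g.:
#   play_out([chart], [("User", "Button[3]", "click"), ...], print)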
Replaying recorded violations
Playing in internal objects
… and playing them out.
How does it work?
- The “execution manager” schedules “live copies” of universal LSCs -> behavior
- At the same time, it evaluates existential LSCs -> testing
- The “run manager” records/replays traces
Life cycle of a “live copy”
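The life cycle can be read as a small state machine per live copy. The states and transitions below are a reconstruction from the talk, not the engine’s exact semantics: a copy is spawned when a prechart starts matching, and it either completes its body, is discarded because the prechart did not hold, or is violated.

from enum import Enum, auto

class State(Enum):
    PRECHART = auto()   # still matching the prechart
    ACTIVE = auto()     # prechart complete: body events are now mandatory
    COMPLETED = auto()  # body finished; the copy is discarded
    DISCARDED = auto()  # prechart failed to match; nothing was required
    VIOLATED = auto()   # a mandatory event was contradicted; report it

class LiveCopy:
    def __init__(self, chart):
        self.chart, self.pos, self.state = chart, 0, State.PRECHART
        self.events = chart["prechart"] + chart["body"]

    def step(self, event):
        if self.state not in (State.PRECHART, State.ACTIVE):
            return  # copy already finished one way or another
        if event == self.events[self.pos]:
            self.pos += 1
            if self.pos >= len(self.chart["prechart"]):
                self.state = State.ACTIVE
            if self.pos == len(self.events):
                self.state = State.COMPLETED
        elif self.state is State.ACTIVE:
            self.state = State.VIOLATED   # mandatory part was broken
        else:
            self.state = State.DISCARDED  # prechart simply did not occur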
Challenges and problems
- Need for symbolic evaluation
- Nondeterminism (?)
  - the SELECT function (sketched after this list)
- Scalability (?)
- Performance (?)
- Deadlocks/livelocks (?)
- Support for multiple platforms (?)
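For instance, when several chart events are enabled at the same moment, some SELECT policy has to commit to one of them; the simplest policy, and the source of the nondeterminism question, would be a random pick:

import random

def select(enabled_events):
    # Placeholder SELECT: when several LSC events are simultaneously
    # enabled, play-out must choose one. A random pick makes runs
    # nondeterministic; smarter strategies (e.g. look-ahead) are open.
    return random.choice(enabled_events)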
Conclusion
- An easy way to derive a “debugged” requirements specification.
- An immediate link between use cases, specs, and implementation.
- Performance and scalability issues were not discussed.