
Design Requirements
IACT424/924 Corporate Network Design and Implementation
Review
Written by Gene Awyzio, September 2002

Requirements Analysis

Network Requirements
User Requirements
Application Requirements
Host Requirements
Determining New Customer Requirements
Overview
Gathering and Listing Requirements
 Working With Users
 Service Metrics
 Characterising Behaviour
 Developing Performance Metrics
 Estimating Data Rates
 Comparing Application Characteristics

Gathering and Listing Requirements

Determine Initial Conditions

These are the basis for the start of any design project
Initial conditions include

Type of project
Initial design goals
Outside forces
Gathering and Listing Requirements

Common initial constraints

Funding limitations
Organizational constraints
Existing components
User inertia
Customised applications
Performance and functional limitations

Knowing initial conditions allows us to make informed design choices
Working With Users

This allows us to understand user behaviour patterns and environments

Applications
Usage patterns
Requirements
Service Metrics

Measurable network variables

Availability and recoverability (related in the sketch below)
 % uptime or downtime
 MTBF (mean time between failures)
 MTBSO (mean time between service outages)
 MTTR (mean time to repair)

Error and loss rates
 BER (bit error rate)
 CLR (cell loss ratio)
 CMR (cell misinsertion rate)
 Frame and packet loss
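As a rough illustration of how these variables relate, steady-state availability is often approximated as MTBF / (MTBF + MTTR). The sketch below uses hypothetical MTBF and MTTR figures.

```python
# Steady-state availability from MTBF and MTTR (both in hours).
# A = MTBF / (MTBF + MTTR); the example figures are hypothetical.

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Return availability as a fraction between 0 and 1."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# A component that fails every 2000 hours on average and takes 1 hour to repair:
print(f"{availability(2000.0, 1.0):.4%}")   # ~99.9500% uptime
```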
Service Metrics

Capacity metrics

Data rates (derived in the sketch below)
 Peak data rate (PDR)
 Sustained data rate (SDR)
 Minimum data rate

Data sizes
 Burst size
 Duration
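To make the capacity metrics concrete, the sketch below derives peak, sustained and minimum data rates from per-second byte counts; the sample values are hypothetical, and in practice the counts would come from interface counters or a packet capture.

```python
# Deriving capacity metrics from per-second byte counts of a traffic trace.
# The sample values below are hypothetical placeholders.

byte_counts = [12_000, 95_000, 480_000, 20_000, 310_000, 15_000]  # bytes per second

rates_bps = [count * 8 for count in byte_counts]    # convert to bits per second

peak_rate = max(rates_bps)                          # PDR: highest one-second rate
sustained_rate = sum(rates_bps) / len(rates_bps)    # SDR: average over the interval
minimum_rate = min(rates_bps)                       # lowest one-second rate

print(f"PDR {peak_rate / 1e6:.2f} Mb/s, "
      f"SDR {sustained_rate / 1e6:.2f} Mb/s, "
      f"min {minimum_rate / 1e6:.2f} Mb/s")
```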
Service Metrics

Delay metrics

End-to-end, round-trip and system delay
Latency
Delay variation (jitter)
Service Metrics

These metrics are configured and measured using network management platforms and tools

SNMP
CMIP
ping
pathchar

We also need to consider where in the network we want to measure each metric, and the potential measurement mechanisms (one such mechanism is sketched below)
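As one hedged example of such a mechanism, the sketch below collects round-trip time samples with the ping tool and derives a simple delay and jitter estimate. It assumes a Unix-style ping that prints "time=... ms" for each reply, and the target address is a placeholder.

```python
# Round-trip delay and jitter estimated from ping output.
# Assumes a Unix-style ping that reports "time=... ms" for each reply.

import re
import statistics
import subprocess

def measure_rtt_ms(host: str, count: int = 5) -> list[float]:
    """Return the individual round-trip times (ms) reported by ping."""
    result = subprocess.run(
        ["ping", "-c", str(count), host],
        capture_output=True, text=True, check=True,
    )
    return [float(t) for t in re.findall(r"time=([\d.]+) ms", result.stdout)]

samples = measure_rtt_ms("192.0.2.10")    # placeholder address (TEST-NET-1)
print(f"mean RTT: {statistics.mean(samples):.1f} ms")
print(f"jitter  : {statistics.stdev(samples):.1f} ms")   # crude delay-variation estimate
```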
Characterising Behaviour


Goal: to estimate network performance by gaining an understanding of how users and their applications function across the network

Usage patterns

Total number of users
Frequency of use (sessions/day)
Average session length (seconds)
Estimated simultaneous sessions (see the sketch below)
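A rough way to turn the first three figures into an estimate of simultaneous sessions is sketched below; the user count, frequency and session length are hypothetical, and usage is assumed to be spread evenly over an eight-hour working day.

```python
# Rough estimate of simultaneous sessions from usage-pattern figures.
# All input numbers are hypothetical placeholders.

total_users = 200           # total number of users
sessions_per_day = 4        # frequency of use (sessions/day per user)
session_length_s = 600      # average session length (seconds)
working_day_s = 8 * 3600    # usage assumed spread evenly over an 8-hour day

# Expected number of sessions in progress at any instant (uniform-usage assumption).
simultaneous = total_users * sessions_per_day * session_length_s / working_day_s
print(f"Estimated simultaneous sessions: {simultaneous:.0f}")   # about 17
```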
Characterising Behaviour
Figure: timeline of four application sessions showing when each is active, illustrating session frequency, duration and the number of simultaneous sessions at any point in time.
Characterising Behaviour

Application behaviour considerations

Data sizes the application will be processing
Frequency and time duration of data passing
Traffic flow characteristics
 Direction
 Flow pairs
 Multicasting
Developing Performance Metrics

Reliability/availability

Amount of allowed downtime (reproduced by the sketch after the table):

  Availability (% uptime)   Yearly    Monthly   Weekly    Daily
  95%                       438 h     36.5 h    8.4 h     1.2 h
  99.5%                     43.8 h    3.7 h     50.5 m    7.2 m
  99.95%                    4.38 h    21.9 m    5.05 m    43.2 s
  99.98%                    1.75 h    8.75 m    2.0 m     17.3 s
  99.99%                    0.88 h    4.4 m     1.0 m     8.7 s
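The table entries follow directly from (1 - availability) multiplied by the hours in each period; a minimal sketch of that arithmetic, using the weekly column as an example:

```python
# Allowed downtime per period for a given availability (% uptime).

HOURS_IN_PERIOD = {"yearly": 8760, "monthly": 730, "weekly": 168, "daily": 24}

def allowed_downtime_hours(availability_pct: float, period: str) -> float:
    """Hours of permitted downtime per period at the given % uptime."""
    return (1 - availability_pct / 100) * HOURS_IN_PERIOD[period]

for pct in (95.0, 99.5, 99.95, 99.98, 99.99):
    minutes = allowed_downtime_hours(pct, "weekly") * 60
    print(f"{pct:>6}% uptime -> {minutes:6.1f} minutes of downtime per week")
```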
Developing Performance Metrics

Most systems operate at 99.95%

About 5 minutes of downtime per week
This covers transients (a few seconds) such as rerouting or congestion, and roughly one minor interruption per month
Developing Performance Metrics

Effort and costs to support higher availability can skyrocket

Some applications cannot tolerate any downtime during a session
 Remote control of vehicles
 Times when high availability is required are known and planned for in advance

Many system outages are brief
 Applications stall for a few seconds
 These brief outages must still be accounted for in overall availability
Developing Performance Metrics

Two guidelines for availability measurements

Availability is measured end-to-end
 A loss of availability in any part of the system is counted in overall availability
Availability may be measured selectively between particular users, hosts or networks
Developing Performance Metrics

General reference thresholds

Figure: availability scale from 99.0% to 99.98%, marking general reference threshold regions for low performance, high performance and testbed environments.
Developing Performance Metrics

Thresholds for delay

Interaction Delay (INTD)
 How long the user is willing to wait for a response
 Aim for 10 to 30 seconds

Human Response Time (HRT)
 Time boundary at which users begin to perceive delay
 Approximately 100 ms
 When INTD < HRT, users do not perceive delay

Network Propagation Delay
 Depends on distance and technology (estimated in the sketch below)
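As a hedged illustration of the propagation component, the sketch below estimates one-way propagation delay from distance, assuming a signal speed of roughly two-thirds the speed of light (about 5 microseconds per kilometre), and compares it with the HRT threshold; the distances are arbitrary examples.

```python
# One-way propagation delay versus the human response time (HRT) threshold.
# Assumes signal speed of about 2/3 the speed of light (~5 us per km).

SPEED_OF_LIGHT_KM_S = 300_000
PROPAGATION_SPEED_KM_S = SPEED_OF_LIGHT_KM_S * 2 / 3
HRT_S = 0.1                                    # approximately 100 ms

def propagation_delay_s(distance_km: float) -> float:
    """Approximate one-way propagation delay in seconds."""
    return distance_km / PROPAGATION_SPEED_KM_S

for km in (100, 4_000, 15_000):                # metro, continental, intercontinental
    delay = propagation_delay_s(km)
    note = "users may perceive delay" if delay >= HRT_S else "below HRT"
    print(f"{km:>6} km -> {delay * 1000:6.1f} ms one-way ({note})")
```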
Developing Performance Metrics

Thresholds for delay

Figure: delay thresholds on a logarithmic scale from 0.01 to 100 seconds, showing where network propagation delay, human response time (about 0.1 s) and interaction delay (10 to 30 s) fall.
Estimating Data Rates

Based upon

How much you know about the transmission characteristics of the application
Accuracy of the estimation

Types of estimations

Peak data rate
Minimum data rate
Sustained data rate
Estimating Data Rates

Consideration must be given to applications with

Large capacity requirements
Specific capacity requirements
Task completion times (TCT)
 May be based upon user expectations or be set by the application (see the sketch below)
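Where a TCT is known, a minimum sustained rate can be backed out of it; a minimal sketch, using a hypothetical data size and completion time:

```python
# Capacity needed to move a given amount of data within a task completion time.
# The data size and TCT figures below are hypothetical.

def required_rate_mbps(data_size_mbytes: float, tct_seconds: float) -> float:
    """Minimum sustained data rate (Mb/s) to finish within the TCT."""
    return data_size_mbytes * 8 / tct_seconds

# e.g. a 500 MB transfer that users expect to complete within 60 seconds
print(f"{required_rate_mbps(500, 60):.1f} Mb/s")   # about 66.7 Mb/s
```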
Comparing Application Characteristics

If application characteristics can be grouped, we can compare them to determine thresholds

High Performance
Low Performance
Comparing Application Characteristics

The threshold settings may be arbitrary

Particularly if applications form a continuous range of delay