CMPE 252A
SET 4: Medium Access Control Protocols
Persistence after Carrier Sensing

Problem: Non-persistent carrier sensing forces stations to back off too much when the channel is lightly loaded.
Approach: Have stations persist with their transmissions immediately after detecting that the channel is busy, and deal with collisions after they occur.
Examples: CSMA, CSMA/CD, CSMA/CA; Ethernet adopts 1-persistent CSMA/CD.
1-Persistent Carrier Sensing

Definition: Any node that has a local DATA packet to send persists in sensing the channel until no carrier is detected. At that time, with probability 1 the node transmits the packet.

[Figure: timing diagram — nodes persist during an ongoing transmission and transmit as soon as it ends, producing a collision]
Markov Chains

A random process is a Markov process if the future of the process given the present is independent of the past.
For arbitrary times t_1 < t_2 < ... < t_k < t_{k+1} we have:
P{X(t_{k+1}) = x_{k+1} | X(t_k) = x_k, ..., X(t_1) = x_1} = P{X(t_{k+1}) = x_{k+1} | X(t_k) = x_k}
The above is called the "Markov property": the pmf or pdf of a Markov process conditioned on several instants of time always reduces to a pmf or pdf conditioned only on the most recent time instant.
Markov Chains

Example: The Poisson process is a continuous-time Markov process, because:
P{N(t_{k+1}) = j | N(t_k) = i, N(t_{k−1}) = x_{k−1}, ..., N(t_1) = x_1} = P{N(t_{k+1}) = j | N(t_k) = i}
Markov Chains

An integer-valued Markov process is a Markov chain.
The value of X(t) at time t is called the state of the process at time t.
The probability P_ij = P{X(t_{k+1}) = j | X(t_k) = i} is called a (state) transition probability.
Transition probabilities must satisfy:
P_ij ≥ 0;   Σ_{j≥0} P_ij = 1
Markov Chains

The probability that X(t) assumes a given value at time t is called a state probability.
In all our cases, the probability of state j approaches a constant independent of time and of the initial state probabilities, i.e.,
P{X_n = j} = p_j(n) → π_j for all j
The limits π_j are called limiting probabilities, steady-state probabilities, or stationary probabilities.
Steady-state probabilities can be interpreted as the proportion of time the system visits a given state.
It must be true that:
π_j ≥ 0;   Σ_{j≥0} π_j = 1
Markov Chains

In practice:
We can obtain the transition probabilities.
From them we obtain the steady-state probabilities by solving a system of simultaneous equations, replacing one balance equation with the normalization condition that the steady-state probabilities sum to 1.
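As a concrete example of this procedure, the steady-state probabilities of a small chain can be solved in a few lines. The 3-state transition matrix below is hypothetical, chosen only for illustration:

```python
import numpy as np

# Steady-state probabilities of a 3-state chain: solve pi = pi P
# together with sum(pi) = 1. The transition matrix is hypothetical
# (rows sum to 1).
P = np.array([
    [0.0, 1.0, 0.0],
    [0.5, 0.3, 0.2],
    [0.4, 0.4, 0.2],
])

A = P.T - np.eye(3)   # (P^T - I) pi = 0
A[-1, :] = 1.0        # replace one equation with the normalization row
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)
print(pi)             # steady-state probabilities, summing to 1
```

One balance equation is redundant (the columns of P^T − I sum to zero), which is why it can be replaced by the normalization constraint without losing information.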
1-persistent CSMA

All the assumptions made for the analysis of non-persistent protocols are valid here as well.
Divide time into transmission periods (TPs).
The type of the TP that follows another TP depends on the number of persistent users waiting for the current TP to end!

[Figure: timeline of consecutive transmission periods (TP 0, TP 1, TP 2); nodes detect carrier during a TP and transmit when it ends, determining the type of the next TP]
Throughput Analysis

The state of the system at the beginning of a transmission period is the type of that transmission period.
The state of the system at the beginning of a transmission period, together with the scheduling points of packets during that period, determines the state of the system at the beginning of the next transmission period.
We have three states:
Transmission period of type 0 (TP 0): No packets are transmitted at the beginning of the TP.
TP 1: Only one packet is transmitted at the beginning of the TP, and it succeeds if there are no arrivals within τ sec after that.
TP 2: Two or more packets are transmitted at the beginning of the TP.
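As a sanity check on this model, the TP dynamics above can be simulated directly. The sketch below assumes an infinite population generating aggregate Poisson arrivals; λ, P, and τ are illustrative values, not parameters from the slides:

```python
import math
import random

# Event-driven sketch of unslotted 1-persistent CSMA following the
# TP 0 / TP 1 / TP 2 structure. lam = aggregate Poisson arrival rate,
# P = packet transmission time, tau = propagation delay (illustrative).
lam, P, tau = 1.0, 1.0, 0.05
T_END = 100_000.0

random.seed(1)
t = 0.0                       # current time (start of a TP)
na = random.expovariate(lam)  # absolute time of the next arrival
backlog = 0                   # packets persisting for the next TP
success_time = 0.0            # channel time carrying successful packets

while t < T_END:
    if backlog == 0:
        t = na                # idle period (TP 0) ends at the next arrival
        na = t + random.expovariate(lam)
        k = 1
    else:
        k, backlog = backlog, 0
    # k packets start transmitting at t; arrivals within the
    # vulnerability period tau do not sense the carrier yet and join.
    y = 0.0
    while na < t + tau:
        k += 1
        y = na - t
        na += random.expovariate(lam)
    end = t + y + P + tau     # TP ends tau after the last transmission
    while na <= end:          # later arrivals sense carrier and persist
        backlog += 1
        na += random.expovariate(lam)
    if k == 1:                # lone packet, clean vulnerability period
        success_time += P
    t = end

print(success_time / t)       # simulated throughput S
```

For these parameters the printed estimate should land close to the closed-form throughput obtained at the end of this analysis (about 0.49).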
Throughput Analysis

The states correspond to a three-state Markov chain embedded at the beginnings of the transmission periods.
The throughput of the system is given by the proportion of time spent in state 1 with a success divided by the average time spent in all three states:

S = π_1 P_s P / Σ_{j=0}^{2} π_j T̄_j   (a)

where π_j is the steady-state probability of state j, T̄_j is the average length of a TP of type j, P is the packet transmission time, and P_s is the probability that the packet transmitted in a TP 1 succeeds.

[Figure: state diagram of the embedded chain — states 0, 1, 2 with transition probabilities P_00, P_01, P_10, P_11, P_12, P_20, P_21, P_22]
Throughput Analysis

The average length of a TP 0 is the same as in non-persistent CSMA: T̄_0 = 1/λ.
Transmission periods of types 1 and 2 have the same average length, which is determined by the last packet arriving in the TP, just as in np-CSMA:
T̄_1 = T̄_2 = Ȳ + P + τ
The value of Ȳ is: Ȳ = τ − (1/λ)(1 − e^{−λτ})
From the state diagram we have:
π_0 P_01 = π_1 P_10 + π_2 P_20
π_1 P_12 = π_2 (P_21 + P_20)
π_0 + π_1 + π_2 = 1   (b)
P_10 + P_11 + P_12 = 1
Throughput Analysis

Once in a TP 0, the system moves to a TP 1 with probability 1, because we consider the entire idle period as a single TP 0 (and because arrivals are Poisson!).
If no packets arrive during a TP 1 or TP 2, the next transmission period is a TP 0.
If exactly one packet arrives during a TP 1 or TP 2, later than τ seconds after it starts, the next TP is a TP 1.
If two or more packets arrive later than τ seconds after the start of a TP 1 or TP 2, the next period is a TP 2.
This implies:
P_01 = 1;   P_1j = P_2j for j = 0, 1, 2   (c)
Throughput Analysis

π_0 P_01 = π_1 P_10 + π_2 P_20
π_1 P_12 = π_2 (P_21 + P_20)
π_0 + π_1 + π_2 = 1   (b)
P_10 + P_11 + P_12 = 1
P_01 = 1;   P_1j = P_2j for j = 0, 1, 2   (c)

Using Eqs. (b) and (c) we can obtain:
π_0 = P_10 / (1 + P_10)
π_1 = (P_10 + P_11) / (1 + P_10)
π_2 = (1 − (P_10 + P_11)) / (1 + P_10)
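A quick numeric check that these closed forms satisfy the balance equations (b) together with (c). The expressions for P_10 and P_11 are the ones derived on the following slides; the parameter values are illustrative:

```python
import math

# Illustrative parameters (not from the slides): arrival rate lam,
# packet time P, propagation delay tau.
lam, P, tau = 0.5, 1.0, 0.1

# P10 and P11 as derived in the following slides; P12 by normalization.
P10 = (1 + lam * tau) * math.exp(-lam * (P + tau))
P11 = lam * math.exp(-lam * (P + tau)) * (P + lam * tau * (P + tau / 2))
P12 = 1 - P10 - P11

pi0 = P10 / (1 + P10)
pi1 = (P10 + P11) / (1 + P10)
pi2 = (1 - (P10 + P11)) / (1 + P10)

# Balance equations (b), using P01 = 1 and P2j = P1j from (c):
assert abs(pi0 * 1 - (pi1 * P10 + pi2 * P10)) < 1e-12
assert abs(pi1 * P12 - pi2 * (P11 + P10)) < 1e-12
assert abs(pi0 + pi1 + pi2 - 1) < 1e-12
print(pi0, pi1, pi2)
```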
Throughput Analysis

[Figure: a TP 1 — the FIRST packet starts the TP, the LAST colliding packet arrives at Y = y within the vulnerability period τ, and the TP ends τ sec after that packet's transmission of length P; 0 arrivals in the remainder of the TP ⇒ TP 0 next, 1 arrival ⇒ TP 1 next]

Arrivals after the vulnerability period of the first packet in a TP determine the type of the next TP.
Let Y = y; then: P{transition from state 1 to 0 | Y = y} = e^{−λ(P + y)}
(no packets may arrive during the remaining y + P seconds of the TP after the vulnerability period ends).
We know the CDF of Y, and taking the derivative:
f_Y(y) = e^{−λτ} δ(y) + λ e^{−λ(τ − y)},   0 ≤ y ≤ τ
Throughput Analysis

Unconditioning we obtain:
P_10 = ∫_0^τ e^{−λ(P + y)} f_Y(y) dy = ∫_0^τ e^{−λ(P + y)} [e^{−λτ} δ(y) + λ e^{−λ(τ − y)}] dy = (1 + λτ) e^{−λ(P + τ)}
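The unconditioning step can be spot-checked by Monte Carlo: sample Y from its mixed distribution (an atom of mass e^{−λτ} at y = 0 plus the continuous density) and average the conditional probability e^{−λ(P+Y)}. Parameter values are illustrative:

```python
import math
import random

lam, P, tau = 1.0, 1.0, 0.2   # illustrative parameters
random.seed(1)

def sample_Y():
    """Last arrival in [0, tau]: Y = 0 w.p. e^{-lam*tau}; otherwise
    the CDF is F(y) = e^{-lam*(tau - y)}, inverted below."""
    u = random.random()
    if u <= math.exp(-lam * tau):
        return 0.0
    return tau + math.log(u) / lam

n = 200_000
est = sum(math.exp(-lam * (P + sample_Y())) for _ in range(n)) / n
exact = (1 + lam * tau) * math.exp(-lam * (P + tau))
print(est, exact)             # the two values should agree closely
```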
Throughput Analysis

Let Y = y; then:
P{transition from state 1 to 1 | Y = y} = λ(P + y) e^{−λ(P + y)}
Unconditioning we obtain:
P_11 = ∫_0^τ λ(P + y) e^{−λ(P + y)} f_Y(y) dy = ∫_0^τ λ(P + y) e^{−λ(P + y)} [e^{−λτ} δ(y) + λ e^{−λ(τ − y)}] dy
= λ e^{−λ(P + τ)} [P + λτ (P + τ/2)]
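This integral can also be verified numerically, adding the point-mass contribution at y = 0 to a midpoint-rule integration of the continuous part (illustrative parameters):

```python
import math

lam, P, tau = 1.0, 1.0, 0.2   # illustrative parameters

# Point mass of f_Y at y = 0 contributes lam*P*e^{-lam*P} * e^{-lam*tau};
# the continuous part is integrated with a midpoint rule.
point = lam * P * math.exp(-lam * P) * math.exp(-lam * tau)
n = 100_000
h = tau / n
cont = h * sum(
    lam * (P + y) * math.exp(-lam * (P + y)) * lam * math.exp(-lam * (tau - y))
    for y in (h * (k + 0.5) for k in range(n))
)
closed = lam * math.exp(-lam * (P + tau)) * (P + lam * tau * (P + tau / 2))
print(point + cont, closed)   # numeric and closed form should match
```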
Throughput Analysis

Substituting in Eq. (a) we obtain:

S = λP e^{−λ(P + 2τ)} [1 + λP + λτ (1 + λP + λτ/2)] / [λ(P + 2τ) − (1 − e^{−λτ}) + (1 + λτ) e^{−λ(P + τ)}]
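As a consistency check, the closed form can be evaluated against Eq. (a) built up from the individual pieces (π_j from the embedded chain, P_s = e^{−λτ}, and the mean TP lengths); the two routes should agree for any parameters. P is normalized to 1 and τ is an illustrative value:

```python
import math

P, tau = 1.0, 0.05            # illustrative: P normalized to 1

def throughput_closed(lam):
    """Final closed-form expression for S."""
    num = lam * P * math.exp(-lam * (P + 2 * tau)) * (
        1 + lam * P + lam * tau * (1 + lam * P + lam * tau / 2))
    den = (lam * (P + 2 * tau) - (1 - math.exp(-lam * tau))
           + (1 + lam * tau) * math.exp(-lam * (P + tau)))
    return num / den

def throughput_chain(lam):
    """Eq. (a): S = pi_1 * Ps * P / sum_j pi_j * Tbar_j."""
    P10 = (1 + lam * tau) * math.exp(-lam * (P + tau))
    P11 = lam * math.exp(-lam * (P + tau)) * (P + lam * tau * (P + tau / 2))
    pi0 = P10 / (1 + P10)
    pi1 = (P10 + P11) / (1 + P10)
    pi2 = 1 - pi0 - pi1
    Ps = math.exp(-lam * tau)             # success probability in a TP 1
    Ybar = tau - (1 - math.exp(-lam * tau)) / lam
    T0, T12 = 1 / lam, Ybar + P + tau     # mean TP lengths
    return pi1 * Ps * P / (pi0 * T0 + (pi1 + pi2) * T12)

for lam in (0.1, 0.5, 1.0, 2.0, 5.0):
    a, b = throughput_closed(lam), throughput_chain(lam)
    assert abs(a - b) < 1e-12 and 0 < a < 1
    print(lam, a)
```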