Queue Overflow
- Queues will sometimes overflow, no matter what
- Can reduce the chances by allocating more queue memory
- But longer queues cause more variation in delay (jitter)
- So we often want only short queues: enough to cope with common demand bursts, not enough to pretend to cope with overload

Dropping Packets
- The queue fills and some packet needs to be dropped. Which one?

Tail Drop
- Drop the packet which didn't fit
- Simple, efficient, easy to understand

Tail Drop (2)
- Packets in the queue are from different sources, and most come from one source
- The arriving packet is from the other source
- Tail drop is not fair

Random Drop
- Pick a random packet from the queue and drop it
- Packets from an aggressive source have a higher probability of being dropped
- More fair

Random Drop (2)
- Better result
- More costly

Head Drop
- If we assume packets are being continually transmitted, the head of the queue is essentially random (hard to predict what packet will be there)
- So picking the head of the queue is almost picking a random packet

Head Drop (2)
- Seems as good as random drop
- Much cheaper to implement (but still not as cheap as tail drop)

Head Drop - Ugly Case
- A packet burst can flush the queue of other packets
- Packet bursts are "normal" sometimes
- A sender doesn't know packets are being dropped until one of its own packets is dropped
- Head drop isn't often used

Early Drop
- Queue is almost full, with many packets from one source
- Drop a packet before the queue fills
- Leaves room for other packets

Packet Marking
- Queue is almost full
- Don't drop a packet; instead mark a packet as having experienced congestion (DEC bit)

Multiple Queues
- A new packet arrives for the lower queue - what to do?

Multiple Queues (2)
- Can drop the packet when space on its queue is exhausted: fails to fully utilise queue space
- Can take space from another queue: might cause the other queue to run out of space with few packets
- Can mark the packet: queue it now, drop it later if space is required

Multiple Queues (3)
- Use random drop: a queue with many packets is more likely to have a packet dropped
- Use fair drop: rank packets (WFQ) and drop the packet which has lowest priority - the packet that would be transmitted last
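To make the tail/head/random drop trade-offs above concrete, here is a minimal sketch of a bounded queue with a selectable drop policy. It is illustrative only; the class, method names and the idea of returning the dropped packet are mine, not from the slides.

```python
import random

class BoundedQueue:
    """Illustrative bounded packet queue with a selectable drop policy
    (a sketch of the tail/head/random drop ideas above, not router code)."""

    def __init__(self, capacity, policy="tail"):
        self.capacity = capacity
        self.policy = policy          # "tail", "head" or "random"
        self.packets = []

    def enqueue(self, packet):
        """Queue a packet; return whatever got dropped, or None."""
        if len(self.packets) < self.capacity:
            self.packets.append(packet)
            return None
        if self.policy == "tail":
            return packet                           # drop the arrival itself
        if self.policy == "head":
            dropped = self.packets.pop(0)           # head is "almost random"
        else:
            dropped = self.packets.pop(random.randrange(len(self.packets)))
        self.packets.append(packet)
        return dropped

    def dequeue(self):
        """Transmit (remove) the packet at the head of the queue."""
        return self.packets.pop(0) if self.packets else None
```

Tail drop is cheapest because nothing already queued is touched; random drop pays for extra bookkeeping but is more likely to hit whichever source currently dominates the queue.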
Milk or Wine?
- Some packets are like milk: you want to keep the newest ones and discard anything old
- Some packets are like wine: the older stuff is generally better, so discard new packets
- Data transfer (most TCP) is wine: new packets are likely to be retransmits of older ones; keep the older ones so RTT is measured more accurately
- Real-time packets are more like milk: old packets are often delayed too much to be useful; want to send new packets as quickly as possible and avoid delay

Original Networks
- Direct wire connections, made by plugging connectors together
- Delays: transmission + speed of light
- No queueing, no quantisation, no jitter

Service Quality
- Once connected, very predictable (note: transmission errors can still occur)
- Getting connected is difficult and slow
- Circuit is idle during gaps in transmission, but remains reserved

Improvements
- Electric switching: no more plugs, otherwise little difference
- Multiplexing: minor increase in delay, better use of bandwidth, no real difference to QoS

Packet Switching
- Allows more of the bandwidth to be used
- Makes delays unpredictable and introduces jitter
- Phone company networks: emulate phone calls, connection-oriented networking, resource reservation (perhaps)

Establishing Connections
- [Figure: connection setup from A to B; each link segment is assigned its own circuit number, and data packets carry the local circuit number for that segment]
- Connection identifiers are local and vary on each segment
- The path remains constant for the life of the connection

Connections and QoS
- A connection request can contain QoS information, e.g. bandwidth required == 1 Mbps, maximum delay == 120 ms
- Parameters can direct the choice of path; if there is no suitable path, the connection is refused
- Requirements are kept in the switching nodes, associated with circuits, so packets can be treated properly

Reserving QoS
- Bandwidth is reserved as the connection is made, if available - if not, the connection is refused
- Delay bounds are set

Calculating QoS Reservations
- Bandwidth: total link bandwidth minus reservations already made; if what remains is big enough, OK

Calculating QoS (2)
- Delay: arrange a queue delay limit; include transmission and speed-of-light delay; compute the total delay through the node (note: this depends upon packet size)
- If the remaining delay available is big enough, OK

Controlling User Demand
- Host sending traffic into the network
- An agreement exists: the network will provide some guaranteed service, and the host will send no more than X bytes/second
- The entry point controls this: the network must monitor traffic at the entry point

Classification
- Packets are classified into streams (flows); a flow has QoS characteristics
- How classification is done is considered later; for now: all packets from a source IP address, or from a source network number
- Or - assuming a connection-based network - classification is known from the connection identifier

Policing
- Enforcing limits on a stream
- Classify packets, measure the data rate, compare the rate to the stream limits
- Drop excess packets, or mark them to be dropped later

Policing (2)
- The actual policy will depend upon the QoS agreement
- A CBR (Constant Bit Rate) stream might drop packets
- An ABR (Available Bit Rate) stream might mark packets above the MCR (Minimum Cell Rate) and drop packets above the PCR (Peak Cell Rate)
- VBR (Variable Bit Rate) methods need more complex measurements

Shaping
- More complex method than policing, gives better results
- Instead of simply dropping packets that exceed limits, attempt to alter the characteristics of the flow to make it fit
- Leaky bucket

Leaky Bucket
- Smooth fluctuations in the input rate; get the output rate to match the desired rate
- Implementable using a token bucket, a queue and a timer

CQS
- Classification, Queueing, Scheduling
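A token-bucket policer of the kind the policing and leaky-bucket slides describe can be sketched in a few lines. This is illustrative only (the names and the 1 Mbps / 8 KB figures are mine); a real policer would also implement the marking option.

```python
import time

class TokenBucket:
    """Illustrative token-bucket policer: tokens accumulate at `rate`
    bytes/second up to a depth of `burst` bytes; a packet conforms if
    enough tokens are available.  A sketch only, not a production policer."""

    def __init__(self, rate, burst):
        self.rate = rate                    # sustained rate r, bytes/second
        self.burst = burst                  # bucket depth b, bytes
        self.tokens = burst
        self.last = time.monotonic()

    def conforms(self, packet_len):
        now = time.monotonic()
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_len <= self.tokens:
            self.tokens -= packet_len
            return True                     # within profile: forward normally
        return False                        # excess: drop, or mark to drop later

# Illustrative use: police a flow to 1 Mbps with an 8 KB burst allowance.
policer = TokenBucket(rate=125_000, burst=8_192)
print("forward" if policer.conforms(1500) else "drop or mark")
```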
More Realistic Case
- Several networks between source and destination

More Realistic Case (2)
- For the path used by packets there are several entry points, one at each network

Interior Nodes
- Inside a network (cloud), routers can assume that traffic has already been policed or shaped - no need to repeat that
- But QoS properties must still be implemented: must not delay packets that require low delay (etc.)

Interior Nodes (2)
- Packet classification must be available: passed from the entry point, or reclassified
- Packets are then queued according to classification, to meet the required delay characteristics
- Packets requiring the lowest delay go to the highest priority queue

Queueing Packets
- Fewer queues than possible delays
- Need to merge streams together - streams with similar requirements
- Use the queueing methods discussed earlier

QoS While Forwarding (connection-based protocols)
- Receive packet, identify which connection it belongs to
- Map that to the outgoing connection ID and QoS parameters; that selects the outgoing interface
- Queue appropriately: choose a queue so the packet can meet its delay requirements

QoS During Connection Setup (connection-based protocols)
- Choose a route: it needs adequate available bandwidth and must satisfy the delay requirements
- Reject the connection if there is no suitable path; save the connection state if a path exists

QoS While Forwarding (connectionless protocols, IP)
- Receive packet, determine its QoS requirements (packet classification)
- Select the outgoing path: must satisfy delay and bandwidth requirements
- Queue appropriately (choose queue...), handle congestion

Route Selection
- Bandwidth must be available, now and for the life of the flow
- Delay must be acceptable: should not put low-delay traffic on a satellite hop
- Connections: assign the connection some of the unallocated bandwidth; stable thereafter
- Without connections: nothing is allocated; available bandwidth can alter from packet to packet

Classification Dilemma
- Classifying packets takes resources
- A router connected to a high-speed link receives very many packets per second
- The time taken to classify can mean not all packets can be processed: congestion at the router CPU!

Dealing with Low Bandwidth
- Admission delay: on a 10 Mbps link a 1500-byte packet takes 1.2 ms to transmit; the next packet in the queue must wait regardless of priority; 1.2 ms max - OK, should not cause a problem
- On a 56 kbps link a 1500-byte packet takes 214 ms to transmit: a long queueing delay for the next packet

Link Fragmentation
- On a slow link, break the packet into pieces and send each individually
- Always in order (no re-ordering), but not necessarily consecutively
- A 100-byte fragment takes just 14 ms at 56 kbps - tolerable
- But it adds overheads: each fragment needs a header
- NB: this is not IP fragmentation
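A quick check of the transmit (serialisation) delays quoted in the two slides above; the helper name is mine.

```python
def serialisation_delay_ms(packet_bytes, link_bps):
    """Time to clock one packet onto the wire, in milliseconds."""
    return packet_bytes * 8 / link_bps * 1000

# The figures quoted above:
print(serialisation_delay_ms(1500, 10_000_000))   # ~1.2 ms on a 10 Mbps link
print(serialisation_delay_ms(1500, 56_000))       # ~214 ms on a 56 kbps link
print(serialisation_delay_ms(100, 56_000))        # ~14 ms per 100-byte fragment
```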
QoS Environment
- Communicating end users want: predictable network service, guarantees, value for money
- Network operators (providers) want: happy customers, minimum costs, a good revenue stream

QoS Requirement
- QoS provides benefits for the consumer, but there must be some benefit for the provider too - otherwise providers will not bother
- Happier customers: more customers, more revenue; higher revenue per customer
- Must not increase costs too much

Internet Network Model
- End-to-end networking: end nodes contain state, interior nodes (routers) just forward packets
- This makes the network robust: if an end node dies, the communication naturally dies too; if a router or link dies, nothing important is lost

Implications for QoS
- The end node specifies the QoS required - easy
- The network must provide that without state, treating each packet as an individual - impractical

Provider Packet Classification
- Any policy the provider likes: internal traffic vs external, customer traffic vs transit, good customers vs others
- TCP (or UDP) port number guessing
- This is all that really exists today

Integrated Services
- The desire is to allow users to specify service levels
- Must get provider agreement: ensure resources are available, enable billing (or ...)
- Want connections: connection setup achieves all that
- Don't want connections: lose all the benefits of a datagram network

Integrated Services: Soft State
- Routers remember the resource request, and get a chance to say NO
- Hope future packets follow the same path
- Repeat the request periodically

Integrated Services: Flows
- One to one: unicast traffic; one to many: multicast traffic
- Unidirectional (simplex)
- Any classifiable set of packets from a source to ...
- A QoS request is made for a packet flow

IS: Node Types
- Network elements (nodes, routers) may be QoS enabled, QoS aware, or non-QoS

Application Characteristics
- Real-time applications: assume a play-out buffer to handle (some) jitter
- Tolerant applications can handle the buffer emptying occasionally; intolerant ones cannot
- Intolerant applications need an absolute upper bound on delay; tolerant ones can handle the desired delay occasionally being exceeded

Application Characteristics (2)
- Elastic applications can handle any reasonable delay
- They prefer a short delay: as soon as possible

IS: Service Classes
- Controlled Load: deliver most packets as if the network is not congested - very little loss, delay for most packets close to the minimum possible
- Meets the needs of both tolerant real-time applications and elastic applications

IS: Service Classes (2)
- Guaranteed Service: no delay outside bounds, no packets discarded, provided the application stays within its agreed traffic profile
- Meets the needs of intolerant real-time applications

Service Parameters
- Token bucket rate and size (r and b): the rate (bytes/second) at which tokens are added, and the bytes the bucket can hold before overflowing - together they specify the sustained rate and the maximum burst
- Peak data rate (p): the maximum data rate of a burst
- Maximum packet size (M): used in queue and delay calculations
- Minimum policed unit (m): not a lower bound on packet size; smaller packets are assumed to be this big; allows overheads to be calculated

Packet Rates
- Expressed in bytes/second; values from 1 byte/sec to 40 TB/sec (a 32-bit floating-point value counting bytes/sec)
- If the peak rate is not defined, it is expressed as the outgoing interface data rate - 10 Mb/sec (Ethernet), 100 Mb/sec (FDDI, fast Ethernet), 1000 Mb/sec (Gigabit Ethernet) - or as infinity

Packet Sizes
- Packets smaller than m use m tokens, so many small packets will reduce the permitted data rate; this allows routers to adjust for link-level overhead
- Maximum packet size (M) affects admission delays

Guaranteed Service Parameters
- All of the above, plus a rate R and a slack S
- R is in bytes/second, with R >= r; S is in microseconds
- R determines the ideal delay bound; S is the allowable deviation from it

Implementing GS
- Two additional parameters per network element: D, the intrinsic delay (processing, ...), and C, the rate-dependent delay (transmission time, ...)
- Network elements accumulate these delays: Dtot is the accumulated D, Ctot the accumulated C
- S = Dreq - (b/R + Ctot/R + Dtot), where Dreq was the original delay bound (see the sketch after this section)

Path Setup
- Not specified by Integrated Services
- Manual path setup is OK (network operators); SNMP (Simple Network Management Protocol) path setup is OK
- A signalling protocol for path setup is OK - this is the expected method: RSVP
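A minimal numeric sketch of the guaranteed-service arithmetic above. The function names and the example figures are mine; seconds are used throughout for simplicity (the spec quotes S in microseconds), and a real implementation would take Ctot and Dtot from the values the network elements advertise along the path.

```python
def gs_delay_bound(b, R, c_tot, d_tot):
    """Approximate guaranteed-service queueing delay bound, in seconds,
    for a token-bucket depth of b bytes and a service rate of R bytes/s,
    given the accumulated rate-dependent term Ctot (bytes) and the
    rate-independent term Dtot (seconds) exported by the network elements."""
    return b / R + c_tot / R + d_tot

def gs_slack(d_req, b, R, c_tot, d_tot):
    """Slack S = Dreq - (b/R + Ctot/R + Dtot); negative means the requested
    bound cannot be met at rate R."""
    return d_req - gs_delay_bound(b, R, c_tot, d_tot)

# Illustrative numbers only: a 100 ms budget, 8 KB bucket, 1 MB/s rate,
# 3 KB of accumulated C and 20 ms of accumulated D along the path.
print(gs_slack(d_req=0.100, b=8_192, R=1_000_000, c_tot=3_000, d_tot=0.020))
```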
"Please Answer" resource ReSerVation Protocol Observation: It isn’t sender who cares about QoS, it is recipient RSVP (2) Can be used for other than QoS will ignore that for now assume QoS application of RSVP Sender initiates RSVP path setup source of flow Sends to recipient(s) special IP packet PATH message Contains flow characteristics SENDER_TSPEC Contains space for routers to calculate ADSPEC RSVP (3) Routers process PATH message Examine SENDER_TSPEC Adjust ADSPEC Recipient receives PATH message Returns a RESV message Containing a FLOWSPEC The desired QoS parameters RESV travels back along path travelled by PATH packets in reverse naturally Routers agree to provide service in FLOWSPEC Or do not... RSVP - Router Router RFC 2205 RSVP Routing Process Process Policy Control Admission Control Classifier Packet Scheduler RSVP Process Router RFC 2205 RSVP Routing Process Process Policy Control Admission Control Classifier Packet Scheduler Processes RESV messages Policy Control Router RFC 2205 RSVP Routing Process Process Policy Control Admission Control Classifier Determines whether this reservation request is authorised or not Packet Scheduler Admission Control Router RFC 2205 RSVP Routing Process Process Policy Control Admission Control Classifier Packet Scheduler Determines whether router has resources to handle the request Link Level setup Classifier Router RFC 2205 RSVP Routing Process Process Policy Control Admission Control Classifier Packet Scheduler Examines each data packet classifies it into appropriate flow Packet Scheduler Router RFC 2205 RSVP Routing Process Process Policy Control Admission Control Classifier Packet Scheduler Arranges for data packets to be transmitted Queueing Link Level control Routing Process Router RFC 2205 RSVP Routing Process Process Policy Control Admission Control Classifier Packet Scheduler Part of normal router function Can interact with RSVP RSVP Objects RSVP message contains header Then one or more OBJECTS Object is just some data TLV format Type Length Value Common Internet Packet structure RSVP Object Format Length Class Data Length Multiple of 4 Minimum Value 4 Class Type of Object this is Type Specific instance of that Object Type Integrated Services Processing Data Processing Rules Data exchange method RSVP Others IS Defined Services IS - Controlled Load Net under low load no delay guarantees best effort delivery for packets that exceed specs IS - Guaranteed service Fluid model (water in a pipe) as ideal GS specified by error bounds from ideal model best effort delivery (marked packets) if possible for those outside specs Data Packet Flow S A B R S sends packets src: S dst: R unicast or multicast prot: UDP srcport: 1234 dstport: 9876 Data Packet @ A Notice that <R,UDP,9876> matches existing session Determine that <S,1234> is allowed sender for reservation that exists Police (or shape) traffic according to reservation flowspec Put packet on high priority queue Repeat at B Using Integrated Services Scaling is a problem Data structure for every flow requesting QoS Could be millions of flows internet phone calls... 
ISP Practices
- Implement QoS for customers that demand it
- Implemented locally, inside one ISP network only
- Basic methods are much as seen before: packet classification, enhanced priority for packets that are identified
- No dynamic setup: configured policy (and flows) only

ISP Methods
- Could process this way independently at each router
- But most ISP networks look like this: [figure: backbone with edge nodes; only the green (edge) nodes connect to customers]
- Classification only needs to be done at those edge nodes

Classification
- Classify packets at the customer connection point: only need to configure for local customers, so it scales much better
- Classify packets at interconnect points: often simply so that all foreign traffic is best effort
- Other nodes (backbone nodes) just route, using the classification done by the edge nodes
- The classification must be passed along: use the IPv4 TOS field, which was unused in practice

Effectiveness of Intra-ISP QoS
- Benefits: can achieve better QoS within an ISP network; gives the ISP a competitive advantage over others; locks in clients - unless all their peers also move, the enhanced QoS will be lost
- Drawbacks: much manual configuration needed; not easy to determine if QoS is actually better; locks out customers unless all their peers also come

Revelation
- The TOS field at the classification point is unused
- Can use that field to signal the QoS desired: from customer to ISP, and from one ISP to another
- Need a standard set of values

Differentiated Services
- Still do classification; QoS is only enhanced where the ISP desires
- The TOS field is now an input to that classification
- Need agreements (Service Level Agreements): customer-ISP SLA, ISP-ISP SLA - or no extra QoS will be provided

Differentiated Services (2)
- Classification at ingress points
- Classification distributed with the packets: the Differentiated Services Code Point (the old TOS byte)
- At each interior node: a Per Hop Behaviour

Per Hop Behaviour
- A PHB is the externally visible behaviour at each hop: queueing, scheduling, drop probability (etc.)
- Micro-flows are aggregated; "micro-flow" is the DS name for an IS packet flow (packets sharing addresses and ports)
- Aggregated flows receive a specific PHB, intended to realise some specific QoS

DS Code Point
- DSCP (also known as Traffic Class): a label on each packet that selects a PHB
- Assigned at the ingress node; constant through the DS domain
- 6 bits, replacing the TOS byte in the IP header; the rightmost two bits, unused then, are now used for the ECN bits

PHB & DSCP
- A PHB is the definition of the behaviour at a router; a DSCP is the selector of a PHB
- Each PHB has a preferred DSCP, but multiple DSCP values may map to one PHB
- Routers are not required to implement lots of PHBs

DSCP Assignments
- [Figure: the old TOS byte - 3-bit Precedence plus the D/T/R/C service-selection bits - alongside the 6-bit DSCP]
- DSCP values were defined to be roughly compatible with the old TOS field

Default PHB
- DSCP 000000
- Used for all unclassified traffic: traditional best-effort routing

Class Selector PHB
- DSCP of the form xxx000 (the old Precedence bits, with the low three bits zero)
- Traffic is assigned to classes; "higher" class traffic is delivered no later than lower class traffic
- Different DSCP values should be treated differently; 111000 and 110000 must be given precedence
- Many possible implementations

Expedited Forwarding PHB
- DSCP 101110
- Low delay, low jitter, low latency, assured bandwidth

EF - PHB
- Specified bandwidth, enforced at the ingress node: the network will not receive "too much" EF traffic
- If packets arrive at the correct rate they must be forwarded at the correct rate, not delayed behind other traffic
- If packets arrive before their scheduled time, the ingress node should police or shape; an interior node should forward
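To make the DSCP mechanics above concrete, here is a small sketch of splitting the DS field and choosing a PHB. The selection logic and names are illustrative policy, not a mandated mapping.

```python
# The DSCP is the top six bits of the old TOS / Traffic Class byte;
# the bottom two bits are the ECN bits.
EF = 0b101110        # Expedited Forwarding codepoint
DEFAULT = 0b000000   # Default (best effort) codepoint

def split_ds_field(tos_byte):
    """Return (dscp, ecn) from the 8-bit TOS / Traffic Class value."""
    return tos_byte >> 2, tos_byte & 0x3

def select_phb(dscp):
    """Pick a PHB name for a codepoint (illustrative only)."""
    if dscp == EF:
        return "expedited-forwarding"
    if dscp != DEFAULT and dscp & 0b000111 == 0:
        return f"class-selector-{dscp >> 3}"   # xxx000 codepoints
    return "default"

dscp, ecn = split_ds_field(0xB8)   # 0xB8 = 1011 1000 -> DSCP 101110 (EF), ECN 00
print(dscp, ecn, select_phb(dscp))
```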
EF - PHB (2)
- EF models a leased line: the bandwidth is perhaps smaller, but the loss and delay characteristics are known
- An effective leased line between nodes, with the entry bandwidth policed by the ingress router

EF - PHB - Drawbacks
- [Figure: multiple EF flows between A and B]
- Likely to have multiple EF flows, not necessarily using diverse paths

EF - PHB - Drawbacks (2)
- Bandwidth allocation is difficult: must restrict EF usage per client, or the leased-line properties cannot be guaranteed
- Need to allow for multiple usage of the same path
- Would also like to avoid unnecessary limitations, but ingress routers lack the knowledge

Assured Forwarding PHB
- DSCP of the form mmmnn0, with 001 <= mmm <= 100 (the class) and 01 <= nn <= 11 (the drop precedence)
- The class determines the forwarding characteristics (buffer space)
- The drop precedence is used when packets need to be discarded: higher precedence => more likely to be dropped

Other PHBs
- Dynamic Real-Time / Non-Real-Time (12)
- Limited effect

Per Domain Behaviour (PDB)
- An extension of PHB, to obtain QoS through a domain (network)
- Implemented by choosing a suitable PHB to use inside the domain
- A different PHB may be used in different domains for the same PDB
- Standard or local PHB

Packet Classification
- Before the DSCP is added (or in order to add the DSCP) we have the packet data and nothing else; we must determine the requirements of the packet
- Addresses (source and destination): very crude classification
- Protocol: adds a little (TCP vs UDP)
- For more, must look in the protocol header

The 5-Tuple
- Source Address (IP header)
- Destination Address (IP header)
- Protocol (IP header)
- Source Port (TCP/UDP header)
- Destination Port (TCP/UDP header)
- Defines exactly what the packet is being used for
- Port usage is really known only at the endpoints, so endpoint classification works (but ports aren't needed there)
- Port usage is often standard, so network classification usually works
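A minimal sketch of 5-tuple classification at an edge node. The rule table, class names and ports are illustrative, not a recommended mapping (the UDP/9876 entry just echoes the earlier Data Packet Flow example).

```python
from typing import NamedTuple

class FiveTuple(NamedTuple):
    """The classification key described above (field names are mine)."""
    src_addr: str
    dst_addr: str
    protocol: int      # e.g. 6 = TCP, 17 = UDP
    src_port: int
    dst_port: int

# Illustrative edge-node policy keyed on protocol and destination port;
# anything unmatched stays best effort.
RULES = {
    (17, 9876): "high-priority",   # the UDP flow from the earlier example
    (6, 22): "low-delay",          # interactive ssh, say
}

def classify(pkt: FiveTuple) -> str:
    """Guess a traffic class by port-number guessing, as the slides put it."""
    return RULES.get((pkt.protocol, pkt.dst_port), "best-effort")

print(classify(FiveTuple("192.0.2.1", "198.51.100.7", 17, 1234, 9876)))
```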