Free Riding Multicast
Berkeley SysLunch (10/10/06)
Sylvia Ratnasamy (Intel Research)
Andrey Ermolinskiy (U.C. Berkeley)
Scott Shenker
(U.C. Berkeley and ICSI)
ACM SIGCOMM 2006
Talk Outline
Introduction
Overview of the IP Multicast service model
Challenges of Multicast routing
Free Riding Multicast (FRM)
Approach overview
Overhead evaluation
Design tradeoffs
Implementation
Internet Routing – a High-Level View
Internet is a packet-switched network
Each routable entity is assigned an IP address
C1: Send(Packet, C2Addr);
Routers forward packets towards their recipients
Routing protocols (BGP, OSPF) establish forwarding state in routers
Internet Routing – a High-Level View
Traditionally, Internet routing infrastructure offers a one-to-one (unicast) packet delivery service
Problem: Some applications require one-to-many packet delivery
Streaming media delivery
Digital conferencing
Online multiplayer games
IP Multicast Service Model
In 1990, Steve Deering proposed IP Multicast: an extension to the IP service model for efficient one-to-many packet delivery
Group-based communication:
Join (IPAddr, GrpAddr);
Leave (IPAddr, GrpAddr);
Send (Packet, GrpAddr);
Multicast routing problem: set up a dissemination tree rooted at the source with group members as leaves
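For concreteness, the sketch below shows the host-side view of this group API using standard IP multicast sockets; the group address 224.1.2.3 and port 5000 are made-up example values, not anything from the talk.

# Host-side sketch of Join/Leave/Send using standard multicast sockets.
# The group address and port are illustrative values.
import socket
import struct

GRP_ADDR, PORT = "224.1.2.3", 5000

# Join(IPAddr, GrpAddr): subscribe the default interface to the group.
rsock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rsock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
rsock.bind(("", PORT))
mreq = struct.pack("4s4s", socket.inet_aton(GRP_ADDR), socket.inet_aton("0.0.0.0"))
rsock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

# Send(Packet, GrpAddr): any sender may transmit; the network delivers
# one copy to every current group member.
ssock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
ssock.sendto(b"hello, group", (GRP_ADDR, PORT))

# Leave(IPAddr, GrpAddr): drop the subscription.
rsock.setsockopt(socket.IPPROTO_IP, socket.IP_DROP_MEMBERSHIP, mreq)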
IP Multicast Routing
New members must find the tree
Tree changes with new members and sources
Tree changes with network failures
Administrative boundaries and policies matter
Forwarding state grows with the number of groups and sources
IP Multicast – a Brief History
Extensively researched, limited deployment
Implemented in routers, supported by OS vendors
Some intra-domain/enterprise usage
Virtually no inter-domain deployment
Why?
Too complex? PIM-SM, PIM-DM, MBGP, MSDP,
BGMP, IGMP, etc.
FRM goal: make inter-domain multicast simple
Talk Outline
Introduction
Overview of the IP Multicast service model
Challenges of Multicast routing
Free Riding Multicast (FRM)
Approach overview
Overhead evaluation
Design tradeoffs
Implementation
FRM Overview
Free Riding Multicast: a radical restructuring of inter-domain multicast
Key design choice: decouple group membership discovery from multicast route construction
Principal trade-off: distributed route computation is avoided at the expense of optimal efficiency
FRM Approach
Group membership discovery
Extension to BGP - augment route advertisements with
group membership information
Multicast route construction
Centralized computation at the origin border router
Exploit knowledge of unicast BGP routes
Eliminate the need for a separate routing algorithm
Group Membership Discovery
Augment BGP with per-prefix group membership information
Example: domain X (prefix a.b.*.*) joins group G1
The border router at X re-advertises its prefix and attaches an encoding of its locally active groups

BGP UPDATE
Dest: a.b.*.*   AS Path: X   FRM group membership: {G1}

BGP disseminates the membership change
Border routers maintain membership information as part of per-prefix state in the BGP RIB, e.g. at AS V:

Prefix     AS Path    Active Groups
a.b.*.*    V Q P X    {G1}
c.d.e.*    V Q P Y
f.g.*.*    V R Z
h.i.*.*    V Q T

When domains Y (c.d.e.*) and Z (f.g.*.*) also join G1, their updates propagate the same way:

Prefix     AS Path    Active Groups
a.b.*.*    V Q P X    {G1}
c.d.e.*    V Q P Y    {G1}
f.g.*.*    V R Z      {G1}
h.i.*.*    V Q T
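As a rough illustration of the state a border router keeps, the sketch below models the per-prefix RIB entries as a Python dict updated on each BGP UPDATE. The layout and function name are assumptions for illustration; real FRM carries a Bloom-filter encoding of the active groups (see the encoding slides later), not an explicit set.

# Illustrative sketch of per-prefix membership state in the BGP RIB.
# The dict layout and function name are assumptions; real FRM carries a
# Bloom-filter encoding of the active groups, not an explicit set.

rib = {}  # prefix -> {"as_path": [...], "active_groups": {...}}

def on_bgp_update(prefix, as_path, frm_groups):
    """Install or refresh a route together with its FRM membership attribute."""
    rib[prefix] = {"as_path": list(as_path), "active_groups": set(frm_groups)}

# The example above: X (a.b.*.*) joins G1, then Y and Z follow.
on_bgp_update("a.b.*.*", ["V", "Q", "P", "X"], {"G1"})
on_bgp_update("c.d.e.*", ["V", "Q", "P", "Y"], {"G1"})
on_bgp_update("f.g.*.*", ["V", "R", "Z"], {"G1"})
on_bgp_update("h.i.*.*", ["V", "Q", "T"], set())

# Prefixes with members of G1, as seen from V:
print([p for p, e in rib.items() if "G1" in e["active_groups"]])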
Packet Forwarding
Example: domain V sends to group G1: Send(G1, Pkt)
The border router at V scans the Active Groups column of its BGP RIB to find the prefixes that have joined G1 (a.b.*.*, c.d.e.*, f.g.*.*)
The union of their unicast AS paths (V Q P X, V Q P Y, V R Z) forms the dissemination tree rooted at V
V forwards the packet to its children on the tree (Q and R), attaching an encoding of each child's subtree in a "shim" header (G1 + SubtreeQ to Q, G1 + SubtreeR to R)
Transit routers inspect the FRM shim header and forward the packet to those neighbors whose edges appear in the encoded subtree (e.g., Q forwards to P but not to T)
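The tree computation just described can be summarized in a few lines: scan the RIB for prefixes whose Active Groups contain G1, and group the AS-level edges of their unicast paths by the first-hop child. This is a sketch using the same illustrative RIB layout as the earlier sketch, not the prototype's actual code.

# Sketch of centralized tree construction at the source border router,
# reusing the illustrative RIB layout from the earlier sketch.

rib = {
    "a.b.*.*": {"as_path": ["V", "Q", "P", "X"], "active_groups": {"G1"}},
    "c.d.e.*": {"as_path": ["V", "Q", "P", "Y"], "active_groups": {"G1"}},
    "f.g.*.*": {"as_path": ["V", "R", "Z"],      "active_groups": {"G1"}},
    "h.i.*.*": {"as_path": ["V", "Q", "T"],      "active_groups": set()},
}

def subtrees_for_group(rib, group):
    """Collect, per first-hop child, the AS-level edges of every member path."""
    per_child = {}
    for entry in rib.values():
        path = entry["as_path"]
        if group not in entry["active_groups"] or len(path) < 2:
            continue
        child = path[1]                       # first hop after the local AS
        edges = zip(path[1:], path[2:])       # edges of the subtree rooted at child
        per_child.setdefault(child, set()).update(edges)
    return per_child

# V attaches an encoding of SubtreeQ to the copy sent to Q, and SubtreeR to R:
print(subtrees_for_group(rib, "G1"))
# Q -> {(Q,P), (P,X), (P,Y)},  R -> {(R,Z)}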
FRM Details
Encoding group membership
Simple enumeration is hard to scale
Border routers encode locally active groups using a Bloom filter
Transmit the encoding using a new path attribute in the BGP UPDATE message
Encoding the dissemination tree
Encode tree edges into a shim header using a Bloom filter
Tree computation is expensive, so border routers maintain a shim header cache
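A minimal sketch of the tree encoding follows: AS-level edges of a subtree are hashed into a shim-header Bloom filter at the source, and each transit router tests its own neighbor edges against that filter. The edge naming ("Q:P" strings), the double-hashing scheme, and the filter parameters are illustrative assumptions, not the paper's wire format.

# Sketch of the shim-header (tree) Bloom filter and the transit forwarding test.
# Edge names, the hashing scheme, and the filter size are assumptions.
import hashlib

L_BITS = 1024   # filter length in bits (assumed)
K_HASH = 4      # number of hash functions (assumed)

def bit_positions(item):
    d = hashlib.sha256(item.encode()).digest()
    h1 = int.from_bytes(d[:8], "big")
    h2 = int.from_bytes(d[8:16], "big")
    return [(h1 + i * h2) % L_BITS for i in range(K_HASH)]

def encode_edges(edges):
    """Source border router: fold every AS-level edge of a subtree into a filter."""
    bf = 0
    for a, b in edges:                        # e.g. ("Q", "P"), ("P", "X"), ("P", "Y")
        for pos in bit_positions(f"{a}:{b}"):
            bf |= 1 << pos
    return bf

def next_hops(tree_bf, my_as, neighbors):
    """Transit router: forward on each neighbor edge whose bits are all set."""
    return [n for n in neighbors
            if all((tree_bf >> pos) & 1 for pos in bit_positions(f"{my_as}:{n}"))]

subtree_q = encode_edges([("Q", "P"), ("P", "X"), ("P", "Y")])
print(next_hops(subtree_q, "Q", ["P", "T", "V"]))   # ['P'] (barring false positives)

The group-membership encoding (GRP_BF) uses the same insert/test mechanics, applied to group addresses instead of tree edges.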
Talk Outline
Introduction
Free Riding Multicast (FRM)
Approach overview
Overhead evaluation
Router storage requirements
Forwarding bandwidth overhead (in paper)
Design tradeoffs
Implementation
FRM Overhead – Router Storage
Transit router:
Transit forwarding state (per-neighbor, line card memory)
Origin border router:
1. Source forwarding state (per-group, line card memory)
2. Group membership state (per-prefix, BGP RIB)
Forwarding State (Source Border Router)
A = number of groups with sources in the local domain
Zipfian group popularity with a minimum of 8 domains per group
25 groups have members in every domain (global broadcast)
[Plot: shim header cache size (MB) vs. number of groups with active sources A, from 100 to 1M]
256 MB of line card memory enables fast-path forwarding for ~200,000 active groups
Group Membership State Requirements
Total of A multicast groups
Domains of prefix length p have 2^(32-p) users
Each user chooses and joins k distinct groups from A
10 false positives per prefix allowed
1M simultaneously active groups and 10 groups per user require ~3GB of route processor memory (not on the fast path)
Forwarding State (Transit Router)
Number of forwarding entries = number of neighbor ASes
Independent of the number of groups!
90% of ASes: 10 forwarding entries
99% of ASes: 100 forwarding entries
Worst case: 2400 forwarding entries
Talk Outline
Introduction
Free Riding Multicast (FRM)
Approach overview
Overhead evaluation
Design tradeoffs
Implementation
FRM Design Tradeoffs
Protocol simplicity
Can be implemented as a straightforward extension to BGP
Centralized route construction (tree is computed at source
border router from existing unicast routes)
Ease of configuration
Management within familiar BGP framework
Avoid rendezvous point selection
Enables ISP control over sources/subscribers
To block traffic for an undesired group, drop it from BGP
advertisement
Source control over the dissemination tree facilitates source-based charging [Express]
FRM Design Tradeoffs
Group membership state maintenance
Membership information disseminated more widely
Nontrivial bandwidth overhead (see paper for results)
Per-packet shim header
Redundant packet transmissions
New packet forwarding techniques
Full scan of the BGP RIB at source border router
Bloom filter lookups at transit routers
FRM Implementation
A proof-of-concept prototype on top of Linux 2.4 and the eXtensible Open Router Platform (http://www.xorp.org)
Functional components:
FRM kernel module (3.5 KLOC of new Linux kernel code): interfaces with the Linux kernel IP layer and implements the packet forwarding plane
FRM user-level component (1.9 KLOC of new code): an extension to the XORP BGP daemon that implements tree construction and group membership state dissemination
Configuration and management tools (1.4 KLOC of new code)
Summary
Free Riding Multicast is a very different approach to
inter-domain multicast routing
FRM makes use of existing unicast routing
infrastructure for group membership discovery and
route construction
Reduce protocol complexity via aggressive use of
router resources
Thank you
Challenges and Future Work
Incremental Deployment
Legacy BGP routers rate-limit their path advertisements
(30 seconds), thus delaying dissemination of group
membership state.
Large group Bloom filters that exceed maximum BGP
UPDATE message size (4KB) require fragmentation and
reassembly.
Explore alternative tree encoding techniques to
reduce per-packet bandwidth overhead
Backup Slides
FRM Overhead – Redundant Transmissions
Total number of transmissions required to transfer a single packet to all group members (FRM header size = 100 bytes)
Ideal Mcast – precisely 1 packet is transmitted along each edge
Per-AS Unicast – source unicasts to each member AS individually
[Plot: number of packet transmissions vs. group size (1,000 to 10M) for Per-AS Unicast, FRM, and Ideal Mcast]
For all group sizes, the overall bandwidth consumed by FRM is close to that of Ideal Mcast (within 2.4%).
FRM Overhead – Redundant Transmissions
Number of transmissions per AS-level link required to transfer a single
packet to all group members (FRM header size = 100 bytes)
Per-AS Unicast with 10M users:
• 6% of links see redundant
transmissions.
• Worst case: 6950
transmissions per link.
FRM with 10M users:
• Less than 0.5% of links see
redundant transmissions.
• Worst case: 157
transmissions per link
• Worst case with optimization
(see paper): 2 transmissions
per link
Encoding Group Membership State
Simple enumeration is hard to scale.
Border routers encode the set of locally active groups
using a constant-size Bloom filter (GRP_BF) of length L.
[Figure: the active groups {G1, G2, G3, G4, …} are mapped by K hash functions into the GRP_BF bit vector, e.g. 011011011010…]
BGP speakers communicate their GRP_BF state as
part of their regular route advertisements (BGP
UPDATE message) using a new path attribute.
Encoding Group Membership State
Use of Bloom filters introduces possibility of false
positives – a domain may on occasion receive traffic
for a group it has no interest in.
To deal with unwanted traffic, the recipient domain can install an explicit filter rule at the upstream provider's network.
For a given number of available upstream filters f,
the recipient computes the maximum tolerable false
positive rate r and chooses its filter length L
accordingly.
r = Min(1, f / (A – G))
A = size of the group address space
G = number of groups to be encoded
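A back-of-envelope reading of this rule, combined with standard Bloom-filter sizing; the values of f, A, and G below are made-up assumptions for illustration, not figures from the paper.

# Sketch: derive the tolerable false-positive rate and a filter length from it.
# The values of f, A, and G are illustrative assumptions.
import math

f = 100         # filter rules available at the upstream provider (assumed)
A = 2 ** 28     # size of the group address space (assumed)
G = 10_000      # groups actually encoded by this domain (assumed)

# At most f of the (A - G) non-member groups may falsely match:
r = min(1.0, f / (A - G))

# Standard Bloom-filter sizing for G elements at false-positive rate r:
L = math.ceil(-G * math.log(r) / math.log(2) ** 2)
k = max(1, round(L / G * math.log(2)))

print(f"r = {r:.2e}, L = {L} bits (~{L / 8 / 1024:.0f} KB), k = {k} hash functions")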
Summary
Free Riding Multicast is a very different approach to
inter-domain multicast routing
FRM makes use of existing unicast routing
infrastructure for group membership discovery and
route construction
Reduce protocol complexity via aggressive use of
router resources
Might be interesting to consider the viability of this
approach in broader context
Group Membership Bandwidth Overhead
For GRP_BFs with 5 hash functions and bit positions represented by 24-bit values, the payload of a membership update message for a single group join/leave event is approx. 15 bytes.
Assuming 200000 prefixes in the BGP RIB and 1 group membership event per second per prefix, the aggregate rate of incoming GRP_BF update traffic at a border router is approx. 3 MB/s.
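A quick check of that figure, using only the numbers on this slide:

# 5 bit positions per group event, 24 bits (3 bytes) each = 15 bytes per event.
bytes_per_event = 5 * 3
prefixes = 200_000
events_per_prefix_per_second = 1
rate = bytes_per_event * prefixes * events_per_prefix_per_second
print(rate / 1e6, "MB/s")   # -> 3.0 MB/s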
Why IP Multicast?
Technical feasibility aside, now might be a good time
to revisit the desirability question
Multicast applications now more widespread
IP-TV, MMORPG, digital conferencing
Better understanding of ISP requirements
Bottom line: simple multicast design might open the
door to more widespread adoption