Scalable Peer-to-Peer Networked Virtual Environment

VON: A Scalable Peer-to-Peer Network for Virtual Environments

Shun-Yun Hu (胡舜元)
([email protected])
CSIE, National Central University, Taiwan
2007/02/15
Outline

- Introduction
- Voronoi-based Overlay Network (VON)
- Evaluation
- Application for physical simulation
- Conclusion
What is a Networked Virtual Environment?

- Virtual Reality + Internet
- 3D worlds with people, agents, objects, terrain
- Military simulations ('80s)
- Massively Multiplayer Online Games (mid-'90s)
- Trends: larger scale, more realistic simulation
NVE: A Shared Space
The Scalability Problem

- Many nodes on a 2D plane (> 1,000)
- Message exchange with those within the Area of Interest (AOI)
- How does each node receive the relevant messages?

[Figure: a node and its circular Area of Interest]
A simple solution (point-to-point)

Source: [Funkhouser95]

- But... too many irrelevant messages
- N * (N-1) connections ≈ O(N^2) → not scalable!
A better solution (client-server)

Source: [Funkhouser95]

- Message filtering at the server to reduce traffic
- N connections = O(N) → the server is the bottleneck
Current solution (server-cluster)

Source: [Funkhouser95]

- Still limited by servers
- Expensive to deploy & maintain
Scalability Analysis

- Scalability constraints:
  - Computing resource (CPU)
  - Network resource (bandwidth)

[Figure: non-scalable vs. scalable system, with the resource limit marked;
 x: number of entities, y: resource consumption at the limiting system component]
What Next?

- Strategies
  - Increase resource (more servers)
  - Decrease consumption (message filtering)

- Architectures and their scale
  - Point-to-point (LAN): tens (10^1)
  - Client-server: hundreds (10^2)
  - Server-cluster: thousands (10^3)
  - ? → Peer-to-Peer: millions (10^6 ...)
What is Peer-to-Peer (P2P)?

[Stoica et al. 2003]
- Distributed systems without any centralized control or hierarchical organization
- Every node runs software with equivalent functionality

- Examples
  - File-sharing: Napster, Gnutella, KaZaA, eDonkey
  - Distributed search: Chord, CAN, Tapestry, Pastry
  - VoIP: Skype
Peer-to-Peer Overlay

[Figure: a P2P overlay network; source: [Keller & Simon 2003]]
Promise & Challenge of P2P

- Promises
  - Growing resource, decentralized → scalable
  - Commodity hardware → affordable

- Challenges
  - Topology maintenance → dynamic join/leave
  - Efficient content retrieval → no global knowledge
Outline

- Introduction
- Voronoi-based Overlay Network (VON)
- Evaluation
- Application for physical simulation
- Conclusion
Design Goals

- Observation:
  - For virtual environment applications, the contents we want are messages from AOI neighbors
  - Content discovery is therefore a neighbor discovery problem

- Goal: solve the neighbor discovery problem in a fully distributed, message-efficient manner

- Specific goals:
  - Scalable → limit per-node message traffic
  - Responsive → direct connection with AOI neighbors
Voronoi Diagram

- A 2D plane is partitioned into regions by sites; each region contains all the points closest to its site
- Can be used to find k-nearest neighbors easily

[Figure: a Voronoi diagram, highlighting one site, its region, and its neighbors]
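To make the defining property concrete, here is a minimal C++ sketch (illustrative names, not the VAST library's API): because every point in a Voronoi region is closer to that region's site than to any other site, finding which region contains a point reduces to a nearest-site query.

```cpp
#include <cstddef>
#include <vector>

struct Point { double x, y; };

// Squared distance between two points (no sqrt needed for comparisons).
static double dist2(const Point &a, const Point &b) {
    double dx = a.x - b.x, dy = a.y - b.y;
    return dx * dx + dy * dy;
}

// Returns the index of the site whose Voronoi region contains p: by
// definition, that is simply the site closest to p (sites must be non-empty).
std::size_t nearest_site(const std::vector<Point> &sites, const Point &p) {
    std::size_t best = 0;
    for (std::size_t i = 1; i < sites.size(); ++i)
        if (dist2(sites[i], p) < dist2(sites[best], p))
            best = i;
    return best;
}
```

VON itself builds the full diagram at each node so that enclosing and boundary neighbors can also be identified, which a plain nearest-site query alone does not provide.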
Design Concepts

Use the Voronoi diagram to solve the neighbor discovery problem:

- Each node constructs a Voronoi diagram of its known neighbors
- Enclosing and boundary neighbors are identified from the diagram
- Enclosing neighbors are minimally maintained
- Neighbors mutually collaborate in neighbor discovery

[Figure legend]
- Circle: Area of Interest (AOI)
- White: self
- Yellow: enclosing neighbor (E.N.)
- Light blue: boundary neighbor (B.N.)
- Pink: both E.N. and B.N.
- Green: AOI neighbor
- Light green: unknown neighbor
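A minimal sketch of the per-node state these concepts imply, assuming hypothetical type names rather than the actual VAST classes: each peer keeps a table of the neighbors it knows, tagged with the roles from the legend above, and rebuilds its local Voronoi diagram from their positions whenever the table changes.

```cpp
#include <cstdint>
#include <map>

// Position and identity of a known node.
struct NodeInfo {
    std::uint64_t id;
    double x, y;          // coordinates in the virtual world
    double aoi_radius;    // that node's own AOI radius
};

// Classification flags for one known neighbor, mirroring the roles above.
struct NeighborEntry {
    NodeInfo info;
    bool is_enclosing;    // its Voronoi region borders ours (kept even outside the AOI)
    bool is_boundary;     // its Voronoi region crosses our AOI boundary
    bool is_aoi;          // it lies within our AOI circle
};

// All neighbors a peer currently knows, keyed by node id; the local Voronoi
// diagram is rebuilt from these positions whenever the set changes.
using NeighborTable = std::map<std::uint64_t, NeighborEntry>;
```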
Procedure (JOIN)

1) The joining node sends its coordinates to any existing node
   - The join request is forwarded to the acceptor (the node whose region contains the joining coordinates)
2) The acceptor sends back its own neighbor list
   - The joining node then connects with the other nodes on the list

[Figure: the acceptor's region and the joining node]
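A hedged sketch of the forwarding step, with illustrative names rather than the real VAST protocol code: each node hands the JOIN request to whichever known neighbor is closest to the joining coordinates, and the node that finds itself closest is the acceptor, which then replies with its neighbor list.

```cpp
#include <cstddef>
#include <vector>

struct Pos { double x, y; };

static double d2(Pos a, Pos b) {
    double dx = a.x - b.x, dy = a.y - b.y;
    return dx * dx + dy * dy;
}

// One peer's view of itself and its currently known neighbors.
struct Peer {
    std::size_t id;
    Pos pos;
    std::vector<const Peer*> known;
};

// Greedy forwarding step: returns the peer the JOIN request should go to
// next, or `self` itself when no known neighbor is closer to the joining
// coordinates (i.e. `self` is the acceptor and sends back its neighbor list).
const Peer* next_hop(const Peer &self, Pos join_pos) {
    const Peer *best = &self;
    double best_d = d2(self.pos, join_pos);
    for (const Peer *n : self.known) {
        double d = d2(n->pos, join_pos);
        if (d < best_d) { best_d = d; best = n; }
    }
    return best;
}
```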
Procedure (MOVE)

1) Position updates are sent to all connected neighbors; messages to B.N.s are marked
   - Each B.N. checks for overlaps between the mover's AOI and its own E.N.s
2) The mover connects to new nodes upon notification by a B.N.
   - Any neighbor that no longer overlaps the AOI is disconnected

[Figure: non-overlapped neighbors, boundary neighbors, and new neighbors]
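A hedged sketch of the boundary-neighbor check in step 1, using assumed helper names rather than the actual VAST code: on receiving a marked position update, a B.N. reports any of its own enclosing neighbors that now fall inside the mover's AOI, and the mover connects to those it did not already know.

```cpp
#include <cstddef>
#include <vector>

struct Pos { double x, y; };

// True if p lies within the circle of radius `radius` around `center`.
static bool inside_aoi(Pos p, Pos center, double radius) {
    double dx = p.x - center.x, dy = p.y - center.y;
    return dx * dx + dy * dy <= radius * radius;
}

struct NeighborInfo { std::size_t id; Pos pos; };

// Run by a boundary neighbor when it receives a marked position update:
// returns those of its own enclosing neighbors that now fall inside the
// mover's AOI, so the mover can be notified (the mover itself simply ignores
// any candidate it already knows about).
std::vector<NeighborInfo> overlapped_enclosing(
        const std::vector<NeighborInfo> &enclosing,
        Pos mover_pos, double mover_aoi) {
    std::vector<NeighborInfo> candidates;
    for (const NeighborInfo &en : enclosing)
        if (inside_aoi(en.pos, mover_pos, mover_aoi))
            candidates.push_back(en);
    return candidates;
}
```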
Procedure (LEAVE)

1) The leaving node simply disconnects
2) The others then update their Voronoi diagrams
   - Any new B.N. is discovered via the existing B.N.s

[Figure: a leaving node (also a B.N.) and the newly discovered boundary neighbor]
Dynamic AOI

- Crowding within the AOI can overload a particular node
- It is better if the AOI radius can be adjusted in real time
Adjustment Conditions

- AOI-radius decrease
  - Number of connections > connection limit

- AOI-radius increase
  - Maximum connections not exceeded
  - Current AOI-radius < preferred AOI-radius

- Mutual awareness rule
  - Do not disconnect a neighbor who sees me
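A minimal sketch of how these conditions might be combined each time-step; the step size is an arbitrary illustrative choice, and the mutual awareness rule is applied separately when deciding which connections may actually be dropped.

```cpp
#include <algorithm>
#include <cstddef>

// Returns the AOI radius to use in the next time-step, following the
// conditions above: shrink when over the connection limit, otherwise grow
// back toward the preferred radius while there is room to do so.
double adjust_aoi(double current_aoi, double preferred_aoi,
                  std::size_t connections, std::size_t connection_limit) {
    const double step = 0.05 * preferred_aoi;            // illustrative step size
    if (connections > connection_limit)                  // decrease condition
        return std::max(current_aoi - step, 0.0);
    if (current_aoi < preferred_aoi)                     // increase conditions
        return std::min(current_aoi + step, preferred_aoi);
    return current_aoi;
}
```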
Demonstration

Simulation demo:
- Random movements (100 nodes, 1200x700 world)
- Local vs. global view
- Dynamic AOI adjustment
Outline

- Introduction
- Voronoi-based Overlay Network (VON)
- Evaluation
- Application for physical simulation
- Conclusion
Simulation Method

- C++ implementation of VON (the open source VAST library)
  - World size: 1200 x 1200
  - AOI: 100
  - Trials: 200 – 2000 nodes
  - Connection limit: 20
  - 3000 time-steps (~300 simulated seconds, assuming 10 updates/second)

- Behavior model
  - Random movement: random destination
  - Constant velocity: 5 units/step
  - Movement duration: random (until the destination is reached)
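The behavior model amounts to a random-waypoint walk; the sketch below is an illustrative reimplementation, not the simulator used for these results: each node heads toward a random destination at 5 units per time-step and picks a new destination on arrival.

```cpp
#include <cmath>
#include <random>

struct Pos { double x, y; };

struct Walker {
    Pos pos{0, 0}, dest{0, 0};
    double speed = 5.0;                       // units per time-step
    std::mt19937 rng{std::random_device{}()};

    // Uniformly random point inside the world rectangle.
    Pos random_point(double w, double h) {
        std::uniform_real_distribution<double> dx(0, w), dy(0, h);
        return {dx(rng), dy(rng)};
    }

    // Advance one discrete time-step.
    void step(double world_w = 1200, double world_h = 1200) {
        double dx = dest.x - pos.x, dy = dest.y - pos.y;
        double d = std::sqrt(dx * dx + dy * dy);
        if (d <= speed) {                     // arrived: choose a new destination
            pos = dest;
            dest = random_point(world_w, world_h);
        } else {                              // move at constant velocity
            pos.x += speed * dx / d;
            pos.y += speed * dy / d;
        }
    }
};
```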
Scalability: Avg. Transmission / sec

[Figure: average transmission size (kb/sec) vs. number of nodes (0 – 2000), for
basic, dAOI, and the "fixed density after 1000 nodes" variant of each]
Scalability: Max. Transmission / sec

[Figure: maximum transmission size (kb/sec) vs. number of nodes (0 – 2000), for
basic, dAOI, and the "fixed density after 1000 nodes" variant of each]
Scalability: Avg. Neighbor Size

[Figure: average neighbor size vs. number of nodes (0 – 2000), comparing
connected neighbors and AOI neighbors for both basic and dAOI]
Reliability: Effects of Packet Loss

[Figure: topology consistency (%) and recovery steps vs. packet loss rate (0% – 100%)]
Analysis of Design

- Scalability: bounded resource consumption → dynamic AOI
- Consistency (topology): topology is fully connected → enclosing neighbors
- Reliability: self-organizing → distributed neighbor discovery
- Responsiveness: lowest latency → direct connection, no relay
Outline

- Introduction
- Voronoi-based Overlay Network (VON)
- Simulation results
- Application for physical simulation
- Conclusion
A look at simulations

- Simulations are important tools in scientific research
  - Larger scale and higher resolution are constantly sought
  - However, computational resources can be limited

- An untapped potential
  - 300 million PCs on the Internet (2000 est.)
  - Up to 80% – 90% of CPU time is wasted
  - A large supply of computing resources, growing rapidly
Examples

- SETI@Home (UC Berkeley – space radio analysis)
  - 5.3 M participants world-wide
  - 2.2 M years of single-processor CPU time
  - A 54-teraflop machine (top 3 in 2005: 70.72, 51.87, 35.86)

- Folding@Home (Stanford – protein 3D structure)
  - 30,000 volunteers
  - 1 M days of single-processor CPU time
  - 23 papers published in Science, Nature, Nature Structural Biology, PNAS, JMB, etc.
The Grand Question

- Can we build the ultimate simulator for large-scale simulation, utilizing millions of computers world-wide?

- Potential applications:
  - Nuclear reactions
  - Star clusters
  - Atomic-scale modeling in material science
  - Weather, earthquakes
  - Biology (protein, ecosystem, brain, ...)
Current Limitations

- Current methodology [Hori et al. 2001]
  - Centralized server + many clients
  - Each client requests a "work unit" to process
  - Communication is minimized
  - Clients do not communicate with each other

- Issues:
  - Only suitable for "embarrassingly parallel" simulations
  - Sophisticated server-side algorithms and management are required

- An alternative: peer-to-peer (P2P) computing
A Simulation Scenario

- How can we utilize P2P for simulation purposes?
  - Answer: it depends on what you want to simulate

- We observe that many simulations...
  - are spatially-oriented (i.e. based on coordinate systems)
  - run in discrete time-steps
  - exhibit localized interaction (i.e. short-range interactions)
  - example: molecular dynamics (MD) simulation

- Protein folding?
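A toy C++ illustration of those three properties (illustrative code, not an actual MD package): particles live in a coordinate system, advance in discrete time-steps, and only interact within a cutoff radius, which is exactly what makes it plausible to partition the space across peers the way VON partitions an NVE.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Particle { double x, y, vx, vy; };

// Advance all particles by one discrete time-step of length dt.
void step(std::vector<Particle> &ps, double dt, double cutoff) {
    const double c2 = cutoff * cutoff;
    std::vector<double> fx(ps.size(), 0.0), fy(ps.size(), 0.0);

    // Localized interaction: pairs farther apart than the cutoff are skipped,
    // so each particle only needs data from its spatial neighborhood.
    for (std::size_t i = 0; i < ps.size(); ++i)
        for (std::size_t j = i + 1; j < ps.size(); ++j) {
            double dx = ps[i].x - ps[j].x, dy = ps[i].y - ps[j].y;
            double r2 = dx * dx + dy * dy;
            if (r2 > c2 || r2 == 0.0) continue;
            double f = 1.0 / (r2 * std::sqrt(r2));   // toy 1/r^2 repulsion
            fx[i] += f * dx; fy[i] += f * dy;
            fx[j] -= f * dx; fy[j] -= f * dy;
        }

    // Discrete time-step: simple explicit Euler update.
    for (std::size_t i = 0; i < ps.size(); ++i) {
        ps[i].vx += fx[i] * dt;   ps[i].vy += fy[i] * dt;
        ps[i].x  += ps[i].vx * dt; ps[i].y  += ps[i].vy * dt;
    }
}
```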
Outline

- Introduction
- Voronoi-based Overlay Network (VON)
- Simulation results
- Application for physical simulation
- Conclusion
Summary

- NVE scalability is achievable with a P2P architecture, and it is a neighbor discovery problem

- A promising solution: the Voronoi-based P2P overlay
  - Leverages the knowledge of each peer to maintain the topology
  - Properties:
    - Scalable: fully distributed, dynamic AOI
    - Efficient: few irrelevant messages, zero relay
    - Simple: simple protocol and procedures
Potential Applications

- Online games
  - Position updates in current MMOGs, voice chat
- Military
  - Enable large-scale, affordable military training simulation
- 3D Web
  - Provide multi-user interactivity to static 3D worlds
- Scientific simulations
  - Distribute spatial simulations requiring frequent synchronization
Acknowledgements

- Dr. Jui-Fa Chen (陳瑞發老師)
- Tsu-Han Chen (鄭子涵)
- Members of the Alpha Lab, TKU CS
- Dr. Chin-Kun Hu (胡進錕老師)
- Guan-Ming Liao (廖冠名)
- LSCP, Institute of Physics, Academia Sinica
- Joaquin Keller (France Tele. R&D, Solipsis)
- Jon Watte (there.com)
- Kuan-Ta Chen (陳寬達, NTU)
Q&A
VON: A Scalable Peer-to-Peer Network for Virtual Environments
IEEE Network, vol. 20, no. 4, Jul./Aug. 2006
Thank you!
[email protected]
http://vast.sourceforget.net
(http://vast.sf.net)
42
Issues for Creating NVE

- Multiplayer:
  - Consistency (events/states)
  - Responsiveness
  - Security

- Massively multiplayer:
  - Scalability
  - Persistency
  - Reliability (fault-tolerance)
Issues for Creating P2P NVE

- Multiplayer:
  - Consistency (events/states)
  - Responsiveness
  - Security

- Massively multiplayer:
  - Scalability
  - Persistency
  - Reliability (fault-tolerance)

- P2P NVE:
  - Consistency (topology)
Server-cluster issues

- Insufficient total resource
  - Hardware provisioning → over-provision!
- High user density (crowding)
  - User limits → limits scale & realism!
- Excessive inter-server communications
  - Less load balancing → difficult balance!
Related Work (1): DHT-based: SimMUD
[Knutsson et al. 2004] (UPenn)

- Pastry + Scribe
- Regions
- Coordinators (super-nodes)
- Fixed-size regions
- Relay overhead
Related Work (2): Neighbor-list Exchange
[Kawahara et al. 2004] (Univ. of Tokyo)

- Fully-distributed
- Nearest-neighbors
- List exchange
- High transmission
- Overlay partition
Related Work (3): Mutual Notification: Solipsis
[Keller & Simon 2003] (France Telecom R&D)

- Links with AOI neighbors
- Mutual cooperation
- Inside convex hull
- Potentially slow discovery
- Inconsistent topology
Consistency Metrics

- Topology Consistency [Kawahara, 2004]
  Topology consistency = (observed AOI neighbors) / (actual AOI neighbors)

- Drift Distance [Diot, 1999]
  Distance between the observed position and the actual position
  (averaged over all nodes)
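One plausible way to compute the two metrics, as a hedged sketch with illustrative function names: topology consistency as the fraction of a node's actual AOI neighbors that it currently observes, and drift distance as the gap between a neighbor's observed and actual positions.

```cpp
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <set>

// Fraction of the actual AOI neighbors that this node currently observes.
double topology_consistency(const std::set<std::uint64_t> &observed,
                            const std::set<std::uint64_t> &actual) {
    if (actual.empty()) return 1.0;
    std::size_t hits = 0;
    for (std::uint64_t id : actual)
        if (observed.count(id)) ++hits;
    return static_cast<double>(hits) / actual.size();
}

// Distance between where a neighbor is observed to be and where it actually is.
double drift_distance(double obs_x, double obs_y, double act_x, double act_y) {
    return std::hypot(obs_x - act_x, obs_y - act_y);
}
```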
Consistency: Topology Consistency

[Figure: topology consistency (%) vs. number of nodes (0 – 2000) for basic and
dAOI; the y-axis spans 99.90% – 100.00%]
Consistency: Drift Distance

[Figure: average drift distance vs. number of nodes (0 – 2000) for basic and
dAOI; the y-axis spans 0.00 – 0.50]
Problems of Voronoi Approach

- Message traffic (inherent to the fully-distributed design)
  - Circular round-up of nodes
  - Redundant message sending

- Incomplete neighbor discovery
  - Can happen with an inconsistent / incorrect neighbor list
  - Fast-moving nodes
  - Limited AOI → direct connections
P2P NVE Comparisons

- DHT-based
  - Consistency (topology): DHT & super-node (consistent)
  - Responsiveness: two to many hops
  - Scalability: O(n) on the supernode
  - Con: latency too high
- Neighbor-list exchange
  - Consistency (topology): neighbor-list exchange (partitioning)
  - Responsiveness: one hop
  - Scalability: constant in crowding
  - Con: overlay partitioning
- Solipsis
  - Consistency (topology): neighbor notify & query (undiscovery)
  - Responsiveness: one hop
  - Scalability: constant if fixed density
  - Con: occasional undiscovery
- VON
  - Consistency (topology): neighbor notify (consistent)
  - Responsiveness: one hop
  - Scalability: constant in crowding
  - Con: circular node line-up
Future Perspectives

- Short-term
  - Distributed event/state consistency
  - Customizable AOI (heterogeneity in P2P)
  - Recovery from overlay partition and fast-moving nodes

- Long-term
  - Persistency (P2P-based database)
  - Security (protection from malicious nodes)
  - 3D content distribution (3D streaming on P2P)

Massive, persistent 3D environment sharable by all!