VICCI: Programmable Cloud
Computing Research Testbed
Andy Bavier
Princeton University
November 3, 2011
VICCI Overview
• Support research in:
– Design, provisioning, and management of a global,
multi-datacenter infrastructure (the Cloud)
– Design and deployment of large-scale distributed
services in a Cloud environment
• Compute clusters and networking hardware
• Bootstrapped using MyPLC software
• Project began in late 2010
November 3, 2011
GEC12
2
Enabling Research
• A realistic environment for deployment studies
• Building block services
– Replication, consistency, fault-tolerance, scalable
performance, object location, migration
• New Cloud programming models
– Targeted application domains, e.g., virtual worlds or
managing personal data
• Cross-cutting foundational issues
– Managing the network within/between data centers
– Trusted cloud platform to ensure confidentiality
Building Block Services
• Harmony
– Consistent DHT for Cloud applications
• Syndicate
– Global, content-oriented filesystem
• CRAQ
– Key-value store with linearizable operations
• Prophecy
– Byzantine fault-tolerant replicated state machines
• Serval
– Dynamic service-centric network routing
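Of the services above, CRAQ's core idea can be shown compactly: writes propagate down a replica chain, and any replica may serve reads, but a replica holding an uncommitted ("dirty") version consults the tail before answering. The sketch below is purely illustrative; the class and method names are invented here and are not CRAQ's actual code or API.

```python
# Illustrative sketch of chain replication with apportioned queries
# (the idea behind CRAQ). Names and structure are hypothetical.

class ChainNode:
    def __init__(self):
        self.clean = {}   # committed key -> value
        self.dirty = {}   # tentative key -> value, write still in flight

class Chain:
    def __init__(self, length=3):
        self.nodes = [ChainNode() for _ in range(length)]

    def write(self, key, value):
        # Writes flow head -> tail; each node holds the version as dirty
        # until the tail acknowledges, then every node marks it clean.
        for node in self.nodes:
            node.dirty[key] = value
        for node in self.nodes:
            node.clean[key] = node.dirty.pop(key)

    def read(self, key, replica=0):
        # Any replica can serve reads, spreading read load across the chain.
        node = self.nodes[replica]
        if key in node.dirty:
            # Dirty version here: ask the tail, which always knows the
            # committed version, so reads stay linearizable.
            tail = self.nodes[-1]
            return tail.clean.get(key)
        return node.clean.get(key)
```

A read served by a mid-chain replica during a write returns the tail's committed value, which is what makes the operations linearizable despite reads being apportioned across replicas.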
Cloud Programming Models
• Virtual Worlds
– Issues: federation, expansibility, scalability,
migration, security
– Cooperative but not necessarily collaborative
– Application: Meru
• Rhizoma
– A Cloud for personal applications
– Issues: resource acquisition, maintaining inter-device connectivity
Cross-Cutting Issues
• Tolerating and detecting faults
– Zeno: BFT protocol with high availability
– Accountable virtual machines
• Networking issues
– Simple datacenter networks with static multipath
routing
– Multipath routing to improve reliability and load
balance
– Peering on demand
• Trusted Cloud Computing Platform
– Confidentiality and integrity of Cloud computations
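The static multipath routing mentioned above can be sketched in a few lines: a switch hashes each flow's 5-tuple to pick one of several equal-cost paths, so a flow's packets stay on one path (no reordering) while different flows spread the load. This is a generic ECMP-style illustration, not VICCI's datacenter design.

```python
# Illustrative sketch of static multipath (ECMP-style) path selection.
# The path names are placeholders, not real network identifiers.

import hashlib

PATHS = ["path-0", "path-1", "path-2", "path-3"]  # assumed equal-cost paths

def pick_path(src_ip, dst_ip, src_port, dst_port, proto="tcp"):
    """Deterministically map a flow's 5-tuple onto one path."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(PATHS)
    return PATHS[index]
```

Because the mapping is a pure function of the flow identifier, no per-flow state is needed in the switches, which is what keeps the scheme "static" and simple.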
VICCI Facility
• Hardware
– 7 geographically dispersed compute clusters
• US: Seattle WA, Palo Alto CA, Princeton NJ, Atlanta GA
• Europe: Saarbrücken (Germany), Zurich (Switzerland)
• Asia: Tokyo (Japan)
– 70 x 12-core Intel Xeon servers w/48GB RAM
– 4 OpenFlow-enabled switches
– 1Gbps connectivity between clusters, 10Mbps to Internet
• Software
– Lightweight virtual machines
– Remote management software for creating, provisioning,
and controlling distributed VMs
Developing VICCI
Extensions to PlanetLab:
• Node Virtualization
– PlanetLab: only container-based VMs (VServer)
– VICCI: support for Xen, KVM, OpenVZ, Linux Containers
• Network Virtualization
– PlanetLab: IP connectivity; does not manage the local network
– VICCI: use OpenFlow switches to manage intra-cluster traffic on a per-service/application basis
• Bandwidth Management
– PlanetLab: limits bandwidth on a per-node basis
– VICCI: limits bandwidth on a per-cluster basis using distributed rate limiting
• Resource Allocation
– PlanetLab: best-effort sharing of available resources
– VICCI: resource guarantees (e.g., reserve CPU cores to VMs)
• Cluster Support
– PlanetLab: all nodes talk to PlanetLab Central
– VICCI: a Site Manager will configure and manage a cluster of nodes as a unit
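The distributed rate limiting idea in the bandwidth-management row can be sketched simply: each node periodically reports its observed demand, and every node then sets its local limit to a demand-proportional share of the cluster-wide cap, so the aggregate never exceeds the cap. The function and cap below are illustrative assumptions, not VICCI's actual implementation.

```python
# Hypothetical sketch of distributed rate limiting: split one global
# cluster cap across nodes in proportion to each node's recent demand.

GLOBAL_LIMIT_MBPS = 1000  # assumed cluster-wide cap, in Mbps

def apportion(global_limit, demands):
    """Split a global rate limit across nodes in proportion to demand."""
    total = sum(demands.values())
    if total == 0:
        # No demand observed anywhere: split the cap evenly.
        even = global_limit / len(demands)
        return {node: even for node in demands}
    return {node: global_limit * d / total for node, d in demands.items()}

# Example: three nodes with unequal recent demand (Mbps).
demands = {"node-a": 600, "node-b": 300, "node-c": 100}
limits = apportion(GLOBAL_LIMIT_MBPS, demands)
```

Each node would enforce its share with an ordinary local token bucket; only the small demand summaries need to be exchanged between nodes, which is what makes the scheme cheap to run across a cluster.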
Questions
• Does VICCI have value to the GENI research
community?
– Resources: PlanetLab + OpenFlow clusters
– Experimental building block services
• VICCI URL: http://www.vicci.org