Distributed Systems
CS 15-440
Programming Models- Part IV
Lecture 17, Oct 27, 2014
Mohammad Hammoud
Today…
Last Session:
Programming Models – Part III: MapReduce
Today’s Session:
Programming Models – Part IV: Pregel
Announcements:
Project 3 will be posted today. It is due on Wednesday Nov 12,
2014 by midnight
PS4 will be posted today. It is due on Saturday Nov 15, 2014
by midnight
Objectives
Discussion on Programming Models:
  Why parallelize our programs?
  Parallel computer architectures
  Traditional models of parallel programming
  Types of parallel programs
  Message Passing Interface (MPI)
  MapReduce, Pregel and GraphLab
(Last 3 sessions; cont'd today)
The Pregel Analytics Engine
Pregel:
  Motivation & Definition
  The Computation & Programming Models
  Input and Output
  Architecture & Execution Flow
  Fault Tolerance
Motivation for Pregel
How to implement algorithms to process Big Graphs?
  Create a custom distributed infrastructure for each new algorithm: difficult!
  Rely on existing distributed analytics engines like MapReduce: inefficient and cumbersome!
  Use a single-computer graph algorithm library like BGL, LEDA, NetworkX, etc.: Big Graphs might be too large to fit on a single machine!
  Use a parallel graph processing system like Parallel BGL or CGMGraph: not suited for large-scale distributed systems!
What is Pregel?
Pregel is a large-scale, graph-parallel, distributed analytics engine

Some characteristics:
• In-memory (in contrast to MapReduce)
• High scalability
• Automatic fault tolerance
• Flexibility in expressing graph algorithms
• Message-passing programming model
• Tree-style, master-slave architecture
• Synchronous

Pregel is inspired by Valiant's Bulk Synchronous Parallel (BSP) model
The Pregel Analytics Engine
Pregel:
  Motivation & Definition
  The Computation & Programming Models
  Input and Output
  Architecture & Execution Flow
  Fault Tolerance
The BSP Model
[Figure: The BSP model — in each of three super-steps, CPUs 1–3 process their portions of the data in parallel, exchange data, and then synchronize at a barrier before the next super-step begins.]
Entities and Super-Steps
The computation is described in terms of vertices, edges, and a sequence of super-steps
You give Pregel a directed graph consisting of vertices and edges
  Each vertex is associated with a modifiable, user-defined value
  Each edge is associated with a source vertex, a value, and a destination vertex
During a super-step:
  A user-defined function F is executed at each vertex V
  F can read messages sent to V in super-step S – 1 and send messages to other vertices that will be received at super-step S + 1
  F can modify the state of V and its outgoing edges
  F can alter the topology of the graph
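The per-super-step behavior can be sketched roughly as below. This is illustrative C++-style pseudocode, not Pregel's actual internals: the Worker, Inbox, and barrier helpers are made-up names used only to show where F (Compute) fits.

  // Illustrative sketch only: how a worker might drive one super-step S.
  // The Worker/Inbox/barrier helpers are hypothetical, not the real Pregel API.
  void RunSuperStep(Worker& worker, int64_t S) {
    for (Vertex* v : worker.ActiveVertices()) {
      // Messages that were sent to v during super-step S - 1
      MessageIterator msgs = worker.Inbox(v, S - 1);
      // The user-defined function F: it may read msgs, send messages (delivered
      // at S + 1), modify v's value and out-edges, mutate the topology, or VoteToHalt()
      v->Compute(&msgs);
    }
    worker.FlushOutgoingMessages();   // sent messages become visible at super-step S + 1
    worker.GlobalBarrier();           // all workers synchronize before S + 1 starts
  }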
Topology Mutations
The graph structure can be modified during any super-step
  Vertices and edges can be added or deleted
Mutating graphs can create conflicting requests where multiple vertices at a super-step might try to alter the same edge/vertex
Conflicts are avoided using partial orderings and handlers

Partial orderings:
  Edges are removed before vertices
  Vertices are added before edges
  Mutations performed at super-step S are only effective at super-step S + 1
  All mutations precede calls to the actual computations

Handlers:
  Among multiple conflicting requests, one request is selected arbitrarily
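For illustration only, the partial ordering could be applied between super-steps roughly as follows; the MutationQueue and Graph types and their methods are hypothetical, not part of the Pregel API.

  // Hypothetical sketch of applying queued topology mutations between
  // super-step S and S + 1, following the partial ordering above.
  void ApplyQueuedMutations(MutationQueue& q, Graph& g) {
    for (const EdgeId& e : q.edge_removals)      g.RemoveEdge(e);     // edges removed before vertices
    for (const VertexId& v : q.vertex_removals)  g.RemoveVertex(v);
    for (const VertexDesc& v : q.vertex_additions) g.AddVertex(v);    // vertices added before edges
    for (const EdgeDesc& e : q.edge_additions)   g.AddEdge(e);
    // Any remaining conflicts (e.g., two additions of the same vertex with
    // different values) go to a handler; the default picks one arbitrarily.
  }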
Algorithm Termination
Algorithm termination is based on every vertex voting to halt
  In super-step 0, every vertex is active
  All active vertices participate in the computation of any given super-step
A vertex deactivates itself by voting to halt and enters an inactive state
A vertex returns to the active state if it receives an external message

[Figure: vertex state machine — a "Vote to Halt" transition moves a vertex from Active to Inactive; a "Message Received" transition moves it back to Active.]

A Pregel program terminates when all vertices are simultaneously inactive and there are no messages in transit
Finding the Max Value in a Graph
[Figure: a four-vertex example with initial values 3, 6, 2, and 1 in super-step S. Blue arrows are messages; blue vertices have voted to halt. The maximum value 6 propagates in each super-step until, by super-step S + 3, every vertex holds the value 6.]
The Programming Model
Pregel adopts the message-passing programming model
  Messages can be passed from any vertex to any other vertex in the graph
  Any number of messages can be passed
  The message order is not guaranteed
  Messages will not be duplicated
Combiners can be used to reduce the number of messages passed between super-steps
Aggregators are available for reduction operations (e.g., sum, min, and max)
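As a rough illustration of a combiner (the Combiner base class and the Combine()/Output() calls below follow the style of the Pregel paper but are not shown on these slides, so treat them as assumptions), a combiner for the max-finding example in this lecture could collapse all messages headed to the same vertex into a single maximum:

  // Sketch of a combiner that keeps only the largest message per destination
  // vertex, reducing the number of messages sent between super-steps.
  // The Combiner<MessageValue> base class and its methods are assumed here.
  class MaxCombiner : public Combiner<double> {
   public:
    virtual void Combine(MessageIterator* msgs) {
      double max_val = -std::numeric_limits<double>::infinity();  // needs <limits>
      for ( ; !msgs->Done(); msgs->Next())
        if (msgs->Value() > max_val) max_val = msgs->Value();
      Output("combined_source", max_val);   // emit one combined message in place of many
    }
  };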
The Pregel API in C++
A Pregel program is written by sub-classing the Vertex class:
template <typename VertexValue,
          typename EdgeValue,
          typename MessageValue>   // the types used for vertex values, edge values, and messages
class Vertex {
 public:
  // Override Compute() to define the computation performed at each vertex
  // in every super-step
  virtual void Compute(MessageIterator* msgs) = 0;

  const string& vertex_id() const;       // ID of the current vertex
  int64 superstep() const;               // index of the current super-step

  const VertexValue& GetValue();         // read the value of the current vertex
  VertexValue* MutableValue();           // modify the value of the current vertex
  OutEdgeIterator GetOutEdgeIterator();  // iterate over the vertex's outgoing edges

  // Pass a message to another vertex (received in the next super-step)
  void SendMessageTo(const string& dest_vertex,
                     const MessageValue& message);

  void VoteToHalt();                     // vote to halt; the vertex becomes inactive
};
Pregel Code for Finding the Max Value

class MaxFindVertex
    : public Vertex<double, void, double> {
 public:
  virtual void Compute(MessageIterator* msgs) {
    double currMax = GetValue();           // current value of this vertex
    SendMessageToAllNeighbors(currMax);
    // Take the maximum over all messages received from the previous super-step
    for ( ; !msgs->Done(); msgs->Next()) {
      if (msgs->Value() > currMax)
        currMax = msgs->Value();
    }
    if (currMax > GetValue())
      *MutableValue() = currMax;           // value grew: stay active
    else
      VoteToHalt();                        // no growth: vote to halt
  }
};
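Note how this ties into the vertex state machine from the termination slide: a vertex votes to halt as soon as a super-step no longer increases its value, it is reactivated whenever a neighbor delivers a new message, and the whole program stops once no vertex is active and no messages remain in transit.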
The Pregel Analytics Engine
Pregel:
  Motivation & Definition
  The Computation & Programming Models
  Input and Output
  Architecture & Execution Flow
  Fault Tolerance
Input, Graph Flow and Output
The input graph in Pregel is stored in a distributed storage layer
(e.g., GFS or Bigtable)
The input graph is divided into partitions consisting of vertices and
outgoing edges
Default partitioning function is hash(ID) mod N, where N is the # of partitions
Partitions are stored in node memories for the duration of the computation (hence an in-memory model, not a disk-based one)
Outputs in Pregel are typically graphs isomorphic (or mutated with respect) to the input graphs
  Yet, outputs can also be aggregated statistics mined from the input graphs (depending on the graph algorithm)
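For concreteness, the default partitioning rule can be sketched as below; std::hash is only a stand-in for whatever hash function Pregel actually uses.

  #include <cstddef>
  #include <functional>
  #include <string>

  // Sketch of the default assignment of a vertex to a partition:
  // partition = hash(vertex ID) mod N, where N is the number of partitions.
  int PartitionForVertex(const std::string& vertex_id, int num_partitions) {
    std::size_t h = std::hash<std::string>{}(vertex_id);   // placeholder hash function
    return static_cast<int>(h % static_cast<std::size_t>(num_partitions));
  }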
The Pregel Analytics Engine
Pregel:
  Motivation & Definition
  The Computation & Programming Models
  Input and Output
  Architecture & Execution Flow
  Fault Tolerance
The Architectural Model
Pregel assumes a tree-style network topology and a master-slave architecture

[Figure: a core switch connects two rack switches; the master and Workers 1–5 sit under the rack switches. The master pushes work (i.e., partitions) to all workers, and workers send completion signals back to the master.]

When the master receives the completion signal from every worker in super-step S, it starts super-step S + 1
The Execution Flow
Steps of Program Execution in Pregel:
1. Copies of the program code are distributed across all machines
1.1 One copy is designated as the master and every other copy is deemed a worker/slave
2. The master partitions the graph and assigns each worker one or more partitions, along with portions of the input "graph data"
3. Every worker executes the user-defined function on each vertex
4. Workers can communicate among each other
The Execution Flow
Steps of Program Execution in Pregel:
5. The master coordinates the execution of super-steps
6. The master calculates the number of inactive vertices after
each super-step and signals workers to terminate if all vertices
are inactive (and no messages are in transit)
7. Each worker may be instructed to save its portion of the graph
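Putting steps 5–7 together, the master's coordination loop can be sketched roughly as follows; the Master helper methods are made-up names used only for illustration.

  // Hypothetical sketch of the master's super-step coordination (steps 5-7 above).
  void RunToCompletion(Master& master) {
    int64_t S = 0;
    while (true) {
      master.InstructWorkersToRunSuperStep(S);  // every worker runs Compute() on its vertices
      master.WaitForCompletionSignals();        // one completion signal per worker
      // Terminate when all vertices are inactive and no messages are in transit
      if (master.ActiveVertexCount() == 0 && !master.MessagesInTransit())
        break;
      ++S;
    }
    master.InstructWorkersToSaveOutput();       // each worker may save its portion of the graph
  }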
The Pregel Analytics Engine
Pregel:
  Motivation & Definition
  The Computation & Programming Models
  Input and Output
  Architecture & Execution Flow
  Fault Tolerance
Fault Tolerance in Pregel
Fault-tolerance is achieved through checkpointing
At the start of every super-step, the master may instruct the workers to save the states of their partitions to stable storage
The master uses "ping" messages to detect worker failures
If a worker fails, the master re-assigns corresponding vertices
and input graph data to another available worker,
and restarts the super-step
The available worker re-loads the partition state of the failed
worker from the most recent available checkpoint
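A hedged sketch of this checkpoint-and-recover cycle is shown below; every helper name is illustrative rather than part of Pregel.

  // Illustrative only: checkpointing at super-step boundaries and recovery from
  // a worker failure, as described on this slide. All helpers are hypothetical.
  void RunSuperStepWithRecovery(Master& master, int64_t S) {
    if (master.ShouldCheckpoint(S))
      master.InstructWorkersToCheckpoint(S);     // save partition states to stable storage

    master.InstructWorkersToRunSuperStep(S);
    if (!master.AllWorkersAnsweredPing()) {      // "ping" messages detect worker failures
      WorkerId failed = master.FailedWorker();
      WorkerId spare  = master.PickAvailableWorker();
      master.ReassignPartitions(failed, spare);  // re-assign vertices and input graph data
      master.LoadCheckpointOn(spare, master.LatestCheckpoint());  // re-load partition state
      master.RestartSuperStep(S);                // restart the super-step
    }
  }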
How Does Pregel Compare to MapReduce?
Pregel versus MapReduce
Aspect                         | Hadoop MapReduce                                   | Pregel
-------------------------------|----------------------------------------------------|---------------------------------
Programming Model              | Shared-Memory (abstraction)                        | Message-Passing
Computation Model              | Synchronous                                        | Synchronous
Parallelism Model              | Data-Parallel                                      | Graph-Parallel
Architectural Model            | Master-Slave                                       | Master-Slave
Task/Vertex Scheduling Model   | Pull-Based                                         | Push-Based
Application Suitability        | Loosely-Connected / Embarrassingly Parallel apps   | Strongly-Connected applications
Next Class
GraphLab
Back-up Slides
PageRank
PageRank is a link analysis algorithm
The rank value indicates the importance of a particular web page
A hyperlink to a page counts as a vote of support
A page that is linked to by many pages with high PageRank receives a high rank itself
A PageRank of 0.5 means there is a 50% chance that a person clicking a random link will be directed to the document with the 0.5 PageRank
PageRank (Cont’d)
Iterate:

  R[i] = α + (1 − α) × Σ over pages j that link to i of ( R[j] / L[j] )

Where:
  α is the random reset probability
  L[j] is the number of links on page j

[Figure: a small example graph of six numbered pages (1–6).]
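For concreteness, here is a hedged sketch of how this update could be written against the Vertex API from the earlier slide, in the spirit of the PageRank example in the Pregel paper. The fixed 30 super-steps, the use of GetOutEdgeIterator().size() for the out-degree, and the constant α = 0.15 are assumptions for illustration.

  // Sketch only: PageRank as a Pregel vertex program, using the update
  // R[i] = alpha + (1 - alpha) * sum over in-neighbors j of R[j] / L[j].
  class PageRankVertex : public Vertex<double, void, double> {
   public:
    virtual void Compute(MessageIterator* msgs) {
      const double alpha = 0.15;                 // random reset probability (assumed value)
      if (superstep() >= 1) {
        double sum = 0;                          // accumulates R[j] / L[j] from in-neighbors
        for ( ; !msgs->Done(); msgs->Next())
          sum += msgs->Value();
        *MutableValue() = alpha + (1 - alpha) * sum;
      }
      if (superstep() < 30) {                    // run a fixed number of super-steps (assumption)
        int64 n = GetOutEdgeIterator().size();   // L[i]: number of links on this page
        SendMessageToAllNeighbors(GetValue() / n);
      } else {
        VoteToHalt();
      }
    }
  };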