
Architecture and Design of Distributed Dependable Systems (TI-ARDI)
POSA2: Leader/Followers Architectural Pattern
Version: 21.09.2014
Abstract
The Leader/Followers architectural pattern provides an efficient concurrency model in which multiple threads take turns sharing a set of event sources in order to detect, demultiplex, dispatch, and process service requests that occur on those event sources.
Context
• An event-driven application in which multiple service requests arriving on a set of event sources must be processed efficiently by multiple threads that share those event sources
Example: On-Line Transaction Processing (OLTP)
[Figure: multi-tier OLTP architecture. Clients send transaction requests (tr) over a LAN/WAN to front-end communication servers, which forward them to back-end database servers and relay the responses back to the clients.]
OLTP Server - first possible solution
A common strategy for improving OLTP server performance is to use a multi-threading concurrency model that processes requests from different clients, and the corresponding results, simultaneously.
[Figure: Half-Sync/Half-Reactive architecture. A single I/O thread uses select() on the network and places transaction requests into a request queue (a Monitor Object), from which a worker thread pool dequeues and processes them.]
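A minimal sketch of this half-sync/half-reactive design using standard C++ threads; the class and identifier names below are illustrative assumptions, not the POSA2 framework classes:

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

// Synchronized request queue (Monitor Object): the I/O thread produces,
// the worker threads consume.
class Request_Queue {
public:
  void put(std::string req) {
    std::lock_guard<std::mutex> lock(mutex_);
    queue_.push(std::move(req));
    not_empty_.notify_one();
  }
  std::string get() {
    std::unique_lock<std::mutex> lock(mutex_);
    not_empty_.wait(lock, [this] { return !queue_.empty(); });
    std::string req = std::move(queue_.front());
    queue_.pop();
    return req;
  }
private:
  std::mutex mutex_;
  std::condition_variable not_empty_;
  std::queue<std::string> queue_;
};

int main() {
  Request_Queue queue;

  // Worker thread pool: each worker blocks on the queue and processes requests.
  std::vector<std::thread> workers;
  for (int i = 0; i < 4; ++i)
    workers.emplace_back([&queue, i] {
      for (;;) {
        std::string req = queue.get();
        if (req == "SHUTDOWN") break;
        std::cout << "worker " << i << " processing " << req << '\n';
      }
    });

  // The single I/O thread would normally demultiplex the network with select();
  // here it simply enqueues a few simulated transaction requests.
  for (int i = 0; i < 8; ++i)
    queue.put("tr" + std::to_string(i));
  for (std::size_t i = 0; i < workers.size(); ++i)
    queue.put("SHUTDOWN");
  for (auto &w : workers) w.join();
}

Every request crosses a thread boundary through this queue, which is exactly the context-switching and synchronization overhead listed on the next slide.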
Problem
• It is hard to implement high-performance multi-threaded server applications
• These applications process high volumes of multiple event types, such as CONNECT, READ, and WRITE
• Context switching overhead
• Dynamic memory allocation
• Multiple threads calling select()
Solution
• Structure a pool of threads to share a set of event sources efficiently by taking turns demultiplexing events and synchronously dispatching the events to application services that process them
  – allow one leader thread to wait for an event
  – other follower threads can queue up, waiting for their turn
  – after detecting an event, the leader promotes a follower to be the new leader and then plays the role of a processing thread
Leader/Followers Structure
[Class diagram: Threads join (*..1) a Thread Pool, which provides a synchronizer and the operations join() and promote_new_leader(). The Thread Pool demultiplexes events through a Handle Set (handle_events(), deactivate_handle(), reactivate_handle(), select()), which uses Handles. Event Handlers (handle_event(), get_handle()) are associated with Handles and are specialized by Concrete Event Handler A and Concrete Event Handler B.]
Thread State Chart
[State chart: a thread joining the pool enters LEADING if there is no current leader, otherwise FOLLOWING. When a new event arrives, the leader enters PROCESSING; a follower promoted to new leader moves from FOLLOWING to LEADING. When processing completes, the thread re-enters LEADING if there is no current leader, otherwise FOLLOWING.]
Leader/Followers Pattern Dynamics
[Sequence diagram: Thread 1 calls join() on the Thread Pool, becomes the leader, and calls handle_events() on the Handle Set. Thread 2 also calls join() and sleeps until it becomes the leader. When an event arrives, Thread 1 calls deactivate_handle() and then promote_new_leader(); Thread 2 wakes up, becomes the leader, and waits for new events via handle_events(), while Thread 1 processes the current event by calling handle_event() on the Concrete Event Handler and finally reactivate_handle(). Thread 1 then sleeps until it becomes the leader again; when the next event arrives, Thread 2 deactivates the handle and processes it in the same way.]
Implementation Steps
1. Choose the handle and handle set mechanisms
2. Implement a protocol for temporarily (de)activating handles in a handle set
3. Implement a thread pool
4. Implement a protocol to allow threads to initially join (and later rejoin) the thread pool
5. Implement the follower promotion protocol
6. Implement the event handlers
2. Handle deactivating/reactivating
class Reactor {
public:
  // Additions required by the L/F pattern:
  // temporarily deactivate the <HANDLE> in the handle set, and reactivate it later
  void deactivate_handle(HANDLE h, Event_Type et);
  void reactivate_handle(HANDLE h, Event_Type et);
  // ...
};
Deactivating the handle in the handle set avoids race conditions that could otherwise occur between the time a new leader thread is selected and the time the event is processed.
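A minimal sketch of how such (de)activation could be realized over a select()-based handle set; the HandleSet class and its fd_set member are illustrative assumptions, not the POSA2 Reactor implementation:

#include <sys/select.h>

// Hypothetical select()-based handle set: a handle is "active" exactly when it
// is a member of the fd_set that the leader thread passes to select().
class HandleSet {
public:
  HandleSet() { FD_ZERO(&active_); }
  void deactivate_handle(int h) { FD_CLR(h, &active_); }  // stop waiting on h
  void reactivate_handle(int h) { FD_SET(h, &active_); }  // wait on h again
  fd_set active_handles() const { return active_; }       // copy handed to select()
private:
  fd_set active_;
};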
3. Implement the Thread Pool
class LF_Thread_Pool {
public:
  LF_Thread_Pool(Reactor *r)
    : reactor_(r), followers_condition_(mutex_)
  { leader_thread_ = NO_CURRENT_LEADER; }

  // Threads call <join> to wait on a handle set and demultiplex
  // events to their event handlers
  void join(Time_Value *timeout = 0);
  // Promote a follower thread to become the new leader
  void promote_new_leader();

  void deactivate_handle(HANDLE h, Event_Type et);
  void reactivate_handle(HANDLE h, Event_Type et);
private:
  Reactor *reactor_;
  Thread_Id leader_thread_;
  Thread_Condition followers_condition_;
  Thread_Mutex mutex_;
};
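The slide does not show the Thread_Id type or the NO_CURRENT_LEADER sentinel; a minimal assumption that makes the class compile could be:

// Hypothetical definitions, assumed here rather than taken from POSA2:
typedef unsigned long Thread_Id;           // whatever Thread::self() returns
const Thread_Id NO_CURRENT_LEADER = 0;     // sentinel: no thread is currently the leader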
4. Implement a protocol to join the thread pool
void LF_Thread_Pool::join(Time_Value *timeout)
{
  Guard<Thread_Mutex> guard(mutex_);  // POSA2: Scoped Locking idiom
  for (;;)
  {
    while (leader_thread_ != NO_CURRENT_LEADER)
      followers_condition_.wait(timeout);   // = follower role
    // Assume the leader role
    leader_thread_ = Thread::self();
    // Leave the monitor temporarily; after becoming the leader,
    // the thread uses the reactor to wait for events
    guard.release();
    reactor_->handle_events();
    guard.acquire();  // processing finished => re-enter the monitor
  }
}
Decorator – GoF Structure
[GoF class diagram: Component declares Operation(). ConcreteComponent implements it directly. Decorator holds a reference to a Component (component, multiplicity 1) and its Operation() forwards to component->Operation(). ConcreteDecoratorA (with addedState) and ConcreteDecoratorB override Operation() to call Decorator::Operation() and then AddedBehavior().]
LF_Event_Handler Decorator
[Class diagram: LF_Event_Handler decorates the Event_Handler interface. It holds a concrete_event_handler_ reference to the wrapped handler (Concrete Event_HandlerA or Concrete Event_HandlerB) and a thread_pool_ reference to the LF_Thread_Pool, and its handle_event() adds behavior both before and after the ConcreteEvent_HandlerX::handle_event() operation.]
LF_Event_Handler Class
class LF_Event_Handler : public Event_Handler {
public:
  LF_Event_Handler(Event_Handler *eh, LF_Thread_Pool *tp)
    : concrete_event_handler_(eh), thread_pool_(tp) { }

  virtual void handle_event(HANDLE h, Event_Type et)
  {
    // decorator operations before the application-specific processing
    thread_pool_->deactivate_handle(h, et);
    thread_pool_->promote_new_leader();
    // Dispatch application-specific event processing code
    concrete_event_handler_->handle_event(h, et);
    // decorator operation after the processing
    thread_pool_->reactivate_handle(h, et);
  }
private:
  Event_Handler *concrete_event_handler_;
  LF_Thread_Pool *thread_pool_;
};
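The decorator is applied when the application-level handler is registered, so the promotion protocol stays outside the concrete handler. A hedged usage sketch; the register_handler() operation, the READ_EVENT mask, and the HTTP_Handler class are assumptions based on the Reactor pattern and the JAWS example below, not code from these slides:

// Wrap the application handler in the L/F decorator before registering it,
// so every dispatched event runs deactivate / promote / process / reactivate.
Event_Handler *http_handler = new HTTP_Handler;
LF_Event_Handler *lf_handler = new LF_Event_Handler(http_handler, &thread_pool);
Reactor::instance()->register_handler(lf_handler, READ_EVENT);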
Promotion of a new leader
void LF_Thread_Pool::promote_new_leader()
{
  Guard<Thread_Mutex> guard(mutex_);  // POSA2: Scoped Locking idiom
  if (leader_thread_ != Thread::self())
    throw std::logic_error("only the leader can promote a new leader");  // requires <stdexcept>
  // indicate that we are no longer the leader
  leader_thread_ = NO_CURRENT_LEADER;
  // notify a thread blocked in <join> to promote the next follower to leader
  followers_condition_.notify();
}
Main Program
const int MAX_THREADS = 20;

void *worker_threads(void *);   // Forward declaration

int main() {
  LF_Thread_Pool thread_pool(Reactor::instance());
  // code to set up a passive acceptor omitted
  // ...
  for (int i = 0; i < MAX_THREADS - 1; i++)
    Thread_Manager::instance()->spawn(worker_threads, &thread_pool);
  // The main thread participates in the thread pool as well
  thread_pool.join();
}

void *worker_threads(void *arg) {
  LF_Thread_Pool *thread_pool = static_cast<LF_Thread_Pool *>(arg);
  thread_pool->join();
  return 0;
}
Example: JAWS Web Server
[Class diagram: the JAWS Thread Pool (synchronizer; join(), promote_new_leader()) demultiplexes events through a Reactor (handle_events(), deactivate_handle(), reactivate_handle(), select()) that uses Handles; the Event Handler interface (handle_event(), get_handle()) is implemented by the HTTP Acceptor and the HTTP Handler.]
This pattern eliminates the need for, and the overhead of, a separate Reactor thread and synchronized request queue as used in the Half-Sync/Half-Async pattern.
Applying the Leader/Followers Pattern in JAWS
Two options:
1. If the platform supports the accept() optimization, the Leader/Followers pattern can be implemented by the OS (see the sketch below)
2. Otherwise, the pattern can be implemented as a reusable framework
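A minimal POSIX sketch of option 1: every worker blocks in accept() on one shared listening socket and the kernel wakes exactly one of them per connection. This is an illustration of the idea, not JAWS code; the port number and thread count are arbitrary:

#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
  // One listening socket shared by every thread in the pool.
  int listener = socket(AF_INET, SOCK_STREAM, 0);
  sockaddr_in addr{};
  addr.sin_family = AF_INET;
  addr.sin_addr.s_addr = htonl(INADDR_ANY);
  addr.sin_port = htons(8080);
  bind(listener, reinterpret_cast<sockaddr *>(&addr), sizeof addr);
  listen(listener, 128);

  // Each thread blocks in accept(); the kernel hands every new connection to
  // exactly one of them, which processes it and then loops back into accept():
  // the leader/followers turn-taking, implemented by the OS.
  std::vector<std::thread> pool;
  for (int i = 0; i < 4; ++i)
    pool.emplace_back([listener, i] {
      for (;;) {
        int conn = accept(listener, nullptr, nullptr);
        if (conn < 0) break;
        std::printf("thread %d handling connection %d\n", i, conn);
        close(conn);   // "process" the request, then rejoin the pool
      }
    });
  for (auto &t : pool) t.join();
}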
Although the Leader/Followers thread-pool design is highly efficient, the Half-Sync/Half-Async design may be more appropriate for certain types of servers, e.g.:
• The Half-Sync/Half-Async design can reorder and prioritize client requests more flexibly, because it has a synchronized request queue implemented using the Monitor Object pattern
• It may be more scalable, because it queues requests in Web server virtual memory, rather than in the OS kernel
Leader/Followers Benefits
• Performance enhancements
  – It enhances CPU cache affinity and eliminates the need for dynamic memory allocation and data-buffer sharing between threads
  – It minimizes locking overhead by not exchanging data between threads, thereby reducing thread synchronization
  – It can minimize priority inversion because no extra queuing is introduced in the server
  – It doesn't require a context switch to handle each event, reducing dispatching latency
• Programming simplicity
  – The Leader/Followers pattern simplifies the programming of concurrency models where multiple threads can receive requests, process responses, and demultiplex connections using a shared handle set
Leader/Followers Liabilities
• Implementation complexity
  – The advanced variants of the Leader/Followers pattern are hard to implement
• Lack of flexibility
  – In the Leader/Followers model it is hard to discard or reorder events because there is no explicit queue
• Network I/O bottlenecks
  – The Leader/Followers pattern serializes processing by allowing only a single thread at a time to wait on the handle set, which could become a bottleneck because only one thread at a time can demultiplex I/O events
Known Uses
• ACE Thread Pool Reactor framework
• CORBA ORBs and Web servers
• Transaction monitors
See Also
• Alternatives to Leader/Followers:
  – Use Reactor when each event requires only a short amount of time to process
    • Reactor is also a part of Leader/Followers
  – Use Proactor when the OS supports asynchronous I/O effectively
  – Use Half-Sync/Half-Async or Active Object
    • when there are additional synchronization or ordering constraints
    • when event sources cannot be waited for by a single event demultiplexer
Relation to other POSA2 Patterns