
Chapter 7
Multicores, Multiprocessors, and Clusters
7.1 Introduction

Computer architects have long sought the “El Dorado” of computer design: to create powerful computers simply by connecting many existing smaller ones.

7.1 Introduction

- Goal: connect multiple computers to get higher performance
  - Multiprocessors
  - Scalability, availability, power efficiency
- Multiprocessor software must be designed to work with a variable number of processors; it must be able to scale
- Some designs support operation in the presence of broken hardware, so multiprocessors can also improve availability

Introduction

- Job-level (process-level) parallelism
  - High throughput for independent jobs
- Parallel processing program
  - Single program run on multiple processors
- Multicore microprocessors
  - Chips with multiple processors (cores)
  - Number of cores expected to double every two years
- A sequential program uses only one of those cores, so if you care about performance, parallel is the way to go!
Hardware and Software

- Hardware
  - Serial: e.g., Pentium 4
  - Parallel: e.g., quad-core Xeon e5345
- Software
  - Sequential: e.g., matrix multiplication
  - Concurrent: e.g., operating system
- Sequential/concurrent software can run on serial/parallel hardware
  - Challenge: making effective use of parallel hardware
What We’ve Already Covered

- §2.11: Parallelism and Instructions
  - Synchronization
- §3.6: Parallelism and Computer Arithmetic
  - Associativity
- §4.10: Parallelism and Advanced Instruction-Level Parallelism
- §5.8: Parallelism and Memory Hierarchies
  - Cache Coherence
- §6.9: Parallelism and I/O
  - Redundant Arrays of Inexpensive Disks
7.2 The Difficulty of Creating Parallel Processing Programs

- Hardware is not the problem
- Problem: too few important application programs have been rewritten to complete tasks sooner on multiprocessors
7.2 The Difficulty of Creating Parallel Processing Programs

- So, why has this been so? Why have parallel processing programs been so much harder to develop than sequential programs?
- Reason: you must get better performance and efficiency out of the parallel program for the effort to be worthwhile
- It is difficult to write software that uses multiple processors to complete one task faster
- The problem gets worse as the number of processors increases


Parallel Programming (§7.2 The Difficulty of Creating Parallel Processing Programs)

- Where have we seen instruction-level parallelism before in this course?
  - Out-of-order execution! (Chapter 4)

Parallel Programming (§7.2)

- Why is it difficult to write parallel processing programs that are fast, especially as the number of processors increases?
  - The task must be partitioned as equally as possible
  - Communication time
  - Scheduling
  - Synchronization
  - Overhead of communication between the parties


Parallel Programming (§7.2)

- Parallel software is the problem
- Need to get significant performance improvement
  - Otherwise, just use a faster uniprocessor, since it’s easier!
- Difficulties
  - Partitioning
  - Coordination
  - Communications overhead
- Amdahl’s Law (yep, it’s back!)
Amdahl’s Law

- Suppose you want to achieve a speed-up of 90 times with 100 processors. What percentage of the original computation can be sequential?
- Use the formula for calculating speed-up from Chapter 1.
Amdahl’s Law

- The sequential part can limit speedup
- Example: 100 processors, 90× speedup?
  - Tnew = Tparallelizable/100 + Tsequential

    Speedup = 1 / ((1 − Fparallelizable) + Fparallelizable/100) = 90

  - Solving: Fparallelizable = 0.999
- Need the sequential part to be 0.1% of the original time
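As a quick check of this algebra, here is a minimal C sketch (not from the slides) that solves Amdahl’s Law for the parallel fraction needed to reach a target speedup; the 100-processor, 90× case reproduces Fparallelizable ≈ 0.999.

    #include <stdio.h>

    /* Amdahl's Law: Speedup = 1 / ((1 - F) + F/P).
       Solving for the parallel fraction F that yields speedup S on P processors:
       F = (1 - 1/S) / (1 - 1/P). */
    int main(void) {
        double P = 100.0, S = 90.0;
        double F = (1.0 - 1.0 / S) / (1.0 - 1.0 / P);
        printf("parallel fraction needed: %.4f\n", F);                /* ~0.999 */
        printf("sequential part allowed:  %.2f%%\n", (1.0 - F) * 100.0);
        return 0;
    }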
Scaling Example

- Suppose you want to perform two sums: one is a sum of 10 scalar variables, and one is a matrix sum of a pair of two-dimensional arrays with dimensions 10 × 10.
- What speed-up do you get with 10 vs. 100 processors?
- Next, calculate the speed-ups assuming the matrices grow to 100 × 100.
Scaling Example

- If we assume performance is a function of the time for an addition, t, then there are 10 additions that do not benefit from parallel processors and 100 that do.
- Time on a single processor is 110t; the execution time for 10 processors is worked out on the next slide.
Scaling Example

- Workload: sum of 10 scalars, plus a 10 × 10 matrix sum
- Single processor: Time = (10 + 100) × tadd
- 10 processors
  - Time = 10 × tadd + 100/10 × tadd = 20 × tadd
  - Speedup = 110/20 = 5.5 (55% of potential)
- 100 processors
  - Time = 10 × tadd + 100/100 × tadd = 11 × tadd
  - Speedup = 110/11 = 10 (10% of potential)
- Going from 10 to 100 processors, speedup improves only from 5.5 to 10
- Assumes the load can be balanced across processors
Scaling Example (cont)

- What if the matrix size is 100 × 100?
- Single processor: Time = (10 + 10000) × tadd
- 10 processors
  - Time = 10 × tadd + 10000/10 × tadd = 1010 × tadd
  - Speedup = 10010/1010 = 9.9 (99% of potential)
- 100 processors
  - Time = 10 × tadd + 10000/100 × tadd = 110 × tadd
  - Speedup = 10010/110 = 91 (91% of potential)
- Assuming the load is balanced
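The arithmetic on the last two slides can be reproduced with a small C sketch (illustrative, not from the slides); time is measured in units of one addition, tadd.

    #include <stdio.h>

    /* Speedup for the workload above: 10 scalar additions (not parallelized)
       plus an n x n matrix sum spread over p processors, in units of tadd. */
    static double speedup(int n, int p) {
        double serial = 10.0, parallel = (double)n * n;
        return (serial + parallel) / (serial + parallel / p);
    }

    int main(void) {
        printf("10x10,    10 procs: %5.1f\n", speedup(10, 10));    /*  5.5 */
        printf("10x10,   100 procs: %5.1f\n", speedup(10, 100));   /* 10.0 */
        printf("100x100,  10 procs: %5.1f\n", speedup(100, 10));   /*  9.9 */
        printf("100x100, 100 procs: %5.1f\n", speedup(100, 100));  /* 91.0 */
        return 0;
    }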
Scaling Example (cont)

- The examples show that
  - getting good speed-up while keeping the problem size fixed is harder than
  - getting good speed-up by increasing the size of the problem (p. 637)
Strong vs Weak Scaling

- Strong scaling: problem size fixed
  - As in the example
- Weak scaling: problem size proportional to the number of processors
- Assume the size of the problem, M, is the working set in main memory, and we have P processors
- Memory per processor
  - Strong scaling: M/P
  - Weak scaling: M
Strong vs Weak Scaling

- Strong scaling: problem size fixed
  - As in the example
- Weak scaling: problem size proportional to the number of processors
  - 10 processors, 10 × 10 matrix
    - Time = 20 × tadd
  - 100 processors, 32 × 32 matrix
    - Time = 10 × tadd + 1000/100 × tadd = 20 × tadd
  - Constant performance in this example

7.3 Shared Memory Multiprocessors

- Since it is so difficult to rewrite programs for parallel hardware, what can computer designers do to simplify the task?
- One solution: provide a single physical address space that all processors can share
  - Programs don’t care where they run, merely that they may be executed in parallel
  - Usually the case for multicore chips
  - Hardware provides cache coherence (§5.8)

7.3 Shared Memory Multiprocessors

- SMP: shared memory multiprocessor
  - Hardware provides a single physical address space for all processors
  - Synchronize shared variables using locks
  - Memory access time: UMA (uniform) vs. NUMA (nonuniform)
- [Figure: classic organization of a shared memory multiprocessor]

7.3 Shared Memory Multiprocessors

- When sharing data, the processors will have to coordinate
  - Why? One processor could start working on data before another is finished with it
- Coordination = synchronization
  - Use locks (one approach)
Example: Sum Reduction

- Sum 100,000 numbers on a 100-processor UMA machine
  - Each processor has an ID: 0 ≤ Pn ≤ 99
  - Partition: 1000 numbers per processor
  - Initial summation on each processor:

    sum[Pn] = 0;
    for (i = 1000*Pn; i < 1000*(Pn+1); i = i + 1)
      sum[Pn] = sum[Pn] + A[i];

- Now need to add these partial sums
  - Reduction: divide and conquer
  - Half the processors add pairs, then a quarter, …
  - Need to synchronize between reduction steps
Example: Sum Reduction

    half = 100;
    repeat
      synch();
      if (half%2 != 0 && Pn == 0)
        sum[0] = sum[0] + sum[half-1];
        /* Conditional sum needed when half is odd;
           Processor0 gets missing element */
      half = half/2; /* dividing line on who sums */
      if (Pn < half) sum[Pn] = sum[Pn] + sum[Pn+half];
    until (half == 1);
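For readers who want to run the idea, below is a minimal pthreads sketch of the same two phases (per-thread partial sums, then a tree reduction). It is not the slide’s pseudocode: it assumes NPROC is a power of two, uses far fewer threads than 100, and uses a pthread barrier in place of synch(). Compile with -pthread.

    #include <pthread.h>
    #include <stdio.h>

    #define NPROC 4            /* assumed thread count (power of two) */
    #define N     100000

    static double A[N];
    static double sum[NPROC];
    static pthread_barrier_t barrier;

    static void *worker(void *arg) {
        long pn = (long)arg;
        long chunk = N / NPROC;

        /* Phase 1: each thread sums its own slice of A. */
        sum[pn] = 0.0;
        for (long i = pn * chunk; i < (pn + 1) * chunk; i++)
            sum[pn] += A[i];

        /* Phase 2: tree reduction, halving the active threads each step. */
        for (long half = NPROC / 2; half >= 1; half /= 2) {
            pthread_barrier_wait(&barrier);      /* plays the role of synch() */
            if (pn < half)
                sum[pn] += sum[pn + half];
        }
        return NULL;
    }

    int main(void) {
        pthread_t t[NPROC];
        for (long i = 0; i < N; i++) A[i] = 1.0;   /* so the answer is N */
        pthread_barrier_init(&barrier, NULL, NPROC);

        for (long p = 0; p < NPROC; p++)
            pthread_create(&t[p], NULL, worker, (void *)p);
        for (long p = 0; p < NPROC; p++)
            pthread_join(&t[p], NULL);

        printf("total = %.0f\n", sum[0]);          /* expect 100000 */
        pthread_barrier_destroy(&barrier);
        return 0;
    }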
7.4 Clusters and Other Message-Passing Multiprocessors

- The alternative approach to sharing an address space is for the processors to each have their own private physical address space


Message Passing (§7.4 Clusters and Other Message-Passing Multiprocessors)

- Each processor has a private physical address space
- Hardware sends/receives messages between processors
Loosely Coupled Clusters

- Network of independent computers
  - Each has private memory and OS
  - Connected using the I/O system
    - E.g., Ethernet/switch, Internet
- Suitable for applications with independent tasks
  - Web servers, databases, simulations, …
- High availability, scalable, affordable
- Problems
  - Administration cost (prefer virtual machines)
  - Low interconnect bandwidth
    - c.f. processor/memory bandwidth on an SMP
Sum Reduction (Again)

- Sum 100,000 numbers on 100 processors
- First distribute 1000 numbers to each
  - Then do the partial sums:

    sum = 0;
    for (i = 0; i < 1000; i = i + 1)
      sum = sum + AN[i];

- Reduction
  - Half the processors send, the other half receive and add
  - Then a quarter send, a quarter receive and add, …
Sum Reduction (Again)

- Given send() and receive() operations:

    limit = 100; half = 100; /* 100 processors */
    repeat
      half = (half+1)/2;    /* send vs. receive dividing line */
      if (Pn >= half && Pn < limit)
        send(Pn - half, sum);
      if (Pn < (limit/2))
        sum = sum + receive();
      limit = half;         /* upper limit of senders */
    until (half == 1);      /* exit with final sum */

- Send/receive also provide synchronization
- Assumes send/receive take similar time to addition
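As a point of comparison, here is a hedged MPI sketch of the same reduction. Instead of hand-coding the send/receive tree, it calls MPI_Reduce, which performs an equivalent combining internally; the 1000-element array of ones is an illustrative assumption. Typically built with mpicc and launched with mpirun.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int pn, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &pn);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* Each process sums its own private slice (here: 1000 ones). */
        double AN[1000], partial = 0.0;
        for (int i = 0; i < 1000; i++) AN[i] = 1.0;
        for (int i = 0; i < 1000; i++) partial += AN[i];

        /* Combine the partial sums; the total lands on process 0. */
        double total = 0.0;
        MPI_Reduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (pn == 0)
            printf("total = %.0f (expected %d)\n", total, 1000 * nprocs);

        MPI_Finalize();
        return 0;
    }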
Grid Computing

- Separate computers interconnected by long-haul networks
  - E.g., Internet connections
  - Work units farmed out, results sent back
- Can make use of idle time on PCs
  - E.g., SETI@home, World Community Grid

Multithreading (§7.5 Hardware Multithreading)

- Performing multiple threads of execution in parallel
  - Replicate registers, PC, etc.
  - Fast switching between threads
- Fine-grain multithreading
  - Switch threads after each cycle
  - Interleave instruction execution
  - If one thread stalls, others are executed
- Coarse-grain multithreading
  - Only switch on a long stall (e.g., L2-cache miss)
  - Simplifies hardware, but doesn’t hide short stalls (e.g., data hazards)
Simultaneous Multithreading

- In a multiple-issue dynamically scheduled processor
  - Schedule instructions from multiple threads
  - Instructions from independent threads execute when function units are available
  - Within threads, dependencies are handled by scheduling and register renaming
- Example: Intel Pentium-4 HT
  - Two threads: duplicated registers, shared function units and caches
Multithreading Example

[figure]
Future of Multithreading

- Will it survive? In what form?
- Power considerations ⇒ simplified microarchitectures
  - Simpler forms of multithreading
- Tolerating cache-miss latency
  - Thread switch may be most effective
- Multiple simple cores might share resources more effectively

Instruction and Data Streams (§7.6 SISD, MIMD, SIMD, SPMD, and Vector)

- An alternate classification:

                                Data Streams
                                Single                    Multiple
  Instruction      Single       SISD:                     SIMD:
  Streams                       Intel Pentium 4           SSE instructions of x86
                   Multiple     MISD:                     MIMD:
                                No examples today         Intel Xeon e5345

- SPMD: Single Program Multiple Data
  - A parallel program on a MIMD computer
  - Conditional code for different processors
SIMD

- Operate elementwise on vectors of data
  - E.g., MMX and SSE instructions in x86
    - Multiple data elements in 128-bit wide registers
- All processors execute the same instruction at the same time
  - Each with a different data address, etc.
- Simplifies synchronization
- Reduced instruction control hardware
- Works best for highly data-parallel applications
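To make this concrete, here is a minimal C sketch (not from the slides) using the x86 SSE intrinsics from <xmmintrin.h>; a single _mm_add_ps adds four packed single-precision floats held in 128-bit registers. Compile with -msse on GCC or Clang.

    #include <stdio.h>
    #include <xmmintrin.h>   /* SSE intrinsics */

    int main(void) {
        float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
        float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
        float c[4];

        __m128 va = _mm_loadu_ps(a);      /* load 4 floats into a 128-bit register */
        __m128 vb = _mm_loadu_ps(b);
        __m128 vc = _mm_add_ps(va, vb);   /* one instruction, four additions */
        _mm_storeu_ps(c, vc);

        printf("%g %g %g %g\n", c[0], c[1], c[2], c[3]);   /* 11 22 33 44 */
        return 0;
    }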
Vector Processors

- Highly pipelined function units
- Stream data from/to vector registers to the units
  - Data collected from memory into registers
  - Results stored from registers to memory
- Example: vector extension to MIPS
  - 32 × 64-element registers (64-bit elements)
  - Vector instructions
    - lv, sv: load/store vector
    - addv.d: add vectors of double
    - addvs.d: add scalar to each element of a vector of double
- Significantly reduces instruction-fetch bandwidth
Example: DAXPY (Y = a × X + Y)

- Conventional MIPS code:

          l.d    $f0,a($sp)       ;load scalar a
          addiu  r4,$s0,#512      ;upper bound of what to load
    loop: l.d    $f2,0($s0)       ;load x(i)
          mul.d  $f2,$f2,$f0      ;a × x(i)
          l.d    $f4,0($s1)       ;load y(i)
          add.d  $f4,$f4,$f2      ;a × x(i) + y(i)
          s.d    $f4,0($s1)       ;store into y(i)
          addiu  $s0,$s0,#8       ;increment index to x
          addiu  $s1,$s1,#8       ;increment index to y
          subu   $t0,r4,$s0       ;compute bound
          bne    $t0,$zero,loop   ;check if done

- Vector MIPS code:

          l.d      $f0,a($sp)     ;load scalar a
          lv       $v1,0($s0)     ;load vector x
          mulvs.d  $v2,$v1,$f0    ;vector-scalar multiply
          lv       $v3,0($s1)     ;load vector y
          addv.d   $v4,$v2,$v3    ;add y to product
          sv       $v4,0($s1)     ;store the result
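For reference, both code sequences implement the same computation; a C sketch of it (illustrative names, 64 elements to match the 64-element vector registers) is:

    /* DAXPY: y = a*x + y over 64 double-precision elements,
       the computation performed by both assembly versions above. */
    void daxpy(double a, const double x[64], double y[64]) {
        for (int i = 0; i < 64; i++)
            y[i] = a * x[i] + y[i];
    }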
Vector vs. Scalar

- Vector architectures and compilers
  - Simplify data-parallel programming
  - Explicit statement of the absence of loop-carried dependences
    - Reduced checking in hardware
  - Regular access patterns benefit from interleaved and burst memory
  - Avoid control hazards by avoiding loops
- More general than ad-hoc media extensions (such as MMX, SSE)
  - Better match with compiler technology

History of GPUs (§7.7 Introduction to Graphics Processing Units)

- Early video cards
  - Frame buffer memory with address generation for video output
- 3D graphics processing
  - Originally on high-end computers (e.g., SGI)
  - Moore’s Law ⇒ lower cost, higher density
  - 3D graphics cards for PCs and game consoles
- Graphics Processing Units
  - Processors oriented to 3D graphics tasks
  - Vertex/pixel processing, shading, texture mapping, rasterization
Graphics in the System

[figure]
GPU Architectures

- Processing is highly data-parallel
  - GPUs are highly multithreaded
  - Use thread switching to hide memory latency
    - Less reliance on multi-level caches
  - Graphics memory is wide and high-bandwidth
- Trend toward general-purpose GPUs
  - Heterogeneous CPU/GPU systems
  - CPU for sequential code, GPU for parallel code
- Programming languages/APIs
  - DirectX, OpenGL
  - C for Graphics (Cg), High Level Shader Language (HLSL)
  - Compute Unified Device Architecture (CUDA)
Example: NVIDIA Tesla

[Figure: streaming multiprocessor containing 8 × streaming processors]
Example: NVIDIA Tesla

- Streaming Processors
  - Single-precision FP and integer units
  - Each SP is fine-grained multithreaded
- Warp: group of 32 threads
  - Executed in parallel, SIMD style
    - 8 SPs × 4 clock cycles
  - Hardware contexts for 24 warps
    - Registers, PCs, …
Classifying GPUs

- Don’t fit nicely into the SIMD/MIMD model
  - Conditional execution in a thread allows an illusion of MIMD
    - But with performance degradation
    - Need to write general-purpose code with care

                                    Static: Discovered      Dynamic: Discovered
                                    at Compile Time         at Runtime
  Instruction-Level Parallelism     VLIW                    Superscalar
  Data-Level Parallelism            SIMD or Vector          Tesla Multiprocessor

Interconnection Networks (§7.8 Introduction to Multiprocessor Network Topologies)

- Network topologies
  - Arrangements of processors, switches, and links
- [Figures: Bus, Ring, 2D Mesh, N-cube (N = 3), Fully connected]
Multistage Networks

[figure]
Network Characteristics

- Performance
  - Latency per message (unloaded network)
  - Throughput
    - Link bandwidth
    - Total network bandwidth
    - Bisection bandwidth
  - Congestion delays (depending on traffic)
- Cost
- Power
- Routability in silicon


Parallel Benchmarks (§7.9 Multiprocessor Benchmarks)

- Linpack: matrix linear algebra
- SPECrate: parallel run of SPEC CPU programs
  - Job-level parallelism
- SPLASH: Stanford Parallel Applications for Shared Memory
  - Mix of kernels and applications, strong scaling
- NAS (NASA Advanced Supercomputing) suite
  - Computational fluid dynamics kernels
- PARSEC (Princeton Application Repository for Shared Memory Computers) suite
  - Multithreaded applications using Pthreads and OpenMP
Code or Applications?

- Traditional benchmarks
  - Fixed code and data sets
- Parallel programming is evolving
  - Should algorithms, programming languages, and tools be part of the system?
  - Compare systems, provided they implement a given application
    - E.g., Linpack, Berkeley Design Patterns
  - Would foster innovation in approaches to parallelism

Modeling Performance (§7.10 Roofline: A Simple Performance Model)

- Assume the performance metric of interest is achievable GFLOPs/sec
  - Measured using computational kernels from Berkeley Design Patterns
- Arithmetic intensity of a kernel
  - FLOPs per byte of memory accessed
- For a given computer, determine
  - Peak GFLOPS (from the data sheet)
  - Peak memory bytes/sec (using the Stream benchmark)
Roofline Diagram

Attainable GFLOPs/sec
  = Min ( Peak Memory BW × Arithmetic Intensity, Peak FP Performance )
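A minimal C sketch of this bound (the peak numbers are made-up illustrative values, not measurements of the systems discussed later):

    #include <stdio.h>

    /* Roofline bound: attainable performance is limited either by peak FP
       throughput or by memory bandwidth times arithmetic intensity.
       peak_gflops in GFLOP/s, peak_bw in GB/s, intensity in FLOPs per byte. */
    static double roofline(double peak_gflops, double peak_bw, double intensity) {
        double memory_bound = peak_bw * intensity;
        return memory_bound < peak_gflops ? memory_bound : peak_gflops;
    }

    int main(void) {
        double peak_gflops = 16.0, peak_bw = 8.0;    /* assumed machine */
        for (double ai = 0.125; ai <= 8.0; ai *= 2.0)
            printf("intensity %6.3f -> attainable %5.1f GFLOP/s\n",
                   ai, roofline(peak_gflops, peak_bw, ai));
        return 0;
    }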
Comparing Systems

- Example: Opteron X2 vs. Opteron X4
  - 2-core vs. 4-core, 2× FP performance/core, 2.2 GHz vs. 2.3 GHz
  - Same memory system
- To get higher performance on the X4 than the X2
  - Need high arithmetic intensity
  - Or the working set must fit in the X4’s 2 MB L3 cache
Optimizing Performance

- Optimize FP performance
  - Balance adds & multiplies
  - Improve superscalar ILP and use of SIMD instructions
- Optimize memory usage
  - Software prefetch
    - Avoid load stalls
  - Memory affinity
    - Avoid non-local data accesses
Optimizing Performance

- Choice of optimization depends on the arithmetic intensity of the code
- Arithmetic intensity is not always fixed
  - May scale with problem size
  - Caching reduces memory accesses
    - Increases arithmetic intensity
Four Example Systems (§7.11 Real Stuff: Benchmarking Four Multicores …)

- 2 × quad-core Intel Xeon e5345 (Clovertown)
- 2 × quad-core AMD Opteron X4 2356 (Barcelona)
Four Example Systems

- 2 × oct-core Sun UltraSPARC T2 5140 (Niagara 2)
- 2 × oct-core IBM Cell QS20
And Their Rooflines

- Kernels
  - SpMV (left)
  - LBMHD (right)
- Some optimizations change arithmetic intensity
- x86 systems have higher peak GFLOPs
  - But harder to achieve, given memory bandwidth
Performance on SpMV

- Sparse matrix/vector multiply
  - Irregular memory accesses, memory bound
- Arithmetic intensity
  - 0.166 before memory optimization, 0.25 after
- Xeon vs. Opteron
  - Similar peak FLOPS
  - Xeon limited by shared FSBs and chipset
- UltraSPARC/Cell vs. x86
  - 20–30 vs. 75 peak GFLOPs
  - More cores and memory bandwidth
Performance on LBMHD

- Fluid dynamics: structured grid over time steps
  - Each point: 75 FP reads/writes, 1300 FP ops
- Arithmetic intensity
  - 0.70 before optimization, 1.07 after
- Opteron vs. UltraSPARC
  - More powerful cores, not limited by memory bandwidth
- Xeon vs. others
  - Still suffers from memory bottlenecks
Achieving Performance

- Compare naïve vs. optimized code
  - If naïve code performs well, it’s easier to write high-performance code for the system

  System              Kernel   Naïve          Optimized    Naïve as % of
                               GFLOPs/sec     GFLOPs/sec   optimized
  Intel Xeon          SpMV     1.0            1.5          64%
                      LBMHD    4.6            5.6          82%
  AMD Opteron X4      SpMV     1.4            3.6          38%
                      LBMHD    7.1            14.1         50%
  Sun UltraSPARC T2   SpMV     3.5            4.1          86%
                      LBMHD    9.7            10.5         93%
  IBM Cell QS20       SpMV     not feasible   6.4          0%
                      LBMHD    not feasible   16.7         0%

  (Naïve code was not feasible on the Cell.)

Fallacies (§7.12 Fallacies and Pitfalls)

- Amdahl’s Law doesn’t apply to parallel computers
  - Since we can achieve linear speedup
  - But only on applications with weak scaling
- Peak performance tracks observed performance
  - Marketers like this approach!
  - But compare the Xeon with the others in the example
  - Need to be aware of bottlenecks
Pitfalls

- Not developing the software to take account of a multiprocessor architecture
  - Example: using a single lock for a shared composite resource
    - Serializes accesses, even if they could be done in parallel
    - Use finer-granularity locking (see the sketch below)
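A minimal pthreads sketch of the finer-granularity idea, using a hypothetical shared counter table: one lock per bucket instead of a single lock for the whole structure, so updates to different buckets can proceed in parallel.

    #include <pthread.h>

    #define NBUCKETS 64

    static long            count[NBUCKETS];
    static pthread_mutex_t lock[NBUCKETS];   /* one lock per bucket, not one global lock */

    void counters_init(void) {
        for (int i = 0; i < NBUCKETS; i++)
            pthread_mutex_init(&lock[i], NULL);
    }

    void counter_add(int key, long delta) {
        int b = key % NBUCKETS;              /* only this bucket is serialized */
        pthread_mutex_lock(&lock[b]);
        count[b] += delta;
        pthread_mutex_unlock(&lock[b]);
    }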


Concluding Remarks (§7.13)

- Goal: higher performance by using multiple processors
- Difficulties
  - Developing parallel software
  - Devising appropriate architectures
- Many reasons for optimism
  - Changing software and application environment
  - Chip-level multiprocessors with lower latency, higher bandwidth interconnect
- An ongoing challenge for computer architects!