MASSIVE Overview and Architecture

HPC GPU Workshop 13th July 2011
Paul McIntosh
[email protected]
Before we start…
MASSIVE is …
• Data processing, in particular image processing and analysis.
• Interactive visualisation, where users pre-book nodes to use scientific visualisation packages.
• Non-interactive visualisation, allowing users to render large datasets or simulations.
• Large-scale visualisation problems requiring multiple nodes and GPUs for data processing and rendering.
• Modelling and simulation, in particular problems suited to GPU parallelisation.
Facilities
• MASSIVE1
– Real-time Computed Tomography at the Imaging Beamline at the Australian Synchrotron
• MASSIVE2
– General facility for image processing, data processing, simulation and analysis, and GPU computing
– Specialised fat nodes for visualisation
[Diagram: expertise, software and hardware underpinning the workflow: capture → pre-processing → reconstruction → analysis & simulation → interactive visualisation → presentation & publication]
User Activities
• Parallelising / GPUing code
– Bioengineering (Synchrotron data)
– Underworld (Geoscience – simulated data)
• Visualisation expertise
– Materials science (Electron microscopy data)
– Geoscience (Synchrotron data)
– Bioengineering & fluid dynamics (Simulated and CT data)
– Medicine (Synchrotron and OPT data)
• CT Reconstruction
– CSIRO X-ray Science and NeAT CT
– Synchrotron CT reconstruction code
MASSIVE High Level View
• People
• Timeline
• Hardware
• Configuration
• Software
MASSIVE People
• Wojtek James Goscinski (MASSIVE Coordinator, Monash)
• Craig West (MOSP Systems Manager, VPAC)
• Paul McIntosh (MOSP Project Leader, VPAC)
• Enzo Reyes (HPC Systems Analyst, VPAC)
• Matt Wallis (Systems Administrator, VPAC)
• Dylan Graham (Systems Administrator, VPAC)
• Sam Morrison (Systems Administrator/Programmer, VPAC)…
MASSIVE Timeline
2010: A prototyping stage (hardware/software) and tender
2011/01: MASSIVE stage 1 systems delivered
2011/03: MASSIVE early adopters
2011/05: MASSIVE2 goes into production*
2011/??: MASSIVE1 and final configuration
2012: MASSIVE2 stage 2 – doubling M2 processing capability
MASSIVE
[Map showing the M1, M2 and US sites. Source: Google Maps]
MASSIVE
[Photo of the MASSIVE facility. Photo: Steve Morton]
MASSIVE1
• 504 CPU cores
• 84 GPU co-processors
• 58 TB of performance file system space
• Nodes are connected via a high-performance network

42 nodes, each with:
• 2 x 6-core X5650 CPUs
• 48 GB of RAM
• 2 x NVIDIA M2070 GPUs
58 TB GPFS file system capable of 2 GB/s+ sustained write
4X QDR Mellanox IS5200 InfiniBand switch
MASSIVE2 stage 1
• 504 CPU cores
• 84 GPU co-processors
• 10 specialised visualisation nodes with:
– High memory
– Specialised graphics GPUs
• 250 TB of performance file system space
• Nodes are connected via a high-performance network

42 nodes, each with:
• 2 x 6-core X5650 CPUs
• 48 GB of RAM
• 2 x NVIDIA M2070 GPUs
Vis nodes upgraded to:
• 192 GB of RAM
• 2 x NVIDIA M2070Q GPUs
250 TB GPFS file system capable of 3 GB/s+ sustained write
4X QDR Mellanox IS5200 InfiniBand switch
M1 and M2 Combined
• Combined:
– 1008 compute cores
– 168 GPU co-processors
– 300 TB of performance file system space
– High-performance network
– Facilities are connected by multiple fast fibre links
– 10 nodes (120 cores) have a specialty visualisation capability and 192 GB of RAM
Technical Attributes
1. GPU co-processors for data processing, computation and visualisation
2. Fast data processing (for a facility of its scale)
3. Frontend desktop capability (remote visualisation)
4. Large-ish memory nodes
/home/researcher/
|-- myProject001 -> /home/projects/myProject001
|-- myProject001_scratch -> /scratch/myProject001
|-- Mx -> <left|right hemisphere>
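The layout above is built from ordinary symbolic links. A minimal sketch, recreating it in a temporary directory with a hypothetical project ID (`myProject001` is an example; the real mount points and project names on MASSIVE differ per user):

```shell
# Illustration only: mimic the MASSIVE home-directory layout shown above.
base=$(mktemp -d)
mkdir -p "$base/home/projects/myProject001" \
         "$base/scratch/myProject001" \
         "$base/home/researcher"

# The project and scratch areas surface in the home directory as symlinks:
ln -s "$base/home/projects/myProject001" "$base/home/researcher/myProject001"
ln -s "$base/scratch/myProject001" "$base/home/researcher/myProject001_scratch"

# 'ls -l' shows where each link resolves:
ls -l "$base/home/researcher"
```

Keeping large datasets under the project and scratch areas reached through these links, rather than in the home directory itself, is the usual pattern on GPFS-backed clusters.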
Software
• Torque/PBS
– compute
– vis
• MOAB
• Linux CentOS 5.6
• Windows HPC R2 (M1)
• Ever-growing list…
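The Torque/PBS queues above (compute and vis) are used via job scripts. A minimal sketch of a submission script: the core count matches the node specs earlier in the deck, but the queue names, resource syntax and defaults here are assumptions to check against the MASSIVE User Guide:

```shell
#!/bin/bash
# Hypothetical Torque/PBS job script; directive values are illustrative only.
#PBS -q compute              # or 'vis' for the visualisation queue
#PBS -l nodes=1:ppn=12       # one node: 2 x 6-core X5650 CPUs
#PBS -l walltime=01:00:00    # one hour wall-clock limit
#PBS -N gpu_job

cd "$PBS_O_WORKDIR"          # Torque starts jobs in $HOME by default
./my_gpu_program             # placeholder for the user's own executable
```

Submitted with `qsub script.pbs`; MOAB then schedules the job onto a free node.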
Interactive Remote Desktop Environment
• Characterisation and visualisation tools:
– ParaView
– Amira
– Drishti
– ImageJ
– ITK-SNAP
– etc.
Access
• Access
– Monash and CSIRO have a partner share
– Victorian researchers through VPAC
– Australian researchers through NCI Merit Allocation
• www.massive.org.au
http://www.vpac.org/services/training/tutorials
Help Desk
For any issues with using MASSIVE or the documentation on this site, please contact the Help Desk.
Email: [email protected]
Phone: 03 99024845
Consulting
For general enquiries and enquiries about value-added services, such as help with porting code to GPUs or using MASSIVE for imaging and visualisation, use the following:
Email: [email protected]
Phone: 03 99024845
Other
For other enquiries please contact the MASSIVE Coordinator:
Email: [email protected]
Computer Lab Session
• There will be more!
• Main Monash Campus
• Parking in the blue zone
• Need to think about lunch
• VPAC Training Material in Intro to HPC
• MASSIVE User Guide
• Chance to try what you’ve learned in this workshop :)