Parallel Fluent Processing
Wang Junhong
SVU/Academic Computing, Computer Centre
1. What is parallel processing of Fluent?
− Parallel processing of Fluent means running the Fluent solver on two or more CPUs
simultaneously to compute a computational fluid dynamics (CFD) job.
2. Why use parallel processing of Fluent?
− Reduce computing time: a job is split into two or more small partitions, so each takes
less time to complete, cutting down the time to solution.
− Make large-scale jobs doable: a job that is impossible to process on a single CPU due to
the restrictions of hardware (i.e., RAM, disk space) and time can become doable after
being segmented into many small partitions, which can then be handled by many CPUs.
3. What are the computing resources, and what is the parallel performance?
− To date, there are 3 different parallel computing resources and 5 batch queues in SVU
for users to run parallel processing of Fluent:

  Compute Server              Batch Queue  Max. # of CPUs  Memory Limit
  Compaq (cheetah, cheetah2)  cpq_3p       3               2 GB
                              cpq_8p       8               4 GB
                              cpq_8p_8gb   8               8 GB
  Linux Cluster (atlas)       linux_4p     4               4 GB
  Itanium2 Linux (cougar1~5)  ia64_4p      4               4 GB

− Benchmark performance for parallel processing of Fluent on the above servers and batch
queues is illustrated below:

  [Chart: Computing Time (min) for the benchmark job at 1, 3 and 8 CPUs on the Compaq
  servers (cheetah, cheetah2), and at 1 and 4 CPUs on the Linux Cluster (atlas) and on
  Itanium2 (cougar1-5); the reported times range from 857.0 minutes down to 34.9 minutes.]

o The computing time in the above chart is the elapsed time (wall-clock time) needed for
the job to complete in the individual queue.
o The benchmark performance clearly indicates that you can cut down the computing time
tremendously by running Fluent in parallel.
o However, the real speedup of your simulation may differ from what is listed here; it
depends heavily on the system load, status and other factors during the specific period
your jobs are being processed.
o The test case used for the benchmarking is a mixing problem with the k-epsilon
turbulence model, heat transfer and about 800,000 mesh cells in the computational
domain; it takes about 1 GB of memory.
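The improvement shown in the chart can be expressed with the standard definitions of speedup and parallel efficiency (a small illustrative sketch; the timings used below are hypothetical, not the benchmark values):

```python
def speedup(t_serial, t_parallel):
    """Speedup S = T(1 CPU) / T(N CPUs)."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_cpus):
    """Parallel efficiency E = S / N (1.0 means ideal scaling)."""
    return speedup(t_serial, t_parallel) / n_cpus

# Hypothetical example: a job taking 800 min serially finishes in 110 min on 8 CPUs.
s = speedup(800.0, 110.0)        # about 7.3x faster
e = efficiency(800.0, 110.0, 8)  # about 0.91, i.e. 91% efficiency
```

Efficiency below 1.0 reflects the partitioning and communication overhead, which is one reason the measured speedup is usually somewhat less than the CPU count.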
4. How and where to submit/run parallel Fluent jobs?
− Fluent is a well-parallelised CFD solver. Users do not need parallel programming
knowledge and do not have to write parallel code in order to run a parallel Fluent
simulation. Fluent automatically partitions the job for users, using defaults that should
be close to optimal.
− In SVU, parallel Fluent jobs have to be submitted to batch compute queues and run in
batch mode. All jobs are managed by the LSF manager.
− The submission commands for the 5 parallel queues are:

  Queue     Sample Command (for a 3d case)
  cpq_8p    bsub -q cpq_8p -o job1.out "mpiclean; fluent 3d -t8 -pvmpi -g -i job1.script; mpiclean"
  linux_4p  bsub -q linux_4p -o job2.out -n 4 fluent 3d -t0 -pnet -g -i job2.script -lsf
  ia64_4p   bsub -q ia64_4p -o job3.out "fluent 3d -t4 -psmpi -g -i job3.script"
  * note: the commands for cpq_3p and cpq_8p_8gb are similar.

− These 3 sample commands submit a parallel Fluent job to run on 8, 4 and 4 CPUs in
queues cpq_8p, linux_4p and ia64_4p, respectively.
− You can submit a batch Fluent job from any host in SVU, except to linux_4p, which
requires logging on to atlas first.
− Please visit the SVU pages http://www.nus.edu.sg/comcen/svu/techinfo/fluent_cpqpll.html
and http://www.nus.edu.sg/comcen/svu/techinfo/faq_cfd.html for more information.
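The `-i jobN.script` argument in each sample command points to a Fluent journal file that drives the solver non-interactively. A minimal sketch follows; the file names, iteration count and autosave frequency are illustrative, and the exact text-interface commands may vary between Fluent versions:

```
; Fluent journal file (e.g., job1.script) -- illustrative example
/file/read-case-data job1.cas        ; load the case and data files
/file/auto-save/data-frequency 100   ; back up the solution every 100 iterations
/solve/iterate 500                   ; run 500 iterations
/file/write-case-data job1-final.cas ; save the final solution
exit
yes
```

The `-g` flag in the bsub commands suppresses the Fluent GUI, so everything the solver needs must come from a journal file like this.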
5. Tips
− Check the queue status (i.e., how many jobs are pending) before submitting a job to a
queue. To minimize the waiting time, don't stick to one queue if many jobs are pending in
it, even if that queue is faster than the others.
− Parallel processing of Fluent is not recommended for the initial testing of a problem
setup or for small jobs that run for only a few minutes.
− Enable checkpointing or set auto backup of the intermediate solution to minimize the
impact in case the server goes down for some unforeseen reason.
− Watch SVU notices closely for any changes/updates to the systems.
− Don't hesitate to email [email protected] to get help and support from us.
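The queue-status check suggested in the first tip can be done with standard LSF commands (a sketch; the queue names are those listed in section 3):

```
bqueues cpq_3p cpq_8p cpq_8p_8gb linux_4p ia64_4p   # per-queue PEND/RUN job counts
bjobs -u all -q cpq_8p                              # all jobs in one queue
bjobs                                               # status of your own jobs
```

A queue showing many jobs in the PEND column is a candidate to avoid, per the tip above.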