Running MPI Applications with the Windows Azure HPC Scheduler

Hands-On Lab
Running MPI Applications with the Windows
Azure HPC Scheduler
Lab version:
2.0.0
Last updated: 7/14/2017
CONTENTS
OVERVIEW
EXERCISE 1: CREATING, DEPLOYING, AND RUNNING THE PI CALCULATION MPI APPLICATION IN WINDOWS AZURE
Task 1 - Creating the Pi Calculator MPI Application
Task 2 - Deploying the Windows Azure HPC Scheduler and the MPI Application to Windows Azure
Task 3 - Configuring Firewall Rules for MPI Communication
Task 4 - Submitting an MPI Job from a Command Prompt
Task 5 - Submitting an MPI Job from the Windows Azure HPC Scheduler Web Portal
Verification
EXERCISE 2: DEPLOYING AND RUNNING THE TACHYON MPI APPLICATION TO AN EXISTING WINDOWS AZURE HPC SCHEDULER
Task 1 - Inspecting the Tachyon MPI Application Folder Contents
Task 2 - Configure the AzureBlobCopy Utility
Task 3 - Deploy the Tachyon Application to Windows Azure Compute Nodes
Task 4 - Execute the Tachyon Application and View its Result
Verification
SUMMARY
Overview
Message Passing Interface (MPI) is a portable, platform-independent industry standard for messaging between compute nodes running intrinsically parallel applications. Intrinsically parallel applications are executables that run on multiple cores or nodes, with dependencies between the processes that require them to communicate with each other, for instance to pass intermediate results. Such applications can use MPI as a fast, powerful inter-process communication mechanism.
Figure 1
Intrinsically parallel application model
MPI applications can be written in many programming languages and platforms, such as C/C++, Fortran 90, and Microsoft .NET (using MPI.NET). The Windows Azure HPC Scheduler supports running both 32-bit and 64-bit MPI applications.
Note: For detailed information on using MS-MPI in HPC clusters, refer to the paper Windows HPC
Server 2008 - Using MS-MPI.
In this lab, you will examine how to create MPI applications using C++ and the MS-MPI library, how to
deploy the MPI application with a Windows Azure HPC Scheduler, and how to submit MPI jobs to the
scheduler.
During this lab, you will deploy applications to Windows Azure and use the Windows Azure
Management Portal. To learn more about deploying applications to Windows Azure and using the
Management Portal, refer to the Windows Azure Training Course on MSDN, and run the first lab in the
course, the Introduction to Windows Azure lab.
Objectives
In this hands-on lab, you will learn how to:

Create an MPI application using C++.

Deploy the Windows Azure HPC Scheduler.

Deploy the MPI application to Windows Azure compute nodes.

Submit an MPI job from a command prompt.

Use the Windows Azure HPC Scheduler Web Portal to submit an MPI job.

Use the Windows Azure HPC Scheduler Web Portal to check the status of jobs and tasks.
Prerequisites
The following is required to complete this hands-on lab:

Visual Studio 2010 Professional or higher

Windows Azure Tools for Microsoft Visual Studio 2010 1.6

Windows Azure HPC Scheduler SDK

HPC Pack 2008 R2 MS-MPI Redistributable Package with Service Pack 3
Setup
To complete Exercise 2 of this lab, you need to download the Tachyon MPI application.
1. Download the Tachyon MPI application.
2. Open the compressed tachyon.zip file and extract tachyon.exe to the Source\Ex2-TachyonMPI folder.
Note: Make sure you have checked all the dependencies for this lab before running the setup.
Exercises
This hands-on lab includes the following exercise:
1. Creating, deploying, and running the pi calculation MPI application in Windows Azure.
2. Deploying and running the Tachyon MPI application to an existing Windows Azure HPC
Scheduler.
Estimated time to complete this lab: 90 minutes.
Exercise 1: Creating, deploying, and
running the pi calculation MPI
application in Windows Azure
In this exercise, you will create the pi calculator MPI application, deploy it and the Windows Azure HPC
Scheduler, and submit an MPI job using two techniques - from a command prompt, and from the
Windows Azure HPC Scheduler Web Portal. When you complete the exercise, you will validate the jobs
completed successfully, and observe the calculated value of pi.
Task 1 - Creating the Pi Calculator MPI Application
Intrinsically (that is, non-embarrassingly) parallel applications are not easy to create. When creating such an application, you must plan how to decompose your algorithm, how to distribute the data between processes, and how to gather the partial calculations. In this task, you will create the pi calculator MPI application by decomposing the pi calculation algorithm and running it on multiple processes, using a small set of the MS-MPI APIs.
Note: You can find a comprehensive list of MPI commands in the Message Passing Interface (MPI)
tutorial.
1. Open Microsoft Visual Studio 2010 from Start | All Programs | Microsoft Visual Studio 2010 |
Microsoft Visual Studio 2010.
2. Open the begin.sln solution file located in the Source\Ex1-PiCalculatorMPI\begin folder.
3. Verify that you are referencing the correct HPC Pack libraries path for your operating
system. To do this, in the Solution Explorer, right-click the PiCalculator project within the MPI
folder and select Properties. Expand the Configuration Properties node and select VC++
Directories. Locate the Library Directories field and make sure it is pointing to
%ProgramFiles%\Microsoft HPC Pack 2008 R2\Lib\amd64 if you are using a 64-bit operating
system, or %ProgramFiles%\Microsoft HPC Pack 2008 R2\Lib\i386 if you are using a 32-bit one.
Figure 2
HPC Pack libraries
4. In the Solution Explorer window, expand the MPI solution folder, expand the PiCalculator
project node, and then expand the Source Files folder.
Figure 3
The PiCalculator project
5. Open the PiCalculator.cpp file. You should see the following code:
CPP
#include "stdafx.h"
#include <string>

int main(int argc, char* argv[])
{
    return 0;
}
6. To begin writing MPI applications, you need to include the MPI header file. Add the following
include statement before the main function:
CPP
#include "stdafx.h"
#include <string>
#include "mpi.h"

int main(int argc, char* argv[])
{
    return 0;
}
7. Add the following variable declarations that will be used in the application:
CPP
#include "stdafx.h"
#include <string>
#include "mpi.h"

int main(int argc, char* argv[])
{
    int numOfIntervals, rank, size;
    double mypi, pi, width, sum, midPoint;

    return 0;
}
8. The first step when creating an MPI application is to initialize the MPI environment, after which
it is possible to get information about the number of nodes in the cluster and the identity of the
current node. This is done by calling the MPI_Init, MPI_Comm_size, and MPI_Comm_rank
functions. Add the following code to accomplish these tasks:
CPP
#include "stdafx.h"
#include <string>
#include "mpi.h"

int main(int argc, char* argv[])
{
    int numOfIntervals, rank, size;
    double mypi, pi, width, sum, midPoint;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    return 0;
}
9. The first node in the cluster (the one with rank zero) will be the initiator; it will broadcast
information to the other nodes by using the MPI_Bcast function, and afterwards collect the
results of their calculations. Add the following code so the initiator can broadcast the number
of intervals to the other nodes:
CPP
#include "stdafx.h"
#include <string>
#include "mpi.h"

int main(int argc, char* argv[])
{
    int numOfIntervals, rank, size;
    double mypi, pi, width, sum, midPoint;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        // Validate that arguments were received
        if (argc != 2)
            return 0;
        numOfIntervals = atoi(argv[1]);
    }

    // Broadcast the number of intervals to the other nodes
    MPI_Bcast(&numOfIntervals, 1, MPI_INT, 0, MPI_COMM_WORLD);
    if (numOfIntervals == 0)
        return 0;

    return 0;
}
10. Each of the nodes that run the code will wait for the MPI_Bcast function to finish either
receiving or sending the number of intervals (the initiator sends, and all other nodes receive).
Once all the nodes have the number of intervals, each node can start its partial calculation. The
application calculates the value of pi by integrating 4/(1+x^2) over [0,1] with a midpoint
Riemann sum, dividing the intervals among the nodes. Add the following code to perform the calculation:
CPP
#include "stdafx.h"
#include <string>
#include "mpi.h"

int main(int argc, char* argv[])
{
    int numOfIntervals, rank, size;
    double mypi, pi, width, sum, midPoint;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        // Validate that arguments were received
        if (argc != 2)
            return 0;
        numOfIntervals = atoi(argv[1]);
    }

    // Broadcast the number of intervals to the other nodes
    MPI_Bcast(&numOfIntervals, 1, MPI_INT, 0, MPI_COMM_WORLD);
    if (numOfIntervals == 0)
        return 0;

    // Calculate the size of the interval
    width = 1.0 / (double) numOfIntervals;
    sum = 0.0;
    for (int i = rank + 1; i <= numOfIntervals; i += size) {
        midPoint = width * ((double)i - 0.5);
        sum += (4.0 / (1.0 + midPoint*midPoint));
    }
    mypi = width * sum;

    return 0;
}
11. Once each node has calculated its partial information, you need to collect all the information
back at the initiator and complete the calculation. The application sums the partial data from
each node to node zero (the initiator) by calling the MPI_Reduce function with the MPI_SUM
aggregator parameter. Once you sum the results, you can print the calculated value of pi, and
terminate the MPI environment. Add the last piece of code to the application:
CPP
#include "stdafx.h"
#include <string>
#include "mpi.h"

int main(int argc, char* argv[])
{
    int numOfIntervals, rank, size;
    double mypi, pi, width, sum, midPoint;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        // Validate that arguments were received
        if (argc != 2)
            return 0;
        numOfIntervals = atoi(argv[1]);
    }

    // Broadcast the number of intervals to the other nodes
    MPI_Bcast(&numOfIntervals, 1, MPI_INT, 0, MPI_COMM_WORLD);
    if (numOfIntervals == 0)
        return 0;

    // Calculate the size of the interval
    width = 1.0 / (double) numOfIntervals;
    sum = 0.0;
    for (int i = rank + 1; i <= numOfIntervals; i += size) {
        midPoint = width * ((double)i - 0.5);
        sum += (4.0 / (1.0 + midPoint*midPoint));
    }
    mypi = width * sum;

    // Collect the partial sums
    MPI_Reduce(&mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("pi is approximately %1.30f\n", pi);

    MPI_Finalize();
    return 0;
}
12. Save the PiCalculator.cpp file.
Task 2 - Deploying the Windows Azure HPC Scheduler and the MPI Application to Windows Azure
Deploying the Windows Azure HPC Scheduler is a task that involves the creation of many Windows
Azure components: a hosted service, an Azure storage account, a SQL Azure server and database
instance, and a set of worker roles for the head node, compute nodes, and frontend node. All of these
steps are performed for you by the AppConfigure application, and all you need to do is provide it with
the information it requires, such as your subscription ID and usernames and passwords for the database
and the hosted services.
You can read more about the deployment tool in the Getting Started with Application Deployment with
the Windows Azure HPC Scheduler article on MSDN.
If you have an MPI application that you want to run using the Windows Azure HPC Scheduler, you will
also need to upload it to the compute nodes. One way to do this, demonstrated in this task, is to include
the executable file in the compute node role.
1. From the Build menu, select Configuration Manager.
2. In the Configuration Manager dialog, change the configuration of the PiCalculator project from
debug to release.
Figure 4
The solution’s configuration manager
3. In the Solution Explorer window, expand the MPI solution folder, right-click the PiCalculator
project, and select Properties.
4. In the PiCalculator Property Pages dialog, under General, set the Output Directory to the path
$(SolutionDir)\AzureServices\ComputeNode\bin\debug\.
Figure 5
PiCalculator Property Pages dialog
Note: Make sure the output directory path you enter has no trailing space after the last
backslash (\).
5. In the Solution Explorer window, expand the Deployment solution folder, right-click the
AppConfigure project, and select Set as StartUp Project.
6. Build the solution and start it without debugging (Ctrl+F5). After a couple of seconds you should
see the deployment application.
Figure 6
The Windows Azure HPC Scheduler deployment application
7. In the Subscription ID field, type the subscription ID of your Windows Azure account. You can
find the subscription ID in the Management Portal.
Figure 7
Locating the subscription ID of a Windows Azure subscription
8. If you already have a management certificate installed on your machine, click Browse and
select it. If you do not have a management certificate, click the Create button. If you have
selected an existing certificate, skip to Step 11.
Note: If you choose to use an existing certificate, and that certificate is not properly installed in
the Windows Azure subscription, you may receive error messages about failed SSL/TLS
connections. If you see these kinds of errors, try creating a new management certificate and
adding it to your subscription.
9. If you have chosen to create a new management certificate, a new certificate will be created
and installed on your machine; this procedure may cause a Security Warning dialog box to
appear. If the warning appears, click Yes.
Figure 8
The Security Warning dialog box
10. After the management certificate is created, follow the instructions for adding the new
certificate in the How to Add a Management Certificate to a Windows Azure Subscription article
on MSDN. You can locate the certificate (.cer) file in the bin\debug folder of the AppConfigure
application; a message box will appear, specifying the exact path of the certificate.
11. Continue filling in the information as follows:
a. Service Name - the name of your hosted service, which will also become the DNS
name. This will also be the name of your HPC cluster, which is used to set the
CCP_SCHEDULER environment variable.
b. Location - the region of the Windows Azure storage account, hosted service, and
SQL Azure server to be used. If any of these do not currently exist in that region, new
ones will be created.
c. Administrator’s Name - the name of the scheduler and SQL Azure server
administrator. This username will also be the administrator’s name when you
remotely connect to the Windows Azure instances.
d. Administrator’s Password - the password for the administrator user. If you try to
enter a password that does not meet the requirements, you will see an error
message.
e. Number of Nodes - the number of instances of a head node role, a compute node
role, and a web frontend node role. The default is one head node, one web frontend
node, and three compute nodes.
Note: The default VM size of each instance is set to small. You can change the VM size
of each role by editing the ServiceDefinition.csdef file in the AppConfigure application
folder.
12. After filling in the missing information, click the Configure button. The application will create
the required configuration files, verify the existence of the storage account, and create the SQL
Azure server if it does not yet exist.
Note: This process may take several minutes to complete, depending on the existence of the
storage account and the SQL Azure server.
Because this lab is for demo purposes, the deployment application will create a new SQL
Azure server with the given credentials in the selected geographical location if no servers are
available. If your subscription already has SQL Azure servers in the selected location, the
deployment application will use the first one that appears in the SQL Azure Servers list in
the Management Portal and authenticate with the given credentials. If you use an existing
server, make sure you are using the right credentials and geographical location.
13. After the configuration process completes, click the Publish button. The application will add a
certificate to the hosted service, create the database instance in the SQL Azure server, upload
the scheduler package to storage, and deploy the different roles to Windows Azure.
Note: This process requires uploading a large file to a Windows Azure storage container and
creating several role instances in Windows Azure. It may take more than 10 minutes to
complete this process, depending on your Internet bandwidth.
14. After the publishing process completes, click the Close button to close the AppConfigure
application.
Note: The information you enter in the AppConfigure application is stored in the
application’s configuration file and will be available the next time you start the application. The
information is stored as plain text, including the passwords, so keep that file safe.
15. In a browser, open the Management Portal, and click Hosted Services, Storage Accounts &
CDN.
16. Click Hosted Services.
17. In the main window, locate your hosted service, expand the ComputeNode role, and select one
of the ComputeNode_IN_XX instances. Make sure the instance’s status is set to Ready.
Note: It may take another couple of minutes from the time the deployment application
completes the publishing process until all the instances are created.
18. On the ribbon, in the Remote Access area, click Connect.
19. When prompted to download the .rdp file, open it, type in the administrator’s password you
configured in Step 11, and complete the login process. If a warning displays indicating that the
identity of the remote computer cannot be verified, click Yes to connect to the machine despite
the certificate errors.
Note: The certificate warning that may appear is due to the fact that the certificate used to
identify the server was issued by the AppConfigure application and not by a trusted certifying
authority.
20. After connecting to the compute node instance, open an explorer window and verify the
existence of the following application folders:
a. E:\approot
b. E:\plugins\HpcComputeNode\HPCPack
21. Leave the remote desktop connection open, as you will use it in the next task.
Task 3 - Configuring Firewall Rules for MPI Communication
Running MPI applications in Windows Azure requires opening the firewall for communication between
compute nodes. In this task, you will open the required ports in the Windows firewall by running the
hpcfwutil command.
1. If you have closed the remote desktop connection to the compute node, reconnect to it by
performing steps 15 - 19 in Task 2.
2. Open a command prompt window, and type the following command to run the hpcfwutil
command on all compute nodes:
CMD
clusrun /nodegroup:computenode hpcfwutil register PiCalculator.exe
e:\approot\PiCalculator.exe
3. Wait for the command to finish, and verify that none of the nodes have failed.
Figure 9
Running the hpcfwutil command
4. Leave the remote desktop connection open, as you will use it in the next task.
Task 4 - Submitting an MPI Job from a Command Prompt
There are several means by which you can submit new jobs to the Windows Azure HPC Scheduler: you
can remotely connect to the head node and submit a new job from the HPC Job Manager tool, you can
browse to the Windows Azure HPC Scheduler Web Portal and submit a job through its interface; or you
can remotely connect to the head node or compute node and submit a job from the command prompt
using the job submit command.
In this task, you will submit a new job by using the job submit command. Other submission techniques
will be demonstrated in the next task.
1. If you have closed the remote desktop connection to the compute node, reconnect to it by
performing steps 15 - 19 of Task 2.
2. Open a command prompt window and type in the following command to submit the MPI job:
CMD
job submit /jobname:PiCalculator /nodegroup:computenode /numcores:1-9999
mpiexec e:\approot\piCalculator.exe 10000000
3. Once the job starts, you will see the job ID. Write down the job ID.
Figure 10
Submitting an MPI job
Note: To get a more accurate value of pi, run the job with more iterations. You can change the
command-line parameter in the basic task from its current value of 10 million to 50 million, 100
million, or even 1 billion iterations.
Task 5 - Submitting an MPI Job from the Windows Azure HPC Scheduler Web Portal
You can use the Windows Azure HPC Scheduler Web Portal to manage and submit jobs to the scheduler.
The portal lets you create new jobs from templates, view information about running and completed
jobs, and see a heat map of the cluster’s compute nodes.
In this task, you will use the portal to create a new submission page, and use that to start a new MPI job.
1. Open a web browser and browse to your Windows Azure HPC Scheduler Web Portal, which
should be located at https://service_name.cloudapp.net/portal, where service_name is the
name of the hosted service you entered in Task 2.
2. If you are prompted with a warning message regarding the security certificate, click the option
Continue to this website (not recommended). The reason for the possible warning is that an
untrusted certification authority (that is, the AppConfigure application) issued the certificate
used to identify the server.
3. If a Windows Security dialog box opens, enter the username and password of the
administrator account you entered in Task 2.
4. Once logged in, select Submission pages from the side menu, and click New.
Figure 11
Creating a new submission page using the Windows Azure HPC Scheduler Web Portal
Note: Submission pages allow the submission of jobs to the Windows Azure HPC Scheduler.
They define default values and general settings for submitting a job.
5. Next, set the Submission page name to PiCalculator and select Basic job under Page type. Click
Next.
Figure 12
Selecting a submission page name and type
6. In the Specify job property visibility and defaults page, enter the following information and
then click Next:
a. Job Name: PiCalculator
b. Command line: mpiexec e:\approot\piCalculator.exe 10000000
c. Max Core: 9999
Figure 13
Specifying the job property visibility and defaults
7. In the Specify visibility and defaults for node preparation and node release tasks page, leave
both checkboxes unchecked, and click Next.
8. In the Specify application profile page click Next, followed by Finish.
9. Next, select New job from the side menu, followed by picalculator.
Figure 14
Creating a new job using the PiCalculator submission page
10. Review the job’s default settings, which were previously set in the submission page, and click
Submit.
11. After the job is submitted, the browser will move to the My Jobs page, where you can see the new
job in the list. Write down the new job ID.
12. Keep the browser window open for the next task.
Verification
1. In the portal, click the My Jobs link. You will see a list of jobs.
Figure 15
List of jobs in the Windows Azure HPC Scheduler Web Portal
2. Click one of the finished PiCalculator jobs, using the IDs you wrote down in Tasks 4 and 5. You
should see a job information window.
Figure 16
The job information window
3. Click the View Tasks tab, and verify that the Output area contains the output of the PiCalculator
application.
Figure 17
Results of running the PiCalculator MPI application
Exercise 2: Deploying and running the
Tachyon MPI application to an existing
Windows Azure HPC Scheduler
In this exercise, you will deploy the Tachyon MPI application to an existing Windows Azure HPC
Scheduler, and submit an MPI job from a command prompt. When you complete the exercise, you will
validate that the job completed successfully and observe the generated image.
Note: In order to execute this exercise you need to download the Tachyon MPI application and extract
the content of tachyon.zip file to the Source\Ex2-TachyonMPI folder.
Task 1 - Inspecting the Tachyon MPI Application Folder Contents
In this task, you will inspect the contents of the folder that contains the Tachyon MPI application and
some essential utilities needed to execute it on Windows Azure compute nodes.
1. Open the Source\Ex2-TachyonMPI folder, and review its contents.
Figure 18
The contents of the tachyon_mpi folder
The folder contains the following:
a. AzureBlobCopy: This folder contains a command-line utility for uploading files to
Windows Azure blob storage.
b. tachyon.exe: This is the MPI application that will be executed in the Windows Azure
compute nodes.
c. tachyon.bat: This is a batch file that runs the MPI application.
d. Input .dat files: These files will be processed by the tachyon.exe MPI application.
2. Open the tachyon.bat file located in the folder, and review its contents.
CMD
mpiexec %ccp_package_root%tachyon\tachyon.exe -aasamples 4 -trans_vmd %ccp_package_root%tachyon\stmvao-white.dat -o %ccp_package_root%tachyon\stmvao-white_short.bmp -format BMP -rescale_lights 0.4 -add_skylight 0.9 -skylight_samples 32 -res 1000 1000
%ccp_package_root%tachyon\AzureBlobCopy\AzureBlobCopy.exe -Action Upload -BlobContainer tachyonout -LocalDir %ccp_package_root%tachyon -FileName stmvao-white_short.bmp
del /Q %ccp_package_root%tachyon\stmvao-white_short.bmp
a. The first command uses the mpiexec command to start the tachyon.exe MPI
application.
b. The second command uses the AzureBlobCopy.exe utility to upload the output of the
MPI application to Windows Azure blob storage.
c. The third command removes the generated output to free up drive space.
Task 2 - Configure the AzureBlobCopy Utility
In this task, you will configure AzureBlobCopy to upload the results of the tachyon execution to a
Windows Azure storage account accessible by you.
Note: To learn more about using blob storage, refer to the Exploring Windows Azure Storage lab on
MSDN.
1. Open the Ex2-TachyonMPI\AzureBlobCopy folder and review its contents.
Figure 19
The contents of the AzureBlobCopy folder
2. Open the AzureBlobCopy.exe.config file and review its contents:
XML
<?xml version="1.0"?>
<configuration>
<appSettings>
<add key="StorageAccountName" value="[YOUR-STORAGE-ACCOUNT]"/>
<add key="StorageKey" value="[YOUR-STORAGE-KEY]" />
</appSettings>
<startup>
<supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/>
</startup>
</configuration>
3. Under the appSettings element, update the values of the StorageAccountName and StorageKey
settings to reflect the Windows Azure storage account name and key, which were created by
the Windows Azure HPC Scheduler deployment tool.
Note: To find your storage account’s primary key, follow the steps in the How to View, Copy,
and Regenerate Access Keys for a Windows Azure Storage Account article on MSDN.
Task 3 - Deploy the Tachyon Application to Windows Azure Compute Nodes
In this task, you will copy the Tachyon MPI application to your Windows Azure head node, and from
there deploy it to your compute nodes.
1. In a browser, open the Management Portal and click Hosted Services, Storage Accounts &
CDN.
2. Click Hosted Services.
3. In the main window, locate your hosted service, expand the HeadNode role, and select the
HeadNode_IN_0 instance.
4. On the ribbon, in the Remote Access area, click Connect.
5. When prompted to download the .rdp file, open it, type in the administrator’s password as you
configured it in the previous exercise, and complete the login process. If a warning displays
indicating that the identity of the remote computer cannot be verified, click Yes to connect to
the machine despite the certificate errors.
6. After connecting to the head node instance, open an explorer window and navigate to the
E:\approot folder.
7. In your local machine, open an explorer window, navigate to the source folder of this lab and
copy the Ex2-TachyonMPI folder (Ctrl+C).
8. Paste (Ctrl+V) the copied folder into the E:\approot on the remote machine.
9. On the remote machine, open the command prompt from Start | All Programs | Accessories |
Command Prompt.
10. Run the following commands to create a deployment package for the application:
CMD
e:
cd approot
hpcpack create tachyon.zip Ex2-TachyonMPI\
11. Run the following command to upload the deployment package to Windows Azure package
storage. Replace [YOUR-STORAGE-ACCOUNT] and [YOUR-STORAGE-KEY] with the storage
account’s name and primary key, which you found in the previous task:
CMD
hpcpack upload tachyon.zip /account:[YOUR-STORAGE-ACCOUNT] /key:[YOUR-STORAGE-KEY] /relativePath:Tachyon
12. To sync the Windows Azure compute nodes with the new package, run the following command:
CMD
clusrun /nodegroup:computenode hpcsync
13. Run the following command to configure the Windows Firewall to allow MPI
communication between the Windows Azure compute nodes:
CMD
clusrun /nodegroup:computenode hpcfwutil register tachyon.exe
^%ccp_package_root^%tachyon\tachyon.exe
Note: The caret sign (^) is used to escape the environment variable percent sign (%) so that the
environment variable will be evaluated in the compute nodes. Without the caret sign, the
environment variable would have been evaluated on the head node, before running the
command, which would have caused the wrong parameter to be sent to the hpcfwutil.
14. Leave the remote desktop connection open, as you will use it in the next task.
Task 4 - Execute the Tachyon Application and View its Result
In this task, you will submit, from a command prompt, a job that executes the Tachyon MPI application
via tachyon.bat.
1. If you have closed the remote desktop connection to the head node, reconnect to it by
performing Steps 1-5 of Task 3.
2. Open a command prompt and type in the following command to submit the MPI job:
CMD
job submit /jobname:Tachyon /nodegroup:computenode /numcores:1-9999
^%ccp_package_root^%tachyon\tachyon.bat
3. Once the job starts, you will see the job ID; write it down.
4. Open a web browser and browse to your Windows Azure HPC Scheduler Web Portal, which
should be located at https://service_name.cloudapp.net/portal, where service_name is the
name of the hosted service you entered in the previous exercise.
5. If you are prompted with a warning message regarding the security certificate, click the option
Continue to this website (not recommended). The reason for the possible warning is that an
untrusted certification authority (that is, the AppConfigure application) issued the certificate
used to identify the server.
6. If a Windows Security dialog box opens, enter the scheduler’s administrator username and
password.
7. Once logged in, look for the Tachyon job in the jobs list. You can identify it by the job ID you
wrote down in Step 3.
8. Wait for the job to finish; you will need to click the refresh button to refresh the list.
Verification
1. Once the job is complete, open a browser and browse to the output file in your blob storage.
This should be located at
http://YourStorageAccount.blob.core.windows.net/tachyonout/stmvao-white_short.bmp,
where YourStorageAccount is the name of the storage account you used in Task 2.
2. You should now see the generated image from the job.
Figure 20
The BMP image result of tachyon.exe, as viewed from a Windows Azure blob account via
Internet Explorer
Summary
In this lab, you learned how to create an MPI application and how to deploy it to a Windows Azure HPC
Scheduler. You also learned how to submit an MPI job from a command prompt and from the Windows
Azure HPC Scheduler Web Portal, and how to use the portal to verify the job’s status.