1. Migrating POP from Linux to Windows Azure

Windows Azure is a powerful cloud computing solution that can significantly improve your access to high performance computing (HPC) resources and applications. This tutorial will help you fit Windows Azure into your workflow by guiding you through the straightforward process of migrating a parallel HPC application from Linux to Windows Azure. Welcome! Soon you'll be running parallel HPC applications in the cloud.

We will be migrating the Parallel Ocean Program (POP) to Windows Azure. POP forms the ocean component of the Community Climate System Model (CCSM), one of the world's premier climate models. Many other parallel HPC programs have build and execute procedures similar to POP's, so this tutorial is easily generalized to other programs. If you're just looking for a binary distribution of POP for Windows, skip to the downloads section of the tutorial summary.

This video was produced by CRIEPI and NCAR researchers running POP at very high resolutions on Japan's Earth Simulator supercomputer. After this tutorial, you will be able to run similar simulations on Windows Azure.


This work was sponsored by the Microsoft Developer and Platform Evangelism Team and nCore Design.

1.1. Overview and Navigation

Use the slideshow navigation controls next to the page title and the links at the bottom of the page to navigate through the tutorial. Click here if you prefer to view the whole tutorial as a single, printable document.

This tutorial will take between two and four hours to complete and consists of the following steps:

  1. Prepare all required libraries

  2. Port POP from Linux to Windows

  3. Deploy a new Windows Azure compute cluster service

  4. Run POP on Windows Azure and view the results

  5. Tutorial Summary and Downloads

Continue to Step 1: Prepare all required libraries when you are ready to begin.


2. Obtain Required Libraries

POP uses NetCDF to read and write data files, so we'll need NetCDF for Windows before we can build POP for Windows. You can follow the instructions below to build NetCDF from scratch, or you can go to the downloads section of the NetCDF page and download pre-built binaries.

2.1. Build NetCDF as a Windows DLL

  1. ParaTools developed the PToolsWin development environment specifically for porting Linux applications to Windows. PToolsWin is distributed as part of HPC Linux. Download and install HPC Linux, either in a virtual machine or natively.

  2. Open a command line in your HPC Linux distro and load the PToolsWin module:
       module load ptoolswin
       module list
  3. Create your NetCDF folder:
       setenv NETCDFDIR ${HOME}/windows
       mkdir -p $NETCDFDIR
       cd $NETCDFDIR
  4. Download and extract the NetCDF 4.1.3 source code from Unidata to $NETCDFDIR:

       wget ftp://ftp.unidata.ucar.edu/pub/netcdf/netcdf-4.1.3.tar.gz
       tar xvzf netcdf-4.1.3.tar.gz
  5. There's a bug in NetCDF that prevents it from cross compiling. Fortunately, we've developed a patch. Download and apply this patch to the NetCDF source:

       wget http://www.paratools.com/Azure/NetCDF/paratools-netcdf-4.1.3.patch -O paratools-netcdf-4.1.3.patch
       cd netcdf-4.1.3
       patch -p1 < ../paratools-netcdf-4.1.3.patch
  6. Move the NetCDF source code to a sub-folder (mv will print a harmless complaint about moving src into a subdirectory of itself; everything else is moved correctly):
       mkdir src
       mv * src
       cd src
  7. NetCDF includes a configuration script that will automatically generate the makefiles we need. Configure NetCDF as follows. Be sure to type the whole command as one line:
       ./configure --prefix=${NETCDFDIR}/netcdf-4.1.3 --host=x86_64-w64-mingw32 --enable-dll --enable-shared --disable-netcdf-4 LDFLAGS="-Wl,--export-all-symbols,--enable-auto-import" CPPFLAGS=-DgFortran

    Let's take a closer look at this command. We are executing the configure script according to the standard GNU build process:

     • The --prefix flag tells the configure script where to install files after they are compiled.
     • --host=x86_64-w64-mingw32 is very important because it tells the configure script to use the MinGW cross compiler instead of the default GNU compiler. Without this flag, NetCDF would build for Linux, not Windows.
     • --enable-dll and --enable-shared tell configure that we want shared libraries and that those libraries should be Windows DLL files.
     • --disable-netcdf-4 disables the netCDF-4 format features, which would otherwise require HDF5 libraries that we have not cross compiled.
     • Setting the LDFLAGS environment variable passes flags directly to the program linker. -Wl,--export-all-symbols,--enable-auto-import tells the linker to automatically make all NetCDF functions and subroutines accessible to programs that link against the NetCDF DLLs. Without these flags, the DLL files will be created but your programs will not be able to link with them.
     • Adding "-DgFortran" to CPPFLAGS is required when building with gfortran on Linux.

  8. Once the configure script is finished, build and install NetCDF. (!) If you have multiple cores in your machine, you can reduce compilation time by passing the '-j<ncpus>' flag to make (e.g. 'make -j4').

       make
       make install
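The '-j<ncpus>' hint above can be automated. Here is a minimal POSIX-shell sketch (not part of the original tutorial) that picks a parallel build width from the core count; it assumes GNU coreutils' nproc is available and falls back to a serial build otherwise:

```shell
# Pick a make parallelism level from the number of available cores.
# 'nproc' is a GNU coreutils command; default to 1 if it is missing.
ncpus=$(nproc 2>/dev/null || echo 1)
echo "building with make -j${ncpus}"
# You would then run:  make -j${ncpus}
```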
  9. Some versions of GNU Libtool insert invalid -l flags on the linker command line. If your build fails with something like this:

    ... cannot find -l-L/usr/local/pkgs/mingw-w64-bin_x86_64-linux_20110831/...
    collect2: error: ld returned 1 exit status
    then you will need to fix the postdeps line in libtool.
    1. Open ${NETCDFDIR}/netcdf-4.1.3/src/libtool in a text editor.

    2. Scroll to the very bottom of the file and locate the line that begins with "postdeps="
    3. Remove any invalid -l flags from that line. For example, if you see -l -L/usr/local/..., change it to -L/usr/local/...

    4. Once you're satisfied, save the file and run 'make' and 'make install' again.
  10. If everything went well you should see the following message:
    | Congratulations! You have successfully installed netCDF!    |
    |                                                             |
    | You can use script "nc-config" to find out the relevant     |
    | compiler options to build your application. Enter           |
    |                                                             |
    |     nc-config --help                                        |
    |                                                             |
    | for additional information.                                 |
    |                                                             |
    | CAUTION:                                                    |
    |                                                             |
    | If you have not already run "make check", then we strongly  |
    | recommend you do so. It does not take very long.            |
    |                                                             |
    | Before using netCDF to store important data, test your      |
    | build with "make check".                                    |
    |                                                             |
    | NetCDF is tested nightly on many platforms at Unidata       |
    | but your platform is probably different in some ways.       |
    |                                                             |
    | If any tests fail, please see the netCDF web site:          |
    | http://www.unidata.ucar.edu/software/netcdf/                |
    |                                                             |
    | NetCDF is developed and maintained at the Unidata Program   |
    | Center. Unidata provides a broad array of data and software |
    | tools for use in geoscience education and research.         |
    | http://www.unidata.ucar.edu                                 |
    If you do not see this message, review your commands for errors and try again.
  11. The final step is to link the DLL files from the bin directory to the lib directory:
       cd $NETCDFDIR/netcdf-4.1.3/lib
       ln -s ../bin/*.dll .
    This is necessary because Linux makefiles and build scripts expect program libraries to be in the lib directory, but Windows expects them to be in the same path as the program executables. Making the files accessible at both locations keeps both ends of the cross-compilation process happy.
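You can rehearse this bin-to-lib mirroring pattern on throwaway files before touching the real install tree. The sketch below (not part of the original tutorial) uses a hypothetical DLL name in a temporary directory:

```shell
# Demonstrate the bin -> lib symlink pattern on a scratch directory.
# Unix-style build scripts look for libraries in lib/, while Windows
# loads DLLs from the directory containing the executable (here, bin/).
tmp=$(mktemp -d)
mkdir -p "$tmp/bin" "$tmp/lib"
touch "$tmp/bin/libexample-1.dll"   # stand-in for libnetcdf-7.dll etc.
cd "$tmp/lib"
ln -s ../bin/*.dll .                # glob expands relative to lib/
ls -l libexample-1.dll              # shows a symlink into ../bin
```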
  12. Here is what the NetCDF bin and lib folders look like after a successful compilation ($NETCDFDIR is /home/livetau/windows):

    [paratools07] 579 > ls
    bin  include  lib  share  src
    [paratools07] 582 > ls -l bin lib
    total 5600
    -rwxr-xr-x. 1 livetau livetau 1823052 Mar 23 13:53 libnetcdf-7.dll
    -rwxr-xr-x. 1 livetau livetau  668674 Mar 23 13:53 libnetcdf_c++-4.dll
    -rwxr-xr-x. 1 livetau livetau 1239191 Mar 23 13:53 libnetcdff-5.dll
    -rwxr-xr-x. 1 livetau livetau    4334 Mar 23 13:53 nc-config
    -rwxr-xr-x. 1 livetau livetau  243830 Mar 23 13:53 nccopy.exe
    -rwxr-xr-x. 1 livetau livetau  381728 Mar 23 13:53 ncdump.exe
    -rwxr-xr-x. 1 livetau livetau  464561 Mar 23 13:53 ncgen3.exe
    -rwxr-xr-x. 1 livetau livetau  885521 Mar 23 13:53 ncgen.exe
    total 4984
    lrwxrwxrwx. 1 livetau livetau      22 Mar 23 13:56 libnetcdf-7.dll -> ../bin/libnetcdf-7.dll
    -rw-r--r--. 1 livetau livetau 1996680 Mar 23 13:53 libnetcdf.a
    lrwxrwxrwx. 1 livetau livetau      26 Mar 23 13:56 libnetcdf_c++-4.dll -> ../bin/libnetcdf_c++-4.dll
    -rw-r--r--. 1 livetau livetau  806660 Mar 23 13:53 libnetcdf_c++.a
    -rwxr-xr-x. 1 livetau livetau  315174 Mar 23 13:53 libnetcdf_c++.dll.a
    -rwxr-xr-x. 1 livetau livetau    1090 Mar 23 13:53 libnetcdf_c++.la
    -rwxr-xr-x. 1 livetau livetau  411564 Mar 23 13:53 libnetcdf.dll.a
    lrwxrwxrwx. 1 livetau livetau      23 Mar 23 13:56 libnetcdff-5.dll -> ../bin/libnetcdff-5.dll
    -rw-r--r--. 1 livetau livetau 1262872 Mar 23 13:53 libnetcdff.a
    -rwxr-xr-x. 1 livetau livetau  233966 Mar 23 13:53 libnetcdff.dll.a
    -rwxr-xr-x. 1 livetau livetau    1308 Mar 23 13:53 libnetcdff.la
    -rwxr-xr-x. 1 livetau livetau     930 Mar 23 13:53 libnetcdf.la
    -rw-rw-r--. 1 livetau livetau   16212 Mar 23 13:53 netcdf_c++dll.def
    -rw-rw-r--. 1 livetau livetau   16952 Mar 23 13:53 netcdfdll.def
    -rw-rw-r--. 1 livetau livetau   11907 Mar 23 13:53 netcdffdll.def
    drwxrwxr-x. 2 livetau livetau    4096 Mar 23 13:53 pkgconfig
    [paratools07] 583 >

You are now ready to proceed to Step 2: Port POP from Linux to Windows.


3. Port POP from Linux to Windows

For many HPC applications, porting from Linux to Windows is as straightforward as recompiling the application source code with a special toolchain. More complex applications may require a little more work, but even complex HPC applications (such as OpenFOAM) can be ported in this way.

This step of the tutorial will guide you through the process of recompiling POP with the PToolsWin development environment. PToolsWin generates native Windows code (no intermediate POSIX layer is required), so your application will perform as well as a native Windows application.

You will recompile POP to produce a Windows executable and then copy the POP executable and run folder to Windows in preparation for uploading them to a Windows Azure storage service.

3.1. Prerequisites

Before you continue, make sure you have these software prerequisites:

  1. An installation of HPC Linux with PToolsWin.

  2. NetCDF for Windows.

3.2. Build POP as a Windows Executable

  1. Open a command line in your HPC Linux distro and load the PToolsWin module:
       module load ptoolswin
       module list
  2. Create your POP directory:
       setenv POPDIR $HOME/windows
       mkdir -p $POPDIR
  3. Download and extract the POP source code from LANL to $POPDIR:

       cd $POPDIR
       wget http://climate.lanl.gov/Models/POP/POP_2.0.1.tar.Z
       tar xvzf POP_2.0.1.tar.Z
  4. Execute the setup_run_dir script in the pop directory to create and populate a new POP run directory named "windows":
       cd $POPDIR/pop
       ./setup_run_dir windows
  5. When we compiled NetCDF for Windows, we passed the --host=x86_64-w64-mingw32 flag to the configure script to indicate that the MinGW cross compilers should be used instead of the native compilers. This is the preferred method of cross compiling an application that uses the GNU Autoconf build system. Autoconf is very common among Linux applications, but there are some applications (like POP!) that do not use it, and hence do not have a configure script. POP uses a custom build system that gets its configuration from a file with a .gnu extension.

    We have created a customized .gnu file for cross compilation, starting with linux.gnu as the template. Download ptoolswin.gnu to $POPDIR/pop/windows:

       cd $POPDIR/pop/windows
       wget http://paratools.com/Azure/POP/Step2/ptoolswin.gnu -O ptoolswin.gnu
  6. If you compare ptoolswin.gnu with linux.gnu you can see what changes are required to switch from native compilation to cross compilation:
       --- linux.gnu   2012-03-23 14:41:01.309869158 -0700
       +++ ptoolswin.gnu       2012-03-23 08:36:37.000000000 -0700
       @@ -1,5 +1,5 @@
        #
       -# File:  linux.gnu
       +# File:  ptoolswin.gnu
        #
        # The commenting in this file is intended for occasional maintainers who
        # have better uses for their time than learning "make", "awk", etc.  There
       @@ -8,7 +8,7 @@
        #
        FC = mpif90
        LD = mpif90
       -CC = cc
       +CC = gcc
        Cp = /bin/cp
        Cpp = /lib/cpp -P
        AWK = /usr/bin/gawk
       @@ -21,8 +21,9 @@

        # Adjust these to point to where netcdf is installed

       -NETCDFINC = -I/netcdf_include_path
       -NETCDFLIB = -L/netcdf_library_path
       +NETCDFDIR = $(HOME)/windows/netcdf-4.1.3
       +NETCDFINC = -I$(NETCDFDIR)/include
       +NETCDFLIB = -L$(NETCDFDIR)/lib

        #  Enable trapping and traceback of floating point exceptions, yes/no.
        #  Note - Requires 'setenv TRAP_FPE "ALL=ABORT,TRACE"' for traceback.
       @@ -78,10 +79,10 @@
        #
        #----------------------------------------------------------------------------

       -LDFLAGS = $(ABI) -v
       +LDFLAGS = $(ABI) -v -Wl,--force-exe-suffix

        #LIBS = $(NETCDFLIB) -lnetcdf -lX11
       -LIBS = $(NETCDFLIB) -lnetcdf
       +LIBS = $(NETCDFLIB) -lnetcdf $(NETCDFDIR)/lib/libnetcdf-7.dll $(NETCDFDIR)/lib/libnetcdff-5.dll

        ifeq ($(MPI),yes)
          LIBS := $(LIBS)
    Notice that only a few changes are required. PToolsWin provides mpif90 and mpicc commands, so the only compiler change is to explicitly set the C compiler to the PToolsWin cross compiler. LDFLAGS has been updated to force a ".exe" suffix on the binary executable, and the NetCDF DLL files have been added to the linker command line arguments.
  7. Compile POP by setting the ARCHDIR environment variable and running "make":
       cd $POPDIR/pop/windows
       setenv ARCHDIR ptoolswin
       make

3.3. Copy POP and Required Libraries to Windows

Now that POP has compiled successfully, we need to gather together the POP executable and all its required files and transfer them to your Windows machine.

  1. Copy files from POP:
       cd $POPDIR/pop/windows
       mkdir transfer
       cp pop.exe pop_in sample_* transfer
  2. Next, copy the NetCDF libraries and MinGW-w64 runtime libraries:
       cp $NETCDFDIR/netcdf-4.1.3/lib/*.dll transfer
       cp /usr/local/pkgs/rts/* transfer
  3. POP depends on MPI, but we do not need to copy the Microsoft MPI libraries because they are already installed on the Windows host. Altogether, your transfer folder should look like this ($POPDIR is /home/livetau/windows):
    [paratools07] 259 > pwd
    [paratools07] 260 > ls -l
    total 26208
    -rwxrwxr-x. 1 livetau livetau  594436 Mar 23 14:52 libgcc_s_sjlj-1.dll
    -rwxrwxr-x. 1 livetau livetau 9522373 Mar 23 14:52 libgfortran-3.dll
    -rwxr-xr-x. 1 livetau livetau 1823052 Mar 23 14:52 libnetcdf-7.dll
    -rwxr-xr-x. 1 livetau livetau  668674 Mar 23 14:52 libnetcdf_c++-4.dll
    -rwxr-xr-x. 1 livetau livetau 1239191 Mar 23 14:52 libnetcdff-5.dll
    -rwxrwxr-x. 1 livetau livetau  615374 Mar 23 14:52 libobjc-4.dll
    -rwxrwxr-x. 1 livetau livetau 1217969 Mar 23 14:52 libquadmath-0.dll
    -rwxrwxr-x. 1 livetau livetau  148957 Mar 23 14:52 libssp-0.dll
    -rwxrwxr-x. 1 livetau livetau 8454837 Mar 23 14:52 libstdc++-6.dll
    -rw-rw-r--. 1 livetau livetau 2357009 Mar 23 14:51 pop.exe
    -rw-r--r--. 1 livetau livetau    8474 Mar 23 14:51 pop_in
    -rwxrwxr-x. 1 livetau livetau   47616 Mar 23 14:52 pthreadGC2-w64.dll
    -rw-r--r--. 1 livetau livetau      41 Mar 23 14:51 sample_history_contents
    -rw-r--r--. 1 livetau livetau      54 Mar 23 14:51 sample_movie_contents
    -rw-r--r--. 1 livetau livetau     231 Mar 23 14:51 sample_tavg_contents
    -rw-r--r--. 1 livetau livetau     123 Mar 23 14:51 sample_transport_file
    -rwxrwxr-x. 1 livetau livetau   90112 Mar 23 14:52 zlib1.dll
  4. Create a zip file from the contents of the transfer folder:
       cd $POPDIR/pop/windows/transfer
       zip -r $POPDIR/pop.zip *

    Your $POPDIR/pop.zip file should be approximately 6.2M in size.

  5. Verify that you have created pop.zip correctly by comparing your pop.zip file with ours. You can view the contents of our pop.zip file or you can download pop.zip and unpack it.

  6. If pop.zip looks correct, copy it to your local Windows installation. Don't extract it yet. We will create a new Windows Azure service before we unpack pop.zip and upload it to a Windows Azure storage service.

You are now ready to proceed to Step 3: The Windows Azure HPC Scheduler.


4. Deploy the Windows Azure HPC Scheduler

Windows Azure is a powerful and versatile cloud platform with extensive features and capabilities. Everything from big data applications to mobile apps and gaming is supported, but we are most interested in Azure's parallel HPC capabilities.

If you already have a Windows Azure compute cluster service up and running then skip to the next step. Otherwise, follow these instructions to use the Windows Azure HPC Scheduler to deploy a parallel computing cluster of four nodes (eight cores) as a Windows Azure service. You will install two Microsoft SDKs to your local Windows installation, open the Windows Azure Management Portal, and execute a Powershell script to deploy the cluster service.

5. Prerequisites

We'll begin by configuring your local Windows installation with the Windows Azure SDK and the Windows Azure HPC Scheduler SDK and acquainting you with the Windows Azure Management Portal. Follow the instructions below to prepare your Windows installation.

5.1. Windows Azure SDK version 1.6

The Windows Azure SDK gives developers the necessary tools to interface local applications with Windows Azure services. The full SDK for Microsoft .NET is required to use the tools presented in this tutorial.

  1. Go to the Windows Azure Developer Downloads site and click the "install" button to download the full install of the Windows Azure SDK for .NET:


  2. Execute the downloaded file. You will be prompted three times to allow the operation. Click Allow, Allow, Yes to begin the installation.

  3. The Web Platform Installer may also prompt you to select the authentication mode for SQL Server Express. If this happens, choose Mixed Mode Authentication and set a password that you will remember. The SDK will take a few minutes to download and install.

  4. Click Finish and Exit to complete the installation and exit Web Platform Installer.

5.2. Windows Azure HPC Scheduler SDK version 1.6

The Windows Azure HPC Scheduler will help you to launch and manage parallel HPC applications on a Windows Azure service. Its main jobs are to make it easy to deploy and use a parallel computing cluster as a Windows Azure service and to manage the HPC jobs you execute on the cluster.

  1. Download the Windows Azure HPC Scheduler SDK 64-bit installer from the Microsoft Download Center:


  2. Execute the downloaded file. You will be prompted to allow the installation. Install the SDK in the default location.

5.3. The Windows Azure Management Portal

Windows Azure Management Portal is the seat of power for your Windows Azure subscriptions. From the portal, you can perform service deployment and management tasks and monitor the overall health of your deployments and accounts.

  1. Microsoft Silverlight is required to use the portal. Go to http://www.microsoft.com/silverlight to install Silverlight in your browser.

  2. Go to http://windows.azure.com to log on to the Windows Azure Management Portal. Be sure to use the same Windows Live ID and password that you used to create your Windows Azure subscription.


    Don't forget the dot '.' between "windows" and "azure" in the above URL. If you do, you will be taken to the Windows Azure Homepage, not the Windows Azure Management Portal.

  3. We'll be referring to the portal throughout this part of the tutorial, so stay logged in and keep the browser window open.

6. Deploy Windows Azure HPC Scheduler via Powershell

Deploying the Windows Azure HPC Scheduler involves creating three Windows Azure components:

  • A Hosted Service containing six role instances:
    • One HeadNode instance to schedule, manage, and coordinate the parallel deployment,

    • Four ComputeNode instances to provide computational resources,

    • And one FrontEnd instance to provide web-based job submission and monitoring.

  • A Storage Account to hold state information for the cluster and provide a persistent shared storage location to all role instances,
  • And finally, a SQL Azure Database for maintaining scheduler configuration and state information.

Azure Deployment Diagram

All these components can be created easily with Microsoft Visual Studio Professional or via a Powershell script. We will use the Powershell approach because it has fewer prerequisites and is better suited to our needs in this tutorial.

  1. Download the PowerShell script package to your Windows installation and extract the zip file to create the WAHSPowershellDeployment folder.

  2. Double-click on Setup.cmd. Click Allow when you are prompted to allow Setup.cmd to make changes to your computer and you will be presented with a Powershell prompt in the WAHSPowershellDeployment\Code folder. We will refer to this window throughout this part of the tutorial.

  3. Execute the following command in the Powershell window:
  4. Enter azuretutorial.csdef when you are prompted for InputCSDef and azuretutorial.cscfg when you are prompted for InputCSCfg:

    Supply values for the following parameters:
    InputCSDef: azuretutorial.csdef
    InputCSCfg: azuretutorial.cscfg
    We'll learn more about these two files later on.
  5. Enter your Subscription ID when you are prompted for SubscriptionID. To locate your Subscription ID, go to the Windows Azure Management Portal and click on Hosted Services, Storage Accounts & CDN, Hosted Services, and find the subscription you want to use. Your Subscription ID is in the properties pane on the right:


    Copy the Subscription ID by highlighting it with your mouse and pressing Ctrl+C. Return to the Powershell Prompt and paste it by right-clicking and selecting Paste. Press Enter to continue.

  6. Enter 0 when you are prompted for your Windows Azure management certificate to generate a new self-signed certificate, or, if you installed a certificate earlier, type the number of the certificate you would like to use.

  7. If you generated a new certificate you will need to install it. In the Management Portal, click on Hosted Services, Storage Accounts & CDN, Management Certificates, and find the subscription you wish to use. Click on Add Certificate.



    Certificates and their use in Windows Azure are a detailed topic. See Overview of Certificates in Windows Azure to learn more.

  8. Check the Powershell window to find the location of the newly-generated self-signed certificate:


  9. In the Management Portal, browse to the certificate file and click OK. After a few seconds, your new certificate will appear on the list of management certificates. Return to the Powershell console and press Enter to continue.

  10. Enter 0 to create a new hosted service. Specify a name for the new hosted service and wait for the new hosted service to be created.

    The hosted service name must be less than 15 characters long.

    The script will not check the length of your hosted service name, so be careful to keep it under 15 characters. If your hosted service name is too long, the script will fail later on.

  11. Enter Y when you are prompted to create a new certificate for password encryption and SSL. Wait a few seconds for the new certificate to be created.

  12. Enter 0 when you are prompted to select a storage account. Specify a name for your new storage account and wait for the new storage account to be created.

  13. Enter 0 when you are prompted to select a SQL Azure server. Enter a name for the SQL Azure server administrator (e.g. admin) and a password for the administrator account. The password must contain at least three of the following four classes:

    • Uppercase alphabetical (e.g. ABCDEFG...)
    • Lowercase alphabetical (e.g. abcdefg...)
    • Numeric (e.g. 123456789)
    • Non-alphanumeric (e.g. !@#$%.;[])
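    The three-of-four rule above can be checked before you type the password into the script. Here is a POSIX-shell sketch (not part of the original tutorial; the password shown is a placeholder, not a suggestion):

```shell
# Count how many of the four required character classes a candidate
# password contains; at least three are needed.
pw='Example1!'   # placeholder value for illustration only
n=0
printf '%s' "$pw" | grep -q '[A-Z]'         && n=$((n+1))   # uppercase
printf '%s' "$pw" | grep -q '[a-z]'         && n=$((n+1))   # lowercase
printf '%s' "$pw" | grep -q '[0-9]'         && n=$((n+1))   # numeric
printf '%s' "$pw" | grep -q '[^A-Za-z0-9]'  && n=$((n+1))   # non-alphanumeric
if [ "$n" -ge 3 ]; then
  echo "password OK ($n classes)"
else
  echo "too weak ($n classes)"
fi
```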
  14. Enter 0 when you are prompted to select a database. Specify the name of your database, and again enter the new administrator name and password. Wait while your database is initialized.

  15. The script will now begin the deployment process by uploading files to a Windows Azure storage container and creating role instances. This may take up to an hour to complete, so be patient. In the meantime, let's learn more about how we define the deployment topology.

    There are two files that work together to define our cluster: a service definition file with a .csdef extension and a service configuration file with a .cscfg extension. Earlier, we specified these files as azuretutorial.csdef and azuretutorial.cscfg. The service definition file defines the Windows Azure roles (e.g. HeadNode, ComputeNode, etc.) that our service will use. Here is a copy of azuretutorial.csdef:

       <?xml version="1.0" encoding="utf-8"?>
       <ServiceDefinition name="TestService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
         <WebRole name="FrontEnd" vmsize="Small">
           <Sites>
             <Site name="HPC">
               <VirtualApplication name="Portal" />
               <Bindings>
                 <Binding name="HPCWebServiceHttps" endpointName="Microsoft.Hpc.Azure.Endpoint.HPCWebServiceHttps"/>
               </Bindings>
             </Site>
           </Sites>
           <Endpoints>
           </Endpoints>
           <Imports>
             <Import moduleName="Diagnostics" />
             <Import moduleName="RemoteAccess" />
             <Import moduleName="HpcWebFrontEnd" />
           </Imports>
           <ConfigurationSettings>
           </ConfigurationSettings>
           <LocalResources>
             <LocalStorage name="WFELocalStorage" cleanOnRoleRecycle="false" />
           </LocalResources>
           <Startup>
           </Startup>
         </WebRole>
         <WorkerRole name="HeadNode" vmsize="Medium">
           <Imports>
             <Import moduleName="Diagnostics" />
             <Import moduleName="RemoteAccess" />
             <Import moduleName="RemoteForwarder" />
             <Import moduleName="HpcHeadNode" />
           </Imports>
           <ConfigurationSettings>
           </ConfigurationSettings>
           <Endpoints>
           </Endpoints>
         </WorkerRole>
         <WorkerRole name="ComputeNode" vmsize="Medium">
           <Imports>
             <Import moduleName="Diagnostics" />
             <Import moduleName="RemoteAccess" />
             <Import moduleName="HpcComputeNode" />
           </Imports>
           <ConfigurationSettings>
           </ConfigurationSettings>
         </WorkerRole>
       </ServiceDefinition>

    As you can see, we've defined a small HpcWebFrontEnd role named "FrontEnd", a medium HpcHeadNode role named "HeadNode", and a medium HpcComputeNode role named "ComputeNode". The size of each node corresponds directly to the Windows Azure Virtual Machine Size and ranges from "extra small" to "extra large". Notice that the service definition file does not say how many instances of each role we will create. Those instance counts, along with our cluster's other configuration parameters, are defined in azuretutorial.cscfg:

       <?xml version="1.0" encoding="utf-8"?>
       <ServiceConfiguration serviceName="TestService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*">
         <Role name="FrontEnd">
           <Instances count="1" />
           <ConfigurationSettings>
             <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
           </ConfigurationSettings>
         </Role>
         <Role name="HeadNode">
           <Instances count="1" />
           <ConfigurationSettings>
             <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
           </ConfigurationSettings>
         </Role>
         <Role name="ComputeNode">
           <Instances count="4" />
           <ConfigurationSettings>
             <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
           </ConfigurationSettings>
         </Role>
       </ServiceConfiguration>

    Here we've configured one instance of the FrontEnd role, one instance of the HeadNode role, and four instances of the ComputeNode role. The Powershell script parses the two configuration files and creates Windows Azure Nodegroups corresponding to each role. If, in the future, you wish to change the topology of your cluster, simply edit these two files accordingly and re-run the Powershell script.
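    For example, to double the cluster's computational resources you would raise only the ComputeNode instance count in azuretutorial.cscfg. This fragment is a hypothetical edit (the count of 8 is illustrative, not from the original tutorial):

```
       <!-- Hypothetical edit to azuretutorial.cscfg: scale from four to
            eight ComputeNode instances; the rest of the Role element is
            unchanged. -->
       <Role name="ComputeNode">
         <Instances count="8" />
         <ConfigurationSettings>
           <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
         </ConfigurationSettings>
       </Role>
```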

  16. Your Powershell console should look something like this when the deployment is complete:


    Go to the Management Portal and click on Hosted Services, Storage Accounts & CDN. Select your subscription and find the Windows Azure HPC Scheduler Service to see all the node instances in your new compute cluster:


You are now ready to proceed to Step 4: Run POP on Windows Azure


7. Run POP on Windows Azure

You are ready to run POP on Windows Azure! To do that, we will need to submit a new compute job to the cluster. Jobs are submitted from a Command Prompt window on the cluster head node.

In this part of the tutorial, you will connect to the cluster's head node via Remote Desktop Connection (RDC). You will use RDC to upload POP and its supporting libraries, configure the cluster's firewall for MPI communication, submit a new job to execute POP, and finally view the output of the POP command.

7.1. Open a Remote Desktop Connection to the Head Node

  1. Go to http://windows.azure.com to log on to the Windows Azure Management Portal. Be sure to use the same Windows Live ID and password that you used to create your Windows Azure subscription. /!\ Be careful to include the dot '.' between windows and azure if you type the URL. If you omit the dot, you will go to the Windows Azure Homepage, not the Windows Azure Management Portal.

  2. In the Management Portal, click on Hosted Services, Storage Accounts & CDN, Hosted Services. In the main window, expand the Windows Azure HPC Scheduler Service and expand the HeadNode role. Select the HeadNode_IN_0 role instance and verify that the instance's status is set to Ready.


  3. If the instance is not yet ready, wait until it becomes ready before continuing. It may take several minutes for the instance to transition to the ready state.
  4. With HeadNode_IN_0 selected and ready, click Connect on the ribbon bar. Click Open when you are prompted to download the .rdp file.

  5. The .rdp file will open in Remote Desktop Connection. Don't worry if you receive a warning that the remote connection cannot be identified; just click Connect to continue. Enter your administrator password when prompted and click OK. Wait for the connection to be established.

  6. Remote Desktop Connection may warn that the identity of the remote computer cannot be verified. If this happens, check the box next to Don't ask me again for connections to this computer and click Yes.


  7. The Remote Desktop Connection will open, presenting you with a view of the desktop on the head node. We'll be using this window throughout this part of the tutorial so keep it open.

7.2. Upload POP to the Head Node and Distribute

The easiest way to copy files to the cluster is by simple copy-paste over Remote Desktop Connection. We will use this method to upload pop.zip from your Windows installation to the cluster head node. From there, we will distribute POP to the compute nodes.

  1. On your local Windows installation, use Windows Explorer to locate the pop.zip that you created earlier.
  2. Right-click pop.zip and select Copy.

  3. Open the Remote Desktop Connection window to the cluster head node.
  4. On the cluster head node, use Windows Explorer to navigate to E:\approot.

  5. Right-click in E:\approot and select Paste. pop.zip will be copied from your local Windows installation to the cluster:


  6. On the cluster head node, right-click pop.zip and select Extract All. Click Extract to create the new E:\approot\pop folder containing pop.exe and its supporting files.

  7. Open a Command Prompt window on the cluster head node and execute the following two commands to navigate to E:\approot and create a deployment package for POP:
       cd /D E:\approot
       hpcpack create pop-package.zip pop\


    You can learn more about hpcpack and associated commands in the Windows HPC Server 2008 R2 Technical Reference.

  8. Now that we've created the deployment package, we need to distribute it to all the cluster nodes and install it. To do this, we will need your storage account access key. To locate your access key, return to the Windows Azure Management Portal. Click on Hosted Services, Storage Accounts & CDN, Storage Accounts, and select your storage account. Click the View button in the Primary access key panel of the properties pane:


  9. Click the Copy to Clipboard button next to your storage account's primary key and click Close:



    For more information about Windows Azure storage keys, see How to View, Copy, and Regenerate Access Keys for a Windows Azure Storage Account.

  10. Return to the Command Prompt on the head node in the Remote Desktop Connection window. Type the following command on one line to upload pop-package.zip to your storage account. Replace accountName and accountKey with the name and key of your storage account, respectively. Since your key has been copied to your clipboard, you can just right-click the Command Prompt window and select Paste when you need to enter your key.

       hpcpack upload E:\approot\pop-package.zip /account:accountName /key:accountKey /relativePath:pop
  11. Sync the compute nodes with the new package:
       clusrun /nodegroup:computenode hpcsync

    If you are prompted for a password, enter your administrator password and enter Y to remember the password.

  12. POP is now installed in a special location available to all the cluster compute nodes. The %CCP_PACKAGE_ROOT% environment variable gives the path to this location on each node. You will see this environment variable used several times below.
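    If you want to confirm that the package reached every node, one optional check (a sketch; the caret escaping used here is explained in the next section) is to list the package directory on all compute nodes:

```shell
:: Optional check: list the deployed POP files on every compute node.
:: The carets (^) keep %CCP_PACKAGE_ROOT% from being expanded on the head node.
clusrun /nodegroup:computenode dir ^%CCP_PACKAGE_ROOT^%pop
```

    Each node should report the same set of files, including pop.exe.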

7.3. Configure Firewall Rules for MPI Communication

Before we can run POP we must open the firewall for communication between the compute nodes. These steps must be repeated for any application that communicates across nodes.

  1. Open a Command Prompt window on the cluster head node.
  2. Type the following command on one line to open the firewall to POP on all compute nodes:
       clusrun /nodegroup:computenode hpcfwutil register pop.exe ^%CCP_PACKAGE_ROOT^%pop\pop.exe


    The caret (^) escapes the percent signs so that the environment variable is evaluated on the compute nodes. Without the carets, %CCP_PACKAGE_ROOT% would be expanded on the head node before the command was sent, passing the wrong path to hpcfwutil.
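    You can see the difference with a quick, optional echo test from the head node's Command Prompt:

```shell
:: Expanded on the head node before the command is sent to the nodes:
clusrun /nodegroup:computenode echo %CCP_PACKAGE_ROOT%
:: Caret-escaped, so each compute node expands its own local value:
clusrun /nodegroup:computenode echo ^%CCP_PACKAGE_ROOT^%
```

    The first form prints the head node's value on every node; the second prints each node's own value.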

  3. If you are prompted for a password, enter your administrator password and enter Y to remember the password.
  4. Wait for the command to finish. All compute nodes should return 0 to indicate success:


7.4. Use Command Prompt to Submit a New Job

With all these preparations complete, we can now run POP on the cluster! We will use the job command to submit a POP run as a new job to the cluster and the HPC Job Manager to monitor the status of our new job and view POP output. (1)

  1. Open a Command Prompt window on the cluster head node and execute the following command:
       job submit /jobname:POP /nodegroup:computenode /numcores:4 mpiexec -np 4 -wdir ^%ccp_package_root^%pop ^%ccp_package_root^%pop\pop.exe
    If the job submission is successful your new job will be assigned a number. Remember your job number for the next steps.
  2. Go to the Start Menu, and click on All Programs, Microsoft HPC Pack 2008 R2, HPC Job Manager to start the HPC Job Manager.

  3. In the Job Management pane on the left, click on All Jobs and select your job in the main window:

    Your job should be in the Running or Finished state. If it is in the Error state, check your job submission command line for errors and resubmit.

  4. Wait until your job is in the Finished state. The example POP run is relatively small, so it should be finished in about a minute. Double-click on your job in the main window. Select View Tasks on the left and view your job's console output in the Output box:
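    If you prefer to stay in the Command Prompt, the job command can also report status. A sketch, assuming your submission was assigned the hypothetical job number 42:

```shell
:: Replace 42 with the job number printed by "job submit".
job view 42
```

    The output includes the job's current state (Queued, Running, Finished, or Failed).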


7.5. Notes

  1. Jobs can also be submitted via the Windows Azure HPC Scheduler Web Portal. To reach the portal, navigate to https://<service_name>.cloudapp.net/portal, where <service_name> is the name of your Windows Azure Hosted Service.


8. Summary

Congratulations! You have completed the tutorial and run POP on Windows Azure. In this tutorial you:

  1. Built NetCDF for Windows

  2. Used PToolsWin and HPC Linux to cross-compile POP

  3. Deployed a Windows Azure compute cluster service

  4. Ran POP on Windows Azure and examined the results

We encourage you to view our other Windows Azure tutorials. Please contact us if you have any questions.


Here are the binary files created during this tutorial. Please feel free to use and redistribute this software according to the appropriate software licenses.