Polaris

POLARIS is the name of the third-generation HPC cluster at KASI. It is located in a correlation room at the Jangyoung-Sil Hall. It is composed of one frontend node (IBM System x3750 M4) and 35 compute nodes (IBM System x3530 M4). The frontend has four Intel Xeon E5-4650 processors (8 cores per processor) and 96 GB of memory. Each compute node has two Intel Xeon E5-2470 processors (8 cores per processor) and 48 GB of memory. In total, the cluster has 592 cores and 1776 GB of memory. The frontend and compute nodes are connected by an FDR (56 Gbit/sec) InfiniBand switch. The total disk space is around 500 TB.

The POLARIS cluster was built with a research budget from the Korea Astronomy and Space Science Institute. It is mainly used by KASI researchers for numerical simulations and data processing. The cluster resources are open to both internal KASI members and external astronomers.

If you want to use the cluster or have any questions about it, please contact Dr. Jongsoo Kim (jskim@kasi.re.kr).


Howto

How to compile my serial Fortran or C program?

You may use either the GNU or the Intel compilers. No environment settings are needed for the GNU compilers. For the Intel compilers and the Intel Math Kernel Library (Intel MKL), however, you should add

source /share/apps/intel/bin/compilervars.csh intel64
source /share/apps/intel/mkl/bin/mklvars.csh intel64

in your $HOME/.cshrc or

source /share/apps/intel/bin/compilervars.sh intel64
source /share/apps/intel/mkl/bin/mklvars.sh intel64

in $HOME/.bashrc. The following examples show how to compile with the Intel compilers:

ifort -o main.x main.f sub1.f sub2.f

for Fortran programs,

icc -o main.x main.c sub1.c sub2.c

and for C programs. If you want to use the GNU Fortran (C) compiler, replace ifort (icc) with gfortran (gcc) in the examples above.

How to compile my OpenMP Fortran or C program?

Again, you should set the environment variables for the Intel compilers by adding

source /share/apps/intel/bin/compilervars.csh intel64

in your $HOME/.cshrc or

source /share/apps/intel/bin/compilervars.sh intel64

in $HOME/.bashrc. The following examples show how to compile OpenMP programs with the Intel compilers:

ifort -openmp -o main.x main.f sub1.f sub2.f

for Fortran programs,

icc -openmp -o main.x main.c sub1.c sub2.c

and for C programs. If you use the GNU compilers, replace -openmp with -fopenmp.

How to compile my Fortran or C MPI program?

Again, you should set environment variables, this time for the Intel MPI library, by adding

source /share/apps/intel/impi/5.0.3.048/intel64/bin/mpivars.csh intel64

in your $HOME/.cshrc or

source /share/apps/intel/impi/5.0.3.048/intel64/bin/mpivars.sh intel64

in $HOME/.bashrc. Here are simple examples using Intel MPI. To compile your Fortran MPI programs,

mpiifort -o main.x main.f sub1.f sub2.f

and your C MPI programs,

mpiicc -o main.x main.c sub1.c sub2.c
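For reference, here is a minimal C MPI program (a generic sketch using only standard MPI calls, not part of the cluster software) of the kind mpiicc would compile:

```c
#include <stdio.h>
#include <mpi.h>

/* Each MPI process reports its rank out of the total number of processes. */
int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
```

When launched with mpirun -n 4 ./main.x, each of the four processes prints one line with its own rank. On the cluster, MPI programs should be launched through the batch system rather than directly on the frontend.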

How to make a script for my serial batch job?

SGE (Sun Grid Engine) is installed for scheduling batch jobs. To run your serial program in batch mode, you should first prepare a script. Here is a simple script named “serial.sge.”

#!/bin/csh -f
#$ -N serialjob
#$ -cwd
./main.x

It looks simple, doesn't it? There are two places where you should make changes for your own job. In the second line, serialjob is the name your job will have in SGE; you can choose any name you like. The second thing to modify is the name of your executable file, main.x in the example above.

How to make a script for my OpenMP batch job?

SGE (Sun Grid Engine) is installed for scheduling batch jobs. To run your OpenMP code, you should first prepare a script. Here is a simple script named “openmp.sge.”

#!/bin/bash
#$ -N openmpjob
#$ -cwd
#$ -S /bin/bash
#$ -o out.txt
#$ -e err.txt
#$ -pe openmp 8

export OMP_NUM_THREADS=$NSLOTS
export LD_LIBRARY_PATH=/share/apps/intel/lib/intel64
./a.out

This script is a bit more complicated, but there are only three places where you should make changes for your own job. In the second line, openmpjob becomes the name of your job in SGE; you can choose any name you like. In the line “#$ -pe openmp 8”, the number 8 is the number of threads for your job; since each compute node has 16 cores, it should not exceed 16. The last thing to modify is the name of the executable file, a.out in the example above.

How to make a script for my MPI batch job?

SGE (Sun Grid Engine) is installed for scheduling batch jobs. To run your MPI code, you should first prepare a script. Here is a simple script named “mpi.sge.”

#!/bin/bash
#$ -N paralleljob
#$ -pe mpich 16
#$ -S /bin/bash
#$ -cwd

# Set Intel MPI environment
mpi_dir=/share/apps/intel/impi/5.0.3.048/intel64/bin
source $mpi_dir/mpivars.sh intel64

echo "Got $NSLOTS slots."
mpirun -genv I_MPI_FABRICS shm:dapl -n $NSLOTS ./main.x

This script is a bit more complicated, but there are only three places where you should make changes for your own job. In the second line, paralleljob becomes the name of your job in SGE; you can choose any name you like. In the third line, 16 is the number of cores for your job; adjust it as needed, but keep it smaller than 128. The last thing to modify is the name of the executable file, main.x in the example above.

How to submit my job?

Make a script named serial.sge (or openmp.sge, or mpi.sge) following the examples above. Then use the following command,

qsub serial.sge

or

qsub mpi.sge

How to check whether my job is running?

A command

qstat

allows you to see the current status of queued and running jobs. Here is an example:

job-ID  prior    name        user    state  submit/start at      queue                    slots  ja-task-ID
-----------------------------------------------------------------------------------------------------------
   111  0.55500  E15_b0.1_5  jskim   r      09/09/2009 20:23:23  all.q@compute-0-3.local     16
   135  0.55500  E15_b010_5  jskim   r      09/17/2009 20:05:54  all.q@compute-0-4.local     16
   103  0.55500  E.15_b10_5  jskim   r      09/04/2009 20:52:03  all.q@compute-0-5.local     16

The fifth column, “state,” shows the status of each job. For example, the job with job-ID 111 is running (state “r”); a job still waiting in the queue is shown with state “qw.”

How to cancel my job?

If you want to kill your job, for example the one with job-ID 111, use the following command.

qdel 111


Software

Here is the list of software installed on the cluster.

  Intel Cluster Studio 2013 for Linux
  IDL 9.0

Acknowledgment

If you are going to publish work that made use of the HPC cluster's resources, please include the following acknowledgment. Jongsoo would also appreciate it if you let him know about such publications.

Numerical simulations (or Data reductions) were performed by using a high performance computing cluster at the Korea Astronomy and Space Science Institute.