Submitting Batch Jobs
The CRC uses Grid Engine for queuing, Modules to load software, and the Red Hat Enterprise Linux operating system.
Short runs (less than 1 hour) for code development, testing, etc. can be done on the CRC Front End machines.
Long runs for production jobs should be submitted through the Grid Engine batch system.
The message of the day (MOTD), displayed when logging in to the CRC, contains important and up-to-date communication about changes and outages within the CRC, so please check it often. If you have questions, problems, or suggestions, please send mail to CRCSupport@nd.edu.
AFS Note
UGE (Univa Grid Engine) allows transparent use of the AFS filesystem used with the CRC. UGE makes a copy of your AFS token, which is used for authentication on the machine where your job runs. The token lifetime for all batch jobs has been set to 30 days. Please contact CRC staff before submitting a job if you suspect that it will run for longer than 30 days, regardless of whether it runs on faculty-owned machines or CRC general access machines.
Note that the actual maximum lifetime for a job in the general queue is still 15 days; on research group resources it varies from 2 to 30 days.
UGE Configuration Intent
Environments - used to categorize job/application type [serial or parallel]
Queues - used to categorize job/application type [debug, long]
Host Groups - used to categorize ownership, access, and priority
User Groups - applied to host groups
We order Host Groups within the Queues such that group-owned equipment comes first (in order of each group's newest/most capable equipment) and the general access CRC equipment comes last. You can explicitly specify a Host Group within any given Queue using the following syntax, where general_access is a host group:
#!/bin/bash
#$ -pe smp 24
#$ -q long@@general_access
Submitting a Job to UGE
You can begin submitting jobs to the batch system using the qsub command. This is done by writing a script in either Bash or the C Shell scripting language and then submitting this script to UGE. Such “batch scripts” usually contain two types of commands, those that specify options to UGE and those that execute the UNIX commands necessary to run your job.
Options to UGE
The following options can be given to qsub on the command line, or preceded with #$ in batch scripts. If an option is specified both on the command line and as a directive in the batch script, the command-line value takes precedence. Additional information on qsub can be found by typing man qsub:
-M afsid@nd.edu # Specify an address where UGE should send email about your job.
-m abe # Tell UGE to send email to the specified address if the job is aborted, begins, or ends.
-pe mpi-24 48 # IF PARALLEL - Tell UGE the parallel environment and the number of CPU cores your job will need (see the Parallel jobs section below).
Note
For parallel jobs, the number of CPU cores is required; jobs that use more cores than they request will be killed by an administrator. For serial jobs this line is not necessary, as the default is one core. Jobs requesting a large number of CPUs may spend a long time waiting in the queue, so it can be more practical to request fewer CPUs (4-8) and start running sooner than to wait for a large number of processors to become idle.
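As an illustration of giving options on the command line rather than in the script, the directives above can be passed directly to qsub. The address and script name here are placeholders:

```shell
# Equivalent to putting "#$ -M ...", "#$ -m abe", and "#$ -pe smp 4"
# inside the script; command-line values override script directives.
qsub -M afsid@nd.edu -m abe -pe smp 4 my_job.sh
```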
Some examples of submitting a batch job
Here we submit a batch script named hello_world.job, which runs a simple Python 3 script. It is assumed that the file hello_world.py and the batch script hello_world.job are in the directory from which you submit the job. It is a single-core job. The job is not re-runnable, and mail will be sent to johndoe@nd.edu when the job begins, if it aborts, and when it ends.
The file hello_world.job looks like
#!/bin/bash
#$ -M johndoe@nd.edu
#$ -m abe
#$ -r n
module load python/3.6.4
python3 hello_world.py
To submit the job to UGE, you simply type:
qsub hello_world.job
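On submission, UGE typically responds with the assigned job ID, and the job's output and error streams are written to files in the submit directory. The job ID below is illustrative:

```shell
qsub hello_world.job
# Your job 123456 ("hello_world.job") has been submitted

# By default stdout and stderr go to <jobname>.o<jobid> and <jobname>.e<jobid>:
cat hello_world.job.o123456
```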
Parallel Example
Using OpenMPI:
#!/bin/bash
#$ -pe mpi-24 48
#$ -q *@@crc_d12chas # Specify the CRC general access machines: dual 12-core Intel Haswell (24 cores per machine)
module load ompi/2.1.1-intel-17.1
mpirun -np $NSLOTS yourapplication
The -pe mpi-24 directive specifies that this is an MPI job which should run on nodes that have 24 cores each. -pe mpi-* allows nodes with an arbitrary number of cores to be used, with the caveat that the number of cores requested must be an integer multiple of the number of cores on the assigned nodes. For example, -pe mpi-* 24 can be assigned to 6-, 8-, 12-, or 24-core machines, whereas -pe mpi-* 32 can only be assigned to 8-core nodes.
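Putting the wildcard form into a complete script, a sketch of a -pe mpi-* job might look like the following. The module version is copied from the OpenMPI example above, and the application name is a placeholder:

```shell
#!/bin/bash
#$ -M johndoe@nd.edu
#$ -m abe
#$ -pe mpi-* 48    # 48 is divisible by 6, 8, 12, and 24, so any of those node sizes qualifies

module load ompi/2.1.1-intel-17.1
mpirun -np $NSLOTS yourapplication    # $NSLOTS is set by UGE to the number of granted cores
```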
Using the -pe smp directive instead of -pe mpi is equivalent to asking permission to use multiple cores in your batch job on a single node. To run a job on multiple cores, you will also need to tell the application itself how many cores to use. Some common methods of doing this are outlined below:

MPI programs use the -np option to the mpirun command to specify the number of CPUs to run on.
OpenMP programs use the OMP_NUM_THREADS environment variable to control the number of CPUs the program runs on.
Other programs may have their own flags or methods of controlling parallel execution (for instance, Matlab's -singleCompThread flag, which restricts Matlab to a single computational thread).
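For an OpenMP code, the smp environment and OMP_NUM_THREADS can be combined in one script. This is a sketch; the program name is a placeholder:

```shell
#!/bin/bash
#$ -pe smp 8                       # request 8 cores on a single node

export OMP_NUM_THREADS=$NSLOTS     # match the OpenMP thread count to the granted cores
./your_openmp_program
```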
Monitoring Batch Jobs
Jobs can be monitored using the qstat command. Some useful forms:
qstat
With no arguments, qstat will print the status of all jobs in the queue. The output shows the following:
The job ID number
Priority of job
Name of job
ID of user who submitted job
State of the job: States can be
t(ransferring)
r(unning)
Submit or start time and date of the job
If running - the queue in which the job is running
The function of the running job (MASTER or SLAVE)
The job array task ID
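An illustrative line of qstat output, mapping onto the fields above (all values, including the host name, are made up):

```shell
qstat
# job-ID  prior    name          user     state  submit/start at      queue                        slots
# 123456  0.55500  hello_world   johndoe  r      01/15/2024 10:30:00  long@d12chas001.crc.nd.edu   1
```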
qstat -u $USER
The -u flag will show all jobs owned by the specified user, following the information format above.
qstat -f [JobID]
The -f flag displays a full listing of the job with the given Job ID (or of all jobs if no Job ID is given). The output shows the following:
For each queue the information printed consists of:
the queue name
the queue type: Types or a combination of types can be
B(atch)
P(arallel)
The number of used and available job slots
The load average on the queue host
The architecture of the queue host
The state of the queue - Queue states or a combination of states can be
a(larm)
A(larm)
s(uspended)
d(isable)
D(isable)
E(rror)
R(estarted) - the job has been restarted (Rr) or is waiting to be restarted (Rq); this typically follows a node crash.
qstat -j [job_list]
The -j flag with a list of Job IDs will print the reason a job was not scheduled, or various job-related information.
Additional information can be obtained by looking at the man page for qstat: type man qstat.
Canceling Batch Jobs
Jobs can be cancelled or killed using the qdel command. The most common form is qdel JobID, which kills the job matching the given Job ID. The Job ID can be obtained using the qstat command described above.
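A typical sequence is to look up the job ID with qstat and then pass it to qdel. The job ID below is illustrative:

```shell
qstat -u $USER    # the job ID is in the first column
qdel 123456       # kill that one job
qdel -u $USER     # or delete all of your own jobs at once
```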
Large Memory Jobs
The CRC's job scheduler does not provision RAM on a per-job basis. If a batch job is known to use a large amount of memory (RAM), you must request more CPUs than the job strictly needs in order to keep the ratio of memory to CPUs stable. For example, the general access machines each have 256 GB of RAM for 24 cores: 256 GB / 24 cores ≈ 10.66 GB/core. Should your job require more than 10 GB of RAM on a general access node, you should request at least 2 cores with #$ -pe smp 2 within the job script.
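The core count for a memory-bound job can be derived with a little shell arithmetic, using the conservative 10 GB/core figure from the text. The 45 GB value is just an example:

```shell
mem_gb=45                        # expected peak memory of the job, in GB
cores=$(( (mem_gb + 9) / 10 ))   # ceiling division: 45 GB -> 5 cores
echo "#$ -pe smp $cores"         # directive to place in the job script
```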
Interactive Jobs
UGE is not limited to batch jobs. You can also run jobs interactively on the compute nodes when needed. The command below requests an interactive job with 8 cores on a single node:
qrsh -pe smp 8
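As with batch jobs, the interactive request can be combined with other qsub-style options, for example pinning the session to a host group. The queue and host group names are copied from the batch example above:

```shell
qrsh -pe smp 8 -q long@@general_access
```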