====== CST simulations with SLURM ======

A total of 6 CST licenses are available for use on the cluster.
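
Before submitting a job it can be useful to see how many of these licenses are
already in use. The sketch below is one way to check; it assumes the licenses
are tracked by SLURM (as the ''--licenses=cst:1'' directive in the templates
suggests) and that the installed SLURM version supports ''scontrol show licenses''.

<code bash>
# List licenses known to SLURM, including totals and how many are in use.
# Assumption: CST licenses are tracked by SLURM and this subcommand is
# available in the cluster's SLURM version.
scontrol show licenses
</code>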

Below are SLURM templates for two types of CST simulations. The first is
intended primarily for undergraduate project students. The second is only
usable by certain postgraduate students and staff (the SLURM system limits the
number of CPUs and memory available to different classes of users).

** Please use the smallest feasible values for the simulation so that our
cluster usage can be optimised. **

Note: In the templates, there are various parameters you should change to
reflect your simulation requirements. These parameters are indicated with a
''<CHANGETHIS>'' tag. For example:

<code>
#SBATCH --mail-user=<CHANGETHIS>
</code>

===== Moderate size multi-CPU CST simulation: 1 license =====

This template, for a model named ''cst_multi'', is suitable for larger
simulations that require more memory than is available on the project lab
computers and would typically run for more than an hour on a lab computer. A
single task is defined that will be allocated 4 CPU cores.

Typical values for ''--mem-per-cpu'' are 1000 (for 1 GB) or 2000 (for 2 GB)
for larger simulations; with ''--cpus-per-task=4'', a value of 2000 gives the
job 8 GB in total. CST generally uses less memory than FEKO.

Note that **''--m''** selects CST MICROWAVE STUDIO and **''--q''** selects the
integral equation solver. For other modes, see the CST user guide.

<file bash cst_multi.slurm>
#!/bin/bash
#SBATCH --output=<CHANGETHIS>.log
#SBATCH --job-name=<CHANGETHIS>
#SBATCH --cpus-per-task=4
#SBATCH --mem-per-cpu=<CHANGETHIS>
#SBATCH --licenses=cst:1
#SBATCH --time=<CHANGETHIS>
#SBATCH --mail-type=END
#SBATCH --mail-user=<CHANGETHIS>

# Run CST model simulation
srun /usr/local/CST/CST_STUDIO_SUITE/cst_design_environment --m --q --numthreads 4 cst_multi.cst
</file>
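
Once the template has been edited and saved next to the model file, it is
submitted as a normal SLURM batch job. The commands below are standard SLURM
usage; the file and model names follow the template above.

<code bash>
# Submit the job script to the scheduler.
sbatch cst_multi.slurm

# Check the state of your jobs in the queue (PD = pending, R = running).
squeue -u $USER
</code>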

===== Maximum size multi-CPU CST simulation: 1 license =====

This template, for a model named ''cst_max'', is suitable for very large
simulations that would require weeks to run on a desktop computer. A single
task is defined that will be allocated 8 CPU cores. CST does not generally
scale well beyond 8 threads.

Typical values for ''--mem-per-cpu'' are 1000 (for 1 GB) or 2000 (for 2 GB)
for larger simulations. CST generally uses less memory than FEKO.

<file bash cst_max.slurm>
#!/bin/bash
#SBATCH --output=<CHANGETHIS>.log
#SBATCH --job-name=<CHANGETHIS>
#SBATCH --cpus-per-task=8
#SBATCH --mem-per-cpu=<CHANGETHIS>
#SBATCH --licenses=cst:1
#SBATCH --time=<CHANGETHIS>
#SBATCH --mail-type=END
#SBATCH --mail-user=<CHANGETHIS>

# Run CST model simulation
srun /usr/local/CST/CST_STUDIO_SUITE/cst_design_environment --m --q --numthreads 8 cst_max.cst
</file>
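
To choose sensible ''--mem-per-cpu'' and ''--time'' values for follow-up runs,
you can look at what a completed job actually used. The sketch below relies on
standard SLURM accounting and assumes accounting is enabled on the cluster;
replace ''<JOBID>'' with the job number reported by ''sbatch''.

<code bash>
# Show peak memory (MaxRSS) and elapsed time for a finished job.
sacct -j <JOBID> --format=JobID,JobName,MaxRSS,Elapsed,State
</code>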

===== Integration with Windows based CST front-end =====

The following integration with the cluster scheduler makes it somewhat easier
to use the cluster for CST simulations. However, this approach relies on fixed
default settings that may not be suitable for all types of simulations.
Specifically, the effective SLURM settings allocate 6 CPU cores with 1.5 GB
per core, with a time limit of 20 hours. For larger or longer simulations one
of the above templates should be used, with the SLURM parameters set
explicitly to suit the simulation. The manual approach also allows the job
completion notification to be set.
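
In terms of the template parameters above, the front-end defaults correspond
roughly to the following settings (shown only for comparison; the actual
values are applied by the integration scripts, not by a script you write):

<code bash>
#SBATCH --cpus-per-task=6    # 6 CPU cores
#SBATCH --mem-per-cpu=1500   # 1.5 GB per core
#SBATCH --time=20:00:00      # 20 hour limit
</code>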

  - Ensure you have a working cluster username by logging into the head node.
  - Create a work directory for the CST front-end simulation uploads (it should correspond with the "Work directory" setting below; see the example after this list):
    * **''mkdir CST''**
  - Download the CST Cluster Integration Guide and Scripts from here: [[http://lftp.ee.up.ac.za/CST/Cluster/]]
  - Follow the instructions in section 4.2.3 starting on page 5. Use the following settings:
    * Hostname/IP of Scheduler Node: **''alpha1-1.ee.up.ac.za''**
    * CST Installation Path on Cluster: **''/usr/local/CST/CST_STUDIO_SUITE''**
    * Work directory (set your username correctly): **''/home/<USERNAME>/CST''**
    * The username and password are those used to log into the head node.
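
A minimal sketch of the first two steps from a command line, assuming the
scheduler node listed above is also the head node you log into over SSH
(replace ''<USERNAME>'' with your cluster username):

<code bash>
# Log into the cluster head node with your cluster username.
ssh <USERNAME>@alpha1-1.ee.up.ac.za

# On the head node: create the work directory the CST front-end uploads to.
mkdir -p ~/CST
</code>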

Jobs submitted will appear as normal SLURM jobs (visible using
**''squeue''**). Unfortunately the current integration scripts provided by CST
do not allow the progress of the simulation to be monitored. You can monitor
the progress using the approach described in the next section.

===== Monitoring CST job progress =====

Change to the top-level directory of the job work directory and use
**''cst_progress''**. This will periodically show the contents of the
"progress.log" file, which is normally used to indicate simulation progress on
the CST front-end. Use Ctrl-C to cancel the file monitoring.
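
As an alternative, a similar effect can be had with standard shell tools; this
assumes the job writes its progress to a file named ''progress.log'' in the
work directory, as described above.

<code bash>
# Follow the progress log as the simulation appends to it (Ctrl-C to stop).
tail -f progress.log
</code>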