Commit 95cb4ad2 authored by Brad Kennedy's avatar Brad Kennedy

Add dynamic qsub options fieldname

parent deab7d7f
@@ -3,13 +3,15 @@
% Usage:
% >> properties = batchconfig2propgrid(batchconfig);
% >> properties = batchconfig2propgrid(batchconfig, scheduler);
% Graphical Interface:
% Required Inputs:
% batch_config = batch_configuration structure created from
% pop_batch_edit.
% batch_config - batch_configuration structure created from
% pop_batch_edit.
% scheduler - string to use as a hint to find the scheduler
% documentation for options
% Optional Inputs:
@@ -101,7 +103,7 @@ for i=1:length(batchconfig);
'Type', PropertyType('cellstr', 'column'), ...
'Category', ['Level ',num2str(i),' - ',batchconfig(i).file_name], ...
'DisplayName', ['qsub_options'], ...
'Description', 'TODO') ...
'Description', get_scheduler_options(batchconfig(i).exec_func)) ...
PropertyGridField(['qsub[',num2str(i),'].memory'], batchconfig(i).memory, ...
'Type', PropertyType('char', 'row'), ...
'Category', ['Level ',num2str(i),' - ',batchconfig(i).file_name], ...
@@ -134,3 +136,24 @@ for i=1:length(batchconfig);
'Description', ['Options ???.']) ...
function outstr = get_scheduler_options(scheduler)
% Get this directory
mname = which(mfilename());
[path, ~, ~] = fileparts(mname);
fname = fullfile(path, [scheduler '_options.txt']);
fid = fopen(fname);
if fid == -1
    outstr = sprintf( ...
        ['qsub options depend on the scheduler used; we looked in '...
        '%s for this file but were unable to open it. Populate this'...
        ' file to have it show here.'], fname);
else
    outstr = fread(fid, '*char')';
    fclose(fid);
end
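The helper above looks for a `<scheduler>_options.txt` file next to the script and falls back to an explanatory message when the file cannot be opened. A minimal Python sketch of the same lookup-with-fallback pattern (function name and message text mirror the MATLAB code; the Python version is illustrative, not part of the commit):

```python
from pathlib import Path

def get_scheduler_options(scheduler: str, base_dir: Path) -> str:
    """Return the contents of <scheduler>_options.txt from base_dir,
    or a hint telling the user where to place the file."""
    fname = base_dir / f"{scheduler}_options.txt"
    try:
        return fname.read_text()
    except OSError:
        return (f"qsub options depend on the scheduler used; we looked in "
                f"{fname} for this file but were unable to open it. "
                f"Populate this file to have it show here.")
```

Keeping the options in a plain text file beside the code lets each cluster document its own scheduler flags without touching the program.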
-a, --array=indexes job array index values
-A, --account=name charge job to specified account
--bb=<spec> burst buffer specifications
--bbf=<file_name> burst buffer specification file
--begin=time defer job until HH:MM MM/DD/YY
--comment=name arbitrary comment
--cpu-freq=min[-max[:gov]] requested cpu frequency (and governor)
-c, --cpus-per-task=ncpus number of cpus required per task
-d, --dependency=type:jobid defer job until condition on jobid is satisfied
--deadline=time remove the job if no ending possible before
this deadline (start > (deadline - time[-min]))
--delay-boot=mins delay boot for desired node features
-D, --workdir=directory set working directory for batch script
-e, --error=err file for batch script's standard error
--export[=names] specify environment variables to export
--export-file=file|fd specify environment variables file or file
descriptor to export
--get-user-env load environment from local cluster
--gid=group_id group ID to run job as (user root only)
--gres=list required generic resources
--gres-flags=opts flags related to GRES management
-H, --hold submit job in held state
--ignore-pbs Ignore #PBS options in the batch script
-i, --input=in file for batch script's standard input
-I, --immediate exit if resources are not immediately available
--jobid=id run under already allocated job
-J, --job-name=jobname name of job
-k, --no-kill do not kill job on node failure
-L, --licenses=names required license, comma separated
-M, --clusters=names Comma separated list of clusters to issue
commands to. Default is current cluster.
Name of 'all' will submit to run on all clusters.
                            NOTE: SlurmDBD must be up.
-m, --distribution=type distribution method for processes to nodes
(type = block|cyclic|arbitrary)
--mail-type=type notify on state change: BEGIN, END, FAIL or ALL
--mail-user=user who to send email notification for job state
--mcs-label=mcs mcs label if mcs plugin mcs/group is used
-n, --ntasks=ntasks number of tasks to run
--nice[=value] decrease scheduling priority by value
--no-requeue if set, do not permit the job to be requeued
--ntasks-per-node=n number of tasks to invoke on each node
-N, --nodes=N number of nodes on which to run (N = min[-max])
-o, --output=out file for batch script's standard output
-O, --overcommit overcommit resources
-p, --partition=partition partition requested
--parsable outputs only the jobid and cluster name (if present),
separated by semicolon, only on successful submission.
--power=flags power management options
--priority=value set the priority of the job to value
--profile=value enable acct_gather_profile for detailed data
value is all or none or any combination of
energy, lustre, network or task
--propagate[=rlimits] propagate all [or specific list of] rlimits
--qos=qos quality of service
-Q, --quiet quiet mode (suppress informational messages)
--reboot reboot compute nodes before starting job
--requeue if set, permit the job to be requeued
-s, --oversubscribe over subscribe resources with other jobs
-S, --core-spec=cores count of reserved cores
--signal=[B:]num[@time] send signal when time limit within time seconds
--spread-job spread job across as many nodes as possible
    --switches=max-switches[@max-time-to-wait]
                            optimum switches and max time to wait for optimum
--thread-spec=threads count of reserved threads
-t, --time=minutes time limit
--time-min=minutes minimum time limit (if distinct)
--uid=user_id user ID to run job as (user root only)
--use-min-nodes if a range of node counts is given, prefer the
smaller count
-v, --verbose verbose mode (multiple -v's increase verbosity)
-W, --wait wait for completion of submitted job
--wckey=wckey wckey to run job under
--wrap[=command string] wrap command string in a sh script and submit
provide a runtime limit (elapsed, wallclock time, not summed
across cpus) specified in any of the following forms:
15 (assumed to be minutes)
15m (same)
.25h (same)
2.5h (2 hours 30 minutes)
3.5d (3 days 12 hours)
84:0 (same, in LSF's hours:minutes format)
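The runtime formats listed above (bare minutes, `m`/`h`/`d` suffixes, and LSF-style `hours:minutes`) can all be normalized to minutes with a small parser. This is an illustrative sketch of that conversion, not part of the submitted code:

```python
def runtime_to_minutes(spec: str) -> float:
    """Convert a runtime limit such as '15', '15m', '.25h', '2.5h',
    '3.5d' or '84:0' (LSF hours:minutes) into minutes."""
    spec = spec.strip()
    if ":" in spec:                       # LSF-style hours:minutes
        hours, minutes = spec.split(":", 1)
        return float(hours) * 60 + float(minutes)
    if spec.endswith("m"):                # explicit minutes
        return float(spec[:-1])
    if spec.endswith("h"):                # hours
        return float(spec[:-1]) * 60
    if spec.endswith("d"):                # days
        return float(spec[:-1]) * 1440
    return float(spec)                    # bare value is minutes
```

For example, `3.5d` and `84:0` both come out as 5040 minutes, matching the "same" annotations in the list above.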
-i ifile job reads inputs from 'ifile' (no default)
-o ofile job output to 'ofile' (REQUIRED FOR EVERY JOB)
-e efile job errors go to 'efile' (default: same as -o)
-t|--test 'test' mode: short but immediate (preemptive)
-q queue queue name (serial, threaded, mpi; default serial)
-f flag specify certain flags to modify behavior. flags include:
mpi, interactive, test, mail, permitcoredump
-n ncpus require n cpus or cores (default 1)
-N nnodes require n nodes (does not imply exclusive use)
--ppn=ppn start ppn processes per node
--tpp=tpp permit tpp threads per process (OMP_NUM_THREADS)
--gpp=gpp allocate gpp gpus per process
--memperproc= amount of memory required by each process. may be specified
like 64M or 2.5G (M=2^20, G=2^30). for an MPI job, this is
the per-rank size. for threaded jobs, it's the process size,
(that is, not per-thread.)
require a specific set of nodes. eg wha[1-4] or req666.
--pack require a minimal number of nodes, so processes occupy
all cpus per node.
--mail-start notify when the job starts.
--mail-end notify when the job ends (either normally or not).
--mail-abort notify when the job ends abnormally.
-m|--mail (compatibility - same as mail-end)
this email only goes to your account's email address.
wait for a list of jobs to complete
-j|--jobname provides a name for the job.
--project specify a project (group) for accounting purposes.
defaults to the user's group. may also be given via
SQ_PROJECT environment variable.
--idfile=fname write the jobid into a file named 'fname'.
--nompirun don't automatically invoke mpirun for mpi jobs.
note that you should probably look at mpirun parameters
sqsub uses, so that you get layout and binding right.
-f flag specify certain flags to modify behavior.
Universal flags include: mpi, threaded, test, mail
                   on some clusters, other flags have added meaning, such as
                   xeon/opteron on Hound, dual/quad on Goblin, and
                   selecting sub-clusters on Kraken (bal/bru/dol/meg/tig/wha/nar)
-h or --help show brief usage message
--man show man page
-v|--verbose verbose mode: shows debugging-type details
-d|--debug debug mode: don't actually submit, but show the command
@@ -83,6 +83,10 @@ classdef PropertyGridField < hgsetget
function a = get.Description(self)
a = self.Description;
function self = set.ReadOnly(self, readonly)
validateattributes(readonly, {'logical'}, {'scalar'});
self.ReadOnly = readonly;
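The `set.ReadOnly` method guards the property with `validateattributes`, rejecting anything but a logical scalar before storing it. The same guard in a Python property setter (class and attribute names are illustrative, not from the repo) looks like:

```python
class PropertyGridField:
    def __init__(self) -> None:
        self._read_only = False

    @property
    def read_only(self) -> bool:
        return self._read_only

    @read_only.setter
    def read_only(self, value: bool) -> None:
        # Mirror validateattributes(readonly, {'logical'}, {'scalar'}):
        # accept only a single boolean value.
        if not isinstance(value, bool):
            raise TypeError("read_only must be a boolean scalar")
        self._read_only = value
```

Validating in the setter keeps every assignment path honest, so downstream code can trust the stored flag without re-checking its type.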