**Table of contents**

[[_TOC_]]

# Run History Template Batch
In File > Batch > Run history template batch, we see this interface:



It combines both the context config and the batch config into one interface. For a job we must specify the following three things:

1. Our scripts, or pipeline steps, selected with "History file" *analysis/support/scripts*
2. Our configuration for those steps, selected with "Load batch config" *analysis/support/config*
3. The data files we wish to process, selected with "Data files or ESS Capsule" *analysis/data*

In our documentation we will almost always refer to these as scripts, config files, and data files, respectively. The directory each usually resides in is noted in italics on the right.

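To orient yourself, here is a minimal sketch of the directory layout implied by the paths above (the exact files inside each directory will vary from project to project):

```
analysis/
├── support/
│   ├── scripts/   <- history (script) files
│   └── config/    <- batch configuration files
└── data/
    └── 1_init/    <- initial data files (e.g. .set files)
```
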
Additionally, the context config from the previous step must be loaded; it is used in the submission of our jobs.

Note: If you haven't already done so by this step, you should load your context config before setting the other options. We have built an extension into the batch config that allows us to specify the directories for scripts, config files, and data files, which will save you from navigating through several directories to reach each resource.

# Selecting the correct history files
Once you have your context configuration loaded, clicking "History file" will open directly to your scripts directory.

Here is an example of what you may see:



The scripts are prefixed with `s` followed by the number of the pipeline step. We can select multiple steps of the pipeline by Ctrl-clicking.

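In other words, the file names follow a pattern along these lines (the step names here are placeholders, not the pipeline's actual script names):

```
s01_<step name>
s02_<step name>
...
s16_<step name>
```
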
Normally we don't need to run the whole pipeline; we can run just part of it, so long as the data is reasonably clean.

**Full pipeline run**

Select 1-14, 16

**Data side pipeline run**

Select 1-3, 7-9, 13, 16

# Selecting the correct batch configuration files
The configuration files specify the execution function, or scheduler, used by the remote server. You must select the type of configuration file that matches your chosen remote host.

Click the "Load batch config" button, then in the next interface select "Batch configuration file". Select remote-sbatch if you are using an sbatch scheduler, or remote-sqsub if you are using an sqsub-based scheduler.

Note: For Sharcnet/Compute Canada users, `sqsub` is used on legacy systems such as kraken, orca, and saw, while `sbatch` is used on Graham and Cedar.

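You never need to type these scheduler commands yourself (the pipeline builds and submits the jobs for you), but for context, a submission looks roughly like this on each system; the script name, queue, run time, and log file below are placeholders:

```
# Slurm systems (e.g. Graham, Cedar)
sbatch job_script.sh

# Legacy Sharcnet systems (e.g. kraken, orca, saw)
sqsub -q serial -r 1h -o job.log ./job_script.sh
```
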
Now, in these folders, select the config files that correspond to the history files chosen in the [previous step](#selecting-the-correct-history-files).

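As a purely illustrative example of this matching, a partial run of steps 1-3 would pair up like this (file names are placeholders):

```
History files selected:  s01_<step name>, s02_<step name>, s03_<step name>
Config files selected:   the step 01, 02, and 03 configs under remote-sbatch (or remote-sqsub)
```
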
We will talk about [tuning these config files on another page](tuning-pipeline-configs).

# Selecting your data files
Select your data files by clicking "Data files or ESS Capsule", then navigate into `1_init`.

Tip: If you are using EEGLAB files (`.set`), you can change the "File Name:" field at the bottom from `*.*` to `*.set` in order to reduce the clutter in the folder.

If you are running only a later section of the pipeline, you should still select only `.set` files, as the rest of our scripts are designed to find the correct processed file in the preproc directory.

# Submission
Submission is done by pressing "Ok".

The MATLAB Command Window will detail the process of submitting the job and display any errors that occur. We have spent a significant amount of time improving the pipeline's logging, so if you choose to submit an issue about your problem, please include a sizable chunk of the logs as an attachment.

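One convenient way to capture those logs is MATLAB's built-in `diary` function (a general MATLAB feature, not something specific to the pipeline); the file name below is just an example:

```matlab
% Start echoing all Command Window output to a file.
diary('submission_log.txt');

% ... submit the batch job from the GUI as described above ...

% Stop capturing; attach submission_log.txt to your issue.
diary off;
```
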
# Pipeline Diagram


***

[ :arrow_right: **Next Step**](tuning-pipeline-configs)

[ :house: **Return to Main Page**](https://git.sharcnet.ca/bucanl_pipelines/eeg_pipe_asr_amica/wikis/home)