This is where you can store all the data files for all stages of the pipeline.

2. Copy your subject folders into this directory (this assumes that each subject's files for all steps are stored within its own subject folder).
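The copy in step 2 can also be scripted. A minimal sketch in Python, assuming your subject folders currently live in a hypothetical `raw_subjects` directory and use the `subj*` naming shown in the examples below:

```python
import shutil
from pathlib import Path

# "raw_subjects" is a hypothetical source location; "analysis/data" is the
# pipeline data directory described above.
src_root = Path("raw_subjects")
data_dir = Path("analysis/data")
data_dir.mkdir(parents=True, exist_ok=True)

# Copy each subject folder wholesale, so every subject's files for all
# steps stay together inside its own folder under analysis/data.
for subj in sorted(src_root.glob("subj*")):
    if subj.is_dir():
        shutil.copytree(subj, data_dir / subj.name, dirs_exist_ok=True)
```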

3. When running a job through the File->Batch->Run History Template Batch menu, replace the values for '[in_path]' and '[out_path]' (if it exists), located in the "replace_string" field of each batch config file, with analysis/data.

*Example:*

[in_path],analysis/data/1_init ----- change to -----> [in_path],analysis/data

[out_path],analysis/data/2_preproc ----- change to -----> [out_path],analysis/data
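If you have many batch config files, the substitution shown in the example can be scripted. A minimal sketch in Python, assuming (this may not match the real config format) that each file stores these entries as plain comma-separated text lines like `[in_path],analysis/data/1_init`:

```python
import re
from pathlib import Path

def update_replace_string(cfg_path, new_value="analysis/data"):
    """Point every [in_path]/[out_path] entry at new_value.

    Assumes plain-text config lines such as
    '[in_path],analysis/data/1_init' -- an assumption, not the
    documented format of the batch config files.
    """
    text = Path(cfg_path).read_text()
    text = re.sub(r"\[(in|out)_path\],\S+", rf"[\g<1>_path],{new_value}", text)
    Path(cfg_path).write_text(text)
    return text
```

Running this over a config file performs the same edit as the manual replacement described above.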

4. For now, the path to the folder containing the subject folders must be manually typed/pasted into the "path:" field. This will usually be analysis/data if that's ... of the subject folder in the name.

*Example:*

subj001/subj001_init.set

subj002/subj002_init.set

subj003/subj003_init.set

This should work regardless of how many folders deep the init file is stored.
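A quick way to check that naming convention holds: the sketch below (Python, assuming the `subj*` folder names shown above) walks each subject folder under analysis/data and locates its init file at any depth, matching the folder name against the file name:

```python
from pathlib import Path

def find_init_files(data_dir="analysis/data"):
    """Map each subject folder name to its *_init.set file.

    Searches any number of nested folders, relying on the convention
    that the init file's name contains the subject folder's name.
    """
    found = {}
    for subj in sorted(Path(data_dir).iterdir()):
        if not subj.is_dir():
            continue
        # rglob descends into every subfolder of the subject folder
        for f in subj.rglob("*_init.set"):
            if subj.name in f.name:
                found[subj.name] = f
                break
    return found
```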

5. If you are running your jobs remotely, repeat steps 1-4 on the remote end.

# Log

Log is automatically populated when you execute the pipeline. A folder is generated for each script that was run, named after the script followed by the date and time of its execution. These folders contain: