The BIDS Lossless EEG pipeline is, unsurprisingly, BIDS compliant.
The goal of this page is to provide a working understanding of how the *pipeline* is configured. Where folders contain external dependencies, please refer to their respective documentation.
### Stage-wise <a name="stage-wise"></a>
The current default is a stage-wise folder structure, in which all subjects' files for a given step are stored together in that step's folder.
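As a rough sketch, a stage-wise layout might look like the following. The step folder names `1_init` and `2_preproc` follow the examples later on this page; the subject file names are hypothetical, and the exact set of step folders depends on your pipeline configuration:

```shell
# Hypothetical stage-wise layout: one folder per pipeline step,
# with every subject's file for that step stored inside it.
mkdir -p analysis/data/1_init analysis/data/2_preproc

# Placeholder subject files (real .set files are EEGLAB datasets).
touch analysis/data/1_init/subj001_init.set \
      analysis/data/1_init/subj002_init.set
touch analysis/data/2_preproc/subj001_preproc.set \
      analysis/data/2_preproc/subj002_preproc.set

# Each step folder now holds all subjects' files for that step.
ls analysis/data
```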
### Subject-wise <a name="subject-wise"></a>
If desired, a subject-wise folder structure can also be set up with a few minor changes. You may want to do this if you prefer to adhere to the [BIDS](https://docs.google.com/document/d/1ArMZ9Y_quTKXC-jNXZksnedK2VHHoKP3HCeO5HPcgLE/edit#heading=h.4k1noo90gelw) or the [ESS capsule](http://www.eegstudy.org/) standard.
1. Remove all the default folders from the Data directory.
2. Copy your subject folders into this directory (assumes that each subject's files for all steps will be stored within its own subject folder).
3. When running a job through the File->Batch->Run History Template Batch menu, replace the values for '[in_path]' and '[out_path]' (if present), located in the "replace_string" field of each batch config file, with `analysis/data`.
*Example:*
```
[in_path],analysis/data/1_init ----- change to -----> [in_path],analysis/data
[out_path],analysis/data/2_preproc ----- change to -----> [out_path],analysis/data
```
4. For now, the path to the folder containing the subject folders must be typed or pasted manually into the "path:" field; this will usually be `analysis/data`, if that is the folder you'd like to use. Likewise, the `_init.set` file name(s) must be typed or pasted manually into the "file:" field, and in this case they must include the name of the subject folder.
*Example:*
```
subj001/subj001_init.set
subj002/subj002_init.set
subj003/subj003_init.set
```
This should work regardless of how many folders deep the init file is stored.
5. If you are running your jobs remotely, repeat steps 1 and 2 on the remote end.
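The subject-wise setup above can be sketched as follows. The subject and file names reuse the hypothetical `subj001`/`subj002` from the examples, and the config-file path, remote host, and destination in the comments are placeholders, not names used by the pipeline:

```shell
# Steps 1-2: one folder per subject, holding that subject's files
# for every step of the pipeline.
mkdir -p analysis/data/subj001 analysis/data/subj002
touch analysis/data/subj001/subj001_init.set
touch analysis/data/subj002/subj002_init.set

# Step 3: point [in_path]/[out_path] at analysis/data instead of the
# step folders, by editing the "replace_string" field of each batch
# config file. Shown here with sed purely as an illustration, against
# a placeholder config location:
# sed -i 's|\[in_path\],analysis/data/1_init|[in_path],analysis/data|' derivatives/*.cfg

# Step 5: if running jobs remotely, mirror the same layout on the
# remote end, e.g. with rsync (host and path are placeholders):
# rsync -a analysis/data/ user@remote:analysis/data/
```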
## Log <a name="log"></a>
The Log directory is automatically populated with scripts when you execute the pipeline. A folder is generated for each script that was run, named after the script followed by the date and time of its execution. These folders contain:
* A specific **.m** file for each subject data file.
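As a sketch of that naming scheme, a log folder for a single script run could be created like this. The script name and the exact timestamp format are assumptions for illustration, not the pipeline's actual convention:

```shell
# Hypothetical log folder name: script name followed by the date and
# time of execution (timestamp format is an assumption).
stamp=$(date +%Y-%m-%d_%H-%M-%S)
mkdir -p "log/my_script_${stamp}"

# One such folder appears per executed script.
ls log
```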