Improve custom confound documentation (#1280)
tsalo authored Oct 5, 2024
1 parent 9dea9cb commit 90dc40a
Showing 7 changed files with 42 additions and 14 deletions.
8 changes: 4 additions & 4 deletions README.rst
@@ -99,10 +99,10 @@ Citing XCP-D
 If you use XCP-D in your research, please use the boilerplate generated by the workflow.
 If you need an immediate citation, please cite the following preprint:
 
-Mehta, K., Salo, T., Madison, T., Adebimpe, A., Bassett, D. S., Bertolero, M., ... & Satterthwaite, T. D.
-(2023).
+Mehta, K., Salo, T., Madison, T. J., Adebimpe, A., Bassett, D. S., Bertolero, M., ... & Satterthwaite, T. D.
+(2024).
 XCP-D: A Robust Pipeline for the post-processing of fMRI data.
-*bioRxiv*.
-doi:10.1101/2023.11.20.567926.
+*Imaging Neuroscience*, 2, 1-26.
+doi:10.1162/imag_a_00257.
 
 Please also cite the Zenodo DOI for the version you're referencing.
2 changes: 1 addition & 1 deletion docs/index.rst
@@ -11,7 +11,7 @@ Contents
 ********
 
 .. toctree::
-   :maxdepth: 3
+   :maxdepth: 1
 
    installation
    usage
25 changes: 21 additions & 4 deletions docs/usage.rst
@@ -438,13 +438,17 @@ plot_design_matrix.html#create-design-matrices>`_.
 
 .. code-block:: python
 
+    import json
+    import os
+
     import numpy as np
     import pandas as pd
     from nilearn.glm.first_level import make_first_level_design_matrix
 
     N_VOLUMES = 200
     TR = 0.8
     frame_times = np.arange(N_VOLUMES) * TR
-    events_df = pd.read_table("sub-X_ses-Y_task-Z_run-01_events.tsv")
+    events_df = pd.read_table("sub-X_task-Z_events.tsv")
     task_confounds = make_first_level_design_matrix(
         frame_times,
@@ -457,11 +457,24 @@ plot_design_matrix.html#create-design-matrices>`_.
     # The design matrix will include a constant column, which we should drop
     task_confounds = task_confounds.drop(columns="constant")
 
+    # Prepare the derivative dataset
+    os.makedirs("/my/project/directory/custom_confounds/sub-X/func", exist_ok=True)
+
+    # Include a dataset_description.json file
+    with open("/my/project/directory/custom_confounds/dataset_description.json", "w") as fo:
+        json.dump(
+            {
+                "Name": "Custom Confounds",
+                "BIDSVersion": "1.6.0",
+                "DatasetType": "derivative"
+            },
+            fo,
+        )
+
     # Assuming that the fMRIPrep confounds file is named
-    # "sub-X_ses-Y_task-Z_run-01_desc-confounds_timeseries.tsv",
+    # "sub-X_task-Z_desc-confounds_timeseries.tsv",
     # we will name the custom confounds file the same thing, in a separate folder.
     task_confounds.to_csv(
-        "/my/project/directory/custom_confounds/sub-X_ses-Y_task-Z_run-01_desc-confounds_timeseries.tsv",
+        "/my/project/directory/custom_confounds/sub-X/func/sub-X_task-Z_desc-confounds_timeseries.tsv",
         sep="\t",
         index=False,
     )
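Assembled outside the diff, the derivative-scaffolding step added above can be sketched with only the standard library. This is a minimal sketch, assuming a temporary directory in place of the docs' `/my/project/directory`, and made-up regressor values; the entity names (`sub-X`, `task-Z`) follow the example above.

```python
import csv
import json
import os
import tempfile

# Hypothetical derivative root; the docs use /my/project/directory/custom_confounds.
root = os.path.join(tempfile.mkdtemp(), "custom_confounds")
os.makedirs(os.path.join(root, "sub-X", "func"), exist_ok=True)

# A BIDS derivative dataset needs a dataset_description.json at its root.
with open(os.path.join(root, "dataset_description.json"), "w") as fo:
    json.dump(
        {"Name": "Custom Confounds", "BIDSVersion": "1.6.0", "DatasetType": "derivative"},
        fo,
    )

# Write a minimal tab-separated confounds file with made-up task regressors,
# named to match the fMRIPrep confounds file for the same run.
tsv_path = os.path.join(root, "sub-X", "func", "sub-X_task-Z_desc-confounds_timeseries.tsv")
with open(tsv_path, "w", newline="") as fo:
    writer = csv.writer(fo, delimiter="\t")
    writer.writerow(["condition1", "condition2"])
    writer.writerow([0.0, 1.0])

print(os.path.exists(tsv_path))  # True
```

Mirroring the fMRIPrep naming is what lets XCP-D match the custom file to the corresponding BOLD run.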
@@ -502,7 +519,7 @@ Something like this should work:
 desc: confounds
 extension: .tsv
 suffix: timeseries
-columns:
+columns: # Assume the task regressors are called "condition1" and "condition2"
 - condition1
 - condition2
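The `columns` list above names the regressors to pull from the matching confounds file. The selection it describes can be sketched with stdlib `csv` on synthetic data (the file contents, including the extra `trans_x` column, are assumptions for illustration):

```python
import csv
import io

# Synthetic confounds TSV: two task regressors plus an unrelated motion column.
tsv = (
    "condition1\tcondition2\ttrans_x\n"
    "0.0\t1.0\t0.02\n"
    "1.0\t0.0\t0.01\n"
)
wanted = ["condition1", "condition2"]

rows = list(csv.DictReader(io.StringIO(tsv), delimiter="\t"))
# Keep only the requested columns, as the config's `columns` entry would.
selected = [{name: float(row[name]) for name in wanted} for row in rows]

print(selected[0])  # {'condition1': 0.0, 'condition2': 1.0}
```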
11 changes: 11 additions & 0 deletions xcp_d/data/boilerplate.bib
@@ -1,3 +1,14 @@
+@article{mehta2024xcp,
+  title={XCP-D: A Robust Pipeline for the post-processing of fMRI data},
+  author={Mehta, Kahini and Salo, Taylor and Madison, Thomas J and Adebimpe, Azeez and Bassett, Danielle S and Bertolero, Max and Cieslak, Matthew and Covitz, Sydney and Houghton, Audrey and Keller, Arielle S and others},
+  journal={Imaging Neuroscience},
+  volume={2},
+  pages={1--26},
+  year={2024},
+  publisher={MIT Press},
+  url={https://doi.org/10.1162/imag_a_00257},
+  doi={10.1162/imag_a_00257}
+}
+
 @article{satterthwaite_2013,
   title = {An improved framework for confound regression and filtering for control of motion artifact in the preprocessing of resting-state functional connectivity data},
4 changes: 3 additions & 1 deletion xcp_d/workflows/base.py
@@ -211,7 +211,9 @@ def init_single_subject_wf(subject_id: str):
 
     workflow.__desc__ = f"""
 ### Post-processing of {config.workflow.input_type} outputs
-The eXtensible Connectivity Pipeline- DCAN (XCP-D) [@mitigating_2018;@satterthwaite_2013]
+The eXtensible Connectivity Pipeline- DCAN (XCP-D)
+[@mehta2024xcp;@mitigating_2018;@satterthwaite_2013]
 was used to post-process the outputs of *{info_dict["name"]}* version {info_dict["version"]}
 {info_dict["references"]}.
 XCP-D was built with *Nipype* version {nipype_ver} [@nipype1, RRID:SCR_002502].
3 changes: 1 addition & 2 deletions xcp_d/workflows/bold/cifti.py
@@ -166,9 +166,8 @@ def init_postprocess_cifti_wf(
     inputnode.inputs.confounds_files = run_data["confounds"]
     inputnode.inputs.dummy_scans = dummy_scans
 
-    workflow = Workflow(name=name)
-
     workflow.__desc__ = f"""
 #### Functional data
 For each of the {num2words(n_runs)} BOLD runs found per subject (across all tasks and sessions),
3 changes: 1 addition & 2 deletions xcp_d/workflows/bold/nifti.py
@@ -179,9 +179,8 @@ def init_postprocess_nifti_wf(
     inputnode.inputs.confounds_files = run_data["confounds"]
     inputnode.inputs.dummy_scans = dummy_scans
 
-    # Load confounds according to the config
-
     workflow.__desc__ = f"""
 #### Functional data
 For each of the {num2words(n_runs)} BOLD runs found per subject (across all tasks and sessions),
