
Commit

deploy: 51b01ff
pbarbarant committed Dec 18, 2024
1 parent 5073417 commit 624e681
Showing 94 changed files with 665 additions and 1,163 deletions.
@@ -139,7 +139,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.10"
"version": "3.11.11"
}
},
"nbformat": 4,
@@ -89,7 +89,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.10"
"version": "3.11.11"
}
},
"nbformat": 4,
@@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"\n# Template-based prediction.\n\nIn this tutorial, we show how to better predict new contrasts for a target\nsubject using many source subjects corresponding contrasts. For this purpose,\nwe create a template to which we align the target subject, using shared information.\nWe then predict new images for the target and compare them to a baseline.\n\nWe mostly rely on Python common packages and on nilearn to handle\nfunctional data in a clean fashion.\n\n\nTo run this example, you must launch IPython via ``ipython\n--matplotlib`` in a terminal, or use ``jupyter-notebook``.\n"
"\n# Template-based prediction.\n\nIn this tutorial, we show how to improve inter-subject similarity using a template\ncomputed across multiple source subjects. For this purpose, we create a template\nusing Procrustes alignment (hyperalignment) to which we align the target subject,\nusing shared information. We then compare the voxelwise similarity between the\ntarget subject and the template to the similarity between the target subject and\nthe anatomical Euclidean average of the source subjects.\n\nWe mostly rely on Python common packages and on nilearn to handle\nfunctional data in a clean fashion.\n\n\nTo run this example, you must launch IPython via ``ipython\n--matplotlib`` in a terminal, or use ``jupyter-notebook``.\n"
]
},
{
@@ -29,7 +29,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Definine a masker\nWe define a nilearn masker that will be used to handle relevant data.\n For more information, visit :\n 'http://nilearn.github.io/manipulating_images/masker_objects.html'\n\n\n"
"## Define a masker\nWe define a nilearn masker that will be used to handle relevant data.\n For more information, visit :\n 'http://nilearn.github.io/manipulating_images/masker_objects.html'\n\n\n"
]
},
{
@@ -58,14 +58,14 @@
},
"outputs": [],
"source": [
"# To infer a template for subjects sub-01 to sub-06 for both AP and PA data,\n# we make a list of 4D niimgs from our list of list of files containing 3D images\n\nfrom nilearn.image import concat_imgs\n\ntemplate_train = []\nfor i in range(5):\n template_train.append(concat_imgs(imgs[i]))\ntarget_train = df[df.subject == \"sub-07\"][df.acquisition == \"ap\"].path.values\n\n# For subject sub-07, we split it in two folds:\n# - target train: sub-07 AP contrasts, used to learn alignment to template\n# - target test: sub-07 PA contrasts, used as a ground truth to score predictions\n# We make a single 4D Niimg from our list of 3D filenames\n\ntarget_train = concat_imgs(target_train)\ntarget_test = df[df.subject == \"sub-07\"][df.acquisition == \"pa\"].path.values"
"# To infer a template for subjects sub-01 to sub-06 for both AP and PA data,\n# we make a list of 4D niimgs from our list of list of files containing 3D images\n\nfrom nilearn.image import concat_imgs\n\ntemplate_train = []\nfor i in range(5):\n template_train.append(concat_imgs(imgs[i]))\n\n# sub-07 (that is 5th in the list) will be our left-out subject.\n# We make a single 4D Niimg from our list of 3D filenames.\n\nleft_out_subject = concat_imgs(imgs[5])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Compute a baseline (average of subjects)\nWe create an image with as many contrasts as any subject representing for\neach contrast the average of all train subjects maps.\n\n\n"
"## Compute a baseline (average of subjects)\nWe create an image with as many contrasts as any subject representing for\neach contrast the average of all train subjects maps.\n\n"
]
},
{
@@ -83,7 +83,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create a template from the training subjects.\nWe define an estimator using the class TemplateAlignment:\n * We align the whole brain through 'multiple' local alignments.\n * These alignments are calculated on a parcellation of the brain in 150 pieces,\n this parcellation creates group of functionnally similar voxels.\n * The template is created iteratively, aligning all subjects data into a\n common space, from which the template is inferred and aligning again to this\n new template space.\n\n\n"
"## Create a template from the training subjects.\nWe define an estimator using the class TemplateAlignment:\n * We align the whole brain through 'multiple' local alignments.\n * These alignments are calculated on a parcellation of the brain in 50 pieces,\n this parcellation creates group of functionnally similar voxels.\n * The template is created iteratively, aligning all subjects data into a\n common space, from which the template is inferred and aligning again to this\n new template space.\n\n\n"
]
},
{
@@ -94,14 +94,14 @@
},
"outputs": [],
"source": [
"from nilearn.image import index_img\n\nfrom fmralign.template_alignment import TemplateAlignment\n\ntemplate_estim = TemplateAlignment(\n n_pieces=150, alignment_method=\"ridge_cv\", mask=masker\n)\ntemplate_estim.fit(template_train)"
"from fmralign.template_alignment import TemplateAlignment\n\n# We use Procrustes/scaled orthogonal alignment method\ntemplate_estim = TemplateAlignment(\n n_pieces=50,\n alignment_method=\"scaled_orthogonal\",\n mask=masker,\n)\ntemplate_estim.fit(template_train)\nprocrustes_template = template_estim.template"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Predict new data for left-out subject\nWe use target_train data to fit the transform, indicating it corresponds to\nthe contrasts indexed by train_index and predict from this learnt alignment\ncontrasts corresponding to template test_index numbers.\nFor each train subject and for the template, the AP contrasts are sorted from\n0, to 53, and then the PA contrasts from 53 to 106.\n\n\n"
"## Predict new data for left-out subject\nWe predict the contrasts of the left-out subject using the template we just\ncreated. We use the transform method of the estimator. This method takes the\nleft-out subject as input, computes a pairwise alignment with the template\nand returns the aligned data.\n\n"
]
},
{
@@ -112,14 +112,14 @@
},
"outputs": [],
"source": [
"train_index = range(53)\ntest_index = range(53, 106)\n\n# We input the mapping image target_train in a list, we could have input more\n# than one subject for which we'd want to predict : [train_1, train_2 ...]\n\nprediction_from_template = template_estim.transform(\n [target_train], train_index, test_index\n)\n\n# As a baseline prediction, let's just take the average of activations across subjects.\n\nprediction_from_average = index_img(average_subject, test_index)"
"predictions_from_template = template_estim.transform(left_out_subject)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Score the baseline and the prediction\nWe use a utility scoring function to measure the voxelwise correlation\nbetween the prediction and the ground truth. That is, for each voxel, we\nmeasure the correlation between its profile of activation without and with\nalignment, to see if alignment was able to predict a signal more alike the ground truth.\n\n\n"
"## Score the baseline and the prediction\nWe use a utility scoring function to measure the voxelwise correlation\nbetween the images. That is, for each voxel, we measure the correlation between\nits profile of activation without and with alignment, to see if template-based\nalignment was able to improve inter-subject similarity.\n\n"
]
},
{
@@ -130,7 +130,7 @@
},
"outputs": [],
"source": [
"from fmralign.metrics import score_voxelwise\n\n# Now we use this scoring function to compare the correlation of predictions\n# made from group average and from template with the real PA contrasts of sub-07\n\naverage_score = masker.inverse_transform(\n score_voxelwise(target_test, prediction_from_average, masker, loss=\"corr\")\n)\ntemplate_score = masker.inverse_transform(\n score_voxelwise(\n target_test, prediction_from_template[0], masker, loss=\"corr\"\n )\n)"
"from fmralign.metrics import score_voxelwise\n\naverage_score = masker.inverse_transform(\n score_voxelwise(left_out_subject, average_subject, masker, loss=\"corr\")\n)\ntemplate_score = masker.inverse_transform(\n score_voxelwise(\n predictions_from_template, procrustes_template, masker, loss=\"corr\"\n )\n)"
]
},
{
@@ -148,14 +148,14 @@
},
"outputs": [],
"source": [
"from nilearn import plotting\n\nbaseline_display = plotting.plot_stat_map(\n average_score, display_mode=\"z\", vmax=1, cut_coords=[-15, -5]\n)\nbaseline_display.title(\"Group average correlation wt ground truth\")\ndisplay = plotting.plot_stat_map(\n template_score, display_mode=\"z\", cut_coords=[-15, -5], vmax=1\n)\ndisplay.title(\"Template-based prediction correlation wt ground truth\")"
"from nilearn import plotting\n\nbaseline_display = plotting.plot_stat_map(\n average_score, display_mode=\"z\", vmax=1, cut_coords=[-15, -5]\n)\nbaseline_display.title(\"Left-out subject correlation with group average\")\ndisplay = plotting.plot_stat_map(\n template_score, display_mode=\"z\", cut_coords=[-15, -5], vmax=1\n)\ndisplay.title(\"Aligned subject correlation with Procrustes template\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We observe that creating a template and aligning a new subject to it yields\na prediction that is better correlated with the ground truth than just using\nthe average activations of subjects.\n\n"
"We observe that creating a template and aligning a new subject to it yields\nbetter inter-subject similarity than regular euclidean averaging.\n\n"
]
}
],
@@ -175,7 +175,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.10"
"version": "3.11.11"
}
},
"nbformat": 4,
@@ -1,13 +1,14 @@
# -*- coding: utf-8 -*-

"""
Template-based prediction.
==========================
- In this tutorial, we show how to better predict new contrasts for a target
- subject using many source subjects corresponding contrasts. For this purpose,
- we create a template to which we align the target subject, using shared information.
- We then predict new images for the target and compare them to a baseline.
+ In this tutorial, we show how to improve inter-subject similarity using a template
+ computed across multiple source subjects. For this purpose, we create a template
+ using Procrustes alignment (hyperalignment) to which we align the target subject,
+ using shared information. We then compare the voxelwise similarity between the
+ target subject and the template to the similarity between the target subject and
+ the anatomical Euclidean average of the source subjects.
We mostly rely on Python common packages and on nilearn to handle
functional data in a clean fashion.
@@ -36,7 +37,7 @@
)

###############################################################################
- # Definine a masker
+ # Define a masker
# -----------------
# We define a nilearn masker that will be used to handle relevant data.
# For more information, visit :
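# A minimal sketch of such a masker setup (assumed for illustration --
# `mask_img` stands in for whatever brain-mask image the dataset provides):
from nilearn.maskers import NiftiMasker

masker_sketch = NiftiMasker(mask_img=mask_img).fit()
X = masker_sketch.transform(imgs[0])           # 4D Niimg -> (n_samples, n_voxels) array
img_back = masker_sketch.inverse_transform(X)  # 2D array -> 4D Niimg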
@@ -64,22 +65,17 @@
template_train = []
for i in range(5):
    template_train.append(concat_imgs(imgs[i]))
target_train = df[df.subject == "sub-07"][df.acquisition == "ap"].path.values

# For subject sub-07, we split it in two folds:
# - target train: sub-07 AP contrasts, used to learn alignment to template
# - target test: sub-07 PA contrasts, used as a ground truth to score predictions
# We make a single 4D Niimg from our list of 3D filenames
# sub-07 (that is 5th in the list) will be our left-out subject.
# We make a single 4D Niimg from our list of 3D filenames.

target_train = concat_imgs(target_train)
target_test = df[df.subject == "sub-07"][df.acquisition == "pa"].path.values
left_out_subject = concat_imgs(imgs[5])

###############################################################################
# Compute a baseline (average of subjects)
# ----------------------------------------
# We create an image with as many contrasts as any subject representing for
# each contrast the average of all train subjects maps.
- #

import numpy as np
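# One way to build this average image (a sketch assuming the `masker` and
# `template_train` defined above; the exact code may differ):
masked_imgs = [masker.transform(img) for img in template_train]
average_subject = masker.inverse_transform(np.mean(masked_imgs, axis=0))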

@@ -92,70 +88,53 @@
# ---------------------------------------------
# We define an estimator using the class TemplateAlignment:
# * We align the whole brain through 'multiple' local alignments.
- # * These alignments are calculated on a parcellation of the brain in 150 pieces,
+ # * These alignments are calculated on a parcellation of the brain in 50 pieces,
#   this parcellation creates groups of functionally similar voxels.
# * The template is created iteratively, aligning all subjects data into a
# common space, from which the template is inferred and aligning again to this
# new template space.
#

- from nilearn.image import index_img

from fmralign.template_alignment import TemplateAlignment

+ # We use Procrustes/scaled orthogonal alignment method
template_estim = TemplateAlignment(
n_pieces=150, alignment_method="ridge_cv", mask=masker
n_pieces=50,
alignment_method="scaled_orthogonal",
mask=masker,
)
template_estim.fit(template_train)
+ procrustes_template = template_estim.template
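# Under the hood, each parcel's "scaled_orthogonal" fit solves a scaled
# Procrustes problem: find an orthogonal matrix R and a scale s minimizing
# ||s * X @ R - Y||_F. A minimal NumPy sketch of the closed-form solution
# (illustrative only -- fmralign's internals may differ):
def scaled_procrustes_sketch(X, Y):
    # X, Y: (n_samples, n_voxels) arrays for one parcel
    U, sigma, Vt = np.linalg.svd(X.T @ Y)
    R = U @ Vt                                # optimal orthogonal map
    s = sigma.sum() / np.linalg.norm(X) ** 2  # optimal scale
    return R, s  # X is aligned to Y via s * X @ R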

###############################################################################
# Predict new data for left-out subject
# -------------------------------------
- # We use target_train data to fit the transform, indicating it corresponds to
- # the contrasts indexed by train_index and predict from this learnt alignment
- # contrasts corresponding to template test_index numbers.
- # For each train subject and for the template, the AP contrasts are sorted from
- # 0, to 53, and then the PA contrasts from 53 to 106.
- #

- train_index = range(53)
- test_index = range(53, 106)

- # We input the mapping image target_train in a list, we could have input more
- # than one subject for which we'd want to predict : [train_1, train_2 ...]
+ # We predict the contrasts of the left-out subject using the template we just
+ # created. We use the transform method of the estimator. This method takes the
+ # left-out subject as input, computes a pairwise alignment with the template
+ # and returns the aligned data.

- prediction_from_template = template_estim.transform(
-     [target_train], train_index, test_index
- )

- # As a baseline prediction, let's just take the average of activations across subjects.

- prediction_from_average = index_img(average_subject, test_index)
+ predictions_from_template = template_estim.transform(left_out_subject)

###############################################################################
# Score the baseline and the prediction
# -------------------------------------
# We use a utility scoring function to measure the voxelwise correlation
- # between the prediction and the ground truth. That is, for each voxel, we
- # measure the correlation between its profile of activation without and with
- # alignment, to see if alignment was able to predict a signal more alike the ground truth.
- #
+ # between the images. That is, for each voxel, we measure the correlation between
+ # its profile of activation without and with alignment, to see if template-based
+ # alignment was able to improve inter-subject similarity.

from fmralign.metrics import score_voxelwise

- # Now we use this scoring function to compare the correlation of predictions
- # made from group average and from template with the real PA contrasts of sub-07

average_score = masker.inverse_transform(
score_voxelwise(target_test, prediction_from_average, masker, loss="corr")
score_voxelwise(left_out_subject, average_subject, masker, loss="corr")
)
template_score = masker.inverse_transform(
    score_voxelwise(
-         target_test, prediction_from_template[0], masker, loss="corr"
+         predictions_from_template, procrustes_template, masker, loss="corr"
    )
)
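# Intuitively, loss="corr" computes one Pearson correlation per voxel across
# the stack of contrasts -- roughly the following sketch (not fmralign's
# exact implementation):
def voxelwise_corr_sketch(A, B):
    # A, B: (n_contrasts, n_voxels) arrays, e.g. from masker.transform(...)
    A = A - A.mean(axis=0)
    B = B - B.mean(axis=0)
    return (A * B).sum(axis=0) / (
        np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=0)
    )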


###############################################################################
# Plotting the measures
# ---------------------
@@ -167,13 +146,12 @@
baseline_display = plotting.plot_stat_map(
    average_score, display_mode="z", vmax=1, cut_coords=[-15, -5]
)
baseline_display.title("Group average correlation wt ground truth")
baseline_display.title("Left-out subject correlation with group average")
display = plotting.plot_stat_map(
    template_score, display_mode="z", cut_coords=[-15, -5], vmax=1
)
display.title("Template-based prediction correlation wt ground truth")
display.title("Aligned subject correlation with Procrustes template")

###############################################################################
# We observe that creating a template and aligning a new subject to it yields
- # a prediction that is better correlated with the ground truth than just using
- # the average activations of subjects.
+ # better inter-subject similarity than regular euclidean averaging.
@@ -197,7 +197,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.10"
"version": "3.11.11"
}
},
"nbformat": 4,
@@ -132,7 +132,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.10"
"version": "3.11.11"
}
},
"nbformat": 4,
@@ -139,7 +139,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.10"
"version": "3.11.11"
}
},
"nbformat": 4,
@@ -157,7 +157,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.10"
"version": "3.11.11"
}
},
"nbformat": 4,
Binary file modified _images/sphx_glr_plot_alignment_methods_benchmark_001.png
Binary file modified _images/sphx_glr_plot_alignment_methods_benchmark_002.png
Binary file modified _images/sphx_glr_plot_alignment_methods_benchmark_003.png
Binary file modified _images/sphx_glr_plot_alignment_methods_benchmark_004.png
Binary file modified _images/sphx_glr_plot_alignment_methods_benchmark_005.png
Binary file modified _images/sphx_glr_plot_alignment_simulated_2D_data_001.png
Binary file modified _images/sphx_glr_plot_alignment_simulated_2D_data_002.png
Binary file modified _images/sphx_glr_plot_alignment_simulated_2D_data_003.png
Binary file modified _images/sphx_glr_plot_alignment_simulated_2D_data_004.png
Binary file modified _images/sphx_glr_plot_alignment_simulated_2D_data_005.png
Binary file modified _images/sphx_glr_plot_alignment_simulated_2D_data_006.png
Binary file modified _images/sphx_glr_plot_alignment_simulated_2D_data_007.png
Binary file modified _images/sphx_glr_plot_alignment_simulated_2D_data_008.png
Binary file modified _images/sphx_glr_plot_int_alignment_001.png
Binary file modified _images/sphx_glr_plot_int_alignment_002.png
Binary file modified _images/sphx_glr_plot_pairwise_alignment_001.png
Binary file modified _images/sphx_glr_plot_pairwise_alignment_002.png
Binary file modified _images/sphx_glr_plot_pairwise_roi_alignment_001.png
Binary file modified _images/sphx_glr_plot_pairwise_roi_alignment_002.png
Binary file modified _images/sphx_glr_plot_pairwise_roi_alignment_003.png
Binary file modified _images/sphx_glr_plot_template_alignment_001.png
Binary file modified _images/sphx_glr_plot_template_alignment_002.png
Binary file modified _images/sphx_glr_plot_template_alignment_thumb.png
Binary file modified _images/sphx_glr_plot_toy_int_experiment_001.png
Binary file modified _images/sphx_glr_plot_toy_int_experiment_thumb.png
2 changes: 1 addition & 1 deletion _modules/fmralign/alignment_methods.html
@@ -221,8 +221,8 @@
<ul>
<li class="toctree-l1"><a class="reference internal" href="../../quickstart.html">Quickstart</a></li>
<li class="toctree-l1 has-children"><a class="reference internal" href="../../auto_examples/index.html">Examples</a><input class="toctree-checkbox" id="toctree-checkbox-1" name="toctree-checkbox-1" role="switch" type="checkbox"/><label for="toctree-checkbox-1"><div class="visually-hidden">Toggle navigation of Examples</div><i class="icon"><svg><use href="#svg-arrow-right"></use></svg></i></label><ul>
<li class="toctree-l2"><a class="reference internal" href="../../auto_examples/plot_pairwise_alignment.html">Pairwise functional alignment.</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../auto_examples/plot_template_alignment.html">Template-based prediction.</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../auto_examples/plot_pairwise_alignment.html">Pairwise functional alignment.</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../auto_examples/plot_pairwise_roi_alignment.html">Pairwise functional alignment on a ROI.</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../auto_examples/plot_alignment_methods_benchmark.html">Alignment methods benchmark (pairwise ROI case).</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../auto_examples/plot_int_alignment.html">Co-smoothing prediction using the Individual Neural Tuning Model</a></li>