
Jansen-Rit-Model-Benchmarking-Deep-Learning

Code repository for the paper Benchmarking Deep Jansen-Rit Parameter Inference: An in Silico Study. The preprint for this paper is available here.

Model-driven effective connectivity (EC) is essential in understanding how the brain integrates and responds to various stimuli. This approach involves estimating global and local parameters of a generative model of neural activity, and it can be deployed for various applications, such as studying neurodevelopmental disorders. However, accurately determining these connections remains a significant challenge due to the complexity of brain dynamics and the inherent noise in recordings of neural activity, e.g., in electroencephalography (EEG). Current model-driven methods to study EC are computationally complex and cannot scale to all brain regions as required by comprehensive whole-brain analyses. To facilitate EC assessment, an inference algorithm must exhibit reliable prediction of parameters in the presence of noise. Further, the relationship between the model parameters and the neural recordings must be learnable. To progress toward these objectives, we present a simulation module based on the well-known Jansen-Rit neural mass model (JR-NMM) and benchmark it under various noise conditions. We consider simulated recordings with noise levels ranging from none to levels typical of real EEG recordings and simulate 1000 recordings per noise condition. We then benchmark the performance of a Bi-LSTM model to infer JR-NMM parameters from EEG amidst different noise levels. Our study explores how the JR-NMM reacts to changes in critical factors like synaptic gains and time constants. Investigating how such biological parameters impact the neural recordings generated by such models is crucial in understanding the connection between brain activity and behavior. Our results indicate that we can predict the local JR-NMM parameters from EEG, supporting the feasibility of this approach. In the future, we will extend this inference approach to estimating local and global parameters from real EEG in clinically relevant applications, such as autism spectrum disorder.

Updating Paths in Notebooks

Before running the notebooks, you must update the file paths to match your local or server environment. This ensures that the notebooks can access the necessary data files and save outputs correctly. Follow these steps to update the paths in each notebook:

  1. Open each notebook: Start by opening each Jupyter notebook in your preferred environment, such as JupyterLab or Visual Studio Code.

  2. Search for path assignments: Look for lines of code where paths are defined. These are typically assigned to variables like base_path, data_path, output_path, etc.

  3. Modify the paths: Replace the existing paths with paths relevant to your environment. Make sure these directories exist on your machine or create them if necessary.

  4. Save changes: After updating the paths, save the notebook to preserve these changes.

Example of Path Update

Here’s an example of what you might look for and how to change it:

# Original path
base_path = '/path/to/your/dataset/'

# Updated path
base_path = '/Users/yourusername/projects/yourprojectname/data/'
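
If you want to verify the paths before running a notebook, a minimal sketch like the following can help. The variable names base_path and output_path are illustrative and may differ from the ones used in a given notebook:

from pathlib import Path

# Illustrative check only: confirm the data directory exists and create the
# output directory if it is missing. Adjust the variable names and paths to
# match the notebook you are editing.
base_path = Path('/Users/yourusername/projects/yourprojectname/data/')
output_path = Path('/Users/yourusername/projects/yourprojectname/outputs/')

if not base_path.is_dir():
    raise FileNotFoundError(f"Data directory not found: {base_path}")
output_path.mkdir(parents=True, exist_ok=True)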

Description

This repository includes three Jupyter notebooks for dataset simulation, sensitivity analysis, and deep learning inference of Jansen-Rit model parameters from simulated event-related potentials (ERPs):

1) DCM_for_JR_all_varied_11_May.ipynb generates the datasets by simulating X (ERPs) and y (JR-NMM parameters).
2) sensitivity_plots_16thApril.ipynb performs sensitivity analysis on the data simulated with DCM_for_JR_all_varied_11_May.ipynb, using a linear method in which the parameters are varied independently.
3) deeplearning_all_noise.ipynb applies a bi-LSTM model to infer y (the Jansen-Rit model parameters) from the simulated data.

Installation

pip install -r requirements.txt

Usage

1) Dataset Simulation:
        Run DCM_for_JR_all_varied_11_May.ipynb to generate the datasets.
2) Sensitivity Analysis:
    Use sensitivity_plots_16thApril.ipynb to analyze the sensitivity of the simulated recordings to the model parameters. This notebook uses datasets generated with specific parameter settings (method='linear', N=200).

    Here’s an example of how you might simulate data for sensitivity analysis:

    # Define the parameter ranges and other variables
    parameter_ranges = {'A1': (2.6, 9.75)}
    N = 200
    method = 'linear'

    # Call the simulation function; the remaining arguments (L, Ii, Ip, p, dt, C,
    # transient_duration, info, src, events, ground_truth, noise, noise_cov, and
    # base_path) are assumed to be defined earlier in the notebook.
    simulate_for_parameter(parameter_ranges, L, Ii, Ip, p, dt, C, transient_duration,
                           info, src, events, ground_truth=ground_truth,
                           method=method, N=N, noise=noise, noise_cov=noise_cov,
                           base_path=base_path)
3) Deep Learning Analysis:
        Execute deeplearning_all_noise.ipynb to apply the bi-LSTM model, loading the files generated in the previous steps into the notebook. A minimal illustrative sketch of this kind of model follows this list.
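
For reference, below is a minimal sketch of a bi-LSTM regressor in the same spirit as the model applied in deeplearning_all_noise.ipynb. The file names (X_simulated.npy, y_parameters.npy), layer sizes, and training settings are illustrative assumptions, not the exact configuration used in the notebook:

# Illustrative sketch only (assumed shapes, file names, and hyperparameters):
# a bidirectional LSTM mapping simulated ERP time series (X) to JR-NMM
# parameters (y), trained with a mean-squared-error loss.
import numpy as np
import tensorflow as tf

X = np.load('X_simulated.npy')   # assumed shape: (n_recordings, n_timepoints, n_channels)
y = np.load('y_parameters.npy')  # assumed shape: (n_recordings, n_parameters)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=X.shape[1:]),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
    tf.keras.layers.Dense(y.shape[1]),  # one output per JR-NMM parameter
])
model.compile(optimizer='adam', loss='mse')
model.fit(X, y, validation_split=0.2, epochs=50, batch_size=32)

The notebook itself handles the different noise conditions and the evaluation of the predictions; this sketch only illustrates the overall input/output structure of the model.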

Contributing

Contributions are welcome! Please fork the repository and submit a pull request with your proposed changes.

License

This project is released under the MIT License.

Contact

For questions or further discussion, open an issue in the repository or contact me directly at [[email protected]].

Cite

@inproceedings{Tilwani2024BenchmarkingDJ,
  title={Benchmarking Deep Jansen-Rit Parameter Inference: An in Silico Study},
  author={Deepa Tilwani and Christian O'Reilly},
  year={2024},
  url={https://api.semanticscholar.org/CorpusID:270357590}
}
