MultiscaleRun is an orchestrator of simulators: it couples the NEURON simulator (driven through Neurodamus) for neuronal activity with a metabolism solver. Currently only Neurodamus and Metabolism run together, in a dual run, with more integrations planned for the future.
- have a working OBI spack installation: https://github.com/openbraininstitute/spack (see the sketch below)
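If you do not have one yet, the rough shape is below. This is only a sketch: the clone path and the `setup-env.sh` location assume the standard spack layout, so check that repository's README for the authoritative steps.

```
# Sketch only: see https://github.com/openbraininstitute/spack for the real instructions.
git clone https://github.com/openbraininstitute/spack.git ~/spack
source ~/spack/share/spack/setup-env.sh   # standard spack layout; assumed for this fork
```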
You just need to run the setup script at least once before running the simulation:

```
source setup.sh
```

The script does:
- set various env variables
- create a `spackenv` folder with the necessary dependencies
- create a python virtual env in `venv`
- call `pip install -e .` for development
- create the test folder `tiny_CI_test`
- fill it with the necessary data
If a folder is already present (`spackenv`, `venv`), the script skips that installation step, assuming it was already done. If any of the folders are missing, the script redoes that part of the setup. The environment variables are set either way, since they are needed every time. You can always modify these folders and recall the setup script: it will not override your changes.
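For example, you can tweak the created virtual env and then re-source the script. A minimal sketch, assuming the standard `venv/bin/activate` layout (the extra package is just an illustration):

```
# Add a package to the venv created by setup.sh, then re-source the script;
# the existing spackenv/ and venv/ folders are reused and the change is kept.
source venv/bin/activate
pip install matplotlib
source setup.sh
```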
On macOS we leverage brew. First we need to install a few things:

```
brew install cmake openmpi hdf5-mpi python@3.12 ninja
```

We also need to link python3:

```
ln -sf /opt/homebrew/bin/python3.12 /opt/homebrew/bin/python3
```
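A quick sanity check that the link is picked up:

```
python3 --version   # should now report a Python 3.12.x version
```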
On Ubuntu, install the dependencies with apt:

```
sudo apt-get update
sudo apt-get install -y mpich libmpich-dev libhdf5-mpich-dev hdf5-tools flex libfl-dev bison ninja-build libreadline-dev
```

On Alma Linux, install the dependencies with dnf:

```
sudo dnf update -y
sudo dnf -y install bison cpp cmake gcc-c++ flex flex-devel git python3.11-devel python3-devel python3-pip readline-devel ninja-build openmpi openmpi-devel
```

This distro does not have a usable openmpi; we need to use the AWS EFA installer:
```
cd /tmp
curl -O https://efa-installer.amazonaws.com/aws-efa-installer-latest.tar.gz
tar xf aws-efa-installer-latest.tar.gz
cd aws-efa-installer
sudo ./efa_installer.sh -y --skip-kmod --mpi=openmpi5
cd -
rm -rf /tmp/aws*
```
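As a sanity check, you can ask the freshly installed Open MPI for its version (the `/opt/amazon/openmpi5` prefix matches the PATH export used in the hdf5 build below):

```
/opt/amazon/openmpi5/bin/mpirun --version
```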
Set python 3.11 as the default (select option 2 when prompted):

```
sudo alternatives --install /usr/bin/python3 python3 /usr/bin/python3.9 1
sudo alternatives --install /usr/bin/python3 python3 /usr/bin/python3.11 2
sudo alternatives --config python3
```

This distro does not have hdf5 either. We install it from source:
```
export PATH=/opt/amazon/openmpi5/bin:$PATH
export LD_LIBRARY_PATH=/opt/amazon/openmpi5/lib64:$LD_LIBRARY_PATH
export CC=$(which mpicc)
export CXX=$(which mpicxx)
export MPICC=$(which mpicc)
cd /tmp
curl -O https://support.hdfgroup.org/releases/hdf5/v1_14/v1_14_6/downloads/hdf5-1.14.6.tar.gz
tar xf hdf5-1.14.6.tar.gz
cd hdf5-1.14.6
./configure --enable-parallel --enable-shared --prefix=/opt/circuit_simulation/hdf5/hdf5-1.14.6/install
make -j
sudo make install
cd
rm -rf /tmp/hdf5*
```
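To confirm the build is actually parallel, the `h5pcc` wrapper installed with the library can print its configuration (same prefix as in the `--prefix` above):

```
# The configuration summary should contain "Parallel HDF5: yes".
/opt/circuit_simulation/hdf5/hdf5-1.14.6/install/bin/h5pcc -showconfig | grep -i parallel
```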
The rest of the installation is common to all the architectures (mac, Ubuntu, Alma Linux). Finally, you need to run this at least once before running simulations:

```
source setup_no_spack.sh
```

The script does:
- set various env variables
- create a python virtual env in `venv` with neuron and neurodamus
- build `libsonatareport`
- build the correct `neurodamus-models`
- call `pip install -e .` for development
- create the test folder `tiny_CI_test`
- fill it with the necessary data
If a folder is already present (`libsonatareport`, `neurodamus-models`, `venv`), the script skips that installation step, assuming it was already done. If any of the folders are missing, the script redoes that part of the setup. The environment variables are set either way, since they are needed every time. You can always modify these folders and recall the setup script: it will not override your changes.
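Conversely, to force the script to redo one step, remove the corresponding folder before re-sourcing. For example:

```
# Removing a folder makes setup_no_spack.sh rebuild that component from scratch.
rm -rf neurodamus-models
source setup_no_spack.sh
```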
Just run the unit tests with pytest:

```
pytest tests/unit
```

Use ruff for linting:

```
ruff check --fix
```

To run a simulation you just need to go to `tiny_CI_test` and run. The simulation is too slow with just one core; I suggest at least 8 cores. Do not go above 90 for now, as that leaves some cores without neurons (an edge case that I did not check).
```
cd tiny_CI_test
mpirun -np 12 multiscale-run compute
```

At the moment this simulation depletes `atpi` and fails after 300 ms. TODO: fix it.
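For longer runs it can be convenient to keep a log of the output as well; this is plain shell redirection, nothing MultiscaleRun-specific:

```
# Print to the terminal and save a copy to simulation.log at the same time.
mpirun -np 12 multiscale-run compute 2>&1 | tee simulation.log
```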
After the simulation has completed you can check the results with the postproc Jupyter notebook, which is already in the current folder. Just run jupyter:

```
jupyter lab
```

Open `postproc.ipynb` and run it. By default it presents all the traces for the gids [0, 1, 2]. The notebook should be self-explanatory and can be changed at will.
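If you prefer to run the notebook without a browser (for example directly on a remote machine), standard Jupyter tooling can execute it headless; this is a generic Jupyter feature, not something MultiscaleRun-specific:

```
# Run postproc.ipynb top to bottom and store the outputs back into the notebook.
jupyter nbconvert --to notebook --execute --inplace postproc.ipynb
```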
Build the documentation locally with:

```
sphinx-build -W --keep-going docs docs/build/html
```

Alternatively, check the official documentation at: https://multiscalerun.readthedocs.io/stable/
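To browse the locally built pages, any static file server will do, e.g. Python's built-in one:

```
# Serve the built HTML at http://localhost:8000
python3 -m http.server --directory docs/build/html 8000
```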
To run on Azure, request a VM from Erik. Once you have the credentials:
- SSH into the VM.
- Install the dependencies by following the Setup section (Linux).
- Run the simulation.
- Start post-processing on the VM:

```
jupyter lab --no-browser --port=8888
```

In parallel, on your local machine, create an SSH tunnel:

```
ssh -L 8888:localhost:8888 <user>@<remote-host>
```

Then open Jupyter in your local browser at http://localhost:8888
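Since SSH sessions to the VM can drop during long simulations, it may be worth running them inside a terminal multiplexer. A sketch, assuming tmux is available on the VM:

```
tmux new -s msrun          # start a named session on the VM
mpirun -np 12 multiscale-run compute
# detach with Ctrl-b d; reattach later with: tmux attach -t msrun
```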
Polina Shichkova, Alessandro Cattabiani, Christos Kotsalos, and Tristan Carel
The development of this software was supported by funding to the Blue Brain Project, a research center of the École polytechnique fédérale de Lausanne (EPFL), from the Swiss government's ETH Board of the Swiss Federal Institutes of Technology.
Copyright (c) 2005-2023 Blue Brain Project/EPFL
Copyright (c) 2025 Open Brain Institute