Merge branch 'development' into apply_constant_fields_directly
dpgrote committed Oct 25, 2023
2 parents 45e4096 + ba217db commit 727cf06
Showing 210 changed files with 2,749 additions and 1,549 deletions.
14 changes: 9 additions & 5 deletions .clang-tidy
@@ -1,10 +1,11 @@
Checks: '-*,
bugprone-*
Checks: '
-*,
bugprone-*,
-bugprone-easily-swappable-parameters,
-bugprone-implicit-widening-of-multiplication-result,
-bugprone-misplaced-widening-cast,
-bugprone-unchecked-optional-access,
cert-*
cert-*,
-cert-err58-cpp,
cppcoreguidelines-avoid-goto,
cppcoreguidelines-interfaces-global-init,
@@ -74,9 +75,12 @@ Checks: '-*,
'

CheckOptions:
- key: modernize-pass-by-value.ValuesOnly
value: 'true'
- key: bugprone-narrowing-conversions.WarnOnIntegerToFloatingPointNarrowingConversion
value: "false"
- key: misc-definitions-in-headers.HeaderFileExtensions
value: "H,"
- key: modernize-pass-by-value.ValuesOnly
value: "true"


HeaderFilterRegex: 'Source[a-z_A-Z0-9\/]+\.H$'
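As a sketch of how this ``.clang-tidy`` file is picked up in practice (the build directory name and file pattern below are illustrative, not taken from this commit), clang-tidy is usually driven from a compile database so that the checks and ``CheckOptions`` above apply automatically:

```shell
# Illustrative only: export a compile database, then run the LLVM
# run-clang-tidy wrapper; it discovers the nearest .clang-tidy file
# and applies the Checks/CheckOptions configured there.
cmake -S . -B build -DCMAKE_EXPORT_COMPILE_COMMANDS=ON
run-clang-tidy -p build 'Source/.*\.cpp'
```

The ``HeaderFilterRegex`` above then restricts diagnostics to the project's own ``.H`` headers under ``Source/``.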
2 changes: 1 addition & 1 deletion .github/workflows/cuda.yml
@@ -111,7 +111,7 @@ jobs:
which nvcc || echo "nvcc not in PATH!"
git clone https://github.com/AMReX-Codes/amrex.git ../amrex
cd ../amrex && git checkout --detach 23.10 && cd -
cd ../amrex && git checkout --detach da79aff8053058371a78d4bf85488384242368ee && cd -
make COMP=gcc QED=FALSE USE_MPI=TRUE USE_GPU=TRUE USE_OMP=FALSE USE_PSATD=TRUE USE_CCACHE=TRUE -j 2
build_nvhpc21-11-nvcc:
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -68,7 +68,7 @@ repos:

# Autoremoves unused Python imports
- repo: https://github.com/hadialqattan/pycln
rev: v2.2.2
rev: v2.3.0
hooks:
- id: pycln
name: pycln (python)
17 changes: 14 additions & 3 deletions CMakeLists.txt
@@ -646,7 +646,7 @@ if(WarpX_PYTHON)
${CMAKE_COMMAND} -E rm -f -r warpx-whl
COMMAND
${CMAKE_COMMAND} -E env PYWARPX_LIB_DIR=$<TARGET_FILE_DIR:pyWarpX_${WarpX_DIMS_LAST}>
python3 -m pip wheel -v --no-build-isolation --no-deps --wheel-dir=warpx-whl ${WarpX_SOURCE_DIR}
${Python_EXECUTABLE} -m pip wheel -v --no-build-isolation --no-deps --wheel-dir=warpx-whl ${WarpX_SOURCE_DIR}
WORKING_DIRECTORY
${WarpX_BINARY_DIR}
DEPENDS
@@ -660,7 +660,7 @@ if(WarpX_PYTHON)
set(pyWarpX_REQUIREMENT_FILE "requirements.txt")
endif()
add_custom_target(${WarpX_CUSTOM_TARGET_PREFIX}pip_install_requirements
python3 -m pip install ${PYINSTALLOPTIONS} -r "${WarpX_SOURCE_DIR}/${pyWarpX_REQUIREMENT_FILE}"
${Python_EXECUTABLE} -m pip install ${PYINSTALLOPTIONS} -r "${WarpX_SOURCE_DIR}/${pyWarpX_REQUIREMENT_FILE}"
WORKING_DIRECTORY
${WarpX_BINARY_DIR}
)
@@ -677,7 +677,7 @@ if(WarpX_PYTHON)
# because otherwise pip would also force reinstall all dependencies.
add_custom_target(${WarpX_CUSTOM_TARGET_PREFIX}pip_install
${CMAKE_COMMAND} -E env WARPX_MPI=${WarpX_MPI}
python3 -m pip install --force-reinstall --no-index --no-deps ${PYINSTALLOPTIONS} --find-links=warpx-whl pywarpx
${Python_EXECUTABLE} -m pip install --force-reinstall --no-index --no-deps ${PYINSTALLOPTIONS} --find-links=warpx-whl pywarpx
WORKING_DIRECTORY
${WarpX_BINARY_DIR}
DEPENDS
@@ -686,6 +686,17 @@ if(WarpX_PYTHON)
${WarpX_CUSTOM_TARGET_PREFIX}pip_install_requirements
${_EXTRA_INSTALL_DEPENDS}
)

# this is for package managers only
add_custom_target(${WarpX_CUSTOM_TARGET_PREFIX}pip_install_nodeps
${CMAKE_COMMAND} -E env WARPX_MPI=${WarpX_MPI}
${Python_EXECUTABLE} -m pip install --force-reinstall --no-index --no-deps ${PYINSTALLOPTIONS} --find-links=warpx-whl pywarpx
WORKING_DIRECTORY
${WarpX_BINARY_DIR}
DEPENDS
pyWarpX_${WarpX_DIMS_LAST}
${WarpX_CUSTOM_TARGET_PREFIX}pip_wheel
)
endif()


2 changes: 1 addition & 1 deletion Docs/requirements.txt
@@ -11,7 +11,7 @@ docutils>=0.17.1

# PICMI API docs
# note: keep in sync with version in ../requirements.txt
picmistandard==0.26.0
picmistandard==0.28.0
# for development against an unreleased PICMI version, use:
# picmistandard @ git+https://github.com/picmi-standard/picmi.git#subdirectory=PICMI_Python

1 change: 1 addition & 0 deletions Docs/source/install/hpc.rst
@@ -42,6 +42,7 @@ This section documents quick-start guides for a selection of supercomputers that
hpc/karolina
hpc/lassen
hpc/lawrencium
hpc/leonardo
hpc/lumi
hpc/lxplus
hpc/ookami
1 change: 1 addition & 0 deletions Docs/source/install/hpc/lawrencium.rst
@@ -81,6 +81,7 @@ Optionally, download and install Python packages for :ref:`PICMI <usage-picmi>`
source $HOME/sw/v100/venvs/warpx/bin/activate
python3 -m pip install --upgrade pip
python3 -m pip install --upgrade wheel
python3 -m pip install --upgrade setuptools
python3 -m pip install --upgrade cython
python3 -m pip install --upgrade numpy
python3 -m pip install --upgrade pandas
172 changes: 172 additions & 0 deletions Docs/source/install/hpc/leonardo.rst
@@ -0,0 +1,172 @@
.. _building-leonardo:

Leonardo (CINECA)
=================

The `Leonardo cluster <https://leonardo-supercomputer.cineca.eu/>`_ is hosted at `CINECA <https://www.cineca.it/en>`_.

On Leonardo, each of the 3456 compute nodes features a custom Atos Bull Sequana XH21355 "Da Vinci" blade, composed of:

* 1 x CPU Intel Ice Lake Xeon 8358 32 cores 2.60 GHz
* 512 (8 x 64) GB RAM DDR4 3200 MHz
* 4 x NVidia custom Ampere A100 GPU 64GB HBM2
* 2 x NVidia HDR 2×100 GB/s cards

Introduction
------------

If you are new to this system, **please see the following resources**:

* `Leonardo website <https://leonardo-supercomputer.cineca.eu/>`_
* `Leonardo user guide <https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.2%3A+LEONARDO+UserGuide>`_

Storage organization:

* ``$HOME``: permanent, backed up, user specific (50 GB quota)
* ``$CINECA_SCRATCH``: temporary, user specific, no backup; a large disk for storing run-time data and files, with automatic cleaning of data older than 40 days
* ``$PUBLIC``: permanent, no backup (50 GB quota)
* ``$WORK``: permanent, project specific, no backup

.. _building-leonardo-preparation:

Preparation
-----------

Use the following commands to download the WarpX source code:

.. code-block:: bash

   git clone https://github.com/ECP-WarpX/WarpX.git $HOME/src/warpx

We use system software modules, add environment hints and further dependencies via the file ``$HOME/leonardo_gpu_warpx.profile``.
Create it now:

.. code-block:: bash

   cp $HOME/src/warpx/Tools/machines/leonardo-cineca/leonardo_gpu_warpx.profile.example $HOME/leonardo_gpu_warpx.profile

.. dropdown:: Script Details
   :color: light
   :icon: info
   :animate: fade-in-slide-down

   .. literalinclude:: ../../../../Tools/machines/leonardo-cineca/leonardo_gpu_warpx.profile.example
      :language: bash

.. important::

   Now, and as the first step on future logins to Leonardo, activate these environment settings:

   .. code-block:: bash

      source $HOME/leonardo_gpu_warpx.profile

Finally, since Leonardo does not yet provide software modules for some of our dependencies, install them once:

.. code-block:: bash

   bash $HOME/src/warpx/Tools/machines/leonardo-cineca/install_gpu_dependencies.sh
   source $HOME/sw/venvs/warpx/bin/activate

.. dropdown:: Script Details
   :color: light
   :icon: info
   :animate: fade-in-slide-down

   .. literalinclude:: ../../../../Tools/machines/leonardo-cineca/install_gpu_dependencies.sh
      :language: bash


.. _building-leonardo-compilation:

Compilation
-----------

Use the following :ref:`cmake commands <building-cmake>` to compile the application executable:

.. code-block:: bash

   cd $HOME/src/warpx
   rm -rf build_gpu
   cmake -S . -B build_gpu -DWarpX_COMPUTE=CUDA -DWarpX_PSATD=ON -DWarpX_QED_TABLE_GEN=ON -DWarpX_DIMS="1;2;RZ;3"
   cmake --build build_gpu -j 16

The WarpX application executables are now in ``$HOME/src/warpx/build_gpu/bin/``.
Additionally, the following commands will install WarpX as a Python module:

.. code-block:: bash

   cd $HOME/src/warpx
   rm -rf build_gpu_py
   cmake -S . -B build_gpu_py -DWarpX_COMPUTE=CUDA -DWarpX_PSATD=ON -DWarpX_QED_TABLE_GEN=ON -DWarpX_PYTHON=ON -DWarpX_APP=OFF -DWarpX_DIMS="1;2;RZ;3"
   cmake --build build_gpu_py -j 16 --target pip_install

Now, you can :ref:`submit Leonardo compute jobs <running-cpp-leonardo>` for WarpX :ref:`Python (PICMI) scripts <usage-picmi>` (:ref:`example scripts <usage-examples>`).
Or, you can use the WarpX executables to submit Leonardo jobs (:ref:`example inputs <usage-examples>`).
For executables, you can reference their location in your :ref:`job script <running-cpp-leonardo>` or copy them to a location in ``$CINECA_SCRATCH``.

.. _building-leonardo-update:

Update WarpX & Dependencies
---------------------------

If you already installed WarpX in the past and want to update it, start by getting the latest source code:

.. code-block:: bash

   cd $HOME/src/warpx
   # read the output of this command - does it look ok?
   git status
   # get the latest WarpX source code
   git fetch
   git pull
   # read the output of these commands - do they look ok?
   git status
   git log # press q to exit

And, if needed,

- :ref:`update the leonardo_gpu_warpx.profile file <building-leonardo-preparation>`,
- log out and into the system, activate the now updated environment profile as usual,
- :ref:`execute the dependency install scripts <building-leonardo-preparation>`.

As a last step, clean the build directories ``rm -rf $HOME/src/warpx/build_gpu*`` and rebuild WarpX.


.. _running-cpp-leonardo:

Running
-------

The batch script below can be used to run a WarpX simulation on multiple nodes on Leonardo.
Replace the descriptions between chevrons ``<>`` with relevant values.
Note that we run one MPI rank per GPU.

.. literalinclude:: ../../../../Tools/machines/leonardo-cineca/job.sh
   :language: bash
   :caption: You can copy this file from ``$HOME/src/warpx/Tools/machines/leonardo-cineca/job.sh``.

To run a simulation, copy the lines above to a file ``job.sh`` and run

.. code-block:: bash

   sbatch job.sh

to submit the job.

.. _post-processing-leonardo:

Post-Processing
---------------

For post-processing, activate the environment settings:

.. code-block:: bash

   source $HOME/leonardo_gpu_warpx.profile

and run Python scripts.
13 changes: 13 additions & 0 deletions Docs/source/refs.bib
@@ -483,3 +483,16 @@ @article{Stanier2020
author = {A. Stanier and L. Chacón and A. Le},
keywords = {Hybrid, Particle-in-cell, Plasma, Asymptotic-preserving, Cancellation problem, Space weather},
}

@book{Stix1992,
author = {Stix, T.H.},
date-added = {2023-06-29 13:51:16 -0700},
date-modified = {2023-06-29 13:51:16 -0700},
isbn = {978-0-88318-859-0},
lccn = {lc91033341},
publisher = {American Inst. of Physics},
title = {Waves in {Plasmas}},
url = {https://books.google.com/books?id=OsOWJ8iHpmMC},
year = {1992},
bdsk-url-1 = {https://books.google.com/books?id=OsOWJ8iHpmMC}
}
3 changes: 2 additions & 1 deletion Docs/source/theory/cold_fluid_model.rst
@@ -97,7 +97,8 @@ Step 5: **Current and Charge Deposition**

The implemented MUSCL scheme has a simplified slope averaging; see the extended write-up for details.

More details on the precise implementation will be made available online soon.
More details on the precise implementation are available in `WarpX_Cold_Rel_Fluids.pdf`_.

.. _WarpX_Cold_Rel_Fluids.pdf: https://github.com/ECP-WarpX/WarpX/files/12886437/WarpX_Cold_Rel_Fluids.pdf

.. warning::
If using the fluid model with the Kinetic-Fluid Hybrid model or the electrostatic solver, there is a known
14 changes: 13 additions & 1 deletion Docs/source/usage/examples.rst
@@ -164,6 +164,18 @@ ion-Bernstein modes as indicated below.
python3 PICMI_inputs.py -dim {1/2/3} --bdir {x/y/z}
An RZ-geometry example case for normal modes propagating along an applied magnetic field in a cylinder is also available.
The analytical solution for these modes is described in :cite:t:`ex-Stix1992`, Chapter 6, Sec. 2.

.. figure:: https://user-images.githubusercontent.com/40245517/259251824-33e78375-81d8-410d-a147-3fa0498c66be.png
   :alt: Normal EM modes in a metallic cylinder
   :width: 90%

The input file for this example and corresponding analysis can be found at:

* :download:`Cylindrical modes input <../../../Examples/Tests/ohm_solver_EM_modes/PICMI_inputs_rz.py>`
* :download:`Analysis script <../../../Examples/Tests/ohm_solver_EM_modes/analysis_rz.py>`

Ion beam R instability
^^^^^^^^^^^^^^^^^^^^^^

@@ -243,7 +255,7 @@ The input file for this example and corresponding analysis can be found at:
* :download:`Analysis script <../../../Examples/Tests/ohm_solver_magnetic_reconnection/analysis.py>`

Many Further Examples, Demos and Tests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
--------------------------------------

WarpX runs over 200 integration tests on a variety of modeling cases, which validate and demonstrate its functionality.
Please see the `Examples/Tests/ <https://github.com/ECP-WarpX/WarpX/tree/development/Examples/Tests>`__ directory for many more examples.
4 changes: 3 additions & 1 deletion Docs/source/usage/parameters.rst
@@ -1824,7 +1824,9 @@ Particle push, charge and current deposition, field gathering
Available options are: ``direct``, ``esirkepov``, and ``vay``. The default choice
is ``esirkepov`` for FDTD maxwell solvers but ``direct`` for standard or
Galilean PSATD solver (i.e. with ``algo.maxwell_solver = psatd``) and
for the hybrid-PIC solver (i.e. with ``algo.maxwell_solver = hybrid``).
for the hybrid-PIC solver (i.e. with ``algo.maxwell_solver = hybrid``) and for
diagnostics output with the electrostatic solvers (i.e., with
``warpx.do_electrostatic = ...``).
Note that ``vay`` is only available for ``algo.maxwell_solver = psatd``.

1. ``direct``
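For illustration, a hypothetical WarpX inputs-file fragment selecting the deposition algorithm explicitly might look like the following (parameter names follow the text above; the chosen values are examples only):

```ini
# Example only: with the PSATD solver, "direct" is the default deposition;
# "vay" may be selected, but the text notes it is available only when
# algo.maxwell_solver = psatd.
algo.maxwell_solver = psatd
algo.current_deposition = vay
```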