diff --git a/README.md b/README.md index 6eb798f..a059bdb 100644 --- a/README.md +++ b/README.md @@ -2,31 +2,34 @@ EasterEig ========= [![License: GPL v3](https://img.shields.io/badge/License-GPLv3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0) [![CI-Ubuntu](https://github.com/nennigb/EasterEig/actions/workflows/ci-ubuntu.yml/badge.svg)](https://github.com/nennigb/EasterEig/actions/workflows/ci-ubuntu.yml) [![pypi release](https://img.shields.io/pypi/v/eastereig.svg)](https://pypi.org/project/eastereig/) - - -Consider a parametric eigenvalue problem depending on a parameter \(\nu\). This arises for instance in +Consider a **parametric eigenvalue problem** depending on a scalar parameter $\nu$ or on a given vector $\boldsymbol\nu =(\nu_1,\nu_2,\ldots,\nu_N) \in \mathbb{C}^N$ of parameters. This arises for instance in - waveguides, where the _wavenumber_ (eigenvalue) depends on the frequency (parameter) - - waveguides with absorbing materials on the wall, where _modal attenuation_ (eigenvalue imaginary part) depends on the liner properties like impedance, density (parameter) - - structural dynamics with a randomly varying parameter, where the _resonances frequencies_ (eigenvalue) depend on the parameter + - waveguides with absorbing materials on the wall, where _modal attenuation_ (eigenvalue imaginary part) depends on the liner properties like impedance, admittance or density (parameters) + - structural dynamics with a randomly varying parameter, where the _resonance frequencies_ (eigenvalues) depend, for instance, on material parameters like the Young's modulus or the density - ... -Exceptional points (EP) of non-Hermitian systems correspond to particular values of the parameter leading to defective eigenvalue. -At EP, both eigenvalues and eigenvectors are merging. +The aim of this package is to **reconstruct** the eigenvalue loci and to **locate** exceptional points (EPs). The EPs of non-Hermitian systems correspond to particular values of the parameters leading to a defective eigenvalue. At EPs, both eigenvalues and eigenvectors merge. -The aim of this package is to **locate** exceptional points and to **reconstruct** the eigenvalue loci. The theoretical part of this work is described in [1], as for the location of _exceptional points_ and illustrated in [2] for eigenvalues reconstruction in structural dynamics. +![Riemann surfaces around an EP2.](figures/EP2_light.png) ![Riemann surfaces around an EP3.](figures/EP3_light.png) -The method requires the computation of successive derivatives of two selected eigenvalues with respect to the parameter so that, after recombination, regular functions can be constructed. This algebraic manipulation enables - * exceptional points (EP) localization, using standard root-finding algorithms; - * computation of the associated Puiseux series up to an arbitrary order. - -This representation, which is associated with the topological structure of Riemann surfaces, allows to efficiently approximate the selected pair in a certain neighborhood of the EP. +The theoretical parts of this work are described in [1] for the location of _exceptional points_ and in [2] for eigenvalue reconstruction. The extension to several parameters is presented in [3]. +The method requires the computation of successive derivatives of some selected eigenvalues with respect to the parameters so that, after recombination, regular functions can be constructed. This algebraic manipulation overcomes the convergence limits of conventional eigenvalue expansions caused by the branch point singularity.
This enables + * Fast approximation of the eigenvalues, converging over a large region of the parametric space + * High-order EP localization + * Computation of the associated Puiseux series up to an arbitrary order + * Numerical representation of the problem discriminant and of the partial characteristic polynomial + To use this package: - 1. an access to the **operator derivative** must be possible - 2. the eigenvalue problem must be recast into - \[ \mathbf{L} (\lambda(\nu), \nu) \mathbf{x} (\nu) =\mathbf{0} \] + 1. Access to the **operator derivatives** with respect to $\boldsymbol\nu$ is required + 2. The parametric eigenvalue problem must have the form +$$\mathbf{L} (\lambda(\boldsymbol\nu), \boldsymbol\nu) \mathbf{x} (\boldsymbol\nu) =\mathbf{0},$$ +where, for a given vector $\boldsymbol\nu$ which contains $N$ independent complex-valued parameters, $\lambda(\boldsymbol\nu)$ is an eigenvalue and $\mathbf{x}(\boldsymbol\nu)\neq \mathbf{0}$ is the associated right eigenvector. Here the matrix $\mathbf{L}$ admits the decomposition +$$\mathbf{L} (\lambda, \boldsymbol\nu) =\sum_{i \geq 0} f_i(\lambda) \mathbf{K}_i(\boldsymbol\nu)$$ +where the $f_i$ are polynomial functions and the matrices $\mathbf{K}_i$ +are assumed to be analytic functions of the parameter vector $\boldsymbol\nu$. The matrices of discrete operators can be either of numpy type for _full_, scipy type for _sparse_ or petsc mpiaij type for _sparse parallel_ matrices. @@ -38,187 +41,169 @@ References [1] B. Nennig and E. Perrey-Debain. A high order continuation method to locate exceptional points and to compute Puiseux series with applications to acoustic waveguides. J. Comp. Phys., 109425, (2020). [[doi](https://dx.doi.org/10.1016/j.jcp.2020.109425)]; [[open access](https://arxiv.org/abs/1909.11579)] [2] M. Ghienne and B. Nennig. Beyond the limitations of perturbation methods for real random eigenvalue problems using Exceptional Points and analytic continuation. Journal of Sound and Vibration, (2020). [[doi](https://doi.org/10.1016/j.jsv.2020.115398)]; [[open access](https://hal.archives-ouvertes.fr/hal-02536849)] + + [3] B. Nennig, E. Perrey-Debain and M. Ghienne. Localization of high order exceptional points: applications to acoustic waveguides. 16ème Congrès Français d'Acoustique, Apr 2022, Marseille, France. [[open access](https://hal.science/hal-03838635v1/document)] - -Basic workflow and class hierarchy ---------------------------------- - -`eastereig` provides several top level classes: - - 1. **OP class**, defines operators of your problem - 2. **Eig class**, handles eigenvalues, their derivatives and reconstruction - 3. **EP class**, combines Eig object to locate EP and compute Puiseux series - 4. **Loci class**, stores numerical value of eigenvalues loci and allows easy Riemann surface plotting - -Dependencies ------------- - -`eastereig` is based on numpy (full) and scipy (sparse) for most internal computation and can handle _large_ parallel sparse matrices thanks to **optional** import of [petsc4py](https://petsc4py.readthedocs.io/en/stable/install.html) (and mumps), -[slepc4py](https://slepc4py.readthedocs.io/en/stable/install.html) and -and [mpi4py](https://mpi4py.readthedocs.io/en/stable/install.html). As non-hermitian problems involve complex-valued eigenvalues, computations are realized with complex arithmetic and the **complex petsc version** is expected. -Tested for python >= 3.5 - -> **Remarks :** -> To run an example with petsc (parallel), you need to run python with `mpirun` (or `mpiexec`).
For instance, to run a program with 2 proc -> `mpirun -n 2 python myprog.py` - -Riemann surface can also be plotted using the `Loci` class either with `matplotlib` or with [`pyvista`](https://github.com/pyvista/pyvista) (optional). - Install -------- -You'll need : -* python (tested for v >= 3.5); -* python packages: numpy, setuptools, wheel -* pip (optional). -* fortran compiler (optional) -Note that on ubuntu, you will need to use `pip3` instead of `pip` and `python3` instead of `python`. Please see the steps given in the continous integration script [workflows](.github/workflows/ci-ubuntu.yml). +`eastereig` is based on numpy (full) and scipy (sparse) for most internal computation and can handle _large_ parallel sparse matrices thanks to the **optional** import of [petsc4py (>=3.20)](https://petsc.org/release/petsc4py/install.html) (and mumps), +[slepc4py](https://slepc4py.readthedocs.io/en/stable/install.html) +and [mpi4py](https://mpi4py.readthedocs.io/en/stable/install.html). As non-Hermitian problems involve complex-valued eigenvalues, computations are realized with complex arithmetic and the **complex petsc version** is expected. +[sympy](https://docs.sympy.org) is used for the formal manipulation of multivariate polynomials. +Riemann surfaces can also be plotted using the `Loci` class, either with `matplotlib` or with [`pyvista`](https://github.com/pyvista/pyvista) and `pyvistaqt` (optional). +Before installing `eastereig`, you'll need python (tested for v >= 3.8). To install the optional dependencies manually, you need: +* The python package `pypolsys` (optional homotopy EP solver) available from https://github.com/nennigb/pypolsys +* The python packages `pyvista` and `pyvistaqt` (optional Riemann surfaces plotting) +* The python packages `petsc4py` and `slepc4py` (optional sparse parallel matrices support) +* A fortran compiler (optional, tested with gfortran) +**Other dependencies will be installed automatically**. -By default, the fortan evaluation of multivariate polynomial is desactivated. To enable it, set the environnement variable: `EASTEREIG_USE_FPOLY=True`. On ubuntu like system, run +By default, the fortran evaluation of multivariate polynomials is deactivated. To enable it, you need a fortran compiler and to set the environment variable `EASTEREIG_USE_FPOLY=True`. On Ubuntu-like systems, run ```console export EASTEREIG_USE_FPOLY=True ``` - -### Using pip (preferred) -Consider using `pip` over custom script (rationale [here](https://pip.pypa.io/en/stable/reference/pip_install/)). - -You can install `eastereig` either from pypi (main releases only): -``` +Then, you can install `eastereig` either from pypi (main releases only): +```console pip install eastereig [--user] ``` -or from github: -``` -pip install path/to/EeasterEig-version.tar.gz [--user] -``` -or in _editable_ mode if you want to modify the sources -``` -pip install -e path/to/EeasterEig -``` -> The version of the required libraries specified in `install_requires` field from `setup.py` are given to ensure the backward compatibility up to python 3.5. A more recent version of these libraries can be safely used for recent python version. - -### Using python setuptools Go to root folder.
-and run: -``` -python setup.py install [--user] -``` - -To get the lastest updates (dev relases), run: -``` -python setup.py develop [--user] +or from github (for the latest updates): +```console +pip install path/to/EasterEig-version.tar.gz [--user] ``` +Note that on Ubuntu, you will need to use `pip3` instead of `pip` and `python3` instead of `python`. Please see the steps given in the continuous integration script [workflows](.github/workflows/ci-ubuntu.yml). +We recommend installing `eastereig` in a virtual environment unless you know what you are doing. Running tests ------------- -Tests are handled with doctest. +Tests are handled with `doctest` and `unittest`. To execute the full test suite, run : -``` +```console python -m eastereig ``` -Documentation -------------- -## Generate documentation -Run: -``` -pdoc3 --html --force --config latex_math=True eastereig -``` -N.B: The doctring are compatible with several Auto-generate API documentation, like pdoc. -This notably allows to see latex includes. +Basic workflow and class hierarchy +---------------------------------- +`eastereig` provides several top-level classes: -## Generate class diagram -Run: -``` -pyreverse -s0 eastereig -m yes -f ALL -dot -Tsvg classes.dot -o classes.svg -``` -N.B: Class diagram generation is done using `pyreverse` (installed with pylint and spyder). + 1. **OP class**, defines the operators of your problem + 2. **Eig class**, handles eigenvalues, their derivatives and reconstruction + 3. **CharPol class**, combines several eigenvalues and their derivatives to reconstruct a part of the characteristic polynomial + 4. **EP class**, combines Eig objects to locate EPs and compute Puiseux series + 5. **Loci class**, stores the numerical values of the eigenvalue loci and allows easy Riemann surface plotting -## Generate documentation and class diagram -Both aspects are included in the `makedoc.py' script. So, just run : -``` -python ./makedoc.py -``` Getting started --------------- - Several working examples are available in `./examples/` folder 1. Acoustic waveguide with an impedance boundary condition (with the different supported linear libraries) 2. 3-dof toy model of a structure with one random parameter (with numpy) + 3. 3-dof toy model with two parameters leading to an EP3 + 4. ... + +> **Remarks:** +> To run an example with petsc (parallel), you need to run python with `mpirun` (or `mpiexec`). For instance, to run a program with 2 processes: +> `mpirun -n 2 python myprog.py` To get started, the first step is to define your problem. Basically it means to link the discrete operators (matrices) and their derivatives to the `eastereig` OP class. The problem has to be recast in the following form: -\( \left[ \underbrace{1}_{f_0(\lambda)=1} \mathbf{K}_0(\nu) + \underbrace{\lambda(\nu)}_{f_1(\lambda)=\lambda} \mathbf{K}_1(\nu) + \underbrace{\lambda(\nu)^2}_{f_2(\lambda)} \mathbf{K}_2(\nu) \right] \mathbf{x} = \mathbf{0} \). +```math +\left[\underbrace{1}_{f_0(\lambda)=1} \mathbf{K}_0(\nu) + \underbrace{\lambda(\nu)}_{f_1(\lambda)=\lambda} \mathbf{K}_1(\nu) + \underbrace{\lambda(\nu)^2}_{f_2(\lambda)=\lambda^2} \mathbf{K}_2(\nu) \right] \mathbf{x} = \mathbf{0}. +``` Matrices are then stacked in the variable `K` ```python K = [K0, K1, K2]. ```
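+For instance, for a small 2-dof toy problem with full numpy matrices, the stacked list could look like this (illustrative values only, not taken from the shipped examples):
+```python
+import numpy as np
+
+K0 = np.array([[2., -1.], [-1., 2.]])  # stiffness-like term, f_0(lambda) = 1
+K1 = 0.05j * np.eye(2)                 # damping-like term, f_1(lambda) = lambda
+K2 = -np.eye(2)                        # mass-like term, f_2(lambda) = lambda**2
+K = [K0, K1, K2]
+```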
-**The functions** that return the derivatives with respect to \(\nu\) of each matrices have to be put in `dK`. The prototype of this function is fixed (the parameter n corresponds to the derivative order) to ensure automatic computation of the operator derivatives. +**The functions** that return the derivatives with respect to $\nu$ of each matrix have to be put in `dK`. The prototype of these functions is fixed (the parameter `n` corresponds to the derivative order) to ensure automatic computation of the operator derivatives. ```python dK = [dK0, dK1, dK2]. ``` -Finally, **the functions** that returns derivatives with respect to \( \lambda\) are stored in 'flda' +Finally, **the functions** that return the derivatives with respect to $\lambda$ are stored in `flda` ```python flda = [None, ee.lda_func.Lda, ee.lda_func.Lda2]. ``` -Basic linear and quadratic dependency are defined in the module `lda_func`. Others dependencies can be easily implemented; provided that the appropriate eigenvalue solver is also implemented). The `None` keyword is used when there is no dependency to the eigenvalue, e. g. \(\mathbf{K}_0\). +Basic linear and quadratic dependencies are defined in the module `lda_func`. Other dependencies can be easily implemented, provided that the appropriate eigenvalue solver is also implemented. The `None` keyword is used when there is no dependency on the eigenvalue, e.g. $\mathbf{K}_0$. This formulation is used to automatically compute (i) the successive derivatives of the operator and (ii) the RHS associated to the bordered matrix. -These variables are defined by creating **a subclass** that inherits from the eastereig **OP class**. For example, considering a generalized eigenvalue problem \( \left[\mathbf{K}_0(\nu) + \lambda \mathbf{K}_1(\nu) \right] \mathbf{x} = \mathbf{0} \) : +These variables are defined by creating **a subclass** that inherits from the eastereig **OP class** (`OPmv` for the multivariate case). For example, considering a generalized eigenvalue problem $$\left[\mathbf{K}_0(\nu) + \lambda \mathbf{K}_1(\nu) \right] \mathbf{x} = \mathbf{0} $$: ```python import eastereig as ee class MyOP(ee.OP): - """Create a subclass of the OP class to describe your problem.""" + """Create a subclass of the OP class to describe your problem with a scalar parameter.""" - def __init__(self): + def __init__(self, z, ...): """Initialize the problem.""" # Initialize OP interface self.setnu0(z) - + # Mandatory ----------------------------------------------------------- - self._lib = 'scipysp' # 'numpy' or 'petsc' + self._lib='scipysp' # 'numpy' or 'petsc' # Create the operator matrices self.K = self.CreateMyMatrix() # Define the list of function to compute the derivatives of each operator matrix (assume 2 here) - self.dK = [self.dmat0, self.dmat1] + self.dK = [self.dmat0, self.dmat1] # Define the list of functions to set the eigenvalue dependency of each operator matrix - self.flda = [None, ee.lda_func.Lda] + self.flda = [None, ee.lda_func.Lda] # --------------------------------------------------------------------- def CreateMyMatrices(self, ...): """Create my matrices and return a list.""" - ... - return list_of_Ki - + ... + return list_of_Ki + def dmat0(self, n): """Return the matrix derivative with respect to nu. - N.B. : The prototype of this function is fixed, the n parameter - stands for the derivative order. If the derivative is null, - the function returns the value 0. - """ - ... - return dM0 - + N.B. : The prototype of this function is fixed, the n parameter + stands for the derivative order. If the derivative is null, + the function returns the value 0. + """ + ... + return dM0 + def dmat1(self, n): """Return the matrix derivative with respect to nu. - N.B. : The prototype of this function is fixed, the n parameter - stands for the derivative order. If the derivative is null, - the function returns the value 0. - """ - ... - return dM1 + N.B. : The prototype of this function is fixed, the n parameter + stands for the derivative order. If the derivative is null, + the function returns the value 0. + """ + ... + return dM1 +```
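+Once the problem is set up, the classes listed above can be combined to reconstruct the eigenvalue loci and to look for EPs. The following sketch only illustrates the intended workflow with the new `CharPol` class: it assumes that `extracted` is a list of `Eig` objects whose derivatives have already been computed for a two-parameter problem (see the `toy_3dof_2params` example for a complete, working script).
+```python
+import eastereig as ee
+
+# `extracted` is assumed to be a list of Eig objects with their derivatives
+# already computed (see the examples shipped with the package).
+C = ee.CharPol(extracted)
+# Look for EP candidates. The first bound is None to reuse the eigenvalues as
+# starting guesses for lambda; the others delimit boxes in the complex nu-planes.
+sol = C.iterative_solve([None, (-2 - 2j, 2 + 2j), (-2 - 2j, 2 + 2j)],
+                        Npts=4, algorithm='lm')
+# Estimate the sensitivity of each candidate and keep the most reliable ones.
+delta, solf, deltaf = C.filter_spurious_solution(sol)
+```
+The same object also exposes `EP_system`, `jacobian`, `eval_lda_at` and, through the optional `pypolsys` package, a `homotopy_solve` alternative to the iterative search.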
+ +Documentation +-------------- + +## Generate documentation +Run: +```console +pdoc3 --html --force --config latex_math=True eastereig [--skip-errors] +``` +N.B.: The docstrings are compatible with several auto-generated API documentation tools, like pdoc. +This notably allows the LaTeX equations to be rendered. + +## Generate class diagram +Run: +```console +pyreverse -s0 eastereig -m yes -f ALL +dot -Tsvg classes.dot -o classes.svg +``` +N.B.: Class diagram generation is done using `pyreverse` (installed with pylint and spyder). + +## Generate documentation and class diagram +Both aspects are included in the `makedoc.py` script. So, just run: +```console +python ./makedoc.py ``` How to contribute ? diff --git a/eastereig/__init__.py b/eastereig/__init__.py index 58d8003..9fe7f0f 100644 --- a/eastereig/__init__.py +++ b/eastereig/__init__.py @@ -73,7 +73,8 @@ def _getversion(): # Import class from .options import gopts from .eig import Eig -from .op import OP +from .op import OP, OPmv from .ep import EP from .loci import Loci +from .charpol import CharPol from . import lda_func diff --git a/eastereig/__main__.py b/eastereig/__main__.py index ee88949..ccdbad0 100644 --- a/eastereig/__main__.py +++ b/eastereig/__main__.py @@ -17,28 +17,34 @@ # along with Eastereig. If not, see <https://www.gnu.org/licenses/>. """ -Run the doctest. +# Test suite runner + +Running `eastereig` as a module runs the test suite using the `doctest` and `unittest` frameworks. Example ------- -``` +```console python3 -m eastereig ``` """ +import unittest import doctest import sys import numpy as np from eastereig import _petscHere -# immport the file containing the doctest +# Import the files containing the doctests from eastereig.examples import WGimpedance_numpy from eastereig.examples import WGimpedance_scipysp from eastereig.examples import ThreeDoF +from eastereig.examples import toy_3dof_2params +from eastereig.examples import WGadmitance_numpy_mv from eastereig import utils from eastereig import loci from eastereig import ep from eastereig import lda_func from eastereig import eigSolvers from eastereig import fpoly +from eastereig import charpol # Numpy 2.0 changed the default printing options, making doctests fail.
# https://numpy.org/neps/nep-0051-scalar-representation.html @@ -49,34 +55,32 @@ if _petscHere: from eastereig.examples import WGimpedance_petsc -# invoke the testmod function to run tests contained in docstring -mod_list = [lda_func, utils, loci, ep, eigSolvers, WGimpedance_numpy, - WGimpedance_scipysp, ThreeDoF, fpoly] +# Explicitely list modules with doctest +mod_list = [lda_func, utils, loci, ep, eigSolvers, fpoly, charpol, + WGimpedance_numpy, WGimpedance_scipysp, ThreeDoF, + toy_3dof_2params, WGadmitance_numpy_mv] if _petscHere: petsc_list = [WGimpedance_petsc] mod_list.extend(petsc_list) if __name__ == '__main__': + import os + tests_dir = os.path.join(os.path.dirname(__file__), 'tests') + print('> Running tests...') Stats = [] + # Create test suite for unittest and doctest + suite = unittest.TestLoader().discover(start_dir=tests_dir, pattern='test*.py') + # Add doctest from all modules of mod_list for mod in mod_list: - print("--------------------------------------------------------- \n", - "> Testing : {} \n".format(mod.__name__), - "--------------------------------------------------------- \n ") - # possible to use the directive "# doctest: +ELLIPSIS" or optionflags=doctest.ELLIPSIS in testmod - # it enable the ellipsis '...' for truncate expresion. usefull for float (but be careful) - stat = doctest.testmod(m=mod, optionflags=doctest.ELLIPSIS, verbose=False) # name=mod.__name__, verbose=True - print(stat) - Stats.append(stat) - - # Summary, with petsc out put sometime hard to read + suite.addTest(doctest.DocTestSuite(mod, + optionflags=(doctest.ELLIPSIS | doctest.NORMALIZE_WHITESPACE))) + runner = unittest.TextTestRunner(verbosity=2) + result = runner.run(suite) + # Summary, with petsc output is sometime hard to read print("\n", "================ Testing summary ===================") - for i, mod in enumerate(mod_list): - print(" > Testing : {}".format(mod.__name__)) - print(" ", Stats[i]) - if sum([i.failed for i in Stats]) == 0: - print(" Pass :-)") + if result.wasSuccessful(): + print(" Pass :-)") sys.exit(0) else: - print(" Failed :-(") + print(" Failed :-(") sys.exit(1) - print("====================================================") diff --git a/eastereig/charpol.py b/eastereig/charpol.py new file mode 100755 index 0000000..6c7af9d --- /dev/null +++ b/eastereig/charpol.py @@ -0,0 +1,1634 @@ +# -*- coding: utf-8 -*- + +# This file is part of eastereig, a library to locate exceptional points +# and to reconstruct eigenvalues loci. + +# Eastereig is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. + +# Eastereig is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. + +# You should have received a copy of the GNU General Public License +# along with Eastereig. If not, see . + +""" +# Define CharPol class and auxilliary classes. + +This class implements the **partial characteristic polynomial**. +This polynomial is built from Vieta formulas and from the successive +derivatives of a selected set of eigenvalues. 
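+For a monic polynomial of degree M with roots lda_1, ..., lda_M, these formulas express each
+coefficient as a signed elementary symmetric polynomial of the roots, e.g.
+a_{M-1} = -(lda_1 + ... + lda_M) and a_0 = (-1)^M lda_1 * ... * lda_M, see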
+[Vieta' formulas](https://en.wikipedia.org/wiki/Vieta%27s_formulas) + +""" +import numpy as np +from numpy.linalg import norm +import itertools as it +from collections import deque +from eastereig.eig import AbstractEig +from eastereig.utils import (_outer, diffprodTree, div_factorial, diffprodMV, + two_composition, Taylor) +from eastereig.loci import Loci +from eastereig.options import gopts +import sympy as sym +import scipy.linalg as spl +import scipy.optimize as spo +from scipy.special import factorial +import time +import tqdm +# // evaluation in newton method +from concurrent.futures import ProcessPoolExecutor +from functools import partial +import pickle +# multidimentional polyval +# /!\ numpy.polynomial.polynomial.polyval != np.polyval +# from numpy.polynomial.polynomial import polyval +# import numpy.polynomial.polyutils as pu +from eastereig.fpoly import polyvalnd +from matplotlib import pyplot as plt + + +def _dprod(order, S_dcoefs, C_dcoefs, max_order, shape, N): + """Inner function used in Multiply.""" + dcoefs_order = np.zeros(shape, dtype=complex) + # Get all combinaisons of with 2 coefficients to have exactly the given order + for index in two_composition(order, max_order): + # Compute the values derivative of the product + dcoefs_order += diffprodTree([S_dcoefs[max_order[0] - index[0]], + C_dcoefs[max_order[1] - index[1]]], N) + return dcoefs_order + + +class _SequentialExecutor(): + """Context manager that mimic ProcessPoolExecutor without new process. + + Could work even if charpol involved petsc objects. + """ + def __init__(self, *args, **kargs): + pass + + def __enter__(self, *args, **kargs): + return self + + def __exit__(*exc_info): + pass + + @staticmethod + def map(fun, *iterable, **kargs): + """Wrap the builtin function `map`.""" + return map(fun, *iterable) + + +class CharPol(): + r""" + Numerical representation of the *partial* characteristic polynomial. + + This polynomial is built from Vieta formulas and from the successive + derivatives of the selected set of eigenvalues \(\Lambda\). + + This class allows to reconstruct eigenvalue loci and to locate + EP between the eigenvalues from the set. + + This polynomial can be seen as a polynomial in \(\lambda\), whom coefficients \(a_n\\) + are Taylor series in \(\nu_0, ..., \nu_m \). For instance with 3 coefficients, the + attribut `dcoef[n]` represents the array \((a_n)_{i,j,k}\) + $$a_n(\nu_0, \nu_1, \nu_2) = \sum_{i,j,k} (a_n)_{i,j,k} \nu_0^i \nu_1^j \nu_2^k$$ + + The partial characteristic polynomial reads + $$ p = \sum \lambda^{n-1} a_{n-1}(\boldsymbol\nu) + ... + a_0(\boldsymbol\nu)$$ + """ + + def __init__(self, dLambda, nu0=None, vieta=True): + """Initialize the object with a list of derivatives of the eigenvalue or + a list of Eig objects. + + Parameters + ---------- + dLambda : list + List of the derivatives of the eigenvalues used to build this object + nu0 : iterable + the value where the derivatives are computed + vieta : bool, optional, default True + Compute or not the CharPol coefficients using Vieta formula. + Setting `vieta = False` can be useful for alternative constructors. 
+ """ + # Initial check on input are delegate + self.dLambda, self.nu0 = self._initial_init_checks(dLambda, nu0) + # Compute the polynomial coef + if vieta: + self.dcoefs = self.vieta(self.dLambda) + for i, c in enumerate(self.dcoefs): + self.dcoefs[i] = np.asfortranarray(div_factorial(c)) + # Finalize the instanciation + self._finalize_init() + + @staticmethod + def _initial_init_checks(dLambda, nu0): + """Perform initial checks on input parameters.""" + if isinstance(dLambda[0], np.ndarray): + if nu0 is None: + raise ValueError('If initialized by a list of array, `nu0` is mandatory.') + elif isinstance(dLambda[0], AbstractEig): + try: + # Check for nu0. This attribute is created in `getDerivative*` method. + nu0 = dLambda[0].nu0 + except AttributeError: + print('Need to call `getDerivative*` before init a CharPol instance.') + dLambda = [np.array(vp.dlda) for vp in dLambda] + else: + TypeError('Unsupported type for `dLambda` elements.') + return dLambda, nu0 + + def _finalize_init(self): + """Finalize CharPol initialization.""" + # Compute usefull coefficients for jacobian evaluation + self._jacobian_utils() + + # How many eig in the charpol + self.N = len(self.dcoefs) - 1 + + # Store degree of Taylor expannsion in all variables + self._an_taylor_order = tuple(np.array(self.dLambda[0].shape) - 1) + + @classmethod + def _from_dcoefs(cls, dcoefs, dLambda, nu0): + """Define factory method to create CharPol from its polynomial coefficients. + + Parameters + ---------- + dcoefs : list + List of the derivatives of each coef of the polynom. It is assumed that + a_N=1. The coefficaients are sorted in decending order such, + 1 lda**N + ... + a_0 lda**0 + dLambda : list + List of the derivatives of the eigenvalues used to build this + object. + nu0 : iterable + The value where the derivatives are computed. + + Returns + ------- + C : CharPol + An new CharPol instance. + """ + C = cls(dLambda, nu0, vieta=False) + C.dcoefs = dcoefs + C._finalize_init() + return C + + @classmethod + def from_recursive_mult(cls, dLambda, nu0=None, block_size=3): + """Define factory method to create recursively CharPol from the lambdas. + + The methods is based on _divide and conquer_ approach. It combines + the Vieta's formulas and polynomial multiplications to speed up the + computation. + + Parameters + ---------- + dLambda : list + List of the derivatives of the eigenvalues used to build this object + nu0 : iterable + the value where the derivatives are computed + block_size : int, optional, default 3 + The number of eigenvalue used in initial Vieta computation. + + Returns + ------- + CharPol + The new CharPol instance. + """ + # Initial check on inputs and convert them to array + dLambda, nu0 = cls._initial_init_checks(dLambda, nu0) + # Create queue of dLambda block. 
+ num_block = len(dLambda) // block_size + if num_block == 0: + # splitting require at least 1 block + num_block = 1 + dLambda_blocks = np.array_split(dLambda, num_block) + block_list = deque(range(0, len(dLambda_blocks))) + # Create a dictionnary containing CharPol for each dLambda_blocks + # Vieta is used in each block + prod = dict() + for block, dLambda_in_block in zip(block_list, dLambda_blocks): + prod[block] = cls(dLambda_in_block, nu0) + + # Consume the queue by computing the product of successive pairs + while len(block_list) > 1: + # Get 2 items from the left + pair = (block_list.popleft(), block_list.popleft()) + # Add the pair to the right to consume it after + block_list.append(pair) + # Store the results in the dict + prod[pair] = prod[pair[0]] * prod[pair[1]] + + # Return the final product + return prod[block_list[0]] + + def __repr__(self): + """Define the representation of the class.""" + return "Instance of {} @nu0={} with #{} derivatives and #{} eigs.".format(self.__class__.__name__, + self.nu0, + self._an_taylor_order, + self.N) + + def export(self, filename): + """Export a charpol object using pickle. + + Parameters + ---------- + filename : string + The full path to save the data. + """ + with open(filename, 'bw') as f: + pickle.dump(self, f) + + @staticmethod + def load(filename): + """Load a charpol object from pickle. + + Parameters + ---------- + filename : string + The full path to save the data. + + Returns + ------- + C : CharPol + The loaded CharPol object. + """ + with open(filename, 'br') as f: + C = pickle.load(f) + return C + + def lda(self): + """Return the eigenvalue value used in the CharPol.""" + Lda = np.array([lda.flat[0] for lda in self.dLambda]) + return Lda + + def coef0(self): + """Return the coefficient of the CharPol @ nu0.""" + an = np.array([dcoef.flat[0] for dcoef in self.dcoefs]) + return an + + @staticmethod + def vieta(dLambda): + """Compute the sucessive derivatives of the polynomial coefficients knowing + its roots and their successive derivatives. + + The methods is based on the Vieta formulas see for instance + [Vieta' formulas](https://en.wikipedia.org/wiki/Vieta%27s_formulas) + + Parameters + ---------- + dLambda : list + List of the polynom roots successive derivatives. The input format is + [D^(alpha) lda1, D^(alpha) lda2, ...], where alpha represent the + multi-index with the derivation order for each variable. + D^(alpha) lda1 is an ndarray and the first element [0, ...] is lda1. + All D^(alpha) lda_i should have the same shape. + + Returns + ------- + dcoef_pol : list + list of the derivatives of each coef of the polynom. It is assumed that + a_N=1. The coefficaients are sorted in decending order such, + 1 lda^N + ... + a_0 lda^0 + + Examples + -------- + Consider x^2 - 3*x + 2 = (x-1)*(x-2), where all roots are constant (0 derivatives) + >>> dlambda=[np.array([[2, 0], [0, 0]]), np.array([[1, 0], [0, 0]])] + >>> c = CharPol.vieta(dlambda) + >>> norm(np.array([c[0].flat[0], c[1].flat[0], c[2].flat[0]]) - np.array([1, -3, 2])) < 1e-10 + True + + """ + # FIXME probably a problem if the number of lda is not even + + # Number of derivative is N - 1 + N = np.array(dLambda[0].shape) - 1 + # M number of eigenvalue in the set + M = len(dLambda) + # Intialize the coef of the polynomial + dcoef_pol = [] + dcoef0 = np.zeros_like(dLambda[0], dtype=complex) + dcoef0.flat[0] = 1. 
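+        # The polynomial is monic: its leading coefficient a_M is 1, so only the
+        # zeroth-order derivative of this coefficient is nonzero.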
+ dcoef_pol.append(dcoef0) + + # Loop to create each coefficients of the polynomial depeding of the order + for order in range(1, M+1): + lda_id = np.arange(0, M) + # local coeff for summation + dcoef_pol_ = np.zeros_like(dLambda[0], dtype=complex) + + # TODO add test if no product ? + # get all combinaison of with 1 eigenvalue , 2 etc.... + for index in it.combinations(lda_id, order): + # Compute the values derivative of the product + dcoef_pol_ += diffprodTree([dLambda[i] for i in index], N) + + dcoef_pol.append(dcoef_pol_*(-1)**order) + + # Return the successive derivative of the polynomial coef, no factorial !! + return dcoef_pol + + def multiply(self, C, max_workers=None): + """Multiply this polynomial by another Charpol Object. + + The two CharPol object must be computed at the same `nu0` value and the + `dLambda` attributs should have the same format and the same number + of derivatives. + + The derivatives of all coefficients are obtained from the coefficients + of each polynomial. + + Depending of the number of terms, this appraoch may be faster than + using Vieta formula when the number of eigenvalue increases. + + The number of process involved in the computation is defined in EasterEig + options file and can be set using `ee.options.gopts['max_workers_mult'] = 4`. + The default is 1. For better performance, use the number of cores. + If `eig` object involve petsc matrices and collective executions, + it may fail with value different from 1. + + Parameters + ---------- + C : CharPol + The second polynomial. + max_workers : int, optional + The number of process involved in the computation. The default is `None`. + If `None` it use the valuefrom `ee.options.gopts['max_worker_mult']`. + This argument is mainly intent for testing. + + Returns + ------- + P : CharPol + The results of the multiplcation of both CharPol. + """ + if not isinstance(C, self.__class__): + raise ValueError('`C` must be a CharPol object.') + # Need to check if both nu0 are the same + try: + if not np.allclose(self.nu0, C.nu0): + raise ValueError('Both polynomials must be obtained at the same `nu0`') + except AttributeError: + print('The attribut `nu0` should be defined and identical for both polynomial') + # Check derivatives order are the same + if self.dcoefs[0].shape != C.dcoefs[0].shape: + raise ValueError('The number of derivative should be the same for both polynomial') + # Use the default the number of workers + if max_workers is None: + max_workers = gopts['max_workers_mult'] + # If max_workers is 1, define a context manager to avoid the initialization + # of the ProcessPoolManager + if max_workers == 1: + PoolExecutor = _SequentialExecutor + else: + PoolExecutor = ProcessPoolExecutor + + total_order = len(self.dcoefs) + len(C.dcoefs) - 2 + max_order = (len(self.dcoefs) - 1, len(C.dcoefs) - 1) + shape = self.dcoefs[0].shape + # Deduce the highest degrees for derivatives + N = np.array(shape) - 1 + + # Get factorial + F = np.asfortranarray(div_factorial(np.ones(shape, dtype=float))) + F1 = 1. / np.asfortranarray(div_factorial(np.ones(shape, dtype=float))) + + dcoefs = [] + # dcoefs are Taylor series, need to remove the factorial for the derivatives + C_dcoefs = [F1 * an for an in C.dcoefs] + S_dcoefs = [F1 * an for an in self.dcoefs] + # Compute the polynomial coef `dcoefs` + # In CharPol these coefs are in descending order such, + # a_0 lda**N + ... 
+ a_N lda**0 + __dprod = partial(_dprod, S_dcoefs=S_dcoefs, C_dcoefs=C_dcoefs, + max_order=max_order, shape=shape, N=N) + order_range = np.arange(total_order, -1, -1) + with PoolExecutor(max_workers=max_workers) as executor: + # for results in map(__dprod, order_range): + for results in executor.map(__dprod, order_range): + # Local coeff for summation + dcoefs_order = results + # dcoefs should be a Taylor series, need to divide by factorial + dcoefs.append(dcoefs_order * F) + + # Merge both dLambda for the new instance + dLambda = [] + dLambda.extend(self.dLambda.copy()) + dLambda.extend(C.dLambda.copy()) + + return CharPol._from_dcoefs(dcoefs, dLambda, self.nu0) + + def __mul__(self, other): + """Multiply this polynomial by another Charpol Object. + + Shorthand or the `multiply` method. + """ + return self.multiply(other) + + def conj(self): + """Return a copy of the complex conjugate CharPol. + + Remarks + ------- + This may be usefull to speed-up CharPol construction when all + eigenvalues come in a complex conjugate pairs. + """ + dcoefs = [ai.copy().conj() for ai in self.dcoefs] + dLambda = [ai.copy().conj() for ai in self.dLambda] + return CharPol._from_dcoefs(dcoefs, dLambda, self.nu0) + + def trunc(self, param_truncation=-1): + """Return a copy of a truncated conjugate CharPol. + + Parameters + ---------- + param_truncation: iterable or int + This truncation parameter is negative and indicate how many derivative + order should be removed. + If it is a integer, the same truncation is applied for all + parameters, else the truncation may be given for each variable. + The default is `-1` and remove the last order. Use `None` in the + iterable to pick all available order for a given variable. + + Remarks + ------- + This may be usefull to test sensitivity and convergence. + """ + N = len(self.nu0) + # Create slices accounting for truncation + if isinstance(param_truncation, int): + slices = (slice(0, param_truncation),) * (N - 1) + elif hasattr(param_truncation, '__iter__'): + slices = tuple([slice(0, i) for i in param_truncation]) + else: + raise ValueError('`param_truncation` should be an integer or an iterable') + + dcoefs = [ai[slices].copy() for ai in self.dcoefs] + dLambda = [ai[slices].copy() for ai in self.dLambda] + return CharPol._from_dcoefs(dcoefs, dLambda, self.nu0) + + @staticmethod + def _set_param(index, value, an): + """Compute the a new polynomial by setting one of its parameters. + + After moving the axes, the parameters remains in the same order but + without the set parameter. + The evaluation is based on `numpy.polynomial.polyval` and use ascending + power ordering. + The remaining indices of an enumerate multiple sets of coefficients + corresponding to the unset coefficient. + + Examples + -------- + >>> rng = np.random.default_rng(seed=1) + >>> a = rng.random((4, 3, 2)) + >>> b1 = CharPol._set_param(1, 0.1, a) + >>> b_direct = np.polynomial.polynomial.polyval3d(0.2, 0.1, 0.15, a) + >>> np.allclose(b_direct, np.polynomial.polynomial.polyval2d(0.2, 0.15, b1)) + True + >>> b2 = CharPol._set_param(2, 0.15, a) + >>> np.allclose(b_direct, np.polynomial.polynomial.polyval2d(0.2, 0.1, b2)) + True + >>> b0 = CharPol._set_param(0, 0.2, a) + >>> np.allclose(b_direct, np.polynomial.polynomial.polyval2d(0.1, 0.15, b0)) + True + """ + cn = np.moveaxis(an, index, 0) + b = np.polynomial.polynomial.polyval(value, cn) + return b + + def set_param(self, index, value): + """Return a new charpol with one of these parameters set to `value`. 
+ + The parameters remains in the same order but without the set parameter. + + Parameters + ---------- + index: int + The index of the parameter to set. + value: complex + The value to set. + + Returns + ------- + : Charpol + The new charpol with a parameter of. + """ + dcoefs_ = [] + dLambda_ = [] + value -= self.nu0[index] + for an in self.dcoefs: + dcoefs_.append(self._set_param(index, value, an)) + # Factorial are not included in dLambda, just apply it in the set direction + shape = np.ones(len(self.nu0), dtype=int) + shape[index] = self.dLambda[0].shape[index] + fact = factorial(np.arange(0, shape[index])) + fact = fact.reshape(shape) + for ldan in self.dLambda: + ldan_ = ldan / fact + dLambda_.append(self._set_param(index, value, ldan_)) + + nu0_ = np.asarray(self.nu0)[np.arange(self.nu0.size) != index] + return CharPol._from_dcoefs(dcoefs_, dLambda_, nu0_) + + def EP_system(self, vals, trunc=None): + """Evaluate the successive derivatives of the CharPol with respect to lda. + + This system is built on the sucessive derivatives of the partial + characteristic polynomial, + + v = [p0, p1, ... pn]^t, where pi = d^ip0/dlda^i. + + Generically, this système yields to a finite set of EP. + + Parameters + ---------- + vals : iterable + Containts the value of (lda, nu_0, ..., nu_n) where the polynom + must be evaluated. Althought nu is relative to nu0, absolute value + have to be used. + trunc : int or None + The truncation to apply in each parameter wtr the maximum available + order of the Taylor expansion. The integer is supposed + to be negative as in `an[:trunc, :trunc, ..., :trunc]`. + This argument is useful to compare several order of Taylor + expansion. + + Returns + ------- + v : np.array + Value of the vectorial function @ vals + """ + # Total number of variables (N-1) nu_i + 1 lda + N = len(vals) + # Create slices accounting for truncation + slices = (slice(0, trunc),) * (N - 1) + # polynomial degree in lda + deg = len(self.dcoefs) + # Extract input value + lda, *nu = vals + nu = np.array(nu, dtype=complex) - np.array(self.nu0, dtype=complex) + # compute the an coefficients at nu + an = np.zeros((len(self.dcoefs),), dtype=complex) + # Compute a_n at nu + for n, a in enumerate(self.dcoefs): + # an[n] = pu._valnd(polyval, a, *nu) + an[n] = polyvalnd(nu, a[slices]) + + # Compute the derivtaive with respect to lda + + # Create a coefficient matrix to account for lda derivatives + # [[1, 1, 1, 1], [3, 2, 1, 1], [2, 1, 1, 1] + DA = np.ones((N, deg), dtype=complex) + for i in range(1, N): + DA[i, :-i] = np.arange((deg-i), 0, -1) + # Evaluate each polynom pi + v = np.zeros((N,), dtype=complex) + for n in range(0, N): + # apply derivative with respect to lda + dan = an[slice(0, deg-n)] * np.prod(DA[0:(n+1), slice(0, deg-n)], 0) + # np.polyval start by high degree + # np.polynomial.polynomial.polyval start by low degree!!! + v[n] = polyvalnd(lda, dan[::-1]) + + return v + + def _jacobian_utils(self): + """Precompute the shifted indices and combinatoric elements used + in jacobian matrix (N+1) x (N+1). + + Attributes + ---------- + _dlda_prod : np.array + Coefficients used in 1D (lda) polynomial derivative. Assume decending order. + _der_coef : np.array + Coefficients used in ND (nu_0, ..., nu_N) polynomial derivative. + _da_slice : np.array + List of slices of da array. + _da_shape : np.array + List of shape of da array. 
+ """ + # get number of Parameters + N = len(self.dcoefs[0].shape) + 1 + # polynomial degree in lda + deg = len(self.dcoefs) + # shape of a(nu) + shape = np.array(self.dcoefs[0].shape) + # Create a coefficient matrix dlda_prod to account for lda derivatives + # [[1, 1, 1, 1], [3, 2, 1, 1], [2, 1, 1, 1] + DA = np.ones((N+1, deg), dtype=complex) + for i in range(1, N+1): + DA[i, :-i] = np.arange((deg-i), 0, -1) + self._dlda_prod = np.cumprod(DA, 0) + + # Create Coefficients used nu_i derivatives + self._der_coef = np.empty((N, N), dtype=object) + self._da_slice = np.empty((N, N), dtype=object) + self._da_shape = np.empty((N, N), dtype=object) + # Loop of the derivative of P, fill J raws + for row in range(0, N): + # loop over the nu variables, fill column + for col in range(0, N): + # store coef for lda-evaluation + # recall that an[0] * lda**n + an = np.zeros((len(self.dcoefs),), dtype=complex) + # create the matrix accounting for derivative of coefs + der_coef_list = [] + start = np.zeros_like(shape) + # shape of the derivatives of the coef + da_shape = shape.copy() + if col > 0: + da_shape[col-1] -= 1 + start[col-1] += 1 + da_slice_list = [] + for nu_i in range(0, N-1): + if nu_i == col - 1: + der_coef_list.append(np.arange(1, da_shape[nu_i]+1)) + da_slice_list.append(slice(1, da_shape[nu_i]+1)) + else: + der_coef_list.append(np.ones((da_shape[nu_i],))) + da_slice_list.append(slice(0, da_shape[nu_i])) + # Maintains fortran ordering + self._der_coef[row, col] = np.asfortranarray(_outer(*der_coef_list)) + self._da_slice[row, col] = tuple(da_slice_list) + self._da_shape[row, col] = da_shape.copy() + + def jacobian(self, vals, trunc=None): + """Compute the jacobian matrix of the EP system at nu. + + This system is built on the sucessive derivatives of the partial + characteristic polynomial, + + J = [[dp0/dlda, dp0/dnu_0 ...., dp0/dnu_n], + [dp1/dlda, dp1/dnu_0 ...., dp1/dnu_n], + .....] + where pi = d^ip0/dlda^i. + + + Generically, this système yields to a finite set of EP. + + Althought nu is relative to nu0, absolute value have to be used. + + # TODO need to add a test (validated in toy_3doy_2params) + + Parameters + ---------- + vals : iterable + Containts the value of (lda, nu_0, ..., nu_n) where the polynom + must be evaluated. Althought nu is relative to nu0, absolute value + have to be used. + trunc : int or None + The truncation to apply in each parameter wtr the maximum available + order of the Taylor expansion. The integer is supposed + to be negative as in `an[:trunc, :trunc, ..., :trunc]`. + This argument is useful to compare several order of Taylor + expansion. + + Returns + ------- + J : np.array + The jacobian matrix. + + Notes + ----- + Works with _jacobian_utils method. This methods will create some required + indices and constants array. 
+ + """ + # Total number of variables (N-1) nu_i + 1 lda + N = len(vals) + # Create slices accounting for truncation + slices = (slice(0, trunc),) * (N - 1) + # polynomial degree in lda + deg = len(self.dcoefs) + # Extract input value + lda, *nu = vals + nu = np.array(nu, dtype=complex) - np.array(self.nu0, dtype=complex) + J = np.zeros((N, N), dtype=complex) + + # Alias coefficient matrix dlda_prod to account for lda derivatives + dlda_prod = self._dlda_prod + # Alias coefficients and indices matrices to account for nu derivatives + der_coef = self._der_coef + da_slice = self._da_slice + da_shape = self._da_shape + + # Loop of the derivative of P, fill J raws + for row in range(0, N): + # loop over the nu variables, fill column + for col in range(0, N): + # store coef for lda-evaluation + # recall that an[0] * lda**n + an = np.zeros((len(self.dcoefs),), dtype=complex) + # Loop over the polynomial coefs + for n, a in enumerate(self.dcoefs): + # Recall that a[0,...,0] * nu0**0 * ... * nu_m**0 + # Create a zeros matrix + # Maintains fortran ordering + da = np.zeros(da_shape[row, col], dtype=complex, order='F') + # and fill it with the shifted the coef matrix + da = a[da_slice[row, col]] * der_coef[row, col] + # an[n] = pu._valnd(polyval, da, *nu) + an[n] = polyvalnd(nu, da[slices]) + # apply derivative with respect to lda + if col == 0: + # Increase the derivation order + # dan = an[0:-(row+1)] * np.prod(DA[1:(row+2), :-(row+1)], 0) + dan = an[0:-(row+1)] * dlda_prod[row+1, :-(row+1)] + else: + # Apply successived derivative of the parial Char pol + # dan = an[slice(0, deg-row)] * np.prod(DA[0:(row+1), slice(0, deg-row)], 0) + dan = an[slice(0, deg-row)] * dlda_prod[row, slice(0, deg-row)] + # np.polyval start by high degree + # np.polynomial.polynomial.polyval start by low degree!!! + # J[row, col] = polyval(lda, dan[::-1]) + J[row, col] = polyvalnd(lda, dan[::-1]) + return J + + def eval_at(self, vals): + """Evaluate the partial caracteristic polynomial at (lda, nu). + + Parameters + ---------- + vals : iterable + Containts the value of (lda, nu_0, ..., nu_n) where the polynom + must be evaluated. Althought nu is relative to nu0, absolute value + have to be used. + + Returns + ------- + The partial caracteristic polynomial at (lda, nu). + + # TODO need to add a test (validated in toy_3doy_2params) + """ + # Extract input value + lda, *nu = vals + # Evaluate the polynomial *coefficient* at nu + an = self.eval_an_at(nu) + # Evaluate the polynomial + # np.polyval start by high degree + # np.polynomial.polynomial.polyval start by low degree!!! + return polyvalnd(lda, an[::-1]) + + def eval_an_at(self, nu): + """Evaluate the partial caracteristic polynomial coefficient an at nu. + + Parameters + ---------- + nu : iterable + Containts the value of (nu_0, ..., nu_n) where the polynom + must be evaluated. Althought nu is relative to nu0, absolute value + have to be used. + + Returns + ------- + an : array + The partial caracteristic polynomial coefficient at nu in descending order + an[0] * lda **(n-1) + ... + an[n-1] + """ + # Extract input value + nu = np.array(nu, dtype=complex) - np.array(self.nu0, dtype=complex) + an = np.zeros((len(self.dcoefs),), dtype=complex) + # Compute a_n at nu + for n, a in enumerate(self.dcoefs): + # an[n] = pu._valnd(polyval, a, *nu) + an[n] = polyvalnd(nu, a) + + # np.polyval start by high degree + # np.polynomial.polynomial.polyval start by low degree!!! + return an + + def eval_lda_at(self, nu): + """Evaluate the eigenvalues at nu. 
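+        The eigenvalues are recovered as the roots of the partial characteristic
+        polynomial whose coefficients are first evaluated at `nu` (see `eval_an_at`).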
+ + Parameters + ---------- + nu : iterable + Containts the values of (nu_0, ..., nu_n) where the polynom + must be evaluated. Althought nu is relative to nu0, absolute value + have to be used. + + Returns + ------- + lda : array + The eigenvalue estimated at nu. + """ + an = self.eval_an_at(nu) + return np.roots(an) + + @staticmethod + def _newton(f, J, x0, tol=1e-8, normalized=True, verbose=False): + """Run basic Newton method for vectorial function. + + Parameters + ---------- + f : function + The vectorial function we want to find the roots. Must Returns + an array of size N. + J : function + Function that provides the Jacobian matrix NxN. + x0 : array + The initial guess of size N. + tol : float + The tolerance to stop iteration. + normalized : bool + If `True` the tol is applied to the ratio of the ||f(x)||/||f(x0)|| + verbose : bool + print iteration log. + + Returns + ------- + x : array + The solution or None of NiterMax has been reached or if the + computation fail. + """ + # set max iteration number + NiterMax = 150 + x = np.array(x0) + k = 0 + cond = True + while cond: + fx = f(x) + if k == 0: + norm_fx0 = np.linalg.norm(fx) + # working on polynomial may leads to trouble if x is outside the + # convergence radius. Use try/except + try: + x = x - np.linalg.inv(J(x)) @ fx + if verbose: + print(k, x, abs(fx)) + k += 1 + # condition + if normalized: + cond = (k < NiterMax) and (np.linalg.norm(fx)/norm_fx0) > tol + else: + cond = (k < NiterMax) and np.linalg.norm(fx) > tol + except: + print('Cannot compute Newton iteration at x=', x) + print(' when starting from x0=', x0) + return None + + # if stops + if k < NiterMax: + return x + else: + print('The maximal number of iteration has been reached.') + return None + + def lm_from_sol(self, z0, tol=1e-8, normalized=True, verbose=False): + """Find EP with Levenberg-Marquard solver from single starting point. + + Based on scipy/MINPACK implementation of 'lm' through the scipy.optimize + root function. + Since the algorithm work with real value problem. Real and imaginary are + splt. + + Parameters + ---------- + x0 : array + The initial guess of size N. + tol : float + The tolerance to stop iteration. + normalized : bool + If `True` the tol is applied to the ratio of the ||f(x)||/||f(x0)|| + verbose : bool + print iteration log. + + Returns + ------- + x : array + The solution or None of NiterMax has been reached or if the + computation fail. + """ + # set max iteration number + niter_max = 150 + # Def few local function + def to_z(x): + """Convert the in-lined real and imag unknown `x` to complex unknown.""" + return x.reshape((-1, 2)) @ np.array([1, 1j]) + + def to_x(z): + """Convert the complex unknown `z` toin-line real and imag unknown.""" + return np.stack((z.real, z.imag), axis=1).ravel() + + def f_and_jac(x): + """Split the EP system and the jacobian matrix into real and imag.""" + # Compute the Jac and the function from charpol (C) + nparam = len(z0) + z = to_z(x) + Jc = self.jacobian(z) + Fc = self.EP_system(z) + # Compute the real version + Jr = np.zeros((x.size, x.size)) + Fr = np.zeros((x.size,)) + # Add terms from p dans d_lda p + for eq_id in range(nparam): + eq_idr = eq_id * 2 + Fr[eq_idr] = Fc[eq_id].real + Fr[eq_idr + 1] = Fc[eq_id].imag + for i in range(nparam): + ir = i * 2 + # Real part eqs + Jr[eq_idr, ir] = Jc[eq_id, i].real + Jr[eq_idr, ir+1] = - Jc[eq_id, i].imag + # Imag part eqs + Jr[eq_idr+1, ir] = Jc[eq_id, i].imag + Jr[eq_idr+1, ir+1] = Jc[eq_id, i].real + return Fr, Jr + + # see also hybr,s ometimes beter ? 
+ # scaling = np.ones_like(x) + # scaling[6:] = imag_scaling + # Check input + if isinstance(z0, np.ndarray): + z0 = z0.ravel() + else: + z0 = np.array(z0).ravel() + x = to_x(z0) + scaling = 1. #/ np.abs(x) + sol = spo.root(f_and_jac, x, method='lm', jac=True, callback=None, + options={'maxiter': niter_max, 'ftol': tol, 'xtol': tol}) #'diag': scaling + # }) #{}'xtol': 0.00000001, + if verbose: + print(sol) + + z = to_z(sol.x) + x = sol.x + if sol.success: + if verbose: + print('Convergence in {} iterations.'.format(sol.nfev), sol.message) + else: + print(sol.message) + z = None + return z + + def newton_from_sol(self, x, **kargs): + """Find EP with Newton-Raphson solver from single starting point.""" + return self._newton(self.EP_system, self.jacobian, x, **kargs) + + def newton(self, *args, **kargs): + """Mesh parametric space and run iterative solver to find EP in parallel.""" + print('Method `Newton` is now deprecated, use now `iterative_solve`.') + return self.iterative_solve(*args, **kargs) + + def iterative_solve(self, bounds, Npts=4, decimals=4, max_workers=4, tol=1e-4, + verbose=False, algorithm='nr', chunksize=100): + """Mesh parametric space and run iterative solver to find EP in parallel. + + Parameters + ---------- + bounds : iterable + Each item must contains the 2 corners in the complex plane. For + instance if bounds = [(-1-1j, 1+1j), (-2-2j, 2+2j)], + the points will be put in this C**2 domain. + Althought nu is relative to nu0, absolute value have to be used. + Npts : int + The number of point in each direction between the bounds. + decimals : int + The number of decimals keep to filter the solution (using np.unique). + Note it should be concistant with `tol` (~|log10(tol)|). + max_workers : int + The number of worker to explore the parametric space. + tol : float + The tolerance to stop iteration. See solver details for deeper insight. + normalized : bool + If `True` the tol is applied to the ratio of the ||f(x)||/||f(x0)|| + verbose : bool + print iteration log. + algorithm : string + Can be either 'nr' for Newton-Raphson, either 'lm' for Levenberg-Marquard. + The second is often more robust. + chunksize : int + The size of computation grouped together and sent to workers for + parallel execution. + + Returns + ------- + sol : array + The N 'unique' solutions for the M unknowns in a NxM array. + + Remarks + ------- + For parallel execution, consider using (before numpy import) + ```python + import os + os.environ["OMP_NUM_THREADS"] = "1" + ``` + """ + if len(bounds) != len(self.nu0) + 1: + raise IndexError('The length of `bounds` must match the number' + ' of unknown ({}).'.format(len(self.nu0) + 1)) + + def param_mesh(bound, Npts): + Re, Im = np.meshgrid(np.linspace(bound[0].real, bound[1].real, Npts), + np.linspace(bound[0].imag, bound[1].imag, Npts)) + z = Re + 1j * Im + return z.ravel() + + # TODO May limit bounds by convergence radius ? 
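+        # Each complex bound is meshed with an Npts x Npts regular grid; one local
+        # search (Newton-Raphson or Levenberg-Marquardt) is then started from every
+        # node of the resulting Cartesian product of grids.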
+ tic = time.time() + # Create a coarse mesh of the parametric space for Newton solving + grid = [] + # Strategy linsapce + for i, bound in enumerate(bounds): + if bound is None: + if i == 0: + grid.append(self.lda()) + else: + raise ValueError('Only the eigenvalue bound can be `None`.') + else: + grid.append(param_mesh(bound, Npts)) + + # For each, run Newton search + all_sol = [] + + if algorithm == 'nr': + # use partial to fixed all function parameters except lda + _p_newton = partial(self._newton, self.EP_system, self.jacobian, tol=tol, + verbose=verbose) + elif algorithm == 'lm': + _p_newton = partial(self.lm_from_sol, tol=tol, verbose=verbose) + else: + raise NotImplementedError('The request algorithm `{}` is not recognized.'.format(algorithm)) + total = np.prod([len(g) for g in grid]) + pbar = tqdm.tqdm(total=total, position=0, leave=True) + with ProcessPoolExecutor(max_workers=max_workers) as executor: + for s in executor.map(_p_newton, + it.product(*grid), + chunksize=chunksize): + all_sol.append(s) + pbar.update() + + # Filter all solution to keep only unique + # Remove None and convert to array + sol_ = np.array([s for s in all_sol if s is not None]) + # Use unique to remove duplicated row + sol = np.unique(sol_.round(decimals=decimals), axis=0) + print('> ', time.time() - tic, ' s in iterative `{}` solver.'.format(algorithm)) + return sol + + def homotopy_solve(self, degree=None, tracktol=1e-5, finaltol=1e-10, singtol=1e-14, + dense=True, bplp_max=2000, oo_tol=1e-5, only_bezout=False, tol_filter=1e-12): + """Solve EP system by homotopy method. + + Require `pypolsys` python package. + This method defines a simplified interface to `pypolsys` homotopy + solver. Such solver allow to find *all solutions* of the polynomial + systems. An upper bound of the number of solution is given by + the Bezout number, equal to the product of the degrees each polynomials. + This number may be huge. + Here, we use a m-homogeneous variable partition to limit the number of + spurious solution 'at infinty' to track. The number of paths is given + by `bplp` and can be obtained without performing the tracking by setting + `only_bezout=True`. + This method is interesting because all solution are found, whereas + for Newton-Raphson method. + + Parameters + ---------- + tracktol : float + is the local error tolerance allowed the path tracker along + the path. + finaltol : float, optional + is the accuracy desired for the final solution. It is used + for both the absolute and relative errors in a mixed error criterion. + singtol : float, optional + is the singularity test threshold used by SINGSYS_PLP. If + INGTOL <= 0.0 on input, then SINGTOL is reset to a default value. + dense : bool, optional + If `True`, evaluate the polynomial using the fastermultivariate Horner + scheme, optimized for dense polynomial. Else, monomial evaluation + are used. + bplp_max : int, optional + Provides an upper bounds to run homotopy solving. + If `bplp > bplp_max`, the computation is aborted. + oo_tol : float, optional + Tolerance to drop out solution at infinity. If `oo_tol=0`, all + solution are returned. + only_bezout : bool, optional + If `True` just compute the Bezout number (fast) and exit. + tol_filter: flaot, optional + Tolerance to drop small terms in the charpol. To keep all terms, + -1 can be used. The amplitude of leading order in lda**N is 1 by + convention. + + Returns + ------- + bplp : int + The Bezout number corresponding to the number of tracked paths. 
+        r : np.array
+            The solutions of the system, expressed as absolute values with
+            respect to nu0.
+            `r` is `None` if the computation has been aborted. If `oo_tol>0`,
+            the number of rows may be less than `bplp`.
+            If `bplp > bplp_max`, `r` is None.
+        """
+        try:
+            import pypolsys
+        except ModuleNotFoundError:
+            print('The module `pypolsys` is not installed.',
+                  ' Install it or use `iterative_solve`.')
+            raise
+        # Convert to sympy
+        p0, variables = self.taylor2sympyPol(self.dcoefs, tol=tol_filter)
+        _lda, *_ = variables
+        # Build the system
+        polys = []
+        _lda = p0.gens[0]
+        n_var = len(p0.gens)
+        # Truncate the series if needed
+        if degree is not None:
+            p0 = self._drop_higher_degree(p0, degree)
+        polys.append(p0)
+        for i in range(1, len(variables)):
+            polys.append(polys[i-1].diff(_lda))
+
+        # Pypolsys solve
+        # generate sparse polynomial
+        t0 = time.time()
+        pol = pypolsys.utils.fromSympy(polys)
+        if dense:
+            pol = pypolsys.utils.toDense(*pol)
+        part = pypolsys.utils.make_mh_part(n_var, [[i] for i in range(1, n_var+1)])
+        pypolsys.polsys.init_poly(*pol)
+        pypolsys.polsys.init_partition(*part)
+
+        # Check if there are too many solutions
+        bplp = pypolsys.polsys.bezout(singtol)
+        print('> Bezout number :', bplp)
+        if bplp > bplp_max:
+            print(' `bplp` > `bplp_max` ({}). Abort. Increase `bplp_max` and try again.'.format(bplp_max))
+            return bplp, None
+        elif only_bezout:
+            return bplp, None
+        else:
+            bplp = pypolsys.polsys.solve(tracktol, finaltol, singtol)
+            r = pypolsys.polsys.myroots.copy()
+            # Express the solutions absolutely with respect to nu0
+            for i, nu in enumerate(self.nu0):
+                r[i+1, :] += nu
+            # Keep only finite solutions, filter points at oo
+            finite_sol = np.nonzero(np.abs(r[-1, :]) > oo_tol)
+            print('> ', time.time() - t0, 's in homotopy solve. Found', bplp, 'solutions.')
+            return bplp, r[:-1, finite_sol[0]].T
+
+    def filter_spurious_solution(self, sol, trunc=-1, filter=True, tol=1e-2,
+                                 plot=False, sort=True):
+        """Remove spurious solutions based on a root sensitivity estimation.
+
+        The sensitivity of a candidate solution is estimated as the norm of the
+        Newton correction ||J(sol)^{-1} F(sol)|| of the EP system.
+
+        Parameters
+        ----------
+        sol : array
+            Contains all the found EP candidate solutions along its rows.
+        trunc : int
+            The truncation to apply in each parameter with respect to the maximum
+            available order of the Taylor expansion. The integer is supposed
+            to be negative as in `an[:trunc, :trunc, ..., :trunc]`.
+        filter : bool, optional
+            If `True`, the solutions passing the tolerance test are returned in `solf`.
+        tol : float
+            The tolerance used in the filtering. Solutions such that delta(sol) > tol
+            are removed.
+        plot : bool
+            Plot the sensitivity distribution.
+        sort : bool, optional
+            If `True`, sort the filtered solutions with ascending sensitivity
+            `deltaf`. The default is `True`.
+
+        Returns
+        -------
+        delta : array
+            The estimated sensitivity of each `sol`.
+        solf : array
+            The filtered solutions if `filter=True`, `None` otherwise.
+        deltaf : array
+            The sensitivity of the filtered solutions if `filter=True`, `None` otherwise.
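+
+        Remarks
+        -------
+        A minimal sketch of the intended post-processing chain (assuming a
+        CharPol instance `C` as in the examples shipped with the package):
+        ```python
+        s = C.iterative_solve(bounds, algorithm='lm')
+        delta, solf, deltaf = C.filter_spurious_solution(s, tol=1e-3)
+        ```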
+ """ + delta = np.zeros((sol.shape[0],)) + for i, soli in enumerate(sol): + J = self.jacobian(soli, trunc=trunc) + delta[i] = np.linalg.norm(np.linalg.inv(J) @ self.EP_system(soli, trunc=trunc)) + + if filter: + solf = sol[delta < tol, :].copy() + deltaf = delta[delta < tol].copy() + if sort: + index = np.argsort(deltaf) + deltaf = deltaf[index] + solf = solf[index, :] + else: + solf = None + deltaf = None + + if plot: + fig_delta, ax_delta = plt.subplots() + ax_delta.semilogy(sorted(delta), 'k+', markersize='6') + ax_delta.axhline(y=tol, color='grey', linestyle=':') + ax_delta.set_ylabel(r'$\delta$') + ax_delta.set_xlabel('Solution index') + return delta, solf, deltaf + + @staticmethod + def _drop_higher_degree(p, degrees): + """Remove higher degrees for the `sympy` Poly object. + + This method is useful when the polynomial stands for a Taylor series from + which you want drop higher order terms. + + Parameters + ---------- + p : Sympy Poly + The polynom to truncate. + degrees : iterable + The maximum degree to keep for each variable. + + Returns + ------- + Sympy Poly + The truncated polynomial. + """ + p_dict = p.as_dict() + degrees = np.asarray(degrees) + for key, val in list(p_dict.items()): + if (np.array(key) > degrees).any(): + del p_dict[key] + return sym.Poly.from_dict(p_dict, p.gens) + + @staticmethod + def taylor2sympyPol(coef_list, tol=1e-12): + """Convert a polynom into a sympy polynomial object in (nu, lda). + + Let us consider a_n(nu) lda**n + .... + a_0(nu), where nu stands for + all variables. + + Parameters + ---------- + coef_list : List + Contains ndarray with the Taylor series of an [an, a_n-a, ..., a0] + tol : float, optional + The tolerance used to drop out the smaller terms. + + Returns + ------- + P : Pol + Multivariate polynomial in sympy format on complex field. + variables : iterable + The list of the sympy symbolic variables. + + Examples + -------- + >>> a2, a1, a0 = np.array([[1, 2], [3, 6]]), np.array([[10, 0], [0, 0]]), np.array([[0, 5], [7, 0]]) + >>> P, variables = CharPol.taylor2sympyPol([a2, a1, a0]) # lda**2 a2 + lda * a1 + a0 + >>> print(P) + Poly(6.0*lambda**2*nu0*nu1 + 3.0*lambda**2*nu0 + 2.0*lambda**2*nu1 + 1.0*lambda**2 + 10.0*lambda + 7.0*nu0 + 5.0*nu1, lambda, nu0, nu1, domain='CC') + """ + # TODO is there a best order ? + # FIXME add a truncation flag to sparsify + coef_dict = dict() + # Store all coefs in a dict² + for n, c in enumerate(reversed(coef_list)): + for index, dc in np.ndenumerate(c): + if abs(dc) > tol: + key = (n, *index) + coef_dict[key] = dc + + # get the number of Parameters nu + nu_dim = len(coef_list[0].shape) + var_string = 'lambda, nu0:' + str(nu_dim) + # Use the dict to create a sympy Poly expression + variables = sym.symbols(var_string) + P = sym.Poly(coef_dict, variables, domain='CC') + return P, variables + + def getdH(self): + r"""Compute the Taylor series of H function with respect to nu. + + H is proportional to the discriminant of the partial characteristic polynomial. + + This method provides Taylor series of Discriminant, whereas the `discriminant` + method, using the Sylvester matrix, just provides an evaluation at fixed + nu values. + This Taylor series may be particularly usefull when dealing with univariate + polynomial to find all the roots. 
+ + This _discriminant_ estimation is obtained from the eigenvalues (roots of the CharPol), + $$ + H(\nu) = \prod_{i>> p = np.array([-12, 16, -7, 1]) + >>> q = np.array([16, -14, 3]) + >>> CharPol.sylvester(p, q).real + array([[ 1., -7., 16., -12., 0.], + [ 0., 1., -7., 16., -12.], + [ 3., -14., 16., 0., 0.], + [ 0., 3., -14., 16., 0.], + [ 0., 0., 3., -14., 16.]]) + >>> np.abs(np.linalg.det(CharPol.sylvester(q, p))) < 1e-12 + True + + """ + m = p.size + n = q.size + # size of the Symvester matrix + N = m + n - 2 + S = np.zeros((N, N), dtype=complex) + # if q has higher degree, need to swap them + if n > m: + p, q = q, p + m, n = n, m + # fill upper part with p coefficients + for i in range(0, n - 1): + S[i, i:(i+m)] = p[::-1] + # fill lower part with q coefficients + for i in range(0, m - 1): + S[i+n-1, i:(i+n)] = q[::-1] + + return S + + def radii_estimate(self, plot=False): + """Estimate the radii of convergence of all CharPol coefficients. + + The approach is based on least square fit for all paremeters and + all the CharPol coefficients and assume single variable dependance. + + Returns + ------- + R : dict + A dictionnary containing the statistics of the radius of convergence for + all coefficients and along all directions. + The primary keys are the parameter integer index and the condidary + keys are the `mean` and the `std` obtained for all polynomial coefficients. + """ + R = {} + dcoef_r = np.zeros((self.dcoefs[0].ndim, len(self.dcoefs)-1)) + # Loop over chapol coef, skip 1st + for n, an in enumerate(self.dcoefs[1:]): + # Loop over the parameters + for p in range(0, an.ndim): + index = [0] * an.ndim + index[p] = slice(0, None) + dcoef_r[p, n] = Taylor._radii_fit_1D(an[tuple(index)], plot=plot) + for p in range(0, an.ndim): + R[p] = {'mean': dcoef_r[p, :].mean(), + 'std': dcoef_r[p, :].std()} + return R + + def explore(self, index, bounds, nu=None, nvals=51): + """Explore the eigenvalues using brute force evaluation for one parameter. + + The CharPol is converted to a single parameter CharPol and its eigenvalues + are evaluated inside the _absolute_ bounds. + + Parameters + ---------- + index: int + The index of the parameter to set. + bounds: tuple + The lower left and upper-right corner of the parameter + nu: np.array, optional + The value of the parameter vector. If `None` nu0 is used. The size + of `nu` should be the same as the size of `nu0`, although The `nu[index]` + is not used. + nvals: int, optional + The number of points in each complex plane direction. + + Returns + ------- + loci: ee.Loci + An instance of `Loci` containing all computation results. 
+ """ + if nu is None: + nu = self.nu0 + # Create complex plane, bounds are absolute + Re, Im = np.meshgrid(np.linspace(bounds[0].real, bounds[1].real, nvals), + np.linspace(bounds[0].imag, bounds[1].imag, nvals)) + Nu = Re + 1j * Im + LAMBDA_pcp = np.zeros(Nu.shape, dtype=object) + for item, nui in tqdm.tqdm(np.ndenumerate(Nu), total=np.prod(Nu.shape)): + nu_ = nu.copy() + nu_[index] = nui + # Run the eigenvalue computation + Lambda_ = self.eval_lda_at(nu_) + # create a list of the eigenvalue to monitor + LAMBDA_pcp[item] = Lambda_.copy() + # Create Loci instance + LAMBDA_pcp = np.array(LAMBDA_pcp.tolist()) + loci_pcp = Loci(LAMBDA_pcp, Nu) + return loci_pcp diff --git a/eastereig/eig.py b/eastereig/eig.py index 46b8a69..7469e8e 100644 --- a/eastereig/eig.py +++ b/eastereig/eig.py @@ -36,11 +36,13 @@ import numpy as np import scipy as sp import time +import itertools as it from eastereig import _petscHere, gopts, _CONST -from eastereig.utils import pade +from eastereig.utils import pade, Taylor +from numpy.distutils.misc_util import is_sequence -if _petscHere: +if _petscHere: from slepc4py import SLEPc from petsc4py import PETSc from mpi4py import MPI # TODO find a workaround @@ -63,30 +65,33 @@ def Eig(lib, *args, **kwargs): # compatible with Python 2 *and* 3: # ABC = ABCMeta('ABC', (object,), {'__slots__': ()}) class AbstractEig(ABC): - """ Abstract class that manage an eigenpair (lda,x) of an OP object and its derivatives + """Abstract class that manage an eigenpair (lda, x) of an OP object and its derivatives. The derivatives are computed thanks to Andrew, Chu, Lancaster method (1994). Attributes ----------- lib: string {'numpy','scipysp','petsc'} - the name of the matrix library + The name of the matrix library lda: complex - the eigenvalue + The eigenvalue x: complex array_like of type lib - the eigenvector - dx: list (if computated) - the list of the sucessive derivatives of x % nu - dlda: list (if computated) - the list of the sucessive derivatives of lda % nu + The eigenvector + dx: list (if computed) + The list of the sucessive derivatives of x % nu + dlda: list (if computed) + The list of the sucessive derivatives of lda % nu + nu0: None, scalar or iterable + The value is generally set whe the derivative are computed or loaded. """ def __init__(self, lib, lda=None, x=None): - """ Init the instance + """Init the instance """ self._lib = lib self.lda = lda self.x = x # must add normalisation here + self.nu0 = None # init derivative if note None if (lda is not None) & (x is not None): @@ -99,10 +104,16 @@ def __init__(self, lib, lda=None, x=None): def __repr__(self): """ Define the representation of the class """ + # Check if dlda has a 'shape' (works with np.array) + # else use 'len' which works with all iterables... + if hasattr(self.dlda, 'shape'): + nd = tuple(np.array(self.dlda.shape) - 1) + else: + nd = len(self.dlda) - 1 return "Instance of {} @lda={} with #{} derivatives".format(self.__class__.__name__, self.lda, - len(self.dlda)) + nd) @abstractmethod def export(self, filename, eigenvec=True): @@ -139,7 +150,7 @@ def addD(self, dlda, dx): """ add new derivatives """ self.dlda.extend(dlda) - if dx: + if dx is not None: self.dx.extend(dx) @abstractmethod @@ -157,6 +168,11 @@ def getDerivatives(self, N, L): """ pass + def to_Taylor(self): + """Convert the eigenvalue derivatives to Taylor class.""" + return Taylor.from_derivatives(np.array(self.dlda, dtype=complex), + self.nu0) + def taylor(self, points, n=-1): """ Evaluate the Taylor expansion of order n at `points`. 
@@ -164,29 +180,41 @@ def taylor(self, points, n=-1): Parameters ---------- points : array_like - the value where the series is evaluated. Give the absolute value, + The value where the series is evaluated. Give the absolute value, not the relative % nu0. + In the multivariate case, the function is not vectorized. Just put + the computation point. n : int The number of terms considered in the expansion - if no value is given or if n=-1, the size of the array dlda is considered, + if no value is given or if n is `None`, the full array `dlda` is considered. + if n is negative truncation are used. Returns ------- tay : array_like the eigenvalue Taylor series """ - # check order vs length - if n > len(self.dlda): - print('Run getDerivative before...\n') - - if n == -1: n = len(self.dlda) - # converting to np.array - dlda = np.array(self.dlda, dtype=complex) - # get Taylor coef in ascending order - Df = dlda[0:n] / sp.special.factorial(np.arange(n)) - # polyval require higher degree first - tay = np.polyval(Df[::-1], points - self.nu0) - + # Check order vs length + if len(self.dlda) == 0: + raise IndexError('Run getDerivative* before...\n') + + # Check if scalar + if not is_sequence(self.nu0): + if n is None: n = len(self.dlda) + # Converting to np.array + dlda = np.array(self.dlda, dtype=complex) + # Get Taylor coef in ascending order + Df = dlda[0:n] / sp.special.factorial(np.arange(n)) + # Polyval require higher degree first + tay = np.polyval(Df[::-1], points - self.nu0) + else: + N = self.dlda.ndim + # Create slices accounting for truncation + slices = (slice(0, n),) * N + print(self.dlda[slices].shape) + T = Taylor.from_derivatives(np.array(self.dlda[slices], dtype=complex), + self.nu0) + tay = T.eval_at(points) return tay def pade(self, points, n=-1): @@ -201,17 +229,26 @@ def pade(self, points, n=-1): n : int The number of terms considered in the expansion if no value is given or if n=-1, the size of the array dlda is considered + Returns ------- pad : array_like the value of padé approximant at point + + Remarks + ------- + For now, works only for scalar parameters nu. """ - # check order vs length + # Check order vs length if len(self.dlda) == 1: - print('Run getDerivative before...\n') + raise IndexError('Run getDerivative* before...\n') - if n == -1: n= len(self.dlda) + if is_sequence(self.nu0): + raise NotImplementedError(('Padé approximant works now only for scalar parameter.', + 'Not for nu0={}'.format(self.nu0))) + if n == -1: + n = len(self.dlda) # converting to np.array dlda = np.array(self.dlda, dtype=complex) # get Taylor coef in ascending order @@ -219,7 +256,6 @@ def pade(self, points, n=-1): # order d(0) -> d(n) for padé p, q = pade(Df, n//2) pad = p(points-self.nu0) / q(points-self.nu0) - return pad def puiseux(self, ep, points, index=0, n=-1): @@ -450,8 +486,8 @@ def _InitDirectSolver(A, name='mumps'): return ksp - def getDerivatives(self, N, op): - """ Compute the successive derivative of an eigenvalue of an OP instance + def getDerivatives(self, N, op, timeit=False): + """Compute the successive derivative of an eigenvalue of an OP instance. Parameters ----------- @@ -459,6 +495,9 @@ def getDerivatives(self, N, op): the number derivative to compute L: OP the operator OP instance that describe the eigenvalue problem + timeit : bool, optional + A flag to activate textual profiling outputs. Default is `False`. 
+ RHS derivative must start at n=1 for 1st derivatives """ @@ -510,12 +549,10 @@ def getDerivatives(self, N, op): ksp = self._InitDirectSolver(Bord, name=gopts['direct_solver_name']) # defaut mumps, non symetric... u = Bord.createVecLeft() - """ - getSubVector : - This function may return a subvector without making a copy, therefore it - is not safe to use the original vector while modifying the subvector. - Other non-overlapping subvectors can still be obtained from X using this function. - """ + # getSubVector : + # This function may return a subvector without making a copy, therefore it + # is not safe to use the original vector while modifying the subvector. + # Other non-overlapping subvectors can still be obtained from X using this function. # if N > 1 loop for higher order terms Print('> Linear solve...') @@ -528,7 +565,8 @@ def getDerivatives(self, N, op): # compute RHS tic = time.time() # init timer Ftemp = op.getRHS(self, n) - Print(" # getRHS real time :", time.time()-tic) + if timeit: + Print(" # getRHS real time :", time.time()-tic) Fnest = PETSc.Vec().createNest([Ftemp, Zero]) # monolithique (no copy) @@ -541,7 +579,8 @@ def getDerivatives(self, N, op): tic = time.time() # init timer # F.view(PETSc.Viewer.STDOUT()) ksp.solve(F, u) - Print(" # solve LU real time :", time.time()-tic) + if timeit: + Print(" # solve LU real time :", time.time()-tic) # store results as list # Print('indice :', ind[0][1].getIndices(),u[ind[0][1].getIndices()] ) # self.dlda.append( np.asscalar( u[ind[0][1].getIndices()] ) ) # get value from IS, pb car // @@ -555,11 +594,131 @@ def getDerivatives(self, N, op): # get lda^(n) self.dlda.append(derivee[0]) self.dx.append(PETSc.Vec().createWithArray(u.getSubVector(ind[0][0]).copy())) # get pointer from IS, need copy - Print(n) + if timeit: + Print(n) + if timeit: + Print('\n') + + def getDerivativesMV(self, N, op, timeit=False): + """Compute the successive derivative of an eigenvalue of an OP instance. + + Parameters + ----------- + N: int + The number derivative to compute + L: OPmv + The operator OP instance that describe the eigenvalue problem + timeit : bool + A flag to activate textual profiling outputs. Default is `False`. + + RHS derivative must start at n=1 for 1st derivatives + """ + # create communicator for mpi command + comm = PETSc.COMM_WORLD.tompi4py() + + # get nu0 value where the derivative are computed + self.nu0 = op.nu0 + # Create an empty array of object + self.dx = np.empty(N, dtype=object) + # Create an zeros array for dlda + self.dlda = np.zeros(N, dtype=complex) + self.dlda.flat[0] = self.lda - Print('\n') + # construction de la matrice de l'opérateur L + L = op.createL(self.lda) + # normalization condition (push elsewhere : différente méthode, indépendace vs type ) + # must be done before L1x + v = L.createVecRight() + v.set(1. 
+ 0j) + # see also VecScale + # self.x = self.x / z.dot(x) + self.x.scale(1/v.tDot(self.x)) + self.dx.flat[0] = self.x + + # constrution du vecteur (\partial_\lambda L)x, ie L.L1x + L1x = op.createDL_ldax(self) + + # bordered + # --------------------------------------------------------------------- + # même matrice à factoriser pour toutes les dérivées + # Create the Nested Matrix (For now 3.9.0 petsc bug convert do not work with in_place) + + # convert z as a Matrix object +# vmat = PETScVec2PETScMat(v) # /!\ SEEM NOT MEMORY SAFE /!\ +# #zmatT=PETSc.Mat().createTranspose(zmat) # not inplace for rect mat, not optimal +# vmatT=PETSc.Mat() +# vmat.transpose(vmatT) + vmat = matrow((1, L.size[1]), np.complex128(1.)) + + # PETSC Mat * Vec return Vec + # convert L1x as a Matrix object + L1xmat = PETScVec2PETScMat(L1x) # columnwize + + Bord = PETSc.Mat() + # C = temp matrix, ie for now 3.9.0 petsc bug convert do not work with in_place + + C = PETSc.Mat().createNest([[L, L1xmat], [vmat, None]]) + # get back the value, assume the order is the same in nest and in the converted + ind = C.getNestISs() # create IS + # conversion from nested to aij (for mumps) + C.convert(PETSc.Mat.Type.AIJ, Bord) + + # initialisation du solveur + ksp = self._InitDirectSolver(Bord, name=gopts['direct_solver_name']) # defaut mumps, non symetric... + u = Bord.createVecLeft() + # getSubVector : + # This function may return a subvector without making a copy, therefore it + # is not safe to use the original vector while modifying the subvector. + # Other non-overlapping subvectors can still be obtained from X using this function. + # if N > 1 loop for higher order terms + Print('> Linear solve...') + Zero = PETSc.Vec().create() + Zero.setSizes(size=(None, 1)) # free for local, global size=1 + Zero.setUp() + Zero.setValue(0, 0 + 0j) + # n start now at 1 for uniformization + for n in it.product(*map(range, N)): + # Except for (0, ..., 0) + if n != (0,)*len(N): + # compute RHS + tic = time.time() # init timer + Ftemp = op.getRHS(self, n) + PETSc.garbage_cleanup(comm) + if timeit: + Print(" # getRHS real time :", time.time()-tic) + + Fnest = PETSc.Vec().createNest([Ftemp, Zero]) + # monolithique (no copy) + # getArray Returns a pointer to a contiguous array that contains this processor's + # portion of the vector data + F = PETSc.Vec().createWithArray(Fnest.getArray()) # don't forget () ! 
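+
+                # Bordered (Andrew-Chu-Lancaster) system solved at each step; schematically
+                #   [ L(lda0, nu0)   (d_lda L) x0 ] [ dx^(n)   ]   [ F^(n) ]
+                #   [ v^T            0            ] [ dlda^(n) ] = [ 0     ]
+                # The same LU factorization of `Bord` is reused for every multi-index n.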
+ + # n=0 get LU and solve, then solve with stored LU + # solution u contains [dx, dlda]) + tic = time.time() # init timer + # F.view(PETSc.Viewer.STDOUT()) + ksp.solve(F, u) + if timeit: + Print(" # solve LU real time :", time.time()-tic) + # store results as list + # Print('indice :', ind[0][1].getIndices(),u[ind[0][1].getIndices()] ) + # self.dlda.append( np.asscalar( u[ind[0][1].getIndices()] ) ) # get value from IS, pb car // + # get value from IS + derivee = u[ind[0][1].getIndices()] + + if len(derivee) == 0: + derivee = np.array([0.], dtype=np.complex64) + # send the non empty value to all process + derivee = comm.allreduce(derivee, MPI.SUM) + # get lda^(n) + self.dlda[n] = derivee[0] + self.dx[n] = PETSc.Vec().createWithArray(u.getSubVector(ind[0][0]).copy()) # get pointer from IS, need copy + if timeit: + Print(n) + if timeit: + Print('\n') # end class PetscEig @@ -669,9 +828,9 @@ def getDerivatives(self, N, op): tic = time.time() # init timer if n == 1: # compute the lu factor - lu, piv = sp.linalg.lu_factor(Bord) + lu, piv = sp.linalg.lu_factor(Bord, check_finite=False) # Forward and back substitution, u contains [dx, dlda]) - u = sp.linalg.lu_solve((lu, piv), F) + u = sp.linalg.lu_solve((lu, piv), F, check_finite=False) # print(" # solve LU real time :", time.time()-tic) # get lda^(n) @@ -681,7 +840,86 @@ def getDerivatives(self, N, op): self.dx.append(u.copy()[:-1]) # print(n, ' ') - # print('\n') + def getDerivativesMV(self, N, op, timeit=False): + """ Compute the successive derivative of an eigenvalue of an OP instance + + Parameters + ----------- + N: tuple of int + the number derivative to compute + op: OPmv + the operator OP instance that describe the eigenvalue problem + timeit: bool + Unused for this class. + + RHS derivative must start at n=1 for 1st derivatives + """ + + # get nu0 value where the derivative are computed + self.nu0 = op.nu0 + # construction de la matrice de l'opérateur L, ie L.L + L = op.createL(self.lda) + # normalization condition (push elsewhere : différente méthode, indépendace vs type ) + # must be done before L1x + v = np.ones(shape=self.x.shape) + # see also VecScale + scale = (1/v.dot(self.x)) + if np.abs(scale) < 1e6: # TODO this tol is arbitrary + self.x *= scale + else: + print('Warning : v is nearly co-linear to x (|scale|={}). 
Use random vector for v.'.format(abs(scale))) + # Test (possibily) several random vector + while np.abs(scale) > 1e2: # TODO this tol is arbitrary + v = np.random.rand(*self.x.shape) + scale = (1/v.dot(self.x)) + print(' new scale is {}'.format(abs(scale))) + self.x *= scale + + # Create an empty array of object + self.dx = np.empty(N, dtype=object) + self.dx.flat[0] = self.x + # Create an zeros array for dlda + self.dlda = np.zeros(N, dtype=complex) + self.dlda.flat[0] = self.lda + + # constrution du vecteur (\partial_\lambda L)x, ie L.L1x + L1x = op.createDL_ldax(self) + + # bordered + # --------------------------------------------------------------------- + # Same matrix to factorize for all RHS + Zer = np.zeros(shape=(1, 1), dtype=complex) + Zerv = np.zeros(shape=(1,), dtype=complex) + Bord = np.bmat([[L , L1x.reshape(-1, 1)], + [v.reshape(1, -1), Zer]]) # reshape is to avoid (n,) in bmat + + # if N > 1 loop for higher order terms + print('> Linear solve...') + # n is the deriviative multi-index (tuple) + for n in it.product(*map(range, N)): + # Except for (0, ..., 0) + if n != (0,)*len(N): + # compute RHS + tic = time.time() # init timer + Ftemp = op.getRHS(self, n) + # F= sp.bmat([Ftemp, Zerv]).reshape(-1,1) + F = np.concatenate((Ftemp, Zerv)) + # print(" # getRHS real time :", time.time()-tic) + + tic = time.time() # init timer + if sum(n) == 1: + # compute the lu factor + lu, piv = sp.linalg.lu_factor(Bord) + # Forward and back substitution, u contains [dx, dlda]) + u = sp.linalg.lu_solve((lu, piv), F) + # print(" # solve LU real time :", time.time()-tic) + + # get lda^(n) + derivee = u[-1] + # store the value + self.dlda[n] = derivee + self.dx[n] = u.copy()[:-1] + # print(self.dlda) # end class NumpyEig @@ -806,4 +1044,76 @@ def getDerivatives(self, N, op): print('\n') + def getDerivativesMV(self, N, op, timeit=False): + """ Compute the successive derivative of an eigenvalue of an OP instance + + Parameters + ----------- + N: tuple of int + the number derivative to compute + op: OPmv + the operator OP instance that describe the eigenvalue problem + timeit: bool + Unused for this class. 
+ + RHS derivative must start at n=1 for 1st derivatives + """ + + # get nu0 value where the derivative are computed + self.nu0 = op.nu0 + # construction de la matrice de l'opérateur L, ie L.L + L = op.createL(self.lda) + # normalization condition (push elsewhere : différente méthode, indépendace vs type ) + # must be done before L1x + v = np.ones(shape=self.x.shape) + # see also VecScale + self.x *= (1/v.dot(self.x)) + # Create an empty array of object + self.dx = np.empty(N, dtype=object) + self.dx.flat[0] = self.x + # Create an zeros array for dlda + self.dlda = np.zeros(N, dtype=complex) + self.dlda.flat[0] = self.lda + + # constrution du vecteur (\partial_\lambda L)x, ie L.L1x + L1x = op.createDL_ldax(self) # FIXME change, now with return + Zerv = np.zeros(shape=(1,), dtype=complex) + # bordered + # --------------------------------------------------------------------- + # Same matrix to factorize for all RHS, conversion to scr for scipy speed + Bord = sp.sparse.bmat([[L, L1x.reshape(-1, 1)], + [v.reshape(1, -1), None]]).tocsc() # reshape is to avoid (n,) in bmat + + # if N > 1 loop for higher order terms + print('> Linear solve...') + # n start now at 1 for uniformization + for n in it.product(*map(range, N)): + # Except for (0, ..., 0) + if n != (0,)*len(N): + # compute RHS + tic = time.time() # init timer + Ftemp = op.getRHS(self, n) + # F= sp.bmat([Ftemp, Zerv]).reshape(-1,1) + F = np.concatenate((Ftemp, Zerv)) + print(" # getRHS real time :", time.time()-tic) + + tic = time.time() # init timer + if sum(n) == 1: + # umfpack is not in scipy but need to be installed with scikit-umfpack + # if not present, scipy use superlu + sp.sparse.linalg.use_solver(useUmfpackbool=True) + # compute the lu factor + lusolve = sp.sparse.linalg.factorized(Bord) + + # Forward and back substitution, u contains [dx, dlda]) + u = lusolve(F) + print(" # solve LU real time :", time.time()-tic) + + # get lda^(n) + derivee = u[-1] + # store the value + self.dlda[n] = derivee + self.dx[n] = u.copy()[:-1] + print(n, ' ') + # end class ScipyspEig diff --git a/eastereig/eigSolvers.py b/eastereig/eigSolvers.py index cd8b603..bd4fd3d 100644 --- a/eastereig/eigSolvers.py +++ b/eastereig/eigSolvers.py @@ -25,11 +25,11 @@ The basic use of this class is 1. Create the solver object : - `myOP.createSolver(lib='numpy',pb_type='gen')` + `myOP.createSolver(lib='numpy', pb_type='gen')` 2. Solve : - `myOP.solver.solve(nev=6,target=0+0j,skipsym=False)` + `myOP.solver.solve(nev=6, target=0+0j, skipsym=False)` 3. Get back the eigenvalue and eigenvector in a list of Eig object : - `extracted = myOP.solver.extract( [0,1] )` + `extracted = myOP.solver.extract([0, 1])` 4. Destroy solver (usefull for petsc/slepc) : `myOP.solver.destroy()` @@ -144,8 +144,6 @@ class NumpyEigSolver(EigSolver): """Define the concrete interface common to all numpy Eigenvalue solver. 
The eigenvalue problem is solved with numpy for full matrix - - # TODO add PEP by linearization """ # keep trace of the lib @@ -170,6 +168,7 @@ def solve(self, nev=6, target=0+0j, skipsym=False): """ print('> Solve {} eigenvalue problem with {} class...\n'.format(self.pb_type, self.__class__.__name__)) + # FIXME need to modify order based on lambda func if self.pb_type == 'std': self.Lda, Vec = sp.linalg.eig(self.K[0], b=None) elif self.pb_type == 'gen': @@ -179,7 +178,7 @@ def solve(self, nev=6, target=0+0j, skipsym=False): else: raise NotImplementedError('The pb_type {} is not yet implemented'.format(self.pb_type)) - # sort eigenvectors and create idx index + # Sort eigenvectors and create idx index self.sort(skipsym=skipsym) self.Vec = Vec[:, self.idx] return self.Lda @@ -188,13 +187,14 @@ def solve(self, nev=6, target=0+0j, skipsym=False): def _pep(K): """Polynomial eigenvalue solver by linearisation with numpy. - 1st basic version limited to quadratic eigenvalue problem. - #TODO Need to compare with F. Tisseur polyeig. + The linearization is performed with the first companion form (as in slepc). + Because of the monomial form, it is recommended to not exceed a degree of 5. + For higher degree, using orthogonal polynomial basis is recommanded. Parameters ---------- K : List - list of matrix. the order is (K[0] + K[1]*lda + K[2]*lda**2)x=0 + list of matrix. The order is (K[0] + K[1]*lda + K[2]*lda**2)x=0 Examples -------- @@ -215,16 +215,37 @@ def _pep(K): """ shape = K[0].shape dtype = K[0].dtype - # create auxiliary matrix + degree = len(K) - 1 + degree1 = degree - 1 + # Create auxiliary matrix I = np.eye(*shape, dtype=dtype) Z = np.zeros(shape, dtype=dtype) + # Create companion matrix + Comp = np.empty((degree, degree), dtype=object) + for (i, j), _ in np.ndenumerate(Comp): + if i == degree1: + # last list line with K + Comp[i, j] = -K[j] + elif j == i + 1: + # fill 1-st diag + Comp[i, j] = I + else: + # zeros elsewhere + Comp[i, j] = Z + + # Fill with I on the main diagonal excepted last term + Diag = np.empty((degree, degree), dtype=object) + for (i, j), _ in np.ndenumerate(Diag): + if (i == j) and (i < degree1): + Diag[i, j] = I + elif (i == j) and (i == degree1): + Diag[i, j] = K[j+1] + else: + # zeros elsewhere + Diag[i, j] = Z - A = np.bmat([[Z, I], - [-K[0], -K[1]] - ]) - B = np.bmat([[I, Z], - [Z, K[2]] - ]) + A = np.bmat(Comp.tolist()) + B = np.bmat(Diag.tolist()) # solved linearised QEP D, V = sp.linalg.eig(A, B) # the (2*N,) eigenvector are normalized to 1. @@ -262,13 +283,14 @@ def solve(self, nev=6, target=0+0j, skipsym=False): if self.pb_type == 'std': self.Lda, Vec = eigs(self.K[0], k=nev, M=None, sigma=target, return_eigenvectors=True) elif self.pb_type == 'gen': - self.Lda, Vec = eigs(self.K[0], k=nev, M=-self.K[1], sigma=target, return_eigenvectors=True) + self.Lda, Vec = eigs(self.K[0], k=nev, M=-self.K[1], + sigma=target, return_eigenvectors=True) elif self.pb_type == 'PEP': self.Lda, Vec = self._pep(self.K, k=nev, sigma=target) else: raise NotImplementedError('The pb_type {} is not yet implemented'.format(self.pb_type)) - # sort eigenvectors and create idx index + # Sort eigenvectors and create idx index self.sort(skipsym=skipsym) self.Vec = Vec[:, self.idx] return self.Lda @@ -277,7 +299,9 @@ def solve(self, nev=6, target=0+0j, skipsym=False): def _pep(K, k=4, sigma=0.): """Polynomial eigenvalue solver by linearisation with scipy sparse. - 1st basic version limited to quadratic eigenvalue problem. 
+ The linearization is performed with the first companion form (as in slepc). + Because of the monomial form, it is recommended to not exceed a degree of 5. + For higher degree, using orthogonal polynomial basis is recommanded. Parameters ---------- @@ -307,16 +331,34 @@ def _pep(K, k=4, sigma=0.): """ shape = K[0].shape dtype = K[0].dtype + degree = len(K) - 1 + degree1 = degree - 1 # create auxiliary matrix I = sp.sparse.eye(*shape, dtype=dtype).tocsc() Z = None + + # Create companion matrix with 'None' + Comp = np.empty((degree, degree), dtype=object) + for (i, j), _ in np.ndenumerate(Comp): + if i == degree1: + # last list line with K + Comp[i, j] = -K[j] + elif j == i + 1: + # fill 1-st diag + Comp[i, j] = I + + # Fill with I on the main diagonal excepted last term + Diag = np.empty((degree, degree), dtype=object) + for (i, j), _ in np.ndenumerate(Diag): + if (i == j) and (i < degree1): + Diag[i, j] = I + elif (i == j) and (i == degree1): + Diag[i, j] = K[j+1] + # FIXME see the impact of .tocsc - A = sp.sparse.bmat([[Z, I], - [-K[0], -K[1]] - ], dtype=dtype).tocsc() - B = sp.sparse.bmat([[I, Z], - [Z, K[2]] - ], dtype=dtype).tocsc() + A = sp.sparse.bmat(Comp.tolist()).tocsc() + B = sp.sparse.bmat(Diag.tolist()).tocsc() + # solved linearised QEP D, V = eigs(A, k=k, M=B, sigma=sigma, return_eigenvectors=True) # the (2*N,) eigenvector are normalized to 1. @@ -462,8 +504,8 @@ def solve(self, nev=6, target=0+0j, key='abs', skipsym=False): It is still possible to interact with the SLEPc solver until the destroy method call """ self._create(nev, target) - Print('> Solve eigenvalue {} problem with {} class ...\n'.format( - self.pb_type, self.__class__.__name__)) + Print('> Solve eigenvalue {} problem with {} class ...\n'.format(self.pb_type, + self.__class__.__name__)) self.E.solve() nconv = self.E.getConverged() diff --git a/eastereig/ep.py b/eastereig/ep.py index bfafd5c..b9219ba 100644 --- a/eastereig/ep.py +++ b/eastereig/ep.py @@ -14,7 +14,7 @@ r""" ##Define the EP class -Assuming that the associated eigenvalue $\lambda^*$ is of algebraical multiplicity 2, +Assuming that the associated eigenvalue \(\lambda^*\) is of algebraical multiplicity 2, the behavior of the two branches of solutions in the vicinity of \(\nu^*\) is given by a convergent **Puiseux series** [Kato:1980] $$ @@ -180,10 +180,10 @@ def dlda2dh(dlda1, dlda2): @staticmethod def Pmatrix(z0, N): - r"""Compute the matrix P defined by equating each power of \nu of the - Puiseux and Taylor expansions of the function g(\nu). + r"""Compute the matrix P defined by equating each power of \(\nu\) of the + Puiseux and Taylor expansions of the function \(g(\nu)\). - arxiv.org/abs/1909.11579 Eq. 13 + arxiv.org/abs/1909.11579 Eq. 13. Parameters ---------- @@ -215,7 +215,7 @@ def solveOddPower(xi, c, N): r""" Compute odd terms of the Puiseux expansion using iterative solver of the Non-Linear system - arxiv.org/abs/1909.11579 Eq. 15-16 + arxiv.org/abs/1909.11579 Eq. 15-16. Description ----------- @@ -334,17 +334,16 @@ def _roots(self, tronc): return roots def locate(self, tol=1e-2, xi=0.95, tronc=0): - r"""Compute the roots \( \\zeta_n \) of \( T_h^{N} \) the taylor expansion of h of order N. + r"""Compute the roots \(\zeta_n\) of \(T_h^{N}\) the taylor expansion of h of order N. The Exceptional Point (EP) is/are one of these roots. - Others roots which do not correspond to the EP are regularly arranged. They mostly seems to be - localized on a circle of radius R + Others roots which do not correspond to the EP are regularly arranged. 
+ They mostly seems to be localized on a circle of radius R. The algorithm to identified the EP is: - - 1. Compare the roots with \( T_h^{N-1} \) - 2. Check if they belong to the mean radius circle \( \\vert \zeta_n \\vert < \\xi R \) + 1. Compare the roots with \(T_h^{N-1}\) + 2. Check if they belong to the mean radius circle \(\vert \zeta_n \vert < \xi R\). Parameters ---------- @@ -405,8 +404,8 @@ def getPuiseux(self, index=0): To limit the condition number of \(\mathbf{P}(\nu^*)\) when \(\vert \nu^*\vert \gg 1\), in Eq. (11) of [arxiv:1909.11579], it is preferable to make the change of variable \( \nu'=\nu-\nu^* \). - This yields to $$ 2 \mathbf{a}_e = \mathbf{P}(\nu_0-\nu^*) \mathbf{b} $$, this expression avoids - to inverse \(\mathbf{P}(\nu^*)\). + This yields to $$ 2 \mathbf{a}_e = \mathbf{P}(\nu_0-\nu^*) \mathbf{b},$$ + this expression avoids to inverse \(\mathbf{P}(\nu^*)\). """ # init and check try: diff --git a/eastereig/examples/WGadmitance_numpy_mv.py b/eastereig/examples/WGadmitance_numpy_mv.py new file mode 100644 index 0000000..1b3cb59 --- /dev/null +++ b/eastereig/examples/WGadmitance_numpy_mv.py @@ -0,0 +1,334 @@ +# -*- coding: utf-8 -*- + +# This file is part of eastereig, a library to locate exceptional points +# and to reconstruct eigenvalues loci. + +# Eastereig is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. + +# Eastereig is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. + +# You should have received a copy of the GNU General Public License +# along with Eastereig. If not, see . +""" +Localization of exceptional point on a 2D waveguide lined with two adminttances +using numpy array. +The resolution use admintance Y=1/Z to avoid the singularities present with +impedances. + +``` + y=h ____________________________ admintance bc (nu) + + | | + -|-|--> oo-duct + | | + + y=0 ~~~~~~~~~~~~Y~~~~~~~~~~~~~~ admintance bc (mu) + +``` + +Description: +------------ +The analytical eigenvalue problem and references values are described in +sec. 2 of 10.1016/j.jsv.2021.116510 (available at +https://hal.science/hal-03388773v1/document). +This yields a **generalized eigenvalue problem** where the eigenvalue lda stands +for the axial wavenumber after FEM discretisation (in sec. 4.1 of +arxiv.org/abs/1909.11579). The admintance is Y=1/Z. + +``` +[(k**2 - lda)*Mmat + mu*iωρ*Gam_bot + nu*iωρ*Gam_top - Kmat]x=0 +``` + +It is noteworthy that mu and nu are multiplied by 1j wrt to the references papers. + +Examples +-------- +>>> import numpy as np +>>> import eastereig as ee +>>> C, sol, error = main(plot=True) # doctest: +ELLIPSIS +> Solve gen eigenvalue problem with NumpyEigSolver class... 
+>>> error < 1e-3 +True +""" +import os +os.environ["OPENBLAS_NUM_THREADS"] = "1" + +# standard +import sys +import numpy as np +import scipy as sp +from scipy.special import factorial +from scipy.spatial import KDTree +from scipy.spatial import distance_matrix +from scipy.optimize import linear_sum_assignment +import matplotlib.pyplot as plt +# eastereig +import eastereig as ee +import sympy as sym +import time +import matplotlib + +class Ynumpy(ee.OPmv): + """Create a subclass of the interface class OP that describe the problem operator.""" + def __init__(self, y, n, h, rho, c, k): + """Initialize the problem. + + Parameters + ---------- + y : iterable + admintance values (mu, nu) + n : int + number of dof + h : float + length of the cavity + rho : float + air density + c : float + air sound speed + k : float + air wavenumber + """ + + self.h = float(h) + self.n = n + # air properties + self.rho, self.c, self.k = rho, c, k + # element length + self.Le = h/(n-1) + + # assemble *base* matrices, independant of the parameter + self._mass() # create _Mmat + self._stif() # create _Kmat + self._gam() # create _GamMat + # initialize OP interface + self.setnu0(y) + + # mandatory ----------------------------------------------------------- + self._lib = 'numpy' + # create the operator matrices + self.K = self._ImpMat() + # define the list of function to compute the derivatives of each operator matrix + self.dK = [self._dstiff, self._dmass] + # define the list of function to set the eigenvalue dependance of each operator matrix + self.flda = [None, ee.lda_func.Lda] + # --------------------------------------------------------------------- + + # possible to add new methods + def __repr__(self): + """Define the object representation.""" + return "Instance of Operator class {} @nu0={} ({} dof, height={})".format(self.__class__.__name__, self.nu0, self.n, self.h) + + def _mass(self): + """Define the mass matrix, of 1D FEM with ordered nodes. + + The elementary matrix reads + Me = (Le/6) * [2 1; 1 2] + Thus the lines are [2,1,0], [1,4,1], [0,1,2] x (Le/6) + """ + n = self.n + # create M matrix (complex) + # value for a inner M row [m1 m2 m1] + m1 = self.Le / 6. + m2 = self.Le * 4./6. + # Interior grid points + M = sp.sparse.diags([m1, m2, m1], [-1, 0, 1], + shape=(n, n), format='csc').toarray() + # Boundary points + M[0, 0] = m2/2. + M[0, 1] = m1 + M[n-1, n-2] = m1 + M[n-1, n-1] = m2/2. + # store it + self._Mmat = M + + def _stif(self): + """Define the stifness matrix of 1D FEM with ordered nodes. + + The elementary matrix read + Ke = (1/Le) * [1 -1; -1 1] + Thus the lines are [1,-1,0], [-1,2,-1], [0,-1,1] x (1/Le) + """ + n = self.n + # create K and M matrix (complex) + # value for inner K row [k1 k2 k1] + k1 = -1. / self.Le + k2 = 2. / self.Le + + # Striffness matrix + # Interior grid points + K = sp.sparse.diags([k1, k2, k1], [-1, 0, 1], shape=(n, n), format='csc').toarray() + # Boundary points + K[0, 0] = k2/2. + K[0, 1] = k1 + K[n-1, n-2] = k1 + K[n-1, n-1] = k2/2. + # store it + self._Kmat = K + + def _gam(self): + r"""Define the Gamma matrix accounting for the impedance BC. 
+ + Zeros everywhere except on the 1st node $\Gamma(0) = 1 $ + + Parameters + ---------- + z0 : complex + The impedance value + """ + n = self.n + # create Gamma matrix (complex) + # Striffness matrix + Gam_top = sp.sparse.coo_matrix((np.array([1.]), (np.array([0]), np.array([0]))), + shape=(n, n)).toarray() + Gam_bot = sp.sparse.coo_matrix((np.array([1.]), (np.array([n-1]), np.array([n-1]))), + shape=(n, n)).toarray() + # Store it + self._GamMat = {'mu': Gam_bot, 'nu': Gam_top} # mu, nu + + def _ImpMat(self): + """Return the operator matrices list for the generalized eigenvalue problem. + + Returns + ------- + K : matrix + K contains [k0^2*M + Gamma*1i*const.rho0*omega/Z - K , -M] + """ + omega = self.k * self.c # angular freq. + mu = 1j * omega * self.rho * self.nu0[0] + nu = 1j * omega * self.rho * self.nu0[1] + + K = [] + KK = self.k**2*self._Mmat + mu*self._GamMat['mu'] + nu*self._GamMat['nu'] - self._Kmat + # encapsulated into list + K.append(KK) + K.append(-self._Mmat) + return K + + def _dstiff(self, m, n): + r"""Define the sucessive derivative of the $\tilde{K}$ matrix with respect to nuv. + + L = \tilde{K} - lda M (standard FEM formalism) + with polynomial formlism L = K0 + lda K1 + ... + thus K0=\tilde{K} + + Parameters + ---------- + m, n : int + The order of derivation for each variable. + + Returns + ------- + Kn : Matrix (petsc or else) + The n-derivative of global K0 matrix + """ + if (m, n) == (0, 0): + Kn = self.K[0] + elif (m, n) == (1, 0): + omega = self.k * self.c # angular freq. + Kn = self._GamMat['mu'] * 1j*self.rho*omega + elif (m, n) == (0, 1): + omega = self.k * self.c # angular freq. + Kn = self._GamMat['nu'] * 1j*self.rho*omega + else: + Kn = 0 + + return Kn + + def _dmass(self, m, n): + """Define the sucessive derivative of the $M$ matrix with respect to nu. + + L = K - lda M (standard FEM formalism) + with polynomial formlism L = K0 + lda K1 + ... + thus K1=-M + + Parameters + ---------- + m, n : int + The order of derivation for each variable. + + Returns + ------- + Kn : Matrix (petsc or else) + The n-derivative of global K1 matrix + """ + if (m, n) == (0, 0): + return -self._Mmat + # if n!= 0 return 0 because M is constant + else: + return 0 + + +# Value from 10.1016/j.jsv.2021.116510 +nu_ref = np.array((3.1781 + 4.6751j, 3.0875 + 3.6234j, 3.6598 + 7.9684j, + 3.6015 + 6.9459j, 3.9800 + 11.189j, 3.9371 + 10.176j, + 1.0119 + 4.6029j, 1.0041 + 7.7896j)) / 1j + + +def main(plot=False): + """Locate the EP3 and check the error with reference solution.""" + # Number of dof + N = 200 + # Number of derivative + Nderiv = (6, 6) + # Wavenumer and air properties + rho0, c0, k0 = 1., 1., 1. + # Duct height + h = 1. 
+ # Initial value of the admittances + nu0 = np.array([7.01265-4.76715j, 2.89872-2.47j]) + + # Number of mode to keep in the CharPol + Nmodes = 7 + # Create discrete operator of the pb + imp = Ynumpy(y=nu0, n=N, h=h, rho=rho0, c=c0, k=k0) + # Initialize eigenvalue solver for *generalized eigenvalue problem* + imp.createSolver(pb_type='gen') + # Run the eigenvalue computation + Lambda = imp.solver.solve(nev=Nmodes, target=0+0j, skipsym=False) + # Create a list of the eigenvalue to monitor + lda_list = np.arange(0, Nmodes) + # Return the eigenvalue and eigenvector in a list of Eig object + extracted = imp.solver.extract(lda_list) + # Destroy solver + imp.solver.destroy() + + # Compute eigenvalues derivatives + for vp in extracted: + vp.getDerivativesMV(Nderiv, imp) + + # Create CharPol + C = ee.CharPol.from_recursive_mult(extracted) + + # Locate EP3 + s = C.iterative_solve((None, + (3.5 - 3.8j, 7+0j), + (3.5 - 3.8j, 7+0j)), decimals=4, algorithm='lm', Npts=2) + delta, sol, deltaf = C.filter_spurious_solution(s, plot=False, tol=1e-3) + + # Compute error with reference solution + D = distance_matrix(sol[:, 1][:, None], nu_ref[:, None]) + row_ind, col_ind = linear_sum_assignment(D) + error = D[row_ind, col_ind].sum() + return C, sol, error + + +# %% Main +if __name__ == '__main__': + C, sol, error = main(plot=True) + # Compare with reference solution in the complex plane + # Because of symmetry mu and nu value are interchangeable + # Convention differ from the papers (x1j) + plt.plot(sol[:, 1].real, sol[:, 1].imag, 'ko', markerfacecolor='none', label='PCP') + plt.plot(sol[:, 2].real, sol[:, 2].imag, 'ko', markerfacecolor='none', label='PCP') + plt.plot(nu_ref.real, nu_ref.imag, 'rs', label='ref.', markerfacecolor='none', markersize='10') + plt.plot(np.array(C.nu0).real, np.array(C.nu0).imag, 'b*', label=r'$\nu_0$', + markerfacecolor='none', markersize='6') + plt.legend() + plt.xlabel(r'Re $\nu$') + plt.ylabel(r'Im $\nu$') diff --git a/eastereig/examples/toy_3dof_2params.py b/eastereig/examples/toy_3dof_2params.py new file mode 100755 index 0000000..c90dc39 --- /dev/null +++ b/eastereig/examples/toy_3dof_2params.py @@ -0,0 +1,299 @@ +# -*- coding: utf-8 -*- + +# This file is part of eastereig, a library to locate exceptional points +# and to reconstruct eigenvalues loci. + +# Eastereig is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. + +# Eastereig is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. + +# You should have received a copy of the GNU General Public License +# along with Eastereig. If not, see . +r""" +Consider the following 3 DOF system. +``` + +~~~~~~~~~~+ +~~~~~~~~~~+ +~~~~~~~~~~+ + | | | | | | + mu | | k | | k | | nu +X---/\/\---| m1 | -----/\/\----- | m2 | -----/\/\-----| m3 | ---/\/\---X + | | | | | | + +~~~O~~~O~~+ +~~~O~~~O~~+ +~~~O~~~O~~+ +``` +depending on the two complex parameters mu and nu. We would like to find the EP3. + +Examples +-------- +>>> C, sol, error = main() # doctest: +ELLIPSIS +> Solve gen eigenvalue problem with NumpyEigSolver class... 
+>>> error < 1e-3 +True + +@author: bn +""" +import numpy as np +import eastereig as ee +import itertools as it +import concurrent.futures +from functools import partial +from scipy.spatial import distance_matrix +from scipy.optimize import linear_sum_assignment +import sympy as sym +from sympy.solvers.polysys import solve_generic + + +class Network(ee.OPmv): + """Create subclass of OPmv that describe the multiparameter problem.""" + + def __init__(self, nu0): + """Initialize the problem. + + Parameters + ---------- + nu0 : tuple + The parameters initial Values (complex). + """ + self.m = 1 + self.k = 1 + self.nu0 = nu0 + + # assemble *base* matrices + Mmat = self._mass() + Kmat = self._stiff() + + # initialize OP interface + self.setnu0(nu0) + + # mandatory ----------------------------------------------------------- + self._lib = 'numpy' + # create the operator matrices + self.K = [Kmat, Mmat] + # define the list of function to compute the derivatives of each operator matrix + self.dK = [self._dstiff, self._dmass] + # define the list of function to set the eigenvalue dependance of each operator matrix + self.flda = [None, ee.lda_func.Lda] + # --------------------------------------------------------------------- + + def __repr__(self): + """Define the object representation.""" + text = "Instance of Operator class {} @nu0={}." + return text.format(self.__class__.__name__, self.nu0) + + def _mass(self): + """Define the a diagonal mass matrix.""" + m = self.m + M = - np.array([[m, 0, 0], + [0, m, 0], + [0, 0, m]], dtype=complex) + return M + + def _stiff(self): + """Define the stifness matrix of 1D FEM with ordered nodes.""" + mu = self.nu0[0] + nu = self.nu0[1] + k = self.k + K = np.array([[mu+k, -k, 0.], + [-k, 2*k, -k], + [0., -k, nu+k]], dtype=complex) + return K + + def _dstiff(self, m, n): + r"""Define the sucessive derivative of the $\tilde{K}$ matrix with respect to nu. + + mu -> m + nu -> n + + Parameters + ---------- + m, n : int + the order of derivation + Returns + ------- + Kn : Matrix (petsc or else) + The n-derivative of global K0 matrix + """ + mu, nu = self.nu0 + + if (m, n) == (0, 0): + Kn = self.K[0] + elif (m, n) == (1, 0): + Kn = np.zeros_like(self.K[0], dtype=complex) + Kn[0, 0] = 1. + elif (m, n) == (0, 1): + Kn = np.zeros_like(self.K[0], dtype=complex) + Kn[2, 2] = 1. + # if (m, n) > (1, 1) return 0 because K has a linear dependancy on nu + else: + return 0 + return Kn + + def _dmass(self, m, n): + """Define the sucessive derivative of the $M$ matrix with respect to (mu, nu). + + Parameters + ---------- + m, n : int + the order of derivation + Returns + ------- + Kn : Matrix (petsc or else) + The n-derivative of global K1 matrix + """ + # if (m, n)=(0,0) return M + if (m, n) == (0, 0): + return self.K[1] + # if (m, n) != (0, 0) return 0 because M is constant + else: + return 0 + + +def _inner(l, N, var, nu, mu): + """Inner loop for sympy_check. Usefull for // computation.""" + dlda = np.zeros(N, dtype=complex) + for n in it.product(*map(range, N)): + print(n) + dlda[n] = sym.N(sym.diff(l, mu, n[0], nu, n[1]).subs(var)) + return dlda + + +def sympy_check(nu0, sympyfile=None): + """Check multiple derivatives with sympy. + + Parameters + ---------- + n0 : tuple + Contains the nominal value of mu and nu. + sympyfile : string + filename for saving sympy output. 
+ + Returns + ------- + dlda : list + Contains the sucessive derivative with respect to mu (row) and + nu (column) as np.array + + """ + # set max // workers + max_workers = 3 + # derivatives order + N = (1, 1) + # FIXME hard coded parameters + m = 1. + k = 1. + mu, nu, lda = sym.symbols('mu, nu, lambda', complex=True) + var = {mu: nu0[0], nu: nu0[1]} + M = - sym.Matrix([[m, 0, 0], + [0, m, 0], + [0, 0, m]]) + K = sym.Matrix([[mu+k, -k, 0.], + [-k, 2*k, -k], + [0., -k, nu+k]]) + # Symbols + p0 = sym.det(K + lda*M) + p1 = sym.diff(p0, lda) + p2 = sym.diff(p1, lda) + + # solve with groebner + F = [p0, p1, p2] + g = sym.groebner(F, method='f5b', domain='CC') + NewOption = sym.Options(g.gens, {'domain': 'CC'}) + # sol = _solve_reduced_system2(F, F[0].gens) + EP_sym = solve_generic(g, NewOption) + + # solve for lda + ldas = sym.solve(p0, lda) + dlda = dict() + print('May take a while...') + + # use partial to fixed all function parameters except lda + _inner_lda = partial(_inner, N=N, var=var, nu=nu, mu=mu) + # run the // computation + with concurrent.futures.ProcessPoolExecutor(max_workers=max_workers) as executor: + for i, dlda_ in enumerate(executor.map(_inner_lda, ldas)): + dlda[i] = dlda_ + if sympyfile: + np.savez(sympyfile, dlda=dlda) + return dlda, EP_sym, g + + +sol_ana = np.array([[2. - 1.77635684e-15j, 1. + 1.41421356e+00j, 1. - 1.41421356e+00j], + [2. + 1.77635684e-15j, 1. - 1.41421356e+00j, 1. + 1.41421356e+00j], + [2. - 1.73205081e+00j, 0.5-2.59807621e+00j, 1.5-2.59807621e+00j], + [2. + 1.73205081e+00j, 0.5+2.59807621e+00j, 1.5+2.59807621e+00j], + [2. - 1.73205081e+00j, 1.5-2.59807621e+00j, 0.5-2.59807621e+00j], + [2. + 1.73205081e+00j, 1.5+2.59807621e+00j, 0.5+2.59807621e+00j]]) + + +def error_between(sol1, sol2): + """Evalute the error between two sets of unordered vectors. + + Parameters + ---------- + sol1 : np.array + Array wih m1 lines of n-dimentional solution. + sol1 : np.array + Array wih m2 lines of n-dimentional solution. + + Returns + ------- + error : float + The global error between the two set using the best permutation. + """ + D = distance_matrix(sol1, sol2) + row_ind, col_ind = linear_sum_assignment(D) + error = D[row_ind, col_ind].sum() + return error + + +def main(plot=False): + """Find the EP3 of the 3 DOF problem. + + Return + ------ + sol : np.array + Fhe EP3 solutions. For each solution, it constains the eigenvalue, mu and nu. + error : float + The global error between the found EP3 and analytic solution. + """ + # Define the order of derivation for each parameter + Nderiv = (5, 5) + # Define the inital value of parameter + nu0 = np.array([0.95590969 - 1.48135044j + 0.15 + 0.1j, + 1. 
+ 1.41421356e+00j + 0.1 + 0.1j]) + # Instiate the problem + net = Network(nu0) + net.createSolver(pb_type='gen') + # Run the eigenvalue computation + Lambda = net.solver.solve(target=0+0j) + # Create a list of the eigenvalue to monitor + lda_list = np.arange(0, 3) + # Return the eigenvalue and eigenvector in a list of Eig object + extracted = net.solver.extract(lda_list) + # destroy solver (important for petsc/slepc) + net.solver.destroy() + # Compute the eigenvalue derivatives + for vp in extracted: + vp.getDerivativesMV(Nderiv, net) + # Find EP with Charpol + C = ee.CharPol(extracted) + # Use homotopy solver (if installed) + # bplp, s = C.homotopy_solve(tracktol=1e-12, finaltol=1e-13, tol_filter=-1) + s = C.iterative_solve((None, + (nu0 - (1+1j), nu0 + (1+1j)), + (nu0 - (1+1j), nu0 + (1+1j))), Npts=2, algorithm='lm', max_workers=4, tol=1e-5) + delta, sol, deltaf = C.filter_spurious_solution(s, plot=plot, tol=1e-3) + return C, sol, error_between(sol, sol_ana) + # Export a sympy poly object + + +# %% MAIN +if __name__ == '__main__': + C, sol, error = main(plot=True) + print('Found EP3:') + print(sol) + print('Error between found EP and analytic solution: ', error) diff --git a/eastereig/lda_func.py b/eastereig/lda_func.py index 75579e1..9db25c9 100644 --- a/eastereig/lda_func.py +++ b/eastereig/lda_func.py @@ -16,11 +16,11 @@ # You should have received a copy of the GNU General Public License # along with Eastereig. If not, see . -""" Define the functions to compute the successive derivatives of the function of the eigenvalue +r""" Define the functions to compute the successive derivatives of the function of the eigenvalue with respect to nu. These functions appear in L=K_0 * None + K_1*Lda(nu) + K_2 * Lda2(nu) + ... + K_n * f_n(lda(nu)) - - These functions sould have the same interface : my_fun(k,n,dlda) + - These functions sould have the same interface : my_fun(k, n, dlda) - **If k==n, the terms containing lda^(n) are skipped**. They dont belong to the RHS computation - a companion function providing the derivitive with respect to lda should be also defined and enrolled in `dlda_flda` dict (at the end) define a dict to get the derivative % lda of the @@ -28,36 +28,43 @@ Remarks: -------- -1. Only linear and quadratic dependancy are now implemented -2. If function are added, don't forget do update `_dlda_flda` - + * Only linear and quadratic dependancy are now implemented for multivariate case. + * If functions are added, don't forget do update `_dlda_flda`. + * `faaDiBruno` method allows to compute the derivative for a large variety of composite. + function \( f(\lambda (\nu)) \) with scalar parameter (see example with `Lda3`). + * For multivariate case, Liebnitz' rule should be prefered. """ import scipy as sp import numpy as np +from eastereig.utils import faaDiBruno, diffprodTree - -# linear dependancy in lda +# Linear dependancy in lda def Lda(k, n, dlda): """ Compute the k-th derivative of lda(nu) with respect to nu. - **If k==n, the terms containing lda^(n) are skiped**. They dont belong + For multivariate case, the index `k` and `n` become tuple. + + **If k==n, the terms containing lda^(n) are skipped**. 
They dont belong to the RHS computation Parameters ----------- - k : int - the requested derivative order of the flda term - n : int - the final requested number of derivative of lda + k : int or tuple + The requested derivative order of the flda term + n : int or tuple + The final requested number of derivative of lda dlda : iterable - the value of the eigenvalue derivative + The value of the eigenvalue derivative """ # k=0 nothing to du just return dlda[0] - if k == 0: - return dlda[0] - # by convention this terms is skip (do not belong to RHS) + # to tackle mv modify with sum(k) + if np.sum(k) == 0: + # Use asarray to avoid trouble between list, scalar, array + # TODO see if it the best way and uniformize the process + return np.asarray(dlda).flat[0] + # by convention this terms is skipped (do not belong to RHS) if k == n: return 0 # the other are computed, and depend of the function @@ -71,48 +78,88 @@ def dLda(lda): return 1. -# quadartic dependancy in lda +# Quadartic dependancy in lda def Lda2(k, n, dlda): - """ Compute the k-th derivative of (lda(nu)**2) ie (lda(nu)**2)**(k) with respect to nu - based on liebnitz-rule, see arxiv.org/abs/1909.11579 Eq. 38. + """ Compute the k-th derivative of (lda(nu)**2) ie (lda(nu)**2)**(k) with respect to nu. - **If k==n, the terms containing lda^(n) are skiped**. They dont belong to the RHS computation + For multivariate case, the index `k` and `n` become tuple. In univaraite case, + the implementation is based on symmetric Liebnitz-rule, see arxiv.org/abs/1909.11579 Eq. 38. + + **If k==n, the terms containing lda^(n) are skipped**. They dont belong to the RHS computation Parameters ----------- - k : int + k : int or tuple the requested derivative order of the flda term - n : int + n : int or tuple the final requested number of derivative of lda dlda : iterable the value of the eigenvalue derivative Examples --------- - Compute the derivatives of y(x)^2, with \( y = log(x) + 2\) for x=2 - # 0 1 2 3 4 5 - >>> valid = np.array([7.253041736158007, 8.07944154167984, 3.15342640972003, -0.903426409720027, 1.35513961458004, -2.83527922916008]) - >>> dlda = np.array([2.69314718055995, 1.50000000000000, -0.250000000000000, 0.250000000000000, -0.375000000000000, 0.750000000000000]) - >>> abs(Lda2(4,5,dlda) - valid[4]) < 1e-12 + Compute the derivatives of y(x)^2, with y = log(x) + 2 for x=2 + # 0 1 2 3 4 5 + >>> valid = np.array([7.253041736158007, 2.69314718055995, -0.846573590279973, 0.596573590279973, -0.644860385419959, 0.914720770839918]) + >>> dlda = np.array([2.69314718055995, 0.50000000000000, -0.250000000000000, 0.250000000000000, -0.375000000000000, 0.750000000000000]) + >>> abs(Lda2(4, 5, dlda) - valid[4]) < 1e-12 + True + + Test for multivariate formalism in Univariate case + >>> abs(Lda2((4,), (5,), dlda)- valid[4]) < 1e-12 True - """ - binom = sp.special.binom - start = 0 - # k=0 nothing to du just return dlda[0]**2 - if k == 0: - d = dlda[0]**2 - # else compute ;-) + Test for multivariate formalism in Multivariate case + Compute the derivatives of y(x1, x2)^2, with y(x1, x2) = x1*log(x2) + 2, for x1, x2 = 0.5, 2. + >>> dlda = np.array([[ 2.346573590279973, 0.25 , -0.125 , 0.125 ],\ + [ 0.693147180559945, 0.5 , -0.25 , 0.25 ],\ + [ 0. , 0. , 0. , 0. 
]]) + >>> valid = np.array([[ 5.506407614599441, 1.173286795139986, -0.461643397569993, 0.399143397569993],\ + [ 3.253041736157983, 2.693147180559945, -0.846573590279973, 0.596573590279973],\ + [ 0.960906027836403, 1.386294361119891, 0.306852819440055, -0.806852819440055]]) + >>> abs(Lda2((2, 1), (3, 2), dlda) - valid[2, 1]) < 1e-12 + True + + Check that for `k==n`, `Lda2((1, 3), (1, 3), dlda)` is equal to `valid[1, 3] - d lda**2/d lda * lda**(2)` + >>> abs(Lda2((1, 3), (1, 3), dlda) - valid[1, 3] + 2*dlda.flat[0]*dlda[1, 3] ) < 1e-12 + True + """ + # Check if multivariate + if hasattr(n, '__iter__') or hasattr(k, '__iter__'): + # Multivariate case + # k=0 nothing to du just return dlda[0]**2 + if all(ki == 0 for ki in k): + d = dlda.flat[0]**2 + else: + if k == n: + # Crop and put a 0 in dlda[k] to remove the term containing lda**(n) + # which is not in the RHS + crop = tuple(slice(0, ki+1) for ki in k) + dlda_ = dlda[crop].copy() + dlda_[k] = 0. + else: + dlda_ = dlda + # diffprodTree compute all derivative up do the k-th, return last one + d = diffprodTree([dlda_, dlda_], k).flat[-1] else: - if k == n: start=1 - # init sum - d = 0 - # upper bound - stop = int(np.floor(k/2.)) - for j in range(start, stop+1): - # delta is equal to one only when $n$ is even - delta = (k/2.) == j - d = d + binom(k, j)*(2 - delta)*dlda[k-j]*dlda[j] + # Univariate case + # This implementation is faster and clearer for univariate case + binom = sp.special.binom + start = 0 + # k=0 nothing to du just return dlda[0]**2 + if k == 0: + d = dlda[0]**2 + # else compute ;-) + else: + if k == n: start=1 + # init sum + d = 0 + # upper bound + stop = int(np.floor(k/2.)) + for j in range(start, stop+1): + # delta is equal to one only when $n$ is even + delta = (k/2.) == j + d = d + binom(k, j)*(2 - delta)*dlda[k-j]*dlda[j] return d @@ -123,19 +170,154 @@ def dLda2(lda): return 2*lda -# def faaDiBruno(): -# """ -# #TODO ! to add more general dependancies... -# """ +# Cubic dependancy in lda +def Lda3(k, n, dlda): + """ Compute the k-th derivative of (lda(nu)**3) ie (lda(nu)**3)**(k) with respect to nu + using Faa Di Bruno method for composite function. + + This approach is limited to scalar parameter nu. + + **If k==n, the terms containing lda^(n) are skipped**. They dont belong to the RHS computation + + Parameters + ----------- + k : int + The requested derivative order of the flda term. + n : int + The final requested number of derivative of lda. + dlda : iterable + The value of the eigenvalue derivative. + + + Examples + --------- + Compute the derivatives of y(x)^3, with \( y = log(x) + 2\) for x=2 + # 0 1 2 3 4 5 + >>> valid = np.array([19.5335089022175, 10.8795626042370, -1.40006053127857, 0.130200145858610, 0.699560166632044, -2.36641091139403 ]) + >>> dlda = np.array([2.69314718055995, 0.50000000000000, -0.250000000000000, 0.250000000000000, -0.375000000000000, 0.750000000000000]) + >>> abs(Lda3(4, 5, dlda) - valid[4]) < 1e-12 + True + + Check that for `k==n`, `Lda3(4, 4, dlda)` is equal to `valid[4] - d lda**3/d lda * lda**(4)` + >>> abs(Lda3(4, 4, dlda) - valid[4] + 3*dlda[0]**2*dlda[4] ) < 1e-12 + True + """ + # Check that input argument are not iterable (not supported by faa di Bruno) + if hasattr(n, '__iter__') or hasattr(k, '__iter__'): + raise ValueError('Input argument cannot be an iterable.' 
+ ' This implementation of `Lda3`' + ' cannot handle multivariate case.') + + # k=0 nothing to du just return dlda[0]**3 + if k == 0: + d = dlda[0]**3 + # else compute ;-) + else: + # Create the 'outer' function derivatives + df = np.array([dlda[0]**3, 3.*dlda[0]**2, 6.*dlda[0], 6.], dtype=dlda.dtype) + # Append a 0 in dlda[n] to remove the term containing lda**(n) which is not known + # thus not in the RHS + if k == n: + dlda_ = np.zeros(k + 1, dtype=dlda.dtype) + dlda_[:k] = dlda[:k] + else: + dlda_ = dlda[:k+1] + # Remarks : [:k+1] works even if its bigger than the vector size, it takes all + d = faaDiBruno(df[:k+1], dlda_, k)[k] + + return d + + +def dLda3(lda): + """ Compute the 1st derivative of the function Lda3 with respect to lda. + """ + return 3.*lda*lda + + +# exp(-tau * lda) +def ExpLda(k, n, dlda, tau=0.001j): + """ Compute the k-th derivative of `exp(-tau*lda(nu))` with respect to nu + using Faa Di Bruno method for composite function. + + This approach is limited to scalar parameter nu. + + **If k==n, the terms containing lda^(n) are skipped**. They dont belong to the RHS computation + + Parameters + ----------- + k : int + The requested derivative order of the flda term. + n : int + The final requested number of derivative of lda. + dlda : iterable + The value of the eigenvalue derivative. + tau : complex + A scaling factor. + + Examples + --------- + Compute the derivatives of exp(-tau*lda(nu)), with \( lda(nu) = nu + nu**2 + 2\) for x=2 and tau = 0.001 + # 0 1 2 3 4 + >>> valid = np.array([0.992031914837061, -0.00496015957418530, -0.00195926303180320, 2.96369534557572e-5, 1.16073934235404e-5]) + >>> dlda = np.array([8, 5, 2, 0, 0, 0]) + >>> abs(ExpLda(4, 5, dlda, tau=0.001) - valid[4]) < 1e-12 + True + + Check that for `k==n`, `ExpLda(4, 4, dlda)` the last terms has been removed + >>> abs(ExpLda(2, 2, dlda, tau=0.001) - valid[2] - 0.001*ExpLda(0, 2, dlda, tau=0.001)*dlda[2] ) < 1e-12 + True + """ + # Check that input argument are not iterable (not supported by faa di Bruno) + if hasattr(n, '__iter__') or hasattr(k, '__iter__'): + raise ValueError('Input argument cannot be an iterable.' + ' This implementation of `Lda3`' + ' cannot handle multivariate case.') + # k=0 nothing to do + if k == 0: + d = np.exp(-tau*dlda[0]) + # else compute ;-) + else: + # Create the 'outer' function derivatives + f = np.exp(-tau*dlda[0]) + df = (-tau)**np.arange(0, len(dlda))*f + # Append a 0 in dlda[n] to remove the term containing lda**(n) which is not known + # thus not in the RHS + if k == n: + dlda_ = np.zeros(k + 1, dtype=complex) + dlda_[:k] = dlda[:k] + else: + dlda_ = dlda[:k+1] + # Remarks : [:k+1] works even if its bigger than the vector size, it takes all + d = faaDiBruno(df[:k+1], dlda_, k)[k] + + return d + + +def dExpLda(lda): + """ Compute the 1st derivative of the function exp(-tau*lda) with respect + to lda. + """ + # FIXME find a better way to do that... + # trick to recover the defaut value of tau in ExpLda + tau = ExpLda.__defaults__[0] + return -tau*np.exp(-tau*lda) # Mapping between f(lda) -> d_\dlda f(lda) _dlda_flda = {Lda: dLda, Lda2: dLda2, + Lda3: dLda3, + ExpLda: dExpLda, None: lambda x: 0, } r""" Define a dict to map the derivative with respect to lda \( \partial_\lambda f(\lambda) \) and the function of lda \( f(\lambda) \) . -If new function is added below, a link to its derivative hould be be added here +If new function is added below, a link to its derivative must be be added here. 
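+For instance, the pairs `Lda3: dLda3` and `ExpLda: dExpLda` introduced in this
+file are registered this way; any new `f(lda)` must come with its companion
+derivative `d_lda f(lda)`.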
""" + +# %% Main for basic tests +if __name__ == '__main__': + # run doctest Examples + import doctest + doctest.testmod() \ No newline at end of file diff --git a/eastereig/loci.py b/eastereig/loci.py index d6a9cf7..7d06502 100644 --- a/eastereig/loci.py +++ b/eastereig/loci.py @@ -93,7 +93,7 @@ def export(self, LAMBDAfile): """ np.savez(LAMBDAfile, LAMBDA=self.LAMBDA, NU=self.NU) - # %% Using matplotlib + # %% Using matlab def plotRiemannMatlab(self, part, eng, n=3, label=[r'$\nu$', r'$\lambda']): """ Plot Riemman surfaces with matlab [optional]. @@ -237,9 +237,9 @@ def plotRiemann(self, Type='Re', N=2, EP_loc=None, Title='empty', return Fig, ax def plotDiscriminant(self, index=None, variable='\\nu', scale='log', - normalize=True, fig=None, - nooutput=False, clip=(0.01, 0.99)): - r""" Plot of the modulus of the discriminant of the 'partial' + normalize=True, fig=None, clip=(0.01, 0.99), + cmap=plt.cm.turbo, gradient=False): + r"""Plot of the modulus of the discriminant of the 'partial' characteristic polynomial. This analytic function vanishes for all multiple roots and is @@ -266,15 +266,20 @@ def plotDiscriminant(self, index=None, variable='\\nu', scale='log', `clip` represents the min and max quantiles to set the color limits of the current image. Using `clip` avoid that isolate extreme values disturbed the image. + cmap : cmap + Change the default colormap. The default is the high + contrast `turbo` corlormap. + gradient : bool, optional + Plot also the magnitude of the gradient on another figure. Returns ------- fig : fig - the pyplot fig + The pyplot fig ax : axes - the pyplot axes for futher plot - im : ScalarMappable - The image object (i.e., Image, ContourSet, etc.) + The pyplot axes for futher plot + ax_g : axes, optional + The pyplot axes of the gradient if `gradient=True`, else `None`. """ lda = np.asarray(self.LAMBDA) @@ -307,28 +312,34 @@ def plotDiscriminant(self, index=None, variable='\\nu', scale='log', Fig = plt.figure(num=fig) ax = Fig.add_subplot(111) if scale == 'lin': - im = plt.pcolormesh(nur, nui, np.abs(P), zorder=-1) + im = plt.pcolormesh(nur, nui, np.abs(P), zorder=-1, cmap=cmap) elif scale == 'log': field = np.log10(np.abs(P)) - im = plt.pcolormesh(nur, nui, field, zorder=-1) + im = plt.pcolormesh(nur, nui, field, zorder=-1, cmap=cmap) stats = (field.min(), field.max(), field.mean()) print('> stats:', stats) # im = plt.pcolormesh(nur, nui, P.imag, zorder=-1) plt.xlabel(r'$\mathrm{Re}\,' + variable + r' $') plt.ylabel(r'$\mathrm{Im}\,' + variable + r' $') plt.colorbar() - if not(nooutput): - plt.show() - im.set_clim(np.quantile(field.ravel(), clip[0]), np.quantile(field.ravel(), clip[1])) - return Fig, ax, im + if gradient: + df = np.gradient(field) + m = np.sqrt(df[0]**2 + df[1]**2) + fig_g, ax_g = plt.subplots() + im_g = ax_g.pcolormesh(nur, nui, m, zorder=-1, cmap=cmap) + ax_g.set_xlabel(r'$\mathrm{Re}\,' + variable + r' $') + ax_g.set_ylabel(r'$\mathrm{Im}\,' + variable + r' $') + else: + ax_g = None + return Fig, ax, ax_g # %% Using pyvista def plotRiemannPyvista(self, Type='Re', N=2, Title='empty', - variable='\\nu', lda_list=None, qt=True): + variable='\\nu', lda_list=None, qt=True, normalize=1.): """ Plot Riemman surfaces of the selected eigenvalues using pyvista. The real or imaginary part of the values to be plotted are sorted and @@ -360,6 +371,9 @@ def plotRiemannPyvista(self, Type='Re', N=2, Title='empty', qt : bool Flag to swtich between `pyvista` and `pyvistaqt`. It allows to plot in background using another thread. 
+ normalize: float, optional + Normalize the eigenvalue using the `normalize` scaling constant. The + default is 1. Returns ------- @@ -368,7 +382,7 @@ def plotRiemannPyvista(self, Type='Re', N=2, Title='empty', picked: list The list of the coordinates of the picked points. """ - try: + try: import pyvista as pv # pyvistaqt allows to plot in background if qt: @@ -433,10 +447,10 @@ def __call__(self, state): col_name = {'Re': 'Im λ', 'Im': 'Re λ'} for mode in range(min(len(lda), N)): if Type == 'Re': - points = np.column_stack([u, v, ldaplot[:, :, mode].ravel().real]) + points = np.column_stack([u, v, normalize * ldaplot[:, :, mode].ravel().real]) col = ldaplot[:, :, mode].ravel().imag elif Type == 'Im': - points = np.column_stack([u, v, ldaplot[:, :, mode].ravel().imag]) + points = np.column_stack([u, v, normalize * ldaplot[:, :, mode].ravel().imag]) col = ldaplot[:, :, mode].ravel().real cloud = pv.PolyData(points) cloud[col_name[Type]] = col diff --git a/eastereig/op.py b/eastereig/op.py index e8f9b14..b5d960f 100644 --- a/eastereig/op.py +++ b/eastereig/op.py @@ -45,7 +45,7 @@ from . import eigSolvers from .adapter import adaptVec, adaptMat # adapter patern to avoid interface missmatch from . import lda_func -from .utils import multinomial_index_coefficients +from .utils import multinomial_index_coefficients, multinomial_multiindex_coefficients from eastereig import _petscHere from abc import ABC, abstractmethod @@ -273,3 +273,100 @@ def createSolver(self, pb_type='gen', opts=None): lib = self._lib # create solver self.solver = OP.SOLVER_DICT[lib](self.K, pb_type) + + +# Abstract class, not instanciable +class OPmv(OP): + """ Absract class to manage multivariate problems. + """ + + def getRHS(self, vp, n): + """Compute RHS vector as defined in the Andrew, Chu, Lancaster method. + + Function of L, D and derivative of eigenvalue and eigenvector + La fonction dépend surement de nu, L, D et des dérivées des vp et VP + RHS must be coherent with the chosed solver (scipy, petsc) + + + Symbolic computation of the RHS for polynomial and generalized polynomial eigenvalue + problem : + + [K_0 + f_1(lda)*K_1 + ... + + f_n(lda)*K_n ]x=0 + + [K_0 + lda*K_1 + ... + + lda**n * K_n ]x=0 + + The operator should be described by 3 lists + K=[K0,K1,K2] that contains the operator matrix + dK=[dK0,dK1,dK2] that contains the derivatives of operator matrix + flda = [None, lin, quad] that contains the function of the eigenvalue. The function will + return their nth derivatives for general dependancy Faa di Bruno foruma should be used. + + Parameters + ---------- + vp : Eig + the eigenvalue object + + Returns + ------- + F.obj : vector (petsc,numpy,scipy) + the RHS vector is the good format + """ + # use adapter ! + lib = self._lib + # adapt the class interface to be independant of the library + x = adaptVec(vp.x, self._lib) + + # init + F = adaptVec(x.duplicate(), lib) # RHS same shape as eigenvector + F.set(0.) 
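+        # The assembly below expands each (K_i * f_i(lda) * x)^(n) with the
+        # multivariate Leibniz rule: the sum runs over multinomial
+        # multi-indices, and the terms containing the still unknown highest
+        # derivatives x^(n) and lda^(n) are excluded from the RHS.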
+ + # number of terms, usually 3 (K_i * f_i(lda) * xi), sometimes 2 + NTERM = 3 + # get number of variable involved in the derivatives + nvar = len(n) + # tuple to remove, because not in the RHS + # (Matrix, eigenvector, eigenvalue) + # Remarks : the lda**(n) are remove in the lda_func + #skip2 = {(0, n)} # K_0**(0) x**(n) + skip2 = {tuple([(0,)*nvar, n])} # K_0**(0, ..., 0) x**(n1, ..., n_nvar) + # skip3 = {(0, n, 0)} # K_1**(0) x**(n) dla**(0) + skip3 = {tuple([(0,)*nvar, n, (0,)*nvar])} # K_1**(0, ..., 0) x**(n1, ...,n_nvar) dla**(0, ..., 0) + + # Init matrix derivative index at previous step, to force 1st computation + m0old = -1 + # loop over operator matrices + for (Kid, _) in enumerate(self.K): + # TODO caching the matrix + # How many terms for liebnitz 2 or 3 + if self.flda[Kid] is None: + ntermi = NTERM - 1 # 2 + skip_set = skip2 + else: + ntermi = NTERM # 3 + skip_set = skip3 + + # multinomial index and coef + mind, mcoef = multinomial_multiindex_coefficients(ntermi, n) + for (mi, m) in enumerate(mind): + # check if index belong to RHS + if (tuple(m) not in skip_set): + # Computing the operator derivative may be long, the matrix is cached + # until its derivation order change + if m[0] != m0old: + # if matrix order derivative has changed since last computation, compute it + dK_m0_ = self.dK[Kid](*m[0]) + if dK_m0_ is not int(0): + # if matrix derivative do not vanish... + dK_m0 = adaptMat(dK_m0_, lib) # FIXME may be 0 + dx_m1 = adaptVec(vp.dx[m[1]], lib) # normaly never 0 + # compute the eigenvalue m[2]-th derivative, return 0 if skiped + if ntermi == 2: + F.obj -= dK_m0.dot(dx_m1.dot(mcoef[mi])) + else: + # filter if lda**(n) or d_lda L lda**(n) because not in RHS + dlda_m = self.flda[Kid](m[2], n, vp.dlda) + if abs(dlda_m) != 0: + F.obj -= dK_m0.dot(dx_m1.dot(dlda_m*mcoef[mi])) + m0old = m[0] + del m0old, dK_m0_, dK_m0, dx_m1 + return F.obj \ No newline at end of file diff --git a/eastereig/options.py b/eastereig/options.py index 91becd1..5fa055c 100755 --- a/eastereig/options.py +++ b/eastereig/options.py @@ -16,17 +16,20 @@ # You should have received a copy of the GNU General Public License # along with Eastereig. If not, see . """ -This file is use to set default value of eastereig options. Most of these options -concerns petsc and slepc solvers (for the moment). +This file is used to set default values of eastereig options. Most of these options +concerns petsc/slepc solvers or charpol. - 1. For the momment, only `mumps` direct solver has been used. - 2. Sometimes mumps crash with high fill. The problem arise if the prediction/mem - allocation is too different (default 20%) from the real computation. + 1. For the momment, only `mumps` direct solver has been used. + 2. Sometimes mumps crash with high fill. The problem arise if the prediction/mem + allocation is too different (default 20%) from the real computation. Setting 'icntl_14'=50, fix the problem. """ -gopts ={'direct_solver_name':'mumps', # petsc name of the direct solver - 'direct_solver_petsc_options_name':'mat_mumps_', # petsc direct solver name of in `PETSc.Options` - 'direct_solver_petsc_options_dict':{'icntl_14':50}, # dictionnary of the petsc options name, value - } \ No newline at end of file +gopts = {'direct_solver_name': 'mumps', # petsc name of the direct solver + 'direct_solver_petsc_options_name': 'mat_mumps_', # petsc direct solver name of in `PETSc.Options` + 'direct_solver_petsc_options_dict': {'icntl_14': 50}, # dictionnary of the petsc options name, value + # Number of workers in charpol multiply. 
Set to the number of cores for speed, + # set to 1 for more compatibility if mixed with petsc matrices + 'max_workers_mult': 1, + } diff --git a/eastereig/tests/test_charpol.py b/eastereig/tests/test_charpol.py new file mode 100644 index 0000000..db21fca --- /dev/null +++ b/eastereig/tests/test_charpol.py @@ -0,0 +1,197 @@ +# -*- coding: utf-8 -*- + +# This file is part of eastereig, a library to locate exceptional points +# and to reconstruct eigenvalues loci. + +# Eastereig is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. + +# Eastereig is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. + +# You should have received a copy of the GNU General Public License +# along with Eastereig. If not, see . + +"""Unittest test suite for charpol module. +""" + +import unittest +import numpy as np +import eastereig as ee +import time +import tempfile +from os import path +from eastereig.examples.WGadmitance_numpy_mv import Ynumpy, nu_ref +from scipy.spatial import distance_matrix +from scipy.optimize import linear_sum_assignment +from importlib.util import find_spec as _find_spec + +class Test_charpol_mult(unittest.TestCase): + """Define multinomial multiindex coefficients test cases. + """ + @classmethod + def setUpClass(cls): + print('\n> Tests of ', cls.__name__) + + def setUp(self): + # Take a small number of nodes for rapidity + N = 20 + # Set all constant to 1 for simplicity + rho0, c0, k0, h = 1., 1., 1., 1. + # Initial admittance + nu0_ref = np.array([3.1781 + 4.6751j, 3.0875 + 3.6234j]) / 1j + nu0 = np.array([1.5, 0.8])*nu0_ref + # number of mode to compute + Nmodes = 6 + + # Create discrete operator of the pb + imp = Ynumpy(y=nu0, n=N, h=h, rho=rho0, c=c0, k=k0) + # Initialize eigenvalue solver for *generalized eigenvalue problem* + imp.createSolver(pb_type='gen') + # run the eigenvalue computation + Lambda = imp.solver.solve(nev=Nmodes, target=0+0j, skipsym=False) + # create a list of the eigenvalue to monitor + lda_list = np.arange(0, Nmodes) + # return the eigenvalue and eigenvector in a list of Eig object + extracted = imp.solver.extract(lda_list) + # destroy solver (important for petsc/slepc) + imp.solver.destroy() + # Number of derivative + Nderiv = (4, 4) + # Get eigenvalues derivatives + for vp in extracted: + vp.getDerivativesMV(Nderiv, imp) + # Store them + self.extracted = extracted + self.imp = imp + + def test_charpol_mult_same_direct_computation(self): + """Test that `multiply` yield same results that direct computation. + + It tests the behavior in parallel and in sequentiel. 
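+        The coefficients of a CharPol built directly from six eigenvalues are
+        compared with those of the product of two partial CharPol, computed
+        sequentially (`max_workers=1`), in parallel, and through the `*` operator.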
+ """ + extracted = self.extracted + + # Create globals and partial CharPol + C05 = ee.CharPol(extracted[0:6]) + C04 = ee.CharPol(extracted[0:5]) + + C02 = ee.CharPol(extracted[0:3]) + C35 = ee.CharPol(extracted[3:6]) + C34 = ee.CharPol(extracted[3:5]) + + # Check with 2 polynomials of the same size in sequential ------------ + P = C02.multiply(C35, max_workers=1) + print('\n') + # Check size + self.assertTrue(len(C05.dcoefs) == len(P.dcoefs)) + + # Check values + check_an = np.zeros((len(C05.dcoefs),), dtype=bool) + for i, (an, bn) in enumerate(zip(C05.dcoefs, P.dcoefs)): + check_an[i] = np.allclose(an, bn) + self.assertTrue(check_an.all()) + + # Check with 2 polynomials of the same size in parallel -------------- + P = C02.multiply(C35, max_workers=4) + print('\n') + # Check size + self.assertTrue(len(C05.dcoefs) == len(P.dcoefs)) + + # Check values + check_an = np.zeros((len(C05.dcoefs),), dtype=bool) + for i, (an, bn) in enumerate(zip(C05.dcoefs, P.dcoefs)): + check_an[i] = np.allclose(an, bn) + self.assertTrue(check_an.all()) + + # Check operator form for asymetric polynomials ---------------------- + # Deactivate parallelism in the charpol `multiply` method + ee.options.gopts['max_workers_mult'] = 1 + Q = C02 * C34 + # Check values + check_an = np.zeros((len(C04.dcoefs),), dtype=bool) + for i, (an, bn) in enumerate(zip(C04.dcoefs, Q.dcoefs)): + check_an[i] = np.allclose(an, bn) + self.assertTrue(check_an.all()) + # Activate parallelism in the charpol `multiply` method + ee.options.gopts['max_workers_mult'] = 2 + Q = C02 * C34 + # Check values + check_an = np.zeros((len(C04.dcoefs),), dtype=bool) + for i, (an, bn) in enumerate(zip(C04.dcoefs, Q.dcoefs)): + check_an[i] = np.allclose(an, bn) + self.assertTrue(check_an.all()) + + def test_factory_from_recursive_mult(self): + """Test that `from_recursive_mult` yield same CharPol than Vieta. + """ + extracted = self.extracted + # Create globals and partial CharPol + Cv = ee.CharPol(extracted[0:6]) + t0 = time.time() + Cr2 = ee.CharPol.from_recursive_mult(extracted[0:6], block_size=2) + Cr3 = ee.CharPol.from_recursive_mult(extracted[0:6], block_size=3) + print('time =', time.time() - t0) + + # Check all polynomials have the same size + self.assertTrue(len(Cv.dcoefs) == len(Cr2.dcoefs)) + self.assertTrue(len(Cv.dcoefs) == len(Cr3.dcoefs)) + + # Check values + check_r2 = np.zeros((len(Cv.dcoefs),), dtype=bool) + check_r3 = np.zeros((len(Cv.dcoefs),), dtype=bool) + for i, (an, bn, cn) in enumerate(zip(Cv.dcoefs, Cr2.dcoefs, Cr3.dcoefs)): + check_r2[i] = np.allclose(an, bn) + check_r3[i] = np.allclose(an, cn) + self.assertTrue(check_r2.all()) + self.assertTrue(check_r3.all()) + + def test_charpol_set_param(self): + """Test the `set_param` method. + """ + extracted = self.extracted + # Create globals and partial CharPol + C = ee.CharPol.from_recursive_mult(extracted[0:4]) + nu0 = C.nu0 + C1 = C.set_param(1, nu0[1] + 0.1) + C0 = C.set_param(0, nu0[0] - 0.1) + ref = C.eval_an_at(np.array(nu0) + np.array([-0.1, 0.1])) + self.assertTrue(np.allclose(C0.eval_an_at(nu0[1] + 0.1), ref)) + + def test_load_and_export(self): + """Test the `load` and `export` method. 
+ """ + extracted = self.extracted + filename = 'PCP.npz' + # Create globals and partial CharPol + C = ee.CharPol.from_recursive_mult(extracted[0:4]) + C_lda = C.eval_lda_at(C.nu0 + 0.1) + with tempfile.TemporaryDirectory() as tmpdirname: + C.export(path.join(tmpdirname, filename)) + C2 = ee.CharPol.load(path.join(tmpdirname, filename)) + C2_lda = C2.eval_lda_at(C2.nu0 + 0.1) + C_lda.sort() + C2_lda.sort() + self.assertTrue(np.allclose(C_lda, C2_lda)) + + @unittest.skipUnless(_find_spec('pypolsys'), 'pypolsys is an optional solver') + def test_homotopy_solver(self): + """Test the homotopy (optional) EP solver. + """ + C = ee.CharPol.from_recursive_mult(self.extracted[0:5]) + bplp, s = C.homotopy_solve(tracktol=1e-12, finaltol=1e-13, tol_filter=-1) + delta, sol, deltaf = C.filter_spurious_solution(s, plot=False, tol=1e-1) + # Compute error with reference solution + D = distance_matrix(sol[:, 1][:, None], nu_ref[:, None]) + row_ind, col_ind = linear_sum_assignment(D) + error = D[row_ind, col_ind].sum() + self.assertTrue(error < 5e-2) + + +if __name__ == '__main__': + # run unittest test suite + unittest.main() diff --git a/eastereig/tests/test_multiindex.py b/eastereig/tests/test_multiindex.py new file mode 100755 index 0000000..ca626b2 --- /dev/null +++ b/eastereig/tests/test_multiindex.py @@ -0,0 +1,126 @@ +# -*- coding: utf-8 -*- + +# This file is part of eastereig, a library to locate exceptional points +# and to reconstruct eigenvalues loci. + +# Eastereig is free software: you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. + +# Eastereig is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. + +# You should have received a copy of the GNU General Public License +# along with Eastereig. If not, see . + +"""Test with sympy the computation of multinomial_multiindex_coefficients. +""" + +import unittest +import numpy as np +import eastereig.utils as utils +try: + import sympy as sym +except ImportError: + print('Sympy is required to run this test.') + + +class Test_multinomial_multiindex_coefficients(unittest.TestCase): + """Define multinomial multiindex coefficients test cases. + """ + @classmethod + def setUpClass(cls): + print('\n> Tests of ', cls.__name__) + + def setUp(self): + sym.init_printing(forecolor='White') + + def test_product_of_2_func(self): + """Test multinomial_multiindex_coefficients with a product of 2 functions. + """ + N = (2, 3) + x, y = sym.symbols('x, y') + f, g = sym.Function('f')(x, y), sym.Function('g')(x, y) + + # 2 functions f*g + m = 2 + mmi_index, mmi_coef = utils.multinomial_multiindex_coefficients(m, N) + # compute reference solution with sympy + d = sym.diff(f*g, x, N[0], y, N[1]) + + # built sympy expressions from ee coefs and index + d_ = 0 + for i, c in zip(mmi_index, mmi_coef): + d_ += c * sym.diff(f, x, i[0][0], y, i[0][1]) \ + * sym.diff(g, x, i[1][0], y, i[1][1]) + # check + self.assertTrue(d_ == d) + + def test_product_of_3_func(self): + """Test multinomial_multiindex_coefficients with a product of 3 functions. 
+ """ + N = (2, 3) + x, y = sym.symbols('x, y') + f, g, h = sym.Function('f')(x, y), sym.Function('g')(x, y), sym.Function('h')(x, y) + + # 3 functions f*g + m = 3 + mmi_index, mmi_coef = utils.multinomial_multiindex_coefficients(m, N) + # compute reference solution with sympy + d = sym.diff(f*g*h, x, N[0], y, N[1]) + + # built sympy expression from ee coefs and index + d_ = 0 + for i, c in zip(mmi_index, mmi_coef): + d_ += c * sym.diff(f, x, i[0][0], y, i[0][1]) \ + * sym.diff(g, x, i[1][0], y, i[1][1]) \ + * sym.diff(h, x, i[2][0], y, i[2][1]) + # check + self.assertTrue(d_ == d) + + def test_diffprodMV_and_diffprodTreeMV(self): + """Test the computation derivative of multivariate product of functions. + + example : H = (x*y**2) * exp(x*y) @ x=0.5, y=1.5 + """ + tol = 1e-10 + # derivation order (Cannot be changed here) + N = (2, 3) + x = 0.5 + y = 1.5 + + dh0 = np.array([[x*y**2, x*2*y, x*2, 0, 0], + [y**2, 2*y, 2, 0, 0], + [0, 0, 0, 0, 0], + [0, 0, 0, 0, 0]]) + exy = np.exp(x*y) + xy = x*y + + dh1 = np.array([[exy, x * exy, x**2 * exy, x**3 * exy, x**4*exy], + [y*exy, (xy + 1)*exy, x*(xy + 2)*exy, x**2 * (xy + 3)*exy, x**3*(xy + 4)*exy], + [y**2 * exy, y*(xy + 2)*exy, (x**2 * y**2 + 4*xy + 2) * exy, x*(x**2 * y**2 + 6*xy + 6) * exy, x**2*(x**2*y**2 + 8*xy + 12)*exy], + [y**3*exy, y**2*(xy + 3)*exy, y*(x**2*y**2 + 6*xy + 6)*exy, (x**3*y**3 + 9*x**2*y**2 + 18*xy + 6)*exy, x*(x**3*y**3 + 12*x**2*y**2 + 36*xy + 24)*exy]]) + # 'numerical' derivation with ee Liebnitz (standard) + dH = utils.diffprodMV([dh0, dh1], N) + # 'numerical' derivation with ee Liebnitz (optimized) + dHt = utils.diffprodTreeMV([dh0, dh1], N) + + # Compute sympy ref solution + x_, y_ = sym.symbols('x, y') + H_ = (x_*y_**2) * sym.exp(x_*y_) + dH_12 = sym.diff(H_, x_, 1, y_, 2) + dH_23 = sym.diff(H_, x_, 2, y_, 3) + + # check with sympy + self.assertTrue(abs(dH[1, 2] - sym.N(dH_12.subs({x_: x, y_: y}))) < tol) + self.assertTrue(abs(dH[2, 3] - sym.N(dH_23.subs({x_: x, y_: y}))) < tol) + self.assertTrue(abs(dHt[1, 2] - sym.N(dH_12.subs({x_: x, y_: y}))) < tol) + self.assertTrue(abs(dHt[2, 3] - sym.N(dH_23.subs({x_: x, y_: y}))) < tol) + + +if __name__ == '__main__': + # run unittest test suite + unittest.main() diff --git a/eastereig/utils.py b/eastereig/utils.py index 8431c8b..e2d029f 100644 --- a/eastereig/utils.py +++ b/eastereig/utils.py @@ -15,34 +15,115 @@ # You should have received a copy of the GNU General Public License # along with Eastereig. If not, see . -""" -Contains helping functions (combinatoric, ...) + +"""Contains helper functions (combinatoric, derivation, ...) """ import numpy as np from numpy import zeros, asarray, eye, poly1d, hstack, r_ from scipy import linalg +from scipy.special import factorial, binom +import itertools as it +from collections import deque, namedtuple +from functools import reduce +from functools import lru_cache +from eastereig.fpoly import polyvalnd +# Maxsize of the lru_cache +MAXSIZE = None + + +# lru_cache doesn't work out-of-the-box for generator +def two_composition(order, max_order): + r"""Yield all the 2-compostion of `order` with list of two integers. + + Whereas for _partition_, the order matter. This combinatoric generator + could be used to get all terms of a given order in the multiplcation of + two polynomials. + + See https://gist.github.com/jasonmc/989158/682cfe1c25d39d5526acaeacc1426d287ef4f5de + or https://tel.archives-ouvertes.fr/tel-00668134/document for generalization + to k-composition of `order`. + + Parameters + ---------- + order : int + The targeted sum. 
+ max_order : iterable + The max. value for the two items of the list. + + Yields + ------ + tuple + The composition members. + + Examples + -------- + All possible arrangements to get 4 with two integers smaller than 4 and 3. + >>> for c in two_composition(4, (4, 3)): + ... print(c, sum(c)) + (4, 0) 4 + (3, 1) 4 + (2, 2) 4 + (1, 3) 4 + + Check that all members are there. The compositions of `s` into exactly `n` + parts is given by the binomial coefficient \( {s+n-1 \choose s} \) + see https://en.wikipedia.org/wiki/Composition_(combinatorics) + >>> comp = list(two_composition(3, (3, 3))) + >>> len(comp) == binom(3+2-1, 3) + True + """ + # Check if it is possible + if order > sum(max_order): + raise ValueError('Impossible to reach `order` from this `max_order`.') + # Define the start state + if order > max_order[0]: + start = (max_order[0], order - max_order[0]) + else: + start = (order, 0) + + # Define the stop criterion + delta = order - max_order[1] + stop = -1 if delta <= 0 else (delta-1) + # Iteration loop + for i in range(start[0], stop, -1): + j = order-i + yield (i, j) + + +@lru_cache(maxsize=MAXSIZE) def multinomial_index_coefficients(m, n): r"""Return a tuple containing pairs ``((k1,k2,..,km) , C_kn)`` where ``C_kn`` are multinomial coefficients such that ``n=k1+k2+..+km``. - - Adapted from sympy sympy/ntheory/multinomial.py to return sorted index (speed up) + + (x_1 + ... + x_m)**n + + Adapted from sympy sympy/ntheory/multinomial.py to return sorted index (speed up) to be sure that the first index is changing slowly + Parameters + ---------- + m : int + Number of variable + n: int + Power + Returns - -------- - mindex : - mcoef : - + ------- + mindex : list + Containing all tuple (k1,k2,..,km). + mcoef : list + Containing all coefficients. + Examples - -------- - >>> mindex, mcoef = multinomial_index_coefficients(2, 5) + -------- + >>> mindex, mcoef = multinomial_index_coefficients(2, 5) >>> mindex [(0, 5), (1, 4), (2, 3), (3, 2), (4, 1), (5, 0)] >>> mcoef [1, 5, 10, 10, 5, 1] - + Notes ----- The algorithm is based on the following result: @@ -51,7 +132,7 @@ def multinomial_index_coefficients(m, n): \frac{k_1 + 1}{n - k_1} \sum_{i=2}^m \binom{n}{k_1 + 1, \ldots, k_i - 1, \ldots} Code contributed to Sage by Yann Laigle-Chapuy, copied with permission of the author. - + """ m = int(m) n = int(n) @@ -59,11 +140,11 @@ def multinomial_index_coefficients(m, n): if n: return {} return {(): 1} -# FIXME check bounds ! -# if m == 2: -# return binomial_coefficients(n) -# if m >= 2*n and n > 1: -# return dict(multinomial_coefficients_iterator(m, n)) + # FIXME check bounds ! + # if m == 2: + # return binomial_coefficients(n) + # if m >= 2*n and n > 1: + # return dict(multinomial_coefficients_iterator(m, n)) t = [n] + [0] * (m - 1) r = {tuple(t): 1} if n: @@ -101,20 +182,337 @@ def multinomial_index_coefficients(m, n): mindex, mcoef = sortdict(r) return (mindex, mcoef) +@lru_cache(maxsize=MAXSIZE) +def multinomial_multiindex_coefficients(m, N): + """Compute the multinomial coefficients and indexes for multi-index. -def sortdict(adict): + It appear in multivariable Liebnitz formula to compute derivatives like + in (f*g*h)(x, y)^(N), where N is a tuple containing the order of derivation + for each variable ('x' and 'y' here). + + It returns 2 lists that contains : + - sublist with derivation order for all functions, for all variables, + for instance, with 2 functions f and g such (f*g)(x, y)^(2, 3) + [[(0, 0), (3, 5)], [(0, 1), (3, 4)],...] 
+ means [(df/dx0, dfdy0), (dg/dx3, dgdy5)], [(df/dx0, dfdy1), (dg/dx3, dgdy4)], ... ] + - The product of the multinomial coefficients. + + Parameters + ---------- + m : integer + Number of functions. + N: tuple + Contains the integer (derivative order) for all variables. + + Returns + ------- + multi_multi_index : list + Containing all tuple (n1, n1, .., n_m). + multi_multi_coef : list + Containing the product of the multinomial coefficients. + + Examples + -------- + This example has been tested with sympy (see 'tests' folder). + >>> mindex, mcoef = multinomial_multiindex_coefficients(2, (2, 3)) + >>> mindex + [[(0, 0), (2, 3)], [(0, 1), (2, 2)], [(0, 2), (2, 1)], [(0, 3), (2, 0)], [(1, 0), (1, 3)], \ +[(1, 1), (1, 2)], [(1, 2), (1, 1)], [(1, 3), (1, 0)], [(2, 0), (0, 3)], [(2, 1), (0, 2)], \ +[(2, 2), (0, 1)], [(2, 3), (0, 0)]] + >>> mcoef + [1, 3, 3, 1, 2, 6, 6, 2, 1, 3, 3, 1] """ - Return a list the sorted keys and the associated list of value + # Create the multinomial index/coef for each variable + mindex, mcoef = [], [] + for n in N: + mindex_, mcoef_ = multinomial_index_coefficients(m, n) + mindex.append(mindex_) + mcoef.append(mcoef_) + + # Create the gloabl index/coef + multi_multi_index = [] + multi_multi_coef = [] + for mmindexi, mmcoefi in zip(it.product(*mindex), + it.product(*mcoef)): + # gloabl coef are the product of each coefs + multi_multi_coef.append(np.prod(mmcoefi)) + # rearrange data structure to regroup them by function + multi_multi_index.append([pair for pair in zip(*mmindexi)]) + + return (multi_multi_index, multi_multi_coef) + + +def sortdict(adict): + """Return a list the sorted keys and the associated list of value. """ keys = adict.keys() sorted_keys = sorted(keys) - return sorted_keys,[adict[key] for key in sorted_keys] + return sorted_keys, [adict[key] for key in sorted_keys] + + +# TODO : depreciated +def diffprod(dh, N): + r"""Compute the n-th derivative of a product of function hi knowing the hi**(k) + depending on a single variable. + + For instance, if H(x) = h0*h1*h2*...*h_(M-1) we want to compute + H^(n) = (h0*h1*h2*...)^(n) with n>> dh = [np.array([1, 2*1, 2, 0, 0]), np.exp(1)*np.ones((5,))] + >>> dh_ref = np.array([2.71828182845905, 8.15484548537714, 19.0279727992133, 35.3376637699676]) + >>> d = diffprod(dh, 4) + >>> np.linalg.norm(dh_ref - np.array(d)) < 1e-10 + True + + # With a product of 3 functions : h0 = x, h1 = exp(x), h2=x @ x=1 + >>> dh = [np.array([1, 1, 0, 0, 0]), np.exp(1)*np.ones((5,)), np.array([1, 1, 0, 0, 0])] + >>> d = diffprod(dh, 4) + >>> np.linalg.norm(dh_ref - np.array(d)) < 1e-10 + True + """ + # Get the number of functions hi + M = len(dh) + + # init DH of all h_i (no derivative) + DH = [np.prod([dh[i][0] for i in range(0, M)])] + # compute global derivative order n + for n in range(1, N): + # get multinomial index and coefficients + multi, coef = multinomial_index_coefficients(M, n) + # liebnitz formula + # sum + sh = complex(0.) + for index, k in enumerate(multi): + # coefk = multinom(n,k) + coefk = coef[index] + # produit + ph = complex(1.) + for t in range(0, M): + ph *= dh[t][k[t]] + sh += ph*coefk + # store nth derivatibe + DH.append(sh) + + # DH contains the successive derivative, no factorial inside !! + return DH + + +def diffprodMV(dh, N): + r"""Compute the n-th derivative of a product of function hi knowing the + hi^(k), depending on a single variable (if N is an int) or of multiple + variables (N is a tuple). 
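+    In the multivariate case, each `dh[i]` is an ndarray whose entry
+    `[k1, k2, ...]` contains the corresponding partial derivative of `h_i`;
+    the Leibniz rule is then applied in each variable through the
+    multinomial multi-index coefficients.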
+ + For instance, if H(x) = h0*h1*h2*...*h_(M-1) we want to compute + H^(n) = (h0*h1*h2*...)^(n) with n>> dh = [np.array([1, 2*1, 2, 0, 0]), np.exp(1)*np.ones((5,))] + >>> dh_ref = np.array([2.71828182845905, 8.15484548537714, 19.0279727992133, 35.3376637699676]) + >>> d = diffprodMV(dh, (3,)) + >>> np.linalg.norm(dh_ref - np.array(d)) < 1e-10 + True + + # With 3 functions : h0 = x, h1 = exp(x), h2=x @ x=1 + >>> dh = [np.array([1, 1, 0, 0, 0]), np.exp(1)*np.ones((5,)), np.array([1, 1, 0, 0, 0])] + >>> d = diffprodMV(dh, (3,)) + >>> np.linalg.norm(dh_ref - np.array(d)) < 1e-10 + True + + Other examples are present in `test_multiindex.py`. + """ + # TODO check type of N + # Get the number of functions hi + M = len(dh) + + # Check if single or multivariable using N + mv = len(N) != 1 + + # init DH of all h_i (no derivative) + DH = np.zeros(np.array(N)+1, dtype=complex) + # Create the generator + if mv: + # for multivariable case + derivative_deg = it.product(*map(range, np.array(N)+1)) + multinomial = multinomial_multiindex_coefficients + else: + # for mono variable case + derivative_deg = np.arange(0, N[0]+1) + multinomial = multinomial_index_coefficients + + # Compute global derivative order n + for n in derivative_deg: + # Get multinomial index and coefficients + multi, coef = multinomial(M, n) + # Liebnitz formula + # sum + sh = complex(0.) + for index, k in enumerate(multi): + # coefk = multinom(n,k) + coefk = coef[index] + # produit + ph = complex(1.) + for t in range(0, M): # TODO try to improve more cython, jit... + ph *= dh[t][k[t]] + sh += ph*coefk + # store nth derivatibe + DH[n] = sh + + # DH contains the successive derivative, no factorial inside !! + return DH + +def diffprodTree(dh, N): + r"""Compute the n-th derivative of a product of function hi knowing the + hi^(k), depending on a single variable (if N is an int) or of multiple + variables (N is a tuple). + + For instance, if H(x) = h0*h1*h2*...*h_(M-1) we want to compute + H^(n) = (h0*h1*h2*...)^(n) with n>> dh = [np.array([1, 2*1, 2, 0, 0]), np.exp(1)*np.ones((5,))] + >>> dh_ref = np.array([2.71828182845905, 8.15484548537714, 19.0279727992133, 35.3376637699676]) + >>> d = diffprodTree(dh, (3,)) + >>> np.linalg.norm(dh_ref - np.array(d)) < 1e-10 + True + + # With 3 functions : h0 = x, h1 = exp(x), h2=x @ x=1 + >>> dh = [np.array([1, 1, 0, 0, 0]), np.exp(1)*np.ones((5,)), np.array([1, 1, 0, 0, 0])] + >>> d = diffprodTree(dh, (3,)) + >>> np.linalg.norm(dh_ref - np.array(d)) < 1e-10 + True + + """ + # create a queue to be consumed + lda_list = deque(range(0, len(dh))) + # create a dictionnary containing all the derivatives of the tree + deriv = dict(zip(lda_list, dh)) + + # consume the queue by computing derivative of successive pairs + while len(lda_list) > 1: + # get the pair + pair = (lda_list.popleft(), lda_list.popleft()) + lda_list.append(pair) + # store the results in the dict + # FIXME don't forget to change mane here + deriv[pair] = diffprodMV((deriv[pair[0]], deriv[pair[1]]), N) + + # return the final derivative + return deriv[lda_list[0]] + + +# Nothing to do, juste create an alias (just in case) +diffprodTreeMV = diffprodTree + +def div_factorial(dH): + """Convert Multivariate derivation matrix to Taylor series by dividing + by the factorial coeff. + + Parameters + ---------- + dH : ndarray + Contains the derivative of H with respect of each variable. + dH[3,2] means up dx**2 dy (H). + + # TODO chnage into factorial matrix to avoid to recomputed... + Returns + ------- + Th : ndarray + Taylor series coefficients. 
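+
+    In the two-parameter case, this amounts to Th[i, j] = dH[i, j] / (i! * j!).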
+ """ + N = dH.shape + nmax = max(N) + # compute the factorial once. + fact = factorial(np.arange(0, nmax)) + fact_list = [fact[0:ni] for ni in N] + D = _outer(*fact_list) + Th = dH/D + return Th + + +def _outer(*vs): + """Compute the outer product of sequence of vectors. + + https://stackoverflow.com/questions/17138393/numpy-outer-product-of-n-vectors + + Parameters + ---------- + vs : tuple of iterable + Each element is an array that will contribute to the outer product. + """ + return reduce(np.multiply.outer, vs) def pade(an, m, n=None): """ Return Pade approximation to a polynomial as the ratio of two polynomials. - version coming from scipy 1.4, previous version doesn't support complex + + Version coming from scipy 1.4, previous version doesn't support complex with modify test case to check complex behaviour [scipy.interpolate](https://github.com/scipy/scipy/blob/master/scipy/interpolate/_pade.py) @@ -139,7 +537,7 @@ def pade(an, m, n=None): adapt to complex >>> e_exp = [1.0+0j, 1.0+0.j, (1.0+0.j)/2.0, (1.0+0.j)/6.0, (1.0+0.j)/24.0, (1.0+0.j)/120.0] >>> p, q = pade(e_exp, 2) - >>> p(1)/q(1) + >>> p(1)/q(1) # doctest: +ELLIPSIS (2.7179487179487...+0j) Compute Taylor exp(1+1j+x) @ x=0 @@ -147,7 +545,7 @@ def pade(an, m, n=None): 0.7343469699579426 +1.1436776435894211j, 0.24478232331931418 +0.3812258811964737j, \ 0.061195580829828546+0.09530647029911843j , 0.01223911616596571 +0.019061294059823684j]) >>> p, q = pade(e_expc, 2) - >>> p(-1j)/q(-1j) + >>> p(-1j)/q(-1j) # doctest: +ELLIPSIS (2.7186371354862...+6.211398939416...e-05j) """ an = asarray(an) @@ -172,3 +570,551 @@ def pade(an, m, n=None): p = pq[:n+1] q = r_[1.0, pq[n+1:]] return poly1d(p[::-1]), poly1d(q[::-1]) + + +# Define named tupled to manipulate Bell polynomials +Bell = namedtuple('Bell', 'len coef pow') + + +def _partialBellPoly(N, K): + r"""Compute partial Bell polynomials. + + These polynomials appear in the computation of derivatives of composite function. + + The exponential Bell polynomial encodes the information related to the + ways a set of N can be partitioned in K subsets. + + The partial Bell polynomials are computed by a recursion relation + (https://en.wikipedia.org/wiki/Bell_polynomials#Recurrence_relations) : + $$ B_{n,k}=\sum _{i=1}^{n-k+1}{\binom {n-1}{i-1}}x_{i}B_{n-i,k-1} $$ + where \( B_{0,0}=1\), \( B_{n,0}=0\;\mathrm {for} \;n\geq 1\) and + \(B_{0,k}=0\;\mathrm {for} \;k\geq 1.\) + + Parameters + ---------- + N : int + Set size. + K : int + subset size. + + Returns + ------- + B : array + return all the partial Bell polynomial. Each polynomial is represented as a named-tuple, + with len, pow and coef field. `coef[i]` constains the coefficient of the i-thmonomial + and `pow[i]` contains the expononent of each symbolic variable `x_i` + (see below in the example). + + Examples + -------- + For instance, to compute + $$ B_{6,3}(x_{1},x_{2},x_{3},x_{4})= 15x_{4}x_{1}^{2}+60x_{3}x_{2}x_{1}+15x_{2}^{3} $$ + >>> B = _partialBellPoly(6, 3) + >>> B[6, 3] # doctest: +NORMALIZE_WHITESPACE + Bell(len=3, + coef=array([15, 60, 15], dtype=int32), + pow=array([[0, 3, 0, 0, 0, 0], + [1, 1, 1, 0, 0, 0], + [2, 0, 0, 1, 0, 0]], dtype=int32)) + + Alternativelly, such number can be computed in sympy, using + `bell(6, 3, symbols('x:6')[1:])`. It means there are : + * 15 ways to partition a set of 6 as 4 + 1 + 1, -> x4 x1**2 + * 60 ways to partition a set of 6 as 3 + 2 + 1, and -> x3 x2 x1 + * 15 ways to partition a set of 6 as 2 + 2 + 2. 
-> x2**3 + + \[B_{6,2} = 6 x_{1} x_{5} + 15 x_{2} x_{4} + 10 x_{3}^{2} \] + >>> B[6, 2] # doctest: +NORMALIZE_WHITESPACE + Bell(len=3, + coef=array([10, 15, 6], dtype=int32), + pow=array([[0, 0, 2, 0, 0, 0], + [0, 1, 0, 1, 0, 0], + [1, 0, 0, 0, 1, 0]], dtype=int32)) + """ + # Init output results + B = np.empty((N+1, N+1), dtype=object) + + # Create 0 bell polynomials + b0 = Bell(0, np.array([0]), np.zeros((1, N), dtype=np.int32)) + # Recursive building + for n in range(0, N+1): + for k in range(0, n+1): + if (k == 0) and (n == 0): + # B[0, 0] = 1 + B[0, 0] = Bell(1, np.array([1]), np.zeros((1, N), dtype=np.int32)) + elif ((k >= 1) and (n == 0)) or ((n >= 1) and (k == 0)): + # case k==0, fill with b0 + B[n, k] = b0 + else: + # Compute B(n,k) + # Inititalize temp field + coef = [] + Pow = [] + for i in range(1, n-k+2): + c = binom(n-1, i-1) + # may contains several terms + for j in range(1, B[n-i, k-1].len + 1): + # multiply with binomial coef + coef_ = B[n-i, k-1].coef[j-1]*c + pow_ = B[n-i, k-1].pow[j-1].copy() + # shift power + pow_[i-1] = pow_[i-1] + 1 + # store them + Pow.append(pow_) + coef.append(coef_) + + # Find polynomials that appear several time, and sum there coeff + # powu = pow(ic), ic previous size, but with new index + powu, ia, ic = np.unique(Pow, axis=0, return_index=True, return_inverse=True) + coefu = np.zeros(ia.size, dtype=np.int32) + for j in range(0, len(coef)): + coefu[ic[j]] = coef[j] + coefu[ic[j]] + + # Store the new polynomial as Bell object + B[n, k] = Bell(len(coefu), coefu, powu) + + return B + + +def faaDiBruno(df, dg, N=None): + r"""Compute the successive derivative of a composite function with + Faa Di Bruno formula. + + Expressed in terms of Bell polynomials Bn,k(x1,...,xn−k+1), this yields + $${d^{n} \over dx^{n}}f(g(x))=\sum _{k=1}^{n}f^{(k)}(g(x))\cdot B_{n,k}\left(g'(x),g''(x),\dots ,g^{(n-k+1)}(x)\right). $$ + see https://en.wikipedia.org/wiki/Fa%C3%A0_di_Bruno's_formula + + Parameters + ---------- + df : array + value of df/dg computed at x0. By convention, df[0] is the function value + (not the derivatives). + dg : array + values of dg/dx computed at x0. By convention, dg[0] is the function value + (not the derivatives). + N : int, optional + The requested number of derivatives if less than `len(df)` + + Returns + ------- + dfog : array + Values of the successive derivative of the composite function. `len(dfog)` is N+1 + or `len(df)`. + + Examples + -------- + Compute the 4 first derivatives of the composite function exp(x)^2 at x=1. + Let be f = g^2 and g(x) = exp(x). + Define `df` the successive derivative of f with respect to g. + Define `dg` the (constant) successive derivative of g with respect to x. + >>> ref = np.array([7.38905609893065, 14.7781121978613, 29.5562243957226, 59.1124487914452, 118.224897582890]) + >>> x = 1. + >>> dg = np.repeat(np.exp(x), 5) + >>> df = np.array([np.exp(x)**2, 2*np.exp(x), 2, 0, 0]) + >>> dfog = faaDiBruno(df, dg) + >>> np.linalg.norm(dfog - ref) < 1e-12 + True + + Compute the first derivatives of the composite function (x^3^2 @ x=2 + >>> x = 2. + >>> dg = np.array([x**3, 3*x**2, 6*x, 6, 0, 0, 0]) + >>> df = np.array([dg[0]**2, 2*dg[0], 2, 0, 0, 0, 0]) + >>> dfog = faaDiBruno(df, dg) + >>> ref = np.array([64.0, 192.0, 480.0, 960.0, 1440.0, 1440.0, 720.0]) + >>> np.linalg.norm(dfog - ref) < 1e-12 + True + """ + # Get the maximal number of available derivatives + if (N is None): + N = len(df) - 1 + # Get the maximal number of df non vanishing terms, + # Remove trailling 0 to speed up loop. 
+ kmax = np.flatnonzero(df)[-1] + 1 + # Compute required Bell polynomials + B = _partialBellPoly(N, kmax) + # Compute the derivative up to N + dfog = [] + for n in range(0, N+1): + if n == 0: + dfog.append(df[0]) + else: + # init Bell polynomial storage variable + dfog_ = 0j + # Sum until df has trailling zeros + for k, dfk in enumerate(df[0:min(n+1, kmax)]): + if k > 0: + s = 0j + for i, c in enumerate(B[n, k].coef): + s += c * np.prod(np.power(dg[1:(n-k+2)], + B[n, k].pow[i][0:(n-k+1)])) + # multipy by the function f derivatives + dfog_ += dfk * s + dfog.append(dfog_) + + return dfog + + +def complex_map(bounds=(-5-5j, 5+5j), N=30): + """Create a map of the complex plane between the bounds. + + Parameters + ---------- + bounds : tuple + The two corners on the map in the complex plane. + N : int + The number of points in each direction. + + Returns + ------- + np.array: + The complex coordinate of the points. + """ + zr, zi = np.meshgrid(np.linspace(bounds[0].real, bounds[1].real, N), + np.linspace(bounds[0].imag, bounds[1].imag, N)) + return zr + 1j*zi + + +class Taylor: + r"""Define a multivariate Taylor series. + + The series is defined as + \(T = \sum_{n_0, n_1, \dots} a_{n_0, n_1, \dots} \nu_0^{n_0}, \nu_1^{n_1} \dots\) + where the constant 0-order term is \(a_{0, 0, \dots}\). + """ + + def __init__(self, an, nu0): + """Initialize the object with df/d nu divided by factorial. + + Parameters + ---------- + an : np.ndarray + The coefficients of the Taylor expansion. + nu0 : iterable + The value where the Taylor series is computed. + """ + self.an = an + self.nu0 = nu0 + + @classmethod + def from_derivatives(cls, dn, nu0): + """Instanciate Taylor object from the sucessive derivatives df/dnu. + + Parameters + ---------- + dn : array + The coefficients of the function derivatives wrt nu. + nu0 : iterable + The value where the derivatives are computed. + + Returns + ------- + Taylor + An new Taylor instance. + """ + an = div_factorial(dn) + return cls(an, nu0) + + def __repr__(self): + """Define the representation of the class.""" + return "Instance of {} @nu0={} with #{} derivatives.".format(self.__class__.__name__, + self.nu0, + self.an.shape) + + def eval_at(self, nu): + """Evaluate the Taylor series at nu with derivatives computed at nu0. + + Parameters + ---------- + nu : iterable + Containts the value of (nu_0, ..., nu_n) where the polynom + must be evaluated. Althought nu is relative to nu0, absolute value + have to be used. + + Returns + ------- + The evaluation of the Taylor series at nu. + + Examples + -------- + Works also for scalar case, + >>> T = Taylor(1/factorial(np.arange(0, 6)), 0.) + >>> np.allclose(T.eval_at(0.1), np.exp(0.1), atol=1e-4) + True + """ + # Extract input value + nu = np.array(nu, dtype=complex) - np.array(self.nu0, dtype=complex) + # Evaluate the polynomial + return polyvalnd(nu, self.an) + + @staticmethod + def _radii_fit_1D(a, plot=False): + r"""Estimate the radii of convergence using least square of 1D sequence. + + Based on Cauchy-Hadamard, + $$\frac{1}{\rho} = |a_n|^{1/n}$$ + yields to + $$-\mathbf{V} \alpha + beta = \ln{\mathbf{a}}$$ + + For instance if some terms are vanishing, like odd or even terms, + the LS fit add a bias (The theory introduce the sup lim!). To limit + this, approach based on peaks location may works (not implemented). + + Parameters + ---------- + a: array 1D + The Taylor series coefficients. + plt: bool, optional + If `True` plots the data and the fit. + + Return + ------ + rho: float + The radius of convergence. 
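+
+        Notes
+        -----
+        log|a_n| is fitted linearly against n and the radius is recovered
+        from the slope as rho = exp(-slope). A second estimate, based on the
+        ratio of the last to the first coefficient, is used to warn about
+        alternate vanishing coefficients.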
+ """ + # tol for detecting alternate vanishing coefs + tol = 0.25 + eps = 1e-16 + # Remove log singularity + a = a + eps + D = a.size + alp = np.arange(0, D)[:, None] + # LS fit + V = np.hstack((np.ones((D, 1)), alp)) + # beta correspond to the normalization by a0 + beta, alpha = np.linalg.pinv(V) @ (np.log(np.abs(a).reshape(-1, 1))) + rho = np.exp(-alpha.flat[0]) + # Estimation based only on last term + rho_ = 1/(np.abs((a[-1]/a[0]))**(1/(a.size-1))) + if abs(a[0]) < eps: + print('Warning: The first term is near zero.') + if (rho-rho_)/rho > tol: + print('Warning: suspect alternate vanishing coefficients rhof={}, rho1={}.'.format(rho, rho_)) + if plot: + from matplotlib import pyplot as plt + plt.figure() + plt.plot(alp, np.log(np.abs(a)), '*', label='True') + # plt.plot(alp, np.log(np.abs(a_) + eps), '.', label='MV') + plt.plot(alp, beta + alp * alpha, label='fit') + plt.ylabel(r'$\log{a_i}$') + plt.xlabel(r'Power') + plt.legend() + return rho + + def radii_estimate(self, plot=False): + r"""Estimate the radii of convergence of the Taylor expansion. + + The approach is based on least square fit for all paremeters and + all the series coefficients and assume single variable dependance. + + Returns + ------- + R : array + The radius of convergence for along all directions. + plt: bool, optional + If `True` plots the data and the fit. + + Examples + -------- + If $$\frac {1}{1-x} = \sum _{n=0}^{\infty }x^{n}$$, with radius of convergence 1. + >>> T = Taylor(np.ones((12,)), 0) + >>> np.allclose(T.radii_estimate(), 1, atol=1e-4) + True + + When test on $$\tan(x)$$ around 0, it fails because of alternate vanishing terms, + illustrating the importance of sup lim in the theory, + >>> T = Taylor(np.array([0, 1.00000000000000, 0, 0.333333333333333, 0, 0.133333333333333, 0, 0.0539682539682540, 0, 0.0218694885361552,0, 0.00886323552990220]), 0) + >>> T.radii_estimate() # doctest: +ELLIPSIS + Warning: ... + + When test on $$\tan(1+x)$$ around 0, it works better + >>> T = Taylor(np.array([3.42551882081476, 5.33492947248766, 9.45049997787964, 16.4965914915633, 28.9182083191928, 50.6548588382890]), 0) + >>> np.allclose(T.radii_estimate(), np.pi/2 -1, atol=1e-1) + True + """ + an = self.an + R = np.zeros(an.ndim) + # Loop over the parameters + for p in range(0, an.ndim): + index = [0] * an.ndim + index[p] = slice(0, None) + R[p] = Taylor._radii_fit_1D(an[tuple(index)]) + return R + + + # def myroots(coef): + # """Provide a global interface for roots of polynom with variable coefficients. + + # The polynomial is defined such, coef[0] x**n + ... + coef[-1] and + + + # Analytic formula are used for qudratric and cubic polynomial and numerical + # methods are used for higher order. + + + # Parameters + # ---------- + # coef : list of complex array like + # Contains the value of each coefficients. + + # Returns + # ------- + # r : list + # list of roots for each vector coef + + # """ + + # # number of coef of the polynome + # n = len(coef) + # if n == 3: + # # quadratique + # r = quadratic_roots(*coef) + # elif n==4: + # # cubic + # r = cubic_roots(*coef) + # elif n==5: + # # quatic, only exceptionhandle is the Biquadratic equation + # r = quartic_roots(*coef) + + # else: + # # quelquonque + # N = len(coef[0]) + # r=np.zeros((n-1, N), dtype=complex) + + # for i, c in enumerate(zip(*coef)): + # print(c) + # ri = np.roots(c) + # r[:,i] = ri + + # return r + + + # def quadratic_roots(a, b, c): + # """Analytic solution of a quadratic polynom P(x)=ax^2+bx+c. 
+ + # Parameters + # ---------- + # a, b, c : complex array like + # coeff of the polynom such + + # Returns + # -------- + # r : list + # list of roots + # """ + # tol = 1e-8 + # # check on 1st coef + # if np.linalg.norm(a)< tol: + # raise ValueError('The first coefficient cannot be 0.') + + # Delta = np.sqrt(b**2 - 4*a*c, dtype=np.complex) + # r1 = (- b - Delta)/(2*a) + # r2 = (- b + Delta)/(2*a) + # r = [r1, r2] + + # return r + + # def cubic_roots(a,b,c,d): + # """Analytic solution of a cubic polynom P(x)=ax^3+bx^2+cx+d. + + # Parameters + # ---------- + # a, b, c, d : complex array like + # coeff of the polynom such + + # Returns + # -------- + # r : list + # list of roots + + # Examples + # -------- + # Solve `x**3 - 4.0*x**2 + 6.0*x - 4.0` with complex roots, + # >>> coef = [1, -4, 6, -4] + # >>> r = np.array(cubic_roots(*coef)); r.sort() + # >>> r_ = np.array([2, 1+1j, 1-1j]); r_.sort() + # >>> np.linalg.norm(r - r_) < 1e-12 + # True + # """ + # tol = 1e-8 + # # check on 1st coef + # if np.linalg.norm(a)< tol: + # raise ValueError('The first coefficient cannot be 0.') + + # D0 = b**2 - 3*a*c + # D1 = 2*b**3 - 9*a*b*c + 27*(a**2)*d + # C = ((D1 + np.sqrt(D1**2 - 4*D0**3, dtype=np.complex))/2.)**(1./3.) + # xi = -0.5 + 0.5*np.sqrt(3)*1j + # x = [] + # for k in range(1, 4): + # xk = -1/(3*a)*(b + xi**k*C + D0/(xi**k*C)) + # x.append(xk) + # return x + + # def quartic_roots(a,b,c,d,e): + # """Analytic solution of a quartic polynom P(x)=ax^4+bx^3+cx^2+dx+e. + + # Parameters + # ---------- + # a, b, c, d, e : complex array like + # coeff of the polynom. Better to unpack list to set them. + + # Returns + # -------- + # r : array + # list of roots + + # Remarks + # -------- + # Only the biquadratic exception is implemented. Through extensive tests, no + # other problems have been found, but... 
+ # https://en.wikipedia.org/wiki/Quartic_function + + # Examples + # -------- + # Solve `x**4 - 7.0*x**3 + 2.0j*x**3 + 18.0*x**2 - 8.0j*x**2 - 22.0*x + 12.0j*x + 12.0 - 8.0j` with complex roots, + # >>> coef = [1, -7.0 + 2.0j, 18.0 - 8.0j, -22.0 + 12.0j, 12.0 - 8.0j] + # >>> r = np.array(quartic_roots(*coef)); r.sort() + # >>> r_ = np.array([1+1j , 2, 1-1j, 3-2j]); r_.sort() + # >>> np.linalg.norm(r - r_) < 1e-12 + # True + + # Biquadratic equation (degenerate case) + # >>> coef = [1, 0, -9 + 2j, 0, -18j] + # >>> r = np.array(quartic_roots(*coef)); r.sort() + # >>> r_ = np.array([3, -3, 1-1j, -1+1j]); r_.sort() + # >>> np.linalg.norm(r - r_) < 1e-12 + # True + # """ + # tol = 1e-8 + # # check on 1st coef + # if np.linalg.norm(a)< tol: + # raise ValueError('The first coefficient cannot be 0.') + + # Delta0 = c**2 - 3*b*d + 12*a*e + # Delta1 = 2*c**3 - 9*b*c*d + 27*b**2*e + 27*a*d**2 - 72*a*c*e + # p = (8*a*c - 3*b**2)/(8*a**2) + # q = (b**3 - 4*a*b*c + 8*a**2*d) / (8*a**3) + # if np.linalg.norm(q)< tol: + # # degenerate case of Biquadratic equation + # r = quadratic_roots(*[a, c, e]) + # r[0] = np.sqrt(r[0]) + # r[1] = np.sqrt(r[1]) + # r.extend([-r[0], -r[1]]) + # else: + # Q = np.power( 0.5*(Delta1 + np.sqrt(Delta1**2 - 4*Delta0**3, dtype=np.complex)), 1/3., dtype=np.complex) + # S = 0.5*np.sqrt(-2*p/3 + 1/(3*a)*(Q + Delta0/Q), dtype=np.complex ) + + + # B = -b/(4*a) + # Dp = 0.5*np.sqrt(-4*S**2 - 2*p + q/S, dtype=np.complex) + # Dm = 0.5*np.sqrt(-4*S**2 - 2*p - q/S, dtype=np.complex) + # r = [B - S + Dp, + # B - S - Dp, + # B + S + Dm, + # B + S - Dm] + # return r +# %% Main for basic tests +if __name__ == '__main__': + # run doctest Examples + import doctest + doctest.testmod() diff --git a/figures/EP2_light.png b/figures/EP2_light.png new file mode 100644 index 0000000..3dbe3b8 Binary files /dev/null and b/figures/EP2_light.png differ diff --git a/figures/EP3_light.png b/figures/EP3_light.png new file mode 100644 index 0000000..afcfb6e Binary files /dev/null and b/figures/EP3_light.png differ diff --git a/setup.py b/setup.py index 1321513..ed59957 100644 --- a/setup.py +++ b/setup.py @@ -40,7 +40,7 @@ def _getversion(): name="eastereig", version=this_version, author="B. Nennig, M. Ghienne", - author_email="benoit.nennig@supmeca.fr, martin.ghienne@supmeca.fr", + author_email="benoit.nennig@isae-supmeca.fr, martin.ghienne@isae-supmeca.fr", description="A library to locate exceptional points and to reconstruct eigenvalues loci", long_description=long_description, long_description_content_type="text/markdown", @@ -52,7 +52,9 @@ def _getversion(): ext_modules=ext_modules, install_requires=['numpy', 'scipy', - 'matplotlib'], + 'sympy>=1.4', + 'matplotlib', + 'tqdm'], classifiers=[ "Programming Language :: Python :: 3", "License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
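A minimal sketch of how the new derivative utilities of `eastereig/utils.py` can be exercised once the package is installed (the numerical values below are illustrative assumptions, not part of the patch): the derivatives of the composite function exp(x)**2 at x=1 are obtained either with Faa di Bruno's formula or with the Leibniz product tree, and both must agree.

    import numpy as np
    from eastereig.utils import faaDiBruno, diffprodTree

    # Successive derivatives of g(x) = exp(x) at x = 1: they are all equal to e.
    dg = np.repeat(np.exp(1.0), 5)
    # Derivatives of f(g) = g**2 with respect to g, evaluated at g = e.
    df = np.array([np.exp(1.0)**2, 2.0*np.exp(1.0), 2.0, 0.0, 0.0])
    # Faa di Bruno: derivatives of the composite exp(x)**2 at x = 1.
    print(faaDiBruno(df, dg))
    # Leibniz tree on the product exp(x)*exp(x) yields the same values.
    print(diffprodTree([dg, dg], (4,)))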