18 changes: 18 additions & 0 deletions docs/pages/tutorials/parameters.rst
@@ -321,6 +321,24 @@ computation time over a timescale ``end_time-start_time``, so make sure
to set these to reflect those you actually intend to use in
calculations.

Long memory lengths
~~~~~~~~~~~~~~~~~~~

When defining the bath parameters with a spectral density (for example with a ``CustomSD`` or ``PowerLawSD``), large values of ``tcut`` can lead to numerical instabilities in the computation of the influence matrix. The coefficients of the influence matrix are computed through discrete time differences of the ``eta_function``, given by:

.. math::
\eta(\tau) = \int_0^{\infty} \frac{J(\omega)}{\omega^2} \
\left[ ( \cos(\omega \tau) - 1 ) \
\coth\left( \frac{\omega}{2 T}\right) \
- i ( \sin(\omega \tau) - \omega \tau ) \right] \
\mathrm{d}\omega.

with memory time ``tau`` :math:`\tau`; ``tcut`` is the largest value of ``tau`` used in this expression. Long memory lengths therefore produce rapidly oscillating terms in this integral through the :math:`\cos(\omega\tau)` and :math:`\sin(\omega\tau)` factors, and the default integrator can struggle to give converged results in these cases. It is, however, possible to use a weighted integrator instead, which is better suited to these oscillatory integrands, by setting ``alt_integrator`` in ``CustomSD`` (or any of its subclasses).

In more detail, this keyword splits the integral in two: the default integrator handles the divergent part near 0, and the weighted sine or cosine versions of ``scipy.integrate.quad`` handle the remainder. By default, the split point is chosen automatically by setting a maximum number of oscillations for the first integral. This number can be changed through the ``num_oscillations`` parameter, which takes an integer, a float, or a function of ``tau`` returning the maximum number of oscillations for each ``tau``. The defaults usually give accurate results, but as a rough guideline: decrease the number of oscillations if the integrator struggles with the first integral (too oscillatory for the default integrator), and increase it if it struggles with the second (too divergent for the cos/sin weighted versions).
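The split point described above can be sketched numerically. This is an illustrative re-derivation, not the library implementation: for a memory time ``tau``, the :math:`\cos(\omega\tau)` and :math:`\sin(\omega\tau)` terms oscillate with period :math:`2\pi/\tau` in :math:`\omega`, so keeping ``num_oscillations`` full periods for the default integrator corresponds to splitting at :math:`\tilde\omega = 2\pi\,n_{\mathrm{osc}}/\tau`, capped at the spectral-density cutoff (the function name and the example cutoff value are hypothetical).

```python
import numpy as np

# Illustrative sketch (not the library code): split the eta integral at
# omega_tilde = 2*pi*num_oscillations/tau, capped at the cutoff, so the
# default integrator only ever sees `num_oscillations` full periods.
def split_point(tau, num_oscillations=3, cutoff=10.0):
    return max(min(2 * np.pi * num_oscillations / tau, cutoff), 0.0)

# Short memory times keep the whole range for the default integrator;
# long memory times hand most of it to the weighted integrator.
print(split_point(1.0))    # capped at the cutoff, 10.0
print(split_point(100.0))  # approx 0.19
```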



----------------------
Choosing dt and epsrel
----------------------
239 changes: 229 additions & 10 deletions oqupy/bath_correlations.py
@@ -13,15 +13,16 @@
Module for environment correlations.
"""

from typing import Callable, Optional, Text
from typing import Callable, Optional, Union, Text, Dict
from typing import Any as ArrayLike
from functools import lru_cache
import warnings

import numpy as np
from scipy import integrate

from oqupy.base_api import BaseAPIClass
from oqupy.config import INTEGRATE_EPSREL, SUBDIV_LIMIT
from oqupy.config import INTEGRATE_EPSREL, SUBDIV_LIMIT, INTEGRATION_PARAMS
from oqupy.util import check_true

#np.seterr(all='warn')
@@ -309,6 +310,30 @@ def _gaussian_cutoff(omega: ArrayLike, omega_c: float) -> ArrayLike:

# --- the spectral density classes --------------------------------------------

def _weighted_integral(
re_integrand: Callable[[float], float],
im_integrand: Callable[[float], float],
w: float,
a: Optional[float] = 0.0,
b: Optional[float] = 1.0,
epsrel: Optional[float] = INTEGRATE_EPSREL,
limit: Optional[int] = SUBDIV_LIMIT) -> complex:
re_int = integrate.quad(re_integrand,
a=a,
b=b,
epsrel=epsrel,
limit=limit,
weight='cos',
wvar=w)[0]
im_int = integrate.quad(im_integrand,
a=a,
b=b,
epsrel=epsrel,
limit=limit,
weight='sin',
wvar=w)[0]
return re_int + 1j * im_int
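A minimal standalone demonstration of the weighted-quadrature trick used by `_weighted_integral`: for a smooth envelope times a fast oscillation, passing the oscillatory factor through ``weight='cos'``/``wvar`` lets QUADPACK treat the oscillation analytically on each subinterval instead of the adaptive rule chasing every wiggle. The envelope and the value of ``tau`` here are illustrative, not taken from the library.

```python
import numpy as np
from scipy import integrate

tau = 200.0                   # illustrative, fast oscillation
f = lambda w: np.exp(-w)      # smooth envelope

# integrand passed to quad is only the envelope; the cos factor is
# supplied via weight='cos' and wvar=tau
weighted, _ = integrate.quad(f, 0.0, 1.0, weight='cos', wvar=tau)

# closed form of int_0^1 exp(-w) * cos(tau*w) dw for comparison
exact = (np.exp(-1.0) * (tau * np.sin(tau) - np.cos(tau)) + 1.0) / (1.0 + tau**2)
print(abs(weighted - exact))  # small despite ~30 oscillations on [0, 1]
```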

def _complex_integral(
integrand: Callable[[float], complex],
a: Optional[float] = 0.0,
Expand Down Expand Up @@ -366,6 +391,35 @@ class CustomSD(BaseCorrelations):
An optional name for the correlations.
description: str
An optional description of the correlations.
integration_params: dict
Optional dictionary to pass integration parameters. Possible
parameters include:

- **epsrel** (*float*) -- Relative error tolerance.
- **subdiv_limit** (*int*) -- Maximal number of interval subdivisions
for numerical integration.
- **alt_integrator** (*bool*) -- Whether to use an alternative
integration scheme leveraging sine/cosine weighted versions of
`scipy.integrate.quad` to handle rapid oscillations. This may improve
numerical accuracy and stability, especially for long memory times.

The integral computed in :func:`eta_function` is split in two: the
default integrator handles the divergent part near 0, while an
appropriate oscillation-aware scheme handles the terms oscillating
rapidly (at rate :math:`\tau` in :math:`\omega`). The split point
can be modified with ``num_oscillations``. This option is not
compatible with imaginary-time computations.
- **num_oscillations** (*int, float, callable*) -- When
``alt_integrator`` is ``True``, specifies how many oscillations to
keep for the non-weighted part of the integral of
:func:`eta_function`. This should be either a positive integer or
float, or a function of :math:`\tau` returning such a value. As a
guideline: the higher this number, the more the default integrator
will struggle with the non-weighted part; the lower it is, the more
the weighted integrators will struggle with the near-divergent part
close to 0. A constant value is a good choice for most use cases;
otherwise, a function can be used to target issues at specific
memory times :math:`\tau`.
"""

def __init__(
@@ -375,7 +429,8 @@ def __init__(
cutoff_type: Optional[Text] = 'exponential',
temperature: Optional[float] = 0.0,
name: Optional[Text] = None,
description: Optional[Text] = None) -> None:
description: Optional[Text] = None,
integration_params: Optional[Dict] = None) -> None:
"""Create a CustomFunctionSD (spectral density) object. """

# check input: j_function
@@ -408,6 +463,37 @@ def __init__(
raise ValueError("Temperature must be >= 0.0 (but is {})".format(
tmp_temperature))
self.temperature = tmp_temperature
if integration_params is None:
integration_params = INTEGRATION_PARAMS
elif isinstance(integration_params, dict):
integration_params = INTEGRATION_PARAMS | integration_params
else:
raise AssertionError("integration_params should be "\
"a dictionary")

# input check for alt_integrator
alt_integrator = integration_params["alt_integrator"]
num_oscillations = integration_params["num_oscillations"]
assert isinstance(alt_integrator, bool)
if alt_integrator is False and num_oscillations is not None:
warnings.warn("num_oscillations will be discarded since "\
"alt_integrator is False", UserWarning)
self._alt_integrator = alt_integrator
# create the first_cutoff function and input check for num_oscillations
Collaborator:
Do we want to assert that, if num_oscillations is provided, alt_integrator should be true? Or at least provide a warning that it will be discarded when alt_integrator is False?

Author:
I added a warning L478.

if isinstance(num_oscillations, (int, float)) and num_oscillations > 0:
# This form is a step version of the first cutoff which prevents
# useless computations of _truncated_eta
self.first_cutoff = lambda t: (2 * np.pi * num_oscillations
/ (1 + 1 / num_oscillations)
** np.floor(np.log(t) / np.log(1 + 1 / num_oscillations)))
else:
try:
float(num_oscillations(1.0))
except Exception as e:
raise AssertionError("num_oscillations must be a positive "
"float, int or a function returning floats or ints.") from e
self.first_cutoff = lambda t: max(min(2*np.pi*num_oscillations(t)/t,
self.cutoff), 0.0)
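The step-shaped cutoff above can be checked in isolation. This is a flattened re-statement of the same formula (with ``num_oscillations = 3``): the value is constant on geometric bands of ``t``, so nearby memory times map to the same split point and the ``lru_cache`` on `_truncated_eta` gets cache hits instead of recomputing the tau-independent tail integrals.

```python
import numpy as np

# Same formula as the lambda above, with num_oscillations = 3: bands
# are [base**k, base**(k+1)) with base = 1 + 1/n, and the cutoff takes
# one constant value per band.
n = 3
base = 1 + 1 / n
step_cutoff = lambda t: 2 * np.pi * n / base ** np.floor(np.log(t) / np.log(base))

print(step_cutoff(1.0) == step_cutoff(1.3))   # same band -> same value
print(step_cutoff(1.4) < step_cutoff(1.0))    # next band -> smaller value
```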

self._cutoff_function = \
lambda omega: CUTOFF_DICT[self.cutoff_type](omega, self.cutoff)
@@ -515,23 +601,125 @@ def integrand(w):
integral = integral.real
return integral

Author:
I added this function for computation of eta_function with the new method so that the main function is less cluttered.

def _eta_function_alt(self,
tau: float,
integrand: Callable,
epsrel: float,
subdiv_limit: int) -> complex:
"""Computes `eta_function` when `alt_integrator` is set to True."""
"""Computes `eta_function` when `alt_integrator` is set to True."""
im_integrand = lambda w: - self._spectral_density(w) / w ** 2
if self.temperature == 0.0:
re_integrand = lambda w: self._spectral_density(w) / w ** 2
else:
def re_integrand(w):
# this is to stop overflow
if np.exp(-w / self.temperature) > np.finfo(float).eps:
inte = self._spectral_density(w) / w ** 2 \
* 1/np.tanh(w / (2*self.temperature))
else:
inte = self._spectral_density(w) / w ** 2
return inte

omega_tilde = self.first_cutoff(tau)

# compute the tau-independent parts of the full eta integral
re_constant, im_constant = self._truncated_eta(a=omega_tilde,
epsrel=epsrel,
limit=subdiv_limit)
# first cut of integral on [0, omega_tilde] (non-weighted integral)
integral = _complex_integral(integrand,
a=0.0,
b=omega_tilde,
epsrel=epsrel,
limit=subdiv_limit)

# second cut of integral on [omega_tilde, omega_c] (weighted integral)
integral += _weighted_integral(re_integrand, im_integrand,
w=tau,
a=omega_tilde,
b=self.cutoff,
epsrel=epsrel,
limit=subdiv_limit)

if self.cutoff_type != "hard":
# last cut of integral on [omega_c, infinity] (weighted integral)
integral += _weighted_integral(re_integrand, im_integrand,
w=tau,
a=self.cutoff,
b=np.inf,
epsrel=epsrel,
limit=subdiv_limit)
return - (integral + 1.j*tau*im_constant - re_constant)
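The splitting performed by `_eta_function_alt` can be sketched on a simplified model integral with a known closed form (the envelope ``exp(-w)`` and the values of ``tau`` and the split point are hypothetical, chosen only so the pieces can be checked against the exact answer):

```python
import numpy as np
from scipy import integrate

# Model integral: int_0^inf exp(-w) * cos(tau*w) dw = 1 / (1 + tau**2).
# Default integrator on the head [0, omega_tilde], cos-weighted
# integrator on the oscillatory tail [omega_tilde, inf).
tau = 50.0
omega_tilde = 2 * np.pi * 3 / tau   # keep ~3 oscillations up front

head, _ = integrate.quad(lambda w: np.exp(-w) * np.cos(tau * w),
                         0.0, omega_tilde, limit=200)
tail, _ = integrate.quad(lambda w: np.exp(-w), omega_tilde, np.inf,
                         weight='cos', wvar=tau)

exact = 1.0 / (1.0 + tau**2)
print(abs(head + tail - exact))     # the two pieces recover the full integral
```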

@lru_cache(maxsize=2 ** 10, typed=False)
def _truncated_eta(self,
a: float,
epsrel: float,
limit: int) -> complex:
"""Computes the parts of `eta_function` that are independent of tau
when `alt_integrator` is set to True.
"""
if self.temperature == 0.0:
def re_integrand(w):
return self._spectral_density(w) / w ** 2
else:
def re_integrand(w):
# this is to stop overflow
if np.exp(-w / self.temperature) > np.finfo(float).eps:
inte = self._spectral_density(w) / w ** 2 \
* 1/np.tanh(w / (2*self.temperature))
else:
inte = self._spectral_density(w) / w ** 2
return inte

def im_integrand(w):
return self._spectral_density(w)/w

im_constant = integrate.quad(im_integrand,
a=a,
b=self.cutoff,
epsrel=epsrel,
limit=limit)[0]

re_constant = integrate.quad(re_integrand,
a=a,
b=self.cutoff,
epsrel=epsrel,
limit=limit)[0]

if self.cutoff_type != "hard":
im_constant += integrate.quad(im_integrand,
a=self.cutoff,
b=np.inf,
epsrel=epsrel,
limit=limit)[0]

re_constant += integrate.quad(re_integrand,
a=self.cutoff,
b=np.inf,
epsrel=epsrel,
limit=limit)[0]
return re_constant, im_constant

@lru_cache(maxsize=2 ** 10, typed=False)
def eta_function(
self,
tau: ArrayLike,
epsrel: Optional[float] = INTEGRATE_EPSREL,
subdiv_limit: Optional[int] = SUBDIV_LIMIT,
alt_integrator: Optional[bool] = None,
matsubara: Optional[bool] = False) -> ArrayLike:
r"""
Auto-correlation function associated to the spectral density at the
given temperature :math:`T`
:math:`\eta` function associated to the spectral density at the
given temperature :math:`T` as given in [Strathearn2017]

.. math::

C(\tau) = \int_0^{\infty} J(\omega) \
\left[ \cos(\omega \tau) \
\eta(\tau) = \int_0^{\infty} \frac{J(\omega)}{\omega^2} \
\left[ ( \cos(\omega \tau) - 1 ) \
\coth\left( \frac{\omega}{2 T}\right) \
- i \sin(\omega \tau) \right] \mathrm{d}\omega .
- i ( \sin(\omega \tau) - \omega \tau ) \right] \
\mathrm{d}\omega .

with time difference `tau` :math:`\tau`.

@@ -543,12 +731,24 @@ def eta_function(
Relative error tolerance.
subdiv_limit: int
Maximal number of interval subdivisions for numerical integration.

alt_integrator: Optional[bool]
Whether to use an alternative integration scheme leveraging
sine/cosine weighted versions of ``scipy.integrate.quad``
to handle rapid oscillations. See ``alt_integrator`` in
:class:`CustomSD`. If ``None``, the value set during
initialization of the :class:`CustomSD` object is used instead.

Returns
-------
correlation : ndarray
The auto-correlation function :math:`C(\tau)` at time :math:`\tau`.
The function :math:`\eta(\tau)` at time :math:`\tau`.
"""
if alt_integrator is None:
alt_integrator = self._alt_integrator
check_true(not (matsubara is True and alt_integrator is True),
"Can't use weighted integrator with matsubara")
check_true(isinstance(alt_integrator, bool),
"alt_integrator has to be either True, False or None")
# real and imaginary part of the integrand
if matsubara:
tau = -1j * tau
@@ -574,6 +774,14 @@ def integrand(w):
* (np.exp(-1j * w * tau) - 1 + 1j * w * tau)
return inte

# Alternative computation with weighted integrators
if alt_integrator and tau*self.cutoff > 1.:
return self._eta_function_alt(tau=tau,
integrand=integrand,
epsrel=epsrel,
subdiv_limit=subdiv_limit)

# Default computation of eta_function when alt_integrator is False
integral = _complex_integral(integrand,
a=0.0,
b=self.cutoff,
Expand All @@ -598,6 +806,7 @@ def correlation_2d_integral(
shape: Optional[Text] = 'square',
epsrel: Optional[float] = INTEGRATE_EPSREL,
subdiv_limit: Optional[int] = SUBDIV_LIMIT,
alt_integrator: Optional[Union[bool, None]] = None,
matsubara: Optional[bool] = False) -> complex:
r"""
2D integrals of the correlation function
@@ -632,6 +841,10 @@ Relative error tolerance.
Relative error tolerance.
subdiv_limit: int
Maximal number of interval subdivisions for numerical integration.
alt_integrator: Optional[bool]
Whether to use the sine/cosine weighted version of the
integrals, which can be better suited when numerical
difficulties arise, especially for long memory times. See
``alt_integrator`` in :class:`CustomSD`.

Returns
-------
@@ -642,6 +855,7 @@
kwargs = {
'epsrel': epsrel,
'subdiv_limit': subdiv_limit,
'alt_integrator': alt_integrator,
'matsubara': matsubara}

if shape == 'upper-triangle':
Expand Down Expand Up @@ -701,6 +915,9 @@ class PowerLawSD(CustomSD):
``'gaussian'``}
temperature: float
The environment's temperature.
integration_params: dict
Optional dictionary to pass integration parameters. See
``integration_params`` in :class:`CustomSD`.
name: str
An optional name for the correlations.
description: str
@@ -714,6 +931,7 @@ def __init__(
cutoff: float,
cutoff_type: Text = 'exponential',
temperature: Optional[float] = 0.0,
integration_params: Optional[Dict] = None,
name: Optional[Text] = None,
description: Optional[Text] = None) -> None:
"""Create a StandardSD (spectral density) object. """
@@ -747,6 +965,7 @@ def __init__(
cutoff=cutoff,
cutoff_type=cutoff_type,
temperature=temperature,
integration_params=integration_params,
name=name,
description=description)

8 changes: 8 additions & 0 deletions oqupy/config.py
@@ -73,3 +73,11 @@

# default tolerance for degeneracy checking (how many decimals to round to)
DEFAULT_TOLERANCE_DEGENERACY = 12


# -- BATH_CORRELATIONS -------------------------------------------------------

INTEGRATION_PARAMS = {"epsrel": INTEGRATE_EPSREL,
"subdiv_limit": SUBDIV_LIMIT,
"alt_integrator": False,
"num_oscillations": 3}
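These defaults combine with a user-supplied dict in `CustomSD.__init__` via dict union (PEP 584, Python 3.9+): the right-hand operand wins on key clashes, and unspecified keys keep their defaults. The ``epsrel``/``subdiv_limit`` values below are illustrative placeholders, since the actual values of ``INTEGRATE_EPSREL`` and ``SUBDIV_LIMIT`` are not shown in this diff.

```python
# Sketch of the merge performed in CustomSD.__init__ with placeholder
# default values; only the keys mirror the real INTEGRATION_PARAMS.
INTEGRATION_PARAMS = {"epsrel": 1.49e-8,
                      "subdiv_limit": 500,
                      "alt_integrator": False,
                      "num_oscillations": 3}

user_params = {"alt_integrator": True, "num_oscillations": 5}
merged = INTEGRATION_PARAMS | user_params

print(merged["alt_integrator"])    # True  (overridden by user_params)
print(merged["subdiv_limit"])      # 500   (default kept)
```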