Statistical Downscaling and Bias Adjustment

The xclim.sdba submodule provides bias-adjustment methods and will eventually provide statistical downscaling algorithms. Adjustment algorithms all conform to the train-adjust scheme, formalized within Adjustment classes. Given a reference time series (ref), historical simulations (hist) and simulations to be adjusted (sim), any bias-adjustment method is applied by first estimating the adjustment factors between the historical simulation and the observation series, and then applying these factors to sim, which could be a future simulation:

Adj = Adjustment(group="time.month")
Adj.train(ref, hist)
scen = Adj.adjust(sim, interp="linear")
Adj.ds.af  # adjustment factors.

The group argument allows adjustment factors to be estimated independently for different periods: the full time series, months, seasons or day of the year. The interp argument then allows for interpolation between these adjustment factors to avoid discontinuities in the bias-adjusted series (only applicable for monthly grouping).

Warning

If grouping according to the day of the year is needed, the xclim.core.calendar submodule contains useful tools to manage the different calendars that the input data can have. By default, if two different calendars are passed, the adjustment factors will always be interpolated to the largest range of days of the year, but this can lead to strange values, so we recommend converting the data to a common calendar beforehand.
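
For instance, a minimal sketch of such a conversion (assuming ref, hist and sim are the DataArrays from the example above):

from xclim.core.calendar import convert_calendar

# Put all inputs on a common "noleap" calendar before training, so the
# day-of-year adjustment factors line up exactly (Feb 29 values are dropped).
ref = convert_calendar(ref, "noleap")
hist = convert_calendar(hist, "noleap")
sim = convert_calendar(sim, "noleap")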

The same interpolation principle is also used for quantiles. For methods extracting adjustment factors by quantile, interpolation is also done between quantiles. This can help reduce discontinuities in the adjusted time series and possibly reduce the number of quantile bins needed.

Modular approach

This module adopts a modular approach instead of implementing published and named methods directly. A generic bias adjustment process is laid out as follows:

  • preprocessing on ref, hist and sim (using methods in xclim.sdba.processing or xclim.sdba.detrending)

  • creating the adjustment object Adj = Adjustment(**kwargs) (from xclim.sdba.adjustment)

  • training Adj.train(ref, hist)

  • adjustment scen = Adj.adjust(sim, **kwargs)

  • post-processing on scen (for example: re-trending)
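
A hedged sketch of these five steps, using multiplicative quantile mapping on daily precipitation (the inputs ref, hist and sim are hypothetical DataArrays and the threshold is arbitrary):

from xclim import sdba

# 1. Preprocess: replace near-zero "drizzle" values by uniform noise so the
#    multiplicative mapping is not dominated by ties at zero.
ref = sdba.processing.jitter_under_thresh(ref, 0.01)
hist = sdba.processing.jitter_under_thresh(hist, 0.01)
sim = sdba.processing.jitter_under_thresh(sim, 0.01)

# 2.-3. Create and train the adjustment object.
Adj = sdba.EmpiricalQuantileMapping(nquantiles=50, kind="*", group="time.month")
Adj.train(ref, hist)

# 4. Adjust the simulated series.
scen = Adj.adjust(sim, interp="linear")

# 5. Post-processing (e.g. re-trending) would go here, if applicable.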

The train-adjust approach makes it possible to inspect the trained adjustment object. The training information is stored in the underlying Adj.ds dataset, which always includes an af variable holding the adjustment factors. Its layout and the other available variables vary between the different algorithms; refer to Available methods.

Parameters needed by both the training and the adjustment are saved to the Adj.ds dataset as an adj_params attribute. Parameters needed only by the adjustment are passed in the adjust call and written to the history attribute of the output scenario DataArray.

Grouping

For basic time period grouping (months, day of year, season), passing a string to the methods needing it is sufficient. Most methods acting on grouped data also accept a window integer argument to pad the groups with data from adjacent ones. The units of window are the sampling frequency of the main grouping dimension (usually time). For more complex grouping, one can pass an xclim.sdba.base.Grouper directly, as in the sketch below.
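
For instance, a day-of-year grouping with a 31-day window could be built as follows (a sketch; Adjustment stands for any of the classes below that accept a group argument):

from xclim.sdba.base import Grouper

# Factors for each day of the year, each estimated from a 31-day pool of data.
group = Grouper("time.dayofyear", window=31)
Adj = Adjustment(group=group)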

Available methods

Adjustment objects.

class xclim.sdba.adjustment.BaseAdjustment[source]

Bases: xclim.sdba.base.ParametrizableWithDataset

Base class for adjustment objects.

Children classes should implement these methods:

__init__(**kwargs)

Parameters should be set either by passing kwargs to the base class with super().__init__(**kwargs), or through bracket access (self['abc'] = abc). All parameters should be simple Python literals or instances of other Parametrizable subclasses. See the documentation of Parametrizable.

_train(ref, hist)

Receives the training target (ref) and the training data (hist); returns the training dataset.

_adjust(sim, **kwargs)

Receives the projected data (sim) and algorithm-specific arguments; returns the scen DataArray.
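
A minimal, hypothetical subclass illustrating this contract (grouping is ignored for brevity; MeanShift is not part of xclim):

from xclim.sdba.adjustment import BaseAdjustment


class MeanShift(BaseAdjustment):
    """Additive correction of the mean bias (illustration only)."""

    def __init__(self, *, group="time"):
        super().__init__(group=group)  # parameters set through the base class

    def _train(self, ref, hist):
        # The training dataset holds the usual `af` variable.
        af = (ref.mean("time") - hist.mean("time")).rename("af")
        return af.to_dataset()

    def _adjust(self, sim):
        # Apply the stored adjustment factors.
        return sim + self.ds.af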

adjust(sim: xarray.core.dataarray.DataArray, **kwargs)[source]

Return bias-adjusted data. Refer to the class documentation for the algorithm details.

Parameters
  • sim (DataArray) – Time series to be bias-adjusted, usually a model output.

  • kwargs – Algorithm-specific keyword arguments, see class doc.

set_dataset(ds: xarray.core.dataset.Dataset)[source]

Store an xarray Dataset in the ds attribute.

Useful with custom object initialization or if some external processing was performed.

train(ref: xarray.core.dataarray.DataArray, hist: xarray.core.dataarray.DataArray)[source]

Train the adjustment object. Refer to the class documentation for the algorithm details.

Parameters
  • ref (DataArray) – Training target, usually a reference time series drawn from observations.

  • hist (DataArray) – Training data, usually a model output whose biases are to be adjusted.

class xclim.sdba.adjustment.DetrendedQuantileMapping(*, nquantiles: int = 20, kind: str = '+', group: Union[str, xclim.sdba.base.Grouper] = 'time', norm_window: int = 1)[source]

Bases: xclim.sdba.adjustment.EmpiricalQuantileMapping

Quantile mapping using normalized and detrended data.

__init__(*, nquantiles: int = 20, kind: str = '+', group: Union[str, xclim.sdba.base.Grouper] = 'time', norm_window: int = 1)[source]

Detrended Quantile Mapping bias-adjustment.

The algorithm follows these steps, 1-3 being the ‘train’ part and 4-6 the ‘adjust’ part.

  1. A scaling factor that would make the mean of hist match the mean of ref is computed.

  2. ref and hist are normalized by removing the “dayofyear” mean.

  3. Adjustment factors are computed between the quantiles of the normalized ref and hist.

  4. sim is corrected by the scaling factor, and either normalized by “dayofyear” and detrended group-wise or directly detrended per “dayofyear”, using a linear fit (modifiable).

  5. Values of detrended sim are matched to the corresponding quantiles of normalized hist and corrected accordingly.

  6. The trend is put back on the result.

\[F^{-1}_{ref}\left\{F_{hist}\left[\frac{\overline{hist}\cdot sim}{\overline{sim}}\right]\right\}\frac{\overline{sim}}{\overline{hist}}\]

where \(F\) is the cumulative distribution function (CDF) and \(\overline{xyz}\) is the linear trend of the data. This equation is valid for multiplicative adjustment. Based on the DQM method of [Cannon2015].

Parameters
  • At instantiation

  • nquantiles (int) – The number of quantiles to use. Two endpoints at 1e-6 and 1 - 1e-6 will be added.

  • kind ({‘+’, ‘*’}) – The adjustment kind, either additive or multiplicative.

  • group (Union[str, Grouper]) – The grouping information used in the quantile mapping process. See xclim.sdba.base.Grouper for details. The normalization step is always performed on each day of the year.

  • norm_window (int) – The window size used in the normalization grouping. Defaults to 1.

  • In adjustment

  • interp ({‘nearest’, ‘linear’, ‘cubic’}) – The interpolation method to use when interpolating the adjustment factors. Defaults to “nearest”.

  • detrend (int or BaseDetrend instance) – The method to use when detrending. If an int is passed, it is understood as a PolyDetrend (polynomial detrending) degree. Defaults to 1 (linear detrending).

  • extrapolation ({‘constant’, ‘nan’}) – The type of extrapolation to use. See xclim.sdba.utils.extrapolate_qm() for details. Defaults to “constant”.

  • normalize_sim (bool) – If True, scaled sim is normalized by its “dayofyear” mean and then detrended using group. The norm is broadcast and added back on scen using interp="nearest", ignoring the passed interp. If False, scaled sim is detrended per “dayofyear”. The former is useful on large datasets using dask, where the “dayofyear” groups are very small, because normalization is a more efficient operation than detrending for similarly sized groups.
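
A hedged usage sketch (ref, hist and sim are hypothetical daily temperature DataArrays):

from xclim import sdba

# Monthly adjustment factors, additive kind for temperature.
DQM = sdba.DetrendedQuantileMapping(nquantiles=50, kind="+", group="time.month")
DQM.train(ref, hist)
# Quadratic detrending of sim instead of the default linear fit.
scen = DQM.adjust(sim, interp="linear", detrend=2)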

References

Cannon2015

Cannon, A. J., Sobie, S. R., & Murdock, T. Q. (2015). Bias correction of GCM precipitation by quantile mapping: How well do methods preserve changes in quantiles and extremes? Journal of Climate, 28(17), 6938–6959. https://doi.org/10.1175/JCLI-D-14-00754.1

class xclim.sdba.adjustment.EmpiricalQuantileMapping(*, nquantiles: int = 20, kind: str = '+', group: Union[str, xclim.sdba.base.Grouper] = 'time')[source]

Bases: xclim.sdba.adjustment.BaseAdjustment

Conventional quantile mapping adjustment.

__init__(*, nquantiles: int = 20, kind: str = '+', group: Union[str, xclim.sdba.base.Grouper] = 'time')[source]

Empirical Quantile Mapping bias-adjustment.

Adjustment factors are computed between the quantiles of ref and hist. Values of sim are matched to the corresponding quantiles of hist and corrected accordingly.

\[F^{-1}_{ref} (F_{hist}(sim))\]

where \(F\) is the cumulative distribution function (CDF).

Parameters
  • At instantiation

  • nquantiles (int) – The number of quantiles to use. Two endpoints at 1e-6 and 1 - 1e-6 will be added.

  • kind ({‘+’, ‘*’}) – The adjustment kind, either additive or multiplicative.

  • group (Union[str, Grouper]) – The grouping information. See xclim.sdba.base.Grouper for details.

  • In adjustment

  • interp ({‘nearest’, ‘linear’, ‘cubic’}) – The interpolation method to use when interpolating the adjustment factors. Defaults to “nearest”.

  • extrapolation ({‘constant’, ‘nan’}) – The type of extrapolation to use. See xclim.sdba.utils.extrapolate_qm() for details. Defaults to “constant”.

References

Dequé, M. (2007). Frequency of precipitation and temperature extremes over France in an anthropogenic scenario: Model results and statistical correction according to observed values. Global and Planetary Change, 57(1–2), 16–26. https://doi.org/10.1016/j.gloplacha.2006.11.030

class xclim.sdba.adjustment.ExtremeValues(cluster_thresh: str, *, q_thresh: float = 0.95)[source]

Bases: xclim.sdba.adjustment.BaseAdjustment

Second order adjustment for extreme values.

__init__(cluster_thresh: str, *, q_thresh: float = 0.95)[source]

Adjustment correction for extreme values.

The tail of the distribution of adjusted data is corrected according to the parametric Generalized Pareto distribution of the reference data ([RRJF2021]). The distributions are composed of the maximal values of clusters of “large” values, “large” values being those above cluster_thresh. Only extreme values, whose quantile within the pool of large values is above q_thresh, are re-adjusted. See Notes.

Parameters
  • At instantiation

  • cluster_thresh (Quantity (str with units)) – The threshold value for defining clusters.

  • q_thresh (float) – The quantile of “extreme” values, [0, 1[.

  • In training

  • ref_params (xr.DataArray) – Distribution parameters to use in place of a fitted dist on ref.

  • In adjustment

  • frac (float) – Fraction where the cutoff happens between the original scen and the corrected one. See Notes, ]0, 1].

  • power (float) – Shape of the correction strength, see Notes.

  • Extra diagnostics

  • In training

  • nclusters (Number of extreme value clusters found for each gridpoint.)

Notes

Extreme values are extracted from ref, hist and sim by finding all “clusters”, i.e. runs of consecutive values above cluster_thresh. The q_thresh-th quantile of these values is taken on ref and hist and becomes thresh, the extreme value threshold. The maximal value of each cluster of ref, if it exceeds that new threshold, is kept, and a Generalized Pareto distribution is fitted to these maxima; the same is done with sim. The CDF of the extreme values of sim is computed in reference to the distribution fitted on sim, and then the corresponding values (quantile / ppf) in reference to the distribution fitted on ref are taken as the new bias-adjusted values.

Once new extreme values are found, a mixture from the original scen and corrected scen is used in the result. For each original value \(S_i\) and corrected value \(C_i\) the final extreme value \(V_i\) is:

\[V_i = C_i * \tau + S_i * (1 - \tau)\]

Where \(\tau\) is a function of sim’s extreme values (\(S\)) and of the arguments frac (\(f\)) and power (\(p\)):

\[\tau = \left(\frac{1}{f}\frac{S - min(S)}{max(S) - min(S)}\right)^p\]

Code based on the biascorrect_extremes function of the Julia package [ClimateTools].
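
A hedged usage sketch (hypothetical precipitation inputs; scen comes from a first-order adjustment of sim, e.g. quantile mapping):

from xclim import sdba

EX = sdba.ExtremeValues(cluster_thresh="1 mm/day", q_thresh=0.95)
EX.train(ref, hist)
# Re-adjust only the extremes of the already bias-adjusted `scen`.
scen2 = EX.adjust(scen, sim, frac=0.25, power=1.0)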

References

ClimateTools

https://juliaclimate.github.io/ClimateTools.jl/stable/

RRJF2021

Roy, P., Rondeau-Genesse, G., Jalbert, J., Fournier, É. 2021. Climate Scenarios of Extreme Precipitation Using a Combination of Parametric and Non-Parametric Bias Correction Methods. Submitted to Climate Services, April 2021.

adjust(scen: xarray.core.dataarray.DataArray, sim: xarray.core.dataarray.DataArray, frac: float = 0.25, power: float = 1.0)[source]

Return second order bias-adjusted data. Refer to the class documentation for the algorithm details.

Parameters
  • scen (DataArray) – Bias-adjusted time series.

  • sim (DataArray) – Time series to be bias-adjusted, source of scen.

  • frac (float) – Fraction where the cutoff happens between the original scen and the corrected one, ]0, 1]. See Notes. Defaults to 0.25.

  • power (float) – Shape of the correction strength, see Notes. Defaults to 1.0.

train(ref, hist, ref_params=None)[source]

Train the second-order adjustment object. Refer to the class documentation for the algorithm details.

Parameters
  • ref (DataArray) – Training target, usually a reference time series drawn from observations.

  • hist (DataArray) – Training data, usually a model output whose biases are to be adjusted.

  • ref_params (DataArray, optional) – Distribution parameters to use in place of a Generalized Pareto fitted on ref. Must be similar to the output of xclim.indices.stats.fit called on ref. If the scipy_dist attribute is missing, genpareto is assumed. Only genextreme and genpareto are accepted as scipy_dist.

class xclim.sdba.adjustment.LOCI(*, group: Union[str, xclim.sdba.base.Grouper] = 'time', thresh: Optional[float] = None)[source]

Bases: xclim.sdba.adjustment.BaseAdjustment

Local intensity scaling adjustment intended for daily precipitation.

__init__(*, group: Union[str, xclim.sdba.base.Grouper] = 'time', thresh: Optional[float] = None)[source]

Local Intensity Scaling (LOCI) bias-adjustment.

This bias adjustment method is designed to correct daily precipitation time series by considering wet and dry days separately ([Schmidli2006]).

Multiplicative adjustment factors are computed such that the mean of hist matches the mean of ref for values above a threshold.

The threshold on the training target ref is first mapped to hist by finding the quantile in hist having the same exceedance probability as thresh in ref. The adjustment factor is then given by

\[s = \frac{\left \langle ref: ref \geq t_{ref} \right\rangle - t_{ref}}{\left \langle hist : hist \geq t_{hist} \right\rangle - t_{hist}}\]

In the case of precipitation, the adjustment factor is the ratio of wet-day intensities.

For an adjustment factor s, the bias-adjustment of sim is:

\[scen(t) = \max\left(t_{ref} + s \cdot (sim(t) - t_{hist}), 0\right)\]

Parameters
  • At instantiation

  • group (Union[str, Grouper]) – The grouping information. See xclim.sdba.base.Grouper for details.

  • thresh (float) – The threshold in ref above which the values are scaled.

  • In adjustment

  • interp ({‘nearest’, ‘linear’, ‘cubic’}) – The interpolation method to use when interpolating the adjustment factors. Defaults to “linear”.

References

Schmidli2006

Schmidli, J., Frei, C., & Vidale, P. L. (2006). Downscaling from GCM precipitation: A benchmark for dynamical and statistical downscaling methods. International Journal of Climatology, 26(5), 679–689. https://doi.org/10.1002/joc.1287

class xclim.sdba.adjustment.NpdfTransform(base: xclim.sdba.adjustment.BaseAdjustment = <class 'xclim.sdba.adjustment.QuantileDeltaMapping'>, base_kws: Optional[Mapping[str, Any]] = None, n_escore: int = 0, n_iter: int = 20)[source]

Bases: xclim.sdba.base.Parametrizable

N-dimensional probability density function transform.

__init__(base: xclim.sdba.adjustment.BaseAdjustment = <class 'xclim.sdba.adjustment.QuantileDeltaMapping'>, base_kws: Optional[Mapping[str, Any]] = None, n_escore: int = 0, n_iter: int = 20)[source]

A multivariate bias-adjustment algorithm described by [Cannon18], as part of the MBCn algorithm, based on a color-correction algorithm described by [Pitie05].

This algorithm in itself, when used with QuantileDeltaMapping, is NOT trend-preserving. The full MBCn algorithm includes a reordering step provided here by xclim.sdba.processing.reordering().

See notes for an explanation of the algorithm.

Parameters
  • At instantiation

  • base (BaseAdjustment) – A univariate bias-adjustment class. This is untested for anything other than QuantileDeltaMapping.

  • base_kws (dict, optional) – Arguments passed to the initialization of the univariate adjustment.

  • n_escore (int) – The number of elements to send to the escore function. The default, 0, means all elements are included. Pass -1 to skip computing the escore completely. Small numbers result in less significant scores, but the execution time goes up quickly with large values.

  • n_iter (int) – The number of iterations to perform. Defaults to 20.

  • In train-adjustment

  • pts_dim (str) – The name of the “multivariate” dimension. Defaults to “variables”, which is the normal case when using xclim.sdba.base.stack_variables().

  • adj_kws (dict, optional) – Dictionary of arguments to pass to the adjust method of the univariate adjustment.

  • rot_matrices (xr.DataArray, optional) – The rotation matrices as a 3D array (‘iterations’, <pts_dim>, <anything>), with shape (n_iter, <N>, <N>). If left empty, random rotation matrices will be automatically generated.

Notes

The historical reference (\(T\), for “target”), simulated historical (\(H\)) and simulated projected (\(S\)) datasets are constructed by stacking the time series of N variables together. The algorithm goes through the following steps:

  1. Rotate the datasets in the N-dimensional variable space with \(\mathbf{R}\), a random rotation NxN matrix.

\[\begin{split}\tilde{\mathbf{T}} = \mathbf{T}\mathbf{R} \\ \tilde{\mathbf{H}} = \mathbf{H}\mathbf{R} \\ \tilde{\mathbf{S}} = \mathbf{S}\mathbf{R}\end{split}\]

  2. A univariate bias-adjustment \(\mathcal{F}\) is used on the rotated datasets. The adjustments are made in additive mode, for each variable \(i\).

\[\hat{\mathbf{H}}_i, \hat{\mathbf{S}}_i = \mathcal{F}\left(\tilde{\mathbf{T}}_i, \tilde{\mathbf{H}}_i, \tilde{\mathbf{S}}_i\right)\]
  3. The bias-adjusted datasets are rotated back.

\[\begin{split}\mathbf{H}' = \hat{\mathbf{H}}\mathbf{R} \\ \mathbf{S}' = \hat{\mathbf{S}}\mathbf{R}\end{split}\]

These three steps are repeated a certain number of times, prescribed by argument n_iter. At each iteration, a new random rotation matrix is generated.

The original algorithm ([Pitie05]) stops the iterations when some distance score converges. Following [Cannon18] and the MBCn implementation in [CannonR], we instead fix the number of iterations.

As done by [Cannon18], the distance score chosen is the “energy distance” from [SzekelyRizzo] (see xclim.sdba.processing.escore()).

The random matrices are generated following a method laid out by [Mezzadri].

This is only part of the full MBCn algorithm, see The MBCn algorithm for an example on how to replicate the full method with xclim. This includes a standardization of the simulated data beforehand, an initial univariate adjustment and the reordering of those adjusted series according to the rank structure of the output of this algorithm.
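
A hedged construction sketch (the dataset names are hypothetical; the surrounding MBCn steps and the full train-adjust invocation are described in The MBCn algorithm):

from xclim import sdba

# Stack the variables to adjust (e.g. tasmax and pr) along the "variables"
# dimension, as expected by the multivariate transform.
ref = sdba.base.stack_variables(ref_ds)
hist = sdba.base.stack_variables(hist_ds)
sim = sdba.base.stack_variables(sim_ds)

# Quantile Delta Mapping as the univariate base adjustment, 20 iterations.
NpdfT = sdba.adjustment.NpdfTransform(
    base=sdba.adjustment.QuantileDeltaMapping,
    base_kws={"nquantiles": 50, "group": "time"},
    n_iter=20,
)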

References

Cannon18

Cannon, A. J. (2018). Multivariate quantile mapping bias correction: An N-dimensional probability density function transform for climate model simulations of multiple variables. Climate Dynamics, 50(1), 31–49. https://doi.org/10.1007/s00382-017-3580-6

Pitie05

Pitie, F., Kokaram, A. C., & Dahyot, R. (2005). N-dimensional probability density function transfer and its application to color transfer. Tenth IEEE International Conference on Computer Vision (ICCV’05) Volume 1, 2, 1434-1439 Vol. 2. https://doi.org/10.1109/ICCV.2005.166

SzekelyRizzo

Szekely, G. J. and Rizzo, M. L. (2004) Testing for Equal Distributions in High Dimension, InterStat, November (5)
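
CannonR

Cannon, A. J. MBC: Multivariate Bias Correction of Climate Model Outputs. R package. https://cran.r-project.org/package=MBC

Mezzadri

Mezzadri, F. (2007). How to generate random matrices from the classical compact groups. Notices of the AMS, 54(5), 592–604.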

class xclim.sdba.adjustment.PrincipalComponents(*, group: Union[str, xclim.sdba.base.Grouper] = 'time', crd_dims: Optional[Sequence[str]] = None, pts_dims: Optional[Sequence[str]] = None)[source]

Bases: xclim.sdba.adjustment.BaseAdjustment

Principal components inspired adjustment.

__init__(*, group: Union[str, xclim.sdba.base.Grouper] = 'time', crd_dims: Optional[Sequence[str]] = None, pts_dims: Optional[Sequence[str]] = None)[source]

Principal component adjustment.

This bias-correction method maps model simulation values to the observation space through principal components ([hnilica2017]). Values in the simulation space (multiple variables, or multiple sites) can be thought of as coordinates along axes, one per variable or site. Principal components (PC) are linear combinations of the original variables where the coefficients are the eigenvectors of the covariance matrix. Values can then be expressed as coordinates along the PC axes. The method assumes that bias-corrected values have the same coordinates along the PC axes of the observations. By converting from the observation PC space back to the original space, we get bias-corrected values. See Notes for a mathematical explanation.

Note that “principal components” is meant here as the algebraic operation defining a coordinate system based on the eigenvectors, not as statistical principal component analysis.

Parameters
  • At instantiation

  • group (Union[str, Grouper]) – The grouping information. pts_dims can also be given through Grouper’s add_dims argument. See Notes. See xclim.sdba.base.Grouper for details. The adjustment will be performed on each group independently.

  • crd_dims (Sequence of str, optional) – The data dimension(s) along which the multiple simulation space dimensions are taken. They are flattened into a “coordinate” dimension, see Notes. Default is None, in which case all dimensions shared by ref and hist, except those in pts_dims, are used. The training algorithm currently doesn’t support any chunking along the coordinate and point dimensions.

  • pts_dims (Sequence of str, optional) – The data dimensions to flatten into the “points” dimension, see Notes. They will be merged with those given through the add_dims property of group.

Notes

The input data is understood as a set of \(N\) points in an \(M\)-dimensional space.

  • \(N\) is taken along the data coordinates listed in pts_dims and the group (the main dim but also the add_dims).

  • \(M\) is taken along the data coordinates listed in crd_dims, the default being all except those in pts_dims.

For example, for a 3D matrix of data, say in (lat, lon, time), we could say that all spatial points are independent dimensions of the simulation space by passing crd_dims=['lat', 'lon']. For a (5, 5, 365) array, this results in a 25-dimensional space, i.e. \(M = 25\) and \(N = 365\).

Thus, the adjustment is equivalent to a linear transformation of these \(N\) points in an \(M\)-dimensional space.

The principal components (PC) of hist and ref are used to define new coordinate systems, centered on their respective means. The training step creates a matrix defining the transformation from hist to ref:

\[scen = e_{R} + \mathrm{\mathbf{T}}(sim - e_{H})\]

Where:

\[\mathrm{\mathbf{T}} = \mathrm{\mathbf{R}}\mathrm{\mathbf{H}}^{-1}\]

\(\mathrm{\mathbf{R}}\) is the matrix transforming from the PC coordinates computed on ref to the data coordinates. Similarly, \(\mathrm{\mathbf{H}}\) is the transform from the hist PC coordinates to the data coordinates (\(\mathrm{\mathbf{H}}^{-1}\) being the inverse transformation). \(e_R\) and \(e_H\) are the centroids of the ref and hist distributions respectively. Upon running the adjust step, one may decide to use \(e_S\), the centroid of the sim distribution, instead of \(e_H\).
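
A hedged usage sketch, treating all grid points as dimensions of the simulation space (hypothetical inputs on a (lat, lon, time) grid):

from xclim import sdba

PCA = sdba.PrincipalComponents(group="time", crd_dims=["lat", "lon"])
PCA.train(ref, hist)
scen = PCA.adjust(sim)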

References

hnilica2017

Hnilica, J., Hanel, M. and Puš, V. (2017), Multisite bias correction of precipitation data from regional climate models. Int. J. Climatol., 37: 2934-2946. https://doi.org/10.1002/joc.4890

class xclim.sdba.adjustment.QuantileDeltaMapping(**kwargs)[source]

Bases: xclim.sdba.adjustment.EmpiricalQuantileMapping

Quantile mapping with sim’s quantiles computed independently.

__init__(**kwargs)[source]

Quantile Delta Mapping bias-adjustment.

Adjustment factors are computed between the quantiles of ref and hist. Quantiles of sim are matched to the corresponding quantiles of hist and corrected accordingly.

\[sim\frac{F^{-1}_{ref}\left[F_{sim}(sim)\right]}{F^{-1}_{hist}\left[F_{sim}(sim)\right]}\]

where \(F\) is the cumulative distribution function (CDF). This equation is valid for multiplicative adjustment. The algorithm is based on the “QDM” method of [Cannon2015].

Parameters
  • At instantiation

  • nquantiles (int) – The number of quantiles to use. Two endpoints at 1e-6 and 1 - 1e-6 will be added.

  • kind ({‘+’, ‘*’}) – The adjustment kind, either additive or multiplicative.

  • group (Union[str, Grouper]) – The grouping information. See xclim.sdba.base.Grouper for details.

  • In adjustment

  • interp ({‘nearest’, ‘linear’, ‘cubic’}) – The interpolation method to use when interpolating the adjustment factors. Defaults to “nearest”.

  • extrapolation ({‘constant’, ‘nan’}) – The type of extrapolation to use. See xclim.sdba.utils.extrapolate_qm() for details. Defaults to “constant”.

  • Extra diagnostics

  • In adjustment

  • quantiles (The quantile of each value of sim. The adjustment factor is interpolated using this as the “quantile” axis on ds.af.)

References

Cannon, A. J., Sobie, S. R., & Murdock, T. Q. (2015). Bias correction of GCM precipitation by quantile mapping: How well do methods preserve changes in quantiles and extremes? Journal of Climate, 28(17), 6938–6959. https://doi.org/10.1175/JCLI-D-14-00754.1

class xclim.sdba.adjustment.Scaling(*, group='time', kind='+')[source]

Bases: xclim.sdba.adjustment.BaseAdjustment

Simple scaling adjustment.

__init__(*, group='time', kind='+')[source]

Scaling bias-adjustment.

Simple bias-adjustment method scaling variables by an additive or multiplicative factor so that the mean of hist matches the mean of ref.

Parameters
  • At instantiation

  • group (Union[str, Grouper]) – The grouping information. See xclim.sdba.base.Grouper for details.

  • kind ({‘+’, ‘*’}) – The adjustment kind, either additive or multiplicative.

  • In adjustment

  • interp ({‘nearest’, ‘linear’, ‘cubic’}) – The interpolation method to use when interpolating the adjustment factors. Defaults to “nearest”.

Utilities

SDBA utilities module.

xclim.sdba.utils.equally_spaced_nodes(n: int, eps: Optional[float] = 0.0001) → numpy.array[source]

Return nodes with n equally spaced points within [0, 1] plus two end-points.

Parameters
  • n (int) – Number of equally spaced nodes.

  • eps (float, None) – Distance from 0 and 1 of end nodes. If None, do not add endpoints.

Returns

np.array – Nodes between 0 and 1.

Notes

For n=4, eps=0 : 0—x——x——x——x—1
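
A hedged example consistent with the diagram above (the interior nodes are assumed to sit at the mid-quantile positions):

from xclim.sdba.utils import equally_spaced_nodes

# With n=4 and eps=0: four interior nodes at 0.125, 0.375, 0.625, 0.875,
# plus the two end-points 0 and 1.
nodes = equally_spaced_nodes(4, eps=0)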

Processing

Pre- and post-processing for bias adjustment.

xclim.sdba.processing.adapt_freq(ds, **kwargs)

xclim.sdba.processing.jitter_under_thresh(x: xarray.core.dataarray.DataArray, thresh: float)[source]

Replace values smaller than the threshold by uniform random noise.

Do not confuse with R’s jitter, which adds uniform noise instead of replacing values.

Parameters
  • x (xr.DataArray) – Values.

  • thresh (float) – Threshold under which values are replaced by uniform random noise.

Returns

array

Notes

If thresh is high, this will change the mean value of x.
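
A hedged example (units and values are hypothetical):

import xarray as xr
from xclim.sdba.processing import jitter_under_thresh

# Values under 0.01 are replaced by uniform random noise below the threshold.
pr = xr.DataArray([0.0, 0.003, 2.5, 0.0, 7.1], dims="time", name="pr")
pr_jittered = jitter_under_thresh(pr, 0.01)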