Title: Statistical Bias Correction Kit
Description: Implementation of several recent multivariate bias correction methods with a unified interface to facilitate their use. A description and comparison between methods can be found in <doi:10.5194/esd-11-537-2020>.
Authors: Yoann Robin [aut, cre], Mathieu Vrac [cph]
Maintainer: Yoann Robin <[email protected]>
License: GPL-3
Version: 1.0.0
Built: 2025-03-05 04:01:52 UTC
Source: https://github.com/cran/SBCK
Perform a multivariate (non-stationary) bias correction.
Uses a quantile shuffle in the calibration and projection periods with CDFt.
mvq
[MVQuantilesShuffle] Class to transform the dependence structure
bc_method
[SBCK::] Bias correction method
bckwargs
[list] List of arguments of bias correction
bcm_
[SBCK::] Instantiated bias correction method
reverse
[bool] Whether bc_method is applied first and then the shuffle, or the reverse
new()
Create a new AR2D2 object.
AR2D2$new( col_cond = base::c(1), lag_search = 1, lag_keep = 1, bc_method = SBCK::CDFt, shuffle = "quantile", reverse = FALSE, ... )
col_cond
Conditioning column(s)
lag_search
Number of lags to transform the dependence structure
lag_keep
Number of lags to keep
bc_method
Bias correction method
shuffle
Shuffle method used, either "quantile" or "rank"
reverse
Whether bc_method is applied first and then the shuffle, or the reverse
...
Other named arguments passed to bc_method$new
A new 'AR2D2' object.
fit()
Fit the bias correction method. If X1 is NULL, the method is considered stationary
AR2D2$fit(Y0, X0, X1 = NULL)
Y0
[matrix: n_samples * n_features] Observations in calibration
X0
[matrix: n_samples * n_features] Model in calibration
X1
[matrix: n_samples * n_features] Model in projection
NULL
predict()
Predict the correction
AR2D2$predict(X1 = NULL, X0 = NULL)
X1
[matrix: n_samples * n_features or NULL] Model in projection
X0
[matrix: n_samples * n_features or NULL] Model in calibration
[matrix or list] Return the matrix of correction of X1 if X0 is NULL (and vice-versa), else return a list containing Z1 and Z0, the corrections of X1 and X0
clone()
The objects of this class are cloneable with this method.
AR2D2$clone(deep = FALSE)
deep
Whether to make a deep clone.
Vrac, M. and Thao, S.: R2D2 v2.0: accounting for temporal dependences in multivariate bias correction via analogue rank resampling, Geosci. Model Dev., 13, 5367–5387, https://doi.org/10.5194/gmd-13-5367-2020, 2020.
## Three 4-variate random variables
Y0 = matrix( stats::rnorm( n = 1000 ) , ncol = 4 )          ## Reference in calibration period
X0 = matrix( stats::rnorm( n = 1000 ) , ncol = 4 ) / 2 + 3  ## Biased in calibration period
X1 = matrix( stats::rnorm( n = 1000 ) , ncol = 4 ) * 2 + 6  ## Biased in projection period

## Bias correction
cond_col   = base::c(2,4)
lag_search = 6
lag_keep   = 3

## Step 1 : construction of the class AR2D2
ar2d2 = SBCK::AR2D2$new( cond_col , lag_search , lag_keep )

## Step 2 : Fit the bias correction model
ar2d2$fit( Y0 , X0 , X1 )

## Step 3 : perform the bias correction
Z = ar2d2$predict(X1,X0)
Length of the cells used to compute a histogram
bin_width_estimator(X, method = "auto")
X |
[matrix] A matrix containing data, nrow = n_samples, ncol = n_features |
method |
[string] Method to estimate bin_width, values are "auto", "FD" (Freedman-Diaconis, robust to outliers) or "Sturges". If "auto" is used and nrow(X) < 1000, "Sturges" is used, else "FD" is used. |
[vector] Length of the bins
X = base::cbind( stats::rnorm( n = 2000 ) , stats::rexp(2000) )
## Freedman-Diaconis is used
binw_width = SBCK::bin_width_estimator( X , method = "auto" )

X = stats::rnorm( n = 500 )
## Sturges is used
binw_width = SBCK::bin_width_estimator( X , method = "auto" )
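A short additional sketch forcing each estimator explicitly, using only the method names documented above:
X = base::cbind( stats::rnorm( n = 2000 ) , stats::rexp(2000) )
bw_fd      = SBCK::bin_width_estimator( X , method = "FD" )      ## Freedman-Diaconis
bw_sturges = SBCK::bin_width_estimator( X , method = "Sturges" ) ## Sturges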
Perform a univariate bias correction of X with respect to Y.
Correction is applied margin by margin.
n_features
[integer] Number of features
tol
[double] Floating point tolerance
distY0
[ROOPSD distribution or a list of them] Describes the law of each margin. A list permits using a different law for each margin. Default is ROOPSD::rv_histogram.
distY1
[ROOPSD distribution or a list of them] Describes the law of each margin. A list permits using a different law for each margin. Default is ROOPSD::rv_histogram.
distX0
[ROOPSD distribution or a list of them] Describes the law of each margin. A list permits using a different law for each margin. Default is ROOPSD::rv_histogram.
distX1
[ROOPSD distribution or a list of them] Describes the law of each margin. A list permits using a different law for each margin. Default is ROOPSD::rv_histogram.
new()
Create a new CDFt object.
CDFt$new(...)
...
Optional arguments are:
- distX0, distX1: models in calibration and projection period, see ROOPSD
- distY0, distY1: observations in calibration and projection period, see ROOPSD
- kwargsX0, kwargsX1: list of arguments for each respective distribution
- kwargsY0, kwargsY1: list of arguments for each respective distribution
- scale_left_tail [float]: scale applied on the left support (min to median) between calibration and projection period. If NULL (default), it is determined during the fit. If == 1, equivalent to the original CDFt algorithm.
- scale_right_tail [float]: scale applied on the right support (median to max) between calibration and projection period. If NULL (default), it is determined during the fit. If == 1, equivalent to the original CDFt algorithm.
- normalize_cdf [bool or vector of bool]: whether a normalization is applied to the data to maximize the overlap of the supports. Can be a single bool (TRUE or FALSE, applied to all columns), or a vector of bool of length 'n_features' to distinguish the columns.
A new 'CDFt' object.
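As an illustrative sketch (values arbitrary, not from the package examples), the optional arguments listed above are passed directly to the constructor:
## Illustrative only: force the original CDFt scaling and normalize the CDF supports
cdft_custom = SBCK::CDFt$new( scale_left_tail = 1 , scale_right_tail = 1 , normalize_cdf = TRUE )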
fit()
Fit the bias correction method
CDFt$fit(Y0, X0, X1)
Y0
[matrix: n_samples * n_features] Observations in calibration
X0
[matrix: n_samples * n_features] Model in calibration
X1
[matrix: n_samples * n_features] Model in projection
NULL
predict()
Predict the correction
CDFt$predict(X1, X0 = NULL)
X1
[matrix: n_samples * n_features] Model in projection
X0
[matrix: n_samples * n_features or NULL] Model in calibration
[matrix or list] Return the matrix of correction of X1 if X0 is NULL, else return a list containing Z1 and Z0, the corrections of X1 and X0
clone()
The objects of this class are cloneable with this method.
CDFt$clone(deep = FALSE)
deep
Whether to make a deep clone.
Michelangeli, P.-A., Vrac, M., and Loukos, H.: Probabilistic downscaling approaches: Application to wind cumulative distribution functions, Geophys. Res. Lett., 36, L11708, https://doi.org/10.1029/2009GL038401, 2009.
## Three bivariate random variables (rnorm and rexp are inverted between ref and bias)
XY = SBCK::dataset_gaussian_exp_2d(2000)
X0 = XY$X0 ## Biased in calibration period
Y0 = XY$Y0 ## Reference in calibration period
X1 = XY$X1 ## Biased in projection period

## Bias correction
## Step 1 : construction of the class CDFt
cdft = SBCK::CDFt$new()

## Step 2 : Fit the bias correction model
cdft$fit( Y0 , X0 , X1 )

## Step 3 : perform the bias correction, Z is a list containing corrections
Z = cdft$predict(X1,X0)
Z$Z0 ## Correction in calibration period
Z$Z1 ## Correction in projection period
Compute the Chebyshev distance between two datasets or SparseHist objects X and Y
chebyshev(X, Y)
X |
[matrix or SparseHist] If matrix, dim = ( nrow = n_samples, ncol = n_features) |
Y |
[matrix or SparseHist] If matrix, dim = ( nrow = n_samples, ncol = n_features) |
[float] value of distance
X = base::cbind( stats::rnorm(2000) , stats::rnorm(2000) )
Y = base::cbind( stats::rnorm(2000,mean=2) , stats::rnorm(2000) )
bw = base::c(0.1,0.1)
muX = SBCK::SparseHist( X , bw )
muY = SBCK::SparseHist( Y , bw )

## The four are equal
d = SBCK::chebyshev( X , Y )
d = SBCK::chebyshev(muX , Y )
d = SBCK::chebyshev( X , muY )
d = SBCK::chebyshev(muX , muY )
Pairwise distances between X and itself with an R function (metric). DO NOT USE, use SBCK::pairwise_distances
cpp_pairwise_distances_XCall(X,metric)
X |
[Rcpp::NumericMatrix] Matrix |
metric |
[Rcpp::Function] R function |
Pairwise distances between X and itself with a compiled str_metric. DO NOT USE, use SBCK::pairwise_distances
cpp_pairwise_distances_Xstr(X,str_metric)
X |
[Rcpp::NumericMatrix] Matrix |
str_metric |
[std::string] C++ string |
Pairwise distances between X and Y with an R function (metric). DO NOT USE, use SBCK::pairwise_distances
cpp_pairwise_distances_XYCall(X,Y,metric)
X |
[Rcpp::NumericMatrix] Matrix |
Y |
[Rcpp::NumericMatrix] Matrix |
metric |
[Rcpp::Function] R function |
Pairwise distances between two different matrices X and Y with a compiled str_metric. DO NOT USE, use SBCK::pairwise_distances
cpp_pairwise_distances_XYstr(X,Y,str_metric)
X |
[Rcpp::NumericMatrix] Matrix |
Y |
[Rcpp::NumericMatrix] Matrix |
str_metric |
[std::string] C++ string |
Transform two datasets into SparseHist; if X or Y (or both) is already a SparseHist, only the other one is converted
data_to_hist(X, Y)
X |
[matrix or SparseHist] |
Y |
[matrix or SparseHist] |
[list(muX,muY)] a list with the two SparseHist
X = base::cbind( stats::rnorm(2000) , stats::rexp(2000) )
Y = base::cbind( stats::rexp(2000) , stats::rnorm(2000) )
bw = base::c(0.1,0.1)
muX = SBCK::SparseHist( X , bw )
muY = SBCK::SparseHist( Y , bw )

## The four give the same result
SBCK::data_to_hist( X , Y )
SBCK::data_to_hist( muX , Y )
SBCK::data_to_hist( X , muY )
SBCK::data_to_hist( muX , muY )
Generate a testing dataset from a bimodal random bivariate Gaussian distribution
dataset_bimodal_reverse_2d(n_samples)
n_samples |
[integer] number of samples drawn |
[list] a list containing X0, X1 (biased in calibration/projection) and Y0 (reference in calibration)
XY = SBCK::dataset_bimodal_reverse_2d(2000)
XY$X0 ## Biased in calibration period
XY$Y0 ## Reference in calibration period
XY$X1 ## Biased in projection period
Generate a testing dataset from a random bivariate Gaussian distribution
dataset_gaussian_2d(n_samples)
n_samples |
[integer] number of samples drawn |
[list] a list containing X0, X1 (biased in calibration/projection) and Y0 (reference in calibration)
XY = SBCK::dataset_gaussian_2d(2000)
XY$X0 ## Biased in calibration period
XY$Y0 ## Reference in calibration period
XY$X1 ## Biased in projection period
Generate a testing dataset such that the biased dataset is a distribution of the form Normal x Exp and the reference of the form Exp x Normal.
dataset_gaussian_exp_2d(n_samples)
n_samples |
[integer] number of samples drawn |
[list] a list containing X0, X1 (biased in calibration/projection) and Y0 (reference in calibration)
XY = SBCK::dataset_gaussian_exp_2d(2000)
XY$X0 ## Biased in calibration period
XY$Y0 ## Reference in calibration period
XY$X1 ## Biased in projection period
Generate a univariate testing dataset from a mixture of Gaussian and exponential distributions
dataset_gaussian_exp_mixture_1d(n_samples)
n_samples |
[integer] number of samples drawn |
[list] a list containing X0, X1 (biased in calibration/projection) and Y0 (reference in calibration)
XY = SBCK::dataset_gaussian_exp_mixture_1d(2000)
XY$X0 ## Biased in calibration period
XY$Y0 ## Reference in calibration period
XY$X1 ## Biased in projection period
Generate a testing dataset such that the biased dataset is a normal distribution and the reference is a mixture of normals with an "L" shape
dataset_gaussian_L_2d(n_samples)
n_samples |
[integer] number of samples drawn |
[list] a list containing X0, X1 (biased in calibration/projection) and Y0 (reference in calibration)
XY = SBCK::dataset_gaussian_L_2d(2000)
XY$X0 ## Biased in calibration period
XY$Y0 ## Reference in calibration period
XY$X1 ## Biased in projection period
Generate a univariate testing dataset such that the biased data follow an exponential law whereas the reference follows a normal distribution
dataset_gaussian_VS_exp_1d(n_samples)
n_samples |
[integer] number of samples drawn |
[list] a list containing X0, X1 (biased in calibration/projection) and Y0 (reference in calibration)
XY = SBCK::dataset_gaussian_VS_exp_1d(2000)
XY$X0 ## Biased in calibration period
XY$Y0 ## Reference in calibration period
XY$X1 ## Biased in projection period
Generate a testing dataset similar to temperature and precipitation. The method is the following:
- Data from a multivariate normal law (dim = 2) are drawn
- The quantile mapping is used to map the last column into the exponential law
- Values lower than a fixed quantile are replaced by 0
dataset_like_tas_pr(n_samples)
n_samples |
[integer] number of samples drawn |
[list] a list containing X0, X1 (biased in calibration/projection) and Y0 (reference in calibration)
XY = SBCK::dataset_like_tas_pr(2000)
XY$X0 ## Biased in calibration period
XY$Y0 ## Reference in calibration period
XY$X1 ## Biased in projection period
Class used by CDFt and QM to facilitate the fit, do not use directly.
Used to handle the fit of each margin.
dist
[ROOPSD distribution] name of class
law
[ROOPSD distribution] class set
kwargs
[list] arguments of dist
new()
Create a new DistHelper object.
DistHelper$new(dist, kwargs)
dist
[ROOPSD distribution or list] statistical law
kwargs
[list] arguments passed to dist
A new 'DistHelper' object.
set_features()
set the number of features
DistHelper$set_features(n_features)
n_features
[integer] number of features
NULL
fit()
fit the laws
DistHelper$fit(X, i)
X
[matrix] dataset to fit
i
[integer] margins to fit
NULL
is_frozen()
Test if margin i is frozen
DistHelper$is_frozen(i)
i
[integer] margins to fit
[bool]
is_parametric()
Test if margin i is parametric
DistHelper$is_parametric(i)
i
[integer] margins to fit
[bool]
clone()
The objects of this class are cloneable with this method.
DistHelper$clone(deep = FALSE)
deep
Whether to make a deep clone.
Perform a multivariate (non-stationary) bias correction.
Three random variables are needed: Y0, X0 and X1. The dynamic between X0 and X1 is estimated and applied to Y0 to estimate Y1. Finally, OTC is used between X1 and the estimated Y1.
SBCK::OTC
-> dOTC
new()
Create a new dOTC object.
dOTC$new( bin_width = NULL, bin_origin = NULL, cov_factor = "std", ot = SBCK::OTNetworkSimplex$new() )
bin_width
[vector or NULL] A vector of lengths of the cells discretizing R^(number of variables). If NULL, it is estimated during the fit
bin_origin
[vector or NULL] Coordinate of lower corner of one cell. If NULL, c(0,...,0) is used
cov_factor
[string or matrix] Covariance factor to correct the dynamic transferred between X0 and Y0. For string, available values are "std" and "cholesky"
ot
[OTSolver] Optimal Transport solver, default is the network simplex
A new 'dOTC' object.
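For illustration, a sketch selecting the "cholesky" covariance factor described above (the bin_width value is arbitrary):
dotc_chol = SBCK::dOTC$new( bin_width = base::c(0.2,0.2) , cov_factor = "cholesky" )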
fit()
Fit the bias correction method
dOTC$fit(Y0, X0, X1)
Y0
[matrix: n_samples * n_features] Observations in calibration
X0
[matrix: n_samples * n_features] Model in calibration
X1
[matrix: n_samples * n_features] Model in projection
NULL
predict()
Predict the correction
Note: Only the centers of the bins associated with the corrected points are returned, but all corrections of the form:
>> bw = dotc$bin_width / 2
>> n = base::prod(base::dim(X1))
>> Z1 = dotc$predict(X1)
>> Z1 = Z1 + t(matrix(stats::runif( n = n , min = - bw , max = bw ) , ncol = dim(X1)[1] ))
are equivalent for OTC.
dOTC$predict(X1, X0 = NULL)
X1
[matrix: n_samples * n_features] Model in projection
X0
[matrix: n_samples * n_features or NULL] Model in calibration
[matrix or list] Return the matrix of correction of X1 if X0 is NULL, else return a list containing Z1 and Z0, the corrections of X1 and X0
clone()
The objects of this class are cloneable with this method.
dOTC$clone(deep = FALSE)
deep
Whether to make a deep clone.
Robin, Y., Vrac, M., Naveau, P., Yiou, P.: Multivariate stochastic bias corrections with optimal transport, Hydrol. Earth Syst. Sci., 23, 773–786, 2019, https://doi.org/10.5194/hess-23-773-2019
## Three bivariate random variables (rnorm and rexp are inverted between ref and bias)
XY = SBCK::dataset_gaussian_exp_2d(2000)
X0 = XY$X0 ## Biased in calibration period
Y0 = XY$Y0 ## Reference in calibration period
X1 = XY$X1 ## Biased in projection period

## Bin length
bin_width = c(0.2,0.2)

## Bias correction
## Step 1 : construction of the class dOTC
dotc = SBCK::dOTC$new( bin_width )

## Step 2 : Fit the bias correction model
dotc$fit( Y0 , X0 , X1 )

## Step 3 : perform the bias correction, Z is a list containing corrections
Z = dotc$predict(X1,X0)
Z$Z0 ## Correction in calibration period
Z$Z1 ## Correction in projection period
Perform a bias correction of the auto-correlation.
Correct the auto-correlation with a shift approach, taking non-stationarity into account.
shift
[Shift class] Shift class to shift data.
bc_method
[SBCK::BC_method] Underlying bias correction method.
method
[character] Whether the inverse shift is applied by row or column, see the Shift class
ref
[integer] reference column/row to inverse shift, see class Shift. Default is 0.5 * (lag+1)
new()
Create a new dTSMBC object.
dTSMBC$new(lag, bc_method = dOTC, method = "row", ref = "middle", ...)
lag
[integer] max lag of autocorrelation
bc_method
[SBCK::BC_METHOD] bias correction method to use after the shift of the data, default is dOTC
method
[character] Whether the inverse shift is applied by row or column, see the Shift class
ref
[integer] reference column/row to inverse shift, see class Shift. Default is 0.5 * (lag+1)
...
[] All other arguments are passed to bc_method
A new 'dTSMBC' object.
fit()
Fit the bias correction method
dTSMBC$fit(Y0, X0, X1)
Y0
[matrix: n_samples * n_features] Observations in calibration
X0
[matrix: n_samples * n_features] Model in calibration
X1
[matrix: n_samples * n_features] Model in projection
NULL
predict()
Predict the correction
dTSMBC$predict(X1, X0 = NULL)
X1
[matrix: n_samples * n_features] Model in projection
X0
[matrix: n_samples * n_features or NULL] Model in calibration
[matrix or list] Return the matrix of correction of X1 if X0 is NULL, else return a list containing Z1 and Z0, the corrections of X1 and X0
clone()
The objects of this class are cloneable with this method.
dTSMBC$clone(deep = FALSE)
deep
Whether to make a deep clone.
Robin, Y. and Vrac, M.: Is time a variable like the others in multivariate statistical downscaling and bias correction?, Earth Syst. Dynam. Discuss. [preprint], https://doi.org/10.5194/esd-2021-12, in review, 2021.
## arima model parameters
modelX0 = list( ar = base::c( 0.6 , 0.2 , -0.1 ) )
modelX1 = list( ar = base::c( 0.4 , 0.1 , -0.3 ) )
modelY0 = list( ar = base::c( -0.3 , 0.4 , -0.2 ) )

## arima random generators
rand.genX0 = function(n){ return(stats::rnorm( n , mean = 0.2 , sd = 1 )) }
rand.genX1 = function(n){ return(stats::rnorm( n , mean = 0.8 , sd = 1 )) }
rand.genY0 = function(n){ return(stats::rnorm( n , mean = 0 , sd = 0.7 )) }

## Generate three AR processes
X0 = stats::arima.sim( n = 1000 , model = modelX0 , rand.gen = rand.genX0 )
X1 = stats::arima.sim( n = 1000 , model = modelX1 , rand.gen = rand.genX1 )
Y0 = stats::arima.sim( n = 1000 , model = modelY0 , rand.gen = rand.genY0 )
X0 = as.vector( X0 )
X1 = as.vector( X1 )
Y0 = as.vector( Y0 + 5 )

## And correct it with 30 lags
dtsbc = SBCK::dTSMBC$new( 30 )
dtsbc$fit( Y0 , X0 , X1 )
Z = dtsbc$predict(X1,X0)
Perform a multivariate (non-stationary) bias correction.
Uses the Schaake shuffle.
SBCK::CDFt
-> ECBC
new()
Create a new ECBC object.
ECBC$new(...)
...
This class is based on CDFt and takes the same arguments.
A new 'ECBC' object.
fit()
Fit the bias correction method
ECBC$fit(Y0, X0, X1)
Y0
[matrix: n_samples * n_features] Observations in calibration
X0
[matrix: n_samples * n_features] Model in calibration
X1
[matrix: n_samples * n_features] Model in projection
NULL
predict()
Predict the correction
ECBC$predict(X1, X0 = NULL)
X1
[matrix: n_samples * n_features] Model in projection
X0
[matrix: n_samples * n_features or NULL] Model in calibration
[matrix or list] Return the matrix of correction of X1 if X0 is NULL, else return a list containing Z1 and Z0, the corrections of X1 and X0
clone()
The objects of this class are cloneable with this method.
ECBC$clone(deep = FALSE)
deep
Whether to make a deep clone.
Vrac, M. and P. Friederichs, 2015: Multivariate—Intervariable, Spatial, and Temporal—Bias Correction. J. Climate, 28, 218–237, https://doi.org/10.1175/JCLI-D-14-00059.1
## Three bivariate random variables (rnorm and rexp are inverted between ref and bias)
XY = SBCK::dataset_gaussian_exp_2d(2000)
X0 = XY$X0 ## Biased in calibration period
Y0 = XY$Y0 ## Reference in calibration period
X1 = XY$X1 ## Biased in projection period

## Bias correction
## Step 1 : construction of the class ECBC
ecbc = SBCK::ECBC$new()

## Step 2 : Fit the bias correction model
ecbc$fit( Y0 , X0 , X1 )

## Step 3 : perform the bias correction
Z = ecbc$predict(X1,X0)
Compute the energy distance between two datasets or SparseHist objects X and Y
energy(X, Y, p = 2, metric = "euclidean")
X |
[matrix or SparseHist] If matrix, dim = ( nrow = n_samples, ncol = n_features) |
Y |
[matrix or SparseHist] If matrix, dim = ( nrow = n_samples, ncol = n_features) |
p |
[float] power of energy distance, default is 2. |
metric |
[str or function] metric for pairwise distance, default is "euclidean", see SBCK::pairwise_distances |
[float] value of distance
X = base::cbind( stats::rnorm(2000) , stats::rnorm(2000) )
Y = base::cbind( stats::rnorm(2000,mean=10) , stats::rnorm(2000) )
bw = base::c(0.1,0.1)
muX = SBCK::SparseHist( X , bw )
muY = SBCK::SparseHist( Y , bw )

## The four are equal
w2 = SBCK::energy(X,Y)
w2 = SBCK::energy(muX,Y)
w2 = SBCK::energy(X,muY)
w2 = SBCK::energy(muX,muY)
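A short additional sketch varying the power and the metric (the metric strings are those accepted by SBCK::pairwise_distances), reusing X and Y from the example above:
e1 = SBCK::energy( X , Y , p = 1 )                 ## power 1
e2 = SBCK::energy( X , Y , metric = "chebyshev" )  ## Chebyshev pairwise distances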
Compute the Euclidean distance between two datasets or SparseHist objects X and Y
euclidean(X, Y)
X |
[matrix or SparseHist] If matrix, dim = ( nrow = n_samples, ncol = n_features) |
Y |
[matrix or SparseHist] If matrix, dim = ( nrow = n_samples, ncol = n_features) |
[float] value of distance
X = base::cbind( stats::rnorm(2000) , stats::rnorm(2000) )
Y = base::cbind( stats::rnorm(2000,mean=2) , stats::rnorm(2000) )
bw = base::c(0.1,0.1)
muX = SBCK::SparseHist( X , bw )
muY = SBCK::SparseHist( Y , bw )

## The four are equal
d = SBCK::euclidean( X , Y )
d = SBCK::euclidean(muX , Y )
d = SBCK::euclidean( X , muY )
d = SBCK::euclidean(muX , muY )
Always returns X1 and/or X0 as the correction.
Only for comparison.
new()
Create a new IdBC object.
IdBC$new()
A new 'IdBC' object.
fit()
Fit the bias correction method
IdBC$fit(Y0, X0, X1 = NULL)
Y0
[matrix: n_samples * n_features] Observations in calibration
X0
[matrix: n_samples * n_features] Model in calibration
X1
[matrix: n_samples * n_features] Model in projection, can be NULL for stationary BC method
NULL
predict()
Predict the correction. Use named keywords to choose the stationary or non-stationary correction.
IdBC$predict(X1 = NULL, X0 = NULL)
X1
[matrix: n_samples * n_features or NULL] Model in projection
X0
[matrix: n_samples * n_features or NULL] Model in calibration
[matrix or list] Return X1 and / or X0
clone()
The objects of this class are cloneable with this method.
IdBC$clone(deep = FALSE)
deep
Whether to make a deep clone.
## Three bivariate random variables (rnorm and rexp are inverted between ref and bias)
XY = SBCK::dataset_gaussian_exp_2d(2000)
X0 = XY$X0 ## Biased in calibration period
Y0 = XY$Y0 ## Reference in calibration period
X1 = XY$X1 ## Biased in projection period

## Bias correction
## Step 1 : construction of the class IdBC
idbc = SBCK::IdBC$new()

## Step 2 : Fit the bias correction model
idbc$fit( Y0 , X0 , X1 )

## Step 3 : perform the bias correction
Z = idbc$predict(X1,X0)
## Z$Z0 # == X0
## Z$Z1 # == X1
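As noted in predict(), named keywords select which correction is returned; a short sketch reusing the objects above:
Z1 = idbc$predict( X1 = X1 )  ## only the correction in projection period (== X1 for IdBC)
Z0 = idbc$predict( X0 = X0 )  ## only the correction in calibration period (== X0 for IdBC)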
Compute the Manhattan distance between two datasets or SparseHist objects X and Y
manhattan(X, Y)
X |
[matrix or SparseHist] If matrix, dim = ( nrow = n_samples, ncol = n_features) |
Y |
[matrix or SparseHist] If matrix, dim = ( nrow = n_samples, ncol = n_features) |
[float] value of distance
X = base::cbind( stats::rnorm(2000) , stats::rnorm(2000) )
Y = base::cbind( stats::rnorm(2000,mean=2) , stats::rnorm(2000) )
bw = base::c(0.1,0.1)
muX = SBCK::SparseHist( X , bw )
muY = SBCK::SparseHist( Y , bw )

## The four are equal
d = SBCK::manhattan( X , Y )
d = SBCK::manhattan(muX , Y )
d = SBCK::manhattan( X , muY )
d = SBCK::manhattan(muX , muY )
Perform a multivariate bias correction.
BC is performed with an alternation of rotations and univariate BC.
n_features
[integer] Number of features
bc
[BC class] Univariate BC method
metric
[function] distance between two datasets
iter_slope
[Stopping class criteria] class used to test when stop
bc_params
[list] Parameters of bc
ortho_mat
[array] Array of orthogonal matrix
tips
[array] Array which contains the product of each orthogonal matrix with the inverse of the next
lbc
[list] List of the BC methods used.
new()
Create a new MBCn object.
MBCn$new( bc = QDM, metric = wasserstein, stopping_criteria = SlopeStoppingCriteria, stopping_criteria_params = list(minit = 20, maxit = 100, tol = 0.001), ... )
bc
[BC class] Univariate bias correction method
metric
[function] distance between two datasets
stopping_criteria
[Stopping class criteria] class used to test when to stop the iterations
stopping_criteria_params
[list] parameters passed to stopping_criteria class
...
[] Other arguments passed to bc.
A new 'MBCn' object.
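For illustration, a sketch passing custom stopping-criteria parameters through the constructor signature shown above (values arbitrary):
mbcn_custom = SBCK::MBCn$new( stopping_criteria_params = list( minit = 10 , maxit = 50 , tol = 1e-2 ) )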
fit()
Fit the bias correction method
MBCn$fit(Y0, X0, X1)
Y0
[matrix: n_samples * n_features] Observations in calibration
X0
[matrix: n_samples * n_features] Model in calibration
X1
[matrix: n_samples * n_features] Model in projection
NULL
predict()
Predict the correction
MBCn$predict(X1, X0 = NULL)
X1
[matrix: n_samples * n_features] Model in projection
X0
[matrix: n_samples * n_features or NULL] Model in calibration
[matrix or list] Return the matrix of correction of X1 if X0 is NULL, else return a list containing Z1 and Z0, the corrections of X1 and X0
clone()
The objects of this class are cloneable with this method.
MBCn$clone(deep = FALSE)
deep
Whether to make a deep clone.
Cannon, A. J., Sobie, S. R., and Murdock, T. Q.: Bias correction of simulated precipitation by quantile mapping: how well do methods preserve relative changes in quantiles and extremes?, J. Climate, 28, 6938–6959, https://doi.org/10.1175/JCLI-D-14-00754.1, 2015.
## Three bivariate random variables (rnorm and rexp are inverted between ref and bias)
XY = SBCK::dataset_gaussian_exp_2d(200)
X0 = XY$X0 ## Biased in calibration period
Y0 = XY$Y0 ## Reference in calibration period
X1 = XY$X1 ## Biased in projection period

## Bias correction
## Step 1 : construction of the class MBCn
mbcn = SBCK::MBCn$new()

## Step 2 : Fit the bias correction model
mbcn$fit( Y0 , X0 , X1 )

## Step 3 : perform the bias correction, Z is a list containing corrections
Z = mbcn$predict(X1,X0)
Z$Z0 ## Correction in calibration period
Z$Z1 ## Correction in projection period
Compute the Minkowski distance between two datasets or SparseHist objects X and Y. If p = 2, it is the Euclidean distance; for p = 1, it is the Manhattan distance; if p = Inf, the Chebyshev distance is used.
minkowski(X, Y, p = 2)
X |
[matrix or SparseHist] If matrix, dim = ( nrow = n_samples, ncol = n_features) |
Y |
[matrix or SparseHist] If matrix, dim = ( nrow = n_samples, ncol = n_features) |
p |
[float] power of distance |
[float] value of distance
X = base::cbind( stats::rnorm(2000) , stats::rnorm(2000) )
Y = base::cbind( stats::rnorm(2000,mean=2) , stats::rnorm(2000) )
bw = base::c(0.1,0.1)
muX = SBCK::SparseHist( X , bw )
muY = SBCK::SparseHist( Y , bw )

## The four are equal
d = SBCK::minkowski( X , Y , p = 3 )
d = SBCK::minkowski(muX , Y , p = 3 )
d = SBCK::minkowski( X , muY , p = 3 )
d = SBCK::minkowski(muX , muY , p = 3 )
Perform a multivariate bias correction with a Gaussian assumption.
Only Pearson correlations are corrected.
n_features
[integer] Number of features
new()
Create a new MRec object.
MRec$new(distY = NULL, distX = NULL)
distY
[A list of ROOPSD distributions or NULL] Describes the law of each margin. A list permits using a different law for each margin. Default is empirical.
distX
[A list of ROOPSD distributions or NULL] Describes the law of each margin. A list permits using a different law for each margin. Default is empirical.
A new 'MRec' object.
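As a hedged sketch, parametric margins can be requested through distY / distX; the ROOPSD class names below are assumed from that package's API and may need checking:
## Assumed ROOPSD classes (illustrative): Normal for the first margin, Exponential for the second
mrec_param = SBCK::MRec$new( distY = list( ROOPSD::Normal , ROOPSD::Exponential ) ,
                             distX = list( ROOPSD::Normal , ROOPSD::Exponential ) )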
fit()
Fit the bias correction method
MRec$fit(Y0, X0, X1)
Y0
[matrix: n_samples * n_features] Observations in calibration
X0
[matrix: n_samples * n_features] Model in calibration
X1
[matrix: n_samples * n_features] Model in projection
NULL
predict()
Predict the correction
MRec$predict(X1, X0 = NULL)
X1
[matrix: n_samples * n_features] Model in projection
X0
[matrix: n_samples * n_features or NULL] Model in calibration
[matrix or list] Return the matrix of correction of X1 if X0 is NULL, else return a list containing Z1 and Z0, the corrections of X1 and X0
clone()
The objects of this class are cloneable with this method.
MRec$clone(deep = FALSE)
deep
Whether to make a deep clone.
Bárdossy, A. and Pegram, G.: Multiscale spatial recorrelation of RCM precipitation to produce unbiased climate change scenarios over large areas and small, Water Resources Research, 48, W09502, https://doi.org/10.1029/2011WR011524, 2012.
## Three bivariate random variables (rnorm and rexp are inverted between ref and bias)
XY = SBCK::dataset_gaussian_exp_2d(2000)
X0 = XY$X0 ## Biased in calibration period
Y0 = XY$Y0 ## Reference in calibration period
X1 = XY$X1 ## Biased in projection period

## Bias correction
## Step 1 : construction of the class MRec
mrec = SBCK::MRec$new()

## Step 2 : Fit the bias correction model
mrec$fit( Y0 , X0 , X1 )

## Step 3 : perform the bias correction, Z is a list containing corrections
Z = mrec$predict(X1,X0) ## X0 is optional, in this case Z0 is NULL
Z$Z0 ## Correction in calibration period
Z$Z1 ## Correction in projection period
Multivariate Schaake shuffle using the quantiles.
Used to reproduce the dependence structure of one dataset onto another dataset
col_cond
[vector] Conditioning columns
col_ucond
[vector] Non-conditioning columns
lag_search
[integer] Number of lags to transform the dependence structure
lag_keep
[integer] Number of lags to keep
n_features
[integer] Number of features (dimensions), internal
qY
[matrix] Quantile structure fitted, internal
bsYc
[matrix] Block search fitted, internal
new()
Create a new MVQuantilesShuffle object.
MVQuantilesShuffle$new(col_cond = base::c(1), lag_search = 1, lag_keep = 1)
col_cond
Conditioning column(s)
lag_search
Number of lags to transform the dependence structure
lag_keep
Number of lags to keep
A new 'MVQuantilesShuffle' object.
fit()
Fit method
MVQuantilesShuffle$fit(Y)
Y
[vector] Dataset to infer the dependence structure
NULL
transform()
Transform method
MVQuantilesShuffle$transform(X)
X
[vector] Dataset to match the dependence structure with the fitted Y
Z, the matrix X with the quantile structure of Y
clone()
The objects of this class are cloneable with this method.
MVQuantilesShuffle$clone(deep = FALSE)
deep
Whether to make a deep clone.
Vrac, M. and Thao, S.: R2D2 v2.0: accounting for temporal dependences in multivariate bias correction via analogue rank resampling, Geosci. Model Dev., 13, 5367–5387, https://doi.org/10.5194/gmd-13-5367-2020, 2020.
## Generate sample
X = matrix( stats::rnorm( n = 100 ) , ncol = 4 )
Y = matrix( stats::rnorm( n = 100 ) , ncol = 4 )

## Fit dependence structure
## Assume that the link between columns 2 and 4 is correct, and change also
## the auto-correlation structure until lag 3 = lag_keep - 1
mvq = MVQuantilesShuffle$new( base::c(2,4) , lag_search = 6 , lag_keep = 4 )
mvq$fit(Y)
Z = mvq$transform(X)
Multivariate Schaake shuffle using the ranks.
Used to reproduce the dependence structure of one dataset onto another dataset
col_cond
[vector] Conditioning columns
col_ucond
[vector] Non-conditioning columns
lag_search
[integer] Number of lags to transform the dependence structure
lag_keep
[integer] Number of lags to keep
n_features
[integer] Number of features (dimensions), internal
qY
[matrix] Ranks structure fitted, internal
bsYc
[matrix] Block search fitted, internal
new()
Create a new MVRanksShuffle object.
MVRanksShuffle$new(col_cond = base::c(1), lag_search = 1, lag_keep = 1)
col_cond
Conditioning column(s)
lag_search
Number of lags to transform the dependence structure
lag_keep
Number of lags to keep
A new 'MVRanksShuffle' object.
fit()
Fit method
MVRanksShuffle$fit(Y)
Y
[vector] Dataset to infer the dependence structure
NULL
transform()
Transform method
MVRanksShuffle$transform(X)
X
[vector] Dataset to match the dependence structure with the fitted Y
Z, the matrix X with the rank structure of Y
clone()
The objects of this class are cloneable with this method.
MVRanksShuffle$clone(deep = FALSE)
deep
Whether to make a deep clone.
Vrac, M. and Thao, S.: R2D2 v2.0: accounting for temporal dependences in multivariate bias correction via analogue rank resampling, Geosci. Model Dev., 13, 5367–5387, https://doi.org/10.5194/gmd-13-5367-2020, 2020.
## Generate sample
X = matrix( stats::rnorm( n = 100 ) , ncol = 4 )
Y = matrix( stats::rnorm( n = 100 ) , ncol = 4 )

## Fit dependence structure
## Assume that the link between columns 2 and 4 is correct, and change also
## the auto-correlation structure until lag 3 = lag_keep - 1
mvr = MVRanksShuffle$new( base::c(2,4) , lag_search = 6 , lag_keep = 4 )
mvr$fit(Y)
Z = mvr$transform(X)
Perform a multivariate bias correction of X0 with respect to Y0.
The joint distribution, i.e. all dependencies, is corrected.
bin_width
[vector or NULL] A vector of lengths of the cells discretizing R^(number of variables). If NULL, it is estimated during the fit
bin_origin
[vector or NULL] Coordinate of lower corner of one cell. If NULL, c(0,...,0) is used
muX
[SparseHist] Histogram of the data from the model
muY
[SparseHist] Histogram of the data from the observations
ot
[OTSolver] Optimal Transport solver, default is the network simplex
plan
[matrix] The plan computed by the ot solver.
n_features
[integer] Number of features
new()
Create a new OTC object.
OTC$new(bin_width = NULL, bin_origin = NULL, ot = SBCK::OTNetworkSimplex$new())
bin_width
[vector or NULL] A vector of lengths of the cells discretizing R^(number of variables). If NULL, it is estimated during the fit
bin_origin
[vector or NULL] Coordinate of lower corner of one cell. If NULL, c(0,...,0) is used
ot
[OTSolver] Optimal Transport solver, default is the network simplex
A new 'OTC' object.
fit()
Fit the bias correction method
OTC$fit(Y0, X0)
Y0
[matrix: n_samples * n_features] Observations in calibration
X0
[matrix: n_samples * n_features] Model in calibration
NULL
predict()
Predict the correction
Note: Only the centers of the bins associated with the corrected points are returned, but all corrections of the form:
>> bw = otc$bin_width / 2
>> n = base::prod(base::dim(X0))
>> Z0 = otc$predict(X0)
>> Z0 = Z0 + t(matrix(stats::runif( n = n , min = - bw , max = bw ) , ncol = dim(X0)[1] ))
are equivalent for OTC.
OTC$predict(X0)
X0
[matrix: n_samples * n_features or NULL] Model in calibration
[matrix] Return the corrections of X0
clone()
The objects of this class are cloneable with this method.
OTC$clone(deep = FALSE)
deep
Whether to make a deep clone.
Robin, Y., Vrac, M., Naveau, P., Yiou, P.: Multivariate stochastic bias corrections with optimal transport, Hydrol. Earth Syst. Sci., 23, 773–786, 2019, https://doi.org/10.5194/hess-23-773-2019
## Two bivariate random variables (rnorm and rexp are inverted between ref and bias)
XY = SBCK::dataset_gaussian_exp_2d(2000)
X0 = XY$X0 ## Biased in calibration period
Y0 = XY$Y0 ## Reference in calibration period

## Bin length
bin_width = SBCK::bin_width_estimator( list(X0,Y0) )

## Bias correction
## Step 1 : construction of the class OTC
otc = SBCK::OTC$new( bin_width )

## Step 2 : Fit the bias correction model
otc$fit( Y0 , X0 )

## Step 3 : perform the bias correction, Z0 is the correction of
## X0 with respect to the estimation of Y0
Z0 = otc$predict(X0)
Histogram
Just a generic class which contains two arguments, p (probability) and c (center of bins)
p
[vector] Vector of probability
c
[matrix] Matrix of bin centers, with nrow = n_samples and ncol = n_features
bin_width
[vector or NULL] A vector of lengths of the cells discretizing R^(number of variables). If NULL, it is estimated during the fit
bin_origin
[vector or NULL] Coordinate of lower corner of one cell. If NULL, c(0,...,0) is used
new()
Create a new OTHist object.
OTHist$new(p, c)
p
[vector] Vector of probability
c
[matrix] Matrix of bin centers, with nrow = n_samples and ncol = n_features
A new 'OTHist' object.
clone()
The objects of this class are cloneable with this method.
OTHist$clone(deep = FALSE)
deep
Whether to make a deep clone.
## Build a random discrete probability distribution (abs ensures non-negative weights)
p = base::abs(stats::rnorm(100))
p = p / base::sum(p)
c = base::seq( -1 , 1 , length = 100 )
mu = OTHist$new( p , c )
Solve the optimal transport problem with the package 'transport'
Uses the network simplex algorithm.
p
[double] Power of the plan
plan
[matrix] transport plan
success
[bool] If the fit is a success or not
C
[matrix] Cost matrix
new()
Create a new OTNetworkSimplex object.
OTNetworkSimplex$new(p = 2)
p
[double] Power of the plan
A new 'OTNetworkSimplex' object.
fit()
Fit the OT plan
OTNetworkSimplex$fit(muX0, muX1, C = NULL)
muX0
[SparseHist or OTHist] Source histogram to move
muX1
[SparseHist or OTHist] Target histogram
C
[matrix or NULL] Cost matrix (without power p) between muX0 and muX1, if NULL pairwise_distances is called with Euclidean distance.
NULL
clone()
The objects of this class are cloneable with this method.
OTNetworkSimplex$clone(deep = FALSE)
deep
Whether to make a deep clone.
Bazaraa, M. S., Jarvis, J. J., and Sherali, H. D.: Linear Programming and Network Flows, 4th edn., John Wiley & Sons, 2009.
## Define two datasets
X = stats::rnorm(2000)
Y = stats::rnorm(2000 , mean = 5 )
bw = base::c(0.1)
muX = SBCK::SparseHist( X , bw )
muY = SBCK::SparseHist( Y , bw )

## Find solution
ot = OTNetworkSimplex$new()
ot$fit( muX , muY )

print( sum(ot$plan) )              ## Must be equal to 1
print( ot$success )                ## TRUE if the solve succeeded
print( sqrt(sum(ot$plan * ot$C)) ) ## Cost of the plan
Compute the matrix of pairwise distances between a matrix X and a matrix Y
pairwise_distances(X,Y,metric)
X |
[matrix] A first matrix (samples in rows, features in columns). |
Y |
[matrix] A second matrix (samples in rows, features in columns). If Y = NULL, pairwise distances are computed between X and itself |
metric |
[string or callable] The metric used. If metric is a string, then metric is compiled (so faster). Available strings are: "euclidean", "sqeuclidean" (square of the Euclidean distance), "logeuclidean" (log of the Euclidean distance) and "chebyshev" (max). A callable must be a function taking two vectors and returning a double. |
distXY [matrix] Pairwise distances. distXY[i,j] is the distance between X[i,] and Y[j,]
X = matrix( stats::rnorm(200) , ncol = 100 , nrow = 2 )
Y = matrix( stats::rexp(300) , ncol = 150 , nrow = 2 )
distXY = SBCK::pairwise_distances( X , Y )
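A minimal sketch with a user-defined metric, assuming (as stated above) that any R function taking two vectors and returning a double is accepted; the matrix and function names are illustrative:
A = matrix( stats::rnorm(200) , nrow = 100 , ncol = 2 )
B = matrix( stats::rexp(300) , nrow = 150 , ncol = 2 )
my_metric = function(x,y) { return( base::sum( base::abs( x - y ) ) ) } ## illustrative Manhattan-type metric
distAB = SBCK::pairwise_distances( A , B , metric = my_metric )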
Apply the diff w.r.t. a ref transformation.
Transform a dataset such that all 'lower' dimensions are replaced by the 'ref' dimension minus the 'lower'; and all 'upper' dimensions are replaced by 'upper' minus 'ref'.
SBCK::PrePostProcessing
-> PPPDiffRef
ref
[integer] The reference column
lower
[vector integer] Dimensions lower than ref
upper
[vector integer] Dimensions upper than ref
new()
Create a new PPPDiffRef object.
PPPDiffRef$new(ref, lower = NULL, upper = NULL, ...)
ref
The reference column
lower
Dimensions lower than ref
upper
Dimensions upper than ref
...
Other arguments are passed to PrePostProcessing
A new 'PPPDiffRef' object.
transform()
Apply the DiffRef transform.
PPPDiffRef$transform(X)
X
Data to transform
Xt a transformed matrix
itransform()
Apply the DiffRef inverse transform.
PPPDiffRef$itransform(Xt)
Xt
Data to transform
X a transformed matrix
clone()
The objects of this class are cloneable with this method.
PPPDiffRef$clone(deep = FALSE)
deep
Whether to make a deep clone.
## Parameters
size  = 2000
nfeat = 5
sign  = base::sample( base::c(-1,1) , nfeat - 1 , replace = TRUE )

## Build data
X = matrix( stats::rnorm( n = size ) , ncol = 1 )
for( s in sign )
{
    X = base::cbind( X , X[,1] + s * base::abs(matrix( stats::rnorm(n = size) , ncol = 1 )) )
}

## PPP
lower = which( sign == 1 ) + 1
upper = which( sign == -1 ) + 1
ppp = SBCK::PPPDiffRef$new( ref = 1 , lower = lower , upper = upper )

Xt  = ppp$transform(X)
Xti = ppp$itransform(Xt)
print( base::max( base::abs( X - Xti ) ) )
Base class to build link function pre-post processing classes. See also the PrePostProcessing documentation
This class is used to define pre/post processing classes with a link function and its inverse. See the example.
SBCK::PrePostProcessing
-> PPPFunctionLink
new()
Create a new PPPFunctionLink object.
PPPFunctionLink$new(transform_, itransform_, cols = NULL, ...)
transform_
The transform function
itransform_
The inverse transform function
cols
Columns to apply the link function
...
Other arguments are passed to PrePostProcessing
A new 'PPPFunctionLink' object.
transform()
Apply the transform.
PPPFunctionLink$transform(X)
X
Data to transform
Xt a transformed matrix
itransform()
Apply the inverse transform.
PPPFunctionLink$itransform(Xt)
Xt
Data to transform
X a transformed matrix
clone()
The objects of this class are cloneable with this method.
PPPFunctionLink$clone(deep = FALSE)
deep
Whether to make a deep clone.
## Start with data
XY = SBCK::dataset_like_tas_pr(2000)
X0 = XY$X0
X1 = XY$X1
Y0 = XY$Y0

## Define the link function
transform  = function(x) { return(x^3) }
itransform = function(x) { return(x^(1/3)) }

## And the PPP method
ppp = PPPFunctionLink$new( bc_method = CDFt , transform = transform , itransform = itransform )

## And now the correction
## Bias correction
ppp$fit(Y0,X0,X1)
Z = ppp$predict(X1,X0)
Log linear link function. See also the PrePostProcessing documentation.
Log linear link function. The transform is log(x) if 0 < x < 1, else x - 1; the inverse transform is exp(x) if x < 0, else x + 1.
SBCK::PrePostProcessing
-> SBCK::PPPFunctionLink
-> PPPLogLinLink
new()
Create a new PPPLogLinLink object.
PPPLogLinLink$new(s = 1e-05, cols = NULL, ...)
s
The value where the function jumps from exp to linear
cols
Columns to apply the link function
...
Other arguments are passed to PrePostProcessing
A new 'PPPLogLinLink' object.
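A plain-R sketch of the link described above (jump point at 1, as in the description); this illustrates the mapping only and is not the class internals:
loglin  = function(x) { base::ifelse( x < 1 , base::log(x) , x - 1 ) }  ## transform
iloglin = function(x) { base::ifelse( x < 0 , base::exp(x) , x + 1 ) }  ## inverse transform
x = base::c( 0.1 , 0.5 , 2 , 10 )
print( iloglin( loglin(x) ) ) ## round trip recovers x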
clone()
The objects of this class are cloneable with this method.
PPPLogLinLink$clone(deep = FALSE)
deep
Whether to make a deep clone.
## Start with data
XY = SBCK::dataset_like_tas_pr(2000)
X0 = XY$X0
X1 = XY$X1
Y0 = XY$Y0

## Define the PPP method
ppp = PPPLogLinLink$new( bc_method = CDFt , cols = 2 ,
                         pipe = list(PPPSSR) ,
                         pipe_kwargs = list(list(cols=2)) )

## And now the correction
## Bias correction
ppp$fit(Y0,X0,X1)
Z = ppp$predict(X1,X0)
Set an order between cols, and preserve it by swapping values after the correction
SBCK::PrePostProcessing
-> PPPPreserveOrder
new()
Create a new PPPPreserveOrder object.
PPPPreserveOrder$new(cols = NULL, ...)
cols
The columns to keep the order
...
Other arguments are passed to PrePostProcessing
A new 'PPPPreserveOrder' object.
transform()
nothing occurs here
PPPPreserveOrder$transform(X)
X
Data to transform
Xt a transformed matrix
itransform()
sort along cols
PPPPreserveOrder$itransform(Xt)
Xt
Data to transform
X a transformed matrix
clone()
The objects of this class are cloneable with this method.
PPPPreserveOrder$clone(deep = FALSE)
deep
Whether to make a deep clone.
## Build data
X = matrix( stats::rnorm( n = 20 ) , ncol = 2 )

## PPP
ppp = SBCK::PPPPreserveOrder$new( cols = base::c(1,2) )
Xt  = ppp$transform(X)   ## Nothing
Xti = ppp$itransform(Xt) ## Order
Square link function. See also the PrePostProcessing documentation.
Square link function. The transform is x^2, and sign(x)*sqrt(abs(x)) is its inverse.
SBCK::PrePostProcessing
-> SBCK::PPPFunctionLink
-> PPPSquareLink
new()
Create a new PPPSquareLink object.
PPPSquareLink$new(cols = NULL, ...)
cols
Columns to apply the link function
...
Other arguments are passed to PrePostProcessing
A new 'PPPSquareLink' object.
clone()
The objects of this class are cloneable with this method.
PPPSquareLink$clone(deep = FALSE)
deep
Whether to make a deep clone.
## Start with data
XY = SBCK::dataset_like_tas_pr(2000)
X0 = XY$X0
X1 = XY$X1
Y0 = XY$Y0

## Define the PPP method
ppp = PPPSquareLink$new( bc_method = CDFt , cols = 2 )

## And now the correction
## Bias correction
ppp$fit(Y0,X0,X1)
Z = ppp$predict(X1,X0)
Apply the SSR transformation.
Apply the SSR transformation. The SSR transformation replaces the zeros by random values between 0 and the minimal non-zero value (the threshold). The inverse transform replaces all values lower than the threshold by 0. The threshold used for the inverse transform is given by the keyword 'isaved', which takes the value 'Y0' (reference in calibration period), 'X0' (biased in calibration period), or 'X1' (biased in projection period).
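A minimal univariate sketch of this idea (not the package code, which works column by column on matrices):

x = base::c( 0 , 0 , 0.2 , 1.5 , 0 , 3 )
threshold = base::min( x[x > 0] )
## transform: replace the zeros by small random values below the threshold
xt = base::ifelse( x > 0 , x , stats::runif( base::length(x) , 0 , threshold ) )
## inverse transform: everything below the threshold goes back to 0
xi = base::ifelse( xt < threshold , 0 , xt )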
SBCK::PrePostProcessing
-> PPPSSR
Xn
[vector] Threshold
new()
Create a new PPPSSR object.
PPPSSR$new(cols = NULL, isaved = "Y0", ...)
cols
Columns to apply the SSR
isaved
Choose the threshold used for the inverse transform. Can be "Y0", "X0" or "X1".
...
Other arguments are passed to PrePostProcessing
A new 'PPPSSR' object.
transform()
Apply the SSR transform, i.e. all zeros are replaced by random values between 0 (excluded) and the minimal non-zero value.
PPPSSR$transform(X)
X
Data to transform
Xt a transformed matrix
itransform()
Apply the inverse SSR transform, i.e. all values lower than the threshold found in the transform function are replaced by 0.
PPPSSR$itransform(Xt)
Xt
Data to transform
X a transformed matrix
clone()
The objects of this class are cloneable with this method.
PPPSSR$clone(deep = FALSE)
deep
Whether to make a deep clone.
## Start with data
XY = SBCK::dataset_like_tas_pr(2000)
X0 = XY$X0
X1 = XY$X1
Y0 = XY$Y0

## Define the PPP method
ppp = PPPSSR$new( bc_method = CDFt , cols = 2 )

## And now the correction
## Bias correction
ppp$fit(Y0,X0,X1)
Z = ppp$predict(X1,X0)
Base class to pre/post process data before/after a bias correction
This base class can be considered as the identity pre/post processing, and
is intended to be inherited by other pre/post processing classes. The key ideas are:
- A PrePostProcessing based class contains a bias correction method, initialized
by the 'bc_method' argument, always available for all inherited classes
- The 'pipe' keyword is a list of pre/post processing classes, applied one after
the other.
Try with an example, start with a dataset similar to tas/pr:
>>> XY = SBCK::dataset_like_tas_pr(2000)
>>> X0 = XY$X0
>>> X1 = XY$X1
>>> Y0 = XY$Y0
The first column is Gaussian, but the second is an exponential law with a Dirac
mass at 0, representing the zeros of precipitation. For a quantile mapping
correction in the calibration period, we just apply
>>> qm = SBCK::QM$new()
>>> qm$fit(Y0,X0)
>>> Z0 = qm$predict(X0)
Now, if we want to pre/post process with the SSR method (zeros are replaced by
random values between 0 (excluded) and the minimal non-zero value), we write:
>>> ppp = SBCK::PPPSSR$new( bc_method = QM , cols = 2 )
>>> ppp$fit(Y0,X0)
>>> Z0 = ppp$predict(X0)
The SSR approach is applied only on the second column (the precipitation), and
the syntax is the same as for a simple bias correction method.
Imagine now that we want to apply the SSR, and to ensure the positivity of CDFt
for precipitation, we also want to use the LogLinLink pre-post processing
method. This can be done with the following syntax:
>>> ppp = PPPLogLinLink$new( bc_method = CDFt , cols = 2 ,
>>> pipe = list(PPPSSR) ,
>>> pipe_kwargs = list( list(cols = 2) ) )
>>> ppp$fit(Y0,X0,X1)
>>> Z = ppp$predict(X1,X0)
With this syntax, the pre-processing operation is
PPPLogLinLink$transform(PPPSSR$transform(data)) and the post-processing operation is
PPPSSR$itransform(PPPLogLinLink$itransform(bc_data)). So the formula can be read
from right to left (as a mathematical composition). Note that it is equivalent
to define:
>>> ppp = PrePostProcessing$new( bc_method = CDFt,
>>> pipe = list(PPPLogLinLink,PPPSSR),
>>> pipe_kwargs = list( list(cols=2) , list(cols=2) ) )
new()
Create a new PrePostProcessing object.
PrePostProcessing$new( bc_method = NULL, bc_method_kwargs = list(), pipe = list(), pipe_kwargs = list() )
bc_method
The bias correction method
bc_method_kwargs
Named list of keyword arguments passed to bc_method
pipe
List of other PrePostProcessing classes to pipe
pipe_kwargs
List of lists of keyword arguments passed to each element of pipe
A new 'PrePostProcessing' object.
transform()
Transformation applied to data before the bias correction. Just the identity for this class
PrePostProcessing$transform(X)
X
[matrix: n_samples * n_features]
Xt [matrix: n_samples * n_features]
itransform()
Transformation applied to data after the bias correction. Just the identity for this class
PrePostProcessing$itransform(Xt)
Xt
[matrix: n_samples * n_features]
X [matrix: n_samples * n_features]
fit()
Apply the pre processing and fit the bias correction method. If X1 is NULL, the method is considered as stationary
PrePostProcessing$fit(Y0, X0, X1 = NULL)
Y0
[matrix: n_samples * n_features] Observations in calibration
X0
[matrix: n_samples * n_features] Model in calibration
X1
[matrix: n_samples * n_features] Model in projection
NULL
predict()
Predict the correction, apply pre-processing before, and post-processing after
PrePostProcessing$predict(X1 = NULL, X0 = NULL)
X1
[matrix: n_samples * n_features or NULL] Model in projection
X0
[matrix: n_samples * n_features or NULL] Model in calibration
[matrix or list] Return the matrix of correction of X1 if X0 is NULL (and vice-versa), else return a list containing Z1 and Z0, the corrections of X1 and X0
clone()
The objects of this class are cloneable with this method.
PrePostProcessing$clone(deep = FALSE)
deep
Whether to make a deep clone.
## Start with data
XY = SBCK::dataset_like_tas_pr(2000)
X0 = XY$X0
X1 = XY$X1
Y0 = XY$Y0

## Define pre/post processing method
ppp = PrePostProcessing$new( bc_method = CDFt,
                             pipe = list(PPPLogLinLink,PPPSSR),
                             pipe_kwargs = list( list(cols=2) , list(cols=2) ) )

## Bias correction
ppp$fit(Y0,X0,X1)
Z = ppp$predict(X1,X0)
Perform a bias correction.
Mix of delta and quantile methods
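As a hedged, univariate sketch of the additive variant (following Cannon et al., 2015; not the package implementation, which handles matrices and the multiplicative case):

Y0 = stats::rnorm(1000)                ## reference, calibration period
X0 = stats::rnorm(1000, mean = 3)      ## model, calibration period
X1 = stats::rnorm(1000, mean = 4)      ## model, projection period
tau   = stats::ecdf(X1)(X1)            ## non-exceedance probabilities of X1
delta = X1 - stats::quantile( X0 , probs = tau , names = FALSE )    ## delta term
Z1    = stats::quantile( Y0 , probs = tau , names = FALSE ) + delta ## quantile + delta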
new()
Create a new QDM object.
QDM$new(delta = "additive", ...)
delta
[character or list] If character : "additive" or "multiplicative". If a list is given, delta[[1]] is the delta transform operator, and delta[[2]] its inverse.
...
[] Named arguments passed to quantile mapping
A new 'QDM' object.
fit()
Fit the bias correction method
QDM$fit(Y0, X0, X1)
Y0
[matrix: n_samples * n_features] Observations in calibration
X0
[matrix: n_samples * n_features] Model in calibration
X1
[matrix: n_samples * n_features] Model in projection
NULL
predict()
Predict the correction
QDM$predict(X1, X0 = NULL)
X1
[matrix: n_samples * n_features] Model in projection
X0
[matrix: n_samples * n_features or NULL] Model in calibration
[matrix or list] Return the matrix of correction of X1 if X0 is NULL, else return a list containing Z1 and Z0, the corrections of X1 and X0
clone()
The objects of this class are cloneable with this method.
QDM$clone(deep = FALSE)
deep
Whether to make a deep clone.
Cannon, A. J., Sobie, S. R., and Murdock, T. Q.: Bias correction of simulated precipitation by quantile mapping: how well do methods preserve relative changes in quantiles and extremes?, J. Climate, 28, 6938–6959, https://doi.org/10.1175/JCLI-D-14-00754.1, 2015.
## Three bivariate random variables (rnorm and rexp are inverted between ref
## and bias)
XY = SBCK::dataset_gaussian_exp_2d(2000)
X0 = XY$X0 ## Biased in calibration period
Y0 = XY$Y0 ## Reference in calibration period
X1 = XY$X1 ## Biased in projection period

## Bias correction
## Step 1 : construction of the class QDM
qdm = SBCK::QDM$new()

## Step 2 : Fit the bias correction model
qdm$fit( Y0 , X0 , X1 )

## Step 3 : perform the bias correction, Z is a list containing
## corrections
Z = qdm$predict(X1,X0)
Z$Z0 ## Correction in calibration period
Z$Z1 ## Correction in projection period
Perform a univariate bias correction of X0 with respect to Y0
Correction is applied margin by margin.
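For a single margin, the underlying principle is the classical quantile mapping Z0 = F_Y0^{-1}(F_X0(X0)); a minimal empirical sketch (not the package code):

Y0 = stats::rnorm(1000)                     ## observations, calibration period
X0 = stats::rnorm(1000, mean = 3, sd = 2)   ## model, calibration period
Z0 = stats::quantile( Y0 , probs = stats::ecdf(X0)(X0) , names = FALSE )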
distX0
[ROOPSD distribution or a list of them] Describes the law of each margin. A list permits using a different law for each margin. Default is ROOPSD::rv_histogram.
distY0
[ROOPSD distribution or a list of them] Describes the law of each margin. A list permits using a different law for each margin. Default is ROOPSD::rv_histogram.
n_features
[integer] Numbers of features
tol
[double] Floating point tolerance
new()
Create a new QM object.
QM$new(distX0 = ROOPSD::rv_histogram, distY0 = ROOPSD::rv_histogram, ...)
distX0
[ROOPSD distribution or a list of them] Describe the law of model
distY0
[ROOPSD distribution or a list of them] Describe the law of observations
...
[] kwargsX0 or kwargsY0, arguments passed to distX0 and distY0
A new 'QM' object.
fit()
Fit the bias correction method
QM$fit(Y0 = NULL, X0 = NULL)
Y0
[matrix: n_samples * n_features] Observations in calibration
X0
[matrix: n_samples * n_features] Model in calibration
NULL
predict()
Predict the correction
QM$predict(X0)
X0
[matrix: n_samples * n_features or NULL] Model in calibration
[matrix] Return the corrections of X0
clone()
The objects of this class are cloneable with this method.
QM$clone(deep = FALSE)
deep
Whether to make a deep clone.
Panofsky, H. A. and Brier, G. W.: Some applications of statistics to meteorology, Mineral Industries Extension Services, College of Mineral Industries, Pennsylvania State University, 103 pp., 1958.
Wood, A. W., Leung, L. R., Sridhar, V., and Lettenmaier, D. P.: Hydrologic Implications of Dynamical and Statistical Approaches to Downscaling Climate Model Outputs, Clim. Change, 62, 189–216, https://doi.org/10.1023/B:CLIM.0000013685.99609.9e, 2004.
Déqué, M.: Frequency of precipitation and temperature extremes over France in an anthropogenic scenario: Model results and statistical correction according to observed values, Global Planet. Change, 57, 16–26, https://doi.org/10.1016/j.gloplacha.2006.11.030, 2007.
## Three bivariate random variables (rnorm and rexp are inverted between ref
## and bias)
XY = SBCK::dataset_gaussian_exp_2d(2000)
X0 = XY$X0 ## Biased in calibration period
Y0 = XY$Y0 ## Reference in calibration period

## Bias correction
## Step 1 : construction of the class QM
qm = SBCK::QM$new()

## Step 2 : Fit the bias correction model
qm$fit( Y0 , X0 )

## Step 3 : perform the bias correction, Z0 is the correction of
## X0 with respect to the estimation of Y0
Z0 = qm$predict(X0)

## But in fact the laws are known, we can fit parameters:
distY0 = list( ROOPSD::Exponential , ROOPSD::Normal )
distX0 = list( ROOPSD::Normal , ROOPSD::Exponential )
qm_fix = SBCK::QM$new( distY0 = distY0 , distX0 = distX0 )
qm_fix$fit( Y0 , X0 )
Z0 = qm_fix$predict(X0)
Perform a multivariate bias correction of X with respect to Y
Dependence is corrected with multi_schaake_shuffle.
SBCK::QM
-> QMrs
irefs
[vector of int] Indexes for shuffle. Default is base::c(1)
new()
Create a new QMrs object.
QMrs$new(irefs = base::c(1), ...)
irefs
[vector of int] Indexes for shuffle. Default is base::c(1)
...
[] All other arguments are passed to the QM class.
A new 'QMrs' object.
fit()
Fit the bias correction method
QMrs$fit(Y0, X0)
Y0
[matrix: n_samples * n_features] Observations in calibration
X0
[matrix: n_samples * n_features] Model in calibration
NULL
predict()
Predict the correction
QMrs$predict(X0)
X0
[matrix: n_samples * n_features or NULL] Model in calibration
[matrix] Return the corrections of X0
clone()
The objects of this class are cloneable with this method.
QMrs$clone(deep = FALSE)
deep
Whether to make a deep clone.
Vrac, M.: Multivariate bias adjustment of high-dimensional climate simulations: the Rank Resampling for Distributions and Dependences (R2 D2 ) bias correction, Hydrol. Earth Syst. Sci., 22, 3175–3196, https://doi.org/10.5194/hess-22-3175-2018, 2018.
## Three bivariate random variables (rnorm and rexp are inverted between ref
## and bias)
XY = SBCK::dataset_gaussian_exp_2d(2000)
X0 = XY$X0 ## Biased in calibration period
Y0 = XY$Y0 ## Reference in calibration period

## Bias correction
## Step 1 : construction of the class QMrs
qmrs = SBCK::QMrs$new()

## Step 2 : Fit the bias correction model
qmrs$fit( Y0 , X0 )

## Step 3 : perform the bias correction
Z0 = qmrs$predict(X0)
Perform a multivariate (non stationary) bias correction.
Use rank shuffle in calibration and projection periods with CDFt
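Conceptually (a rough sketch, not the package implementation), this amounts to chaining a margin-wise CDFt correction with a conditional rank shuffle toward Y0:

XY = SBCK::dataset_gaussian_exp_2d(2000)
cdft = SBCK::CDFt$new()
cdft$fit( XY$Y0 , XY$X0 , XY$X1 )
Z1m = cdft$predict( XY$X1 )                  ## margins corrected by CDFt
ssr = SBCK::SchaakeShuffleRef$new( ref = 1 )
ssr$fit( XY$Y0 )
Z1_sketch = ssr$predict( Z1m )               ## rank structure taken from Y0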
SBCK::CDFt
-> R2D2
irefs
[vector of int] Indexes for shuffle. Default is base::c(1)
new()
Create a new R2D2 object.
R2D2$new(irefs = base::c(1), ...)
irefs
[vector of int] Indexes for shuffle. Default is base::c(1)
...
[] All other arguments are passed to the CDFt class.
A new 'R2D2' object.
fit()
Fit the bias correction method
R2D2$fit(Y0, X0, X1)
Y0
[matrix: n_samples * n_features] Observations in calibration
X0
[matrix: n_samples * n_features] Model in calibration
X1
[matrix: n_samples * n_features] Model in projection
NULL
predict()
Predict the correction
R2D2$predict(X1, X0 = NULL)
X1
[matrix: n_samples * n_features] Model in projection
X0
[matrix: n_samples * n_features or NULL] Model in calibration
[matrix or list] Return the matrix of correction of X1 if X0 is NULL, else return a list containing Z1 and Z0, the corrections of X1 and X0
clone()
The objects of this class are cloneable with this method.
R2D2$clone(deep = FALSE)
deep
Whether to make a deep clone.
Vrac, M.: Multivariate bias adjustment of high-dimensional climate simulations: the Rank Resampling for Distributions and Dependences (R2 D2 ) bias correction, Hydrol. Earth Syst. Sci., 22, 3175–3196, https://doi.org/10.5194/hess-22-3175-2018, 2018.
## Three bivariate random variables (rnorm and rexp are inverted between ref
## and bias)
XY = SBCK::dataset_gaussian_exp_2d(2000)
X0 = XY$X0 ## Biased in calibration period
Y0 = XY$Y0 ## Reference in calibration period
X1 = XY$X1 ## Biased in projection period

## Bias correction
## Step 1 : construction of the class R2D2
r2d2 = SBCK::R2D2$new()

## Step 2 : Fit the bias correction model
r2d2$fit( Y0 , X0 , X1 )

## Step 3 : perform the bias correction
Z = r2d2$predict(X1,X0)
Perform a multivariate bias correction of X with respect to Y randomly.
Only for comparison.
new()
Create a new RBC object.
RBC$new()
A new 'RBC' object.
fit()
Fit the bias correction method
RBC$fit(Y0, X0, X1 = NULL)
Y0
[matrix: n_samples * n_features] Observations in calibration
X0
[matrix: n_samples * n_features] Model in calibration
X1
[matrix: n_samples * n_features] Model in projection, can be NULL for stationary BC method
NULL
predict()
Predict the correction. Use named keywords to use stationary or non-stationary method.
RBC$predict(X1 = NULL, X0 = NULL)
X1
[matrix: n_samples * n_features or NULL] Model in projection
X0
[matrix: n_samples * n_features or NULL] Model in calibration
[matrix or list] Return the matrix of correction of X1 if X0 is NULL, else return a list containing Z1 and Z0, the corrections of X1 and X0
clone()
The objects of this class are cloneable with this method.
RBC$clone(deep = FALSE)
deep
Whether to make a deep clone.
## Three bivariate random variables (rnorm and rexp are inverted between ref
## and bias)
XY = SBCK::dataset_gaussian_exp_2d(2000)
X0 = XY$X0 ## Biased in calibration period
Y0 = XY$Y0 ## Reference in calibration period
X1 = XY$X1 ## Biased in projection period

## Bias correction
## Step 1 : construction of the class RBC
rbc = SBCK::RBC$new()

## Step 2 : Fit the bias correction model
rbc$fit( Y0 , X0 , X1 )

## Step 3 : perform the bias correction
Z = rbc$predict(X1,X0)
## Z$Z0 # BC of X0
## Z$Z1 # BC of X1
Statistical Bias Correction Kit
Yoann Robin Maintainer: Yoann Robin <[email protected]>
Apply the Schaake shuffle to transform the ranks of X0 such that they correspond to the ranks of Y0
schaake_shuffle(Y0,X0)
Y0
[vector] The reference vector
X0
[vector] The vector to transform the rank
Z0 [vector] X0 shuffled.
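Ignoring ties, the result has the same values as X0, reordered so that rank(Z0) equals rank(Y0); a one-line sketch of this idea (not the package code):

Y0 = stats::runif(10)
X0 = stats::runif(10)
Z0_sketch = base::sort(X0)[ base::rank(Y0) ]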
X0 = stats::runif(10)
Y0 = stats::runif(10)
Z0 = SBCK::schaake_shuffle( Y0 , X0 )
Perform the Schaake Shuffle
in fit/predict mode
new()
Create a new SchaakeShuffle object.
SchaakeShuffle$new(Y0 = NULL)
Y0
[vector] The reference vector
A new 'SchaakeShuffle' object.
fit()
Fit the model
SchaakeShuffle$fit(Y0)
Y0
[vector] The reference vector
NULL
predict()
Apply the shuffle
SchaakeShuffle$predict(X0)
X0
[vector] The vector to apply shuffle
Z0 [vector] data shuffled
clone()
The objects of this class are cloneable with this method.
SchaakeShuffle$clone(deep = FALSE)
deep
Whether to make a deep clone.
X0 = matrix( stats::runif(20) , ncol = 2 )
Y0 = matrix( stats::runif(20) , ncol = 2 )
ss = SchaakeShuffle$new()
ss$fit(Y0)
Z0 = ss$predict(X0)
Match the rank structure of X with that of Y by reordering X.
Multiple features can be kept to preserve the structure of X.
cond_cols
[vector of integer] The conditioning columns
lag_search
[integer] Number of lags to take into account
lag_keep
[integer] Number of lags to keep
Y0
[matrix] Reference data
new()
Create a new SchaakeShuffleMultiRef object.
SchaakeShuffleMultiRef$new(lag_search, lag_keep, cond_cols = base::c(1))
lag_search
[integer] Number of lags to take into account
lag_keep
[integer] Number of lags to keep
cond_cols
[vector of integer] The conditioning columns
A new 'SchaakeShuffleMultiRef' object.
fit()
Fit the model
SchaakeShuffleMultiRef$fit(Y0)
Y0
[vector] The reference vector
NULL
predict()
Apply the shuffle
SchaakeShuffleMultiRef$predict(X0)
X0
[vector] The vector to apply shuffle
Z0 [vector] data shuffled
clone()
The objects of this class are cloneable with this method.
SchaakeShuffleMultiRef$clone(deep = FALSE)
deep
Whether to make a deep clone.
X0 = matrix( stats::runif(50) , ncol = 2 )
Y0 = matrix( stats::runif(50) , ncol = 2 )
ssmr = SchaakeShuffleMultiRef$new( lag_search = 3 , lag_keep = 1 , cond_cols = 1 )
ssmr$fit(Y0)
Z0 = ssmr$predict(X0)
Match the rank structure of X with that of Y by reordering X.
Fix one feature to keep the structure of X.
SBCK::SchaakeShuffle
-> SchaakeShuffleRef
ref
[integer] Reference
new()
Create a new SchaakeShuffleRef object.
SchaakeShuffleRef$new(ref, Y0 = NULL)
ref
[integer] Reference
Y0
[vector] The reference vector
A new 'SchaakeShuffleRef' object.
fit()
Fit the model
SchaakeShuffleRef$fit(Y0)
Y0
[vector] The reference vector
NULL
predict()
Apply the shuffle
SchaakeShuffleRef$predict(X0)
X0
[vector] The vector to apply shuffle
Z0 [vector] data shuffled
clone()
The objects of this class are cloneable with this method.
SchaakeShuffleRef$clone(deep = FALSE)
deep
Whether to make a deep clone.
X0 = matrix( stats::runif(20) , ncol = 2 )
Y0 = matrix( stats::runif(20) , ncol = 2 )
ss = SchaakeShuffleRef$new( ref = 1 )
ss$fit(Y0)
Z0 = ss$predict(X0)
Class to shift a dataset.
An R6Class object.
Transform autocorrelations into inter-variable correlations
Object of R6Class
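As a hedged illustration of the idea (using stats::embed rather than the class itself): shifting a univariate series with lag 1 yields a two-column matrix whose inter-column correlation is the lag-1 autocorrelation:

x  = stats::arima.sim( n = 200 , model = list( ar = 0.8 ) )
Xs = stats::embed( base::as.vector(x) , dimension = 2 ) ## columns are (x_{t+1}, x_t)
stats::cor( Xs[,1] , Xs[,2] )                           ## close to the lag-1 autocorrelation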
new(lag, method, ref)
This method is used to create an object of this class with Shift
transform(X)
Method to shift a dataset
inverse(Xs)
Method to inverse the shift of a dataset
lag
[integer] max lag for autocorrelations
method
[character] If inverse is by row or column.
ref
[integer] reference column/row to inverse shift.
new()
Create a new Shift object.
Shift$new(lag, method = "row", ref = 1)
lag
[integer] max lag for autocorrelations
method
[character] If "row" inverse by row, else by column
ref
[integer] starting point for inverse transform
A new 'Shift' object.
transform()
Shift the data
Shift$transform(X)
X
[matrix: n_samples * n_features] Data to shift
[matrix] Matrix shifted
inverse()
Inverse the shift of the data
Shift$inverse(Xs)
Xs
[matrix] Data Shifted
[matrix] Matrix unshifted
clone()
The objects of this class are cloneable with this method.
Shift$clone(deep = FALSE)
deep
Whether to make a deep clone.
X  = base::t(matrix( 1:20 , nrow = 2 , ncol = 10 ))
sh = Shift$new(1)
Xs = sh$transform(X)
Xi = sh$inverse(Xs)
Class which sends a stop signal when a time series stays constant.
Tests the slope.
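A hedged sketch of the kind of test involved (assumed behaviour, not the exact implementation): stop once the fitted slope of the recorded values is close to zero:

values = base::exp( -base::seq( 0 , 5 , by = 0.1 ) )
it     = base::seq_along(values)
slope  = stats::coef( stats::lm( values ~ it ) )[2]   ## slope of a linear fit
stopped = base::abs(slope) < 1e-3                     ## stop signal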
minit
[integer] Minimal number of iterations. At least 3.
maxit
[integer] Maximal number of iterations.
nit
[integer] Number of iterations.
tol
[float] Tolerance to control if slope is close to zero
stop
[bool] If we stop
criteria
[vector] State of criteria
slope
[vector] Values of slope
new()
Create a new SlopeStoppingCriteria object.
SlopeStoppingCriteria$new(minit, maxit, tol)
minit
[integer] Minimal number of iterations. At least 3.
maxit
[integer] Maximal number of iterations.
tol
[float] Tolerance to control if slope is close to zero
A new 'SlopeStoppingCriteria' object.
reset()
Reset the class
SlopeStoppingCriteria$reset()
NULL
append()
Add a new value
SlopeStoppingCriteria$append(value)
value
[double] New metrics
NULL
clone()
The objects of this class are cloneable with this method.
SlopeStoppingCriteria$clone(deep = FALSE)
deep
Whether to make a deep clone.
stop_slope = SlopeStoppingCriteria$new( 20 , 500 , 1e-3 )
x = 0
while(!stop_slope$stop)
{
	stop_slope$append(base::exp(-x))
	x = x + 0.1
}
print(stop_slope$nit)
Return the Rcpp class SparseHistBase, initialized
SparseHist(X, bin_width = NULL, bin_origin = NULL)
X
[matrix] Dataset to find the SparseHist
bin_width
[vector] Width of a bin for each dimension
bin_origin
[vector] Coordinate of the "0" bin
[SparseHist] SparseHist class
## Data
X = base::matrix( stats::rnorm( n = 10000 ) , nrow = 5000 , ncol = 2 )
muX = SparseHist(X)

print(muX$p)           ## Vector of probabilities
print(muX$c)           ## Matrix of coordinates of each bins
print(muX$argwhere(X)) ## Index of bins of dataset X
Perform a bias correction of auto-correlation
Correct auto-correlation with a shift approach.
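Conceptually (a rough sketch, not the package code), the correction shifts the series to turn autocorrelations into cross-correlations, applies the bias correction method, and inverses the shift:

Y0 = base::as.vector( stats::arima.sim( n = 500 , model = list( ar = 0.5 ) ) )
X0 = base::as.vector( stats::arima.sim( n = 500 , model = list( ar = -0.3 ) ) ) + 2
sh  = SBCK::Shift$new( lag = 2 )
Y0s = sh$transform( base::matrix( Y0 , ncol = 1 ) )   ## shifted reference
X0s = sh$transform( base::matrix( X0 , ncol = 1 ) )   ## shifted model
bcm = SBCK::OTC$new()
bcm$fit( Y0s , X0s )
Z0_sketch = sh$inverse( bcm$predict( X0s ) )          ## unshift the corrected data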
shift
[Shift class] Shift class to shift data.
bc_method
[SBCK::BC_method] Underlying bias correction method.
method
[character] If inverse is by row or column, see class Shift
ref
[integer] Reference column/row to inverse shift, see the Shift class
new()
Create a new TSMBC object.
TSMBC$new(lag, bc_method = OTC, method = "row", ref = "middle", ...)
lag
[integer] max lag of autocorrelation
bc_method
[SBCK::BC_METHOD] bias correction method to use after shift of data, default is OTC
method
[character] If inverse is by row or column, see class Shift
ref
[integer] reference column/row to inverse shift, see class Shift. Default is 0.5 * (lag+1)
...
[] All others arguments are passed to bc_method
A new 'TSMBC' object.
fit()
Fit the bias correction method
TSMBC$fit(Y0, X0)
Y0
[matrix: n_samples * n_features] Observations in calibration
X0
[matrix: n_samples * n_features] Model in calibration
NULL
predict()
Predict the correction
TSMBC$predict(X0)
X0
[matrix: n_samples * n_features or NULL] Model in calibration
[matrix] Return the corrections of X0
clone()
The objects of this class are cloneable with this method.
TSMBC$clone(deep = FALSE)
deep
Whether to make a deep clone.
Robin, Y. and Vrac, M.: Is time a variable like the others in multivariate statistical downscaling and bias correction?, Earth Syst. Dynam. Discuss. [preprint], https://doi.org/10.5194/esd-2021-12, in review, 2021.
## arima model parameters
modelX0 = list( ar = base::c( 0.6 , 0.2 , -0.1 ) )
modelY0 = list( ar = base::c( -0.3 , 0.4 , -0.2 ) )

## arima random generator
rand.genX0 = function(n){ return(stats::rnorm( n , mean = 0.2 , sd = 1 )) }
rand.genY0 = function(n){ return(stats::rnorm( n , mean = 0 , sd = 0.7 )) }

## Generate two AR processes
X0 = stats::arima.sim( n = 1000 , model = modelX0 , rand.gen = rand.genX0 )
Y0 = stats::arima.sim( n = 1000 , model = modelY0 , rand.gen = rand.genY0 )
X0 = as.vector( X0 )
Y0 = as.vector( Y0 + 5 )

## And correct it with 30 lags
tsbc = SBCK::TSMBC$new( 30 )
tsbc$fit( Y0 , X0 )
Z0 = tsbc$predict(X0)
Compute the Wasserstein distance between two datasets or SparseHist objects X and Y
wasserstein(X, Y, p = 2, ot = SBCK::OTNetworkSimplex$new())
X
[matrix or SparseHist] If matrix, dim = ( nrow = n_samples, ncol = n_features)
Y
[matrix or SparseHist] If matrix, dim = ( nrow = n_samples, ncol = n_features)
p
[float] Power of the metric (default = 2)
ot
[Optimal transport solver]
[float] value of distance
Wasserstein, L. N. (1969). Markov processes over denumerable products of spaces describing large systems of automata. Problems of Information Transmission, 5(3), 47-52.
X = base::cbind( stats::rnorm(2000) , stats::rnorm(2000) )
Y = base::cbind( stats::rnorm(2000,mean=10) , stats::rnorm(2000) )
bw = base::c(0.1,0.1)
muX = SBCK::SparseHist( X , bw )
muY = SBCK::SparseHist( Y , bw )

## The four are equal
w2 = SBCK::wasserstein(X,Y)
w2 = SBCK::wasserstein(muX,Y)
w2 = SBCK::wasserstein(X,muY)
w2 = SBCK::wasserstein(muX,muY)
This function returns a vector / matrix / array of the same shape as cond / x / y such that values are taken from x where cond is TRUE, and from y otherwise.
where(cond,x,y)
cond
[vector/matrix/array] Boolean values
x
[vector/matrix/array] Values if cond is TRUE
y
[vector/matrix/array] Values if cond is FALSE
z [vector/matrix/array].
x = base::seq( -2 , 2 , length = 100 )
y = where( x < 1 , x , exp(x) ) ## y = x if x < 1, else exp(x)