Fit Vector Autoregressive (VAR) Model Parameters using Lasso Regularization
Source: R/RcppExports.R
FitVARLasso.Rd
This function estimates the parameters of a VAR model using Lasso regularization via cyclical coordinate descent. The Lasso penalty induces sparsity in the estimated autoregressive and cross-regression coefficients.
Arguments
- YStd
Numeric matrix. Matrix of standardized dependent variables (Y).
- XStd
Numeric matrix. Matrix of standardized predictors (X). XStd should not include a vector of ones in column one.
- lambda
Numeric. Lasso hyperparameter. The regularization strength controlling the sparsity.
- max_iter
Integer. The maximum number of iterations for the coordinate descent algorithm (e.g., max_iter = 10000).
- tol
Numeric. Convergence tolerance. The algorithm stops when the change in coefficients between iterations is below this tolerance (e.g., tol = 1e-5).
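Both YStd and XStd are expected to be standardized before calling the function. A minimal sketch of preparing such an input, assuming standardization here means zero mean and unit standard deviation per column (an assumption; the package's StdMat() may differ in detail), using base R's scale():

```r
set.seed(42)
# Toy data standing in for a matrix of dependent variables
Y <- matrix(rnorm(60), nrow = 20, ncol = 3)

# Center each column to mean zero and scale to unit standard deviation
YStd <- scale(Y, center = TRUE, scale = TRUE)

# Each column now has mean ~0 and standard deviation ~1
stopifnot(
  all(abs(colMeans(YStd)) < 1e-12),
  all(abs(apply(YStd, 2, sd) - 1) < 1e-12)
)
```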
Details
The FitVARLasso() function estimates the parameters of a Vector Autoregressive (VAR) model using the Lasso regularization method. Given the input matrices YStd and XStd, where YStd is the matrix of standardized dependent variables and XStd is the matrix of standardized predictors, the function computes the autoregressive and cross-regression coefficients of the VAR model, with sparsity induced by the Lasso penalty.
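In matrix terms, the estimator described above balances least-squares fit against an L1 penalty on the coefficients. A sketch of that penalized criterion in R (illustrative only; the internal scaling of the two terms is an assumption, not documented here):

```r
# Illustrative Lasso objective for a candidate coefficient matrix `beta`
# (predictors in rows, outcomes in columns). Smaller is better; larger
# `lambda` pushes more entries of `beta` exactly to zero.
LassoObjective <- function(YStd, XStd, beta, lambda) {
  residual <- YStd - XStd %*% beta
  0.5 * sum(residual^2) + lambda * sum(abs(beta))
}
```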
The steps involved in estimating the VAR model parameters using Lasso are as follows:
- Initialization: The function initializes the coefficient matrix beta with OLS estimates. The beta matrix stores the estimated autoregressive and cross-regression coefficients.
- Coordinate Descent Loop: The function performs the cyclical coordinate descent algorithm to estimate the coefficients iteratively. The loop runs at most max_iter times, or until convergence is achieved. The outer loop iterates over the predictor variables (columns of XStd), while the inner loop iterates over the outcome variables (columns of YStd).
- Coefficient Update: For each predictor variable (column of XStd), the function iteratively updates the corresponding column of beta using the coordinate descent update with L1 regularization (Lasso). The update involves calculating the soft-thresholded value c, which encourages sparsity in the coefficients. The algorithm continues until the change in coefficients between iterations is below the specified tolerance tol or the maximum number of iterations is reached.
- Convergence Check: The function checks for convergence by comparing the current beta matrix with the previous iteration's beta_old. If the maximum absolute difference between beta and beta_old is below the tolerance tol, the algorithm is considered converged and the loop exits.
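The steps above can be sketched in plain R as follows. This is not the package's actual C++ implementation; the helper names, the soft-thresholding form, and the objective scaling are assumptions made for illustration:

```r
# Soft-thresholding operator: shrinks `c` toward zero by `lambda`,
# setting it exactly to zero when |c| <= lambda.
SoftThreshold <- function(c, lambda) {
  sign(c) * max(abs(c) - lambda, 0)
}

# Minimal cyclical coordinate descent for a multi-response Lasso,
# mirroring the steps described above (illustrative sketch).
LassoCD <- function(YStd, XStd, lambda, max_iter = 10000, tol = 1e-5) {
  # Initialization: OLS estimates (predictors in rows, outcomes in columns)
  beta <- solve(crossprod(XStd), crossprod(XStd, YStd))
  for (iter in seq_len(max_iter)) {
    beta_old <- beta
    for (j in seq_len(ncol(XStd))) {      # outer loop: predictors
      for (k in seq_len(ncol(YStd))) {    # inner loop: outcomes
        # Partial residual excluding predictor j
        r <- YStd[, k] - XStd[, -j, drop = FALSE] %*% beta[-j, k]
        c <- sum(XStd[, j] * r)
        # Coefficient update via soft-thresholding
        beta[j, k] <- SoftThreshold(c, lambda) / sum(XStd[, j]^2)
      }
    }
    # Convergence check on the maximum absolute coefficient change
    if (max(abs(beta - beta_old)) < tol) break
  }
  t(beta)  # outcomes in rows, predictors in columns, as in the output below
}
```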
See also
Other Fitting Autoregressive Model Functions: FitMLVARDynr(), FitMLVARMplus(), FitVARDynr(), FitVARLassoSearch(), FitVARMplus(), FitVAROLS(), LambdaSeq(), ModelVARP1Dynr(), ModelVARP2Dynr(), OrigScale(), PBootVARExoLasso(), PBootVARExoOLS(), PBootVARLasso(), PBootVAROLS(), RBootVARExoLasso(), RBootVARExoOLS(), RBootVARLasso(), RBootVAROLS(), SearchVARLasso(), StdMat()
Examples
YStd <- StdMat(dat_p2_yx$Y)
XStd <- StdMat(dat_p2_yx$X[, -1]) # remove the constant column
lambda <- 73.90722
FitVARLasso(
YStd = YStd,
XStd = XStd,
lambda = lambda,
max_iter = 10000,
tol = 1e-5
)
#> [,1] [,2] [,3] [,4] [,5] [,6]
#> [1,] 0.3440823 0.0000000 0.0000000 0.08601767 0.0000000 0.0000000
#> [2,] 0.0000000 0.4580172 0.0000000 0.00000000 0.2130337 0.0000000
#> [3,] 0.0000000 0.0000000 0.5889874 0.00000000 0.0000000 0.2746474