MultipleRegression Class
Namespace: MathNet.Numerics.LinearRegression
public static class MultipleRegression
| Name | Description |
| --- | --- |
| `DirectMethod<T>(Matrix<T>, Matrix<T>, DirectRegressionMethod)` | Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. See the usage sketch below the table. |
| `DirectMethod<T>(Matrix<T>, Vector<T>, DirectRegressionMethod)` | Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. |
| `DirectMethod<T>(IEnumerable<Tuple<T[], T>>, Boolean, DirectRegressionMethod)` | Find the model parameters β such that their linear combination with all predictor arrays in X becomes as close to their response in Y as possible, with least squares residuals. The decomposition used is selected by the DirectRegressionMethod argument. |
| `DirectMethod<T>(T[][], T[], Boolean, DirectRegressionMethod)` | Find the model parameters β such that their linear combination with all predictor arrays in X becomes as close to their response in Y as possible, with least squares residuals. |
| `NormalEquations<T>(Matrix<T>, Matrix<T>)` | Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. Uses the Cholesky decomposition of the normal equations. See the usage sketch below the table. |
| `NormalEquations<T>(Matrix<T>, Vector<T>)` | Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. Uses the Cholesky decomposition of the normal equations. |
| `NormalEquations<T>(IEnumerable<Tuple<T[], T>>, Boolean)` | Find the model parameters β such that their linear combination with all predictor arrays in X becomes as close to their response in Y as possible, with least squares residuals. Uses the Cholesky decomposition of the normal equations. |
| `NormalEquations<T>(T[][], T[], Boolean)` | Find the model parameters β such that their linear combination with all predictor arrays in X becomes as close to their response in Y as possible, with least squares residuals. Uses the Cholesky decomposition of the normal equations. |
| `QR<T>(Matrix<T>, Matrix<T>)` | Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. Uses an orthogonal decomposition and is therefore more numerically stable than the normal equations, but also slower. See the usage sketch below the table. |
| `QR<T>(Matrix<T>, Vector<T>)` | Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. Uses an orthogonal decomposition and is therefore more numerically stable than the normal equations, but also slower. |
| `QR<T>(IEnumerable<Tuple<T[], T>>, Boolean)` | Find the model parameters β such that their linear combination with all predictor arrays in X becomes as close to their response in Y as possible, with least squares residuals. Uses an orthogonal decomposition and is therefore more numerically stable than the normal equations, but also slower. |
| `QR<T>(T[][], T[], Boolean)` | Find the model parameters β such that their linear combination with all predictor arrays in X becomes as close to their response in Y as possible, with least squares residuals. Uses an orthogonal decomposition and is therefore more numerically stable than the normal equations, but also slower. |
| `Svd<T>(Matrix<T>, Matrix<T>)` | Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. Uses a singular value decomposition and is therefore more numerically stable than the normal equations or QR (especially on ill-conditioned problems), but also slower. See the usage sketch below the table. |
| `Svd<T>(Matrix<T>, Vector<T>)` | Find the model parameters β such that X*β with predictor X becomes as close to response Y as possible, with least squares residuals. Uses a singular value decomposition and is therefore more numerically stable than the normal equations or QR (especially on ill-conditioned problems), but also slower. |
| `Svd<T>(IEnumerable<Tuple<T[], T>>, Boolean)` | Find the model parameters β such that their linear combination with all predictor arrays in X becomes as close to their response in Y as possible, with least squares residuals. Uses a singular value decomposition and is therefore more numerically stable than the normal equations or QR (especially on ill-conditioned problems), but also slower. |
| `Svd<T>(T[][], T[], Boolean)` | Find the model parameters β such that their linear combination with all predictor arrays in X becomes as close to their response in Y as possible, with least squares residuals. Uses a singular value decomposition and is therefore more numerically stable than the normal equations or QR (especially on ill-conditioned problems), but also slower. |
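
A minimal sketch of the `DirectMethod` overload taking `Matrix<T>` and `Vector<T>`, where the decomposition is selected by the `DirectRegressionMethod` argument. The dense builders (`Matrix<double>.Build.DenseOfArray`, `Vector<double>.Build.Dense`) come from MathNet.Numerics.LinearAlgebra rather than this class, and the sample data is made up for illustration.

```csharp
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearRegression;

// Design matrix X: one row per observation, one column per predictor.
// The leading column of ones provides the intercept term.
var x = Matrix<double>.Build.DenseOfArray(new double[,]
{
    { 1.0, 1.0 },
    { 1.0, 2.0 },
    { 1.0, 3.0 },
    { 1.0, 4.0 },
});
var y = Vector<double>.Build.Dense(new[] { 2.1, 3.9, 6.0, 8.1 });

// DirectMethod dispatches to the decomposition named by the enum argument.
Vector<double> viaQr = MultipleRegression.DirectMethod(x, y, DirectRegressionMethod.QR);
Vector<double> viaSvd = MultipleRegression.DirectMethod(x, y, DirectRegressionMethod.Svd);
```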
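
A sketch of `NormalEquations` with `Matrix<double>`/`Vector<double>` inputs, again using the assumed dense builders and illustrative data. As the table notes, this variant is the fastest but less numerically stable than QR or Svd.

```csharp
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearRegression;

// Fit y ≈ β0 + β1*x1 + β2*x2 by including an explicit column of ones in X.
var x = Matrix<double>.Build.DenseOfArray(new double[,]
{
    { 1.0, 1.0, 2.0 },
    { 1.0, 2.0, 1.0 },
    { 1.0, 3.0, 4.0 },
    { 1.0, 4.0, 3.0 },
});
var y = Vector<double>.Build.Dense(new[] { 6.0, 7.0, 13.0, 14.0 });

// Solves the normal equations (XᵀX)β = Xᵀy via Cholesky; β = [β0, β1, β2].
Vector<double> beta = MultipleRegression.NormalEquations(x, y);
```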
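
A sketch of the jagged-array `QR` overload. The assumption here is that passing `true` for the Boolean parameter asks the method to add an intercept term, returned as the first entry of the coefficient array; the data and variable names are illustrative.

```csharp
using MathNet.Numerics.LinearRegression;

// One predictor row per observation; no column of ones is needed here,
// because the Boolean argument requests the intercept term.
double[][] x =
{
    new[] { 1.0, 2.0 },
    new[] { 2.0, 1.0 },
    new[] { 3.0, 4.0 },
    new[] { 4.0, 3.0 },
};
double[] y = { 6.0, 7.0, 13.0, 14.0 };

// With the intercept requested, the result is assumed to be [intercept, β1, β2].
double[] p = MultipleRegression.QR(x, y, true);
```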
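
A sketch of `Svd` on a nearly collinear design, the ill-conditioned case the table flags as its strength; the data is contrived so that one predictor is almost an exact multiple of another.

```csharp
using MathNet.Numerics.LinearAlgebra;
using MathNet.Numerics.LinearRegression;

// The third column is nearly 2x the second, so XᵀX is close to singular
// and the normal equations would amplify rounding error.
var x = Matrix<double>.Build.DenseOfArray(new double[,]
{
    { 1.0, 1.0, 2.000 },
    { 1.0, 2.0, 4.001 },
    { 1.0, 3.0, 5.999 },
    { 1.0, 4.0, 8.000 },
});
var y = Vector<double>.Build.Dense(new[] { 3.0, 5.0, 7.0, 9.0 });

// Slowest of the three variants, but the most robust on ill-conditioned problems.
Vector<double> beta = MultipleRegression.Svd(x, y);
```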