Regression#

process_improve.regression.methods.fit_robust_lm(x, y)[source]#

Fits a robust linear model between NumPy vectors x and y, with an intercept. Returns a length-2 array [intercept, slope] (the params attribute returned by statsmodels.RLM); no extra checking of data consistency is done.

See also: regression.repeated_median_slope

Parameters:
  • x, y (np.ndarray) – NumPy vectors of numerical values.
Return type:

ndarray

process_improve.regression.methods.repeated_median_slope(x, y, nowarn=False)[source]#

Robust slope calculation.

https://en.wikipedia.org/wiki/Repeated_median_regression

An elegant (simple) method to compute the robust slope between vectors x and y.

INVESTIGATE: algorithm speed-ups via these articles: https://link.springer.com/article/10.1007/PL00009190 and http://www.sciencedirect.com/science/article/pii/S0020019003003508

Parameters:
  • x, y – Sequences of numerical values.

  • nowarn (bool, default=False) – If True, suppresses warnings.
Return type:

float
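The repeated-median idea is easy to sketch: for each point i, take the median of the pairwise slopes to every other point j, then take the median of those per-point medians. A self-contained illustration (not the package's implementation, which may handle edge cases such as ties in x differently):

```python
import numpy as np

def repeated_median_slope_sketch(x, y):
    """Median over i of (median over j != i of the pairwise slopes)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    inner_medians = []
    for i in range(len(x)):
        dx = x - x[i]
        dy = y - y[i]
        mask = dx != 0  # skip j == i (and any ties in x)
        inner_medians.append(np.median(dy[mask] / dx[mask]))
    return np.median(inner_medians)

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 3.0 * x + 1.0
y[4] = 100.0  # gross outlier
print(repeated_median_slope_sketch(x, y))  # 3.0: the outlier is ignored
```

Because both median passes discard extreme values, a single wild point cannot move the estimate, in contrast to an ordinary least-squares slope.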

process_improve.regression.methods.robust_regression(x, y, fit_intercept=True, na_rm=True, conflevel=0.95, nowarn=False, pi_resolution=50)[source]#

Perform simple robust regression analysis between the x and y variables.

Parameters:
  • x, y – Sequences of numerical values.

  • fit_intercept – If True, fits an intercept term. If False, forces the regression through the origin.

  • na_rm – If True, removes all observations with one or more missing values.

  • conflevel – Confidence level for the confidence intervals; default is 0.95.

  • nowarn – If True, suppresses warnings. Users should ensure data validity beforehand.

  • pi_resolution – The resolution of the prediction intervals; default is 50.

Simple robust regression between an x and a y, using the repeated_median_slope method to calculate the slope. The intercept is the median of the intercepts implied by that slope and the provided x and y values, or is forced to zero if fit_intercept=False.

Returns a dictionary of outputs with these keys:

coefficients:             a vector of K coefficients, one for each column in X
intercept:                returned if fit_intercept==True, otherwise 0
standard_errors:          a vector of K standard errors, one per column in X
standard_error_intercept: standard error for the intercept (np.nan if fit_intercept=False)
R2:                       the R^2 value
SE:                       the model's standard error
fitted_values:            the N predicted values, one per row in y
residuals:                the N residuals
t_value:                  the t-values for the standard errors
conf_intervals:           K rows x 2 columns (lower, upper) confidence intervals
pi_range:                 prediction intervals above and below, over the range of data
Return type:

dict
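The median-intercept step described above can be sketched in a few lines: once a robust slope is known, every point implies its own intercept y - slope * x, and the median of those is robust to outliers. A minimal illustration, assuming this is the calculation used:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([3.0, 5.0, 7.0, 9.0, 50.0])  # line y = 2x + 1 with one outlier
slope = 2.0  # e.g. as returned by repeated_median_slope(x, y)

# Median of the per-point intercepts implied by the slope.
intercept = np.median(y - slope * x)
print(intercept)  # 1.0: unaffected by the outlier
```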

process_improve.regression.methods.multiple_linear_regression(X, y, fit_intercept=True, na_rm=True, conflevel=0.95, pi_resolution=50)[source]#

Linear regression of the N rows and K columns of matrix X onto the single column ‘y’.

Backwards-compatible wrapper around OLS. New code should use the OLS estimator directly, which exposes the same statistics as sklearn-style attributes and prints an R-like summary(lm(...)).

Notes and limitations:
  • does not handle weighting

  • requires N >= K: at least as many rows as columns in X

Returns a dictionary of outputs with these keys:

coefficients:             a vector of K coefficients, one for each column in X
intercept:                returned if fit_intercept==True
standard_errors:          a vector of K standard errors, one per column in X
standard_error_intercept: standard error for the intercept
R2:                       the R^2 value
SE:                       the model's standard error
fitted_values:            the N predicted values, one per row in y
residuals:                the N residuals
t_value:                  the t-values for the standard errors
conf_intervals:           K rows x 2 columns (lower, upper) confidence intervals
pi_range:                 prediction intervals above and below, over the range of data
Parameters:
  • X – Matrix of N rows and K columns.

  • y – Sequence of N numerical values.

  • fit_intercept – If True, fits an intercept term. If False, forces the regression through the origin.

  • na_rm – If True, removes all observations with one or more missing values.

  • conflevel – Confidence level for the confidence intervals; default is 0.95.

  • pi_resolution – The resolution of the prediction intervals; default is 50.
Return type:

dict
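The core quantities in the returned dictionary follow from ordinary least squares; a minimal numpy sketch (illustrative only, not the package's implementation) computing a few of the keys above for an X with an intercept:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 40, 2
X = rng.standard_normal((N, K))
y = X @ np.array([2.0, -1.0]) + 0.5 + 0.05 * rng.standard_normal(N)

A = np.column_stack([np.ones(N), X])          # prepend the intercept column
beta, *_ = np.linalg.lstsq(A, y, rcond=None)  # [intercept, coef_1, coef_2]
fitted = A @ beta
residuals = y - fitted
SE = np.sqrt(residuals @ residuals / (N - K - 1))            # model standard error
R2 = 1 - (residuals @ residuals) / np.sum((y - y.mean())**2)

out = {"intercept": beta[0], "coefficients": beta[1:],
       "fitted_values": fitted, "residuals": residuals, "SE": SE, "R2": R2}
print(round(out["R2"], 3))  # close to 1 for this near-noiseless data
```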

class process_improve.regression.methods.OLS(fit_intercept=True, na_rm=True, conflevel=0.95, pi_resolution=50)[source]#

Bases: RegressorMixin, BaseEstimator

Ordinary Least Squares regression with statistical diagnostics.

A scikit-learn-compatible estimator that fits an OLS model and exposes inferential statistics (standard errors, t-values, p-values, confidence intervals, F-statistic) and influence diagnostics (leverage, Cook’s distance). Calling print(model) after fitting renders a summary similar to R’s summary(lm(...)).

Parameters:
  • fit_intercept (bool, default=True) – If True, fits an intercept term. If False, the regression is forced through the origin.

  • na_rm (bool, default=True) – If True, drops rows with one or more missing values before fitting.

  • conflevel (float, default=0.95) – Confidence level for confidence and prediction intervals.

  • pi_resolution (int, default=50) – Number of grid points at which to compute prediction intervals over the range of x. Only used when X has a single column and an intercept is fitted.

coefficients_#

Fitted slope coefficients (excludes the intercept).

Type:

np.ndarray of shape (K,)

intercept_#

Fitted intercept (np.nan if fit_intercept is False).

Type:

float

standard_errors_#

Standard errors of coefficients_.

Type:

np.ndarray of shape (K,)

standard_error_intercept_#

Standard error of the intercept.

Type:

float

t_values_#

t-statistics for each coefficient.

Type:

np.ndarray of shape (K,)

t_value_intercept_#

t-statistic for the intercept.

Type:

float

p_values_#

Two-sided p-values for each coefficient.

Type:

np.ndarray of shape (K,)

p_value_intercept_#

p-value for the intercept.

Type:

float

conf_intervals_#

Lower and upper bounds of the coefficient confidence intervals.

Type:

np.ndarray of shape (K, 2)

conf_interval_intercept_#

Lower and upper bounds of the intercept confidence interval.

Type:

np.ndarray of shape (2,)

r2_#

Coefficient of determination.

Type:

float

adj_r2_#

Adjusted R-squared.

Type:

float

se_#

Residual standard error (sqrt of residual variance).

Type:

float

df_resid_#

Residual degrees of freedom.

Type:

int

df_model_#

Model degrees of freedom (number of slope coefficients).

Type:

int

f_statistic_#

F-statistic for the overall regression.

Type:

float

f_pvalue_#

p-value associated with the F-statistic.

Type:

float

fitted_values_#

In-sample predictions.

Type:

np.ndarray of shape (N,)

residuals_#

In-sample residuals (NaN at rows removed by na_rm).

Type:

np.ndarray of shape (N_original,)

leverage_#

Hat-matrix diagonal (only computed for single-feature X).

Type:

np.ndarray of shape (N,)

influence_#

Cook’s distance (only computed for single-feature X with intercept).

Type:

np.ndarray of shape (N,)
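For the single-feature case where leverage_ and influence_ are computed, the hat-matrix diagonal and Cook's distance have standard closed forms; a sketch using the usual textbook definitions (the package's implementation may differ in detail):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 10.0])
y = np.array([1.1, 1.9, 3.2, 3.9, 10.5])
N, p = len(x), 2  # p = number of model parameters (intercept + slope)

A = np.column_stack([np.ones(N), x])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ beta
mse = resid @ resid / (N - p)

# Hat-matrix diagonal: h_i = a_i^T (A^T A)^{-1} a_i
h = np.einsum("ij,jk,ik->i", A, np.linalg.inv(A.T @ A), A)
# Cook's distance: D_i = r_i^2 * h_i / (p * mse * (1 - h_i)^2)
cooks = resid**2 * h / (p * mse * (1 - h)**2)
print(h.round(2))  # the x = 10 point has by far the largest leverage
```

Points far from the mean of x get high leverage whether or not their y is unusual; Cook's distance combines leverage with the residual, so it flags points that actually move the fit.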

pi_range_#

Columns are x-grid, lower bound, upper bound of the prediction interval. np.nan if not applicable.

Type:

np.ndarray of shape (pi_resolution, 3) or float
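A pi_range_-style grid can be sketched with the textbook prediction-interval formula for simple regression (assuming that is the formula used; scipy supplies the t quantile here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 30)
y = 1.0 + 2.0 * x + rng.standard_normal(30)

A = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ beta
n = len(x)
mse = resid @ resid / (n - 2)
t = stats.t.ppf(0.975, n - 2)  # two-sided 95% quantile

grid = np.linspace(x.min(), x.max(), 50)  # pi_resolution = 50
se_pred = np.sqrt(mse * (1 + 1/n + (grid - x.mean())**2
                         / np.sum((x - x.mean())**2)))
pred = beta[0] + beta[1] * grid
pi_range = np.column_stack([grid, pred - t * se_pred, pred + t * se_pred])
print(pi_range.shape)  # (50, 3): x-grid, lower bound, upper bound
```

The interval widens away from the mean of x, which is why the attribute stores the whole grid rather than a single width.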

feature_names_in_#

Column names of the feature matrix.

Type:

list[str]

target_name_#

Name of the target variable.

Type:

str

n_samples_#

Number of samples used in the fit (after na_rm).

Type:

int

n_features_in_#

Number of input features.

Type:

int

is_fitted_#

Whether fit() has been called successfully.

Type:

bool

Examples

>>> import numpy as np
>>> from process_improve.regression.methods import OLS
>>> rng = np.random.default_rng(0)
>>> X = rng.standard_normal((50, 2))
>>> y = X @ [1.5, -2.0] + 0.5 + 0.1 * rng.standard_normal(50)
>>> model = OLS().fit(X, y)
>>> print(model)
Call:
OLS(fit_intercept=True, na_rm=True, conflevel=0.95)
...

See also

multiple_linear_regression

Backwards-compatible function returning a dict.

robust_regression

Robust regression via repeated-median slope.

fit(X, y)[source]#

Fit the OLS model.

Parameters:
  • X (array-like of shape (N, K)) – Feature matrix. Pandas and NumPy inputs are both accepted.

  • y (array-like of shape (N,) or (N, 1)) – Target vector.

Returns:

self – Fitted estimator.

Return type:

OLS

predict(X)[source]#

Predict target values for X.

Parameters:

X (array-like of shape (N, K))

Returns:

y_pred

Return type:

np.ndarray of shape (N,)

summary()[source]#

Return an R-style summary(lm(...)) string for the fitted model.

Return type:

str

to_dict()[source]#

Return the legacy dictionary representation used by multiple_linear_regression.

Return type:

dict

set_score_request(*, sample_weight='$UNCHANGED$')#

Configure whether metadata should be requested to be passed to the score method.

Note that this method is only relevant when this estimator is used as a sub-estimator within a meta-estimator and metadata routing is enabled with enable_metadata_routing=True (see sklearn.set_config()). Please check the User Guide on how the routing mechanism works.

The options for each parameter are:

  • True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.

  • False: metadata is not requested and the meta-estimator will not pass it to score.

  • None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.

  • str: metadata should be passed to the meta-estimator with this given alias instead of the original name.

The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.

Added in version 1.3.

Parameters:
  • sample_weight (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for sample_weight parameter in score.

  • self (OLS)

Returns:

self – The updated object.

Return type:

object