09
Regression
Regularization
Ridge & Lasso Regression
When ordinary least squares overfits or when features are correlated, regularization helps. Ridge regression adds an L2 penalty that shrinks coefficients toward zero without eliminating them. Lasso regression adds an L1 penalty that drives some coefficients exactly to zero, performing feature selection.
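The different behavior of the two penalties can be sketched on a single coefficient. This is an illustrative stand-in, not deepbox's actual solver: for a unit-variance feature, the ridge update is a closed-form rescaling, while the lasso update is the soft-thresholding operator.

```typescript
// Ridge: the L2 penalty rescales w toward zero but never reaches it exactly
// (closed-form shrinkage for a single unit-variance feature).
function ridgeShrink(w: number, alpha: number): number {
  return w / (1 + alpha);
}

// Lasso: the L1 penalty's proximal operator (soft-thresholding) subtracts
// alpha from |w| and clips at zero, so small coefficients become exactly 0.
function softThreshold(w: number, alpha: number): number {
  return Math.sign(w) * Math.max(Math.abs(w) - alpha, 0);
}

const w = 0.3;
console.log(ridgeShrink(w, 0.5));   // ≈ 0.2: shrunk, still non-zero
console.log(softThreshold(w, 0.5)); // 0: driven exactly to zero
```

This is why lasso performs feature selection while ridge only shrinks: any coefficient whose magnitude falls below the threshold is zeroed outright.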
Deepbox Modules Used
deepbox/datasets
deepbox/ml
deepbox/metrics
deepbox/preprocess
What You Will Learn
- Ridge adds L2 penalty — shrinks all coefficients but keeps them non-zero
- Lasso adds L1 penalty — drives some coefficients to exactly zero (feature selection)
- Higher alpha means stronger regularization (more bias, less variance)
- Scale features before regularized regression for fair penalty distribution
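The last point matters because the penalty α·Σwⱼ² (or α·Σ|wⱼ|) treats every coefficient equally: a feature measured in thousands would get an artificially small coefficient and be under-penalized relative to one measured in single digits. A minimal stand-in for what a StandardScaler-style transform computes (the example below uses deepbox's own `StandardScaler`):

```typescript
// Center each feature column to mean 0 and rescale to unit standard
// deviation, so the penalty weighs all coefficients on the same scale.
function standardize(column: number[]): number[] {
  const mean = column.reduce((s, v) => s + v, 0) / column.length;
  const variance =
    column.reduce((s, v) => s + (v - mean) ** 2, 0) / column.length;
  const std = Math.sqrt(variance) || 1; // guard against constant columns
  return column.map((v) => (v - mean) / std);
}

// A feature in the thousands and one in single digits become identical
// after standardization, so neither dominates the penalty term:
console.log(standardize([1000, 2000, 3000])); // ≈ [-1.22, 0, 1.22]
console.log(standardize([1, 2, 3]));          // ≈ [-1.22, 0, 1.22]
```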
Source Code
09-ridge-lasso/index.ts
import { loadDiabetes } from "deepbox/datasets";
import { mse, r2Score } from "deepbox/metrics";
import { Lasso, LinearRegression, Ridge } from "deepbox/ml";
import { StandardScaler, trainTestSplit } from "deepbox/preprocess";

console.log("=== Ridge & Lasso Regression ===\n");

// Load diabetes dataset for regression
const diabetes = loadDiabetes();
console.log(`Dataset: ${diabetes.data.shape[0]} samples, ${diabetes.data.shape[1]} features\n`);

// Split data into training and testing sets
const [X_train, X_test, y_train, y_test] = trainTestSplit(diabetes.data, diabetes.target, {
  testSize: 0.2,
  randomState: 42,
});

// Scale features so the penalty treats all coefficients fairly
const scaler = new StandardScaler();
scaler.fit(X_train);
const X_train_scaled = scaler.transform(X_train);
const X_test_scaled = scaler.transform(X_test);

console.log("Training models...\n");

// Train models with different penalties and strengths
const models = [
  { name: "Linear Regression", model: new LinearRegression() },
  { name: "Ridge (α=0.1)", model: new Ridge({ alpha: 0.1 }) },
  // Ridge adds a penalty proportional to the square of the coefficients
  { name: "Ridge (α=1.0)", model: new Ridge({ alpha: 1.0 }) },
  { name: "Ridge (α=10.0)", model: new Ridge({ alpha: 10.0 }) },
  { name: "Lasso (α=0.1)", model: new Lasso({ alpha: 0.1 }) },
  // Lasso adds a penalty proportional to the absolute value of the coefficients
  { name: "Lasso (α=1.0)", model: new Lasso({ alpha: 1.0 }) },
];

// Compare all models on the held-out test set
console.log("\nComparison:");
console.log("-".repeat(50));

for (const { name, model } of models) {
  model.fit(X_train_scaled, y_train);
  const y_pred = model.predict(X_test_scaled);

  const r2 = r2Score(y_test, y_pred);
  const mseValue = mse(y_test, y_pred);

  console.log(`${name.padEnd(25)} R²: ${r2.toFixed(4)} MSE: ${mseValue.toFixed(2)}`);
}

// Explain when to use each method
console.log("\nKey Differences:");
console.log("• Ridge regression shrinks coefficients smoothly");
console.log("• Lasso can zero out coefficients (feature selection)");

console.log("\n✓ Regularized regression complete!");
Console Output
$ npx tsx 09-ridge-lasso/index.ts
=== Ridge & Lasso Regression ===
Dataset: 442 samples, 10 features
Training models...
Comparison:
--------------------------------------------------
Linear Regression         R²: -0.0248 MSE: 7387.79
Ridge (α=0.1)             R²: -0.0248 MSE: 7387.72
Ridge (α=1.0)             R²: -0.0247 MSE: 7387.05
Ridge (α=10.0)            R²: -0.0238 MSE: 7380.66
Lasso (α=0.1)             R²: -0.0240 MSE: 7382.11
Lasso (α=1.0)             R²: -0.0182 MSE: 7339.68
Key Differences:
• Ridge regression shrinks coefficients smoothly
• Lasso can zero out coefficients (feature selection)
✓ Regularized regression complete!