6. REGULARIZATION STRATEGIES (MEDIUM IMPORTANCE - Sixth Priority) (4/5)
│
├── Linear Model Regularization (4/5)
│   ├── L1 Regularization (Lasso) (5/5)
│   │   ├── sklearn.linear_model.Lasso
│   │   └── ✓ Feature selection property, drives weights exactly to zero
│   ├── L2 Regularization (Ridge) (4/5)
│   │   ├── sklearn.linear_model.Ridge
│   │   └── ✓ Shrinks coefficients, prevents overfitting, no explicit selection
│   ├── Elastic Net (L1+L2) (4/5)
│   │   ├── sklearn.linear_model.ElasticNet
│   │   └── ✓ Hybrid of L1 and L2, handles correlated features well
│   └── Group Lasso (2/5)
│       ├── group_lasso.GroupLasso (scikit-learn-contrib)
│       └── ✓ Selects/eliminates groups of features together
│
├── Tree-Based Regularization (3/5)
│   ├── Max Depth Control (4/5)
│   │   ├── sklearn.tree.DecisionTreeClassifier(max_depth)
│   │   ├── xgboost.XGBClassifier(max_depth)
│   │   └── ✓ Limits tree growth, prevents overfitting
│   ├── Min Samples Split/Leaf (4/5)
│   │   ├── sklearn.tree.DecisionTreeClassifier(min_samples_split)
│   │   ├── sklearn.tree.DecisionTreeClassifier(min_samples_leaf)
│   │   └── ✓ Controls the minimum samples required to split a node or form a leaf
│   └── Feature Subsetting (e.g., Random Forest) (3/5)
│       ├── sklearn.ensemble.RandomForestClassifier(max_features)
│       └── ✓ Randomly selects a subset of features at each split
│
└── Neural Network Regularization (3/5)
    ├── Dropout (4/5)
    │   ├── tensorflow.keras.layers.Dropout
    │   ├── torch.nn.Dropout
    │   └── ✓ Randomly sets a fraction of input units to 0 at each update
    ├── Batch Normalization (4/5)
    │   ├── tensorflow.keras.layers.BatchNormalization
    │   ├── torch.nn.BatchNorm1d
    │   └── ✓ Normalizes layer inputs, reduces internal covariate shift
    └── Weight Decay (L1/L2 penalty) (3/5)
        ├── tensorflow.keras.regularizers.l1_l2
        ├── torch.optim.Adam(weight_decay)
        └── ✓ Adds a penalty on weight magnitudes, equivalent to L1/L2 regularization
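The L1/L2 contrast in the Linear Model branch can be seen directly on synthetic data. This is a minimal sketch (data, `alpha` values, and feature counts are illustrative choices, not from the outline): Lasso zeroes out the noise features entirely, while Ridge only shrinks them.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge, ElasticNet

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
# Only the first 3 of 10 features carry signal; the rest are noise.
y = 2 * X[:, 0] + X[:, 1] - X[:, 2] + rng.normal(scale=0.1, size=100)

lasso = Lasso(alpha=0.1).fit(X, y)                    # L1 penalty
ridge = Ridge(alpha=1.0).fit(X, y)                    # L2 penalty
enet = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)  # blend of both

# L1 drives irrelevant coefficients exactly to zero (implicit feature
# selection); L2 shrinks them toward zero but keeps them nonzero.
print("Lasso zero coefs:", int(np.sum(lasso.coef_ == 0)))
print("Ridge zero coefs:", int(np.sum(ridge.coef_ == 0)))
```

`l1_ratio` in `ElasticNet` interpolates between the two behaviors: 1.0 recovers Lasso, 0.0 approaches Ridge.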
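The Tree-Based branch combines three knobs: `max_depth`, `min_samples_split`/`min_samples_leaf`, and (for forests) `max_features`. A short sketch on synthetic data (dataset and parameter values are arbitrary choices for illustration): an unconstrained tree memorizes the training set, while the depth and leaf-size limits force it to generalize.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Unconstrained tree: grows until every training point is classified.
full = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)

# Regularized tree: capped depth, minimum samples per leaf.
pruned = DecisionTreeClassifier(
    max_depth=4, min_samples_leaf=10, random_state=0
).fit(X_tr, y_tr)

print("full   train/test:", full.score(X_tr, y_tr), full.score(X_te, y_te))
print("pruned train/test:", pruned.score(X_tr, y_tr), pruned.score(X_te, y_te))

# Random forests add feature subsetting on top: each split considers only
# max_features candidate features, decorrelating the trees.
rf = RandomForestClassifier(
    n_estimators=100, max_features="sqrt", random_state=0
).fit(X_tr, y_tr)
```

The same `max_depth` idea carries over to boosted trees via `xgboost.XGBClassifier(max_depth=...)`.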
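The dropout entry ("randomly sets a fraction of input units to 0 at each update") is what `keras.layers.Dropout` and `torch.nn.Dropout` implement. The mechanism itself fits in a few lines of NumPy; this is an illustrative sketch of inverted dropout (the function name and demo values are my own, not from any framework):

```python
import numpy as np

def dropout(x, p, rng, training=True):
    """Inverted dropout: zero each unit with probability p and rescale the
    survivors by 1/(1-p), so the expected activation is the same in
    training and evaluation mode (where the layer is a no-op)."""
    if not training or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p   # keep each unit with prob 1 - p
    return x * mask / (1.0 - p)

rng = np.random.default_rng(0)
x = np.ones((4, 8))
y = dropout(x, p=0.5, rng=rng)
# Each entry of y is either 0.0 (dropped) or 2.0 (kept and rescaled by 1/0.5).
```

The rescaling is why framework dropout layers need no correction at inference time: calling `dropout(x, p, rng, training=False)` simply returns `x`, exactly as `model.eval()` in PyTorch or `training=False` in Keras disables the layer.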