AI Blog

by Michele Laurelli

Regularization

/ˌrɛɡjʊləraɪˈzeɪʃən/
Technique
Definition

Techniques to prevent overfitting by adding constraints or penalties to the model during training.

Regularization methods include L1 (Lasso), L2 (Ridge), dropout, and early stopping. They reduce model complexity and improve generalization to new data.
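As a minimal sketch of how an L2 penalty works in practice, the example below fits ridge regression with plain numpy using the closed form w = (XᵀX + λI)⁻¹Xᵀy; the synthetic data, the `ridge_fit` helper, and the λ values are illustrative choices, not a prescribed recipe.

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Ridge (L2-regularized) linear regression via the closed form
    w = (X^T X + lam * I)^{-1} X^T y. lam = 0 gives ordinary least squares."""
    n_features = X.shape[1]
    A = X.T @ X + lam * np.eye(n_features)
    return np.linalg.solve(A, X.T @ y)

# Synthetic data for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ true_w + 0.1 * rng.normal(size=100)

w_plain = ridge_fit(X, y, lam=0.0)   # unregularized fit
w_ridge = ridge_fit(X, y, lam=10.0)  # penalized fit

# The L2 penalty shrinks the weight vector toward zero.
print(np.linalg.norm(w_ridge) < np.linalg.norm(w_plain))
```

Increasing λ trades a little bias for lower variance: the weights shrink, and the fitted model becomes less sensitive to noise in the training data.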

Examples

1. L2 regularization in linear regression
2. Dropout in neural networks
3. Early stopping to prevent overtraining
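To make the dropout example above concrete, here is a minimal numpy sketch of inverted dropout: during training each activation is zeroed with probability p and the survivors are rescaled by 1/(1−p) so the layer's expected output is unchanged; at inference the layer is a no-op. The `dropout` function name and shapes are illustrative, not from any particular framework.

```python
import numpy as np

def dropout(activations, p, rng, training=True):
    """Inverted dropout: zero each unit with probability p during training
    and scale survivors by 1/(1-p); pass activations through unchanged
    at inference time."""
    if not training or p == 0.0:
        return activations
    mask = rng.random(activations.shape) >= p  # keep each unit with prob 1-p
    return activations * mask / (1.0 - p)

rng = np.random.default_rng(42)
a = np.ones((4, 8))

train_out = dropout(a, p=0.5, rng=rng)             # some units zeroed, rest scaled to 2.0
eval_out = dropout(a, p=0.5, rng=rng, training=False)  # identical to the input
```

Randomly silencing units prevents co-adaptation: no single unit can rely on a specific partner being present, which acts as an implicit ensemble over many thinned networks.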

Michele Laurelli - AI Research & Engineering