Bias and variance are among the deepest concepts in ML, and they drive the decision making of an ML project. Regularization is a solution to the high-variance problem. This was one of the lectures of a full course I taught at the University of Moratuwa, Sri Lanka, in the second half of 2023.
Lecture 7 - Bias, Variance and Regularization, a lecture in the subject module Statistical & Machine Learning
1. DA 5230 – Statistical & Machine Learning
Lecture 7 – Bias, Variance and Regularization
Maninda Edirisooriya
manindaw@uom.lk
2. ML Process
• You split your dataset into two parts
• A large proportion goes to training and the rest to testing
• Then train a model on the training dataset with a suitable learning algorithm
• Once trained, evaluate the model with the test set and get the performance numbers (e.g., accuracy)
• Repeat the Data Collection, EDA, ML Algorithm Selection and Training phases iteratively till you get the expected level of performance
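The split–train–evaluate loop above can be sketched in a few lines of Python; the `train_test_split` helper below is a hypothetical illustration, not code from the lecture:

```python
import random

def train_test_split(data, test_fraction=0.2, seed=0):
    """Shuffle the dataset and split it into training and test parts."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]  # (train, test)

data = list(range(100))          # stand-in for a real dataset
train_set, test_set = train_test_split(data)
print(len(train_set), len(test_set))  # 80 20
```

The model would then be fit on `train_set` only, and the performance numbers reported from `test_set`.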
3. Model Fit
• The same training dataset can be fitted differently by different learning algorithms
• Even for a given algorithm, how well the model explains the given dataset can differ depending on,
• The number of model parameters
• The amount of data used for training
• The number of iterations used for training
• The regularization techniques used (will discuss later)
5. Bias and Variance
• When an ML model cannot make correct predictions due to the simplicity of the model, it is known as a Bias Problem
• When an ML model becomes very good at making predictions on its training dataset but bad (larger error) on real-world data (data unseen during training), it is a Variance Problem
• As the test data represents data unseen by the model, this causes a higher error on the test data
• A good ML model should reduce both the bias and the variance to an acceptable level
6. Bias and Variance as forms of Errors
Source: https://towardsdatascience.com/bias-and-variance-in-linear-models-e772546e0c30
7. Bias – Variance Comparison
Underfitting (i.e. Bias Problem) vs. Overfitting (i.e. Variance Problem):
• Underfitting can happen when the model is not complex enough to capture the dataset (e.g., a small number of parameters for a larger dataset); overfitting can happen when the model is too complex for the dataset (e.g., a large number of parameters for a smaller dataset)
• Underfitting can be due to undertraining (training for too few iterations); overfitting can be due to overtraining
• Underfitting results in lower performance (e.g., lower accuracy); overfitting results in higher performance on the training dataset but much lower performance on the test dataset
• The problem with underfitting is low accuracy; the problem with overfitting is a model that does not generalize to real-world data
9. Bias
• Bias is caused by the model not learning enough of the insights in the dataset
• Either due to the lower expressive power of the model (i.e., a lower number of parameters)
• Or due to a smaller training dataset that does not contain enough information about the data distribution
• When training with an iterative method like Gradient Descent, this may be due to stopping the training process before completion (i.e., before the cost has been reduced sufficiently)
10. Bias
• Bias is defined as,
Bias[f̂(X)] = E[Ŷ] − Y
• Bias can be reduced by,
• Using a better ML algorithm
• Using a larger model (i.e., with more parameters)
• Training for more iterations if training was stopped early
• Using a larger training dataset
• Reducing regularization (if any is applied)
• Example of high bias,
• Using a straight line to model a quadratic polynomial distribution
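This high-bias example can be made concrete: the best least-squares straight line through perfectly quadratic data still carries a large training error that no amount of extra training can remove. A pure-Python sketch (helper names are illustrative):

```python
def fit_line(xs, ys):
    """Closed-form least-squares fit of y = slope*x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def mse(ys, preds):
    return sum((y - p) ** 2 for y, p in zip(ys, preds)) / len(ys)

xs = [-2, -1, 0, 1, 2]
ys = [x ** 2 for x in xs]          # a perfectly quadratic relationship
slope, intercept = fit_line(xs, ys)
line_mse = mse(ys, [slope * x + intercept for x in xs])
print(slope, intercept, line_mse)  # 0.0 2.0 2.8 -- irreducibly high training error
```

The line's error (2.8) is pure bias: a quadratic model would fit these points exactly.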
11. Variance
• Variance is introduced when the model fits the training dataset too closely
• The model can become highly optimized for the dataset it is trained on, including its noise
• As the model fits the noise in the training dataset, it will perform poorly on real-world data that differ from the training set
12. Variance
• Variance is defined as,
Variance[f̂(X)] = E[ (Ŷ − E[Ŷ])² ]
• Variance can be reduced by,
• Using a larger training dataset, as the errors get cancelled out
• Reducing the number of parameters, as the less significant insights (like noise) will not be included in the model
• Dimensionality Reduction and Feature Selection can be used (will be discussed in the future)
• Using Early Stopping to stop training at an optimal point
• Dropout is used in Deep Learning models (not relevant to our subject module ☺)
• Increasing (or introducing, if not present at the moment) regularization
• Example of high variance,
• Using an 8th-degree polynomial to model a linear distribution
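The high-variance example can be sketched as follows, assuming NumPy is available. A degree-8 polynomial has as many parameters as there are training points here, so it interpolates the noise exactly; the simple linear model cannot, which is exactly the overfitting pattern described above:

```python
import numpy as np

# Training data: an underlying linear trend y = x plus a fixed "noise" pattern
noise = [0.2, -0.3, 0.1, 0.4, -0.2, 0.3, -0.1, -0.4, 0.2]
x_train = np.arange(9.0)
y_train = x_train + np.array(noise)

def poly_mse(coeffs, x, y):
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

fit1 = np.polyfit(x_train, y_train, 1)  # simple model: 2 parameters
fit8 = np.polyfit(x_train, y_train, 8)  # 9 parameters for 9 points: fits the noise

train_mse1 = poly_mse(fit1, x_train, y_train)
train_mse8 = poly_mse(fit8, x_train, y_train)

# Held-out points on the true line y = x (no noise)
x_test = np.array([0.5, 3.5, 6.5])
print(train_mse8, "<=", train_mse1)               # near-zero training error for degree 8
print(poly_mse(fit8, x_test, x_test), "vs",
      poly_mse(fit1, x_test, x_test))             # degree 8 typically does worse off the grid
```

The degree-8 training error is essentially zero, but that is memorization of noise, not learning.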
13. Error Composition
• Mean Square Error,
MSE{f̂(X)} = [Bias{Ŷ}]² + Var{Ŷ}
• Error in prediction,
E[(Ŷ − Y)²] = MSE{f̂(X)} + ε
Where ε is the irreducible error
Source: https://www.geeksforgeeks.org/bias-vs-variance-in-machine-learning/
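The decomposition above can be checked numerically: for predictions of a fixed target made by models trained on different samples, the empirical MSE splits exactly into squared bias plus variance. A small pure-Python sketch with made-up numbers:

```python
def decompose_mse(preds, target):
    """Split the empirical MSE around a fixed target into bias^2 + variance."""
    n = len(preds)
    mean_pred = sum(preds) / n
    mse = sum((p - target) ** 2 for p in preds) / n
    bias = mean_pred - target
    variance = sum((p - mean_pred) ** 2 for p in preds) / n
    return mse, bias, variance

# Hypothetical predictions of the same point by models trained on different samples
preds = [2.9, 3.4, 3.1, 3.6, 3.0]
mse, bias, variance = decompose_mse(preds, target=3.5)
print(mse, bias ** 2 + variance)  # equal (up to floating point): 0.158 each
```

This identity is algebraic, so it holds for any set of predictions, not just these numbers.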
14. Bias-Variance Tradeoff
• The ML algorithm, the number of model parameters, the amount of data, the number of training iterations and regularization can all be tuned to try to reduce both bias and variance
• But reducing both at once is generally not possible: when bias is reduced, variance increases, and when variance is reduced, bias increases
• This is known as the Bias-Variance Tradeoff
• Therefore, a good balance between bias and variance is sought to create a better model
15. Early Stopping
• When training with iterative methods like Gradient Descent,
• The training error reduces monotonically due to the increasingly close fit
• But the test error reduces only up to a certain point and then starts to increase again due to the increased variance
• Training can be stopped at the point where the test error is minimum
• This is known as Early Stopping
Source: https://pub.towardsai.net/keras-earlystopping-callback-to-train-the-neural-networks-perfectly-2a3f865148f7
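A minimal sketch of patience-based early stopping over a recorded validation-error curve; the numbers and the `patience` rule are illustrative, not from the lecture:

```python
def early_stopping_epoch(val_errors, patience=2):
    """Return (best_epoch, best_error), stopping once the validation error
    has failed to improve for `patience` consecutive epochs."""
    best_epoch, best_error = 0, float("inf")
    for epoch, err in enumerate(val_errors):
        if err < best_error:
            best_epoch, best_error = epoch, err
        elif epoch - best_epoch >= patience:
            break  # stop training: no improvement for `patience` epochs
    return best_epoch, best_error

# Validation error falls, bottoms out, then rises again as variance takes over
val_errors = [0.9, 0.6, 0.4, 0.35, 0.38, 0.45, 0.55]
print(early_stopping_epoch(val_errors))  # (3, 0.35)
```

In practice the model's weights from the best epoch are kept, discarding the later overfit updates.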
16. Regularization
• When high variance is observed during ML, we may try to find more data to train on. But that may be expensive
• Then we may try to reduce the number of parameters in the model instead. But identifying which parameters to remove may not be obvious
• The next best option is to apply Regularization during the training process
• Regularization is a technique that penalizes parameter weights in the model, assuming the less significant ones mainly capture noise
17. Regularization
• For regularization, a penalty is added to the loss,
Loss := Loss + λ * Σ_{j=1}^{m} |β_j|^k
• Where β_j is the j-th parameter of the model, and k is 1 or 2 in general
• This penalty (or regularization term) has a factor λ known as the Regularization Strength
• The best value for λ is found using Cross Validation (will learn in the future)
• There are 2 common regularization techniques, L1 (Lasso Regression) and L2 (Ridge Regression)
• For L1, take k=1, and for L2 take k=2
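The penalized loss can be written directly as a small helper; the function name and numbers are illustrative, with k selecting between L1 and L2:

```python
def regularized_loss(loss, betas, lam, k):
    """Add the penalty lam * sum(|beta_j|^k) to a base loss (k=1: L1, k=2: L2)."""
    return loss + lam * sum(abs(b) ** k for b in betas)

betas = [3.0, -2.0, 0.5]
print(regularized_loss(10.0, betas, lam=0.1, k=1))  # 10.0 + 0.1*5.5  = approx. 10.55
print(regularized_loss(10.0, betas, lam=0.1, k=2))  # 10.0 + 0.1*13.25 = approx. 11.325
```

Note that larger weights are penalized more, and λ scales how strongly the penalty competes with the data-fit term.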
18. L1 (Lasso) Regression
• The loss function will be, Loss := Loss + λ * Σ_{j=1}^{m} |β_j|
• The penalty is proportional to the sum of the absolute values of the parameter weights
• Selects Features: the less significant parameters end up as exactly zero
• Used when only a few of the existing parameters are believed to be relevant to the model, and the other parameters should be eliminated from the model
• As λ gets larger, more features become zero
• When λ is very large, only the bias β_0 remains non-zero
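In one dimension the L1 feature-selection effect can be made exact: minimizing 0.5*(β − b)² + λ*|β| gives the soft-thresholding rule, under which small coefficients collapse to exactly zero. This is a sketch of the mechanism, not a full Lasso solver:

```python
def soft_threshold(b, lam):
    """Minimizer of 0.5*(beta - b)**2 + lam*abs(beta): small b collapses to 0."""
    if b > lam:
        return b - lam
    if b < -lam:
        return b + lam
    return 0.0

print(soft_threshold(0.3, 0.5))   # 0.0  -- weak coefficient eliminated
print(soft_threshold(2.0, 0.5))   # 1.5  -- strong coefficient shrunk
print(soft_threshold(-2.0, 0.5))  # -1.5
```

This is why raising λ zeroes more and more features: every coefficient whose "signal" |b| falls below λ is cut to exactly zero.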
19. L2 (Ridge) Regression
• The loss function will be, Loss := Loss + λ * Σ_{j=1}^{m} β_j²
• The penalty is proportional to the sum of the squares of the parameter weights
• Weight Decay: reduces the weights of parameters with higher values
• Used when all the parameters are believed to contribute to the model, but the weights of excessively large parameters need to be reduced significantly
• As λ gets larger, all the parameters β_j get reduced but do not become exactly zero
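The corresponding one-dimensional sketch for L2: minimizing 0.5*(β − b)² + 0.5*λ*β² gives β = b / (1 + λ), which shrinks every weight but never makes it exactly zero (contrast with the L1 soft threshold):

```python
def ridge_shrink(b, lam):
    """Minimizer of 0.5*(beta - b)**2 + 0.5*lam*beta**2: shrinks, never zeroes."""
    return b / (1.0 + lam)

print(ridge_shrink(2.0, 1.0))    # 1.0 -- the weight is halved
print(ridge_shrink(0.3, 100.0))  # tiny, but still strictly non-zero
```

So L2 dampens all weights proportionally, which is why it is called weight decay rather than feature selection.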
20. Elastic Net Regression
• Both the L1 and L2 penalties can be used together by weighting each of them, which results in Elastic Net Regression
• This brings some small parameters to zero (due to the L1 effect) and reduces some larger parameters (due to the L2 effect)
• Select α to adjust the balance of the effect between L1 and L2,
Loss := Loss + α * λ * Σ_{j=1}^{m} |β_j| + (1 − α) * λ * Σ_{j=1}^{m} β_j²
Where 0 < α < 1
• Equivalently,
Loss := Loss + λ * [ α * Σ_{j=1}^{m} |β_j| + (1 − α) * Σ_{j=1}^{m} β_j² ]
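A one-dimensional sketch of the combined effect, assuming the penalty λ*(α*|β| + (1 − α)*β²) as above: the minimizer applies L1-style soft-thresholding followed by L2-style shrinkage. This closed form holds for the 1-D case only:

```python
import math

def elastic_net_1d(b, lam, alpha):
    """Minimizer of 0.5*(beta - b)**2 + lam*(alpha*abs(beta) + (1-alpha)*beta**2):
    soft-thresholding (L1 effect) followed by shrinkage (L2 effect)."""
    thresholded = math.copysign(max(abs(b) - lam * alpha, 0.0), b)
    return thresholded / (1.0 + 2.0 * lam * (1.0 - alpha))

print(elastic_net_1d(2.0, 1.0, 0.5))  # (2 - 0.5) / (1 + 1) = 0.75
print(elastic_net_1d(0.3, 1.0, 0.5))  # 0.0 -- the L1 part zeroes small coefficients
```

With α near 1 the behavior approaches Lasso; with α near 0 it approaches Ridge.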
21. Linear Regression with L1
As the cost (total loss) function for Linear Regression is the Mean Square Error (MSE), after L1 regularization,
J(β) = MSE + λ * Σ_{j=1}^{m} |β_j|
J(β) = (1/2) * Σ_{i=1}^{n} (Y_i − Ŷ_i)² + λ * Σ_{j=1}^{m} |β_j|
∂J(β)/∂β_j = Σ_{i=1}^{n} (Ŷ_i − Y_i) * X_{i,j} + λ * sign(β_j)
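The gradient above can be dropped into plain (sub)gradient descent. A pure-Python sketch on a tiny made-up dataset, using sign(β_j) as the subgradient of |β_j|:

```python
def sign(x):
    return (x > 0) - (x < 0)

def l1_cost(X, Y, beta, lam):
    """0.5 * sum of squared residuals + lam * sum of |beta_j|."""
    preds = [sum(b * x for b, x in zip(beta, row)) for row in X]
    return (0.5 * sum((y - p) ** 2 for y, p in zip(Y, preds))
            + lam * sum(abs(b) for b in beta))

def l1_gd_step(X, Y, beta, lam, lr):
    """One step of beta_j -= lr * (sum_i (pred_i - Y_i) * X_ij + lam * sign(beta_j))."""
    preds = [sum(b * x for b, x in zip(beta, row)) for row in X]
    return [b - lr * (sum((p - y) * row[j] for p, y, row in zip(preds, Y, X))
                      + lam * sign(b))
            for j, b in enumerate(beta)]

X = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]  # first column is the intercept feature
Y = [1.0, 2.1, 2.9]
beta, lam, lr = [0.0, 0.0], 0.1, 0.05
start = l1_cost(X, Y, beta, lam)
for _ in range(200):
    beta = l1_gd_step(X, Y, beta, lam, lr)
end = l1_cost(X, Y, beta, lam)
print(end < start)  # True: the penalized cost decreases
```

Strictly, |β_j| is not differentiable at zero; using sign(β_j) (with sign(0) = 0) is the standard subgradient workaround.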
22. Linear Regression with L2
As the cost function for Linear Regression is the Mean Square Error (MSE), after L2 regularization,
J(β) = MSE + (λ/2) * Σ_{j=1}^{m} β_j²
J(β) = (1/2) * Σ_{i=1}^{n} (Y_i − Ŷ_i)² + (λ/2) * Σ_{j=1}^{m} β_j²
∂J(β)/∂β_j = Σ_{i=1}^{n} (Ŷ_i − Y_i) * X_{i,j} + λ * β_j
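A matching sketch for the L2 gradient, comparing an unregularized fit with a ridge fit on tiny made-up data to show the weight shrinkage:

```python
def ridge_gd_step(X, Y, beta, lam, lr):
    """One step using d J / d beta_j = sum_i (pred_i - Y_i) * X_ij + lam * beta_j."""
    preds = [sum(b * x for b, x in zip(beta, row)) for row in X]
    return [b - lr * (sum((p - y) * row[j] for p, y, row in zip(preds, Y, X))
                      + lam * b)
            for j, b in enumerate(beta)]

X = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]  # first column is the intercept feature
Y = [1.0, 2.0, 3.0]                       # exactly y = 1 + x
beta_plain, beta_ridge = [0.0, 0.0], [0.0, 0.0]
for _ in range(500):
    beta_plain = ridge_gd_step(X, Y, beta_plain, lam=0.0, lr=0.05)
    beta_ridge = ridge_gd_step(X, Y, beta_ridge, lam=5.0, lr=0.05)
print(beta_plain)                               # close to [1.0, 1.0]
print(abs(beta_ridge[1]) < abs(beta_plain[1]))  # True: the slope is shrunk toward zero
```

Because the update subtracts lr*λ*β_j each step, L2 gradient descent is exactly the "weight decay" named above.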
23. Logistic Regression with L1 & L2
Though the cost function for Logistic Regression is the Cross Entropy function, the regularized gradients take the same form as in Linear Regression (the difference lies in Ŷ = f̂(X), which is the sigmoid output for Logistic Regression).
L1:
J(β) = Cross Entropy + λ * Σ_{j=1}^{m} |β_j|
∂J(β)/∂β_j = Σ_{i=1}^{n} (Ŷ_i − Y_i) * X_{i,j} + λ * sign(β_j)
L2:
J(β) = Cross Entropy + (λ/2) * Σ_{j=1}^{m} β_j²
∂J(β)/∂β_j = Σ_{i=1}^{n} (Ŷ_i − Y_i) * X_{i,j} + λ * β_j
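A pure-Python sketch of L2-regularized logistic regression using the gradient form above, with predictions passed through the sigmoid. The tiny dataset and the choice to leave the intercept unpenalized are illustrative assumptions:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def cross_entropy(X, Y, beta):
    preds = [sigmoid(sum(b * x for b, x in zip(beta, row))) for row in X]
    return -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for y, p in zip(Y, preds))

def logistic_l2_step(X, Y, beta, lam, lr):
    """Same gradient form as linear regression, but predictions go through
    the sigmoid. The intercept (j = 0) is left unpenalized here."""
    preds = [sigmoid(sum(b * x for b, x in zip(beta, row))) for row in X]
    return [b - lr * (sum((p - y) * row[j] for p, y, row in zip(preds, Y, X))
                      + (lam * b if j > 0 else 0.0))
            for j, b in enumerate(beta)]

X = [[1.0, -2.0], [1.0, -1.0], [1.0, 1.0], [1.0, 2.0]]  # intercept + one feature
Y = [0.0, 0.0, 1.0, 1.0]
beta = [0.0, 0.0]
start = cross_entropy(X, Y, beta)
for _ in range(300):
    beta = logistic_l2_step(X, Y, beta, lam=0.1, lr=0.1)
print(cross_entropy(X, Y, beta) < start)  # True: the fit improves
```

On linearly separable data like this, the L2 penalty also keeps the weights finite, where unregularized logistic regression would push them toward infinity.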
24. One Hour Homework
• Officially we have one more hour to spend after the end of the lectures
• Therefore, for this week's extra hour you have homework
• Bias and Variance are very important concepts in ML, and regularization is widely used, especially in Deep Learning
• Go through the slides and get a clear understanding of the Bias-Variance concept, and get familiar with regularization
• Refer to external sources to clarify any ambiguities
• Good Luck!