Linear Regression

Table of contents

  1. Introduction
  2. Intuition
  3. Why is it called Linear Regression?
  4. Assumptions in Linear Regression
  5. Model Representation
  6. Loss function in Linear Regression
  7. Optimization: Gradient Descent
  8. Overfitting and Underfitting
  9. Bias-Variance tradeoff
  10. Regularization
  11. Case Study on Boston House Prediction Dataset
  12. Conclusion

1. Introduction

2. Intuition

3. Why is it called Linear Regression?

4. Assumptions in Linear Regression

4.1 Homoscedasticity:

4.2 Linear relationship:

4.3 No Multicollinearity:

4.4 Multivariate normal distribution:

4.5 No autocorrelation between the errors:

5. Model Representation

The familiar equation of a straight line:

y = mx + c

In machine-learning notation, the single-feature model is written as:

ŷ = W1X1 + W0

where the weight W1 plays the role of the slope m, the bias W0 plays the role of the intercept c, and ŷ is the predicted value.
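As a sketch of how W1 and W0 can be estimated, here is the ordinary least-squares solution for a single feature; the data values below are made up for illustration, not taken from the article:

```python
import numpy as np

# Hypothetical toy data: one feature X1 and a target y.
X1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.2, 5.9, 8.1, 9.8])

# Closed-form least-squares estimates of the weight W1 and bias W0.
W1 = np.sum((X1 - X1.mean()) * (y - y.mean())) / np.sum((X1 - X1.mean()) ** 2)
W0 = y.mean() - W1 * X1.mean()

y_hat = W1 * X1 + W0  # predictions from the fitted line
```

The same fit can of course be obtained with a library such as scikit-learn; the closed form is shown here only to make the roles of W1 and W0 explicit.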

6. Loss function in Linear Regression
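Linear regression is standardly fit by minimizing the mean squared error (MSE), the average of the squared residuals y − ŷ. A minimal sketch with made-up values:

```python
import numpy as np

# Hypothetical example values: true targets and model predictions.
y_true = np.array([3.0, 5.0, 7.0])
y_pred = np.array([2.5, 5.0, 8.0])

# Mean squared error: average of squared residuals.
mse = np.mean((y_true - y_pred) ** 2)
```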

7. Optimization: Gradient Descent
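A minimal sketch of batch gradient descent on the MSE for the single-feature model ŷ = W1X1 + W0; the toy data, learning rate, and iteration count are assumptions chosen for illustration:

```python
import numpy as np

# Hypothetical toy data lying exactly on the line y = 2x.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 4.0, 6.0, 8.0, 10.0])

W1, W0 = 0.0, 0.0  # initial weight and bias
lr = 0.01          # learning rate (assumed)
n = len(x)

for _ in range(5000):
    y_hat = W1 * x + W0
    # Gradients of the MSE with respect to W1 and W0.
    dW1 = (-2.0 / n) * np.sum((y - y_hat) * x)
    dW0 = (-2.0 / n) * np.sum(y - y_hat)
    W1 -= lr * dW1
    W0 -= lr * dW0
```

On this data the iterates converge toward the true slope 2 and intercept 0.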

8. Overfitting and Underfitting

9. Bias-Variance tradeoff

10. Regularization
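One common form of regularization for linear regression is ridge (L2), which adds a penalty α·Σw² to the loss and shrinks the weights. A hedged sketch using the closed form (XᵀX + αI)⁻¹Xᵀy on made-up data; in practice the bias column is usually left unpenalized, and it is penalized here only to keep the sketch short:

```python
import numpy as np

# Hypothetical toy data; the first column of X is all-ones for the bias term.
X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([2.0, 4.0, 6.0])
alpha = 0.1  # regularization strength (assumed value)

# Ridge closed form: solve (X^T X + alpha * I) w = X^T y.
w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)
# The unregularized fit on this data would be w = [0, 2];
# the ridge penalty shrinks the slope slightly below 2.
```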

11. Case study on Boston House Prediction Dataset

12. Conclusion

Anubhav Gupta

Machine Learning, Computer Vision and Deep Learning Enthusiast