# 19. Saddle Points Continued, Maxmin Principle | Summary and Q&A

19.8K views
May 16, 2019
by
MIT OpenCourseWare

## TL;DR

A lecture covering saddle points and the maxmin principle via the Rayleigh quotient, followed by basic statistics — mean, variance, and the covariance matrix — with connections to gradient descent in deep learning.


### Q: What are saddle points and why are they important in deep learning?

Saddle points are stationary points of a function: all first derivatives are zero, yet the point is neither a local maximum nor a local minimum — the Hessian (matrix of second derivatives) has both positive and negative eigenvalues. They matter in deep learning because gradient descent, used to find the minimum of a total cost function, can slow down or stall near them.
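The Hessian test above can be sketched numerically. This is a minimal illustration (not from the lecture) using the standard example f(u, v) = u² − v², whose origin is a saddle point:

```python
import numpy as np

# Classify the stationary point of f(u, v) = u**2 - v**2 at the origin.
# The gradient (2u, -2v) vanishes there, and the Hessian is constant:
H = np.array([[2.0, 0.0],
              [0.0, -2.0]])

eigvals = np.linalg.eigvalsh(H)  # eigenvalues in ascending order

# Mixed-sign eigenvalues mean a saddle: f increases along one
# eigenvector direction and decreases along the other.
is_saddle = eigvals.min() < 0 < eigvals.max()
print(is_saddle)  # True
```

A positive-definite Hessian (all eigenvalues positive) would instead indicate a local minimum — the case gradient descent is searching for.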

### Q: What is the maximum value of the Rayleigh quotient and how can it be achieved?

The Rayleigh quotient for a symmetric matrix S is R(x) = xᵀSx / xᵀx. In the example given, its maximum value is 5, achieved by setting the vector (u, v, w) to (1, 0, 0).

### Q: What is the minimum value of the Rayleigh quotient and how can it be achieved?

The minimum value of the Rayleigh quotient in the example given is 1. It can be achieved by setting the vector (u, v, w) to (0, 0, 1).

### Q: What is the relationship between eigenvalues and the Rayleigh quotient?

In the example, the eigenvalues of the symmetric matrix in the Rayleigh quotient are exactly its maximum, saddle, and minimum values, and the corresponding eigenvectors are the vectors at which those values are attained.
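The eigenvalue–Rayleigh-quotient correspondence can be checked directly. This sketch assumes a diagonal matrix with entries 5, 3, 1 — the 5 and 1 match the max and min quoted above, while the middle entry 3 is an illustrative choice for the saddle value:

```python
import numpy as np

# Assumed example matrix: diagonal entries 5, 3, 1 (the 3 is illustrative).
S = np.diag([5.0, 3.0, 1.0])

def rayleigh(x):
    """Rayleigh quotient R(x) = x.T @ S @ x / (x.T @ x)."""
    x = np.asarray(x, dtype=float)
    return (x @ S @ x) / (x @ x)

# Max = largest eigenvalue, at its eigenvector (1, 0, 0);
# min = smallest eigenvalue, at (0, 0, 1);
# the middle eigenvector gives the saddle value.
print(rayleigh([1, 0, 0]))  # 5.0
print(rayleigh([0, 0, 1]))  # 1.0
print(rayleigh([0, 1, 0]))  # 3.0
```

Any other unit vector gives a value strictly between 1 and 5, which is the maxmin picture: the quotient is bounded by the extreme eigenvalues.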

## Summary & Key Takeaways

• The lecture begins with a discussion on saddle points and their relevance in deep learning, specifically in finding the minimum of a total cost function using gradient descent.

• The topic then moves on to basic ideas of statistics, such as mean and variance, and how they are used in analyzing data.

• The concept of covariance is introduced, along with its matrix representation, and its importance in understanding the relationship between multiple experiments or variables.
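The statistical quantities in the bullets above — mean, variance, and the covariance matrix for multiple experiments — can be sketched with numpy. The data values here are made up purely for illustration:

```python
import numpy as np

# Two experiments, four trials each (rows = experiments, columns = trials).
# Experiment 2 is exactly twice experiment 1, so they are perfectly correlated.
data = np.array([[1.0, 2.0, 3.0, 4.0],
                 [2.0, 4.0, 6.0, 8.0]])

mean = data.mean(axis=1)  # mean of each experiment
C = np.cov(data)          # 2x2 sample covariance matrix (rows as variables)

# Diagonal entries are the variances of the two experiments; the
# off-diagonal entry measures how they vary together.
print(mean)
print(C)
```

Because row 2 is 2 × row 1, each covariance entry scales accordingly: C[0, 1] = 2·C[0, 0] and C[1, 1] = 4·C[0, 0], so the matrix is singular — the signature of perfectly dependent experiments.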