4.6. Generalization in Classification — Dive into Deep Learning 1.0.3 documentation
d2l.ai

Top Highlights

  • If we ignore the fact that this rate characterizes behavior as the test set size approaches infinity rather than when we possess finite samples, this tells us that if we want our test error ε_D(f) to approximate the population error ε(f) such that one standard deviation corresponds to an interval of ±0.01, then we should collect roughl...
  • chosen after you observed the test set performance of
  • proving uniform convergence,
  • The central question of learning has thus historically been framed as a tradeoff between more flexible (higher variance) model classes that better fit the training data but risk overfitting, versus more rigid (higher bias) model classes that generalize well but risk underfitting.
  • limited practical utility
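The first highlight's sample-size claim follows from treating the test error as the mean of n Bernoulli draws, whose standard deviation is sqrt(p(1−p)/n) and is largest at p = 0.5. A minimal sketch of that worst-case calculation (the function name is illustrative, not from the source):

```python
import math

def samples_for_std(target_std, p=0.5):
    # Test error is the mean of n Bernoulli(p) draws, so its standard
    # deviation is sqrt(p * (1 - p) / n), maximized at p = 0.5.
    # Solve sqrt(p * (1 - p) / n) <= target_std for the smallest integer n.
    return math.ceil(p * (1 - p) / target_std ** 2)

# Worst case: one standard deviation of +/- 0.01 around the population error
print(samples_for_std(0.01))  # → 2500
```

With a looser target of ±0.02, the same formula gives 625 samples, showing the quadratic cost of tightening the interval.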

