2.3. Linear Algebra — Dive into Deep Learning 1.0.0-beta0 documentation

d2l.ai



- By default, invoking the sum function reduces a tensor along all of its axes, eventually producing a scalar. Our libraries also allow us to specify the axes along which the tensor should be reduced
- The dot product of two vectors is a sum over the products of the elements at the same position.
- While we do not want to get too far ahead of ourselves, we can plant some intuition already about why these concepts are useful. In deep learning, we are often trying to solve optimization problems: maximize the probability assigned to observed data; maximize the revenue associated with a recommender model; minimize the distance between predictions...
- In Python, as in most programming languages, vector indices start at 0, also known as zero-based indexing, whereas in linear algebra subscripts begin at 1 (one-based indexing).
- Matrices are useful for representing datasets. Typically, rows correspond to individual records and columns correspond to distinct attributes.
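The first highlight describes reducing a tensor along all axes versus along a chosen axis. A minimal sketch of that behavior, using NumPy in place of the book's deep learning tensor libraries (the `sum`/`axis` semantics are the same):

```python
import numpy as np

# A 2x3 matrix to reduce.
A = np.arange(6, dtype=float).reshape(2, 3)  # [[0., 1., 2.], [3., 4., 5.]]

total = A.sum()           # reduce along all axes -> the scalar 15.0
col_sums = A.sum(axis=0)  # collapse axis 0 (rows) -> [3., 5., 7.]
row_sums = A.sum(axis=1)  # collapse axis 1 (columns) -> [3., 12.]
```

Note that reducing along an axis removes that axis from the result's shape: summing a `(2, 3)` matrix along axis 0 yields a length-3 vector.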
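The dot-product highlight ("a sum over the products of the elements at the same position") can be checked directly, again sketched with NumPy rather than the book's tensor library:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

# Dot product: multiply elementwise, then sum the products.
manual = (x * y).sum()  # 1*4 + 2*5 + 3*6 = 32.0
dot = np.dot(x, y)      # the built-in dot product gives the same value
```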
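The last two highlights fit together: a matrix can hold a dataset with rows as records and columns as attributes, and Python's zero-based indexing selects them. A sketch with a small hypothetical dataset (the house attributes below are invented for illustration):

```python
import numpy as np

# Hypothetical dataset: each row is one house record; columns are
# attributes (area in square meters, number of rooms, age in years).
data = np.array([
    [120.0, 3.0, 10.0],
    [ 85.0, 2.0,  4.0],
    [150.0, 4.0, 25.0],
])

record = data[1]    # zero-based: index 1 is the *second* record
ages = data[:, 2]   # slicing one column yields one attribute for all records
```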

Glasp is a social web highlighter that lets people highlight and organize quotes and thoughts from the web, and access other like-minded people’s learning.