Data representations for neural networks: Summary and Q&A
TL;DR
Learn about the different types of tensors used to represent data in neural networks, including scalar, vector, matrix, and higher rank tensors, and the key attributes that describe them.
Questions & Answers
Q: What are tensors in the context of neural networks?
Tensors are multidimensional containers for numerical data used in neural networks. They can be scalar, vector, matrix, or higher rank tensors, representing different types of data.
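As a rough sketch of these ranks using NumPy (a common array library; here `np.ndarray` stands in for the tensor):

```python
import numpy as np

scalar = np.array(12)                # rank 0: a single number
vector = np.array([1, 2, 3])         # rank 1: a list of numbers
matrix = np.array([[1, 2], [3, 4]])  # rank 2: rows and columns
tensor3 = np.zeros((2, 3, 4))        # rank 3: a "cube" of numbers

# ndim reports the rank of each tensor
print(scalar.ndim, vector.ndim, matrix.ndim, tensor3.ndim)  # 0 1 2 3
```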
Q: How are vector tensors different from matrix tensors?
Vector tensors are rank 1 tensors and represent a list of numbers, while matrix tensors are rank 2 tensors and represent a collection of rows and columns. Vector tensors have one axis, while matrix tensors have two axes.
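A minimal illustration of the one-axis vs. two-axis distinction (the variable names here are arbitrary):

```python
import numpy as np

vector = np.array([5, 10, 15])     # one axis: 3 entries
matrix = np.array([[5, 10, 15],
                   [20, 25, 30]])  # two axes: 2 rows, 3 columns

print(vector.ndim, vector.shape)  # 1 (3,)
print(matrix.ndim, matrix.shape)  # 2 (2, 3)
```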
Q: How are higher rank tensors represented?
Higher rank tensors can be built by stacking lower rank tensors. For example, a rank 3 tensor can be visualized as a cube of numbers: a stack of matrices, where each matrix is one slice along the new third axis.
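The "stack of matrices" picture can be sketched in NumPy with `np.stack` (example values chosen for illustration):

```python
import numpy as np

# Three 2x2 matrices stacked along a new first axis form a rank 3 tensor.
m1 = np.array([[1, 2], [3, 4]])
m2 = np.array([[5, 6], [7, 8]])
m3 = np.array([[9, 10], [11, 12]])
cube = np.stack([m1, m2, m3])

print(cube.ndim)   # 3
print(cube.shape)  # (3, 2, 2)
print(cube[0])     # the first matrix slice of the cube
```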
Q: What are the three key attributes of a tensor?
The three key attributes of a tensor are the number of axes (rank), the shape, and the data type. The rank is the number of axes, the shape gives the number of entries along each axis, and the data type specifies the kind of values stored in the tensor (for example, float32 or int64).
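In NumPy these three attributes are exposed as `ndim`, `shape`, and `dtype` (a sketch with an arbitrary example tensor):

```python
import numpy as np

x = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

print(x.ndim)   # number of axes (rank): 2
print(x.shape)  # entries along each axis: (2, 3)
print(x.dtype)  # type of the stored values: float64
```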
Summary & Key Takeaways

Tensors are containers for numerical data used in neural networks, with different types such as scalar, vector, matrix, and higher rank tensors.

Scalar tensors are rank 0 tensors and represent a single number.

Vector tensors are rank 1 tensors and represent a list of numbers, with each number being an entry in the vector.

Matrix tensors are rank 2 tensors and represent a collection of rows and columns, with each element being a value in the matrix.