10.6. The Encoder–Decoder Architecture — Dive into Deep Learning 1.0.3 documentation
d2l.ai

Top Highlights

  • Encoder–decoder architectures can handle inputs and outputs that both consist of variable-length sequences and thus are suitable for sequence-to-sequence problems such as machine translation. The encoder takes a variable-length sequence as input and transforms it into a state with a fixed shape. The decoder maps the encoded state of a fixed shape to a variable-length sequence.
  • The standard approach to handling this sort of data is to design an encoder–decoder architecture (Fig. 10.6.1) consisting of two major components: an encoder that takes a variable-length sequence as input, and a decoder that acts as a conditional language model, taking in the encoded input and the leftwards context of the target sequence and predicting the subsequent tokens in the target sequence.
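
To make the highlighted interface concrete, below is a minimal sketch of an encoder–decoder base class in PyTorch (the framework used throughout Dive into Deep Learning). The class and method names here (Encoder, Decoder, init_state, EncoderDecoder) follow the spirit of the book's base interface but are illustrative rather than a verbatim copy of the section's code.

    import torch
    from torch import nn


    class Encoder(nn.Module):
        """Maps a variable-length input sequence to an encoded state."""
        def forward(self, X, *args):
            raise NotImplementedError


    class Decoder(nn.Module):
        """Acts as a conditional language model over the target sequence."""
        def init_state(self, enc_all_outputs, *args):
            # Convert the encoder's output into the decoder's initial state.
            raise NotImplementedError

        def forward(self, X, state):
            raise NotImplementedError


    class EncoderDecoder(nn.Module):
        """Chains an encoder and a decoder for sequence-to-sequence learning."""
        def __init__(self, encoder, decoder):
            super().__init__()
            self.encoder = encoder
            self.decoder = decoder

        def forward(self, enc_X, dec_X, *args):
            enc_all_outputs = self.encoder(enc_X, *args)
            dec_state = self.decoder.init_state(enc_all_outputs, *args)
            # The decoder conditions on the encoded input (via dec_state) and
            # on the leftwards context of the target sequence (dec_X).
            return self.decoder(dec_X, dec_state)

A concrete sequence-to-sequence model would then subclass Encoder and Decoder (for example with recurrent layers) and pass both to EncoderDecoder, with init_state converting the encoder's output into the decoder's initial state.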
