The context vector is a compact, fixed-size representation of the input sequence. In early sequence-to-sequence models, such as those based on RNNs or LSTMs, the [[encoder]] produces the context vector, typically as its final hidden state, and the [[decoder]] takes it as input to generate the output sequence. In modern [[transformer]]-based encoder models, the single fixed-size context vector is often replaced by contextualized [[word embeddings]], one per input token.
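A minimal PyTorch sketch of this pattern, with illustrative dimensions and variable names: the encoder's final hidden state acts as the context vector and is used to initialize the decoder.

```python
import torch
import torch.nn as nn

# Illustrative sizes (not from any particular model)
vocab_size, embed_dim, hidden_dim = 1000, 64, 128

embedding = nn.Embedding(vocab_size, embed_dim)
encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

# Encode one source sequence of length 5
src = torch.randint(0, vocab_size, (1, 5))
_, (h_n, c_n) = encoder(embedding(src))

# h_n is the context vector: a fixed-size summary of the whole
# input, independent of the input length. Here it initializes
# the decoder's hidden state.
tgt = torch.randint(0, vocab_size, (1, 7))
dec_out, _ = decoder(embedding(tgt), (h_n, c_n))

print(h_n.shape)      # torch.Size([1, 1, 128]): (layers, batch, hidden_dim)
print(dec_out.shape)  # torch.Size([1, 7, 128])
```

Note that `h_n` has the same shape regardless of the source length; this fixed-size bottleneck is what attention mechanisms and transformer encoders later relax by exposing one contextualized vector per input token.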