The Transformer decoder stack processes the encoder output to generate the sentence "The capital of India is New Delhi" step by step.

This image shows the internal structure and data flow of a Transformer decoder block used in natural language processing. Target tokens first pass through an output embedding layer and positional encoding; each decoder block then applies masked self-attention (so a position can attend only to earlier positions), encoder-decoder attention over the encoder output, and a position-wise feed-forward network, with a residual connection and layer normalization after each sub-layer. The process begins with the output from the encoder and ends with the final generated sentence: "The capital of India is New Delhi." The visual demonstrates how each component in the decoder contributes to generating coherent, context-aware language predictions.
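To make the data flow concrete, here is a minimal sketch of one decoder block in PyTorch. It is illustrative rather than a reproduction of the diagram: the class name DecoderBlock, the hyperparameters (d_model=512, 8 heads, d_ff=2048), and the toy shapes in the usage snippet are all assumptions chosen to mirror the sub-layers described above.

```python
import torch
import torch.nn as nn

class DecoderBlock(nn.Module):
    """One Transformer decoder block: masked self-attention,
    encoder-decoder (cross) attention, and a feed-forward network,
    each followed by a residual connection and layer normalization."""

    def __init__(self, d_model=512, n_heads=8, d_ff=2048, dropout=0.1):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads,
                                               dropout=dropout, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads,
                                                dropout=dropout, batch_first=True)
        self.ff = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.norm3 = nn.LayerNorm(d_model)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x, enc_out, causal_mask):
        # Masked self-attention: each position attends only to earlier positions.
        attn, _ = self.self_attn(x, x, x, attn_mask=causal_mask)
        x = self.norm1(x + self.dropout(attn))
        # Cross-attention: decoder queries attend over the encoder output.
        attn, _ = self.cross_attn(x, enc_out, enc_out)
        x = self.norm2(x + self.dropout(attn))
        # Position-wise feed-forward network.
        x = self.norm3(x + self.dropout(self.ff(x)))
        return x

# Toy usage (hypothetical shapes): batch of 1, target length 7
# ("The capital of India is New Delhi"), source length 10.
d_model, tgt_len, src_len = 512, 7, 10
block = DecoderBlock(d_model)
tgt = torch.randn(1, tgt_len, d_model)      # embedded + positionally encoded tokens
enc_out = torch.randn(1, src_len, d_model)  # output of the encoder stack
# Boolean causal mask: True entries are positions a token may NOT attend to.
mask = torch.triu(torch.ones(tgt_len, tgt_len, dtype=torch.bool), diagonal=1)
out = block(tgt, enc_out, mask)
print(out.shape)  # torch.Size([1, 7, 512])
```

In a full decoder, several such blocks are stacked, and a final linear layer plus softmax over the vocabulary turns the last block's output into next-token probabilities, producing the sentence one token at a time.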
