Simple diagram showing an Encoder-Decoder transformer model converting a question into a text-based answer.

This image illustrates the fundamental flow of a Transformer model architecture. The Encoder processes the input text “What is the capital of India,” and the Decoder generates the output “The capital of India is New Delhi.” This encoder-decoder design is the original Transformer architecture, in which understanding and generation are handled by distinct yet connected components; many modern large language models such as ChatGPT build on the same mechanism in a decoder-only variant.
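To make the flow concrete, here is a minimal sketch of the same question-to-answer pipeline, assuming the Hugging Face transformers library and a publicly available encoder-decoder model (google/flan-t5-small is used here as one example; the article does not name a specific model or library):

```python
# Minimal encoder-decoder (seq2seq) sketch: the encoder reads the whole
# question, the decoder generates the answer token by token while
# attending to the encoder's output at every step.
# Assumption: Hugging Face transformers with google/flan-t5-small,
# a publicly available encoder-decoder model.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "google/flan-t5-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Encoder side: tokenize and encode the full input question at once.
inputs = tokenizer("What is the capital of India?", return_tensors="pt")

# Decoder side: generate the answer autoregressively, one token at a time.
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
# Expected output (model-dependent): "new delhi" or similar
```

The key design point the diagram captures is that the two components have different jobs: the encoder builds a representation of the entire input in parallel, while the decoder consumes that representation step by step to produce the output sequence.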
