This image illustrates how a positional vector is assigned to each token in a sentence such as “What is the capital of India?” to preserve word order. Because self-attention is itself order-agnostic, these vectors are what allow the transformer to distinguish token positions and model relationships between tokens; depending on the architecture, they are either fixed sinusoidal functions or learned embeddings.
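As a concrete sketch (the image may depict learned embeddings instead), the fixed sinusoidal scheme from the original Transformer paper assigns each position a vector of interleaved sines and cosines at geometrically spaced frequencies. The sequence length of 7 below assumes the example sentence is tokenized word-by-word:

```python
import numpy as np

def positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Return a (seq_len, d_model) matrix of sinusoidal positional vectors."""
    pos = np.arange(seq_len)[:, None]          # token positions 0..seq_len-1
    i = np.arange(0, d_model, 2)[None, :]      # even dimension indices
    angles = pos / np.power(10000.0, i / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)               # even dims get sine
    pe[:, 1::2] = np.cos(angles)               # odd dims get cosine
    return pe

# One vector per token: "What is the capital of India ?" -> 7 tokens (assumed)
pe = positional_encoding(seq_len=7, d_model=16)
print(pe.shape)  # (7, 16)
```

Each of these vectors is added to the corresponding token embedding before the first attention layer, so two identical words at different positions receive distinct inputs.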