Attention
Understanding the Self Attention Mechanism
Self-attention allows each token in an input sequence to incorporate and derive meaning from the other relevant tokens in that sequence, regardless of how far apart they are. It is similar to a person reading a sentence and understanding each word by relating it to the broader context.
Imagine reading a sentence and coming across a word like "it." To understand it, you must look back in the sentence to see what noun or concept "it" refers to. Self-attention allows a Transformer model to weigh the importance of different words in a sentence when understanding or encoding a particular word. It assigns attention scores to the words, indicating how much attention each one should receive when encoding the current word. These attention scores are dynamic and depend on the context of the sentence. For example, if "it" refers to "the cat," the attention mechanism would give high scores to "the" and "cat" when encoding "it."
Self Attention
The self-attention mechanism computes attention scores for each token in the input sequence. For a given token, it considers all other tokens and determines how much attention to assign to each of them; the token's new representation is then a weighted sum of the embeddings of all tokens, where the attention scores determine the weights. This mechanism is applied to all tokens simultaneously and in parallel, making it efficient.
To compute the attention scores, the self-attention mechanism uses three sets of vectors: Query $Q$, Key $K$, and Value $V$. These vectors are linear projections of the input embeddings:

$$Q = XW^Q, \qquad K = XW^K, \qquad V = XW^V$$

- where $X$ is the input embedding matrix, and $W^Q$, $W^K$, and $W^V$ are learned projection matrices.
The Query vector represents the token we are trying to encode, while the Key vectors represent all other tokens. The Value vectors store the information that will be used to create the output.
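To make the projections concrete, here is a minimal NumPy sketch; the names (`X`, `W_Q`, `W_K`, `W_V`), the dimensions, and the random weights are illustrative placeholders, since a real model learns these projection matrices during training.

```python
import numpy as np

rng = np.random.default_rng(0)

seq_len, d_model = 4, 8                  # illustrative: 4 tokens, 8-dim embeddings
X = rng.normal(size=(seq_len, d_model))  # input embedding matrix, one row per token

# Learned projection matrices (random placeholders here; trained in a real model).
W_Q = rng.normal(size=(d_model, d_model))
W_K = rng.normal(size=(d_model, d_model))
W_V = rng.normal(size=(d_model, d_model))

Q = X @ W_Q  # Query vectors, one row per token
K = X @ W_K  # Key vectors
V = X @ W_V  # Value vectors

print(Q.shape, K.shape, V.shape)  # (4, 8) (4, 8) (4, 8)
```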
The attention scores are calculated by measuring the similarity between the Query and Key vectors. High similarity results in higher attention scores. This similarity is computed using a dot product followed by scaling and softmax normalization:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^\top}{\sqrt{d_k}}\right)V$$

- where $Q$, $K$, and $V$ are the Query, Key, and Value matrices
- $d_k$ is the dimensionality of the query/key vectors
The softmax is given by

$$\mathrm{softmax}(x_i) = \frac{e^{x_i}}{\sum_{j=1}^{n} e^{x_j}}$$

Where:
- $x_i$ is the input to the softmax function for the $i$-th element in a vector of length $n$.
- $e^{x_i}$ is the exponential of $x_i$, which maps the input to a positive value.
- $\sum_{j=1}^{n} e^{x_j}$ is the sum of the exponentials of all the elements in the input vector.
- The softmax function normalizes the exponential of $x_i$ by dividing it by the sum of the exponentials of all the elements in the vector.
The softmax function takes a vector of arbitrary real numbers and maps it to a probability distribution, where each element is in the range (0, 1) and the sum of all elements is equal to 1. This is commonly used in the output layer of a neural network for multi-class classification tasks.
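As a sketch, the softmax can be implemented in a few lines of NumPy; subtracting the maximum before exponentiating is a common numerical-stability trick and does not change the result, since softmax is invariant to shifting all inputs by a constant.

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    """Map a vector of real numbers to a probability distribution."""
    e = np.exp(x - np.max(x))  # shift for numerical stability
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])
probs = softmax(scores)
print(probs)        # roughly [0.659, 0.242, 0.099]
print(probs.sum())  # 1.0 -- the elements form a probability distribution
```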
The intuition behind this equation is that each token (represented by its Query vector) is compared with all other tokens (represented by their Key vectors) to determine their relevance or similarity. The dot product measures this similarity, and the softmax normalization ensures that the attention scores sum up to 1, representing a probability distribution over the tokens. The weighted sum of the Value vectors, where the weights are the attention scores, represents the information about the token being encoded, considering its context within the input sequence.
Each token acts as a query that softly searches the entire input context for relevant keys. The model learns which keys to expect as relevant for queries of a given type. From these dynamically computed relevance scores, the model updates its representation of the original query token with pertinent information extracted from the entire context.
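Putting the pieces together, a minimal NumPy sketch of scaled dot-product self-attention might look as follows; the sequence length, dimensions, and random inputs are purely illustrative.

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    # Row-wise softmax: each row of scores becomes a probability distribution.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of every query with every key
    weights = softmax(scores)        # attention scores; each row sums to 1
    return weights @ V, weights      # weighted sum of Value vectors, plus the weights

rng = np.random.default_rng(0)
seq_len, d_k = 4, 8
Q = rng.normal(size=(seq_len, d_k))
K = rng.normal(size=(seq_len, d_k))
V = rng.normal(size=(seq_len, d_k))

output, weights = scaled_dot_product_attention(Q, K, V)
print(output.shape)         # (4, 8): one context-aware vector per token
print(weights.sum(axis=1))  # [1. 1. 1. 1.]
```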
Multi-Head Attention
Multi-head attention enhances the expressiveness of the self-attention mechanism by splitting the Query $Q$, Key $K$, and Value $V$ vectors into multiple smaller vectors (heads) and computing attention in parallel for each head.
The results from all heads are then concatenated and linearly transformed to obtain the final output:

$$\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_h)W^O$$

where:
- $Q$, $K$, and $V$ are the query, key, and value matrices, respectively. They are typically derived from the input embeddings or the output of the previous layer.
- $\mathrm{head}_i$ represents the output of the $i$-th attention head.
- $\mathrm{Concat}$ is the concatenation operation, which concatenates the outputs of all attention heads along the feature dimension.
- $W^O$ is a learnable weight matrix used to linearly transform the concatenated outputs of the attention heads.
Each head is computed as

$$\mathrm{head}_i = \mathrm{Attention}(QW_i^Q, KW_i^K, VW_i^V)$$

where:
- $QW_i^Q$, $KW_i^K$, and $VW_i^V$ are the linearly transformed query, key, and value matrices for the $i$-th attention head, respectively.
- $W_i^Q$, $W_i^K$, and $W_i^V$ are learnable weight matrices used to project the query, key, and value matrices into a lower-dimensional space for the $i$-th attention head.
- $\mathrm{Attention}$ is the attention function, which computes the weighted sum of the values based on the compatibility between the queries and keys.
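As an illustration, here is a minimal NumPy sketch of multi-head attention. It assumes self-attention (queries, keys, and values all derived from the same input `X`) and, for brevity, takes each head's $W_i^Q$, $W_i^K$, $W_i^V$ as column slices of one larger projection matrix, which is equivalent to keeping separate per-head matrices.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d_k)) @ V

def multi_head_attention(X, W_Q, W_K, W_V, W_O, num_heads):
    d_model = X.shape[-1]
    d_head = d_model // num_heads
    heads = []
    for i in range(num_heads):
        # The column slice plays the role of W_i^Q, W_i^K, W_i^V:
        # it projects into a lower-dimensional space for head i.
        sl = slice(i * d_head, (i + 1) * d_head)
        heads.append(attention(X @ W_Q[:, sl], X @ W_K[:, sl], X @ W_V[:, sl]))
    # Concatenate head outputs along the feature dimension, then apply W^O.
    return np.concatenate(heads, axis=-1) @ W_O

rng = np.random.default_rng(0)
seq_len, d_model, num_heads = 4, 8, 2
X = rng.normal(size=(seq_len, d_model))
W_Q, W_K, W_V, W_O = (rng.normal(size=(d_model, d_model)) for _ in range(4))

out = multi_head_attention(X, W_Q, W_K, W_V, W_O, num_heads)
print(out.shape)  # (4, 8): same shape as the input, one enriched vector per token
```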
Multi-head attention allows the model to attend to different aspects of the input sequence simultaneously, capturing diverse relationships and representations.
By leveraging self-attention and multi-head attention, Transformers can effectively model long-range dependencies and capture the contextual information necessary for various natural language processing tasks.