Hello everyone, I hope you are well.
- I don't understand the benefit of topK, topP, and temperature. What are they used for?
Hi! These are what are known as sampling parameters.
Modern AI language models generate text in one direction, one token at a time (a token is a unit of compressed language, roughly an AI "word").
The model doesn't just emit the single best token. Instead, at each step it produces a probability for every token in its vocabulary, a set of many thousands. It has been found that always choosing the most probable token doesn't make human-like or interesting language, so a token is sampled instead, with each token's chance of selection proportional to the probability (or certainty) the model assigned it.
The sampling parameters constrain that choice, so the long tail of very unlikely tokens doesn't occasionally corrupt the output, while diversity in token selection is preserved.
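To make that concrete, here is a minimal Python sketch of greedy decoding versus probability-weighted sampling over a toy four-token distribution (the distribution itself is invented for illustration):

```python
import random

# Toy next-token distribution; a real model assigns a probability to
# every token in a vocabulary of tens of thousands.
probs = {"cat": 0.50, "dog": 0.30, "fish": 0.15, "xylophone": 0.05}

# Greedy decoding: always take the single most probable token.
greedy = max(probs, key=probs.get)  # -> "cat", every time

# Sampling: pick a token with chance proportional to its probability,
# so "dog" still appears about 30% of the time and output stays varied.
sampled = random.choices(list(probs), weights=list(probs.values()), k=1)[0]

print(greedy, sampled)
```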
Here’s documentation for reference:
Temperature

The temperature is used for sampling during response generation, which occurs when topP and topK are applied. Temperature controls the degree of randomness in token selection. Lower temperatures are good for prompts that require a more deterministic and less open-ended or creative response, while higher temperatures can lead to more diverse or creative results. A temperature of 0 is deterministic, meaning that the highest probability response is always selected.

For most use cases, try starting with a temperature of 0.2. If the model returns a response that's too generic, too short, or the model gives a fallback response, try increasing the temperature.

Top-K

Top-K changes how the model selects tokens for output. A top-K of 1 means the next selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-K of 3 means that the next token is selected from among the three most probable tokens by using temperature.

For each token selection step, the top-K tokens with the highest probabilities are sampled. Then tokens are further filtered based on top-P with the final token selected using temperature sampling.

Specify a lower value for less random responses and a higher value for more random responses. The default top-K is 40.

Top-P

Top-P changes how the model selects tokens for output. Tokens are selected from the most (see top-K) to least probable until the sum of their probabilities equals the top-P value. For example, if tokens A, B, and C have a probability of 0.3, 0.2, and 0.1 and the top-P value is 0.5, then the model will select either A or B as the next token by using temperature and excludes C as a candidate.

Specify a lower value for less random responses and a higher value for more random responses. The default top-P is 0.95.
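Putting the three together, here is a minimal Python sketch of the pipeline those docs describe (top-K filter, then top-P filter, then temperature sampling). The function name and the dictionary-of-probabilities interface are my own invention for illustration, not any particular library's API:

```python
import random

def sample_token(probs, top_k=40, top_p=0.95, temperature=0.2):
    """Hypothetical helper: apply top-K, then top-P, then temperature
    sampling to a dict mapping token -> model probability."""
    # 1) Keep only the top-K most probable tokens.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

    # 2) Of those, keep the smallest prefix whose probabilities reach
    #    top-P (the "nucleus"); everything past that cutoff is excluded.
    nucleus, running = [], 0.0
    for token, p in ranked:
        nucleus.append((token, p))
        running += p
        if running >= top_p:
            break

    # 3) Temperature: weighting by p ** (1/T) is equivalent to dividing
    #    the logits by T before the softmax; T = 0 means greedy decoding.
    if temperature == 0:
        return nucleus[0][0]  # highest-probability survivor
    tokens = [t for t, _ in nucleus]
    weights = [p ** (1.0 / temperature) for _, p in nucleus]
    return random.choices(tokens, weights=weights, k=1)[0]

# The docs' worked example: A=0.3, B=0.2, C=0.1 with a top-P of 0.5.
# A then B reach the 0.5 cutoff, so C is excluded as a candidate.
print(sample_token({"A": 0.3, "B": 0.2, "C": 0.1},
                   top_k=3, top_p=0.5, temperature=1.0))
```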
Application: Probably the most useful single control is top-P, which is called nucleus sampling. If you need output where second-place tokens are intolerable, like a boolean output that evaluates truth, you can reduce it to nearly zero.
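With the sketch above, for example, shrinking top_p toward zero leaves only the single most probable token in the nucleus (the two-token distribution is again invented for illustration):

```python
# top_p near zero keeps only the most probable token, so a
# truth-evaluating prompt can't land on a second-place answer.
print(sample_token({"true": 0.9, "false": 0.1},
                   top_k=40, top_p=0.01, temperature=1.0))  # always "true"
```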
You can see that the default values already work to limit the output to better quality: a well-trained and well-prompted AI already knows what to produce with high certainty, and the default constraints prevent roll-of-the-dice nonsense output.
Hope that overview helps you decide whether to alter them for your particular language AI application.