Class ModelParametersKeys
- Namespace
- FoundationaLLM.Common.Constants.Agents
- Assembly
- FoundationaLLM.Common.dll
Contains the key constants for all overridable model settings.
public static class ModelParametersKeys
- Inheritance
- object ← ModelParametersKeys
Fields
All
All model parameter keys.
public static readonly string[] All
Field Value
- string[]
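The All collection can be used to validate caller-supplied overrides before they are applied. The sketch below is illustrative: it assumes overrides arrive as a Dictionary<string, object>, which is an assumption about the caller's code, not part of this API.
using System;
using System.Collections.Generic;
using System.Linq;
using FoundationaLLM.Common.Constants.Agents;
// Hypothetical caller-supplied overrides; the dictionary shape and values are assumptions.
var overrides = new Dictionary<string, object>
{
    [ModelParametersKeys.Temperature] = 0.2f,
    [ModelParametersKeys.MaxNewTokens] = 512
};
// Reject any key that is not a known overridable model setting.
var unknownKeys = overrides.Keys
    .Where(key => !ModelParametersKeys.All.Contains(key))
    .ToList();
if (unknownKeys.Count > 0)
    throw new ArgumentException(
        $"Unknown model parameter keys: {string.Join(", ", unknownKeys)}");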
DoSample
Whether to use sampling; if disabled, greedy decoding is used instead.
public const string DoSample = "do_sample"
Field Value
- string
IgnoreEOS
Whether to ignore the EOS token and continue generating after it has been produced. Defaults to false.
public const string IgnoreEOS = "ignore_eos"
Field Value
- string
MaxNewTokens
Sets a limit on the number of tokens per model response. The API supports a maximum of 4000 tokens shared between the prompt (including system message, examples, message history, and user query) and the model's response. One token is roughly 4 characters for typical English text.
public const string MaxNewTokens = "max_new_tokens"
Field Value
- string
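A minimal sketch of the shared budget described above, assuming the same usings as the earlier example; the prompt-token count and the clamping logic are illustrative assumptions.
// Prompt and response share one budget of roughly 4000 tokens.
const int TokenLimit = 4000;
int promptTokens = 3200; // hypothetical size of system message, examples, history, and query
// Clamp the response limit to whatever the prompt leaves over (800 here).
int maxNewTokens = Math.Max(0, TokenLimit - promptTokens);
var overrides = new Dictionary<string, object>
{
    [ModelParametersKeys.MaxNewTokens] = maxNewTokens
};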
ReturnFullText
Whether to return the full text (prompt + response) or only the generated part (response). Default value is false.
public const string ReturnFullText = "return_full_text"
Field Value
- string
Temperature
Controls randomness. Lowering the temperature means that the model will produce more repetitive and deterministic responses. Increasing the temperature will result in more unexpected or creative responses. Try adjusting temperature or Top P but not both. This value should be a float between 0.0 and 1.0.
public const string Temperature = "temperature"
Field Value
- string
TopK
The number of highest probability vocabulary tokens to keep for top-k-filtering. Default value is null, which disables top-k-filtering.
public const string TopK = "top_k"
Field Value
- string
TopP
The cumulative probability threshold for nucleus sampling: only the highest-probability vocabulary tokens whose probabilities add up to Top P are kept. Top P (or Top Probabilities) is similar to temperature in that it also controls randomness, but it uses a different method. Lowering Top P narrows the model's token selection to likelier tokens; increasing it lets the model choose from tokens with both high and low likelihood. Try adjusting temperature or Top P but not both.
public const string TopP = "top_p"
Field Value
- string
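Putting the keys together, a sketch of a full override set, assuming the same Dictionary<string, object> shape as the earlier examples; the values are illustrative, and per the guidance above only one of Temperature and TopP is set.
// Illustrative override set; not a prescribed configuration.
var overrides = new Dictionary<string, object>
{
    [ModelParametersKeys.DoSample] = true,        // enable sampling (otherwise greedy decoding)
    [ModelParametersKeys.Temperature] = 0.7f,     // randomness; TopP left at its default
    [ModelParametersKeys.MaxNewTokens] = 1024,    // cap on tokens in the response
    [ModelParametersKeys.ReturnFullText] = false  // return only the generated part
};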