Mistral AI
Mistral AI is a powerful tool for generating chat completions using advanced language models. It allows users to interact with AI models to generate responses based on provided prompts and parameters.
Chat Completion
Generate a chat completion using the specified model and messages.
Fields
- Model ID: The identifier of the model to use for generating the chat completion. Available models can be found with the List Available Models API. The default model is mistral-large-latest. Ensure that the model ID is valid and matches the expected format.
- Messages: The prompts for which you want to generate completions, encoded as a list of dictionaries, each containing a 'role' and 'content'. For example: [{"role": "user", "content": "Your question here"}]. This field is required and must be a valid JSON array.
- Temperature: Controls the randomness of the output. A higher temperature produces more random outputs, while a lower temperature makes the output more focused and deterministic. The recommended range is 0.0 to 1.0, with a default value of 0.3.
- Max Tokens: The maximum number of tokens to generate in the completion. Ensure that the total token count (prompt plus completion) does not exceed the model's context length. The default value is 100 tokens.
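The fields above map directly onto a chat completion request body. As a minimal sketch, the snippet below assembles such a payload in Python; the field names follow the Mistral chat completions API, but the example prompt and the use of the action's defaults are illustrative.

```python
import json

# Hypothetical request payload; the defaults shown are this action's
# defaults (model, temperature, max_tokens), and the prompt is made up.
payload = {
    "model": "mistral-large-latest",  # default Model ID
    "messages": [                     # required: list of role/content dicts
        {"role": "user", "content": "What is the capital of France?"}
    ],
    "temperature": 0.3,               # default; recommended range 0.0-1.0
    "max_tokens": 100,                # default cap on generated tokens
}

body = json.dumps(payload)  # JSON body to send with the request
```

Serializing with json.dumps confirms the Messages field is a valid JSON array before the request is made.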
Output
The output of the Chat Completion action is a generated text response based on the input messages and parameters. The response will be crafted by the specified model, taking into account the temperature and max tokens settings to produce a coherent and contextually relevant completion.
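In the Mistral chat completions API, the generated text is returned inside the response's choices array, under each choice's message. The sketch below parses a sample response body to extract the completion; the response content and token counts here are invented for illustration, not real model output.

```python
import json

# Illustrative raw response body; the field layout follows the Mistral
# chat completions API, but the text and usage numbers are made up.
raw = json.dumps({
    "id": "cmpl-example",
    "model": "mistral-large-latest",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant",
                        "content": "Paris is the capital of France."},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 9, "completion_tokens": 8, "total_tokens": 17},
})

response = json.loads(raw)
# The generated text lives under choices[0].message.content.
completion = response["choices"][0]["message"]["content"]
```

Checking finish_reason (e.g. "stop" vs. "length") indicates whether the completion ended naturally or was cut off by the Max Tokens limit.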