Groq

Groq is a platform for running fast large language model inference, including generating model responses for chat conversations. This documentation outlines the available actions for integrating with Groq and explains how to use them.

Create Chat Completion

Generates a model response for the given chat conversation.

Fields

  • What are the messages in the conversation?

    This field requires a list of messages that make up the conversation. Each message must include a role (such as 'system' or 'user') and the content of the message. The conversation typically starts with a 'system' message, followed by 'user' messages. The input must be a valid JSON array of message objects. For example:

    [
      {"role": "system", "content": "You are a helpful assistant"},
      {"role": "user", "content": "Hello"}
    ]

    Malformed JSON will cause the action to fail, so verify the format before submitting.

  • Which model should be used?

    Specify the ID of the model to use for generating the chat completion. This should be a valid and available model ID from the Groq API, such as 'llama3-8b-8192'. Make sure the model ID does not contain spaces and is correctly spelled.

  • What is the maximum number of tokens to generate?

    Define the maximum number of tokens that can be generated in the chat completion. This is optional and defaults to 1024. The total number of input and output tokens is limited by the model's context length. Ensure the value is numeric.

  • What temperature setting should be used?

    Set the sampling temperature, which ranges from 0 to 2. A higher temperature (e.g., 0.8) results in more random outputs, while a lower temperature (e.g., 0.2) makes the output more focused and deterministic. The default value is 1. Ensure the temperature is within the specified range.

  • What top_p setting should be used?

    Define the top-p value for nucleus sampling, which ranges from 0 to 1. A lower value restricts sampling to a smaller set of high-probability tokens; for example, 0.1 means only tokens comprising the top 10% of probability mass are considered. The default is 1. Ensure the top_p value is within the specified range. A sketch of a complete request using these fields follows this list.
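
To make the fields concrete, here is a minimal sketch of the kind of request this action issues, written in Python against Groq's OpenAI-compatible REST endpoint (https://api.groq.com/openai/v1/chat/completions). The use of the requests library and the GROQ_API_KEY environment variable are assumptions for illustration; the payload keys mirror the fields described above.

    import os
    import requests

    # Assumption for this sketch: the API key is supplied via GROQ_API_KEY.
    api_key = os.environ["GROQ_API_KEY"]

    payload = {
        "model": "llama3-8b-8192",  # a valid Groq model ID
        "messages": [
            {"role": "system", "content": "You are a helpful assistant"},
            {"role": "user", "content": "Hello"},
        ],
        "max_tokens": 1024,   # optional; bounded by the model's context length
        "temperature": 1,     # 0 to 2; lower is more focused and deterministic
        "top_p": 1,           # 0 to 1; nucleus sampling cutoff
    }

    response = requests.post(
        "https://api.groq.com/openai/v1/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json=payload,
        timeout=30,
    )
    response.raise_for_status()
    data = response.json()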

Output

The output of this action is the model-generated chat completion returned by Groq for the provided conversation messages and parameters. The response follows Groq's OpenAI-compatible chat completion format, with the generated message in the choices array alongside usage metadata, so it can be appended to the conversation or passed on for further processing.
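
Continuing the sketch above, the assistant's reply can be pulled out of the response body and fed back into the conversation. The choices/message/content path follows Groq's OpenAI-compatible response schema; the follow-up user turn is an invented example.

    # Extract the assistant's reply from the completion object.
    reply = data["choices"][0]["message"]["content"]
    print(reply)

    # To continue the conversation, append the reply and the next user turn,
    # then send the updated payload in a new request.
    payload["messages"].append({"role": "assistant", "content": reply})
    payload["messages"].append({"role": "user", "content": "Tell me more."})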