Controls the LLM used for the task.

- `fallback` — Optional.
- `maxTokens` — Optional. The maximum number of tokens to generate.
- `model` — Optional. The ID of the model to use for the task. If not provided, the default model will be used. Please check the documentation for the model you want to use.
- `temperature?: number` — Optional. The temperature to use for the LLM.
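A minimal sketch of how these options might be supplied, assuming they are passed together as a plain configuration object. The `LLMOptions` interface name, the `fallback` field's type, and the model ID shown are assumptions for illustration only; check the documentation for the exact API and valid model IDs.

```ts
// Hypothetical usage sketch: only the fields documented above are assumed.
interface LLMOptions {
  fallback?: string;    // assumption: fallback model ID used if the primary model fails
  maxTokens?: number;   // maximum number of tokens to generate
  model?: string;       // ID of the model to use for the task; defaults if omitted
  temperature?: number; // temperature to use for the LLM
}

const llm: LLMOptions = {
  model: "example-model-id", // placeholder ID, not a real model name
  maxTokens: 1024,
  temperature: 0.2,
};
```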