Controls the LLM used for the task.

fallback (optional)
max… (optional): The maximum number of tokens to generate.
model (optional): The ID of the model to use for the task. If not provided, the default model is used; see the documentation for the model you want to use.
temperature?: number (optional): The temperature to use for the LLM.
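The options above can be sketched as a TypeScript interface. This is a minimal illustration only: the interface name `LLMTaskOptions`, the property name `maxTokens` (the original name is truncated in this document), the `fallback` type, the helper `withDefaults`, and all default values are assumptions, not part of the documented API.

```typescript
// Hypothetical sketch of the documented options; names and defaults
// marked below are assumptions, not confirmed by the source docs.
interface LLMTaskOptions {
  /** The ID of the model to use; a default model is used when omitted. */
  model?: string;
  /** The maximum number of tokens to generate (assumed name). */
  maxTokens?: number;
  /** The temperature to use for the LLM. */
  temperature?: number;
  /** Fallback model ID (assumed type; the original description is lost). */
  fallback?: string;
}

// Illustrative helper: fill in defaults for any omitted options.
// The specific default values here are made up for the example.
function withDefaults(opts: LLMTaskOptions): Required<LLMTaskOptions> {
  return {
    model: opts.model ?? "default-model",
    maxTokens: opts.maxTokens ?? 256,
    temperature: opts.temperature ?? 0.7,
    fallback: opts.fallback ?? "default-model",
  };
}
```

Because every field is optional, callers can pass only what they need, e.g. `withDefaults({ temperature: 0 })` keeps an explicit temperature of 0 while the other fields fall back to defaults (`??` preserves falsy values like 0, unlike `||`).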