LlmProcessing: {
    fallback_strategy?: FallbackStrategy;
    max_completion_tokens?: number | null;
    model_id?: string | null;
    temperature?: number;
}

Controls the LLM used for the task.

Type declaration

  • Optional fallback_strategy?: FallbackStrategy
  • Optional max_completion_tokens?: number | null

    The maximum number of tokens to generate.

  • Optional model_id?: string | null

    The ID of the model to use for the task. If not provided, the default model is used; see the documentation for the list of available models.

  • Optional temperature?: number

    The sampling temperature to use for the LLM.
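
For illustration, a minimal sketch of constructing an LlmProcessing value follows. The property names come from the type declaration above; the model ID, the numeric values, and the stub FallbackStrategy type are assumptions (the real FallbackStrategy is defined elsewhere in the library).

// Minimal usage sketch. Property names match the declaration above;
// the model ID and numeric values are illustrative assumptions.
type FallbackStrategy = unknown; // stand-in: the real type is defined elsewhere

type LlmProcessing = {
    fallback_strategy?: FallbackStrategy;
    max_completion_tokens?: number | null;
    model_id?: string | null;
    temperature?: number;
};

const llmProcessing: LlmProcessing = {
    model_id: "example-model",     // hypothetical ID; check the docs for supported models
    max_completion_tokens: 1024,   // cap on the number of generated tokens
    temperature: 0.2,              // lower values give more deterministic output
};

Fields that are omitted (such as fallback_strategy here) fall back to their defaults.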