LemurTaskParams: {
    prompt: string;
    context?: undefined | string | {
        [key: string]: unknown;
    };
    final_model?: undefined | LiteralUnion<LemurModel, string>;
    input_text?: undefined | string;
    max_output_size?: undefined | number;
    temperature?: undefined | number;
    transcript_ids?: undefined | string[];
}

Type declaration

  • prompt: string

    Your text to prompt the model to produce a desired output, including any context you want to pass into the model.

  • Optional context?: undefined | string | {
        [key: string]: unknown;
    }

    Context to provide the model. This can be a string or a free-form JSON value.

  • Optional final_model?: undefined | LiteralUnion<LemurModel, string>

    The model that is used for the final prompt after compression is performed.

    "default
    
  • Optional input_text?: undefined | string

    Custom formatted transcript data. Maximum size is the context limit of the selected model, which defaults to 100000. Use either transcript_ids or input_text as input into LeMUR.

  • Optional max_output_size?: undefined | number

    Max output size in tokens, up to 4000.

  • Optional temperature?: undefined | number

    The temperature to use for the model. Higher values result in answers that are more creative, lower values are more conservative. Can be any value between 0.0 and 1.0 inclusive.

  • Optional transcript_ids?: undefined | string[]

    A list of completed transcripts with text. Up to a maximum of 100 files or 100 hours, whichever is lower. Use either transcript_ids or input_text as input into LeMUR.
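An example request body using these parameters: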

{
  "transcript_ids": [
    "64nygnr62k-405c-4ae8-8a6b-d90b40ff3cce"
  ],
  "prompt": "List all the locations affected by wildfires.",
  "context": "This is an interview about wildfires.",
  "final_model": "default",
  "temperature": 0,
  "max_output_size": 3000
}