Configuration parameters for initializing the provider.
Aborts a currently running inference task.
Makes an inference based on the provided prompt and parameters.
The input text to base the inference on.
Parameters for customizing the inference behavior.
Optional options: InferenceOptions
Use a specified model for inferences.
The name of the model to load.
Optional ctx: number
The optional context window length; defaults to the model's own context length.
Optional urls: string | string[]
Optional onLoadProgress: OnLoadProgress
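The two methods above can be sketched together. This is an assumed usage pattern only: the method names useModel and infer, the shape of InferenceOptions, and the temperature field are illustrative guesses; the model, ctx, urls, and onLoadProgress parameters come from the signatures above.

```typescript
// Assumed callback signature for load progress reporting.
type OnLoadProgress = (loaded: number, total: number) => void;

// Assumed options shape; the real InferenceOptions fields are not shown here.
interface InferenceOptions {
  temperature?: number; // illustrative field, not confirmed by the docs
}

interface ModelOptions {
  model: string;                  // the name of the model to load
  ctx?: number;                   // optional context window length
  urls?: string | string[];
  onLoadProgress?: OnLoadProgress;
}

// Hypothetical provider interface matching the documented methods.
interface Provider {
  useModel(opts: ModelOptions): Promise<void>;
  infer(prompt: string, options?: InferenceOptions): Promise<string>;
}

// Example flow: load a model, then run one inference against it.
async function run(provider: Provider): Promise<string> {
  await provider.useModel({
    model: "my-model",
    ctx: 4096,
    onLoadProgress: (loaded, total) => console.log(`loaded ${loaded}/${total}`),
  });
  return provider.infer("Hello", { temperature: 0.7 });
}
```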
Creates a new instance of the OpenaiCompatibleProvider.
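A minimal construction sketch, assuming the provider takes a configuration object; the baseURL and apiKey fields below are hypothetical stand-ins for the real configuration parameters, which the docs above do not enumerate.

```typescript
// Assumed configuration shape for an OpenAI-compatible endpoint.
interface ProviderConfig {
  baseURL?: string; // assumed: URL of the OpenAI-compatible server
  apiKey?: string;  // assumed: credential for that server
}

// Skeleton class showing only the construction pattern.
class OpenaiCompatibleProvider {
  constructor(private config: ProviderConfig = {}) {}

  get baseURL(): string | undefined {
    return this.config.baseURL;
  }
}

const provider = new OpenaiCompatibleProvider({
  baseURL: "http://localhost:8080/v1",
});
```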