Configuration parameters for initializing the provider.
Three optional on… callback properties (names lost in extraction).
Aborts a currently running inference task.
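A minimal sketch of the abort semantics described above. The class and field names here are hypothetical (the page does not show the method's implementation); the point is that an in-flight inference loop checks a flag that the abort call flips:

```typescript
// Hypothetical sketch: abort() sets a flag that the token loop checks.
class ProviderSketch {
  private aborted = false;

  // Aborts a currently running inference task.
  abort(): void {
    this.aborted = true;
  }

  // Simulated token loop: stops early once abort() has been called.
  infer(prompt: string): string[] {
    this.aborted = false;
    const tokens: string[] = [];
    for (const word of prompt.split(" ")) {
      if (this.aborted) break;
      tokens.push(word);
      if (tokens.length === 2) this.abort(); // simulate an external abort
    }
    return tokens;
  }
}

const p = new ProviderSketch();
console.log(p.infer("one two three four")); // stops after two tokens
```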
Makes an inference based on the provided prompt and parameters.

Parameters:
  prompt: The input text to base the inference on.
  options (optional, InferenceOptions): Parameters for customizing the inference behavior.

Use a specified model for inferences.
Parameters:
  model: The name of the model to load.
  ctx (optional, number): The context window length; defaults to the model's context length.
  urls (optional, string | string[])
  onLoadProgress (optional, OnLoadProgress)
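Assuming the parameter types line up as listed (the `OnLoadProgress` signature, the `LoadModelParams` name, and the loader body below are illustrative, not from this page), the shapes might look like:

```typescript
// Hypothetical sketch of the model-loading parameter shapes described above.
type OnLoadProgress = (loaded: number, total: number) => void;

interface LoadModelParams {
  model: string;              // the name of the model to load
  ctx?: number;               // context window length; defaults to the model's
  urls?: string | string[];   // one URL or several (e.g. split weight files)
  onLoadProgress?: OnLoadProgress;
}

// Illustrative loader: normalizes urls and reports progress per URL.
function loadModel(params: LoadModelParams): string {
  const urls = Array.isArray(params.urls)
    ? params.urls
    : params.urls ? [params.urls] : [];
  urls.forEach((_, i) => params.onLoadProgress?.(i + 1, urls.length));
  return `${params.model} (ctx=${params.ctx ?? "model default"})`;
}

console.log(loadModel({ model: "llama-3", ctx: 4096, urls: ["a.gguf", "b.gguf"] }));
```

Accepting `string | string[]` for `urls` and normalizing internally keeps the single-file call site simple while still supporting multi-part model files.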
Creates a new instance of the OpenaiCompatibleProvider.
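A hypothetical end-to-end sketch of how the pieces above fit together. Only `OpenaiCompatibleProvider`, `infer`, and `InferenceOptions` come from this page; the constructor's config fields, the option fields, and the stub body are assumptions:

```typescript
// Illustrative option fields (not confirmed by this page).
interface InferenceOptions { temperature?: number; maxTokens?: number; }

// Stub standing in for the real provider: a real implementation would
// call the configured OpenAI-compatible endpoint over HTTP.
class OpenaiCompatibleProvider {
  constructor(private config: { baseUrl: string; apiKey?: string }) {}

  // Makes an inference based on the provided prompt and parameters.
  infer(prompt: string, options?: InferenceOptions): string {
    return `[${this.config.baseUrl}] ${prompt} (maxTokens=${options?.maxTokens ?? "default"})`;
  }
}

const provider = new OpenaiCompatibleProvider({ baseUrl: "http://localhost:8080/v1" });
console.log(provider.infer("Hello", { maxTokens: 16 }));
```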