# LmProvider

Defines the structure and behavior of an LM Provider.

## Properties

- `name`: identifier for the LM provider.
- `api`: API utility being used.
- `serverUrl`: the URL endpoint for the provider's server.
- `apiKey`: the key used for authentication with the provider's API.
- `model`: the active model configuration.
- `models`: the list of available model configurations.
- `defaults` (optional): default settings for the provider.
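For illustration only, a minimal sketch of how these configuration fields might be populated; the provider name, URL, and key are placeholder values, not defaults of the library.

```ts
// Placeholder configuration values for the fields above; the model and
// models fields are typically filled in by the provider at runtime.
const providerConfig = {
  name: "myprovider",                 // identifier for the LM provider
  serverUrl: "http://localhost:8080", // the provider server endpoint
  apiKey: "",                         // empty if the server needs no auth
};
console.log(providerConfig);
```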
## Callbacks

- `onToken` (optional): callback triggered when a new token is received.
- `onStartEmit` (optional): callback triggered when inference starts.
- `onEndEmit` (optional): callback triggered when inference ends.
- `onError` (optional): callback triggered on errors during inference.
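As a minimal sketch, the following shows how these optional callbacks might be wired up for streaming output. The `LmProviderCallbacks` type, the `attachLogging` helper, and the handler signatures are assumptions inferred from the descriptions above, not the library's actual definitions.

```ts
// Assumed callback signatures, inferred from the descriptions above.
type LmProviderCallbacks = {
  onToken?: (token: string) => void;
  onStartEmit?: () => void;
  onEndEmit?: () => void;
  onError?: (err: string) => void;
};

// Hypothetical helper: wires the optional callbacks of any
// LmProvider-like object to console logging.
function attachLogging(provider: LmProviderCallbacks): void {
  provider.onToken = (token) => console.log(token); // each streamed token
  provider.onStartEmit = () => console.log("inference started");
  provider.onEndEmit = () => console.log("inference ended");
  provider.onError = (err) => console.error("inference error:", err);
}
```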
## Methods

- `modelsInfo()`: retrieves information about available models.
- `info()`: retrieves information about the available server configuration.
- `loadModel()`: loads a model by name, with an optional context size.
- `infer()`: makes an inference based on the provided prompt and parameters.
- `abort()`: aborts a currently running inference task.
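Putting the members above together, a sketch of the interface shape might look like the following. The supporting types (`ModelConf`, `InferenceParams`, `InferenceResult`) and the exact method signatures are assumptions for illustration, not the library's actual definitions.

```ts
// Assumed supporting types -- placeholders, not the real definitions.
interface ModelConf { name: string; ctx?: number }
interface InferenceParams { temperature?: number; [key: string]: unknown }
interface InferenceResult { text: string }

// Sketch of the interface shape described above.
interface LmProvider {
  name: string;                       // identifier for the LM provider
  api: unknown;                       // API utility being used
  serverUrl: string;                  // URL endpoint for the provider's server
  apiKey: string;                     // authentication key for the provider's API
  model: ModelConf;                   // active model configuration
  models: Array<ModelConf>;           // available model configurations
  defaults?: Record<string, unknown>; // optional provider defaults
  onToken?: (token: string) => void;             // new token received
  onStartEmit?: () => void;                      // inference started
  onEndEmit?: (result: InferenceResult) => void; // inference ended
  onError?: (err: string) => void;               // inference error
  modelsInfo: () => Promise<void>;               // fetch available models info
  info: () => Promise<Record<string, unknown>>;  // fetch server config info
  loadModel: (name: string, ctx?: number) => Promise<void>; // load a model
  infer: (prompt: string, params: InferenceParams) => Promise<InferenceResult>;
  abort: () => Promise<void>;         // abort the running inference task
}
```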
## Example
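A hypothetical end-to-end flow using the interface sketch above; the model name, context size, prompt, and parameter values are placeholders.

```ts
// Hypothetical usage of an LmProvider implementation; "lm" is assumed
// to be supplied by the library or by your own provider class.
async function run(lm: LmProvider): Promise<void> {
  await lm.modelsInfo(); // populate lm.models
  console.log("available models:", lm.models);

  await lm.loadModel("some-model", 2048);     // placeholder name and context size
  lm.onToken = (token) => console.log(token); // stream tokens as they arrive

  // Illustrative safety net: abort a run that takes too long.
  const timer = setTimeout(() => { void lm.abort(); }, 30_000);
  const result = await lm.infer("Hello, how are you?", { temperature: 0.5 });
  clearTimeout(timer);

  console.log(result.text);
}
```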