LmProvider

Defines the structure and behavior of an LM Provider.

abort: Aborts a currently running inference task.
api: API utility being used.
apiKey: The key used for authentication with the provider's API.
defaults (Optional): Default settings for the provider.
infer: Makes an inference based on the provided prompt and parameters.
info: Retrieves information about the available server configuration.
loadModel: Loads a model by name, with an optional context.
model: Active model configuration.
models: List of available model configurations.
modelsInfo: Retrieves information about available models.
name: Identifier for the LM provider.
onEndEmit (Optional): Callback triggered when inference ends.
onError (Optional): Callback triggered on errors during inference.
onStartEmit (Optional): Callback triggered when inference starts.
onToken (Optional): Callback triggered when a new token is received.
serverUrl: The URL endpoint for the provider's server.
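Taken together, the members above suggest the following shape for the interface. This is a minimal TypeScript sketch: the ModelConf, InferenceParams, and InferenceResult types and all parameter lists are assumptions inferred from the member descriptions, not the library's actual signatures.

```ts
// Supporting types are assumptions for illustration only.
interface ModelConf {
  name: string;
  ctx?: number; // context window size
}

interface InferenceParams {
  temperature?: number;
  max_tokens?: number;
}

interface InferenceResult {
  text: string;
  stats?: Record<string, any>;
}

interface LmProvider {
  name: string;                    // identifier for the LM provider
  api: unknown;                    // API utility being used (exact type unknown)
  serverUrl: string;               // URL endpoint for the provider's server
  apiKey: string;                  // key used for authentication
  model: ModelConf;                // active model configuration
  models: Array<ModelConf>;        // available model configurations
  defaults?: Record<string, any>;  // optional default settings
  info: () => Promise<Record<string, any>>;          // server config info
  modelsInfo: () => Promise<void>;                    // refresh model info
  loadModel: (name: string, ctx?: number) => Promise<void>;
  infer: (prompt: string, params: InferenceParams) => Promise<InferenceResult>;
  abort: () => Promise<void>;
  onToken?: (token: string) => void;          // new token received
  onStartEmit?: () => void;                   // inference started
  onEndEmit?: (result: InferenceResult) => void; // inference ended
  onError?: (err: string) => void;            // error during inference
}
```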
Example
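A minimal usage sketch, assuming a provider object that implements the LmProvider interface as sketched above. The model name, context size, and inference parameters are placeholders, not values from the source.

```ts
// Wire up streaming callbacks, load a model, and run one inference.
async function run(provider: LmProvider): Promise<void> {
  provider.onToken = (token) => process.stdout.write(token);
  provider.onStartEmit = () => console.log("inference started");
  provider.onEndEmit = (result) => console.log("\ndone:", result.stats);
  provider.onError = (err) => console.error("inference error:", err);

  await provider.modelsInfo();                   // refresh available models
  await provider.loadModel("llama-3-8b", 4096);  // placeholder name and ctx

  const result = await provider.infer("List three colors:", {
    temperature: 0.2,
    max_tokens: 128,
  });
  console.log(result.text);
}
```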