LmProviderParams

Parameters required when creating a new LM provider instance.
Param: name
Identifier for the LM provider.

Param: serverUrl
The URL endpoint for the provider's server.

Param: apiKey
The key used for authentication.

Param: onToken
Callback triggered when a new token is received.

Param: onStartEmit
Callback triggered when inference starts.

Param: onEndEmit
Callback triggered when inference ends.

Param: onError
Callback triggered on errors.

Param: defaults
Default settings.
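The shape these parameters imply can be sketched as a TypeScript interface. This is an illustrative sketch only: which fields are optional, the callback payload types, and the type of defaults are assumptions, not confirmed by the reference above.

// Hypothetical sketch of LmProviderParams inferred from the parameter list above.
// Optional markers and payload types are assumptions, not the library's actual definition.
interface LmProviderParams {
  name: string;                            // identifier for the LM provider
  serverUrl: string;                       // URL endpoint for the provider's server
  apiKey: string;                          // key used for authentication
  onToken?: (token: string) => void;       // called when a new token is received
  onStartEmit?: (data?: unknown) => void;  // called when inference starts
  onEndEmit?: (result?: unknown) => void;  // called when inference ends
  onError?: (err: unknown) => void;        // called on errors
  defaults?: Record<string, unknown>;      // default settings
}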
Example

const lmProviderParams: LmProviderParams = {
  name: 'koboldcpp',
  serverUrl: 'http://example.com/api',
  apiKey: 'your-api-key',
  onToken: (t) => console.log(t),
  onStartEmit: (data) => console.log(data),
  onEndEmit: (result) => console.log(result),
  onError: (err) => console.error(err)
};