LmParams

Parameters for initializing a Language Model.

Param: providerType
Type of provider ("llamacpp", "koboldcpp", "ollama", "openai", "browser").

Param: serverUrl
The URL endpoint for the LM service.

Param: onToken
Callback when a new token is received.

Param: apiKey
Optional API key for authentication.

Param: onStartEmit
Callback triggered when inference starts.

Param: onEndEmit
Callback triggered when inference ends.

Param: onError
Callback triggered on errors.

Param: defaults
Default settings.

Example

const lmParams: LmParams = {
  providerType: 'koboldcpp',
  serverUrl: 'http://example.com/api',
  onToken: (t) => console.log(t),
  apiKey: 'your-api-key',
  onStartEmit: (data) => console.log(data),
  onEndEmit: (result) => console.log(result),
  onError: (err) => console.error(err),
};
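The parameter list above can be expressed as a TypeScript declaration. This is a hedged sketch inferred from the descriptions, not the library's actual source: the `LmProviderType` alias, the callback signatures, and the `defaults` shape are assumptions, and the real field types may differ. It also shows the streaming `onToken` callback in action by accumulating tokens into a buffer.

```typescript
// Hypothetical declaration of LmParams, inferred from the parameter
// descriptions above; actual types in the library may differ.
type LmProviderType = "llamacpp" | "koboldcpp" | "ollama" | "openai" | "browser";

interface LmParams {
  providerType: LmProviderType;           // which backend to talk to
  serverUrl: string;                      // endpoint of the LM service
  onToken?: (token: string) => void;      // called for each streamed token
  apiKey?: string;                        // optional API key for authentication
  onStartEmit?: (data?: unknown) => void; // inference started
  onEndEmit?: (result?: unknown) => void; // inference finished
  onError?: (err: string) => void;        // error handler
  defaults?: Record<string, unknown>;     // default settings (shape assumed)
}

// Usage: collect streamed tokens into a single string.
let buffer = "";
const params: LmParams = {
  providerType: "koboldcpp",
  serverUrl: "http://example.com/api",
  onToken: (t) => { buffer += t; },
};

// Simulate a stream of three tokens arriving from the server.
["Hello", ", ", "world"].forEach((t) => params.onToken?.(t));
console.log(buffer); // "Hello, world"
```

Declaring the callbacks as optional fields mirrors the example in this page, which omits none of them but would still type-check if it did.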