Properties

- `projects?: ProjectsApi` (optional) — Tool execution API for Sogni platform tools (image, video, music generation).
- `client` (protected)
- `eip712` (protected)
- `listeners` (protected) — registered event handlers, keyed by event name:
  - `completed?: ((data: ChatCompletionResult) => void)[]` — emitted when a chat completion finishes.
  - `error?: (…)` — emitted when a chat completion fails.
  - `jobState?: ((data: ChatJobStateEvent) => void)[]` — emitted when the job state changes (queued, assigned to worker, started, etc.).
  - `modelsUpdated?: ((data: Record<string, LLMModelInfo>) => void)[]` — emitted when the available LLM models list is updated from the network.
  - `token?: ((data: ChatCompletionChunk) => void)[]` — emitted for each token chunk received during streaming.
- `…` — Available LLM models and their worker counts.
Methods

- `emit` (protected) — Dispatch an event to all listeners.
- Estimate the cost of a chat completion request before submitting it. Uses the same token estimation formula as the server: input tokens ≈ ceil(JSON.stringify(messages).length / 4).
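The token estimation formula quoted above can be reproduced client-side. A minimal sketch, assuming a simplified `ChatMessage` shape (the SDK's actual message type is not shown on this page, and `estimateInputTokens` is an illustrative name, not an SDK function):

```typescript
// Assumed minimal message shape for illustration only.
interface ChatMessage {
  role: string;
  content: string;
}

// Reproduces the documented server-side estimate:
// input tokens ≈ ceil(JSON.stringify(messages).length / 4).
function estimateInputTokens(messages: ChatMessage[]): number {
  return Math.ceil(JSON.stringify(messages).length / 4);
}

// Example: one short user message serializes to 35 characters,
// so the estimate is ceil(35 / 4) = 9 input tokens.
const tokens = estimateInputTokens([{ role: "user", content: "Hello" }]);
console.log(tokens); // 9
```

Because the estimate depends on the JSON-serialized length, verbose role names and long content both raise the estimated cost, which is why it is worth calling the estimator before submitting a large request.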
- Remove an event listener.
- Add an event listener; returns a function that can be called to remove the listener.
- Add an event listener that will be called only once.
- Wait for available LLM models to be received from the network; resolves immediately if models are already available.
  - `timeout` — timeout in milliseconds until the promise is rejected (default: 10000).
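Waiting with a rejection timeout can be pictured as racing the underlying wait against a timer. A generic sketch of that pattern — the `withTimeout` helper is an illustration, not part of the SDK, though its 10000 ms default mirrors the documented one:

```typescript
// Resolve with the promise's value, or reject after `timeoutMs` elapses.
// Default of 10000 ms matches the documented default timeout.
function withTimeout<T>(promise: Promise<T>, timeoutMs = 10000): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`Timed out after ${timeoutMs} ms`)),
      timeoutMs
    );
    promise.then(
      (value) => { clearTimeout(timer); resolve(value); },  // settled in time
      (err) => { clearTimeout(timer); reject(err); }        // propagate failure
    );
  });
}
```

Clearing the timer on settlement matters: otherwise a resolved wait would leave a pending timeout keeping the event loop alive.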
Chat API for LLM text generation via the Sogni Supernet.
Provides an OpenAI-compatible chat completion interface using Sogni's decentralized LLM worker network.