SocketEventMap: {
    artistCancelConfirmation: ArtistCancelConfirmation;
    authenticated: AuthenticatedData;
    balanceUpdate: Balances;
    changeNetwork: { network: SupernetType };
    connected: ServerConnectData;
    disconnected: ServerDisconnectData;
    jobError: JobErrorData;
    jobETA: JobETAData;
    jobProgress: JobProgressData;
    jobResult: JobResultData;
    jobState: JobStateData;
    jobTokens: JobTokensData;
    llmJobError: LLMJobErrorData;
    llmJobResult: LLMJobResultData;
    swarmLLMModels: Record<string, number | LLMModelInfo>;
    swarmModels: Record<string, number>;
    toastMessage: ToastMessage;
}
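An event map like this is typically consumed by a generically typed emitter, so that both the event name and the handler's payload type are checked at compile time. The sketch below illustrates the pattern with a minimal `TypedEmitter` class and an assumed `JobProgressData` shape; neither is part of the actual SDK.

```typescript
// Illustrative sketch: how a typed event map such as SocketEventMap gives
// compile-time safety for event names and handler payloads.
// JobProgressData's shape and the TypedEmitter class are assumptions,
// not the SDK's real definitions.

type JobProgressData = { jobId: string; progress: number }; // assumed shape

interface EventMap {
  jobProgress: JobProgressData;
}

class TypedEmitter<M> {
  private handlers: { [K in keyof M]?: Array<(data: M[K]) => void> } = {};

  // `event` must be a key of the map; `handler` must accept that key's payload type.
  on<K extends keyof M>(event: K, handler: (data: M[K]) => void): void {
    const list = this.handlers[event] ?? [];
    list.push(handler);
    this.handlers[event] = list;
  }

  emit<K extends keyof M>(event: K, data: M[K]): void {
    this.handlers[event]?.forEach((h) => h(data));
  }
}

const client = new TypedEmitter<EventMap>();
const seen: string[] = [];

// A misspelled event name, or a payload missing `progress`, would fail to compile.
client.on("jobProgress", (data) => {
  seen.push(`job ${data.jobId}: ${data.progress}%`);
});

client.emit("jobProgress", { jobId: "abc", progress: 42 });
```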

Type declaration

  • artistCancelConfirmation: ArtistCancelConfirmation
  • authenticated: AuthenticatedData

    WebSocketClient#authenticated - Received after successful connection to the WebSocket server

  • balanceUpdate: Balances

    WebSocketClient#balanceUpdate - Received balance update

  • changeNetwork: { network: SupernetType }

    WebSocketClient#changeNetwork - Default network changed

  • connected: ServerConnectData

    WebSocketClient#connected - WebSocket connection opened

  • disconnected: ServerDisconnectData

    WebSocketClient#disconnected - WebSocket connection closed

  • jobError: JobErrorData

    WebSocketClient#jobError - Job error occurred

  • jobETA: JobETAData

    WebSocketClient#jobETA - Job ETA update (sent every second during inference by ComfyUI workers). Note: only available for ComfyUI-based workers during video generation

  • jobProgress: JobProgressData

    WebSocketClient#jobProgress - Job progress update

  • jobResult: JobResultData

    WebSocketClient#jobResult - Job result received

  • jobState: JobStateData

    WebSocketClient#jobState - Job state changed

  • jobTokens: JobTokensData

    WebSocketClient#jobTokens - LLM token stream chunk received. Sent by LLM workers during chat completion streaming

  • llmJobError: LLMJobErrorData

    WebSocketClient#llmJobError - LLM job error

  • llmJobResult: LLMJobResultData

    WebSocketClient#llmJobResult - LLM job completed with usage data. Sent by LLM workers when a chat completion finishes

  • swarmLLMModels: Record<string, number | LLMModelInfo>

    WebSocketClient#swarmLLMModels - Available LLM models with worker counts

  • swarmModels: Record<string, number>

    WebSocketClient#swarmModels - Received swarm model count

  • toastMessage: ToastMessage

    WebSocketClient#toastMessage - Toast message received
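Since `jobTokens` delivers a chat completion as a stream of chunks and `llmJobResult` marks its completion, a client typically buffers chunks per job and joins them when the result event arrives. The sketch below shows that accumulation pattern; the `jobId`/`tokens` payload shapes are illustrative assumptions, not the SDK's real `JobTokensData`/`LLMJobResultData` types.

```typescript
// Illustrative sketch: buffering streamed jobTokens chunks per job until
// llmJobResult signals completion. Payload shapes below are assumptions.

type JobTokensData = { jobId: string; tokens: string };     // assumed shape
type LLMJobResultData = { jobId: string; usage?: unknown }; // assumed shape

class LLMStreamCollector {
  private buffers = new Map<string, string[]>();

  // Call from a `jobTokens` handler: append the chunk to that job's buffer.
  onTokens({ jobId, tokens }: JobTokensData): void {
    const buf = this.buffers.get(jobId) ?? [];
    buf.push(tokens);
    this.buffers.set(jobId, buf);
  }

  // Call from an `llmJobResult` handler: join and release the buffered text.
  onResult({ jobId }: LLMJobResultData): string {
    const text = (this.buffers.get(jobId) ?? []).join("");
    this.buffers.delete(jobId);
    return text;
  }
}

const collector = new LLMStreamCollector();
collector.onTokens({ jobId: "job-1", tokens: "Hello" });
collector.onTokens({ jobId: "job-1", tokens: ", world" });
const full = collector.onResult({ jobId: "job-1" });
```

Keying the buffers by `jobId` lets one listener handle several concurrent streams without interleaving their text.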