Module: utils/activeRuns

Single source of truth for querying active runs across both execution paths (in-process runAbortControllers and BullMQ workerAbortControllers).

The chat endpoint uses hasActiveRunForProvider("local") to decide whether Ollama is busy, avoiding direct coupling to the two registry shapes and preventing the filter logic from drifting between callsites.

Why this exists

Ollama is single-threaded: a concurrent chat request while a crawl/generate/test run is making LLM calls will hang the model. We filter by the provider each run captured at start time, so cloud-provider runs (Anthropic/OpenAI/Google) don't falsely block chat when the user has switched to Ollama in Settings.

Exports

  • captureProvider — Snapshot the active provider for registry entries.
  • hasActiveRunForProvider — Query whether any active run uses a given provider.
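A minimal sketch of how the run-start path might use captureProvider, assuming a Map-based registry keyed by run ID. Only captureProvider and the registry name runAbortControllers come from this doc; startRun, the entry shape, and the globalThis.__aiProvider lookup are illustrative assumptions.

```javascript
// In-process registry: runId -> { controller, provider } (entry shape assumed).
const runAbortControllers = new Map();

function captureProvider() {
  // Stand-in for the real provider lookup; the actual module reads the
  // configured AI provider and returns null when none is set.
  return globalThis.__aiProvider ?? null;
}

function startRun(runId) {
  const controller = new AbortController();
  runAbortControllers.set(runId, {
    controller,
    provider: captureProvider(), // snapshot now; the user may switch providers mid-run
  });
  return controller;
}
```

Snapshotting at start time is what lets later checks match against the provider the run is actually using, not whatever is currently selected in Settings.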

Methods

(static) captureProvider() → {string|null}

Safely snapshot the active AI provider at run-start time. Returns null when no provider is configured or getProvider() throws.

Registry entries store the captured provider so downstream checks can filter accurately even if the user switches providers mid-run.

Returns:

Provider ID ("local", "anthropic", etc.) or null.

Type: string | null
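The try/catch behavior described above might look like the following sketch. getProvider here is a hypothetical stand-in that throws when nothing is configured; the real lookup is an assumption, but the null-on-throw contract is from this doc.

```javascript
// Hypothetical provider lookup that throws when unconfigured.
function getProvider() {
  const p = globalThis.__provider;
  if (p === undefined) throw new Error("no provider configured");
  return p;
}

// Safely snapshot the active provider: null when unconfigured or on throw.
function captureProvider() {
  try {
    return getProvider() ?? null;
  } catch {
    return null;
  }
}
```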

(static) hasActiveRunForProvider(provider) → {boolean}

True if any active run (in-process OR BullMQ) was started with the given provider. Used by the chat endpoint to check whether Ollama is busy.

Parameters:

  provider (string): Provider ID to match (e.g. "local").

Returns:

True if any active run was started with the given provider.

Type: boolean
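A sketch of the cross-registry check and the chat-endpoint gate it enables. The two Map registries mirror the names in this doc, but the entry shape and the chatAllowed helper are assumptions for illustration.

```javascript
// Both execution paths' registries (entry shape { provider, controller } assumed).
const runAbortControllers = new Map();    // in-process runs
const workerAbortControllers = new Map(); // BullMQ worker runs

// True if any active run (either path) captured the given provider at start.
function hasActiveRunForProvider(provider) {
  for (const registry of [runAbortControllers, workerAbortControllers]) {
    for (const { provider: p } of registry.values()) {
      if (p === provider) return true;
    }
  }
  return false;
}

// How a chat endpoint might gate Ollama requests (hypothetical helper):
function chatAllowed() {
  return !hasActiveRunForProvider("local");
}
```

Because the check matches the captured provider rather than the current Settings value, an in-flight Anthropic run returns false for "local" and chat proceeds.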