BullMQ Worker for durable run execution (INF-003).
Processes jobs from the sentri:runs queue. Each job contains the
serialised run parameters (project, tests, run record, options). The
worker calls crawlAndGenerateTests or runTests depending on the
job type, mirroring the logic previously inlined in route handlers.
Concurrency
Controlled by the MAX_WORKERS env var (default 2). Each concurrent slot
processes one run at a time — Playwright browser instances are not shared
across jobs.
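A minimal sketch of how a MAX_WORKERS-style setting could be resolved. The helper name `resolveConcurrency` is hypothetical; the actual module presumably reads the env var when constructing the Worker.

```javascript
// Hypothetical helper: resolve worker concurrency from the environment,
// falling back to the documented default of 2 on missing/invalid values.
function resolveConcurrency(env) {
  const parsed = Number.parseInt(env.MAX_WORKERS ?? '', 10);
  return Number.isInteger(parsed) && parsed > 0 ? parsed : 2;
}

// Usage: pass process.env (or any plain object) at startup.
const concurrency = resolveConcurrency(process.env);
```

Parsing with `Number.parseInt` and validating the result avoids surprises from values like `MAX_WORKERS=abc` or `MAX_WORKERS=0`.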
Lifecycle
- startWorker — create and start the BullMQ Worker.
- stopWorker — gracefully close the worker (drain + disconnect).
When Redis is not available, both functions are no-ops.
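The no-op-without-Redis lifecycle described above can be sketched as a guard pattern. This is illustrative only: `createWorker`, `redisAvailable`, and `isRunning` are stand-ins, and the real module wires in BullMQ and its Redis connection instead of an injected factory.

```javascript
// Module-level handle on the (single) worker instance, as in the docs above.
let _worker = null;

// Start the worker; no-op when Redis is unavailable or already started.
function startWorker({ redisAvailable, createWorker }) {
  if (!redisAvailable || _worker) return;
  _worker = createWorker();
}

// Gracefully close: drain in-flight jobs, then disconnect. No-op if
// the worker was never started.
async function stopWorker() {
  if (!_worker) return;
  await _worker.close();
  _worker = null;
}

// Test-only visibility helper (hypothetical).
const isRunning = () => _worker !== null;
```

Guarding both functions keeps callers (route handlers, shutdown hooks) free of Redis-availability checks.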
Members
(static, constant) workerAbortControllers :Map.<string, {controller: AbortController, provider: (string|null)}>
Registry of active BullMQ-processed runs. The chat endpoint reads .provider
to skip runs that aren't using Ollama when checking for concurrent LLM
activity (prevents false-positive 503s when a cloud run is active and the
user switches to Ollama).
Type:
- Map.<string, {controller: AbortController, provider: (string|null)}>
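The provider check described above can be sketched with a plain `Map`. The helper `hasActiveOllamaRun` is hypothetical; the real chat endpoint reads `.provider` directly from this registry.

```javascript
// Registry keyed by run ID, mirroring the documented shape:
// Map<string, { controller: AbortController, provider: string|null }>.
const workerAbortControllers = new Map();

// As the worker would on job start: register each run with its provider.
workerAbortControllers.set('run-1', { controller: new AbortController(), provider: 'openai' });
workerAbortControllers.set('run-2', { controller: new AbortController(), provider: 'ollama' });

// Hypothetical helper mirroring the endpoint's filter: only runs whose
// provider is Ollama count as concurrent local-LLM activity.
function hasActiveOllamaRun(registry) {
  return [...registry.values()].some((entry) => entry.provider === 'ollama');
}
```

Filtering on `provider` is what prevents a false-positive 503 when only cloud-provider runs are active.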
(inner) _worker :Object|null
BullMQ Worker instance.
Type:
- Object | null
Methods
(static) startWorker()
Create and start the BullMQ Worker. No-op if Redis or BullMQ is not available.
(static) stopWorker() → {Promise.<void>}
Gracefully close the worker.
Called from the shutdown hook in index.js.
Returns:
- Type
- Promise.<void>
(async, inner) processJob(job) → {Promise.<void>}
Process a single run job from the queue.
Parameters:
| Name | Type | Description |
|---|---|---|
| job | Object | BullMQ Job instance. |
Returns:
- Type
- Promise.<void>
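The per-job dispatch described at the top of this page can be sketched as below. The `job.name` values and the injected `handlers` object are assumptions for illustration; the real worker calls the module's own crawlAndGenerateTests and runTests with the deserialised run parameters.

```javascript
// Hypothetical dispatcher: route a BullMQ job to the right run handler
// based on its job type, as the docs above describe.
async function processJob(job, handlers) {
  const { params } = job.data;
  if (job.name === 'crawl') {
    // Crawl jobs generate tests from the target site.
    return handlers.crawlAndGenerateTests(params);
  }
  // All other jobs execute an existing test suite.
  return handlers.runTests(params);
}
```

Injecting the handlers keeps the dispatch logic trivially testable without Redis or Playwright.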