Documentation Index
Fetch the complete documentation index at: https://docs.signa.so/llms.txt
Use this file to discover all available pages before exploring further.
I expected an alert and didn’t get one
This is the most common monitoring support question. Signa exposes self-service diagnostics so you can answer it in under a minute without filing a ticket.
Step 1 — Pull the diagnostics trace
For each (watch, trademark) pair you expected an alert for, call GET /v1/watches/{id}/diagnostics. The reason field in the response gives you the answer in plain English: the endpoint walks the candidacy chain in order and surfaces the first failure.
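As a minimal sketch, here is what pulling the reason could look like. This assumes a bearer-token API and that the trademark is identified via a query parameter; neither detail is confirmed on this page, so treat the URL shape and field names as hypothetical.

```python
import json
import urllib.request

BASE = "https://api.signa.so"  # assumed base URL; substitute your own


def diagnostics_url(watch_id: str, trademark_id: str) -> str:
    # Hypothetical shape: trademark passed as a query parameter.
    return f"{BASE}/v1/watches/{watch_id}/diagnostics?trademark_id={trademark_id}"


def fetch_reason(watch_id: str, trademark_id: str, token: str) -> str:
    # Fetch the diagnostics trace and return its plain-English reason field.
    req = urllib.request.Request(
        diagnostics_url(watch_id, trademark_id),
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["reason"]
```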
Step 2 — Interpret reason
| reason value | Meaning | Next step |
|---|---|---|
| alert fired | Alert was generated. Use outbox_event_id to chase webhook delivery. | See Step 3. |
| watch does not include office {code} | The watch’s query.filters.offices (or jurisdictions) doesn’t include this trademark’s office. | PATCH the watch to add the office, then optionally POST /v1/watches/{id}/replay to backfill alerts. |
| trademark evaluated more than 90 days ago; provenance no longer available | Outside the 90-day diagnostic horizon. | If you still need the alert, replay the watch from a date before the trademark’s last update. |
| trademark not in candidacy window for sync_run {id} | No trademark_changes row was emitted. The trademark may not have changed in any way the evaluator considers material since the watch was created. | Run POST /v1/watches/preview with the watch’s query to confirm the trademark would match today. |
| trigger event {type} not in watch.trigger_events | The watch’s trigger_events filter excluded this event type (e.g. you watch only trademark.created but this was a trademark.updated). | Widen trigger_events (PATCH) and replay if needed. |
| score {n} below threshold {t} | Similarity score under threshold. | Lower query.score_threshold, or accept that the mark wasn’t similar enough. |
| would alert but rolled into digest | Watch is in digest_above_threshold mode and 24h alert volume crossed the throttle threshold. | The alert is queued for the daily digest — no action needed unless you want per-alert delivery. |
| no matching reason available | Fallback. | File a support ticket with the request_id — this should not happen in steady state. |
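The table above collapses naturally into a small triage helper. Since several reason strings embed identifiers ({code}, {id}, {n}), this sketch matches by prefix; the next-step strings are paraphrased from the table, not an official mapping.

```python
# Map each diagnostic reason prefix to the suggested next step from the table.
NEXT_STEPS = [
    ("alert fired", "chase webhook delivery via outbox_event_id"),
    ("watch does not include office", "PATCH the watch's offices, then replay"),
    ("trademark evaluated more than 90 days ago", "replay from before the trademark's last update"),
    ("trademark not in candidacy window", "POST /v1/watches/preview to confirm a match"),
    ("trigger event", "widen trigger_events (PATCH) and replay"),
    ("score", "lower query.score_threshold"),
    ("would alert but rolled into digest", "no action; wait for the daily digest"),
]


def triage(reason: str) -> str:
    # First matching prefix wins; anything unrecognized is the support fallback.
    for prefix, step in NEXT_STEPS:
        if reason.startswith(prefix):
            return step
    return "file a support ticket with the request_id"
```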
Step 3 — Cross-reference webhook delivery
When alert_fired=true but you never saw it on your receiver, follow the
trace to the delivery audit log.

status outcomes:

- delivered — your receiver returned 2xx but maybe didn’t store it. Look at response_body and your own logs.
- failed — receiver returned non-2xx; error_reason (e.g. non_2xx_500) and http_status tell you which side broke. The dispatcher will retry up to 7 times.
- exhausted — all retries failed. Replay manually with POST /v1/webhooks/{id}/deliveries/{did}/redeliver.
- pending — still queued. Wait, or check /health/ready to confirm Signa is up.
A webhook in status='disabled' carries a disabled_reason (auto_consecutive_100,
auto_failure_rate_50_over_50, or manual) that explains why delivery stopped.
Step 4 — Confirm Signa is healthy
Before assuming a Signa-side bug, hit /v1/health — it returns instantly without auth and will surface degraded readiness if Postgres or OpenSearch is impaired. In that case status will be degraded or unhealthy,
and you should retry the diagnostic flow once the page clears.
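A readiness gate before re-running the diagnostic flow might look like the sketch below. Only the degraded and unhealthy status values are named on this page; any other value is treated as healthy here, which is an assumption.

```python
def signa_ready(health_payload: dict) -> bool:
    # health_payload is the parsed JSON body of GET /v1/health (no auth needed).
    # Retry diagnostics only once status is neither degraded nor unhealthy.
    return health_payload.get("status") not in ("degraded", "unhealthy")
```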
Retention
| Data | Retention |
|---|---|
| trademark_changes (candidacy provenance) | 90 days |
| webhook_deliveries (per-attempt audit) | 30 days |
| watch_alerts | 90 days |
| Diagnostic freshness horizon | 90 days |
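The windows in the table translate into a simple "is this still diagnosable?" check, sketched here with the dataset names used above:

```python
from datetime import datetime, timedelta, timezone

# Retention windows from the table above.
HORIZONS = {
    "trademark_changes": timedelta(days=90),
    "webhook_deliveries": timedelta(days=30),
    "watch_alerts": timedelta(days=90),
}


def within_retention(dataset: str, event_time: datetime) -> bool:
    # An event can only be diagnosed while its dataset's window still covers it.
    return datetime.now(timezone.utc) - event_time <= HORIZONS[dataset]
```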
Past the horizon, diagnostics return evaluated=false with a reason explaining the
freshness limit. If you need an alert that’s older than the horizon, the
provenance trail is gone but you can still
replay the watch from a chosen
from_date to regenerate it.
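A replay request to regenerate such an alert could be sketched as below. The from_date body field follows the parameter named above, but the exact payload shape for POST /v1/watches/{id}/replay is an assumption.

```python
import json


def replay_payload(from_date: str) -> bytes:
    # Hypothetical JSON body for POST /v1/watches/{id}/replay;
    # from_date is assumed to be an ISO-8601 date string.
    return json.dumps({"from_date": from_date}).encode()
```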