I expected an alert and didn’t get one

This is the most common monitoring support question. Signa exposes self-service diagnostics so you can answer it in under a minute without filing a ticket.

Step 1 — Pull the diagnostics trace

For each (watch, trademark) pair you expected an alert for, call GET /v1/watches/{id}/diagnostics:

const trace = await signa.watches.diagnostics('wat_01HK7M...', {
  trademarkId: 'tm_01HK7N...',
});
console.log(trace.reason);

The reason field gives you the answer in plain English. The endpoint walks the candidacy chain in order and surfaces the first failure.
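
If you are checking more than one pair, the same call loops cleanly; the watch and trademark IDs below are placeholders:

const expectedPairs = [
  { watchId: 'wat_01HK7M...', trademarkId: 'tm_01HK7N...' },
  { watchId: 'wat_01HK7P...', trademarkId: 'tm_01HK7Q...' },
];

for (const { watchId, trademarkId } of expectedPairs) {
  const trace = await signa.watches.diagnostics(watchId, { trademarkId });
  console.log(`${watchId} / ${trademarkId}: ${trace.reason}`);
}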

Step 2 — Interpret reason

Each reason value, what it means, and the next step:
  • alert fired — the alert was generated. Use outbox_event_id to chase webhook delivery (see Step 3).
  • watch does not include office {code} — the watch's query.filters.offices (or jurisdictions) doesn't include this trademark's office. PATCH the watch to add the office, then optionally POST /v1/watches/{id}/replay to backfill alerts (see the sketch after this list).
  • trademark evaluated more than 90 days ago; provenance no longer available — outside the 90-day diagnostic horizon. If you still need the alert, replay the watch from a date before the trademark's last update.
  • trademark not in candidacy window for sync_run {id} — no trademark_changes row was emitted; the trademark may not have changed in any way the evaluator considers material since the watch was created. Run POST /v1/watches/preview with the watch's query to confirm the trademark would match today.
  • trigger event {type} not in watch.trigger_events — the watch's trigger_events filter excluded this event type (e.g. you watch only trademark.created but this was a trademark.updated). Widen trigger_events (PATCH) and replay if needed.
  • score {n} below threshold {t} — the similarity score was under the threshold. Lower query.score_threshold, or accept that the mark wasn't similar enough.
  • would alert but rolled into digest — the watch is in digest_above_threshold mode and 24h alert volume crossed the throttle threshold. The alert is queued for the daily digest; no action needed unless you want per-alert delivery.
  • no matching reason available — fallback. File a support ticket with the request_id; this should not happen in steady state.
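
Most of the fix-it paths above follow the same pattern: widen the watch, confirm the trademark now matches, then backfill. A minimal sketch, assuming the SDK wraps PATCH /v1/watches/{id} and POST /v1/watches/preview as watches.update and watches.preview (the method names and the office codes are assumptions):

// Widen the watch to include the missing office (office codes are placeholders).
const updated = await signa.watches.update('wat_01HK7M...', {
  query: { filters: { offices: ['EUIPO', 'USPTO'] } },
});

// Confirm the widened query would match the trademark today.
const preview = await signa.watches.preview({ query: updated.query });
console.log(preview);

Once the preview confirms a match, POST /v1/watches/{id}/replay backfills the alerts the old filters missed.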

Step 3 — Cross-reference webhook delivery

When alert_fired=true but you never saw it on your receiver, follow the trace to the delivery audit log:

// 1. The alert exists. The outbox event ID cross-references your endpoint logs.
const trace = await signa.watches.diagnostics('wat_01HK7M...', { trademarkId });
console.log(trace.outbox_event_id); // matches the `webhook-id` header on delivery

// 2. List delivery attempts for the endpoint subscribed to `alert.created`.
const deliveries = await signa.webhooks.listDeliveries('whk_01HK...', {
  since: '2026-05-01T00:00:00Z', // narrow to the period you care about
});

// 3. Filter to the matching event_id.
for await (const d of deliveries) {
  if (d.event_id === trace.outbox_event_id) {
    console.log(d.status, d.http_status, d.error_reason, d.response_body);
  }
}

Likely status outcomes:
  • delivered — your receiver returned 2xx but maybe didn’t store it. Look at response_body and your own logs.
  • failed — receiver returned non-2xx; error_reason (e.g. non_2xx_500) and http_status tell you which side broke. The dispatcher will retry up to 7 times.
  • exhausted — all retries failed. Replay manually with POST /v1/webhooks/{id}/deliveries/{did}/redeliver (a sketch follows at the end of this step).
  • pending — still queued. Wait or check /health/ready to confirm Signa is up.
If the endpoint was auto-disabled mid-flight, list it with GET /v1/webhooks?status=disabled; the disabled_reason field (auto_consecutive_100, auto_failure_rate_50_over_50, or manual) explains why.
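
If a delivery ended up exhausted, or the endpoint was disabled, a recovery pass might look like the sketch below. The SDK method names webhooks.redeliver and webhooks.list are assumptions; only the REST routes named above are documented, and the delivery ID is a placeholder taken from the listDeliveries loop in the previous snippet:

// Re-send an exhausted delivery (assumed wrapper for
// POST /v1/webhooks/{id}/deliveries/{did}/redeliver; 'del_01HK...' is a placeholder ID).
await signa.webhooks.redeliver('whk_01HK...', 'del_01HK...');

// List endpoints that were auto-disabled and see why (assumed wrapper for
// GET /v1/webhooks?status=disabled).
const disabledEndpoints = await signa.webhooks.list({ status: 'disabled' });
for (const endpoint of disabledEndpoints) {
  console.log(endpoint.id, endpoint.disabled_reason); // e.g. auto_consecutive_100
}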

Step 4 — Confirm Signa is healthy

Before assuming a Signa-side bug, hit /v1/health. It returns instantly without auth and will surface degraded readiness if Postgres or OpenSearch is impaired.

curl https://api.signa.so/v1/health

If the platform is impaired, the status will read degraded or unhealthy; retry the diagnostic flow once the degradation clears.
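
In an automated runbook you can apply the same gate in code; this sketch assumes the health response is JSON carrying the status field described above:

const health = await fetch('https://api.signa.so/v1/health').then((res) => res.json());
if (health.status === 'degraded' || health.status === 'unhealthy') {
  // Signa itself is impaired; re-run the diagnostic flow after it recovers
  // instead of debugging your receiver.
  console.warn('Signa reports', health.status);
}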

Retention

  • trademark_changes (candidacy provenance) — 90 days
  • webhook_deliveries (per-attempt audit) — 30 days
  • watch_alerts — 90 days
  • Diagnostic freshness horizon — 90 days

Past the diagnostic horizon, the diagnostics endpoint returns evaluated=false with a reason explaining the freshness limit. If you need an alert older than the horizon, the provenance trail is gone, but you can still replay the watch from a chosen from_date to regenerate it.
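
A replay for that case might look like this; it assumes the SDK wraps POST /v1/watches/{id}/replay as watches.replay (the method name is an assumption), with from_date being the field mentioned above:

await signa.watches.replay('wat_01HK7M...', {
  from_date: '2026-01-15T00:00:00Z', // any date before the trademark's last update
});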