GET /v1/health

Response schema for GET /health/ready:
{
  "status": "<string>",
  "service": "<string>",
  "uptime_ms": 123,
  "dependencies": {
    "postgres": {
      "status": "<string>",
      "latency_ms": 123
    },
    "valkey": {
      "status": "<string>",
      "latency_ms": 123
    },
    "opensearch": {
      "status": "<string>",
      "latency_ms": 123
    }
  }
}


Overview

Three unauthenticated endpoints power customer-side uptime monitoring:
Endpoint            Purpose                                                Latency
GET /v1/health      Customer-facing liveness alias under the /v1 prefix.   < 5 ms
GET /health/live    ECS/ALB liveness — process is up.                      < 5 ms
GET /health/ready   Readiness — Postgres + Valkey + OpenSearch reachable.  50–200 ms
/v1/health and /health/live are aliases — both return { "status": "ok" } with HTTP 200 if the API process is alive. /health/ready performs three real probes and surfaces dependency status with per-dep latency. All three skip authentication by design (PUBLIC_PATH_RE in the auth middleware). They cost no API quota and produce no request log entries.

Authentication

None. These endpoints are intended for customer monitoring agents (Datadog, Pingdom, your own cron job) that should not handle API keys.

Suggested poll interval

60 seconds. More frequent polling adds noise without improving signal — ALB health checks already cover the sub-minute window from inside our VPC. For Datadog Synthetics or similar SaaS monitors, 60s is also the cheapest billing tier.
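A minimal cron-friendly poll could look like the sketch below. It assumes curl is available, extracts the status with plain shell text tools (no JSON parser required), and relies on the top-level status field appearing before the per-dependency ones, as in the example response above.

```shell
# Extract the top-level "status" field from a /health/ready payload.
# The first "status" match in the body is the top-level one.
ready_status() {
  grep -o '"status":"[a-z_]*"' | head -n 1 | cut -d'"' -f4
}

# Poll once; wire the echo into your alerting tool of choice.
poll_once() {
  status=$(curl -fsS --max-time 5 https://api.signa.so/health/ready | ready_status)
  [ "$status" = "ok" ] || echo "signa readiness is '${status:-unreachable}'"
}

# Run poll_once from cron every minute,
# or loop: while true; do poll_once; sleep 60; done
```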

Response: GET /v1/health (and /health/live)

{ "status": "ok" }
Always 200 if the process is running. If the process is down you get a connection error or a 5xx from the load balancer instead.

Response: GET /health/ready

status
string
One of:
  • ok — all three dependencies reachable.
  • degraded — non-critical dep down (Valkey or OpenSearch). API still serves most requests; some features (rate-limit cache, search) may error individually.
  • unhealthy — Postgres unreachable. API cannot serve most requests. Returns HTTP 503.
  • shutting_down — task is draining for an ECS stop. Returns HTTP 503 so load balancers route around it.
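As a sketch, a monitoring script might map each documented status value to an action; the echoed messages below are placeholders for real alerting hooks.

```shell
# Map each documented /health/ready status to a monitoring action (sketch).
handle_status() {
  case "$1" in
    ok)            echo "healthy" ;;
    degraded)      echo "warn: non-critical dependency down" ;;
    unhealthy)     echo "page: Postgres unreachable (HTTP 503)" ;;
    shutting_down) echo "info: task draining, expect 503 until replaced" ;;
    *)             echo "unknown status: $1" ;;
  esac
}

handle_status degraded
# -> warn: non-critical dependency down
```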
service
string
Always "core-api".
uptime_ms
integer
Process uptime in milliseconds.
dependencies
object
Per-dependency probe results. Keys are postgres, valkey, and opensearch; each value carries its own status string and the probe's latency_ms.
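Since each dependency entry carries its own status and latency, a quick way to eyeball per-dependency probe times is a small jq filter (a sketch; assumes jq is installed):

```shell
# Print one line per dependency: name, status, probe latency (assumes jq).
# usage: curl -s https://api.signa.so/health/ready | dep_latencies
dep_latencies() {
  jq -r '.dependencies | to_entries[] | "\(.key) \(.value.status) \(.value.latency_ms)ms"'
}

printf '%s' '{"dependencies":{"postgres":{"status":"ok","latency_ms":4},"valkey":{"status":"ok","latency_ms":1}}}' \
  | dep_latencies
# -> postgres ok 4ms
#    valkey ok 1ms
```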

SLA

/health/ready reflects API + DB + cache + search readiness. It does NOT reflect:
  • SQS / SNS health (used by webhook dispatcher and watch evaluator) — those have separate Datadog dashboards customers don’t see. If your watches stop firing alerts but /health/ready is ok, that’s a sign of an ingestion or evaluator-side incident, not an API outage.
  • Specific office-connector health (e.g. USPTO TSDR). Connector outages surface as stale last_relevant_sync_run.completed_at in the watch diagnostics response, not via /health/ready.
  • Webhook receiver health. That’s by definition on your side.
For office-by-office freshness, poll GET /v1/reference/offices and inspect each office’s last successful sync run.
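As a sketch of that freshness check: the response shape below is a hypothetical stand-in for illustration, not the documented /v1/reference/offices schema (assumes jq; ISO-8601 timestamps compare correctly as strings).

```shell
# List offices whose last successful sync completed before a cutoff timestamp.
# HYPOTHETICAL shape: {"offices":[{"code":...,"last_sync_completed_at":...}]}
stale_offices() {   # $1 = ISO-8601 cutoff; offices JSON on stdin
  jq -r --arg cutoff "$1" \
    '.offices[] | select(.last_sync_completed_at < $cutoff) | .code'
}

printf '%s' '{"offices":[
  {"code":"USPTO","last_sync_completed_at":"2024-01-01T00:00:00Z"},
  {"code":"EUIPO","last_sync_completed_at":"2024-06-01T00:00:00Z"}]}' \
  | stale_offices "2024-03-01T00:00:00Z"
# -> USPTO
```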

Examples

# Customer-facing liveness alias under /v1
curl https://api.signa.so/v1/health
# -> {"status":"ok"}

# Detailed readiness — what your monitoring agent should hit every 60s
curl https://api.signa.so/health/ready
# -> {"status":"ok","service":"core-api","uptime_ms":12345,"dependencies":{...}}

See also