Network failures, rate limits, and transient server errors are facts of life in distributed systems. This guide covers patterns for handling them gracefully when integrating with the Signa API.
## Transient vs. Permanent Failures
Before retrying, determine whether the failure is recoverable.
| Status Code | Type | Action |
|---|---|---|
| 429 | Transient | Rate limited. Retry after the Retry-After header value. |
| 500 | Transient | Internal server error. Retry with exponential backoff. |
| 502 | Transient | Bad gateway. Retry with backoff (typically resolves within seconds). |
| 503 | Transient | Service temporarily unavailable. Retry with longer backoff. |
| 504 | Transient | Gateway timeout. Retry with backoff. |
| 400 | Permanent | Validation error. Fix the request payload before resending. |
| 401 | Permanent | Invalid or expired API key. Check your credentials. |
| 403 | Permanent | Insufficient scopes. Update your API key permissions. |
| 404 | Permanent | Resource not found. Verify the ID is correct. |
| 409 | Permanent | Conflict (e.g., duplicate create). Inspect the error body. |
| 410 | Permanent | Entity was merged. Follow the merged_into_id in the response. |
| 422 | Permanent | Semantic error. The request is well-formed but cannot be processed. |
Only retry on transient failures (429 rate limits and 5xx server errors). Retrying permanent failures wastes your rate limit budget and will never succeed.
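The classification above can be captured in a small predicate. A minimal sketch (isTransient is a name introduced here for illustration, not part of the Signa SDK):

```typescript
// Returns true when a status code is worth retrying, per the table above.
function isTransient(status: number): boolean {
  if (status === 429) return true;                 // rate limited
  if (status >= 500 && status < 600) return true;  // transient server errors
  return false;                                    // other 4xx: permanent
}
```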
## Exponential Backoff with Jitter
Exponential backoff is the standard retry strategy for transient errors. Each retry waits longer than the previous one, with random jitter added to avoid thundering-herd problems when many clients retry simultaneously.
Algorithm:

```text
delay = min(max_delay, base_delay * 2^attempt) + random(0, jitter)
```
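The formula can be expressed as a pure function before it is wired into a retry loop. A sketch, with computeBackoffDelay and its parameter names introduced here for illustration:

```typescript
// delay = min(max_delay, base_delay * 2^attempt) + random(0, jitter)
function computeBackoffDelay(
  attempt: number,
  baseDelayMs = 1000,
  maxDelayMs = 60_000,
  jitterMs = 1000
): number {
  const exponential = baseDelayMs * 2 ** attempt;
  return Math.min(maxDelayMs, exponential) + Math.random() * jitterMs;
}
```

Keeping the delay calculation pure makes it easy to unit-test the schedule (pass jitterMs = 0 for deterministic values).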
```typescript
interface RetryOptions {
  maxRetries?: number;
  baseDelayMs?: number;
  maxDelayMs?: number;
}

async function fetchWithRetry(
  url: string,
  options: RequestInit,
  { maxRetries = 3, baseDelayMs = 1000, maxDelayMs = 60_000 }: RetryOptions = {}
): Promise<Response> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const response = await fetch(url, options);

    // Do not retry permanent failures
    if (response.status >= 400 && response.status < 500 && response.status !== 429) {
      return response;
    }

    // Success -- return immediately
    if (response.ok) {
      return response;
    }

    // All retries exhausted
    if (attempt === maxRetries) {
      return response;
    }

    // Calculate delay
    let delayMs: number;
    if (response.status === 429) {
      // Prefer the server's Retry-After value (in seconds); fall back to
      // backoff if the header is absent or not numeric (e.g. an HTTP-date)
      const retryAfter = response.headers.get('Retry-After');
      const retryAfterSec = retryAfter ? parseInt(retryAfter, 10) : NaN;
      delayMs = Number.isNaN(retryAfterSec)
        ? baseDelayMs * 2 ** attempt
        : retryAfterSec * 1000;
    } else {
      delayMs = baseDelayMs * 2 ** attempt;
    }

    // Cap at max delay, add jitter
    delayMs = Math.min(delayMs, maxDelayMs);
    delayMs += Math.random() * baseDelayMs;

    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }

  // Unreachable, but satisfies TypeScript
  throw new Error('Retry loop exited unexpectedly');
}
```
## Retry Backoff Schedule
With the default settings (base_delay=1s, max_retries=3), the schedule looks like this:
| Attempt | Base Delay | With Jitter (approx.) | Cumulative Wait |
|---|---|---|---|
| 1 | 1 s | 1.0–2.0 s | ~1.5 s |
| 2 | 2 s | 2.0–3.0 s | ~4 s |
| 3 | 4 s | 4.0–5.0 s | ~8.5 s |
For 429 responses, the Retry-After header overrides the calculated base delay. Always respect this value.
The Signa TypeScript SDK (@signa-so/sdk) has built-in retry logic with these defaults. If you are using the SDK, you get this behavior automatically.
## Bulk Operation Retry
When using the batch endpoint, some items in a batch may succeed while others fail. The response includes per-item status codes, so you can retry only the failed items:
```typescript
import { Signa } from '@signa-so/sdk';

const signa = new Signa({ api_key: 'sig_live_YOUR_KEY' });

async function fetchBatchWithRetry(ids: string[], maxRetries = 3): Promise<any[]> {
  let pending = ids;
  const results: any[] = [];

  for (let attempt = 0; attempt <= maxRetries && pending.length > 0; attempt++) {
    const response = await signa.trademarks.batch({ ids: pending });
    const retryIds: string[] = [];

    for (const item of response.results) {
      if (item.status === 'success') {
        results.push(item.data);
      } else if (item.error?.type === 'rate_limited' || item.status_code >= 500) {
        retryIds.push(item.id);
      }
      // Permanent errors are skipped (not retried)
    }

    pending = retryIds;
    if (pending.length > 0 && attempt < maxRetries) {
      const delay = 1000 * 2 ** attempt + Math.random() * 1000;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }

  return results;
}
```
## Circuit Breaker Pattern
For high-throughput integrations, wrap your API calls in a circuit breaker to stop sending requests when the API is consistently failing. This protects both your application and the API from cascading failures.
The circuit has three states:
- Closed (normal): Requests flow through. Failures are counted.
- Open (tripped): All requests fail immediately without contacting the API.
- Half-open (probing): A single test request is sent. If it succeeds, the circuit closes; if it fails, it re-opens.
```typescript
class CircuitBreaker {
  private state: 'closed' | 'open' | 'half-open' = 'closed';
  private failureCount = 0;
  private lastFailureTime = 0;

  constructor(
    private readonly failureThreshold: number = 5,
    private readonly resetTimeoutMs: number = 30_000
  ) {}

  async execute<T>(fn: () => Promise<T>): Promise<T> {
    if (this.state === 'open') {
      // Check if enough time has passed to try again
      if (Date.now() - this.lastFailureTime >= this.resetTimeoutMs) {
        this.state = 'half-open';
      } else {
        throw new Error('Circuit breaker is open -- request blocked');
      }
    }

    try {
      const result = await fn();
      this.onSuccess();
      return result;
    } catch (error) {
      this.onFailure();
      throw error;
    }
  }

  private onSuccess(): void {
    this.failureCount = 0;
    this.state = 'closed';
  }

  private onFailure(): void {
    this.failureCount++;
    this.lastFailureTime = Date.now();
    if (this.failureCount >= this.failureThreshold) {
      this.state = 'open';
    }
  }
}

// Usage
const breaker = new CircuitBreaker(5, 30_000);

async function getTrademarkSafe(id: string) {
  return breaker.execute(() =>
    fetchWithRetry(`https://api.signa.so/v1/trademarks/${id}`, {
      headers: { Authorization: 'Bearer sig_live_YOUR_KEY' },
    })
  );
}
```
A circuit breaker should wrap your retry logic, not replace it. The retry function handles transient blips; the circuit breaker prevents sustained outages from overwhelming your application.
## Request Timeouts
Always set explicit timeouts on API calls. Reasonable defaults for Signa endpoints:

| Endpoint Type | Recommended Timeout |
|---|---|
| Single resource (GET /v1/trademarks/:id) | 10 s |
| List / search (POST /v1/trademarks/search) | 15 s |
| Batch (POST /v1/trademarks/batch) | 30 s |
| Webhook registration | 10 s |
```typescript
const controller = new AbortController();
const timeout = setTimeout(() => controller.abort(), 10_000);

try {
  const response = await fetch('https://api.signa.so/v1/trademarks/tm_abc123', {
    headers: { Authorization: 'Bearer sig_live_YOUR_KEY' },
    signal: controller.signal,
  });
} finally {
  clearTimeout(timeout);
}
```
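SDK calls return plain promises rather than fetch requests, so the same idea can also be expressed as a generic wrapper. A sketch, with withTimeout as a name introduced here for illustration:

```typescript
// Rejects if the wrapped promise does not settle within `ms` milliseconds.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`Timed out after ${ms} ms`)),
      ms
    );
    promise.then(
      (value) => { clearTimeout(timer); resolve(value); },
      (err) => { clearTimeout(timer); reject(err); }
    );
  });
}
```

For example, a batch call could be wrapped as `withTimeout(signa.trademarks.batch({ ids }), 30_000)` to match the recommended 30 s batch timeout.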
## Decision Tree
Use this to determine the right strategy for any failure:
```text
Request failed
|
|--> Status 400/401/403/404/409/410/422?
|      --> Permanent failure. Do not retry. Log and handle.
|
|--> Status 429?
|      --> Read Retry-After header.
|      --> Wait and retry with backoff.
|
|--> Status 500/502/503/504?
|      --> Retry with exponential backoff + jitter.
|      --> If 3+ consecutive 5xx: trip circuit breaker.
|
|--> Network error / timeout?
       --> Retry with backoff.
       --> If persistent: trip circuit breaker.
```
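The tree can be encoded directly as a classification function. A minimal sketch (classifyFailure and the action names are illustrative, not part of the SDK):

```typescript
type FailureAction = 'no-retry' | 'retry-after-header' | 'retry-with-backoff';

// `status` is null for network errors and timeouts (no HTTP response received).
function classifyFailure(status: number | null): FailureAction {
  if (status === null) return 'retry-with-backoff'; // network error / timeout
  if (status === 429) return 'retry-after-header';  // rate limited
  if (status >= 500) return 'retry-with-backoff';   // transient server error
  return 'no-retry';                                // permanent 4xx
}
```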