update skills

commit f9a530667e (parent 0b0783ef8e)
2026-03-17 16:53:22 -07:00
389 changed files with 54512 additions and 1 deletions

# Cloudflare Queues
Flexible message queuing for async task processing with guaranteed at-least-once delivery and configurable batching.
## Overview
Queues provide:
- At-least-once delivery guarantee
- Push-based (Worker) and pull-based (HTTP) consumers
- Configurable batching and retries
- Dead Letter Queues (DLQ)
- Delays up to 12 hours
**Use cases:** Async processing, API buffering, rate limiting, event workflows, deferred jobs
## Quick Start
```bash
wrangler queues create my-queue
wrangler queues consumer add my-queue my-worker
```
```typescript
// Producer
await env.MY_QUEUE.send({ userId: 123, action: 'notify' });
// Consumer (with proper error handling)
export default {
async queue(batch: MessageBatch, env: Env): Promise<void> {
for (const msg of batch.messages) {
try {
await process(msg.body);
msg.ack();
} catch (error) {
msg.retry({ delaySeconds: 60 });
}
}
}
};
```
## Critical Warnings
**Before using Queues, understand these production mistakes:**
1. **Uncaught errors retry the ENTIRE batch** (not just the failed message). Always use per-message try/catch.
2. **Messages not explicitly ack'd or retry'd are retried automatically** until `max_retries` is exhausted. Always explicitly handle each message.
See [gotchas.md](./gotchas.md) for detailed solutions.
## Core Operations
| Operation | Purpose | Limit |
|-----------|---------|-------|
| `send(body, options?)` | Publish message | 128 KB |
| `sendBatch(messages)` | Bulk publish | 100 msgs/256 KB |
| `message.ack()` | Acknowledge success | - |
| `message.retry(options?)` | Retry with delay | - |
| `batch.ackAll()` | Ack entire batch | - |
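Because `sendBatch()` caps out at 100 messages per call, larger workloads need client-side chunking. A minimal sketch — `QueueLike` is a pared-down stand-in for the real `Queue` binding, and only the 100-message cap is enforced here (the 256 KB byte limit is left out for brevity):

```typescript
interface QueueLike<Body> {
  sendBatch(messages: { body: Body }[]): Promise<void>;
}

// Split an arbitrary array into sendBatch-sized chunks of at most 100 items.
function chunk<T>(items: T[], size = 100): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Publish any number of bodies, one sendBatch call per chunk.
async function sendAll<Body>(queue: QueueLike<Body>, bodies: Body[]): Promise<void> {
  for (const group of chunk(bodies)) {
    await queue.sendBatch(group.map((body) => ({ body })));
  }
}
```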
## Architecture
```
[Producer Worker] → [Queue] → [Consumer Worker/HTTP] → [Processing]
```
- Max 10,000 queues per account
- 5,000 msgs/second per queue
- 4-14 day retention (configurable)
## Reading Order
**New to Queues?** Start here:
1. [configuration.md](./configuration.md) - Set up queues, bindings, consumers
2. [api.md](./api.md) - Send messages, handle batches, ack/retry patterns
3. [patterns.md](./patterns.md) - Real-world examples and integrations
4. [gotchas.md](./gotchas.md) - Critical warnings and troubleshooting
**Task-based routing:**
- Setup queue → [configuration.md](./configuration.md)
- Send/receive messages → [api.md](./api.md)
- Implement specific pattern → [patterns.md](./patterns.md)
- Debug/troubleshoot → [gotchas.md](./gotchas.md)
## In This Reference
- [configuration.md](./configuration.md) - wrangler.jsonc setup, producer/consumer config, DLQ, content types
- [api.md](./api.md) - Send/batch methods, queue handler, ack/retry rules, type-safe patterns
- [patterns.md](./patterns.md) - Async tasks, buffering, rate limiting, D1/Workflows/DO integrations
- [gotchas.md](./gotchas.md) - Critical batch error handling, idempotency, error classification
## See Also
- [workers](../workers/) - Worker runtime for producers/consumers
- [r2](../r2/) - Process R2 event notifications via queues
- [d1](../d1/) - Batch write to D1 from queue consumers

# Queues API Reference
## Producer: Send Messages
```typescript
// Basic send
await env.MY_QUEUE.send({ url: request.url, timestamp: Date.now() });
// Options: delay (max 43200s), contentType (json|text|bytes|v8)
await env.MY_QUEUE.send(message, { delaySeconds: 600 });
await env.MY_QUEUE.send(message, { delaySeconds: 0 }); // Override queue default
// Batch (up to 100 msgs or 256 KB)
await env.MY_QUEUE.sendBatch([
{ body: 'msg1' },
{ body: 'msg2' },
{ body: 'msg3', options: { delaySeconds: 300 } }
]);
// Non-blocking with ctx.waitUntil - send continues after response
ctx.waitUntil(env.MY_QUEUE.send({ data: 'async' }));
// Background tasks in queue consumer
export default {
async queue(batch: MessageBatch, env: Env, ctx: ExecutionContext): Promise<void> {
for (const msg of batch.messages) {
await processMessage(msg.body);
// Fire-and-forget analytics (doesn't block ack)
ctx.waitUntil(
env.ANALYTICS_QUEUE.send({ messageId: msg.id, processedAt: Date.now() })
);
msg.ack();
}
}
};
```
## Consumer: Push-based (Worker)
```typescript
// Type-safe handler with ExportedHandler
interface Env {
MY_QUEUE: Queue;
DB: D1Database;
}
export default {
async queue(batch: MessageBatch<MessageBody>, env: Env, ctx: ExecutionContext): Promise<void> {
// batch.queue, batch.messages.length
for (const msg of batch.messages) {
// msg.id, msg.body, msg.timestamp, msg.attempts
try {
await processMessage(msg.body);
msg.ack();
} catch (error) {
msg.retry({ delaySeconds: 600 });
}
}
}
} satisfies ExportedHandler<Env>;
```
**CRITICAL WARNINGS:**
1. **Messages not explicitly ack'd or retry'd are retried automatically** until `max_retries` is exhausted. Always call `msg.ack()` or `msg.retry()` for each message.
2. **Throwing uncaught errors retries the ENTIRE batch**, not just the failed message. Always wrap individual message processing in try/catch and call `msg.retry()` explicitly per message.
```typescript
// ❌ BAD: Uncaught error retries entire batch
async queue(batch: MessageBatch): Promise<void> {
for (const msg of batch.messages) {
await riskyOperation(msg.body); // If this throws, entire batch retries
msg.ack();
}
}
// ✅ GOOD: Catch per message, handle individually
async queue(batch: MessageBatch): Promise<void> {
for (const msg of batch.messages) {
try {
await riskyOperation(msg.body);
msg.ack();
} catch (error) {
msg.retry({ delaySeconds: 60 });
}
}
}
```
## Ack/Retry Precedence Rules
1. **Per-message calls take precedence**: If you call both `msg.ack()` and `msg.retry()`, last call wins
2. **Batch calls don't override**: `batch.ackAll()` only affects messages without explicit ack/retry
3. **No action = automatic retry**: Messages with no explicit action retry with configured delay
```typescript
async queue(batch: MessageBatch): Promise<void> {
for (const msg of batch.messages) {
msg.ack(); // Message marked for ack
msg.retry(); // Overrides ack - message will retry
}
batch.ackAll(); // Only affects messages not explicitly handled above
}
```
## Batch Operations
```typescript
// Acknowledge entire batch
try {
await bulkProcess(batch.messages);
batch.ackAll();
} catch (error) {
batch.retryAll({ delaySeconds: 300 });
}
```
## Exponential Backoff
```typescript
async queue(batch: MessageBatch, env: Env): Promise<void> {
for (const msg of batch.messages) {
try {
await processMessage(msg.body);
msg.ack();
} catch (error) {
// 30s, 60s, 120s, 240s, 480s, ... up to 12h max
const delay = Math.min(30 * (2 ** msg.attempts), 43200);
msg.retry({ delaySeconds: delay });
}
}
}
```
## Multiple Queues, Single Consumer
```typescript
export default {
async queue(batch: MessageBatch, env: Env): Promise<void> {
switch (batch.queue) {
case 'high-priority': await processUrgent(batch.messages); break;
case 'low-priority': await processDeferred(batch.messages); break;
case 'email': await sendEmails(batch.messages); break;
default: batch.retryAll();
}
}
};
```
## Consumer: Pull-based (HTTP)
```typescript
// Pull messages
const response = await fetch(
`https://api.cloudflare.com/client/v4/accounts/${ACCOUNT_ID}/queues/${QUEUE_ID}/messages/pull`,
{
method: 'POST',
headers: { 'authorization': `Bearer ${API_TOKEN}`, 'content-type': 'application/json' },
body: JSON.stringify({ visibility_timeout_ms: 6000, batch_size: 50 })
}
);
const data = await response.json();
// Acknowledge
await fetch(
`https://api.cloudflare.com/client/v4/accounts/${ACCOUNT_ID}/queues/${QUEUE_ID}/messages/ack`,
{
method: 'POST',
headers: { 'authorization': `Bearer ${API_TOKEN}`, 'content-type': 'application/json' },
body: JSON.stringify({
acks: [{ lease_id: msg.lease_id }],
retries: [{ lease_id: msg2.lease_id, delay_seconds: 600 }]
})
}
);
```
## Interfaces
```typescript
interface MessageBatch<Body = unknown> {
readonly queue: string;
readonly messages: Message<Body>[];
ackAll(): void;
retryAll(options?: QueueRetryOptions): void;
}
interface Message<Body = unknown> {
readonly id: string;
readonly timestamp: Date;
readonly body: Body;
readonly attempts: number;
ack(): void;
retry(options?: QueueRetryOptions): void;
}
interface QueueSendOptions {
contentType?: 'text' | 'bytes' | 'json' | 'v8';
delaySeconds?: number; // 0-43200
}
```

# Queues Configuration
## Create Queue
```bash
wrangler queues create my-queue
wrangler queues create my-queue --retention-period-hours=336 # 14 days
wrangler queues create my-queue --delivery-delay-secs=300
```
## Producer Binding
**wrangler.jsonc:**
```jsonc
{
"queues": {
"producers": [
{
"queue": "my-queue-name",
"binding": "MY_QUEUE",
"delivery_delay": 60 // Optional: default delay in seconds
}
]
}
}
```
## Consumer Configuration (Push-based)
**wrangler.jsonc:**
```jsonc
{
"queues": {
"consumers": [
{
"queue": "my-queue-name",
"max_batch_size": 10, // 1-100, default 10
"max_batch_timeout": 5, // 0-60s, default 5
"max_retries": 3, // default 3, max 100
"dead_letter_queue": "my-dlq", // optional
"retry_delay": 300 // optional: delay retries in seconds
}
]
}
}
```
## Consumer Configuration (Pull-based)
**wrangler.jsonc:**
```jsonc
{
"queues": {
"consumers": [
{
"queue": "my-queue-name",
"type": "http_pull",
"visibility_timeout_ms": 5000, // default 30000, max 12h
"max_retries": 5,
"dead_letter_queue": "my-dlq"
}
]
}
}
```
## TypeScript Types
```typescript
interface Env {
MY_QUEUE: Queue<MessageBody>;
ANALYTICS_QUEUE: Queue<AnalyticsEvent>;
}
interface MessageBody {
id: string;
action: 'create' | 'update' | 'delete';
data: Record<string, any>;
}
export default {
async queue(batch: MessageBatch<MessageBody>, env: Env): Promise<void> {
for (const msg of batch.messages) {
console.log(msg.body.action);
msg.ack();
}
}
} satisfies ExportedHandler<Env>;
```
## Content Type Selection
Choose content type based on consumer type and data requirements:
| Content Type | Use When | Readable By | Supports | Size |
|--------------|----------|-------------|----------|------|
| `json` | Pull consumers, dashboard visibility, simple objects | All (push/pull/dashboard) | JSON-serializable types only | Medium |
| `v8` | Push consumers only, complex JS objects | Push consumers only | Date, Map, Set, BigInt, typed arrays | Small |
| `text` | String-only payloads | All | Strings only | Smallest |
| `bytes` | Binary data (images, files) | All | ArrayBuffer, Uint8Array | Variable |
**Decision tree:**
1. Need to view in dashboard or use pull consumer? → Use `json`
2. Need Date, Map, Set, or other V8 types? → Use `v8` (push consumers only)
3. Just strings? → Use `text`
4. Binary data? → Use `bytes`
```typescript
// JSON: Good for simple objects, pull consumers, dashboard visibility
await env.QUEUE.send({ id: 123, name: 'test' }, { contentType: 'json' });
// V8: Good for Date, Map, Set (push consumers only)
await env.QUEUE.send({
created: new Date(),
tags: new Set(['a', 'b'])
}, { contentType: 'v8' });
// Text: Simple strings
await env.QUEUE.send('process-user-123', { contentType: 'text' });
// Bytes: Binary data
await env.QUEUE.send(imageBuffer, { contentType: 'bytes' });
```
**Default behavior:** If not specified, Cloudflare auto-selects `json` for JSON-serializable objects and `v8` for complex types.
**IMPORTANT:** `v8` messages cannot be read by pull consumers or viewed in the dashboard. Use `json` if you need visibility or pull-based consumption.
## CLI Commands
```bash
# Consumer management
wrangler queues consumer add my-queue my-worker --batch-size=50 --max-retries=5
wrangler queues consumer http add my-queue
wrangler queues consumer worker remove my-queue my-worker
wrangler queues consumer http remove my-queue
# Queue operations
wrangler queues list
wrangler queues pause my-queue
wrangler queues resume my-queue
wrangler queues purge my-queue
wrangler queues delete my-queue
```

# Queues Gotchas & Troubleshooting
## CRITICAL: Top Production Mistakes
### 1. "Entire Batch Retried After Single Error"
**Problem:** Throwing uncaught error in queue handler retries the entire batch, not just the failed message
**Cause:** Uncaught exceptions propagate to the runtime, triggering batch-level retry
**Solution:** Always wrap individual message processing in try/catch and call `msg.retry()` explicitly
```typescript
// ❌ BAD: Throws error, retries entire batch
async queue(batch: MessageBatch): Promise<void> {
for (const msg of batch.messages) {
await riskyOperation(msg.body); // If this throws, entire batch retries
msg.ack();
}
}
// ✅ GOOD: Catch per message, handle individually
async queue(batch: MessageBatch): Promise<void> {
for (const msg of batch.messages) {
try {
await riskyOperation(msg.body);
msg.ack();
} catch (error) {
msg.retry({ delaySeconds: 60 });
}
}
}
```
### 2. "Messages Retry Forever"
**Problem:** Messages not explicitly ack'd or retry'd keep coming back on every delivery
**Cause:** The runtime treats unhandled messages as failed and retries them until `max_retries` is reached, then routes them to the DLQ (or drops them if no DLQ is configured)
**Solution:** Always call `msg.ack()` or `msg.retry()` for each message. Never leave messages unhandled.
```typescript
// ❌ BAD: Skipped messages auto-retry forever
async queue(batch: MessageBatch): Promise<void> {
for (const msg of batch.messages) {
if (shouldProcess(msg.body)) {
await process(msg.body);
msg.ack();
}
// Missing: msg.ack() for skipped messages - they will retry!
}
}
// ✅ GOOD: Explicitly handle all messages
async queue(batch: MessageBatch): Promise<void> {
for (const msg of batch.messages) {
if (shouldProcess(msg.body)) {
await process(msg.body);
msg.ack();
} else {
msg.ack(); // Explicitly ack even if not processing
}
}
}
```
## Common Errors
### "Duplicate Message Processing"
**Problem:** Same message processed multiple times
**Cause:** At-least-once delivery guarantee means duplicates are possible during retries
**Solution:** Design consumers to be idempotent by tracking processed message IDs in KV with expiration TTL
```typescript
async queue(batch: MessageBatch, env: Env): Promise<void> {
for (const msg of batch.messages) {
const processed = await env.PROCESSED_KV.get(msg.id);
if (processed) {
msg.ack();
continue;
}
await processMessage(msg.body);
await env.PROCESSED_KV.put(msg.id, '1', { expirationTtl: 86400 });
msg.ack();
}
}
```
### "Pull Consumer Can't Decode Messages"
**Problem:** Pull consumer or dashboard shows unreadable message bodies
**Cause:** Messages sent with `v8` content type are only decodable by Workers push consumers
**Solution:** Use `json` content type for pull consumers or dashboard visibility
```typescript
// Use json for pull consumers
await env.MY_QUEUE.send(data, { contentType: 'json' });
// Use v8 only for push consumers with complex JS types
await env.MY_QUEUE.send({ date: new Date(), tags: new Set() }, { contentType: 'v8' });
```
### "Messages Not Being Delivered"
**Problem:** Messages sent but consumer not processing
**Cause:** Queue paused, consumer not configured, or consumer errors
**Solution:** Check queue status with `wrangler queues list`, verify consumer configured with `wrangler queues consumer add`, and check logs with `wrangler tail`
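The checks above as wrangler invocations (queue and worker names are placeholders; exact flags may vary by wrangler version):

```bash
# Is the queue paused? Does it have a consumer attached?
wrangler queues list
wrangler queues info my-queue
# Attach a consumer if none is configured
wrangler queues consumer add my-queue my-worker
# Watch consumer logs live for errors
wrangler tail my-worker
```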
### "High Dead Letter Queue Rate"
**Problem:** Many messages ending up in DLQ
**Cause:** Consumer repeatedly failing to process messages after max retries
**Solution:** Review consumer error logs, check external dependency availability, verify message format matches expectations, or increase retry delay
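One of the knobs mentioned above, expressed as consumer config — a longer `retry_delay` gives flaky dependencies time to recover before a message burns through its retries (names are placeholders; see [configuration.md](./configuration.md) for the full field reference):

```jsonc
{
  "queues": {
    "consumers": [
      {
        "queue": "my-queue-name",
        "max_retries": 5,
        "retry_delay": 600, // wait 10 minutes between retries
        "dead_letter_queue": "my-dlq"
      }
    ]
  }
}
```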
## Error Classification Patterns
Classify errors to decide whether to retry or DLQ:
```typescript
async queue(batch: MessageBatch, env: Env): Promise<void> {
for (const msg of batch.messages) {
try {
await processMessage(msg.body);
msg.ack();
} catch (error) {
// Transient errors: retry with backoff
if (isRetryable(error)) {
const delay = Math.min(30 * (2 ** msg.attempts), 43200);
msg.retry({ delaySeconds: delay });
}
// Permanent errors: ack to avoid infinite retries
else {
console.error('Permanent error, sending to DLQ:', error);
await env.ERROR_LOG.put(msg.id, JSON.stringify({ msg: msg.body, error: String(error) }));
msg.ack(); // Prevent further retries
}
}
}
}
function isRetryable(error: unknown): boolean {
if (error instanceof Response) {
// Retry: rate limits, timeouts, server errors
return error.status === 429 || error.status >= 500;
}
if (error instanceof Error) {
// Don't retry: validation, auth, not found
return !error.message.includes('validation') &&
!error.message.includes('unauthorized') &&
!error.message.includes('not found');
}
return false; // Unknown errors don't retry
}
```
### "CPU Time Exceeded in Consumer"
**Problem:** Consumer fails with CPU time limit exceeded
**Cause:** Consumer processing exceeding 30s default CPU time limit
**Solution:** Increase CPU limit in wrangler.jsonc: `{ "limits": { "cpu_ms": 300000 } }` (5 minutes max)
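In context, the snippet above sits at the top level of wrangler.jsonc:

```jsonc
{
  "limits": {
    "cpu_ms": 300000 // raise the per-invocation CPU budget to the 5-minute max
  }
}
```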
## Content Type Decision Guide
**When to use each content type:**
| Content Type | Use When | Readable By | Supports |
|--------------|----------|-------------|----------|
| `json` (default) | Pull consumers, dashboard visibility, simple objects | All (push/pull/dashboard) | JSON-serializable types only |
| `v8` | Push consumers only, complex JS objects | Push consumers only | Date, Map, Set, BigInt, typed arrays |
| `text` | String-only payloads | All | Strings only |
| `bytes` | Binary data (images, files) | All | ArrayBuffer, Uint8Array |
**Decision tree:**
1. Need to view in dashboard or use pull consumer? → Use `json`
2. Need Date, Map, Set, or other V8 types? → Use `v8` (push consumers only)
3. Just strings? → Use `text`
4. Binary data? → Use `bytes`
```typescript
// Dashboard/pull: use json
await env.QUEUE.send({ id: 123, name: 'test' }, { contentType: 'json' });
// Complex JS types (push only): use v8
await env.QUEUE.send({
created: new Date(),
tags: new Set(['a', 'b'])
}, { contentType: 'v8' });
```
## Limits
| Limit | Value | Notes |
|-------|-------|-------|
| Max queues | 10,000 | Per account |
| Message size | 128 KB | Maximum per message |
| Batch size (consumer) | 100 messages | Maximum messages per batch |
| Batch size (sendBatch) | 100 msgs or 256 KB | Whichever limit reached first |
| Throughput | 5,000 msgs/sec | Per queue |
| Retention | 4-14 days | Configurable retention period |
| Max backlog | 25 GB | Maximum queue backlog size |
| Max delay | 12 hours (43,200s) | Maximum message delay |
| Max retries | 100 | Maximum retry attempts |
| CPU time default | 30s | Per consumer invocation |
| CPU time max | 300s (5 min) | Configurable via `limits.cpu_ms` |
| Operations per message | 3 (write + read + delete) | Base cost per message |
| Pricing | $0.40 per 1M operations | After 1M free operations |
| Message charging | Per 64 KB chunk | Messages charged in 64 KB increments |
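The last three rows combine into a back-of-the-envelope cost formula. A sketch under the billing model stated in the table (3 operations per message, each counted once per 64 KB chunk, first 1M operations free, $0.40 per additional million) — treat the numbers as illustrative, not a billing reference:

```typescript
const OPS_PER_MESSAGE = 3;      // write + read + delete
const CHUNK_BYTES = 64 * 1024;  // billing granularity per operation
const FREE_OPS = 1_000_000;     // monthly free tier
const PRICE_PER_MILLION = 0.4;  // USD

function monthlyCostUSD(messages: number, avgMessageBytes: number): number {
  const chunks = Math.ceil(avgMessageBytes / CHUNK_BYTES);
  const totalOps = messages * chunks * OPS_PER_MESSAGE;
  const billable = Math.max(0, totalOps - FREE_OPS);
  return (billable / 1_000_000) * PRICE_PER_MILLION;
}

// e.g. 10M messages of ~10 KB each: 10M * 1 chunk * 3 ops = 30M ops,
// 29M billable → about $11.60/month
```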

# Queues Patterns & Best Practices
## Async Task Processing
```typescript
// Producer: Accept request, queue work
export default {
async fetch(request: Request, env: Env): Promise<Response> {
const { userId, reportType } = await request.json();
await env.REPORT_QUEUE.send({ userId, reportType, requestedAt: Date.now() });
return Response.json({ message: 'Report queued', status: 'pending' });
}
};
// Consumer: Process reports
export default {
async queue(batch: MessageBatch, env: Env): Promise<void> {
for (const msg of batch.messages) {
const { userId, reportType } = msg.body;
const report = await generateReport(userId, reportType, env);
await env.REPORTS_BUCKET.put(`${userId}/${reportType}.pdf`, report);
msg.ack();
}
}
};
```
## Buffering API Calls
```typescript
// Producer: Queue log entries
ctx.waitUntil(env.LOGS_QUEUE.send({
method: request.method,
url: request.url,
timestamp: Date.now()
}));
// Consumer: Batch write to external API
async queue(batch: MessageBatch, env: Env): Promise<void> {
const logs = batch.messages.map(m => m.body);
await fetch(env.LOG_ENDPOINT, { method: 'POST', body: JSON.stringify({ logs }) });
batch.ackAll();
}
```
## Rate Limiting Upstream
```typescript
async queue(batch: MessageBatch, env: Env): Promise<void> {
  for (const msg of batch.messages) {
    try {
      await callRateLimitedAPI(msg.body);
      msg.ack();
    } catch (error: any) {
      if (error.status === 429) {
        // Honor the upstream Retry-After header, defaulting to 60s
        const retryAfter = parseInt(error.headers.get('Retry-After') || '60', 10);
        msg.retry({ delaySeconds: retryAfter });
      } else {
        // Don't rethrow: an uncaught error would retry the entire batch
        msg.retry({ delaySeconds: 60 });
      }
    }
  }
}
```
```
## Event-Driven Workflows
```typescript
// R2 event → Queue → Worker
export default {
async queue(batch: MessageBatch, env: Env): Promise<void> {
for (const msg of batch.messages) {
const event = msg.body;
if (event.action === 'PutObject') {
await processNewFile(event.object.key, env);
} else if (event.action === 'DeleteObject') {
await cleanupReferences(event.object.key, env);
}
msg.ack();
}
}
};
```
## Dead Letter Queue Pattern
```typescript
// Main queue: After max_retries, goes to DLQ automatically
export default {
async queue(batch: MessageBatch, env: Env): Promise<void> {
for (const msg of batch.messages) {
try {
await riskyOperation(msg.body);
msg.ack();
} catch (error) {
console.error(`Failed after ${msg.attempts} attempts:`, error);
}
}
}
};
// DLQ consumer: Log and store failed messages
export default {
async queue(batch: MessageBatch, env: Env): Promise<void> {
for (const msg of batch.messages) {
await env.FAILED_KV.put(msg.id, JSON.stringify(msg.body));
msg.ack();
}
}
};
```
## Priority Queues
High priority: `max_batch_size: 5, max_batch_timeout: 1`. Low priority: `max_batch_size: 100, max_batch_timeout: 30`.
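Expressed as consumer config — two queues, one tuned for latency and one for throughput (queue names are placeholders):

```jsonc
{
  "queues": {
    "consumers": [
      {
        "queue": "high-priority",
        "max_batch_size": 5,   // small batches, low latency
        "max_batch_timeout": 1
      },
      {
        "queue": "low-priority",
        "max_batch_size": 100, // large batches, high throughput
        "max_batch_timeout": 30
      }
    ]
  }
}
```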
## Delayed Job Processing
```typescript
await env.EMAIL_QUEUE.send({ to, template, userId }, { delaySeconds: 3600 });
```
## Fan-out Pattern
```typescript
async fetch(request: Request, env: Env): Promise<Response> {
const event = await request.json();
// Send to multiple queues for parallel processing
await Promise.all([
env.ANALYTICS_QUEUE.send(event),
env.NOTIFICATIONS_QUEUE.send(event),
env.AUDIT_LOG_QUEUE.send(event)
]);
return Response.json({ status: 'processed' });
}
```
## Idempotency Pattern
```typescript
async queue(batch: MessageBatch, env: Env): Promise<void> {
for (const msg of batch.messages) {
// Check if already processed
const processed = await env.PROCESSED_KV.get(msg.id);
if (processed) {
msg.ack();
continue;
}
await processMessage(msg.body);
await env.PROCESSED_KV.put(msg.id, '1', { expirationTtl: 86400 });
msg.ack();
}
}
```
## Integration: D1 Batch Writes
```typescript
async queue(batch: MessageBatch, env: Env): Promise<void> {
// Collect all inserts for single D1 batch
const statements = batch.messages.map(msg =>
env.DB.prepare('INSERT INTO events (id, data, created) VALUES (?, ?, ?)')
.bind(msg.id, JSON.stringify(msg.body), Date.now())
);
try {
await env.DB.batch(statements);
batch.ackAll();
} catch (error) {
console.error('D1 batch failed:', error);
batch.retryAll({ delaySeconds: 60 });
}
}
```
## Integration: Workflows
```typescript
// Queue triggers Workflow for long-running tasks
async queue(batch: MessageBatch, env: Env): Promise<void> {
for (const msg of batch.messages) {
try {
const instance = await env.MY_WORKFLOW.create({
id: msg.id,
params: msg.body
});
console.log('Workflow started:', instance.id);
msg.ack();
} catch (error) {
msg.retry({ delaySeconds: 30 });
}
}
}
```
## Integration: Durable Objects
```typescript
// Queue distributes work to Durable Objects by ID
async queue(batch: MessageBatch, env: Env): Promise<void> {
for (const msg of batch.messages) {
const { userId, action } = msg.body;
// Route to user-specific DO
const id = env.USER_DO.idFromName(userId);
const stub = env.USER_DO.get(id);
try {
await stub.fetch(new Request('https://do/process', {
method: 'POST',
body: JSON.stringify({ action, messageId: msg.id })
}));
msg.ack();
} catch (error) {
msg.retry({ delaySeconds: 60 });
}
}
}
```