Architecture diagram

— Event-driven FM · business events trigger AI processing asynchronously —
Sources:   S3 object created · DynamoDB stream change · SaaS webhook · scheduled cron
Route:     EventBridge rule-based dispatch
Buffer:    SQS absorbs bursts
Process:   Lambda builds the prompt
Infer:     Bedrock FM inference
Persist:   DynamoDB / S3
Notify:    SNS · webhook · SES → downstream apps · users / email · other systems

Key property · ASYNCHRONOUS: the producer fires an event and moves on — no waiting for the FM to finish. The consumer processes at its own pace. Decoupled, resilient, cost-efficient. Use batch inference (~50% cheaper) for non-time-sensitive workloads.

How data flows

A business event (new document uploaded, support ticket filed, order placed, nightly schedule) fires. EventBridge routes it to a queue (SQS for buffering and retry). A Lambda worker pulls events, builds a prompt, calls Bedrock. The result is persisted (DynamoDB, S3) and optionally broadcast via SNS / email / webhook to downstream consumers.
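The worker step described above can be sketched as a Lambda handler. This is a minimal sketch, assuming the classic S3 notification envelope forwarded through SQS; the model ID and prompt wording are illustrative, and the actual Bedrock call is shown in a comment so the sketch stays self-contained.

```python
import json

MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"  # illustrative model choice


def build_prompt(bucket: str, key: str) -> dict:
    """Build a Bedrock Messages-API request body asking for a document summary."""
    return {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [
            {"role": "user",
             "content": f"Summarize the document s3://{bucket}/{key}."}
        ],
    }


def lambda_handler(event, context):
    """SQS-triggered worker: each SQS record body wraps an S3 'object created' event."""
    results = []
    for record in event["Records"]:                # SQS delivers a batch of records
        s3_event = json.loads(record["body"])      # S3 notification forwarded via SQS
        for s3_record in s3_event.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = s3_record["s3"]["object"]["key"]
            body = build_prompt(bucket, key)
            # In the real worker:
            #   bedrock = boto3.client("bedrock-runtime")
            #   resp = bedrock.invoke_model(modelId=MODEL_ID, body=json.dumps(body))
            #   ...persist the result to DynamoDB/S3, then publish via SNS...
            results.append((bucket, key, body))
    return results
```

Because the handler is driven purely by queue depth, the same code absorbs one upload per day or a thousand per minute without change.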

Unlike the synchronous RAG pattern, the producer (the thing that fires the event) doesn't wait for the FM. It fires and forgets. The FM processing happens asynchronously — you trade an immediate response for a dramatically better cost profile and built-in resilience (SQS retries, a DLQ for failures).
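The fire-and-forget side can be sketched as a producer that builds an EventBridge entry and returns immediately. The `Source` and `DetailType` names here are illustrative assumptions, and the actual `put_events` call is shown in a comment to keep the sketch self-contained.

```python
import json


def make_event_entry(ticket_id: str, subject: str) -> dict:
    """Build an EventBridge PutEvents entry announcing a new support ticket."""
    return {
        "Source": "myapp.support",       # illustrative event source name
        "DetailType": "TicketCreated",   # illustrative detail-type
        "Detail": json.dumps({"ticketId": ticket_id, "subject": subject}),
    }


entry = make_event_entry("T-1001", "Cannot log in")
# The producer fires and moves on -- it never waits for the FM:
#   boto3.client("events").put_events(Entries=[entry])
```

EventBridge rules then match on `Source`/`DetailType` and route matching events toward the SQS buffer.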

Real-world uses: ticket auto-triage · document summarization on upload · nightly insight generation · customer email drafting · transcription post-processing · log/alert analysis · periodic re-indexing of knowledge bases.

AWS services used

Amazon EventBridge: The event bus. Rules match event patterns and route to targets (SQS, Lambda, Step Functions). Built-in support for SaaS integrations.
Amazon SQS: Absorbs bursts, retries on failure, and feeds DLQs when all retries are exhausted. Essential resilience layer.
AWS Lambda: Polls SQS, builds prompts, calls Bedrock. Scales automatically with queue depth.
Amazon Bedrock: The FM. Use the batch inference API for a ~50% discount when a real-time response isn't needed.
DynamoDB / S3: Store FM outputs. DynamoDB for structured results with quick lookup; S3 for large payloads / long-term retention.
Amazon SNS / SES: Fan out the result to downstream consumers — notifications, email, webhooks.
Step Functions (optional): Replaces a plain Lambda when processing involves multiple steps, branching, or retries with business logic.
Dead-letter queue: Captures events that failed processing after all retries. Critical for auditing and recovery.
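The SQS resilience pieces above come down to queue configuration. The attribute keys below are real SQS attributes; the queue name, DLQ ARN, and retry count are illustrative assumptions, with the API call shown in a comment.

```python
import json

DLQ_ARN = "arn:aws:sqs:us-east-1:123456789012:fm-jobs-dlq"  # illustrative ARN

# Attributes for the main work queue: after 3 failed receives, SQS moves the
# message to the dead-letter queue, where it waits for auditing/recovery.
main_queue_attributes = {
    # Keep longer than the Lambda timeout so in-flight work isn't redelivered:
    "VisibilityTimeout": "300",
    "RedrivePolicy": json.dumps({
        "deadLetterTargetArn": DLQ_ARN,
        "maxReceiveCount": "3",   # retries before the message lands in the DLQ
    }),
}
# Applied via:
#   boto3.client("sqs").create_queue(QueueName="fm-jobs",
#                                    Attributes=main_queue_attributes)
```

This is the layer a direct synchronous call lacks: a failed invocation simply returns to the queue and is retried, instead of being lost.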

When to use this pattern

Use Event-Driven FM when…

  • The producer doesn't need to wait: A user uploads a doc — they don't need the summary back in 2 seconds. Fire the event, process, notify.
  • Volume is bursty or unpredictable: 100 tickets in an hour, then 0 for 4 hours. SQS absorbs bursts; Lambda scales. No throttling issues.
  • Cost matters more than latency: Batch inference (~50% discount) is only available asynchronously. Use it when you can tolerate minutes-to-hours response times.
  • Retries and failure recovery are critical: Synchronous calls lose requests on failure. Queues persist them; DLQs catch the ones that can't succeed.
  • Multiple downstream consumers need the result: EventBridge + SNS fan-out to 5 different systems beats 5 synchronous HTTP calls.
  • You already have an event-driven architecture: If your org already uses EventBridge, adding an FM consumer is natural.

Do NOT use Event-Driven when…

  • The user needs an immediate response: Chat, autocomplete, real-time Q&A. Use synchronous RAG or streaming.
  • Volume is very low: A handful of events per day. The event-bus + queue + Lambda overhead isn't worth it; a direct synchronous call is simpler.
  • Strict ordering is required across many items: SQS standard queues are best-effort ordered; FIFO queues work but have lower throughput. Complex ordering requirements can undermine the benefits.
  • The result must be used within the originating request: If the calling code needs the FM answer to continue, it has to wait — which defeats the async benefit.
  • Debugging is a primary concern: Async flows are harder to trace end-to-end than synchronous ones. X-Ray helps but adds setup complexity.

Exam angle

Pattern-match shortcuts: When a stem mentions "process documents as they're uploaded," "react to events," "asynchronous," "burst traffic," or "handle failures and retry," event-driven is the answer. Expect EventBridge + SQS + Lambda + Bedrock in the correct option.
The "call Bedrock directly from S3 event" trap: Distractor: "configure S3 to invoke Bedrock on PutObject." There is no direct S3-to-Bedrock integration. S3 event → Lambda → Bedrock is the correct chain. A queue (SQS) in between adds retries and burst absorption.
Batch inference opportunity: If volume is high and processing doesn't need to be immediate, use Bedrock batch inference instead of per-event real-time calls. The ~50% discount makes it worth the added orchestration.
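For batch inference, a worker assembles a JSONL file of requests in S3 and starts a job via Bedrock's `CreateModelInvocationJob` API; results land back in S3. The record format below matches that API's documented JSONL shape, while the bucket names, role ARN, and model ID are illustrative assumptions.

```python
import json


# One JSONL line per inference request, uploaded to the input S3 location.
def make_batch_record(record_id: str, text: str) -> str:
    return json.dumps({
        "recordId": record_id,
        "modelInput": {
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 512,
            "messages": [{"role": "user", "content": f"Summarize: {text}"}],
        },
    })


# Kwargs for bedrock.create_model_invocation_job(**job_kwargs); names illustrative.
job_kwargs = {
    "jobName": "nightly-summaries",
    "modelId": "anthropic.claude-3-haiku-20240307-v1:0",
    "roleArn": "arn:aws:iam::123456789012:role/bedrock-batch",  # illustrative
    "inputDataConfig": {"s3InputDataConfig": {"s3Uri": "s3://my-batch/input/"}},
    "outputDataConfig": {"s3OutputDataConfig": {"s3Uri": "s3://my-batch/output/"}},
}
# Started via:
#   boto3.client("bedrock").create_model_invocation_job(**job_kwargs)
```

A scheduled EventBridge rule (the "nightly cron" source) is a natural trigger for the Lambda that accumulates records and submits this job.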

Keywords that point here

react to events · asynchronous · document uploaded · EventBridge · bursty traffic · SQS · retry on failure · batch inference · fan-out to consumers

Related patterns

For user-facing real-time interactions, use Pattern 8: Streaming Chat.
If one event triggers a multi-step workflow, see Step Functions orchestration.
For safety wrapping, apply Pattern 10: Defense-in-Depth at the Lambda worker layer.