Bedrock Knowledge Bases supports 5+ vector store options. They're not interchangeable — each has a sweet spot. This tree routes you to the right one based on your constraints.
~ pick your vector store based on constraints, not vibes ~
Why these options
OpenSearch Serverless
The default for Bedrock Knowledge Bases. Supports true hybrid search (BM25 + k-NN) natively in one query. Scales automatically without capacity planning. Higher cost floor than pgvector but pays off at mid-to-large scale.
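To make "hybrid search in one query" concrete, here's a minimal sketch of what an OpenSearch hybrid query body looks like, combining a BM25 leg and a k-NN leg. The index fields ("text", "embedding") and the query vector are hypothetical placeholders, not anything Bedrock generates for you.

```python
# Sketch of an OpenSearch hybrid query body: one request, two legs.
# Field names ("text", "embedding") are hypothetical; the vector is a
# stand-in for a real embedding from your embedding model.
query_vector = [0.1] * 4  # placeholder embedding

hybrid_body = {
    "query": {
        "hybrid": {
            "queries": [
                # BM25 keyword leg
                {"match": {"text": {"query": "rotate kms keys"}}},
                # k-NN semantic leg
                {"knn": {"embedding": {"vector": query_vector, "k": 10}}},
            ]
        }
    }
}
```

How the two legs' scores get normalized and weighted is configured server-side via an OpenSearch search pipeline, not in the query body itself.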
Aurora PostgreSQL + pgvector
Best if you already run Aurora for transactional data — you get to keep SQL joins between vectors and relational rows. Lower cost at small-to-medium scale. You run ops (index tuning, vacuum). No native hybrid search without extensions.
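The "keep SQL joins between vectors and relational rows" point is the whole pitch, so here's a sketch of what that query looks like. Table and column names are made up for illustration; `<=>` is pgvector's real cosine-distance operator.

```python
# Hypothetical query: rank document chunks by vector similarity while
# filtering on a related transactional table, in a single SQL statement.
# `<=>` is pgvector's cosine-distance operator; smaller is closer.
sql = """
SELECT c.chunk_text,
       o.order_status,
       c.embedding <=> %(query_vec)s AS distance
FROM doc_chunks c
JOIN orders o ON o.id = c.order_id
WHERE o.customer_id = %(customer_id)s
ORDER BY distance
LIMIT 5;
"""
```

No dedicated search service can do this join natively — the vectors live next to your transactional rows, which is exactly why "already running Aurora" tips the decision here.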
Amazon Kendra
Not a raw vector store — a managed enterprise search service with built-in connectors to SharePoint, Confluence, Salesforce, S3, etc. Use when your corpus is scattered across SaaS tools and you want zero indexing work. The priciest option.
S3 Vectors
Newest option (launched in preview in 2025). Stores vectors directly in S3 with k-NN search. Cheapest for read-light workloads. Good for prototypes, small corpora, or use cases where you want to stay in S3.
Exam angle
When a stem mentions "hybrid search," "keyword + semantic," or "high scale" → OpenSearch. When it mentions "already running Aurora" or "cost-sensitive with relational data" → pgvector. When it mentions "SharePoint / Confluence / enterprise doc search" → Kendra. These keywords are the tell.
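The routing above can be written down as a tiny decision function. The constraint flags and the order of the checks are my own framing of the notes, not an official AWS taxonomy.

```python
# A minimal sketch of the decision tree as code. Flag names and check
# order are illustrative; adjust to your actual constraints.
def pick_vector_store(
    needs_hybrid_search: bool = False,
    high_scale: bool = False,
    already_on_aurora: bool = False,
    corpus_in_saas_tools: bool = False,
    read_light_prototype: bool = False,
) -> str:
    if corpus_in_saas_tools:
        return "Kendra"                        # built-in SaaS connectors
    if needs_hybrid_search or high_scale:
        return "OpenSearch Serverless"         # native BM25 + k-NN, auto-scaling
    if already_on_aurora:
        return "Aurora PostgreSQL + pgvector"  # keep SQL joins, lower cost
    if read_light_prototype:
        return "S3 Vectors"                    # cheapest for read-light workloads
    return "OpenSearch Serverless"             # the Bedrock KB default

print(pick_vector_store(needs_hybrid_search=True))  # → OpenSearch Serverless
```

Note the fall-through: with no constraint triggered, OpenSearch Serverless wins because it's the Bedrock Knowledge Bases default.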