
Commit fb36101

cevianclaude authored

Add pgvector and hybrid text search skills (#65)
## Summary

- Add **pgvector-semantic-search** skill: comprehensive guide for pgvector setup, HNSW indexes, quantization strategies, filtering, and troubleshooting
- Add **hybrid-text-search** skill: combining pg_textsearch (BM25) with pgvector using RRF fusion, with Python and TypeScript examples

## Skills Added

### pgvector-semantic-search (323 lines)

- Golden path defaults (halfvec, HNSW, cosine distance)
- HNSW and IVFFlat index configuration
- Binary quantization for large datasets
- Filtering best practices (iterative scan, partial indexes, partitioning)
- Monitoring, debugging, and common issues

### hybrid-text-search (241 lines)

- When to use hybrid vs semantic-only vs keyword-only
- Parallel query pattern with client-side RRF fusion
- Weighting and reranking with cross-encoders
- Python and TypeScript code examples
- BM25 configuration notes (k1/b tuning, partitioned tables)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
1 parent 44f5a47 commit fb36101

File tree

2 files changed (+605, −0 lines)

Lines changed: 327 additions & 0 deletions
---
name: pgvector-semantic-search
description: pgvector setup and best practices for semantic search with text embeddings in PostgreSQL
---

# pgvector for Semantic Search

Semantic search finds content by meaning rather than exact keywords. An embedding model converts text into high-dimensional vectors, where similar meanings map to nearby points. pgvector stores these vectors in PostgreSQL and uses approximate nearest neighbor (ANN) indexes to find the closest matches quickly—scaling to millions of rows without leaving the database. Store your text alongside its embedding, then query by converting your search text to a vector and returning the rows with the smallest distance.

This guide covers pgvector setup and tuning—not embedding model selection or text chunking, though both significantly affect search quality. Requires pgvector 0.8.0+ for all features (`halfvec`, `binary_quantize`, iterative scan).
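
To make the distance concrete, here is a small self-contained Python sketch (not part of the skill's SQL) of the computation behind the `<=>` cosine-distance operator; the toy 3-dim vectors are illustrative stand-ins for real embeddings:

```python
import math

def cosine_distance(a, b):
    # pgvector's <=> operator computes 1 - cosine similarity:
    # ~0 for vectors pointing the same way, up to 2 for opposite directions.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

# Toy 3-dim "embeddings"; real models emit hundreds to thousands of dimensions.
query     = [0.1, 0.9, 0.2]
doc_close = [0.12, 0.88, 0.19]   # similar direction -> small distance
doc_far   = [0.9, -0.1, 0.4]     # different direction -> large distance

docs = {"close": doc_close, "far": doc_far}
ranked = sorted(docs, key=lambda n: cosine_distance(query, docs[n]))
print(ranked)  # → ['close', 'far']
```

`ORDER BY embedding <=> $1 LIMIT k` is exactly this sort-by-distance, executed approximately via the ANN index instead of exhaustively.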
## Golden Path (Default Setup)

Use this configuration unless you have a specific reason not to.

- Embedding column data type: `halfvec(N)` where `N` is your embedding dimension (must match everywhere). Examples use 1536; replace with your dimension `N`.
- Distance: cosine (`<=>`)
- Index: HNSW (`m = 16`, `ef_construction = 64`). Use `halfvec_cosine_ops` and query with `<=>`.
- Query-time recall: `SET hnsw.ef_search = 100` (a good starting point from published benchmarks; increase for higher recall at higher latency)
- Query pattern: `ORDER BY embedding <=> $1::halfvec(N) LIMIT k`

This setup provides a strong speed–recall tradeoff for most text-embedding workloads.
## Core Rules

- **Enable the extension** in each database: `CREATE EXTENSION IF NOT EXISTS vector;`
- **Use HNSW indexes by default**—superior speed–recall tradeoff, can be created on empty tables, no training step required. Only consider IVFFlat for write-heavy or memory-bound workloads.
- **Use `halfvec` by default**—store and index as `halfvec` for 50% smaller storage and indexes with minimal recall loss.
- **Index after bulk loading** initial data for best build performance.
- **Create indexes concurrently** in production: `CREATE INDEX CONCURRENTLY ...`
- **Use cosine distance by default** (`<=>`). For non-normalized embeddings, use cosine. For unit-normalized embeddings, cosine and inner product yield identical rankings; default to cosine.
- **Match the query operator to the index ops class**: an index with `halfvec_cosine_ops` requires `<=>` in queries; `halfvec_l2_ops` requires `<->`. Mismatched operators won't use the index.
- **Always cast query vectors explicitly** (`$1::halfvec(N)`) to avoid implicit-cast failures in prepared statements.
- **Always use the same embedding model for data and queries.** Distances are only meaningful between vectors produced by the same model.

## Type Rules

- Store embeddings as `halfvec(N)`
- Cast query vectors to `halfvec(N)`
- Store binary quantized vectors as `bit(N)` in a generated column
- Do not mix `vector` / `halfvec` / `bit` without explicit casts
- Never call `binary_quantize()` on table columns inside `ORDER BY`; store it instead
- Dimensions must match: a `halfvec(1536)` column requires query vectors cast as `::halfvec(1536)`
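
Why `halfvec` costs so little recall: each component is rounded to a 16-bit float, and for typical unit-scale embedding components the rounding error is tiny relative to the distances being compared. A quick stdlib sketch (Python's `struct` format `'e'` is IEEE 754 half precision; the 0.05 scale is an arbitrary stand-in for embedding component magnitudes):

```python
import struct
import random

def to_half_and_back(x):
    # Round-trip a float through IEEE 754 half precision,
    # approximating what storing it in a halfvec column does.
    return struct.unpack('e', struct.pack('e', x))[0]

random.seed(0)
full = [random.gauss(0, 0.05) for _ in range(1536)]   # stand-in embedding
half = [to_half_and_back(x) for x in full]

max_abs_err = max(abs(a - b) for a, b in zip(full, half))
rel_err = max_abs_err / max(abs(x) for x in full)
print(f"max componentwise error: {max_abs_err:.2e} (relative: {rel_err:.2e})")
```

Half precision carries ~11 bits of mantissa, so the relative rounding error stays well below 0.1%—far smaller than the gaps between "near" and "far" neighbors in practice.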
## Standard Pattern

```sql
-- Store and index as halfvec
CREATE TABLE items (
    id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    contents TEXT NOT NULL,
    embedding halfvec(1536) NOT NULL -- NOT NULL requires embeddings generated before insert, not async
);
CREATE INDEX ON items USING hnsw (embedding halfvec_cosine_ops);

-- Query: returns the 10 closest items. $1 is the embedding of your search text.
SELECT id, contents FROM items ORDER BY embedding <=> $1::halfvec(1536) LIMIT 10;
```

For other distance operators (L2, inner product, etc.), see the [pgvector README](https://github.com/pgvector/pgvector).
## HNSW Index

The recommended index type. Creates a multilayer navigable graph with a superior speed–recall tradeoff. Can be created on empty tables (no training step required).

```sql
CREATE INDEX ON items USING hnsw (embedding halfvec_cosine_ops);

-- With tuning parameters
CREATE INDEX ON items USING hnsw (embedding halfvec_cosine_ops) WITH (m = 16, ef_construction = 64);
```

### HNSW Parameters

| Parameter | Default | Description |
|-----------|---------|-------------|
| `m` | 16 | Max connections per layer. Higher = better recall, more memory |
| `ef_construction` | 64 | Build-time candidate list. Higher = better graph quality, slower build |
| `hnsw.ef_search` | 40 | Query-time candidate list. Higher = better recall, slower queries. Should be ≥ LIMIT. |

**ef_search tuning (rough guidelines—actual results vary by dataset):**

| ef_search | Approx Recall | Relative Speed |
|-----------|---------------|----------------|
| 40 | lower (~95% on some benchmarks) | 1x (baseline) |
| 100 | higher | ~2x slower |
| 200 | very high | ~4x slower |
| 400 | near-exact | ~8x slower |

```sql
-- Set search parameter for the session
SET hnsw.ef_search = 100;

-- Set for a single query
BEGIN;
SET LOCAL hnsw.ef_search = 100;
SELECT id, contents FROM items ORDER BY embedding <=> $1::halfvec(1536) LIMIT 10;
COMMIT;
```
## IVFFlat Index (Generally Not Recommended)

Default to HNSW. Use IVFFlat only when HNSW’s operational costs matter more than peak recall.

Choose IVFFlat if:
- Write-heavy or constantly changing data AND you're willing to rebuild the index frequently
- You rebuild indexes often and want predictable build time and memory usage
- Memory is tight and you cannot keep an HNSW graph mostly resident
- Data is partitioned or tiered, and this index lives on colder partitions

Avoid IVFFlat if you need:
- highest recall at low latency
- minimal tuning
- a “set and forget” index

Notes:
- IVFFlat requires data to exist before index creation.
- Recall depends on `lists` and `ivfflat.probes`; higher probes = better recall, slower queries.

Starter config:
```sql
CREATE INDEX ON items
USING ivfflat (embedding halfvec_cosine_ops)
WITH (lists = 1000);

SET ivfflat.probes = 10;
```
## Quantization Strategies

- Quantization is a memory decision, not a recall decision.
- Use `halfvec` by default for storage and indexing.
- Estimate HNSW index footprint as ~4–6 KB per 1536-dim `halfvec` vector (m=16), as an order of magnitude; 3072-dim is ~2×; m=32 roughly doubles HNSW link/graph overhead.
- If p95/p99 latency rises while CPU is mostly idle, the HNSW index is likely no longer resident in memory.
- If `halfvec` doesn’t fit, use binary quantization + re-ranking.

### Guidelines for 1536-dim vectors

Approximate `halfvec` capacity at `m=16`, 1536-dim (assumes RAM is mostly available for index caching):

| RAM | Approx max halfvec vectors |
|-----|----------------------------|
| 16 GB | ~2–3M vectors |
| 32 GB | ~4–6M vectors |
| 64 GB | ~8–12M vectors |
| 128 GB | ~16–25M vectors |

For 3072-dim embeddings, divide these numbers by ~2.
For `m=32`, also divide capacity by ~2.

If the index cannot fit in memory at this scale, use binary quantization.

These are ranges, not guarantees. Validate by monitoring cache residency and p95/p99 latency under load.
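
These capacities follow from simple arithmetic on the per-vector footprint. A back-of-envelope helper (the 5 KB default is just the midpoint of the ~4–6 KB range above, not a measured value):

```python
def hnsw_index_gb(n_vectors, kb_per_vector=5.0):
    # kb_per_vector = 5.0 is the midpoint of the ~4-6 KB estimate for
    # 1536-dim halfvec at m=16; roughly double it for 3072-dim or m=32.
    return n_vectors * kb_per_vector / (1024 * 1024)

for n in (1_000_000, 3_000_000, 10_000_000):
    print(f"{n:>12,} vectors -> ~{hnsw_index_gb(n):.1f} GB of index")
```

For example, 3M vectors comes out around 14 GB, which is why ~2–3M is the ceiling for a 16 GB machine once OS and shared-buffer overhead are accounted for.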
### Binary Quantization (For Very Large Datasets)

32× memory reduction. Use with re-ranking for acceptable recall.

```sql
-- Table with a generated column for binary quantization
CREATE TABLE items (
    id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    contents TEXT NOT NULL,
    embedding halfvec(1536) NOT NULL,
    embedding_bq bit(1536) GENERATED ALWAYS AS (binary_quantize(embedding)::bit(1536)) STORED
);

CREATE INDEX ON items USING hnsw (embedding_bq bit_hamming_ops);

-- Query with re-ranking for better recall
-- ef_search must be >= the inner LIMIT to retrieve enough candidates
SET hnsw.ef_search = 800;
WITH q AS (
    SELECT binary_quantize($1::halfvec(1536))::bit(1536) AS qb
)
SELECT *
FROM (
    SELECT i.id, i.contents, i.embedding
    FROM items i, q
    ORDER BY i.embedding_bq <~> q.qb -- binary (Hamming) distance, uses the index
    LIMIT 800
) candidates
ORDER BY candidates.embedding <=> $1::halfvec(1536) -- exact halfvec distance (no index), more accurate than binary
LIMIT 10;
```

The 80× oversampling ratio (800 candidates for 10 results) is a reasonable starting point. Binary quantization loses precision, so more candidates are needed to find the true nearest neighbors during re-ranking. Increase if recall is insufficient; decrease if re-ranking latency is too high.
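
The quantize → Hamming scan → exact re-rank pipeline above can be sketched in memory. This toy Python version (64-dim random vectors, 10× oversampling, hypothetical helper names) mimics the SQL's two stages, not pgvector's actual execution:

```python
import math
import random

def binary_quantize(v):
    # Mirrors pgvector's binary_quantize: 1 where the component is > 0, else 0.
    return [1 if x > 0 else 0 for x in v]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

random.seed(1)
docs = [[random.gauss(0, 1) for _ in range(64)] for _ in range(1000)]
query = list(docs[42])            # query identical to one stored vector
qb = binary_quantize(query)

# Stage 1: cheap Hamming scan over binary codes, oversampling 10x (100 for 10).
bq_codes = [binary_quantize(d) for d in docs]
candidates = sorted(range(len(docs)), key=lambda i: hamming(bq_codes[i], qb))[:100]

# Stage 2: exact re-rank of the oversampled candidates with cosine distance.
top = sorted(candidates, key=lambda i: cosine_distance(docs[i], query))[:10]
print(top[0])  # → 42 (the identical vector survives both stages)
```

Stage 1 is fast because Hamming distance on bits is cheap; stage 2 restores accuracy on the small candidate set—the same division of labor as the inner and outer queries in the SQL.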
## Performance by Dataset Size

| Scale | Vectors | Config | Notes |
|-------|---------|--------|-------|
| Small | <100K | Defaults | Index optional but improves tail latency |
| Medium | 100K–5M | Defaults | Monitor p95 latency; most common production range |
| Large | 5M+ | `ef_construction=100+` | Memory residency critical |
| Very Large | 10M+ | Binary quantization + re-ranking | Add RAM or partition first if possible |

Tune `ef_search` first for recall; only increase `m` if recall plateaus and memory allows. Under concurrency, tail latency spikes when the index doesn't fit in memory. Binary quantization is an escape hatch—prefer adding RAM or partitioning first.
## Filtering Best Practices

Filtered vector search requires care. Depending on filter selectivity and query shape, filters can cause early termination (too few rows, missing results) or increase work (latency).

### Iterative scan (recommended when filters are selective)

By default, HNSW may stop early when a WHERE clause is present, which can lead to fewer results than expected. Iterative scan allows HNSW to continue searching until enough filtered rows are found.

Enable iterative scan when filters materially reduce the result set.

```sql
-- Enable iterative scans for filtered queries
SET hnsw.iterative_scan = relaxed_order;

SELECT id, contents
FROM items
WHERE category_id = 123
ORDER BY embedding <=> $1::halfvec(1536)
LIMIT 10;
```

If results are still sparse, increase the scan budget:

```sql
SET hnsw.max_scan_tuples = 50000;
```

Trade-off: increasing `hnsw.max_scan_tuples` improves recall but can significantly increase latency.

**When iterative scan is not needed:**
- The filter matches a large portion of the table (low selectivity)
- You are prefiltering via a B-tree index
- You are querying a single partition or partial index
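
A toy Python simulation of why early termination loses results, assuming a scan budget of 40 candidates and a filter matching ~2% of rows (both numbers are illustrative, not pgvector internals):

```python
import random

random.seed(2)
# Each item: (distance_to_query, passes_filter); ~2% of rows match the filter.
items = sorted((random.random(), random.random() < 0.02) for _ in range(100_000))

def default_scan(k=10, scan_budget=40):
    # Early-termination sketch: examine only the scan_budget nearest
    # candidates, keep those passing the filter, then stop -- even if
    # fewer than k survive.
    return [d for d, ok in items[:scan_budget] if ok][:k]

def iterative_scan(k=10, max_scan_tuples=50_000):
    # Iterative-scan sketch: keep walking outward until k filtered rows
    # are found or the tuple budget is exhausted.
    hits = []
    for d, ok in items[:max_scan_tuples]:
        if ok:
            hits.append(d)
            if len(hits) == k:
                break
    return hits

print(len(default_scan()), "vs", len(iterative_scan()), "results")
```

With a 2% filter, ~40 candidates yield only a row or two on average, so the default scan comes up short; the iterative scan keeps going until it fills the LIMIT.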
### Choose the right filtering strategy

**Highly selective filters (under ~10k rows)**
Use a B-tree index on the filter column so Postgres can prefilter before ANN.

```sql
CREATE INDEX ON items (category_id);
```

**Low-cardinality filters (few distinct values)**
Use partial HNSW indexes per filter value.

```sql
CREATE INDEX ON items
USING hnsw (embedding halfvec_cosine_ops)
WHERE category_id = 11;
```

**Many filter values or large datasets**
Partition by the filter key to keep each ANN index small.

```sql
CREATE TABLE items (
    embedding halfvec(1536),
    category_id int
) PARTITION BY LIST (category_id);
```

### Key rules

- Filters that match few rows require prefiltering, partitioning, or iterative scan.
- Always validate filtered queries by measuring p95/p99 latency and tuples visited under realistic load.

### Alternative: pgvectorscale for label-based filtering

For large datasets with label-based filters, [pgvectorscale](https://github.com/timescale/pgvectorscale)'s StreamingDiskANN index supports filtered indexes on `smallint[]` columns. Labels are indexed alongside vectors, enabling efficient filtered search without the accuracy tradeoffs of HNSW post-filtering. See the pgvectorscale documentation for setup details.
## Bulk Loading

```sql
-- COPY is fastest; binary format is faster still but requires proper encoding
-- Text format: '[0.1, 0.2, ...]'
COPY items (contents, embedding) FROM STDIN;
-- Binary format (if your client supports it):
COPY items (contents, embedding) FROM STDIN WITH (FORMAT BINARY);

-- Add indexes AFTER loading
SET maintenance_work_mem = '4GB';
SET max_parallel_maintenance_workers = 7;
CREATE INDEX ON items USING hnsw (embedding halfvec_cosine_ops);
```
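
For text-format COPY, each row is tab-separated with the vector written as a `[x,y,...]` literal. A minimal Python sketch of preparing such input (with psycopg you would stream this through its COPY support; the table and column names follow the example above):

```python
import io

# Rows destined for: COPY items (contents, embedding) FROM STDIN
rows = [
    ("first document", [0.1, 0.2, 0.3]),
    ("second document", [0.4, 0.5, 0.6]),
]

buf = io.StringIO()
for contents, emb in rows:
    vec_literal = "[" + ",".join(repr(x) for x in emb) + "]"
    # COPY text format: tab-separated columns, newline-terminated rows.
    # (Real contents must also have tabs, newlines, and backslashes escaped.)
    buf.write(f"{contents}\t{vec_literal}\n")

payload = buf.getvalue()
print(payload, end="")
```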
## Maintenance

- **VACUUM regularly** after updates/deletes—stale entries may persist until vacuumed
- **REINDEX** if performance degrades after high churn (rebuilds the graph from scratch)
- For write-heavy workloads with frequent deletes, consider IVFFlat or partitioning by time (e.g. TimescaleDB hypertables)
## Monitoring & Debugging

```sql
-- Check index size
SELECT pg_size_pretty(pg_relation_size('items_embedding_idx'));

-- Debug query performance
EXPLAIN (ANALYZE, BUFFERS) SELECT id, contents FROM items ORDER BY embedding <=> $1::halfvec(1536) LIMIT 10;

-- Monitor index build progress
SELECT phase, round(100.0 * blocks_done / nullif(blocks_total, 0), 1) AS "%"
FROM pg_stat_progress_create_index;

-- Compare approximate vs exact recall
BEGIN;
SET LOCAL enable_indexscan = off; -- Force exact search
SELECT id, contents FROM items ORDER BY embedding <=> $1::halfvec(1536) LIMIT 10;
COMMIT;

-- Force index use for debugging
BEGIN;
SET LOCAL enable_seqscan = off;
SELECT id, contents FROM items ORDER BY embedding <=> $1::halfvec(1536) LIMIT 10;
COMMIT;
```
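
Comparing the indexed and exact result sets from the queries above yields an empirical recall figure. A small sketch with hypothetical ID lists:

```python
def recall_at_k(approx_ids, exact_ids):
    # Fraction of the true top-k (from the exact, index-off query)
    # that the ANN query also returned.
    return len(set(approx_ids) & set(exact_ids)) / len(exact_ids)

# Hypothetical IDs: indexed query vs the enable_indexscan = off query.
approx = [3, 7, 12, 19, 21, 30, 31, 44, 58, 60]
exact  = [3, 7, 12, 19, 21, 30, 31, 44, 58, 91]
print(recall_at_k(approx, exact))  # → 0.9
```

If measured recall is below target, raise `hnsw.ef_search` and re-measure.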
## Common Issues (Symptom → Fix)

| Symptom | Likely Cause | Fix |
|---------|--------------|-----|
| Query does not use ANN index | Missing `ORDER BY` + `LIMIT`, operator mismatch, or implicit casts | Use `ORDER BY` with a distance operator that matches the index ops class; explicitly cast query vectors |
| Fewer results than expected (filtered query) | HNSW stops early due to filter | Enable iterative scan; increase `hnsw.max_scan_tuples`; or prefilter (B-tree), use partial indexes, or partition |
| Fewer results than expected (unfiltered query) | ANN recall too low | Increase `hnsw.ef_search` |
| High latency with low CPU usage | HNSW index not resident in memory | Use `halfvec`, reduce `m`/`ef_construction`, add RAM, partition, or use binary quantization |
| Slow index builds | Insufficient build memory or parallelism | Increase `maintenance_work_mem` and `max_parallel_maintenance_workers`; build after bulk load |
| Out-of-memory errors | Index too large for available RAM | Use `halfvec`, reduce index parameters, or switch to binary quantization with re-ranking |
| Zero or missing results | NULL or zero vectors | Avoid NULL embeddings; do not use zero vectors with cosine distance |
