Benchmarking Postgres Vector Search Approaches: Pgvector vs. Lantern

An elephant holding a lantern

Vector search in Postgres is a space that has seen very active development in the last few months. While Pgvector is the best-known option, just a few weeks ago we learned about Lantern, which also builds a Postgres-based vector database. So, we thought of benchmarking both to compare the two approaches. In this post, we’ll cover:

  • A brief background on vector search in PostgreSQL.
  • A comparison of Pgvector and Lantern in terms of syntax and ease of use.
  • Benchmarks comparing index creation time, index size, latency, throughput, and recall.
  • A summary of the results.

Intro to Vector Search in PostgreSQL

One of the main reasons for the popularity of vector search today is the emergence of powerful embedding models and their use in AI. You have probably noticed how many people, startups, and large companies are exploring how to take advantage of vectors and incorporate them into their products.

Such enthusiasm has also reached the Postgres community. Pgvector arrived with the ability to build IVFFlat indexes on existing tables with a simple DDL statement. And so, people gained the ability to easily run similarity queries.
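As a quick illustration, this is what that workflow looks like with pgvector’s documented syntax (the table and column names here are made up for the example):

```sql
-- Hypothetical table holding 3-dimensional embeddings
CREATE TABLE items (id bigserial PRIMARY KEY, embedding vector(3));

-- Build an IVFFlat index using L2 distance; "lists" controls how many
-- clusters the vectors are partitioned into at build time
CREATE INDEX ON items USING ivfflat (embedding vector_l2_ops) WITH (lists = 100);

-- Approximate nearest-neighbor query via the "<->" (L2 distance) operator
SELECT * FROM items ORDER BY embedding <-> '[3,1,2]' LIMIT 5;
```

A larger `lists` value speeds up queries at some cost in recall, which is the same kind of trade-off we will see with HNSW parameters below.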

But Pgvector is not the only player in this space. The community has come up with other options as well. One such example was pg_embedding. In July, Neon published a post where they showed how their extension was 20x faster than Pgvector by using HNSW indexes. However, Pgvector quickly caught up and introduced an HNSW index as well. Around that time, on September 29th, Neon stopped supporting pg_embedding and recommended migrating to Pgvector.

Another Postgres extension in the vector search domain is Lantern, developed by a company of the same name. In October, a post on their blog claimed that their extension could outperform Pgvector by 90x in index creation time. That is a remarkable achievement!

So, I spent some time playing with Lantern, and I would like to share a few of my findings in this blog post. I divided it into two parts: the first one is a brief qualitative comparison, and the second one is a quantitative comparison using a popular benchmark.

Let’s dive in.

Pgvector vs. Lantern: Similarities and Differences

The following table summarizes some aspects of both extensions:

|  | Pgvector | Lantern |
| --- | --- | --- |
| Current version (date) | v0.5.1 (Oct 10, 2023) | v0.0.11 (Dec 16, 2023) |
| Repo popularity (# of stars) | 7.6K | 435 |
| Index types | IVFFlat, HNSW | HNSW |
| Distance metrics | L2, Inner product, Cosine | L2, Cosine, Hamming |
| Sample index creation | `CREATE INDEX ON items USING hnsw (embedding vector_l2_ops) WITH (m = 16, ef_construction = 64);` | `CREATE INDEX ON small_world USING hnsw (vector dist_l2sq_ops) WITH (M=2, ef_construction=10, ef=4, dim=3);` |
| Sample query | `SELECT * FROM items ORDER BY embedding <-> '[3,1,2]' LIMIT 5;` | `SET enable_seqscan = false; SELECT * FROM small_world ORDER BY vector <-> ARRAY[0,0,0] LIMIT 1;` |

My takeaway is that if you are familiar with one of the extensions, you can easily figure out how to use the other. No major complications are expected on that front.

Leveraging ANN-Benchmarks

For a quantitative comparison, I took ANN-Benchmarks and extended it to support the Lantern extension. It was just a matter of making a copy of the Pgvector directory, naming it Lantern, and making a few adjustments to use the corresponding API.

The benchmark starts a container with Postgres 15 installed and then enables the corresponding extension. Then, it inserts a dataset into a table and builds a vector index (in both cases, an HNSW index). After that, it executes a batch of queries and evaluates the recall.
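For context, the recall evaluation boils down to comparing the neighbors the index returned against the precomputed ground truth. A simplified sketch in Python (the function name is mine, not ANN-Benchmarks’ actual code):

```python
def recall_at_k(approx_ids, true_ids, k):
    """Fraction of the true k nearest neighbors that the index actually returned."""
    hits = len(set(approx_ids[:k]) & set(true_ids[:k]))
    return hits / k

# Example: ground truth says the 4 nearest neighbors are [1, 2, 3, 4],
# but the approximate index returned [1, 2, 5, 3]
print(recall_at_k([1, 2, 5, 3], [1, 2, 3, 4], 4))  # → 0.75
```

A recall of 1.0 means the approximate index found exactly the same neighbors as an exact (sequential) search would have.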

Along the way, the benchmark collects several metrics, such as:

  • Build time
  • Index size
  • Recall
  • Throughput
  • Latencies (p50, p95, p99 and p999)

So, we can compare the two extensions along these axes.
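As a reference for how the latency percentiles above can be read, here is a minimal nearest-rank percentile sketch in Python (a simplification, not the benchmark’s exact method):

```python
import math

def percentile(latencies, pct):
    """Nearest-rank percentile: the smallest sample covering pct% of all samples."""
    ordered = sorted(latencies)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))  # 1-based rank
    return ordered[rank - 1]

# Hypothetical per-query times in milliseconds, including one slow outlier
times_ms = [12, 15, 11, 90, 14, 13, 16, 10, 12, 300]
print(percentile(times_ms, 50))  # → 13 (median)
print(percentile(times_ms, 95))  # → 300 (tail latency dominated by the outlier)
```

This is why p95/p99/p999 matter in these benchmarks: averages hide the slow tail that users actually feel.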

As for the HNSW parameters, I used the following:

ef_construction: {128, 200}
m: {8, 16, 24}
ef_search: {10, 20, 40, 80, 120, 128, 200, 400}

(The values ef_construction=128, m={8, 16} and ef_search=128 are the ones used in the baseline run described below.)
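The full sweep is just the Cartesian product of these sets. A small sketch of how such a grid can be enumerated (this mirrors the parameter sets above, not the benchmark’s actual config format):

```python
from itertools import product

ef_construction = [128, 200]
m = [8, 16, 24]
ef_search = [10, 20, 40, 80, 120, 128, 200, 400]

# ef_construction and m are build-time parameters; ef_search is query-time,
# so each built index can be probed with every ef_search value.
build_configs = list(product(ef_construction, m))
runs = list(product(build_configs, ef_search))
print(len(build_configs), len(runs))  # → 6 48
```

In other words, each extension gets up to 6 distinct index builds, each queried at 8 different ef_search settings.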

Evaluating Index Creation, Recall, Latency and Throughput

Although I tried different datasets, for brevity, let us focus only on the results of the SIFT 128 dataset, which was also used in one of Lantern’s posts.

These are the numbers I obtained by following the instructions in the blog post, using the same build and search parameters (e.g. m={8, 16}, ef_construction=128 and ef_search=128). Here I used Lantern’s external indexer. First, let us look at the build time:

Baseline build time

Pgvector takes between 1.71x and 1.73x as long to build the same index with the same parameters. The resulting index is between 13% and 15% bigger in Pgvector:

Baseline index size

That’s cool… And what about the resulting throughput, latency and recall? Here are the results:

Baseline recall
Baseline throughput
Baseline p95

OK, the recall is comparable, but Pgvector outperforms Lantern in QPS and latencies. Specifically, Pgvector can process between 44% and 53% more queries per second, and its latencies are between 29% and 35% smaller.

So, it seems that Lantern sacrifices throughput in exchange for index creation time and index size.

Hold on… but is this true for other data points as well? Let’s see.

With the build parameters m = {16, 24} and ef_construction = {200}, the creation time is still better with Lantern: Pgvector takes between 1.9X and 2.3X as long. And the resulting index is bigger in Pgvector: between 1.13X and 1.20X compared to Lantern’s index.

Build time
Index Size

Moreover, as before, the recall is comparable as ef_search varies (though Pgvector’s is a bit higher). Here’s the graph for the build parameters ef_construction=200 and m=16. As ef_search increases, the recall of both indexes gets closer.


Pgvector has 62-84% higher throughput and 38-45% lower latencies:


Increasing the m parameter to 24, we reach the same conclusions:


And once again, Pgvector has 42-58% higher throughput and 30-39% lower latencies for all values of ef_search:


The general trends were consistent with my observations when using Gist-960 and Glove-200. You can see more results here.


For convenience, the following table summarizes the above results using relative numbers (i.e. pgvector/lantern):

| Metric | m={8,16}; ef_construction=128; ef_search=128 | m={16,24}; ef_construction=200; ef_search=10-400 | Notes |
| --- | --- | --- | --- |
| Build time | 1.71X-1.73X | 1.92X-2.35X | Lantern yields better results |
| Index size | 1.13X-1.15X | 1.13X-1.20X | Lantern yields better results |
| Latency | 0.64X-0.70X | 0.54X-0.69X | Pgvector yields better results |
| Throughput | 1.44X-1.53X | 1.42X-1.84X | Pgvector yields better results |
| Recall | 1.00X-1.01X | 1.00X-1.09X | Recall is comparable; Pgvector is slightly better |

Summary of the results

Pgvector is the most popular Postgres extension for vector search. At the time of this writing, the GitHub repository counts 7.6K stars and it is actively discussed on the web. It is also supported by most managed Postgres services (including Tembo Cloud), so it is easier for you to access.

Lantern is a young project that leverages the USearch engine. As of today, the extension has been starred more than 400 times on GitHub and is in very active development.

Both extensions provide a similar API and support the HNSW index. Based on my experiments (Pgvector 0.5.1 and Lantern 0.0.11), Lantern’s index creation is faster and produces smaller indexes. However, Pgvector offers better recall, latency and throughput.

We should keep an eye on both projects and see how they evolve. I am confident that we will see lots of improvements in the coming months.

Oh, and if you are interested in quickly integrating vector search into your app, be sure to check out pg_vectorize to see how to do that with just two function calls in Postgres.
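As a sketch of that two-call workflow (based on pg_vectorize’s table/search design; the job, table and column names below are made up, so check the project’s docs for the exact signatures):

```sql
-- 1. Register a table so embeddings are generated and kept up to date for you
SELECT vectorize.table(
    job_name    => 'product_search',
    "table"     => 'products',
    primary_key => 'product_id',
    columns     => ARRAY['description']
);

-- 2. Search with plain text; embedding the query and running the
--    similarity search are handled by the extension
SELECT * FROM vectorize.search(
    job_name       => 'product_search',
    query          => 'devices with long battery life',
    return_columns => ARRAY['product_id', 'description'],
    num_results    => 3
);
```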


Experimental setup

The experiments in this post were carried out on a machine with the following characteristics:

| Component | Value |
| --- | --- |
| VM | E2-standard-8 (8 vCPUs, 4 cores, 32GB memory) |
| Storage | 100GB |
| Operating system | Debian 11.8 |
| Postgres | 15 |
| Postgres config | maintenance_work_mem=5GB; work_mem=2GB; shared_buffers=12GB |
| Lantern | 0.0.11 |
| PGVector | 0.5.1 |

For details about other results, please see here.
