How come it's so fast on searches then? Don't you need to embed each query via API?
I used OpenAI embeddings for a while, but I didn't want the 200ms penalty of an API round trip just to get the embedding of a query back.
I switched to a local embedding model, and it's super fast.
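A minimal sketch of what local query embedding can look like, assuming the sentence-transformers library and the all-MiniLM-L6-v2 model (both are assumptions for illustration; the comment doesn't say which local model was actually used):

```python
# Sketch: embed queries locally instead of calling a remote embedding API.
# Assumes sentence-transformers and the all-MiniLM-L6-v2 model (hypothetical
# choices -- not necessarily what the commenter used).
from sentence_transformers import SentenceTransformer
import numpy as np

# The model loads once at startup; after that, embedding a query is a local
# forward pass with no network round trip.
model = SentenceTransformer("all-MiniLM-L6-v2")

def embed_query(query: str) -> np.ndarray:
    # Returns a normalized vector suitable for cosine-similarity search.
    return model.encode(query, normalize_embeddings=True)

# Example: score a query against a small in-memory corpus.
corpus = ["local embedding models", "OpenAI embedding API", "vector search"]
corpus_vecs = model.encode(corpus, normalize_embeddings=True)
q = embed_query("fast local embeddings")
scores = corpus_vecs @ q  # cosine similarity, since vectors are normalized
print(corpus[int(np.argmax(scores))])
```

The point of the switch is that the only per-query cost is a local forward pass, which is typically a few milliseconds on CPU for a small model, versus the network latency of an API call.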