429 points by stared 4 days ago | 95 comments
pamelafox 4 days ago
There's an example in the pgvector-python that uses a cross-encoder model for re-ranking: https://github.com/pgvector/pgvector-python/blob/master/exam...
You can even use a language model for re-ranking, though it may not be as good as a model trained specifically for re-ranking purposes.
In our Azure RAG approaches, we use the AI Search semantic ranker, which uses the same model that Bing uses for re-ranking search results.
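A minimal sketch of that kind of cross-encoder re-ranking, assuming the sentence-transformers library and a public MS MARCO cross-encoder checkpoint (not necessarily the exact model the linked example or Azure uses):

```python
# Hedged sketch: re-rank retrieved passages with a cross-encoder.
# The model name is one public MS MARCO cross-encoder; swap in whatever
# re-ranking model your stack actually uses.
from sentence_transformers import CrossEncoder

query = "What did I do with my keys?"
candidates = [
    "Where did I put my wallet?",
    "I left them in my pocket",
    "Keys to effective project management",
]

model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
scores = model.predict([(query, passage) for passage in candidates])

# Sort candidates by relevance score, highest first.
for passage, score in sorted(zip(candidates, scores), key=lambda x: x[1], reverse=True):
    print(f"{score:.3f}  {passage}")
```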
pamelafox 4 days ago
variaga 4 days ago
This was to avoid the problem where, when we only had vectors for "valid" sounds and an input arrived that didn't match anything in the training set (a foreign language, a garbage truck backing up, a dog barking, ...), the model would still return some word as the closest match (there's always a vector with the highest similarity), and frequently with high confidence. Even though the input didn't actually match anything in the training set, it would be "enough" more like one known vector than any of the others that it would pass most threshold tests, leading to a lot of false positives.
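A toy illustration of that failure mode (my own sketch, not variaga's actual system): nearest-neighbour search always returns something, so a plain similarity threshold is the only guard, and an out-of-distribution input can easily clear it.

```python
# Toy illustration: cosine similarity always produces a "best" match,
# even for an input that resembles nothing in the vocabulary.
import numpy as np

rng = np.random.default_rng(0)

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

known = normalize(rng.normal(size=(1000, 64)))   # embeddings of "valid" sounds
garbage = normalize(rng.normal(size=64))         # an unrelated input

sims = known @ garbage
best = sims.argmax()
print(f"best match: word #{best}, similarity {sims[best]:.2f}")

# A fixed threshold is the only thing standing between this and a false positive.
THRESHOLD = 0.3  # arbitrary value for the demo
print("accepted" if sims[best] > THRESHOLD else "rejected")
```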
pbhjpbhj 4 days ago
Disclaimer, I don't know shit.
pamelafox 4 days ago
refulgentis 4 days ago
I do embeddings on arbitrary websites at runtime, and had a persistent problem with the last chunk of a web page matching more things. In retrospect, it's obvious: the smaller the chunk was, the more it matched everything.
Full details: MSMARCO MiniLM L6V3 inferenced using ONNX on iOS/web/android/macos/windows/linux
mattvr 4 days ago
OutOfHere 3 days ago
jhy 4 days ago
short_sells_poo 4 days ago
pilooch 4 days ago
OutOfHere 3 days ago
antirez 4 days ago
- Use a large context LLM.
- Segment documents to about 25% of the context window, or similar.
- With RAG, retrieve fragments from all the documents, then do a first-pass semantic re-ranking by sending the LLM something like this (see the sketch after this example):
I have a set of documents I can show you to reply to the user question "$QUESTION". Please tell me from the title and best matching fragments what document IDs you want to see to better reply:
[Document ID 0]: "Some title / synopsis. From page 100 to 200"
... best matching fragment of document 0...
... second best fragment ...
[Document ID 1]: "Some title / synopsis. From page 200 to 300"
... fragments ...
LLM output: show me 3, 5, 13.
Then a new query, with the full documents attached, filling up to 75% of the context window:
"Based on the attached documents in this chat, reply to $QUESTION".
datadrivenangel 3 days ago
danielmarkbruce 3 days ago
bjourne 4 days ago
Perhaps one could represent word embeddings as vertices, rather than vectors? Suppose you find "Python" and "scripting" in the same context. You draw a weighted edge between them. If you find the same words again you reduce the weight of the edge. Then to compute the similarity between two words, just compute the weighted shortest path between their vertices. You could extend it to pair-wise sentence similarity using Steiner trees. Of course it would be much slower than cosine similarity, but probably also much more useful.
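A quick sketch of that idea with networkx (my interpretation of the comment, not bjourne's code): every repeated co-occurrence lowers the edge weight, so the weighted shortest path is small for strongly related words.

```python
# Sketch of the co-occurrence graph idea: edges start heavy and get lighter
# each time two words are seen together; similarity = weighted shortest path.
import networkx as nx

def add_cooccurrence(graph, w1, w2, decay=0.5):
    if graph.has_edge(w1, w2):
        graph[w1][w2]["weight"] *= decay      # seen again: reduce the weight
    else:
        graph.add_edge(w1, w2, weight=1.0)

G = nx.Graph()
contexts = [
    ("python", "scripting"),
    ("python", "scripting"),
    ("python", "scripting"),
    ("python", "snake"),
    ("scripting", "automation"),
]
for a, b in contexts:
    add_cooccurrence(G, a, b)

# Lower distance = more similar under this scheme.
print(nx.shortest_path_length(G, "python", "scripting", weight="weight"))  # 0.25: co-occur often
print(nx.shortest_path_length(G, "python", "snake", weight="weight"))      # 1.0: co-occur once
```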
jsenn 4 days ago
yobbo 4 days ago
It is true that cosine similarity is unhelpful if you expect it to be a distance measure.
[0,0,1] and [0,1,0] are orthogonal (cosine 0) but have euclidean distance √2, and 1/3 of vector elements are identical.
It is better if embeddings also encode angles and absolute and relative distances in some meaningful way. Testing only cosine similarity ignores all distances.
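The arithmetic in numpy, for concreteness:

```python
import numpy as np

a = np.array([0, 0, 1])
b = np.array([0, 1, 0])

cosine = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
euclidean = np.linalg.norm(a - b)

print(cosine)           # 0.0 -> orthogonal, maximally dissimilar by cosine
print(euclidean)        # 1.414... = sqrt(2)
print((a == b).mean())  # 0.333... -> a third of the elements still agree
```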
OutOfHere 3 days ago
yobbo 3 days ago
But if random embeddings are gaussian, they are distributed on a "cloud" around the hypersphere, so they are not equal.
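A quick numpy check of that claim (my sketch): high-dimensional Gaussian vectors are nearly orthogonal to one another, but their norms spread around √d rather than all landing exactly on the sphere.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 512
x = rng.normal(size=(1000, d))          # random "embeddings"

norms = np.linalg.norm(x, axis=1)
unit = x / norms[:, None]
cos = unit @ unit.T
off_diag = cos[~np.eye(len(cos), dtype=bool)]

print(norms.mean(), norms.std())        # mean ~ sqrt(512) ≈ 22.6, with some spread
print(off_diag.mean(), off_diag.std())  # mean ~ 0, std ≈ 1/sqrt(512) ≈ 0.044
```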
tgv 4 days ago
bambax 4 days ago
True, and quite funny. This is an excellent, well-written and very informative article, but this part is wrongly worded:
> Let's have a task that looks simple, a simple quest from our everyday life: "What did I do with my keys?" [and compare it to other notes using cosine similarity]: "Where did I put my wallet" [=> 0.6], "I left them in my pocket" [=> 0.5]
> The best approach is to directly use LLM query to compare two entries, [along the lines of]: "Is {sentence_a} similar to {sentence_b}?"
(bits in brackets paraphrased for quoting convenience)
This will result in the same, or "worse" result, as any LLM will respond that "Where did I put my wallet" is very similar to "What did I do with my keys?", while "I left them in my pocket" is completely dissimilar.
I'm actually not sure what the author was trying to get at here? You could ask an LLM 'is that sentence a plausible answer to the question' and then it would work; but if you ask for pure 'likeness', it seems that in many cases, LLMs' responses will be close to cosine similarity.
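For example (a hedged sketch using the openai Python client; the prompt wording and model name are mine, not the article's):

```python
# Sketch: ask the LLM whether a note plausibly answers the question,
# instead of asking for generic "similarity".
from openai import OpenAI

client = OpenAI()

def plausible_answer(question: str, note: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name, swap for whatever you use
        messages=[{
            "role": "user",
            "content": f'Is "{note}" a plausible answer to the question '
                       f'"{question}"? Reply yes or no.',
        }],
    )
    return resp.choices[0].message.content

print(plausible_answer("What did I do with my keys?", "I left them in my pocket"))   # yes
print(plausible_answer("What did I do with my keys?", "Where did I put my wallet"))  # no
```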
stared 4 days ago
In any case, I see how the example "Is {sentence_a} similar to {sentence_b}?" breaks the flow. The original example was:
{question}
# A
{sentence_A}
# B
{sentence_B}
As I now see, I overzealously simplified that. Thank you for your remark! I edited the article. Let me know if it is clearer for you now.
echoangle 4 days ago
Dewey5001 2 days ago
deepsquirrelnet 4 days ago
> The most powerful approach
> The best approach is to directly use LLM query to compare two entries.
Cross encoders are a solution I’m quite fond of, high performing and much faster. I recently put an STS cross encoder up on huggingface based on ModernBERT that performs very well.
sroussey 4 days ago
An STS cross encoder is a model that uses the CrossEncoder class to predict the semantic similarity between two sentences. STS stands for Semantic Textual Similarity.
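For reference, a minimal usage sketch (assuming the sentence-transformers CrossEncoder class and a public STS checkpoint):

```python
from sentence_transformers import CrossEncoder

# cross-encoder/stsb-roberta-base is a public STS checkpoint; swap in
# dleemiller/ModernCE-base-sts (mentioned below) if it loads the same way.
model = CrossEncoder("cross-encoder/stsb-roberta-base")

pairs = [
    ("What did I do with my keys?", "Where did I put my wallet"),
    ("What did I do with my keys?", "I left them in my pocket"),
]
scores = model.predict(pairs)   # one similarity score per pair, roughly 0..1
print(scores)
```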
stared 4 days ago
That said, for many applications, we may be perfectly fine with some version of a fine-tuned BERT-like model rather than the newest AGI-like SoTA, just to check whether two products are vaguely similar and whether it is worth putting one in the other's suggestions.
deepsquirrelnet 3 days ago
https://github.com/dleemiller/WordLlama
There's also model2vec doing some cool things in that area, so it's nice to see recent progress in 2024/25 on simple static embedding models.
On the computational performance note, the cross encoder I trained using ModernBERT base is on par with the roberta large model, while being about 7-8x faster. Still way more complex than static embeddings, but on benchmark datasets, much more capable too.
staticautomatic 4 days ago
deepsquirrelnet 4 days ago
https://huggingface.co/dleemiller/ModernCE-base-sts
There’s also the large model, which performs a bit better.
janalsncm 4 days ago
Frankly, the LLM approach the author talks about in the end doesn’t either. What does “similar” mean here?
Given inputs A, B, and C, you have to decide whether A and B are more similar or A and C are more similar. The algorithm (or architecture, depending on how you look at it) can’t do that for you. Dual encoder, cross encoder, bag of words, it doesn’t matter.
deepsquirrelnet 3 days ago
That’s not practical for a lot of applications, but it can do it.
For the cross encoder I trained, I have a pretty good idea what similar means because I created a semi-synthetic dataset that has variants based on 4 types of similarity.
Perhaps not a perfect solution when you’re really trying to split hairs about what is more similar between texts that are all pretty similar, but not all applications need that level of specificity either.