58 points by Tananon 24 hours ago | 14 comments
Main Features:
- Rust-native inference: Load any Model2Vec model from Hugging Face or your local path with StaticModel::from_pretrained(...) (usage sketch below the list).
- Tiny footprint: The crate itself is only ~1.7 MB, with embedding models between 7 and 30 MB.
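A minimal usage sketch of loading a model and embedding a few sentences. The extra from_pretrained arguments (HF token, normalize, subfolder) and the exact encode signature are assumptions here, so check the crate docs before copying:
```
use model2vec_rs::model::StaticModel;

fn main() {
    // Load from the Hugging Face Hub, or pass a local directory path instead.
    // Assumed argument order: (repo_or_path, hf_token, normalize, subfolder).
    let model = StaticModel::from_pretrained("minishlab/potion-base-8M", None, None, None)
        .expect("failed to load model");

    // Encode a small batch of sentences into static embeddings.
    let sentences = vec![
        "Hello world".to_string(),
        "Static embeddings are fast".to_string(),
    ];
    let embeddings = model.encode(&sentences);

    println!("{} embeddings of dim {}", embeddings.len(), embeddings[0].len());
}
```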
Performance:
We benchmarked single-threaded on a CPU (a sketch of the measurement setup follows the numbers):
- Python: ~4650 embeddings/sec
- Rust: ~8000 embeddings/sec (~1.7× speedup)
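For reference, a rough sketch of how a single-threaded throughput measurement like this could look. The corpus, model id, and call signatures are illustrative assumptions, not the exact benchmark we ran:
```
use std::time::Instant;
use model2vec_rs::model::StaticModel;

fn main() {
    let model = StaticModel::from_pretrained("minishlab/potion-base-8M", None, None, None)
        .expect("failed to load model");

    // Synthetic corpus of short sentences; a real benchmark should use representative text.
    let sentences: Vec<String> = (0..10_000)
        .map(|i| format!("This is test sentence number {i}."))
        .collect();

    let start = Instant::now();
    let embeddings = model.encode(&sentences);
    let secs = start.elapsed().as_secs_f64();

    println!(
        "{} embeddings in {:.2}s ({:.0} embeddings/sec)",
        embeddings.len(),
        secs,
        embeddings.len() as f64 / secs
    );
}
```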
First open-source project in Rust for us, so would be great to get some feedback!
gthompson512 13 hours ago
Edit: it seems like it just splits into sentences, which is a weird thing to do given that in English only ~95% agreement is even possible on what a sentence is.
```
// Process in batches
for batch in sentences.chunks(batch_size) {
    // Truncate each sentence to max_length * median_token_length chars
    let truncated: Vec<&str> = batch
        .iter()
        .map(|text| {
            if let Some(max_tok) = max_length {
                Self::truncate_str(text, max_tok, self.median_token_length)
            } else {
                text.as_str()
            }
        })
        .collect();
```
gthompson512 13 hours ago
Edit: That wasn't intended to be mean, although it may come off that way, but what is this supposed to be for? I myself have texts of >8k tokens that need to be embedded, and I test things regularly.
stephantul 10 hours ago
jasonjmcghee 7 minutes ago
https://github.com/MinishLab/model2vec-rs/blob/480ec988d7f4a...
https://github.com/MinishLab/model2vec-rs/blob/480ec988d7f4a...
noahbp 21 hours ago
For someone looking to build a large embedding search, fast static embeddings seem like a good deal, but almost too good to be true. What quality tradeoff are you seeing with these models versus embedding models with attention mechanisms?
Tananon 20 hours ago
There's definitely a quality trade-off. We have extensive benchmarks here: https://github.com/MinishLab/model2vec/blob/main/results/REA.... potion-base-32M reaches ~92% of the performance of MiniLM while being much faster (about 70x faster on CPU). It depends a bit on your constraints: if you have limited hardware and very high throughput requirements, these models still let you make decent-quality embeddings; an attention-based model will of course be better, but also more expensive.
refulgentis 16 hours ago
I've been chewing on whether there was a miracle that could make embeddings 10x faster for my search app, which uses minilmv3; sounds like there is :) I never would have dreamed. I'll definitely be trying potion-base in my library for Flutter x ONNX.
EDIT: I was thanking you for thorough benchmarking, then it dawned on me you were on the team that built the model - fantastic work, I can't wait to try this. And you already have ONNX!
EDIT2: Craziest demo I've seen in a while. I'm seeing 23x faster, after 10 minutes of work.
Tananon 8 hours ago
badmonster 9 hours ago
Tananon 8 hours ago
Havoc 20 hours ago
Tananon 20 hours ago
echelon 13 hours ago
We've been using Candle and Cudarc and having a fairly good time of it. We've built a real-time drawing app on a custom LCM stack, and Rust makes it feel rock solid. Python is way too flimsy for something like this.
The more the Rust ML ecosystem grows, the better. It's a little bit fledgling right now, so every little bit counts.
If llama.cpp had instead been llama.rs, I feel like we would have had a runaway success.
We'll be checking this out! Kudos, and keep it up!
Tananon 8 hours ago