124 points by kwindla 3 days ago | 28 comments
I've been experimenting with LLM voice conversations since GPT-4 was first released. (There's a previous front page Show HN about Pipecat, the open source voice AI orchestration framework I work on. [1])
It's been almost two years, and for most of that time, I've been expecting that someone would "solve" turn detection. We all built initial, pretty good 80/20 versions of turn detection on top of VAD (voice activity detection) models. And then, as an ecosystem, we kind of got stuck.
A few production applications have recently started using Gemini 2.0 Flash to do context aware turn detection. [2] But because latency is ~500ms, that's a more complicated approach than using a specialized model. The team at LiveKit released an open weights model that does text-based turn detection. [3] I was really excited to see that, but I'm not super-optimistic that a text-input model will ever be good enough for this task. (A good rule of thumb in deep learning is that you should bet on end-to-end.)
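For context, here's roughly what those VAD-based 80/20 versions look like. This is a minimal sketch, assuming 16 kHz, 16-bit mono PCM and the webrtcvad package; the 700ms silence threshold and all names are illustrative, not Pipecat's implementation:

    # Naive VAD-based endpointing: declare end of turn after a fixed
    # stretch of silence following speech. Thresholds are illustrative.
    import webrtcvad

    SAMPLE_RATE = 16000
    FRAME_MS = 30            # webrtcvad accepts 10/20/30 ms frames
    SILENCE_MS = 700         # silence required before ending the turn

    vad = webrtcvad.Vad(2)   # aggressiveness 0 (lenient) to 3 (strict)

    def is_end_of_turn(frames):
        """frames: iterable of 30 ms chunks of 16-bit mono PCM bytes."""
        silence_ms = 0
        heard_speech = False
        for frame in frames:
            if vad.is_speech(frame, SAMPLE_RATE):
                heard_speech = True
                silence_ms = 0
            else:
                silence_ms += FRAME_MS
                if heard_speech and silence_ms >= SILENCE_MS:
                    return True
        return False

The obvious failure mode is that any mid-sentence pause longer than the timeout ends the turn, which is exactly what a context-aware model is meant to fix.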
So ... I spent Christmas break training several little proof of concept models, and experimenting with generating synthetic audio data. So, so, so much fun. The results were promising enough that I nerd-sniped a few friends and we started working in earnest on this.
The model now performs really well on a subset of turn detection tasks. Too well, really. We're overfitting on a not-terribly-broad initial data set of about 8,000 samples. Getting to this point was the initial bar we set for doing a public release and seeing if other people want to get involved in the project.
There are lots of ways to contribute. [4]
Medium-term goals for the project are:
- Support for a wide range of languages
- Inference time of <50ms on GPU and <500ms on CPU
- Much wider range of speech nuances captured in training data
- A completely synthetic training data pipeline. (Maybe?)
- Text conditioning of the model, to support "modes" like credit card, telephone number, and address entry (see the sketch below).
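To make that last goal concrete, here's one way a "mode" could behave. This is hypothetical application-layer logic, not the planned model interface: in credit-card mode, hold the turn open until the transcript actually contains a complete number, even if the audio sounds finished.

    # Hypothetical illustration of entry "modes": gate the end-of-turn
    # decision on whether the transcript looks complete for the mode.
    import re

    MODE_COMPLETE = {
        "credit_card": re.compile(r"(?:\d[ -]?){15}\d"),   # 16 digits
        "phone_number": re.compile(r"(?:\d[ -.]?){9}\d"),  # 10 digits
    }

    def end_of_turn(acoustic_eot, transcript, mode=None):
        if not acoustic_eot:
            return False
        if mode is None:
            return True
        # In an entry mode, only end the turn once the field is complete.
        return bool(MODE_COMPLETE[mode].search(transcript))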
If you're interested in voice AI or in audio model ML engineering, please try the model out and see what you think. I'd love to hear your thoughts and ideas.

[1] https://news.ycombinator.com/item?id=40345696
[2] https://x.com/kwindla/status/1870974144831275410
[3] https://blog.livekit.io/using-a-transformer-to-improve-end-o...
pzo 2 days ago
I'm not sure turn detection can really be solved, short of a dedicated push-to-talk button like on a walkie-talkie. I've often tried the Google Translate app, and the problem is that many times, when you're speaking a longer sentence, you'll stop or slow down a little to gather your thoughts before continuing (especially if you're not a native speaker). For this reason I avoid conversation mode in cases like Google Translate, and when using the Perplexity app I prefer push-to-talk over the new mode.
I think this could be solved, but we'd need not only low-latency turn detection but also low-latency speech-interruption detection, plus a very fast, low-latency LLM on device. And when there is an interruption, good recovery, so the system knows to continue the last sentence instead of discarding the previous audio and starting over.
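Roughly what I mean by recovery, as a sketch (the 1.5 second window and all names here are made up):

    # If the user resumes speaking shortly after end-of-turn was declared,
    # reopen the turn and keep the accumulated audio instead of starting a
    # new utterance. The caller should also cancel the in-flight response.
    import time

    RESUME_WINDOW_S = 1.5

    class Turn:
        def __init__(self):
            self.audio = bytearray()
            self.ended_at = None

        def add_audio(self, chunk):
            self.audio.extend(chunk)

        def end(self):
            self.ended_at = time.monotonic()

        def maybe_resume(self):
            """Called when speech starts again; True if it continues this turn."""
            if self.ended_at and time.monotonic() - self.ended_at < RESUME_WINDOW_S:
                self.ended_at = None  # reopen; previous audio is kept
                return True
            return False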
Lots of things can also be improved around I/O latency: using a low-latency audio API, very short audio buffers, a dedicated audio category and mode (on iOS), wired headsets instead of the built-in speaker, and turning off system processing like the iPhone's audio boosting or polar pattern. And streaming mode for everything: STT, transport (if using a remote LLM), and TTS. I'm not sure we can have TTS in a fully streaming mode; I think most of the time the input is split by sentence.
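On splitting by sentence, I mean roughly this (illustrative, not any particular library's API):

    # Chunk an LLM token stream at sentence boundaries so TTS can start
    # speaking before the full response has been generated.
    SENTENCE_END = (".", "!", "?", "\n")

    def sentence_chunks(token_stream):
        buf = ""
        for token in token_stream:
            buf += token
            if buf.rstrip().endswith(SENTENCE_END):
                yield buf.strip()
                buf = ""
        if buf.strip():
            yield buf.strip()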
I think push-to-talk is a good solution if well designed: a big button placed where your thumb can easily reach it, integration with the iPhone Action button, haptic feedback, using an Apple Watch as a big push button, etc.
kwindla 2 days ago
- 100ms inference using CoreML: https://x.com/maxxrubin_/status/1897864136698347857
- An LSTM model (1/7th the size) trained on a subset of the data: https://github.com/pipecat-ai/smart-turn/issues/1
kwindla 2 days ago
    config = {
        # Training parameters
        "learning_rate": 5e-5,
        "num_epochs": 10,
        "train_batch_size": 12,
        "eval_batch_size": 32,
        "warmup_ratio": 0.2,
        "weight_decay": 0.05,
        # Evaluation parameters
        "eval_steps": 50,
        "save_steps": 50,
        "logging_steps": 5,
        # Model architecture parameters
        "num_frozen_layers": 20,
    }
I haven't seen a run do all 10 epochs recently; there's usually an early stop after about 4 epochs. The current data set size is ~8,000 samples.
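For anyone who wants to see how a config like that plus early stopping fits together, here's a sketch using the Hugging Face Trainer. The model and dataset names are placeholders; this is not our exact training script:

    # Illustrative wiring of the config above with early stopping.
    # `model`, `train_ds`, and `eval_ds` are assumed to exist.
    from transformers import Trainer, TrainingArguments, EarlyStoppingCallback

    args = TrainingArguments(
        output_dir="checkpoints",
        learning_rate=5e-5,
        num_train_epochs=10,
        per_device_train_batch_size=12,
        per_device_eval_batch_size=32,
        warmup_ratio=0.2,
        weight_decay=0.05,
        eval_strategy="steps",
        eval_steps=50,
        save_steps=50,
        logging_steps=5,
        load_best_model_at_end=True,  # required by EarlyStoppingCallback
    )

    trainer = Trainer(
        model=model,
        args=args,
        train_dataset=train_ds,
        eval_dataset=eval_ds,
        callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
    )
    trainer.train()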
ry167 2 days ago
It's a big deal for detecting human speech when interacting with LLM systems.
kwindla 2 days ago
Endpoint detection (and phrase endpointing, and end of utterance) are terms from the academic literature for this and related problems.
Very few people who are doing "AI Engineering" or even "Machine Learning" today know these terms. In the past, I argued that we should use the existing academic language rather than invent new terms.
But then OpenAI released the Realtime API and called this "turn detection" in their docs. And that was that. It no longer made sense to use any other verbiage.
mncharity 2 days ago
To help with "what is?" and SEO, perhaps something like "Turn detection (aka [...], end of utterance)"... ?