143 points by davekiss 6 months ago | 37 comments
cannam 6 months ago
- The Vamp Plugin Pack for Mac finally got an ARM/Intel universal build in its 2.0 release last year, so hopefully the caveat mentioned about the M1 Mac should no longer apply
- Most of the Vamp plugins in the Pack pre-date the pervasive use of deep learning in academia, and use classic AI or machine-learning methods with custom feature design and filtering/clustering/state models etc. (The associated papers can be an interesting read, because the methods are so explicitly tailored to the domain)
- Audacity as host only supports plugins that emit time labels as output - this obviously includes beats and chords, but there are other forms of analysis that plugins can do if the host (e.g. Sonic Visualiser) supports them
- Besides the simple host in the Vamp SDK, there is another command-line Vamp host called Sonic Annotator (https://vamp-plugins.org/sonic-annotator/) which is even harder to use, equally poorly documented, and even more poorly maintained, but capable of some quite powerful batch analysis and supporting a wider range of audio file formats. Worth checking out if you're curious
(I'm the main author of the Vamp SDK and wrote bits of some of the plugins, so if you have other questions I may be able to help)
vjshah 6 months ago
How was this done? This seems like an even more difficult task to do well than what’s described in the article
nosioptar 6 months ago
If I recall correctly, https://vocalremover.org/ worked pretty well. Though it's pretty limited in the free tier and only allows payment via Patreon. I never tried the paid version because I don't have a Patreon account and don't want one.
jogu 6 months ago
https://github.com/intel/openvino-plugins-ai-audacity/blob/m...
senko 6 months ago
Works pretty well for my personal/hobbyist use (quality also depends on genre and instruments used - synth stuff tends to bleed into voice a bit).
webprofusion 6 months ago
Around 2013 I built a proof of concept that synced guitar tabs to YouTube videos, then promptly let it rot - I should have done more with it!
yurishimo 6 months ago
One 'feature' that immediately came to mind for me is automatic transposing for use with a capo. Many hobby guitarists cannot play barre chords for an entire track, especially if they don't know it already. Transposing is already a thing for vocal karaoke and quite common. Some players may be skilled enough to transpose in their head to take advantage of the capo, but juggling the lyrics, instrument, and transposing at once is quite taxing mentally.
Cool project!
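The capo arithmetic itself is just modular pitch math - a toy sketch (the function name is made up, not from the article):

```python
# Hypothetical sketch: given a chord's sounding root and a capo fret,
# find the chord shape to finger so the sounding pitch is unchanged.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def shape_for_capo(sounding_root, capo_fret):
    """Return the chord shape to play with the capo so it sounds as sounding_root."""
    i = NOTES.index(sounding_root)
    return NOTES[(i - capo_fret) % 12]

# With a capo on fret 2, a sounding B chord is played as an A shape,
# and a sounding F# chord as an E shape:
print(shape_for_capo("B", 2))   # A
print(shape_for_capo("F#", 2))  # E
```

Picking the capo fret that maximizes the number of open (non-barre) shapes across a whole song is then a small search over frets 0-11.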
keymasta 6 months ago
I started dabbling with Vamp as well a couple of years ago, but lost track of the project as my goals started ballooning. The code is still sitting (somewhere), waiting to be resuscitated.
I've had an idea for many years: build chord analysis out further so that a functional chart can be made from it. With Vamp, most or all of the ingredients are there. I think that's probably what chordify.com does, but they clearly haven't solved segmentation or the mapping from clock time to musical time, as their charts are terrible. I don't think they're using Chordino, and whatever they do use is actually worse.
I got as far as creating a python script which would convert audio files in a directory into different midi files, to start to collect the necessary data to construct a chart.
For your use case, you'd probably just need to quantize the chords to the nearest beat, so you could maybe use:
vamp-aubio_aubiotempo_beats, or vamp-plugins_qm-barbeattracker_bars
and then combine those values with the actual time values that you are getting from chordino.
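The combining step is basically nearest-neighbour snapping - a minimal sketch, assuming you've already extracted beat times (from one of those beat trackers) and chord onsets (from Chordino) as lists of seconds; the function name is illustrative:

```python
import bisect

def snap_to_beats(chord_events, beat_times):
    """Snap each (time_sec, label) chord event to the nearest beat time."""
    snapped = []
    for t, label in chord_events:
        i = bisect.bisect_left(beat_times, t)
        # compare the beat just before and just after the chord onset
        lo = beat_times[max(i - 1, 0)]
        hi = beat_times[min(i, len(beat_times) - 1)]
        nearest = min((lo, hi), key=lambda b: abs(b - t))
        snapped.append((nearest, label))
    return snapped

beats = [0.0, 0.5, 1.0, 1.5, 2.0]
chords = [(0.07, "C"), (0.93, "Am"), (1.62, "F")]
print(snap_to_beats(chords, beats))
# [(0.0, 'C'), (1.0, 'Am'), (1.5, 'F')]
```

From there, converting beat indices to bars/beats for a chart is just counting against the bar positions the bar/beat tracker gives you.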
I'd love to talk about this more, as this is a seemingly niche area. I've only heard about this rarely if at all, so I was happy to read this!
cannam 6 months ago
I think they were initially using the Chordino chroma features (NNLS-Chroma) but a different chord language model "front end". Their page at https://chordify.net/pages/technology-algorithm-explained/ seems to imply they've since switched to a deep learning model (not surprisingly)
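For anyone curious what the classic chroma-to-chord step looks like in spirit, here's a toy template-matching sketch - not Chordino's actual model, which roughly adds NNLS approximate transcription and a smoothing/decoding stage on top - scoring a 12-bin chroma vector against major/minor triad templates:

```python
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def triad_template(root, minor=False):
    """12-bin binary template for a major or minor triad on the given root."""
    third = 3 if minor else 4
    t = [0.0] * 12
    for interval in (0, third, 7):  # root, third, fifth
        t[(root + interval) % 12] = 1.0
    return t

def best_chord(chroma):
    """Pick the triad whose template has the highest dot product with the chroma."""
    best, best_score = None, float("-inf")
    for root in range(12):
        for minor in (False, True):
            score = sum(c * t for c, t in zip(chroma, triad_template(root, minor)))
            if score > best_score:
                best = NOTE_NAMES[root] + ("m" if minor else "")
                best_score = score
    return best

# A chroma frame with energy on A, C and E decodes as A minor:
chroma = [0.8, 0, 0, 0, 0.7, 0, 0, 0, 0, 0.9, 0, 0]
print(best_chord(chroma))  # Am
```

The domain-tailored methods mentioned upthread mostly differ in how the chroma features are computed and how labels are smoothed over time, not in this basic matching idea.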
keymasta 6 months ago
But to get the chords I don't think you need to worry about that.