Jukebox Embeddings for the GuitarSet Dataset
Repo with Colab notebook used to extract the embeddings.
Overview
This dataset extends the GuitarSet Dataset by providing embeddings for each audio file.
Original GuitarSet Dataset
GuitarSet is a dataset that provides high-quality guitar recordings alongside rich annotations and metadata. By recording guitars with a hexaphonic pickup, it captures each string individually, which largely automates the otherwise expensive annotation process. The dataset contains recordings of a variety of musical excerpts played on an acoustic guitar, along with time-aligned annotations including pitch contours, string and fret positions, chords, beats, downbeats, and playing style.
Jukebox Embeddings
Embeddings are derived from OpenAI's Jukebox model, following the approach described in Castellon et al. (2021) with some modifications introduced in Spotify's LLark paper:
- Source: Output of the 36th layer of the Jukebox encoder
- Original Jukebox encoding: 4800-dimensional vectors at 345 Hz
- Audio is chunked into 25-second clips, the maximum input length Jukebox accepts; clips shorter than 25 seconds are padded before being passed through Jukebox
- Approach: Mean-pooling within 100 ms frames, resulting in:
  - Downsampled frequency: 10 Hz
  - Embedding size: 1.2 × 10^6 values for a 25-second clip, i.e. a 2D array of shape [250, 4800]
- This method retains temporal information while reducing the embedding size; a sketch of the pooling step follows this list
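A minimal sketch of the chunking and pooling step, assuming the layer-36 activations arrive as a [T, 4800] NumPy array at 345 Hz (the function and variable names here are illustrative, not those used in the extraction notebook):

```python
import numpy as np

JUKEBOX_RATE = 345   # Hz: frame rate of Jukebox's layer-36 activations
CLIP_SECONDS = 25    # maximum input length Jukebox accepts
TARGET_HZ = 10       # one pooled frame per 100 ms

def pool_embeddings(acts: np.ndarray) -> np.ndarray:
    """Mean-pool [T, 4800] activations into 100 ms frames -> [250, 4800] for a 25 s clip."""
    n_frames = CLIP_SECONDS * TARGET_HZ                # 250 frames
    chunks = np.array_split(acts, n_frames, axis=0)    # ~34-35 activations (~100 ms) each
    return np.stack([chunk.mean(axis=0) for chunk in chunks])

# Dummy activations standing in for one full-length clip:
acts = np.random.randn(JUKEBOX_RATE * CLIP_SECONDS, 4800).astype(np.float32)
pooled = pool_embeddings(acts)
print(pooled.shape)  # (250, 4800): 250 x 4800 = 1.2e6 values
```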
Why Jukebox? Are these embeddings state-of-the-art as of September 2024?
Determining the optimal location to extract embeddings from large models typically requires extensive probing. This involves testing various activations or extracted layers of the model on different classification tasks through a process of trial and error. Additional fine-tuning is often done to optimise embeddings across these tasks.
The two largest publicly available music generation and music continuation (i.e. able to take audio as input) models are Jukebox and MusicGen. According to this paper on probing MusicGen, embeddings extracted from Jukebox appear to outperform MusicGen embeddings on average across their classification tasks.
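To make the probing process concrete, here is a minimal sketch of a linear probe over pooled embeddings, in the spirit of Castellon et al. (2021); the data and labels are random stand-ins rather than anything from this dataset:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for real data: one time-averaged 4800-dim Jukebox embedding per clip
# (each [250, 4800] array mean-pooled over time) plus a hypothetical binary label.
X = rng.normal(size=(200, 4800)).astype(np.float32)
y = rng.integers(0, 2, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"probe accuracy: {probe.score(X_test, y_test):.3f}")
```

Comparing probe scores like this across layers and models is what guides the choice of extraction point.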
Dataset Features
This extension to the GuitarSet dataset includes:
- File name of each audio file in the GuitarSet dataset
- Start time of the audio
- Jukebox embedding for each audio file
The four mix types GuitarSet provides (audio_hex-pickup_debleeded, audio_hex-pickup_original, audio_mono-mic, audio_mono-pickup_mix) are prefixed onto the filenames, for example: audio_mono-pickup_mix-00_BN1-129-Eb_comp_mic.wav
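Because the mix-type prefixes themselves contain hyphens, splitting on the first "-" won't recover the original filename; here is a minimal sketch of a helper that matches against the four known prefixes (the helper is illustrative, not part of the dataset):

```python
MIX_TYPES = (
    "audio_hex-pickup_debleeded",
    "audio_hex-pickup_original",
    "audio_mono-mic",
    "audio_mono-pickup_mix",
)

def split_mix_type(file_name: str) -> tuple[str, str]:
    """Split a prefixed filename into (mix_type, original GuitarSet filename)."""
    for mix in MIX_TYPES:
        if file_name.startswith(mix + "-"):
            return mix, file_name[len(mix) + 1:]
    raise ValueError(f"no known mix-type prefix in {file_name!r}")

print(split_mix_type("audio_mono-pickup_mix-00_BN1-129-Eb_comp_mic.wav"))
# ('audio_mono-pickup_mix', '00_BN1-129-Eb_comp_mic.wav')
```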
Applications
This extended dataset can be used for various tasks, including but not limited to:
- Guitar transcription
- Performance analysis
- Style classification
- Multi-modal information retrieval
Usage
```python
from datasets import load_dataset

dataset = load_dataset("jonflynn/guitarset_jukebox_embeddings")

# There's only one split: train
train_dataset = dataset["train"]
```
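To inspect a single example, something like the sketch below should work; the column names used here (file_name, start_time, embedding) are assumptions based on the feature list above, so check `train_dataset.features` for the actual schema:

```python
import numpy as np

row = train_dataset[0]
print(train_dataset.features)  # verify the actual column names

# Column names below are assumed, not confirmed by this card:
emb = np.array(row["embedding"])  # expected shape: [250, 4800] per 25 s clip
print(row["file_name"], row["start_time"], emb.shape)
```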
Citation
If you use this dataset in your research, please cite the original GuitarSet paper and this extension:
```bibtex
@inproceedings{xi2018guitarset,
  title={GuitarSet: A Dataset for Guitar Transcription},
  author={Xi, Qingyang and Bittner, Rachel M and Pauwels, Johan and Ye, Xuzhou and Bello, Juan P},
  booktitle={19th International Society for Music Information Retrieval Conference},
  year={2018},
  address={Paris, France}
}

@dataset{flynn2024guitarsetjukebox,
  author = {Jon Flynn},
  title = {Jukebox Embeddings for the GuitarSet Dataset},
  year = {2024},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/datasets/jonflynn/guitarset_jukebox_embeddings}},
}
```