---
license: mit
datasets:
- ccmusic-database/erhu_playing_tech
language:
- en
metrics:
- accuracy
pipeline_tag: audio-classification
tags:
- music
- art
---

# Intro

The Erhu Performance Technique Recognition Model is a deep-learning-based audio analysis tool that automatically distinguishes the different techniques used in erhu performance. By analyzing the acoustic characteristics of erhu recordings, the model recognizes 11 basic playing techniques, including split bow, pad bow, overtone, continuous bow, glissando, big glissando, strike bow, pizzicato, throw bow, staccato bow, tremolo, and vibrato. Through time-frequency conversion, feature extraction, and pattern recognition, it can accurately categorize the complex techniques of erhu playing, providing technical support for music information retrieval, music education, and research on the art of erhu performance. Beyond enriching research in music acoustics, the model also opens a new path for the preservation and innovation of traditional music.

## Demo

## Usage

```python
from modelscope import snapshot_download

model_dir = snapshot_download('ccmusic-database/erhu_playing_tech')
```

## Maintenance

```bash
git clone git@hf.co:ccmusic-database/erhu_playing_tech
cd erhu_playing_tech
```

## Results

A demo result of fine-tuning Swin-T on mel spectrograms:
- Loss curve
- Training and validation accuracy
- Confusion matrix
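The card does not include an inference snippet. As a rough illustration of the pipeline described in the Intro (time-frequency conversion to a mel spectrogram, feature extraction, and classification) and of the mel front end used for the Swin-T fine-tuning above, the following is a minimal, hedged sketch. The input file name, spectrogram parameters, image preprocessing, and the use of a freshly initialized torchvision `swin_t` as a stand-in for the released checkpoint are all assumptions for illustration, not part of the released model.

```python
# Rough sketch: waveform -> log-mel spectrogram "image" -> Swin-T classifier.
# The torchvision Swin-T below is a stand-in initialized with 11 output classes;
# the released checkpoint's file layout, preprocessing, and label order are not
# documented in this card, so everything here is an assumption.
import librosa
import numpy as np
import torch
import torch.nn.functional as F
from torchvision.models import swin_t

NUM_CLASSES = 11  # number of erhu playing techniques

# 1. Time-frequency conversion: load audio and compute a log-mel spectrogram.
wav, sr = librosa.load("erhu_clip.wav", sr=22050)  # hypothetical input file
mel = librosa.feature.melspectrogram(y=wav, sr=sr, n_mels=224)
log_mel = librosa.power_to_db(mel, ref=np.max)

# 2. Feature extraction: turn the spectrogram into a 3-channel 224x224 tensor
#    as expected by Swin-T (resize and channel repetition are assumptions).
x = torch.tensor(log_mel, dtype=torch.float32).unsqueeze(0).unsqueeze(0)
x = F.interpolate(x, size=(224, 224), mode="bilinear", align_corners=False)
x = (x - x.min()) / (x.max() - x.min() + 1e-8)  # scale to [0, 1]
x = x.repeat(1, 3, 1, 1)                         # grayscale -> 3 channels

# 3. Pattern recognition: classify with a Swin-T head sized for 11 techniques.
model = swin_t(weights=None, num_classes=NUM_CLASSES)
# To use the released weights, load the checkpoint from the snapshot downloaded
# in the Usage section here (its file name and format are not specified above).
model.eval()
with torch.no_grad():
    probs = model(x).softmax(dim=-1).squeeze(0)
print("predicted class index:", int(probs.argmax()))
```

Mapping the predicted index back to a technique name requires the label order used during training, which should be taken from the dataset rather than assumed.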
## Dataset

## Mirror

## Evaluation

## Cite

```bibtex
@dataset{zhaorui_liu_2021_5676893,
  author    = {Monan Zhou, Shenyang Xu, Zhaorui Liu, Zhaowen Wang, Feng Yu, Wei Li and Baoqiang Han},
  title     = {CCMusic: an Open and Diverse Database for Chinese and General Music Information Retrieval Research},
  month     = {mar},
  year      = {2024},
  publisher = {HuggingFace},
  version   = {1.2},
  url       = {https://huggingface.co/ccmusic-database}
}
```