awsaf49 committed · Commit d30e355 · verified · Parent: 044c7a3

Update README.md

Files changed (1): README.md (+46 −17)

README.md CHANGED
  - song
---

<div style="text-align: center; color: white; padding: 20px;">
<div align="center"><img src="https://i.postimg.cc/3Jx3yZ5b/real-vs-fake-sonics-w-logo.jpg" width="250" alt="SONICS Logo"></div>
<h1>SONICS: Synthetic Or Not - Identifying Counterfeit Songs</h1>
<h3 style="color:red;"><b>ICLR 2025 [Poster]</b></h3>
<div style="display: flex; justify-content: center; flex-wrap: wrap; gap: 10px; margin: 20px 0;">
<a href="https://arxiv.org/abs/2408.14080" style="text-decoration: none;">
<img src="https://img.shields.io/badge/ArXiv-Paper-red" alt="Paper">
</a>
<a href="https://huggingface.co/collections/awsaf49/sonics-spectttra-67bb6517b3920fd18e409013" style="text-decoration: none;">
<img src="https://img.shields.io/badge/HuggingFace-Model-yellow" alt="Hugging Face">
</a>
<a href="https://huggingface.co/datasets/awsaf49/sonics" style="text-decoration: none;">
<img src="https://img.shields.io/badge/HuggingFace-Dataset-orange" alt="Hugging Face Dataset">
</a>
<a href="https://www.kaggle.com/datasets/awsaf49/sonics-dataset" style="text-decoration: none;">
<img src="https://img.shields.io/badge/Kaggle-Dataset-blue?logo=kaggle" alt="Kaggle Dataset">
</a>
<a href="https://huggingface.co/spaces/awsaf49/sonics-fake-song-detection" style="text-decoration: none;">
<img src="https://img.shields.io/badge/HuggingFace-Demo-blue" alt="Hugging Face Demo">
</a>
</div>
</div>

---

## 📌 Abstract

The recent surge in AI-generated songs presents exciting possibilities and challenges. These innovations necessitate the ability to distinguish between human-composed and synthetic songs to safeguard artistic integrity and protect human musical artistry. Existing research and datasets in fake song detection focus only on singing voice deepfake detection (SVDD), where the vocals are AI-generated but the instrumental music is sourced from real songs. However, these approaches are inadequate for detecting contemporary end-to-end artificial songs where all components (vocals, music, lyrics, and style) could be AI-generated. Additionally, existing datasets lack music-lyrics diversity, long-duration songs, and open-access fake songs. To address these gaps, we introduce SONICS, a novel dataset for end-to-end Synthetic Song Detection (SSD), comprising over 97k songs (4,751 hours) with over 49k synthetic songs from popular platforms like Suno and Udio. Furthermore, we highlight the importance of modeling long-range temporal dependencies in songs for effective authenticity detection, an aspect entirely overlooked in existing methods. To utilize long-range patterns, we introduce SpecTTTra, a novel architecture that significantly improves time and memory efficiency over conventional CNN and Transformer-based models. For long songs, our top-performing variant outperforms ViT by 8% in F1 score, is 38% faster, and uses 26% less memory, while also surpassing ConvNeXt with a 1% F1 score gain, 20% speed boost, and 67% memory reduction.

## 🔗 Links

- 📄 [**Paper**](https://openreview.net/forum?id=PY7KSh29Z8)
- 🎵 [**Dataset**](https://huggingface.co/datasets/awsaf49/sonics)
- 🔬 [**ArXiv**](https://arxiv.org/abs/2408.14080)
- 💻 [**GitHub**](https://github.com/awsaf49/sonics)

## 🏆 Model Performance

<style>
.hf-button {

| `sonics-spectttra-beta-5s` | <a class="hf-button" href="https://huggingface.co/awsaf49/sonics-spectttra-beta-5s"><img src="https://huggingface.co/front/assets/huggingface_logo-noborder.svg">HF</a> | SpecTTTra-β | 5s | 3 | 5 | 0.78 | 0.69 | 0.94 | 152 | 1.1 | 0.2 | 5 | 17 |
| `sonics-spectttra-gamma-5s` | <a class="hf-button" href="https://huggingface.co/awsaf49/sonics-spectttra-gamma-5s"><img src="https://huggingface.co/front/assets/huggingface_logo-noborder.svg">HF</a> | SpecTTTra-γ | 5s | 5 | 7 | 0.76 | 0.66 | 0.92 | 154 | 0.7 | 0.1 | 2 | 17 |
| `sonics-spectttra-alpha-120s` | <a class="hf-button" href="https://huggingface.co/awsaf49/sonics-spectttra-alpha-120s"><img src="https://huggingface.co/front/assets/huggingface_logo-noborder.svg">HF</a> | SpecTTTra-α | 120s | 1 | 3 | 0.97 | 0.96 | 0.99 | 47 | 23.7 | 3.9 | 50 | 19 |
| `sonics-spectttra-beta-120s` | <a class="hf-button" href="https://huggingface.co/awsaf49/sonics-spectttra-beta-120s"><img src="https://huggingface.co/front/assets/huggingface_logo-noborder.svg">HF</a> | SpecTTTra-β | 120s | 3 | 5 | 0.92 | 0.86 | 0.99 | 80 | 14.0 | 2.3 | 29 | 21 |
| `sonics-spectttra-gamma-120s` | <a class="hf-button" href="https://huggingface.co/awsaf49/sonics-spectttra-gamma-120s"><img src="https://huggingface.co/front/assets/huggingface_logo-noborder.svg">HF</a> | SpecTTTra-γ | 120s | 5 | 7 | 0.88 | 0.79 | 0.99 | 97 | 10.1 | 1.6 | 20 | 24 |

## 📐 Model Architecture

- **Base Model:** SpecTTTra (Spectro-Temporal Tokens Transformer)
- **Embedding Dimension:** 384
- **Number of Layers:** 12
- **MLP Ratio:** 2.67

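The dimensions above support a quick sanity check on model size. The sketch below counts only the attention and MLP projection weights of a standard Transformer block (embeddings, biases, norms, the classifier head, and the spectro-temporal tokenizer are all ignored), so it is a rough lower bound rather than an exact figure:

```python
# Back-of-the-envelope parameter count from the listed hyperparameters.
d = 384                        # embedding dimension
layers = 12                    # number of layers
mlp_hidden = round(2.67 * d)   # MLP ratio 2.67 -> hidden size 1025 (~1024)

attn = 4 * d * d               # Q, K, V, and output projections
mlp = 2 * d * mlp_hidden       # up- and down-projection of the MLP
total = layers * (attn + mlp)

print(f"~{total / 1e6:.1f}M parameters")  # ~16.5M
```

Actual checkpoints will be somewhat larger once embeddings and the tokenizer are included.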
## 🎶 Audio Processing

- **Sample Rate:** 16kHz
- **FFT Size:** 2048
- **Frequency Range:** 20Hz - 8kHz
- **Normalization:** Mean-std normalization

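The hop length and mel-filter settings fall between this diff's hunks, so the numpy sketch below uses a placeholder hop of 512 and stops at the magnitude spectrogram rather than a mel projection; it only illustrates how the listed FFT size and normalization combine on a 16 kHz input:

```python
import numpy as np

SR, N_FFT = 16_000, 2048
HOP = 512  # hop length is not shown in this diff excerpt; placeholder value

audio = np.random.randn(SR)  # stand-in for 1 s of mono audio

# Magnitude STFT with a Hann window (mel projection omitted).
window = np.hanning(N_FFT)
frames = [audio[i:i + N_FFT] * window
          for i in range(0, len(audio) - N_FFT + 1, HOP)]
spec = np.abs(np.fft.rfft(np.stack(frames), axis=-1))  # (n_frames, 1025)

# Mean-std normalization, as listed above.
spec = (spec - spec.mean()) / (spec.std() + 1e-8)
print(spec.shape)  # (28, 1025)
```

In practice a mel filterbank restricted to the listed 20 Hz–8 kHz range would be applied to `spec` before normalization.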
## ♻️ Usage

```python
# Install from GitHub
!pip install git+https://github.com/awsaf49/sonics.git

# Load model
from sonics import HFAudioClassifier

model = HFAudioClassifier.from_pretrained("awsaf49/sonics-spectttra-alpha-5s")
```
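The snippet above only loads the checkpoint; the exact inference API lives in the GitHub repo and is not shown in this diff. Assuming the 5s variants consume raw 16 kHz mono waveforms, an input clip can be prepared like this (the commented `model(...)` call is hypothetical):

```python
import numpy as np

SAMPLE_RATE = 16_000   # from the Audio Processing section
CLIP_SECONDS = 5       # matches the -5s model variants

# Stand-in waveform; in practice, load 5 s of mono audio resampled to 16 kHz.
audio = np.random.randn(CLIP_SECONDS * SAMPLE_RATE).astype(np.float32)

# Mean-std normalization, as listed under Audio Processing.
audio = (audio - audio.mean()) / (audio.std() + 1e-8)

print(audio.shape)  # (80000,)
# pred = model(torch.from_numpy(audio)[None])  # hypothetical call; see the repo for the real API
```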

## 📝 Citation

```bibtex
@inproceedings{rahman2024sonics,
  title={SONICS: Synthetic Or Not - Identifying Counterfeit Songs},
  author={Rahman, Md Awsafur and Hakim, Zaber Ibn Abdul and Sarker, Najibul Haque and Paul, Bishmoy and Fattah, Shaikh Anowarul},
  booktitle={International Conference on Learning Representations (ICLR)},
  year={2025},
}
```