---
license: cc-by-4.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: dev
    path: data/dev-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: gender
    dtype: string
  - name: speaker_id
    dtype: string
  - name: sentence_id
    dtype: string
  - name: text
    dtype: string
  - name: duration
    dtype: float32
  - name: audio.throat_microphone
    dtype: audio
  - name: audio.acoustic_microphone
    dtype: audio
  splits:
  - name: train
    num_bytes: 4868539208
    num_examples: 4000
  - name: dev
    num_bytes: 1152997554
    num_examples: 1000
  - name: test
    num_bytes: 1190844664
    num_examples: 1000
  download_size: 7025122286
  dataset_size: 7212381426
language:
- ko
---

# TAPS: Throat and Acoustic Paired Speech Dataset

## 1. DATASET SUMMARY

**The Throat and Acoustic Paired Speech (TAPS) dataset** is a standardized corpus for deep learning-based speech enhancement, specifically targeting throat microphone recordings. Throat microphones effectively suppress background noise but suffer from high-frequency attenuation caused by the low-pass filtering effect of the skin and tissue. **The dataset provides paired recordings from 60 native Korean speakers, captured simultaneously with an accelerometer-based throat microphone and an acoustic microphone.** These pairs enable the development of speech enhancement models that recover the lost high-frequency components and improve intelligibility. Additionally, we introduce a mismatch correction technique that aligns the signals from the two microphones, which improves model training.

___
## 2. DATASET USAGE

To use the TAPS dataset, follow the steps below.

### 2.1 Loading the dataset

You can load the dataset from Hugging Face as follows:

```python
from datasets import load_dataset

dataset = load_dataset("yskim3271/Throat_and_Acoustic_Pairing_Speech_Dataset")

print(dataset)
```

Example output:

```python
DatasetDict({
    train: Dataset({
        features: ['gender', 'speaker_id', 'sentence_id', 'text', 'duration', 'audio.throat_microphone', 'audio.acoustic_microphone'],
        num_rows: 4000
    })
    dev: Dataset({
        features: ['gender', 'speaker_id', 'sentence_id', 'text', 'duration', 'audio.throat_microphone', 'audio.acoustic_microphone'],
        num_rows: 1000
    })
    test: Dataset({
        features: ['gender', 'speaker_id', 'sentence_id', 'text', 'duration', 'audio.throat_microphone', 'audio.acoustic_microphone'],
        num_rows: 1000
    })
})
```

### 2.2 Accessing a sample

Each dataset entry consists of metadata and paired audio recordings. You can access a sample as follows:

```python
sample = dataset["train"][0]  # Get the first sample

print(f"Gender: {sample['gender']}")
print(f"Speaker ID: {sample['speaker_id']}")
print(f"Sentence ID: {sample['sentence_id']}")
print(f"Text: {sample['text']}")
print(f"Duration: {sample['duration']} sec")
print(f"Throat Microphone Audio Path: {sample['audio.throat_microphone']['path']}")
print(f"Acoustic Microphone Audio Path: {sample['audio.acoustic_microphone']['path']}")
```
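Beyond the file path, the `Audio` feature of the `datasets` library also decodes each recording into a waveform array with its sampling rate. The snippet below, continuing from the example above, is a minimal sketch of how the paired waveforms can be extracted and written to disk; it assumes the optional `soundfile` package is installed, and the output filenames are placeholders.

```python
import soundfile as sf

sample = dataset["train"][0]

# Each audio field decodes to a dict with 'path', 'array' (float waveform),
# and 'sampling_rate'.
throat = sample["audio.throat_microphone"]
acoustic = sample["audio.acoustic_microphone"]

print(f"Throat mic:   {throat['array'].shape[0]} samples @ {throat['sampling_rate']} Hz")
print(f"Acoustic mic: {acoustic['array'].shape[0]} samples @ {acoustic['sampling_rate']} Hz")

# Save the pair as WAV files, e.g., for listening or offline preprocessing.
sf.write("throat_example.wav", throat["array"], throat["sampling_rate"])
sf.write("acoustic_example.wav", acoustic["array"], acoustic["sampling_rate"])
```

Because the two recordings are captured simultaneously, the throat and acoustic waveforms of each entry can serve directly as an input/target pair when training speech enhancement models.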
___
## 3. LINKS AND DETAILS

- **Project website:** [Link](http://taps.postech.ac.kr)
- **Point of contact:** Yunsik Kim (ys.kim@postech.ac.kr)
- **Collected by:** Intelligent Semiconductor and Wearable Devices (ISWD) at the Pohang University of Science and Technology (POSTECH)
- **Language:** Korean
- **Download size:** 7.03 GB
- **Total audio duration:** 15.3 hours
- **Number of speech utterances:** 6,000

___
## 4. CITATION

The BibTeX entry for the dataset is currently being prepared.

___
## 5. DATASET STRUCTURE & STATISTICS

- Training set: 40 speakers, 4,000 utterances, 10.2 hours
- Development set: 10 speakers, 1,000 utterances, 2.5 hours
- Test set: 10 speakers, 1,000 utterances, 2.6 hours
- Each set is gender-balanced (50% male, 50% female).
- No speaker overlap across the train/dev/test sets.

| Statistic | Train | Dev | Test |
|:---------------|:------------|:--------------|:-----------|
| **Number of speakers** | 40 | 10 | 10 |
| **Number of male speakers** | 20 | 5 | 5 |
| **Speaker age, mean / standard deviation (years)** | 28.5 / 7.3 | 25.6 / 3.0 | 26.2 / 1.4 |
| **Number of utterances** | 4,000 | 1,000 | 1,000 |
| **Total length of utterances (hours)** | 10.2 | 2.5 | 2.6 |
| **Utterance length, max / average / min (s)** | 26.3 / 9.1 / 3.2 | 17.9 / 9.0 / 3.3 | 16.6 / 9.3 / 4.2 |

___
## 6. DATA FIELDS

Each dataset entry contains:

- **`gender`:** Speaker’s gender (male/female).
- **`speaker_id`:** Unique speaker identifier (e.g., "p01").
- **`sentence_id`:** Utterance index (e.g., "u30").
- **`text`:** Transcription (provided only for the test set).
- **`duration`:** Length of the audio sample in seconds.
- **`audio.throat_microphone`:** Throat microphone signal.
- **`audio.acoustic_microphone`:** Acoustic microphone signal.

___
## 7. DATASET CREATION

### 7.1 Hardware System for Audio Data Collection

The hardware system records signals from the throat microphone and the acoustic microphone simultaneously, keeping the two channels synchronized.

- **Throat microphone:** The TDK IIM-42652 MEMS accelerometer captures neck-surface vibrations (8 kHz sampling rate, 16-bit resolution).