# NeuroSync Audio2Face Dataset

🛠 Trainer Repository: NeuroSync_Trainer_Lite
📜 License: MIT
## Overview

The NeuroSync Audio2Face Dataset is an open-source dataset for training AI models that predict facial blendshape animation from extracted audio features. It is formatted for direct use with NeuroSync_Trainer_Lite, a lightweight training framework for real-time facial animation.
## Dataset Structure

The dataset consists of pre-extracted audio features paired with corresponding facial blendshape coefficients. No raw audio is included, which preserves anonymity and privacy while maintaining high-quality training data.
## Features

- **Audio Features**: Extracted parameters (e.g., MFCCs, autocorrelation) in `.csv` format.
- **Facial Blendshapes**: Frame-by-frame facial animation coefficients compatible with Unreal Engine and DCC software.
- **No Raw Audio**: Preserves privacy while retaining the information needed for training.
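Since the audio features ship as per-frame rows in `.csv` files, loading them reduces to parsing each row into a float vector. The sketch below is a minimal, stdlib-only illustration; the column names (`mfcc_0`, `autocorr_0`, …) and feature counts are placeholders, not the dataset's actual schema.

```python
import csv
import io

# Illustrative sample only -- real column names and widths come from the dataset CSVs.
sample = """mfcc_0,mfcc_1,autocorr_0
0.12,-0.40,0.88
0.15,-0.38,0.85
"""

def load_feature_rows(text):
    """Parse a feature CSV into a list of per-frame float vectors."""
    reader = csv.DictReader(io.StringIO(text))
    return [[float(v) for v in row.values()] for row in reader]

frames = load_feature_rows(sample)
print(len(frames), len(frames[0]))  # → 2 3  (2 frames, 3 features each)
```

The same parsing applies to the blendshape CSVs, which are likewise frame-by-frame rows of coefficients.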
## Usage

This dataset is optimized for training with NeuroSync_Trainer_Lite. To use it, place the MySlate_*** folders inside the dataset/data directory of the trainer's root.
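To sanity-check the layout described above before launching a run, a small helper can list the capture folders it finds under `dataset/data`. This is an illustrative utility, not part of NeuroSync_Trainer_Lite itself, and it assumes only that capture folders share the `MySlate_` prefix.

```python
import tempfile
from pathlib import Path

def find_capture_folders(trainer_root):
    """Return the names of MySlate_* capture folders under dataset/data.

    Illustrative helper; assumes the trainer's expected layout of
    <trainer_root>/dataset/data/MySlate_*.
    """
    data_dir = Path(trainer_root) / "dataset" / "data"
    return sorted(p.name for p in data_dir.glob("MySlate_*") if p.is_dir())

# Demo with a throwaway directory standing in for the cloned trainer repo.
with tempfile.TemporaryDirectory() as root:
    (Path(root) / "dataset" / "data" / "MySlate_Example").mkdir(parents=True)
    print(find_capture_folders(root))  # → ['MySlate_Example']
```

An empty result usually means the folders were placed one level too deep or too shallow relative to the trainer's root.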