kodiak619 committed on
Commit 89c3c95 · verified
1 Parent(s): d8cfa78

Create README.md

Files changed (1)
  1. README.md +29 -0
README.md ADDED
# PruneSLU-30M: Enhanced Model for On-Device Spoken Language Understanding

**PruneSLU-30M** is an enhanced version of the [openai/whisper-tiny.en](https://huggingface.co/openai/whisper-tiny.en) model, designed for robust Spoken Language Understanding (SLU) tasks. It strikes a balance between performance and efficiency, making it suitable for more demanding on-device applications.

### Model Overview

- **Base Model:** [openai/whisper-tiny.en](https://huggingface.co/openai/whisper-tiny.en)
- **Task:** Spoken Language Understanding (SLU)
- **Dataset:** Fine-tuned on the [STOP dataset](https://github.com/facebookresearch/fairseq/tree/main/examples/audio_nlp/nlu)
- **Pruning Techniques:** Vocabulary pruning and layer-wise structural pruning, followed by retraining, yield a model that is both efficient and high-performing (an illustrative sketch follows this list).

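The exact pruning recipe is not reproduced here, but the idea behind layer-wise structural pruning can be illustrated with a short sketch. It assumes the standard Transformers Whisper classes; the choice of which decoder layers to keep is purely hypothetical, and vocabulary pruning (shrinking the token embedding and output projection to the tokens actually needed) would be applied in a similar spirit before retraining.

```python
# Illustrative sketch of layer-wise structural pruning -- not the exact
# recipe behind PruneSLU-30M. Uses the standard Transformers Whisper classes.
import torch
from transformers import WhisperForConditionalGeneration

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en")

# Hypothetical choice: keep only two of the four decoder layers.
keep = [0, 3]
base.model.decoder.layers = torch.nn.ModuleList(
    [base.model.decoder.layers[i] for i in keep]
)
base.config.decoder_layers = len(keep)

# Inspect the reduced size; the pruned model is then retrained on the SLU data.
print(f"{sum(p.numel() for p in base.parameters()) / 1e6:.1f}M parameters")
```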
### Key Features

- **Optimized Size:** PruneSLU-30M contains 30 million parameters, offering higher capacity for SLU tasks while remaining suitable for on-device deployment (a quick size check is sketched after this list).
- **Improved Performance:** The model is designed to handle more complex SLU tasks, providing enhanced accuracy and robustness compared to lighter models.
- **Seamless Integration:** The model can be loaded directly through the Hugging Face Transformers library.

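As a quick sanity check of the reported size, the parameter count of the released checkpoint can be inspected directly with PyTorch; nothing below is specific to this repository beyond the checkpoint name.

```python
from transformers import WhisperForConditionalGeneration

model = WhisperForConditionalGeneration.from_pretrained("kodiak619/PruneSLU-30M")

# Total parameter count; the model card reports 30 million.
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.1f}M parameters")
```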
### Usage

To load PruneSLU-30M with the Hugging Face Transformers library, use the following code:

```python
from transformers import WhisperForConditionalGeneration

# Load the pruned checkpoint from the Hugging Face Hub
model = WhisperForConditionalGeneration.from_pretrained("kodiak619/PruneSLU-30M")
```

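A minimal end-to-end inference sketch is shown below. It assumes the repository also ships a compatible `WhisperProcessor` (feature extractor plus the pruned tokenizer); if it does not, the feature extractor from `openai/whisper-tiny.en` can still produce the log-mel inputs, but decoding should use whatever tokenizer matches the pruned vocabulary. The audio file name is a placeholder for a 16 kHz mono recording.

```python
# Minimal inference sketch. Assumes the repo provides a matching processor;
# "utterance.wav" is a placeholder for a 16 kHz mono audio file.
import librosa
from transformers import WhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("kodiak619/PruneSLU-30M")
model = WhisperForConditionalGeneration.from_pretrained("kodiak619/PruneSLU-30M")

speech, _ = librosa.load("utterance.wav", sr=16000)
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")

# Generate the output sequence (for SLU fine-tuning on STOP this is
# presumably the semantic parse rather than a plain transcript).
predicted_ids = model.generate(inputs.input_features)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```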
### Applications

PruneSLU-30M is ideal for applications requiring a balance between computational efficiency and performance, such as voice-enabled AI systems, smart assistants, and SLU tasks in moderately resource-constrained environments.