vitorcalvi committed
Commit 2366154 · 1 Parent(s): f577600

Jaw Initial

Files changed (2):
  1. README.md +10 -86
  2. README_EXPLAINED.MD +86 -0
README.md CHANGED
@@ -1,86 +1,10 @@
- # Jaw Motion Assessment Tool
-
- [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
- [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME)
-
- ## Overview
-
- This application provides a comprehensive assessment of jaw motion, designed for evaluating temporomandibular joint (TMJ) function. It uses MediaPipe's Face Mesh to track facial landmarks and analyze jaw movements from video recordings. The tool gives clinicians, researchers, and patients objective measurements and analysis of jaw mobility.
-
- ## Features
-
- - **Automated Measurements:** Accurately measures jaw opening and lateral deviation from video input.
- - **Multiple Movement Analysis:** Supports the analysis of various jaw movements, including:
-   - Baseline
-   - Maximum Opening
-   - Lateral Left
-   - Lateral Right
-   - Combined Movements
- - **Detailed Reporting:** Generates a comprehensive report including:
-   - Maximum Jaw Opening (mm)
-   - Average Lateral Deviation (mm)
-   - Movement Range (mm)
-   - Quality Score (0-1)
-   - Movement Counts per Type
-   - Timestamp of assessment
- - **Visualizations:** Plots jaw opening and lateral deviation over time, giving a clear picture of the patient's movement patterns and jaw mobility.
- - **Objective Assessment:** Provides quantitative data to support clinical decision-making and track patient progress.
- - **User-Friendly Interface:** Built with Gradio for an intuitive, easy-to-use interface.
-
- ## How to Use
-
- 1. **Access the Application:** Visit the application on Hugging Face Spaces: [https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME](https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME) (replace `YOUR_USERNAME` and `YOUR_SPACE_NAME` with your actual Hugging Face username and Space name).
- 2. **Record Video:** Use the "Record Assessment" input to record a video of the patient performing jaw movements as instructed. Ensure the patient's face is well lit and clearly visible in the video.
- 3. **Select Movement Type:** Choose the type of movement being performed from the "Movement Type" options:
-    - `baseline`
-    - `maximum_opening`
-    - `lateral_left`
-    - `lateral_right`
-    - `combined`
- 4. **Process and Analyze:** Click the "Submit" button. The application will automatically process the video, track facial landmarks, and calculate the relevant measurements.
- 5. **View Results:** Review the results presented in the following output components:
-    - **Processed Recording:** A video with overlaid measurements, visually highlighting the tracked points and calculated distances.
-    - **Analysis Report:** A detailed text report summarizing the key metrics and movement counts.
-    - **Movement Patterns:** A plot visualizing the jaw opening and lateral deviation throughout the recording.
-
- ## Technical Details
-
- - **Frontend:** Gradio
- - **Backend:** Python
- - **Computer Vision:** MediaPipe Face Mesh
- - **Data Processing:** NumPy, Pandas
- - **Video Processing:** OpenCV, with Python's `tempfile` module for intermediate files
- - **Plotting:** Matplotlib
-
- ## Calibration
-
- The application does not currently support manual calibration. The default calibration assumes a standard camera-to-face distance and uses it to convert pixel measurements to millimeters.
-
- ## Limitations
-
- - Accuracy depends on video quality, lighting conditions, and clear visibility of the face.
- - The default calibration may not be accurate for all setups.
-
- ## Contributing
-
- Contributions are welcome! If you'd like to contribute to this project, please follow these steps:
-
- 1. Fork the repository.
- 2. Create a new branch for your feature or bug fix.
- 3. Commit your changes with clear and concise commit messages.
- 4. Push your branch to your forked repository.
- 5. Submit a pull request to the main repository.
-
- ## License
-
- This project is licensed under the MIT License; see the [LICENSE](LICENSE) file for details.
-
- ## Acknowledgements
-
- - [MediaPipe](https://mediapipe.dev/) for the Face Mesh model.
- - [Gradio](https://gradio.app/) for the user interface framework.
- - All the contributors who helped improve this project.
-
- ## Contact
-
- For any questions or issues, please open an issue on the GitHub repository.
 
+ ---
+ title: Jaw Motion Assessment
+ emoji: 🦷
+ colorFrom: '#007bff'
+ colorTo: '#00d4ff'
+ sdk: gradio
+ sdk_version: '4.19.2'
+ app_file: app.py
+ pinned: false
+ ---

README_EXPLAINED.MD ADDED
@@ -0,0 +1,86 @@
+ # Jaw Motion Assessment Tool
+
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
+ [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME)
+
+ ## Overview
+
+ This application provides a comprehensive assessment of jaw motion, designed for evaluating temporomandibular joint (TMJ) function. It uses MediaPipe's Face Mesh to track facial landmarks and analyze jaw movements from video recordings. The tool gives clinicians, researchers, and patients objective measurements and analysis of jaw mobility.
+
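The tracking code itself is not shown on this page, so the following is only a minimal sketch of per-frame landmark extraction with MediaPipe Face Mesh; the function name and parameter values are illustrative, not the app's confirmed implementation.

```python
# Sketch: yield Face Mesh landmarks for each frame of a recorded video.
import cv2
import mediapipe as mp

mp_face_mesh = mp.solutions.face_mesh

def extract_landmarks(video_path):
    """Yield (frame_index, landmark_list) for frames where a face is found."""
    cap = cv2.VideoCapture(video_path)
    with mp_face_mesh.FaceMesh(max_num_faces=1,
                               refine_landmarks=True,
                               min_detection_confidence=0.5) as face_mesh:
        index = 0
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV decodes frames as BGR.
            results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_face_landmarks:
                yield index, results.multi_face_landmarks[0].landmark
            index += 1
    cap.release()
```

Each yielded landmark has normalized `x`, `y`, `z` coordinates in [0, 1], which downstream code must rescale before computing distances.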
+ ## Features
+
+ - **Automated Measurements:** Accurately measures jaw opening and lateral deviation from video input (a measurement sketch follows this list).
+ - **Multiple Movement Analysis:** Supports the analysis of various jaw movements, including:
+   - Baseline
+   - Maximum Opening
+   - Lateral Left
+   - Lateral Right
+   - Combined Movements
+ - **Detailed Reporting:** Generates a comprehensive report including:
+   - Maximum Jaw Opening (mm)
+   - Average Lateral Deviation (mm)
+   - Movement Range (mm)
+   - Quality Score (0-1)
+   - Movement Counts per Type
+   - Timestamp of assessment
+ - **Visualizations:** Plots jaw opening and lateral deviation over time, giving a clear picture of the patient's movement patterns and jaw mobility.
+ - **Objective Assessment:** Provides quantitative data to support clinical decision-making and track patient progress.
+ - **User-Friendly Interface:** Built with Gradio for an intuitive, easy-to-use interface.
+
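The README does not document which landmarks drive the measurements or how the quality score is defined, so this sketch only illustrates one plausible way to compute the metrics listed above. Indices 13/14 (inner lips), 152 (chin tip), and 1 (nose tip) are standard Face Mesh landmarks; the scale factor and the quality-score definition (fraction of frames with a tracked face) are assumptions.

```python
# Sketch of the core measurements and report fields; assumes non-empty inputs.
import math
from datetime import datetime

MM_PER_UNIT = 180.0  # placeholder normalized-units -> mm scale; see Calibration

def jaw_opening_mm(lm):
    """Distance between upper (13) and lower (14) inner-lip landmarks."""
    return math.hypot(lm[14].x - lm[13].x, lm[14].y - lm[13].y) * MM_PER_UNIT

def lateral_deviation_mm(lm):
    """Signed horizontal offset of the chin (152) from the nose tip (1)."""
    return (lm[152].x - lm[1].x) * MM_PER_UNIT

def build_report(openings, deviations, tracked_frames, total_frames, counts):
    """Assemble the metrics listed under 'Detailed Reporting'."""
    return {
        "max_jaw_opening_mm": max(openings),
        "avg_lateral_deviation_mm": sum(abs(d) for d in deviations) / len(deviations),
        "movement_range_mm": max(openings) - min(openings),
        "quality_score": tracked_frames / total_frames,  # assumed definition, 0-1
        "movement_counts": counts,  # e.g. {"maximum_opening": 3, ...}
        "timestamp": datetime.now().isoformat(timespec="seconds"),
    }
```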
+ ## How to Use
+
+ 1. **Access the Application:** Visit the application on Hugging Face Spaces: [https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME](https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME) (replace `YOUR_USERNAME` and `YOUR_SPACE_NAME` with your actual Hugging Face username and Space name).
+ 2. **Record Video:** Use the "Record Assessment" input to record a video of the patient performing jaw movements as instructed. Ensure the patient's face is well lit and clearly visible in the video.
+ 3. **Select Movement Type:** Choose the type of movement being performed from the "Movement Type" options:
+    - `baseline`
+    - `maximum_opening`
+    - `lateral_left`
+    - `lateral_right`
+    - `combined`
+ 4. **Process and Analyze:** Click the "Submit" button. The application will automatically process the video, track facial landmarks, and calculate the relevant measurements (a sketch of the underlying interface wiring follows this list).
+ 5. **View Results:** Review the results presented in the following output components:
+    - **Processed Recording:** A video with overlaid measurements, visually highlighting the tracked points and calculated distances.
+    - **Analysis Report:** A detailed text report summarizing the key metrics and movement counts.
+    - **Movement Patterns:** A plot visualizing the jaw opening and lateral deviation throughout the recording.
+
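The component labels above map naturally onto a `gr.Interface`. This is a sketch of that wiring under the Gradio 4.x API pinned in the frontmatter; `process_assessment` is a hypothetical stand-in for the app's real analysis pipeline.

```python
# Sketch of the Gradio wiring implied by the component labels above.
import gradio as gr

def process_assessment(video_path, movement_type):
    # Stand-in for the real pipeline: track landmarks, measure, report, plot.
    report = f"Movement type: {movement_type}\n(metrics would appear here)"
    return video_path, report, None  # processed video, report text, figure

demo = gr.Interface(
    fn=process_assessment,
    inputs=[
        gr.Video(sources=["webcam"], label="Record Assessment"),
        gr.Radio(
            choices=["baseline", "maximum_opening", "lateral_left",
                     "lateral_right", "combined"],
            value="baseline",
            label="Movement Type",
        ),
    ],
    outputs=[
        gr.Video(label="Processed Recording"),
        gr.Textbox(label="Analysis Report"),
        gr.Plot(label="Movement Patterns"),
    ],
    title="Jaw Motion Assessment",
)

if __name__ == "__main__":
    demo.launch()
```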
+ ## Technical Details
+
+ - **Frontend:** Gradio
+ - **Backend:** Python
+ - **Computer Vision:** MediaPipe Face Mesh
+ - **Data Processing:** NumPy, Pandas
+ - **Video Processing:** OpenCV, with Python's `tempfile` module for intermediate files
+ - **Plotting:** Matplotlib (see the plotting sketch after this list)
+
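As an illustration of the Matplotlib piece, the "Movement Patterns" figure could be rendered along these lines; the two-panel layout and argument names are assumed, not taken from the app.

```python
# Sketch of the movement-patterns figure: two stacked time series.
import matplotlib
matplotlib.use("Agg")  # headless rendering, as on a Spaces server
import matplotlib.pyplot as plt

def plot_movement_patterns(times_s, opening_mm, deviation_mm):
    fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True, figsize=(8, 5))
    ax1.plot(times_s, opening_mm)
    ax1.set_ylabel("Jaw opening (mm)")
    ax2.plot(times_s, deviation_mm)
    ax2.set_ylabel("Lateral deviation (mm)")
    ax2.set_xlabel("Time (s)")
    fig.suptitle("Movement Patterns")
    fig.tight_layout()
    return fig  # a Figure is directly renderable by gr.Plot
```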
+ ## Calibration
+
+ The application does not currently support manual calibration. The default calibration assumes a standard camera-to-face distance and uses it to convert pixel measurements to millimeters.
+
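A common alternative to manual calibration is to scale by a facial feature of known average size, such as an interpupillary distance of roughly 63 mm. The sketch below illustrates that idea; the app's actual conversion constant is undocumented, and the iris-center indices (468 and 473) exist only when Face Mesh runs with `refine_landmarks=True`.

```python
# Sketch: derive a mm-per-pixel scale from average interpupillary distance.
# 63 mm is a population average, not a per-patient measurement, so
# per-subject accuracy is approximate by construction.
import math

AVERAGE_IPD_MM = 63.0
IRIS_CENTER_A, IRIS_CENTER_B = 468, 473  # Face Mesh iris centers (refined mesh)

def mm_per_pixel(landmarks, frame_width, frame_height):
    a, b = landmarks[IRIS_CENTER_A], landmarks[IRIS_CENTER_B]
    # Landmarks are normalized to [0, 1]; rescale to pixels before measuring.
    ipd_px = math.hypot((a.x - b.x) * frame_width, (a.y - b.y) * frame_height)
    return AVERAGE_IPD_MM / ipd_px
```

Multiplying any pixel distance by this factor yields millimeters; any error in the assumed interpupillary distance propagates linearly into the reported measurements.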
+ ## Limitations
+
+ - Accuracy depends on video quality, lighting conditions, and clear visibility of the face.
+ - The default calibration may not be accurate for all setups.
+
+ ## Contributing
+
+ Contributions are welcome! If you'd like to contribute to this project, please follow these steps:
+
+ 1. Fork the repository.
+ 2. Create a new branch for your feature or bug fix.
+ 3. Commit your changes with clear and concise commit messages.
+ 4. Push your branch to your forked repository.
+ 5. Submit a pull request to the main repository.
+
+ ## License
+
+ This project is licensed under the MIT License; see the [LICENSE](LICENSE) file for details.
+
+ ## Acknowledgements
+
+ - [MediaPipe](https://mediapipe.dev/) for the Face Mesh model.
+ - [Gradio](https://gradio.app/) for the user interface framework.
+ - All the contributors who helped improve this project.
+
+ ## Contact
+
+ For any questions or issues, please open an issue on the GitHub repository.