vitorcalvi committed on
Commit
f577600
·
1 Parent(s): 27d4540

Jaw Initial

Files changed (1)
  1. README.md +71 -99
README.md CHANGED
@@ -1,114 +1,86 @@
- # JawTrack

- JawTrack is a real-time jaw motion analysis system that uses computer vision to track and analyze jaw movements. Built with MediaPipe and OpenCV, it provides quantitative measurements for jaw motion assessment.

- ## Features

- - Real-time jaw motion tracking
- - Video-based analysis
- - Quantitative measurements:
-   - Jaw opening distance
-   - Lateral deviation
-   - Movement patterns
- - Data visualization
- - Assessment reports
- - CSV data export

- ## Requirements

- - Python 3.10+
- - OpenCV
- - MediaPipe
- - Gradio
- - NumPy
- - Pandas
- - Matplotlib

- ## Installation

- 1. Clone the repository:

- ```bash
- git clone https://github.com/yourusername/jawtrack.git
- cd jawtrack
- ```

- 2. Create a virtual environment:

- ```bash
- python -m venv venv
- source venv/bin/activate  # On Windows: venv\Scripts\activate
- ```

- 3. Install dependencies:

- ```bash
- pip install -r requirements.txt
- ```

- ## Usage

- 1. Start the application:

- ```bash
- python app.py
- ```

- 2. Open your web browser and navigate to:

- ```
- http://localhost:7860
- ```

- 3. Upload a video or use webcam for real-time analysis

- ## Development Setup

- 1. Install development dependencies:

- ```bash
- pip install -r requirements-dev.txt
- ```

- 2. Run tests:

- ```bash
- pytest tests/
- ```

- ## Project Structure

- ```
- jawtrack/
- ├── README.md
- ├── requirements.txt
- ├── setup.py
- ├── jawtrack/
- │   ├── core/
- │   ├── analysis/
- │   └── ui/
- ├── tests/
- └── examples/
- ```

  ## Contributing

- 1. Fork the repository
- 2. Create a feature branch
- 3. Commit your changes
- 4. Push to the branch
- 5. Create a Pull Request

  ## License

- This project is licensed under the MIT License - see the LICENSE file for details.

- ## Authors

- - Your Name - Initial work

- ## Acknowledgments

- - MediaPipe team for face mesh implementation
- - OpenCV community
- - Gradio team for the web interface framework
+ # Jaw Motion Assessment Tool

+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
+ [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME)

+ ## Overview

+ This application provides a comprehensive assessment of jaw motion, designed for evaluating temporomandibular joint (TMJ) function. It uses computer vision, specifically MediaPipe's Face Mesh, to track facial landmarks and analyze jaw movements from video recordings, giving clinicians, researchers, and patients objective measurements of jaw mobility.
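+ As a rough illustration of the tracking underneath, per-frame landmark extraction with MediaPipe's Face Mesh solution looks something like the sketch below (the input file name and the chin landmark index are illustrative assumptions, not taken from the app's code):

+ ```python
+ import cv2
+ import mediapipe as mp
+
+ # One FaceMesh instance in video (tracking) mode, reused across frames.
+ face_mesh = mp.solutions.face_mesh.FaceMesh(
+     static_image_mode=False,
+     max_num_faces=1,
+     refine_landmarks=True,
+ )
+
+ cap = cv2.VideoCapture("assessment.mp4")  # hypothetical input file
+ while cap.isOpened():
+     ok, frame = cap.read()
+     if not ok:
+         break
+     # MediaPipe expects RGB; OpenCV decodes frames as BGR.
+     results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
+     if results.multi_face_landmarks:
+         landmarks = results.multi_face_landmarks[0].landmark
+         # Each landmark carries normalized x, y (and a relative z).
+         print(landmarks[152].x, landmarks[152].y)  # index 152 ≈ chin tip
+ cap.release()
+ face_mesh.close()
+ ```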
 
+ ## Features

+ - **Automated Measurements:** Measures jaw opening and lateral deviation from video input (see the sketch after this list).
+ - **Multiple Movement Analysis:** Supports the analysis of several jaw movements:
+   - Baseline
+   - Maximum Opening
+   - Lateral Left
+   - Lateral Right
+   - Combined Movements
+ - **Detailed Reporting:** Generates a comprehensive report including:
+   - Maximum Jaw Opening (mm)
+   - Average Lateral Deviation (mm)
+   - Movement Range (mm)
+   - Quality Score (0-1)
+   - Movement Counts per Type
+   - Timestamp of the assessment
+ - **Visualizations:** Plots jaw opening and lateral deviation over time, giving a clear picture of the patient's movement patterns.
+ - **Objective Assessment:** Provides quantitative data to support clinical decision-making and track patient progress.
+ - **User-Friendly Interface:** Built with Gradio for an intuitive, easy-to-use interface.
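+ The two headline metrics could be derived from landmark geometry roughly as follows (a sketch only: the landmark indices and the normalized-units-to-mm scale factor are assumptions, not the app's actual constants):

+ ```python
+ import numpy as np
+
+ MM_PER_UNIT = 180.0  # hypothetical fixed scale implied by the default calibration
+
+ def jaw_metrics(landmarks):
+     """Jaw opening and lateral deviation for one frame of Face Mesh landmarks."""
+     upper_lip = np.array([landmarks[13].x, landmarks[13].y])  # inner upper lip (assumed index)
+     lower_lip = np.array([landmarks[14].x, landmarks[14].y])  # inner lower lip (assumed index)
+     nose_tip = np.array([landmarks[1].x, landmarks[1].y])     # midline reference (assumed index)
+     chin = np.array([landmarks[152].x, landmarks[152].y])     # chin tip (assumed index)
+
+     opening_mm = np.linalg.norm(lower_lip - upper_lip) * MM_PER_UNIT
+     # Lateral deviation: signed horizontal offset of the chin from the facial midline.
+     deviation_mm = (chin[0] - nose_tip[0]) * MM_PER_UNIT
+     return opening_mm, deviation_mm
+ ```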

+ ## How to Use

+ 1. **Access the Application:** Visit the application on Hugging Face Spaces: [https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME](https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME) (replace `YOUR_USERNAME` and `YOUR_SPACE_NAME` with your Hugging Face username and Space name).
+ 2. **Record Video:** Use the "Record Assessment" input to record the patient performing the instructed jaw movements. Ensure the face is well lit and clearly visible throughout the video.
+ 3. **Select Movement Type:** Choose the movement being performed from the "Movement Type" options:
+    - `baseline`
+    - `maximum_opening`
+    - `lateral_left`
+    - `lateral_right`
+    - `combined`
+ 4. **Process and Analyze:** Click "Submit". The application processes the video, tracks the facial landmarks, and calculates the measurements.
+ 5. **View Results:** Review the three output components:
+    - **Processed Recording:** The video with overlaid measurements, highlighting the tracked points and calculated distances.
+    - **Analysis Report:** A text report summarizing the key metrics and movement counts.
+    - **Movement Patterns:** A plot of jaw opening and lateral deviation over the course of the recording.
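+ Wiring these inputs and outputs together in Gradio might look something like this (a sketch under assumed component labels; `assess` is a hypothetical stand-in for the app's actual processing function):

+ ```python
+ import gradio as gr
+
+ MOVEMENTS = ["baseline", "maximum_opening", "lateral_left", "lateral_right", "combined"]
+
+ def assess(video_path, movement_type):
+     # Placeholder for the real pipeline, which would return the annotated
+     # video path, the report text, and a movement-pattern figure.
+     return video_path, f"Movement type: {movement_type}", None
+
+ demo = gr.Interface(
+     fn=assess,
+     inputs=[
+         gr.Video(label="Record Assessment"),
+         gr.Radio(MOVEMENTS, label="Movement Type"),
+     ],
+     outputs=[
+         gr.Video(label="Processed Recording"),
+         gr.Textbox(label="Analysis Report"),
+         gr.Plot(label="Movement Patterns"),
+     ],
+ )
+ demo.launch()
+ ```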

+ ## Technical Details

+ - **Frontend:** Gradio
+ - **Backend:** Python
+ - **Computer Vision:** MediaPipe Face Mesh
+ - **Data Processing:** NumPy, Pandas
+ - **Video Processing:** OpenCV (with Python's `tempfile` module for intermediate files)
+ - **Plotting:** Matplotlib
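+ For the movement-pattern output, a two-panel time series along these lines would match the plot described above (a sketch; the DataFrame column names are assumptions):

+ ```python
+ import matplotlib.pyplot as plt
+ import pandas as pd
+
+ def plot_patterns(df: pd.DataFrame):
+     """Plot per-frame measurements; expects 'time_s', 'opening_mm', 'deviation_mm' columns."""
+     fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
+     ax1.plot(df["time_s"], df["opening_mm"])
+     ax1.set_ylabel("Jaw opening (mm)")
+     ax2.plot(df["time_s"], df["deviation_mm"])
+     ax2.set_ylabel("Lateral deviation (mm)")
+     ax2.set_xlabel("Time (s)")
+     fig.suptitle("Movement Patterns")
+     return fig
+ ```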

+ ## Calibration

+ The application does not currently support manual calibration. The default calibration assumes a standard camera-to-face distance and uses it for the millimeter conversion.

+ ## Limitations

+ - Accuracy depends on video quality, lighting conditions, and clear visibility of the face.
+ - The default calibration may be inaccurate for setups that deviate from the assumed camera distance.

  ## Contributing

+ Contributions are welcome! If you'd like to contribute, please follow these steps:

+ 1. Fork the repository.
+ 2. Create a new branch for your feature or bug fix.
+ 3. Commit your changes with clear, concise commit messages.
+ 4. Push your branch to your fork.
+ 5. Submit a pull request to the main repository.

  ## License

+ This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

+ ## Acknowledgements

+ - [MediaPipe](https://mediapipe.dev/) for the Face Mesh model.
+ - [Gradio](https://gradio.app/) for the user interface framework.
+ - All the contributors who helped improve this project.

+ ## Contact

+ For questions or issues, please open an issue on the GitHub repository.