# Jaw Motion Assessment Tool

[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME)

## Overview

This application provides a comprehensive assessment of jaw motion, designed for evaluating temporomandibular joint (TMJ) function. It uses MediaPipe's Face Mesh to track facial landmarks and analyze jaw movements from video recordings, giving clinicians, researchers, and patients objective measurements of jaw mobility.
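
For orientation, the core measurement loop can be pictured as follows. This is a minimal sketch, not the application's actual code: the landmark indices (13/14, the inner upper/lower lip midpoints in MediaPipe Face Mesh) and the `jaw_openings` helper are illustrative assumptions.

```python
import cv2
import mediapipe as mp
import numpy as np

# Assumed landmark choice: 13/14 are the inner upper/lower lip midpoints
# in MediaPipe Face Mesh; the app may track different points (e.g. the chin).
UPPER_LIP, LOWER_LIP = 13, 14

def jaw_openings(video_path: str) -> list[float]:
    """Return the per-frame upper-to-lower-lip distance in pixels."""
    openings = []
    cap = cv2.VideoCapture(video_path)
    with mp.solutions.face_mesh.FaceMesh(
        static_image_mode=False, max_num_faces=1, refine_landmarks=True
    ) as face_mesh:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            h, w = frame.shape[:2]
            # MediaPipe expects RGB; OpenCV decodes to BGR.
            results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if not results.multi_face_landmarks:
                continue  # no face tracked in this frame
            lm = results.multi_face_landmarks[0].landmark
            upper = np.array([lm[UPPER_LIP].x * w, lm[UPPER_LIP].y * h])
            lower = np.array([lm[LOWER_LIP].x * w, lm[LOWER_LIP].y * h])
            openings.append(float(np.linalg.norm(upper - lower)))
    cap.release()
    return openings
```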

## Features

- **Automated Measurements:** Measures jaw opening and lateral deviation directly from video input.
- **Multiple Movement Analysis:** Supports the analysis of various jaw movements, including:
  - Baseline
  - Maximum Opening
  - Lateral Left
  - Lateral Right
  - Combined Movements
- **Detailed Reporting:** Generates a comprehensive report (see the metrics sketch after this list) including:
  - Maximum Jaw Opening (mm)
  - Average Lateral Deviation (mm)
  - Movement Range (mm)
  - Quality Score (0-1)
  - Movement Counts per Type
  - Timestamp of assessment
- **Visualizations:** Plots jaw opening and lateral deviation over time, giving a clear picture of the patient's movement patterns and jaw mobility.
- **Objective Assessment:** Provides quantitative data to support clinical decision-making and track patient progress.
- **User-Friendly Interface:** Built with Gradio for an intuitive, browser-based interface.
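
The report metrics listed above can be derived from the per-frame series produced by the tracking step. The aggregation below is a sketch under stated assumptions: in particular, the quality-score formula (fraction of frames with a tracked face) is an invented stand-in for whatever the application actually computes.

```python
from datetime import datetime
import numpy as np

def summarize(openings_mm, deviations_mm, tracked_frames, total_frames):
    """Aggregate per-frame measurements (in mm) into report metrics."""
    openings = np.asarray(openings_mm, dtype=float)
    deviations = np.asarray(deviations_mm, dtype=float)
    if openings.size == 0:
        return {"error": "no face tracked in the recording"}
    return {
        "max_opening_mm": round(float(openings.max()), 1),
        "avg_lateral_deviation_mm": round(float(np.abs(deviations).mean()), 1),
        "movement_range_mm": round(float(openings.max() - openings.min()), 1),
        # Assumed definition: share of frames in which a face was tracked.
        "quality_score": round(tracked_frames / max(total_frames, 1), 2),
        "timestamp": datetime.now().isoformat(timespec="seconds"),
    }
```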

## How to Use

1.  **Access the Application:** Visit the application on Hugging Face Spaces: [https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME](https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME) (Replace `YOUR_USERNAME` and `YOUR_SPACE_NAME` with your actual Hugging Face username and space name).
2.  **Record Video:** Use the "Record Assessment" input to record a video of the patient performing jaw movements as instructed. Ensure the patient's face is well-lit and clearly visible in the video.
3.  **Select Movement Type:** Choose the type of movement being performed from the "Movement Type" options:
    - `baseline`
    - `maximum_opening`
    - `lateral_left`
    - `lateral_right`
    - `combined`
4.  **Process and Analyze:** Click the "Submit" button. The application will automatically process the video, track facial landmarks, and calculate relevant measurements.
5.  **View Results:** Review the results presented in the following output components:
    - **Processed Recording:** A video with overlaid measurements, visually highlighting the tracked points and calculated distances.
    - **Analysis Report:** A detailed text report summarizing the key metrics and movement counts.
    - **Movement Patterns:** A plot visualizing the jaw opening and lateral deviation throughout the recording.
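
As a rough idea of the "Movement Patterns" output from step 5, a two-panel Matplotlib plot of the measured series might look like the sketch below; the function name, figure layout, and default frame rate are illustrative, not the application's exact rendering.

```python
import matplotlib
matplotlib.use("Agg")  # headless rendering, as on a Spaces server
import matplotlib.pyplot as plt
import numpy as np

def plot_movement(openings_mm, deviations_mm, fps=30.0, out_path="patterns.png"):
    """Plot jaw opening and lateral deviation against time."""
    t = np.arange(len(openings_mm)) / fps  # frame index -> seconds
    fig, (ax_open, ax_dev) = plt.subplots(2, 1, sharex=True, figsize=(8, 5))
    ax_open.plot(t, openings_mm)
    ax_open.set_ylabel("Opening (mm)")
    ax_dev.plot(t, deviations_mm)
    ax_dev.set_ylabel("Lateral deviation (mm)")
    ax_dev.set_xlabel("Time (s)")
    fig.tight_layout()
    fig.savefig(out_path)
    plt.close(fig)
    return out_path
```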

## Technical Details

- **Frontend:** Gradio
- **Backend:** Python
- **Computer Vision:** MediaPipe Face Mesh
- **Data Processing:** NumPy, Pandas
- **Video Processing:** OpenCV, with Python's `tempfile` module for intermediate files
- **Plotting:** Matplotlib
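
To show how this stack fits together, a minimal Gradio wiring for the interface described above could look like the following; the `analyze` stub is a placeholder for the real tracking, reporting, and plotting pipeline, not the repository's actual entry point.

```python
import gradio as gr

def analyze(video_path, movement_type):
    # Placeholder: the real function would run landmark tracking,
    # metric aggregation, and plotting on the uploaded recording.
    report = f"Movement type: {movement_type}\n(analysis pipeline goes here)"
    return video_path, report, None  # annotated video, report text, plot

demo = gr.Interface(
    fn=analyze,
    inputs=[
        gr.Video(label="Record Assessment"),
        gr.Radio(
            ["baseline", "maximum_opening", "lateral_left",
             "lateral_right", "combined"],
            label="Movement Type",
        ),
    ],
    outputs=[
        gr.Video(label="Processed Recording"),
        gr.Textbox(label="Analysis Report"),
        gr.Plot(label="Movement Patterns"),
    ],
)

if __name__ == "__main__":
    demo.launch()
```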

## Calibration

The application does not currently support manual calibration. It assumes a standard subject-to-camera distance and uses that fixed assumption to convert pixel measurements to millimeters.
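
In practice, a fixed calibration of this kind reduces to a constant scale factor. The sketch below uses an invented `MM_PER_PIXEL` value purely for illustration; the application's actual constant is not documented here.

```python
# Assumed scale at the presumed standard camera distance (illustrative only).
MM_PER_PIXEL = 0.26

def px_to_mm(distance_px: float) -> float:
    """Convert a pixel distance to millimeters under the fixed calibration."""
    return distance_px * MM_PER_PIXEL

print(px_to_mm(150.0))  # 150 px -> 39.0 mm under this assumed scale
```

A manual calibration step (e.g. measuring a reference object of known size in frame) would replace the hard-coded constant with a per-session scale.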

## Limitations

- Accuracy depends on video quality, lighting conditions, and clear visibility of the face.
- The fixed default calibration may be inaccurate when the subject sits closer to or farther from the camera than the assumed standard distance.

## Contributing

Contributions are welcome! If you'd like to contribute to this project, please follow these steps:

1.  Fork the repository.
2.  Create a new branch for your feature or bug fix.
3.  Commit your changes with clear and concise commit messages.
4.  Push your branch to your forked repository.
5.  Submit a pull request to the main repository.

## License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## Acknowledgements

- [MediaPipe](https://mediapipe.dev/) for the Face Mesh model.
- [Gradio](https://gradio.app/) for the user interface framework.
- All the contributors who helped improve this project.

## Contact

For any questions or issues, please open an issue on the GitHub repository.