MichalMlodawski committed d5da015 (1 parent: 8bfdcf2)

Update README.md

Files changed (1): README.md (+151 -3)

---
license: cc-by-nc-nd-4.0
datasets:
- MichalMlodawski/closed-open-eyes
language:
- en
tags:
- eye
- eyes
model-index:
- name: mobilevitv2 Eye State Classifier
  results:
  - task:
      type: image-classification
    dataset:
      name: MichalMlodawski/closed-open-eyes
      type: custom
    metrics:
    - name: Accuracy
      type: self-reported
      value: 99%
    - name: Precision
      type: self-reported
      value: 99%
    - name: Recall
      type: self-reported
      value: 99%
---

# 👁️ Open-Closed Eye Classification mobilevitv2 👁️

## Model Overview 🔍

This model is a fine-tuned version of MobileViTV2, designed to classify images of eyes as either open or closed. With a reported accuracy of 99%, it distinguishes open from closed eyes across a variety of contexts.
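
For a quick smoke test without writing a full script, the model can also be loaded through the `transformers` pipeline API. This is only a minimal sketch: the checkpoint id is the one used in the Usage section below, and the image path is a placeholder to replace with your own file.

```python
from transformers import pipeline

# Minimal sketch (not from the original card): load the fine-tuned checkpoint
# through the image-classification pipeline and classify a single image.
classifier = pipeline(
    "image-classification",
    model="MichalMlodawski/open-closed-eye-classification-mobilevitv2-1.0",
)

# "my_eye_image.jpg" is a placeholder path used here for illustration.
predictions = classifier("my_eye_image.jpg")
print(predictions)  # e.g. [{"label": "...", "score": 0.99}, ...]
```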

## Model Details 📊

- **Model Name**: open-closed-eye-classification-mobilevitv2-1.0
- **Base Model**: apple/mobilevitv2-1.0-imagenet1k-256
- **Fine-tuned By**: Michał Młodawski
- **Categories**:
  - 0: Closed Eyes 😴
  - 1: Open Eyes 👀
- **Accuracy**: 99% 🎯

## Use Cases 💡

This high-accuracy model is particularly useful for applications involving:

- Driver Drowsiness Detection 🚗
- Attentiveness Monitoring in Educational Settings 🏫
- Medical Diagnostics related to Eye Conditions 🏥
- Facial Analysis in Photography and Videography 📸
- Human-Computer Interaction Systems 💻

## How It Works 🛠️

The model takes an input image and classifies it into one of two categories:

- **Closed Eyes** (0): Images where the subject's eyes are fully or mostly closed.
- **Open Eyes** (1): Images where the subject's eyes are open.

The classification leverages the image processing capabilities of the MobileViTV2 architecture, fine-tuned on a carefully curated dataset of eye images.
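
If you prefer to read the class mapping from the checkpoint itself instead of hard-coding it, the sketch below (an illustrative addition, not part of the original card) loads the model referenced later in this card and prints its label configuration; note that the stored strings may be the generic LABEL_0/LABEL_1 even though this card documents 0 as closed and 1 as open.

```python
from transformers import MobileViTV2ForImageClassification

# Illustrative sketch: inspect the label mapping stored with the checkpoint.
# Per this card, index 0 = closed eyes and index 1 = open eyes; the strings in
# config.id2label may still be generic (e.g. "LABEL_0", "LABEL_1").
model = MobileViTV2ForImageClassification.from_pretrained(
    "MichalMlodawski/open-closed-eye-classification-mobilevitv2-1.0"
)
print(model.config.num_labels)  # expected: 2
print(model.config.id2label)    # e.g. {0: "LABEL_0", 1: "LABEL_1"}
```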

## Getting Started 🚀

To start using open-closed-eye-classification-mobilevitv2-1.0, integrate it into your project with the following steps:

### Installation

```bash
pip install transformers==4.37.2
pip install torch==2.3.1
pip install Pillow
```

### Usage

```python
import os
from PIL import Image
import torch
from transformers import AutoImageProcessor, MobileViTV2ForImageClassification

# Path to the folder with images
image_folder = ""
# Path to the model
model_path = "MichalMlodawski/open-closed-eye-classification-mobilevitv2-1.0"

# List of jpg files in the folder
jpg_files = [file for file in os.listdir(image_folder) if file.lower().endswith(".jpg")]

# Check if there are jpg files in the folder
if not jpg_files:
    print("🚫 No jpg files found in folder:", image_folder)
    exit()

# Load the model and image processor
image_processor = AutoImageProcessor.from_pretrained(model_path)
model = MobileViTV2ForImageClassification.from_pretrained(model_path)
model.eval()

# Processing and prediction for each image
results = []
for jpg_file in jpg_files:
    selected_image = os.path.join(image_folder, jpg_file)
    image = Image.open(selected_image).convert("RGB")

    # Preprocess the image (resizing and normalization) using the image processor
    inputs = image_processor(images=image, return_tensors="pt")

    # Prediction using the model
    with torch.no_grad():
        outputs = model(**inputs)
        probabilities = torch.nn.functional.softmax(outputs.logits, dim=-1)
        confidence, predicted = torch.max(probabilities, 1)

    results.append((jpg_file, predicted.item(), confidence.item() * 100))

# Display results
print("🖼️ Image Classification Results 🖼️")
print("=" * 40)

for jpg_file, prediction, confidence in results:
    emoji = "👁️" if prediction == 1 else "❌"
    confidence_bar = "🟩" * int(confidence // 10) + "⬜" * (10 - int(confidence // 10))

    print(f"📄 File name: {jpg_file}")
    print(f"{emoji} Prediction: {'Open' if prediction == 1 else 'Closed'}")
    print(f"🎯 Confidence: {confidence:.2f}% {confidence_bar}")
    print("=" * 40)

print("🏁 Classification completed! 🎉")
```
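
To embed the classifier in a larger application, such as the drowsiness-detection use case mentioned above, it can help to wrap the per-image logic from the script in a small reusable function. The sketch below mirrors the preprocessing and prediction steps of the Usage example; the function name and the hard-coded label strings are illustrative choices, not part of the published model.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, MobileViTV2ForImageClassification

MODEL_PATH = "MichalMlodawski/open-closed-eye-classification-mobilevitv2-1.0"
# Label names follow this card's Categories section (0 = closed, 1 = open);
# the checkpoint's own config may store generic label strings instead.
LABELS = {0: "Closed", 1: "Open"}

image_processor = AutoImageProcessor.from_pretrained(MODEL_PATH)
model = MobileViTV2ForImageClassification.from_pretrained(MODEL_PATH)
model.eval()

def classify_eye_state(image_path):
    """Return (label, confidence_percent) for a single eye image."""
    image = Image.open(image_path).convert("RGB")
    inputs = image_processor(images=image, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    probabilities = torch.nn.functional.softmax(logits, dim=-1)
    confidence, predicted = torch.max(probabilities, dim=-1)
    return LABELS[predicted.item()], confidence.item() * 100

# Hypothetical usage:
# label, confidence = classify_eye_state("frame_0001.jpg")
# print(f"{label}: {confidence:.1f}%")
```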

## Disclaimer ⚠️

This model is provided for research and development purposes only. The creators and distributors of this model do not assume any legal responsibility for its use or misuse. Users are solely responsible for ensuring that their use of this model complies with applicable laws, regulations, and ethical standards. The model's performance may vary depending on the quality and nature of input images. Always validate results in critical applications.

🚫 Do not use this model for any illegal, unethical, or potentially harmful purposes.

📝 Please note that while the model demonstrates high accuracy, it should not be used as a sole decision-making tool in safety-critical systems without proper validation and human oversight.