Update README.md
README.md
CHANGED
@@ -1,23 +1,31 @@
----
-license: apache-2.0
-language:
-- en
-base_model:
-- google/vit-base-patch16-224-in21k
-pipeline_tag: image-classification
-library_name: transformers
-tags:
-- Deepfake
-- Quality
-- Assess
----
-
-
-
-
-
-
-
-
-
+---
+license: apache-2.0
+language:
+- en
+base_model:
+- google/vit-base-patch16-224-in21k
+pipeline_tag: image-classification
+library_name: transformers
+tags:
+- Deepfake
+- Quality
+- Assess
+---
+# **Deepfake-QualityAssess-85M**
+
+Deepfake-QualityAssess-85M is an image classification model for assessing deepfake quality, distinguishing high-quality deepfakes from deepfakes with visible issues. It is based on Google's ViT model (`google/vit-base-patch16-224-in21k`).
+
+A moderate number of training samples was used to keep the final training run efficient while still reaching reasonable evaluation metrics. Since the task involves classifying deepfake images of varying quality levels, the model was trained accordingly, and future improvements will follow based on the complexity of the task.
+
+
+
+Classification report:
+
+                       precision    recall  f1-score   support
+
+    Issue In Deepfake     0.7962    0.8067    0.8014      1500
+High Quality Deepfake     0.7877    0.7767    0.7822      1500
+
+             accuracy                        0.7940      3000
+            macro avg     0.7920    0.7917    0.7918      3000
          weighted avg     0.7920    0.7917    0.7918      3000
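
The updated card declares `pipeline_tag: image-classification` with `library_name: transformers`, so inference should work through the standard `pipeline` API. The sketch below is illustrative only: the full Hugging Face repository id is not shown in this diff, so `<namespace>/Deepfake-QualityAssess-85M` is a placeholder.

```python
# Minimal inference sketch for the image-classification pipeline declared in the card.
# NOTE: "<namespace>/Deepfake-QualityAssess-85M" is a placeholder repo id; replace it
# with the model's actual Hugging Face path.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="<namespace>/Deepfake-QualityAssess-85M",
)

# The pipeline accepts a local file path, a PIL.Image, or an image URL.
predictions = classifier("suspect_frame.jpg")
for pred in predictions:
    # Expected labels, per the report above: "Issue In Deepfake", "High Quality Deepfake".
    print(f"{pred['label']}: {pred['score']:.4f}")
```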
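
The classification report above follows the scikit-learn format. As a hypothetical sketch (not the author's evaluation code), numbers like these could be reproduced by running the classifier over a labeled test set and calling `classification_report`; the test-set listing below is a stand-in.

```python
# Hypothetical evaluation sketch: build a scikit-learn style classification report
# for the two classes shown above. This is NOT the author's evaluation script, and
# the sample list below is a stand-in for a real labeled dataset.
from sklearn.metrics import classification_report
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="<namespace>/Deepfake-QualityAssess-85M",  # placeholder repo id
)

# Assumed structure: (image_path, ground-truth label) pairs.
test_samples = [
    ("samples/issue_0001.jpg", "Issue In Deepfake"),
    ("samples/high_quality_0001.jpg", "High Quality Deepfake"),
    # ... the card reports 1500 test images per class.
]

y_true = [label for _, label in test_samples]
# Take the top-scoring label for each image as the prediction.
y_pred = [classifier(path)[0]["label"] for path, _ in test_samples]

print(classification_report(y_true, y_pred, digits=4))
```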