Commit a577b19 (verified) by prithivMLmods · Parent: 5be2cb2
Update README.md

Files changed (1): README.md (+53 −3)
Deepfake-QualityAssess-85M is an image classification model for quality assessment of deepfake images.
A moderate number of training samples was used to make the final training run efficient, as reflected in its efficiency metrics. Since the task involves classifying deepfake images of varying quality, the model was trained accordingly; future improvements will be made as the complexity of the task demands.

Classification report:

                  precision    recall  f1-score   support

        accuracy                           0.7940      3000
       macro avg     0.7920    0.7917    0.7918      3000
    weighted avg     0.7920    0.7917    0.7918      3000

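The macro-averaged figures in the report above are the unweighted mean of the per-class precision, recall, and F1 scores. A minimal sketch of that computation, using small illustrative label lists rather than the model's actual evaluation data:

```python
# Sketch: how macro-averaged metrics like those in the report above are computed.
# The labels below are illustrative; they are not the model's evaluation data.
def per_class_metrics(y_true, y_pred, cls):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

y_true = [0, 0, 1, 1, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0]
classes = sorted(set(y_true))

# Macro average: unweighted mean of the per-class scores.
scores = [per_class_metrics(y_true, y_pred, c) for c in classes]
macro = [sum(s[i] for s in scores) / len(scores) for i in range(3)]
print(f"macro avg  precision={macro[0]:.4f}  recall={macro[1]:.4f}  f1={macro[2]:.4f}")
```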
# **Inference with Hugging Face Pipeline**
```python
from transformers import pipeline

# Load the model (device=0 selects the first GPU; use device=-1 for CPU)
pipe = pipeline("image-classification", model="prithivMLmods/Deepfake-QualityAssess-85M", device=0)

# Predict on an image
result = pipe("path_to_image.jpg")
print(result)
```

# **Inference with PyTorch**
```python
from transformers import ViTForImageClassification, ViTImageProcessor
from PIL import Image
import torch

# Load the model and processor
model = ViTForImageClassification.from_pretrained("prithivMLmods/Deepfake-QualityAssess-85M")
processor = ViTImageProcessor.from_pretrained("prithivMLmods/Deepfake-QualityAssess-85M")

# Load and preprocess the image
image = Image.open("path_to_image.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

# Perform inference
with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits
    predicted_class = torch.argmax(logits, dim=1).item()

# Map class index to label
label = model.config.id2label[predicted_class]
print(f"Predicted Label: {label}")
```

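The logits from the snippet above can also be normalized into probabilities, so each quality prediction carries a confidence score. A minimal sketch, using a dummy tensor in place of `outputs.logits`:

```python
import torch

# Dummy stand-in for `outputs.logits`, shape (batch, num_classes)
logits = torch.tensor([[1.2, -0.3]])

probs = torch.softmax(logits, dim=-1)            # normalize logits to probabilities
confidence, predicted_class = probs.max(dim=-1)  # top class and its probability
print(predicted_class.item(), confidence.item())
```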
# **Limitations of Deepfake-QualityAssess-85M**
1. **Limited Generalization** – The model is trained on specific datasets and may not generalize well to unseen deepfake generation techniques or novel deepfake artifacts.
2. **Variability in Deepfake Quality** – Different deepfake creation methods introduce varying levels of noise and artifacts, which may affect model performance.
3. **Dependence on Training Data** – The model's accuracy is influenced by the quality and diversity of the training data; biases in the dataset could lead to misclassification.
4. **Resolution Sensitivity** – Performance may degrade on extremely high- or low-resolution images not seen during training.
5. **Potential False Positives/Negatives** – The model may misclassify good-quality deepfakes as bad (or vice versa), limiting its reliability in critical applications.
6. **Lack of Explainability** – Because it is based on a ViT (Vision Transformer), its decision-making process is less interpretable than that of traditional models, making it harder to analyze why a particular classification was made.
7. **Not a Deepfake Detector** – This model assesses the quality of deepfakes but does not determine whether an image is real or fake.

# **Intended Use of Deepfake-QualityAssess-85M**
- **Quality Assessment for Research** – Used by researchers to analyze and improve deepfake generation methods by assessing output quality.
- **Dataset Filtering** – Helps filter out low-quality deepfake samples from datasets for better training of deepfake detection models.
- **Forensic Analysis** – Supports forensic teams in evaluating deepfake quality to prioritize high-quality samples for deeper analysis.
- **Content Moderation** – Assists social media platforms and content moderation teams in assessing deepfake quality before deciding on further actions.
- **Benchmarking Deepfake Models** – Used to compare and evaluate different deepfake generation models based on their output quality.
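The dataset-filtering use case above can be sketched as a small helper. Here `classify` stands in for the Hugging Face pipeline object (`pipe(path)` returns a list of `{"label", "score"}` dicts), and the label string `"High Quality Deepfake"` is a hypothetical `id2label` entry, not confirmed from the model's config:

```python
# Sketch of the "Dataset Filtering" use case: keep only images whose top
# predicted label marks them as high quality. `classify` is any callable with
# the pipeline's output shape; the label name below is hypothetical.
def filter_high_quality(paths, classify, keep_label="High Quality Deepfake", min_score=0.5):
    kept = []
    for path in paths:
        top = max(classify(path), key=lambda r: r["score"])  # highest-scoring prediction
        if top["label"] == keep_label and top["score"] >= min_score:
            kept.append(path)
    return kept

# Usage with the real pipeline would look like:
#   pipe = pipeline("image-classification", model="prithivMLmods/Deepfake-QualityAssess-85M")
#   good = filter_high_quality(image_paths, pipe)
```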