prithivMLmods committed
Update README.md

README.md
---
license: creativeml-openrail-m
pipeline_tag: image-classification
library_name: transformers
tags:
- deep-fake
- detection
---
|
9 |
|
|
|
10 |
![pipeline](dfd.jpg)
|
11 |
|
12 |
+
|
13 |
+
# **Image-Deep-Fake-Detector**
|
14 |
+
|
```
Classification report:

              precision    recall  f1-score   support
         ...
    accuracy                         0.9935      9521
   macro avg     0.9935    0.9935    0.9935      9521
weighted avg     0.9935    0.9935    0.9935      9521
```
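
A report in this format can be generated with scikit-learn; a minimal sketch, assuming `y_true` and `y_pred` hold the ground-truth and predicted labels for the evaluation set (the variable names and placeholder values below are illustrative, not taken from this repository):

```python
# Sketch: producing a classification report like the one above.
# `y_true`/`y_pred` are placeholders, not this model's actual outputs.
from sklearn.metrics import classification_report

y_true = ["Real", "Fake", "Real", "Fake"]  # ground-truth labels
y_pred = ["Real", "Fake", "Real", "Real"]  # model predictions

print(classification_report(y_true, y_pred, digits=4))
```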
|
27 |
+
|
28 |
+
The **precision score** is a key metric to evaluate the performance of a deep fake detector. Precision is defined as:
|
29 |
+
|
30 |
+
\[
|
31 |
+
\text{Precision} = \frac{\text{True Positives}}{\text{True Positives + False Positives}}
|
32 |
+
\]
|
33 |
+
|
34 |
+
It indicates how well the model avoids false positives, which in the context of a deep fake detector means it measures how often the "Fake" label is correctly identified without mistakenly classifying real content as fake.
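
As a worked example with hypothetical counts (not taken from this model's evaluation): if the detector flags 1,000 images as fake and 994 of them are actually fake, then

\[
\text{Precision} = \frac{994}{994 + 6} = 0.994
\]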
|
35 |
+
|
36 |
+
From the **classification report**, the precision values are:
|
37 |
+
|
38 |
+
- **Real:** 0.9933
|
39 |
+
- **Fake:** 0.9937
|
40 |
+
- **Macro average:** 0.9935
|
41 |
+
- **Weighted average:** 0.9935
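
As a quick consistency check, the macro average is the unweighted mean of the two per-class precisions:

\[
\frac{0.9933 + 0.9937}{2} = 0.9935
\]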
|
42 |
+
|
43 |
+
### Key Observations:
|
44 |
+
1. **High precision (0.9933 for Real, 0.9937 for Fake):**
|
45 |
+
The model rarely misclassifies real content as fake and vice versa. This is critical for applications like deep fake detection, where false accusations (false positives) can have significant consequences.
|
46 |
+
|
47 |
+
2. **Macro and Weighted Averages (0.9935):**
|
48 |
+
The precision is evenly high across both classes, which shows that the model is well-balanced in its performance for detecting both real and fake content.
|
49 |
+
|
50 |
+
3. **Reliability of Predictions:**
|
51 |
+
With precision near 1.0, when the model predicts a video as fake (or real), it's highly likely to be correct. This is essential in reducing unnecessary manual verification in real-world applications like social media content moderation or fraud detection.
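
Inference should follow the standard `transformers` image-classification flow; a minimal sketch, where the repository id is inferred from this card (and may differ) and the file path is a placeholder:

```python
# Sketch: classifying an image as Real or Fake with the transformers pipeline.
# The repo id is an assumption based on this model card; adjust if needed.
from transformers import pipeline

detector = pipeline(
    "image-classification",
    model="prithivMLmods/Image-Deep-Fake-Detector",  # assumed repo id
)

result = detector("example.jpg")  # placeholder path or URL to an image
print(result)  # e.g. [{'label': 'Fake', 'score': 0.99}, {'label': 'Real', 'score': 0.01}]
```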
|
52 |
+
|
53 |
+
### ONNX Exchange
|
54 |
+
|
55 |
+
The ONNX model is converted using the following method, which directly writes the ONNX files to the repository using the Hugging Face write token.
|
56 |
+
|
57 |
+
🧪 : https://huggingface.co/spaces/prithivMLmods/convert-to-onnx-dir
|
58 |
+
|
59 |
+
![Screenshot 2025-01-27 at 19-03-01 ONNX - a Hugging Face Space by prithivMLmods.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/5T979tVYJ4jCKzlE6nOma.png)
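
The same flow can also be reproduced locally; a minimal sketch, assuming `optimum[onnxruntime]` and `huggingface_hub` are installed (the repo id and output paths are assumptions for illustration, not the exact method used by the Space above):

```python
# Sketch: exporting the model to ONNX and uploading the files to the Hub.
# Requires a Hugging Face write token (e.g. `huggingface-cli login` or HF_TOKEN).
from optimum.onnxruntime import ORTModelForImageClassification
from huggingface_hub import HfApi

repo_id = "prithivMLmods/Image-Deep-Fake-Detector"  # assumed repo id

# Export the checkpoint to ONNX and save it locally.
onnx_model = ORTModelForImageClassification.from_pretrained(repo_id, export=True)
onnx_model.save_pretrained("onnx/")

# Write the ONNX files back to the repository.
HfApi().upload_folder(folder_path="onnx/", repo_id=repo_id, path_in_repo="onnx")
```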
|
60 |
+
|
61 |
+
### Conclusion:
|
62 |
+
The deep fake detector model demonstrates **excellent precision** for both the "Real" and "Fake" classes, indicating a highly reliable detection system with minimal false positives. Combined with similarly high recall and F1-score, the overall accuracy (99.35%) reflects that this is a robust and trustworthy model for identifying deep fakes.
|