A CLIP ViT-B/32 model trained with the Quilt-1M dataset (https://quilt1m.github.io/).

As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models.

The OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis.

## Direct Use

Image classification and other image task fine-tuning, linear probe image classification.
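
As a concrete illustration of zero-shot use, the sketch below classifies a single image with the [open_clip](https://github.com/mlfoundations/open_clip) library. The hub identifier, the input file, and the candidate prompts are assumptions for illustration only; substitute the model's actual repository id and your own class descriptions.

```python
# Minimal zero-shot classification sketch with open_clip.
# NOTE: the hub id below is an assumption, not confirmed by this card;
# replace it with the model's actual Hugging Face repository id.
import torch
from PIL import Image
import open_clip

REPO = "hf-hub:wisdomik/QuiltNet-B-32"  # assumed repo id
model, _, preprocess = open_clip.create_model_and_transforms(REPO)
tokenizer = open_clip.get_tokenizer(REPO)
model.eval()

image = preprocess(Image.open("patch.png")).unsqueeze(0)  # one histology patch
text = tokenizer([
    "a histopathology image of adipose tissue",   # illustrative prompts
    "a histopathology image of lymph node tissue",
])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # L2-normalize, then softmax over scaled cosine similarities.
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # per-prompt probabilities for the single input image
```
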
### Intended Use

The model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models.

#### Primary intended uses

The primary intended users of these models are AI researchers.

We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision histopathology models.

### Out-of-Scope Use Cases

**Any** deployed use case of the model, whether commercial or not, is currently out of scope. Non-deployed use cases, such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy.

Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English-language use cases.

# Training Data

Curated from educational videos on YouTube, QUILT-1M contributes the largest image-text dataset in histopathology.

# Evaluation

Evaluation was done with the code in the [CLIP Benchmark suite](https://github.com/LAION-AI/CLIP_benchmark); results across a range of histology tasks and datasets can be found in the paper.
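
The benchmark suite wires up datasets, prompt templates, and metrics itself; purely as an illustration of what its zero-shot classification task computes, a hand-rolled version of that protocol might look like the sketch below. The function, class names, and templates are assumptions for illustration, not the benchmark's API.

```python
# Hand-rolled sketch of the zero-shot classification protocol such
# benchmarks run; classnames/templates are illustrative assumptions.
import torch

@torch.no_grad()
def zeroshot_accuracy(model, tokenizer, loader, classnames, templates):
    # One L2-normalized text embedding per class, averaged over templates.
    class_weights = []
    for name in classnames:
        prompts = tokenizer([t.format(name) for t in templates])
        emb = model.encode_text(prompts)
        emb = emb / emb.norm(dim=-1, keepdim=True)
        emb = emb.mean(dim=0)
        class_weights.append(emb / emb.norm())
    class_weights = torch.stack(class_weights)  # (num_classes, dim)

    # Score each image against every class embedding; argmax is the
    # zero-shot prediction.
    correct = total = 0
    for images, labels in loader:  # a torch DataLoader of (image, label)
        feats = model.encode_image(images)
        feats = feats / feats.norm(dim=-1, keepdim=True)
        preds = (feats @ class_weights.T).argmax(dim=-1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total

# Illustrative call (prompts are assumptions, not the paper's):
# zeroshot_accuracy(model, tokenizer, loader,
#                   classnames=["adipose tissue", "lymphocytes"],
#                   templates=["a histopathology image of {}."])
```
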
# Disclaimer