| Model | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|---|
| ConvNext-Base | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | TFLITE | 7.743 ms | 0 - 27 MB | FP16 | NPU | [ConvNext-Base.tflite](https://huggingface.co/qualcomm/ConvNext-Base/blob/main/ConvNext-Base.tflite) |
| ConvNext-Base | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 8.489 ms | 1 - 22 MB | FP16 | NPU | [ConvNext-Base.so](https://huggingface.co/qualcomm/ConvNext-Base/blob/main/ConvNext-Base.so) |
| ConvNext-Base | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | ONNX | 37.604 ms | 0 - 413 MB | FP16 | NPU | [ConvNext-Base.onnx](https://huggingface.co/qualcomm/ConvNext-Base/blob/main/ConvNext-Base.onnx) |
| ConvNext-Base | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | TFLITE | 5.804 ms | 0 - 61 MB | FP16 | NPU | [ConvNext-Base.tflite](https://huggingface.co/qualcomm/ConvNext-Base/blob/main/ConvNext-Base.tflite) |
| ConvNext-Base | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 6.163 ms | 1 - 61 MB | FP16 | NPU | [ConvNext-Base.so](https://huggingface.co/qualcomm/ConvNext-Base/blob/main/ConvNext-Base.so) |
| ConvNext-Base | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | ONNX | 26.61 ms | 1 - 142 MB | FP16 | NPU | [ConvNext-Base.onnx](https://huggingface.co/qualcomm/ConvNext-Base/blob/main/ConvNext-Base.onnx) |
| ConvNext-Base | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | TFLITE | 4.334 ms | 0 - 73 MB | FP16 | NPU | [ConvNext-Base.tflite](https://huggingface.co/qualcomm/ConvNext-Base/blob/main/ConvNext-Base.tflite) |
| ConvNext-Base | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 4.688 ms | 1 - 74 MB | FP16 | NPU | Use Export Script |
| ConvNext-Base | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | ONNX | 23.177 ms | 1 - 153 MB | FP16 | NPU | [ConvNext-Base.onnx](https://huggingface.co/qualcomm/ConvNext-Base/blob/main/ConvNext-Base.onnx) |
| ConvNext-Base | QCS8550 (Proxy) | QCS8550 Proxy | TFLITE | 7.77 ms | 0 - 22 MB | FP16 | NPU | [ConvNext-Base.tflite](https://huggingface.co/qualcomm/ConvNext-Base/blob/main/ConvNext-Base.tflite) |
| ConvNext-Base | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 8.227 ms | 1 - 3 MB | FP16 | NPU | Use Export Script |
| ConvNext-Base | QCS8450 (Proxy) | QCS8450 Proxy | TFLITE | 20.143 ms | 0 - 54 MB | FP16 | NPU | [ConvNext-Base.tflite](https://huggingface.co/qualcomm/ConvNext-Base/blob/main/ConvNext-Base.tflite) |
| ConvNext-Base | QCS8450 (Proxy) | QCS8450 Proxy | QNN | 21.227 ms | 0 - 52 MB | FP16 | NPU | Use Export Script |
| ConvNext-Base | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 8.566 ms | 1 - 1 MB | FP16 | NPU | Use Export Script |
| ConvNext-Base | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 40.983 ms | 175 - 175 MB | FP16 | NPU | [ConvNext-Base.onnx](https://huggingface.co/qualcomm/ConvNext-Base/blob/main/ConvNext-Base.onnx) |
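
The Target Model column links to precompiled assets hosted in this repository. As an illustrative sketch (not part of the original card), one of those files can be pulled locally with the `huggingface_hub` package; only the repo id and filename below come from the table above.

```python
from huggingface_hub import hf_hub_download

# Download the precompiled TFLite asset listed in the table above.
# hf_hub_download returns the local cache path of the file.
tflite_path = hf_hub_download(
    repo_id="qualcomm/ConvNext-Base",
    filename="ConvNext-Base.tflite",
)
print(tflite_path)
```
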
## Installation

Install the package via pip:
```bash
pip install qai-hub-models
```
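
After installation, the model can be loaded directly in Python. The snippet below is a rough sketch for a local sanity check: `Model.from_pretrained()` and `get_input_spec()` appear later in this card, while the assumed input-spec format (`{name: (shape, dtype)}`) and the random input are illustrative assumptions.

```python
import torch

from qai_hub_models.models.convnext_base import Model

# Load pretrained ConvNext-Base as a torch.nn.Module wrapper.
torch_model = Model.from_pretrained()
torch_model.eval()

# Assumed input-spec format: {name: (shape, dtype)}.
input_spec = torch_model.get_input_spec()
name, (shape, _dtype) = next(iter(input_spec.items()))
dummy = torch.rand(*shape)

# Quick local forward pass to confirm the model runs.
with torch.no_grad():
    out = torch_model(dummy)
print(name, tuple(out.shape))
```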

Profiling summary for the Samsung Galaxy S23 (TFLITE) run:

```
Device                          : Samsung Galaxy S23 (13)
Runtime                         : TFLITE
Estimated inference time (ms)   : 7.7
Estimated peak memory usage (MB): [0, 27]
Total # Ops                     : 598
Compute Unit(s)                 : NPU (598 ops)
```

To prepare the model for on-device deployment with Qualcomm AI Hub, load it, pick a target device, and set up tracing:

```python
import qai_hub as hub
from qai_hub_models.models.convnext_base import Model

# Load the pretrained model
torch_model = Model.from_pretrained()

# Device
device = hub.Device("Samsung Galaxy S24")

# Trace model
input_shape = torch_model.get_input_spec()
```
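
The excerpt above stops at the input spec; the card's full example continues past this point. Below is a hedged sketch of the usual next steps: trace the module and submit compile and profile jobs with the `qai_hub` client. `submit_compile_job`, `submit_profile_job`, and `get_target_model` are standard client calls; the example-input construction is an assumption for illustration.

```python
import torch

# Build an example input from the spec (assumes a single input of the
# form {name: (shape, dtype)}) and trace the module with TorchScript.
name, (shape, _dtype) = next(iter(input_shape.items()))
example_input = torch.rand(*shape)
pt_model = torch.jit.trace(torch_model, example_input)

# Compile the traced model for the chosen device on Qualcomm AI Hub.
compile_job = hub.submit_compile_job(
    model=pt_model,
    device=device,
    input_specs=input_shape,
)

# Profile the compiled asset on a hosted device; the results resemble
# the profiling summary shown earlier in this card.
profile_job = hub.submit_profile_job(
    model=compile_job.get_target_model(),
    device=device,
)
```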

## License
* The license for the original implementation of ConvNext-Base can be found [here](https://github.com/pytorch/vision/blob/main/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf).