kvaishnavi committed
Commit 79f5814
Parent(s): 0f87da3
Update README.md

README.md CHANGED
@@ -33,7 +33,7 @@ How do you know which is the best ONNX model for you:
 - No → Access the Hugging Face ONNX models for CPU devices and instructions at [Phi-3-vision-128k-instruct-onnx-cpu](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct-onnx-cpu)
 
 ## How to Get Started with the Model
-To support the Phi-3 models across a range of devices, platforms, and EP backends, we introduce a new API to wrap several aspects of generative AI inferencing. This API makes it easy to drag and drop LLMs straight into your app. To run the early version of these models with ONNX, follow the steps [here](
+To support the Phi-3 models across a range of devices, platforms, and EP backends, we introduce a new API to wrap several aspects of generative AI inferencing. This API makes it easy to drag and drop LLMs straight into your app. To run the early version of these models with ONNX, follow the steps [here](https://aka.ms/run-phi3-v-onnx).
 
 ## Hardware Supported
 
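For context, the generative AI inferencing API referred to in the changed line is the ONNX Runtime generate() API (the `onnxruntime-genai` package). A minimal sketch of running a Phi-3 vision ONNX model with it might look like the following; the model directory, image path, and prompt text are placeholder assumptions, and the exact API surface can differ between onnxruntime-genai releases, so the linked instructions at https://aka.ms/run-phi3-v-onnx remain the authoritative steps.

```python
# Minimal sketch, assuming onnxruntime-genai is installed and the ONNX model files
# have been downloaded locally (placeholder path below).
import onnxruntime_genai as og

model = og.Model("./phi-3-vision-128k-instruct-onnx-cpu")   # placeholder model directory
processor = model.create_multimodal_processor()
tokenizer_stream = processor.create_stream()

# Build a prompt that references one image, following the Phi-3 vision chat template.
image = og.Images.open("example.jpg")                        # placeholder image path
prompt = "<|user|>\n<|image_1|>\nDescribe this image.<|end|>\n<|assistant|>\n"
inputs = processor(prompt, images=image)

params = og.GeneratorParams(model)
params.set_inputs(inputs)
params.set_search_options(max_length=3072)

# Generate token by token and stream the decoded text to stdout.
generator = og.Generator(model, params)
while not generator.is_done():
    generator.compute_logits()
    generator.generate_next_token()
    print(tokenizer_stream.decode(generator.get_next_tokens()[0]), end="", flush=True)
```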