---
title: README
emoji: 🐢
colorFrom: purple
colorTo: gray
sdk: static
pinned: false
---
<div class="grid lg:grid-cols-3 gap-x-4 gap-y-7"> | |
<p class="lg:col-span-3"> | |
Intel and Hugging Face are building powerful optimization tools to accelerate training and inference with Transformers. | |
</p> | |
<a | |
href="https://huggingface.co/blog/intel" | |
class="block overflow-hidden group" | |
> | |
<div | |
class="w-full h-40 object-cover mb-10 bg-indigo-100 rounded-lg flex items-center justify-center dark:bg-gray-900 dark:group-hover:bg-gray-850" | |
> | |
<img | |
alt="" | |
src="https://cdn-media.huggingface.co/marketing/intel-page/Intel-Hugging-Face-alt-version2-org-page.png" | |
class="w-40" | |
/> | |
</div> | |
<div class="underline">Learn more about Hugging Face collaboration with Intel AI</div> | |
    </a>
    <a
        href="https://github.com/huggingface/optimum"
        class="block overflow-hidden group"
    >
        <div
            class="w-full h-40 object-cover mb-10 bg-indigo-100 rounded-lg flex items-center justify-center dark:bg-gray-900 dark:group-hover:bg-gray-850"
        >
            <img
                alt=""
                src="/blog/assets/25_hardware_partners_program/carbon_inc_quantizer.png"
                class="w-40"
            />
        </div>
        <div class="underline">Quantize Transformers with Intel® Neural Compressor and Optimum</div>
    </a>
<a href="https://huggingface.co/blog/generative-ai-models-on-intel-cpu" class="block overflow-hidden group"> | |
<div | |
class="w-full h-40 object-cover mb-10 bg-indigo-100 rounded-lg flex items-center justify-center dark:bg-gray-900 dark:group-hover:bg-gray-850" | |
> | |
<img | |
alt="" | |
src="/blog/assets/143_q8chat/thumbnail.png" | |
class="w-40" | |
/> | |
</div> | |
<div class="underline">Quantizing 7B LLM on Intel CPU</div> | |
</a> | |
<div class="lg:col-span-3"> | |
<p class="mb-2"> | |
Intel optimizes the most widely adopted and innovative AI software | |
tools, frameworks, and libraries for Intel® architecture. Whether | |
you are computing locally or deploying AI applications on a massive | |
scale, your organization can achieve peak performance with AI | |
software optimized for Intel® Xeon® Scalable platforms. | |
</p> | |
<p class="mb-2"> | |
Intel’s engineering collaboration with Hugging Face offers state-of-the-art hardware and software acceleration to train, fine-tune and predict with Transformers. | |
</p> | |
        <p>
            Useful Resources:
        </p>
        <ul>
            <li class="ml-6"><a href="https://huggingface.co/hardware/intel" class="underline" data-ga-category="intel-org" data-ga-action="clicked partner page" data-ga-label="partner page">- Intel AI + Hugging Face partner page</a></li>
            <li class="ml-6"><a href="https://github.com/IntelAI" class="underline" data-ga-category="intel-org" data-ga-action="clicked intel ai github" data-ga-label="intel ai github">- Intel AI GitHub</a></li>
            <li class="ml-6"><a href="https://www.intel.com/content/www/us/en/developer/partner/hugging-face.html" class="underline" data-ga-category="intel-org" data-ga-action="clicked intel partner page" data-ga-label="intel partner page">- Developer Resources from Intel and Hugging Face</a></li>
        </ul>
    </div>
<div class="lg:col-span-3"> | |
<p class="mb-2"> | |
To get started with Intel® hardware and software optimizations, download and install the Optimum-Intel® | |
and Intel® Extension for Transformers libraries with the following commands: | |
</p> | |
        <pre><code>$ python -m pip install "optimum-intel[extras]"@git+https://github.com/huggingface/optimum-intel.git
$ python -m pip install intel-extension-for-transformers</code></pre>
        <p>
            <i>For additional information on these two libraries, including installation, features, and usage, see the two links below.</i>
        </p>
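        <p class="mb-2">
            <i>As a quick check that both installs work, here is a minimal sketch of post-training dynamic quantization with 🤗 Optimum-Intel and Intel® Neural Compressor. The model checkpoint and output directory are illustrative placeholders, not part of the official instructions.</i>
        </p>
        <pre><code># Minimal sketch: post-training dynamic quantization with Optimum-Intel
# and Intel Neural Compressor. The checkpoint and save_directory below
# are placeholders; swap in your own. Dynamic quantization needs no
# calibration dataset.
from transformers import AutoModelForSequenceClassification
from neural_compressor.config import PostTrainingQuantConfig
from optimum.intel import INCQuantizer

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english"
)
quantizer = INCQuantizer.from_pretrained(model)
quantizer.quantize(
    quantization_config=PostTrainingQuantConfig(approach="dynamic"),
    save_directory="quantized_model",
)</code></pre>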
<p class="mb-2"> | |
Next, find your desired model (and dataset) by searching in the search box at the top-left of Hugging Face’s website. | |
Add “intel” to your search to narrow your search to Intel®-pretrained models. | |
</p> | |
<p class="mb-2"> | |
On the model’s page (called a “Model Card”) you will find description and usage information, an embedded | |
inferencing demo, and the associated dataset. In the upper-right of your screen, click “Use in Transformers” | |
for helpful code hints on how to import the model to your own workspace with an established Hugging Face pipeline and tokenizer. | |
</p> | |
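        <p class="mb-2">
            <i>A minimal sketch of that flow, assuming the Intel-pretrained question-answering checkpoint Intel/dynamic_tinybert as an example; any Intel® checkpoint from your search works the same way:</i>
        </p>
        <pre><code># Minimal sketch: run an Intel-pretrained checkpoint through a standard
# Transformers pipeline. "Intel/dynamic_tinybert" is an example
# question-answering model from the Intel organization on the Hub.
from transformers import pipeline

qa = pipeline("question-answering", model="Intel/dynamic_tinybert")
result = qa(
    question="Who is collaborating with Hugging Face?",
    context="Intel collaborates with Hugging Face to accelerate Transformers on Xeon platforms.",
)
print(result["answer"])</code></pre>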
        <p>
            Library Source and Documentation:
        </p>
        <ul>
            <li class="ml-6"><a href="https://github.com/huggingface/optimum-intel" class="underline" data-ga-category="intel-org" data-ga-action="clicked optimum intel" data-ga-label="optimum intel">- 🤗 Optimum-Intel® library</a></li>
            <li class="ml-6"><a href="https://github.com/intel/intel-extension-for-transformers" class="underline" data-ga-category="intel-org" data-ga-action="clicked intel extension for transformers" data-ga-label="intel extension for transformers">- Intel® Extension for Transformers</a></li>
        </ul>
    </div>
</div>