Installation

This guide assumes you already have a TPU instance running. If this is your first time using a TPU, see our TPU setup tutorial, which explains how to set one up.

This walkthrough explains how to install the optimum-tpu package, Hugging Face’s solution for running AI workloads as fast as possible on Google TPUs 🚀

Optimum-TPU

Installing the optimum-tpu Python package is mainly useful for training. For serving, the recommended way to interface with TPUs is through our TGI containers; see our serving tutorial for more information.
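For reference, a TGI serving command might look like the sketch below. The image tag and model ID are placeholders we use only for illustration, so check the serving tutorial for the actual image name and recommended flags:

# hypothetical image tag and model ID, shown only to illustrate the shape of the command
$ docker run --privileged --net host \
    -v $(pwd)/data:/data \
    -e HF_TOKEN=${HF_TOKEN} \
    ghcr.io/huggingface/optimum-tpu:latest \
    --model-id google/gemma-2b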

Installing Optimum-TPU should be as simple as:

$ python3 -m pip install optimum-tpu -f https://storage.googleapis.com/libtpu-releases/index.html
$ export PJRT_DEVICE=TPU
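Note that PJRT_DEVICE only applies to your current shell session. To set it permanently (assuming a bash shell), you can append it to your profile:

$ echo 'export PJRT_DEVICE=TPU' >> ~/.bashrc
$ source ~/.bashrc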

You can now leverage PyTorch/XLA through Optimum-TPU. To validate the installation, run the following command; it should print xla:0, since a single TPU device is bound to this instance.

$ python -c "import torch_xla.core.xla_model as xm; print(xm.xla_device())"
xla:0
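As an additional sanity check (an optional sketch, not part of the official walkthrough), you can run a small tensor operation on the TPU; it should print a tensor placed on the XLA device, something like tensor(4., device='xla:0'):

$ python -c "import torch; import torch_xla.core.xla_model as xm; t = torch.ones(2, 2, device=xm.xla_device()); print(t.sum())"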

You can also look at our fine-tuning examples for more information on how to use the optimum-tpu package.

Remark: you can also use the optimum-tpu training container, a pre-built container with optimum-tpu installed and all Hugging Face libraries pre-configured.