Training on a Google Cloud TPU instance

Welcome to the 🤗 Optimum-TPU training guide! This section covers how to fine-tune models using Google Cloud TPUs.

Supported Models

See Supported Models.

Getting Started

Prerequisites

Before starting the training process, ensure you have:

  1. A configured Google Cloud TPU instance (see Deployment Guide)
  2. Optimum-TPU installed with PyTorch/XLA support:
pip install optimum-tpu -f https://storage.googleapis.com/libtpu-releases/index.html
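Once installation completes, a quick way to confirm that PyTorch/XLA is usable is to query the XLA device. The snippet below is a minimal sanity check, not part of Optimum-TPU itself (the helper name `xla_device_or_hint` is ours); it falls back to an install hint when torch_xla is not importable:

```python
# Minimal sanity check for a PyTorch/XLA install.
# NOTE: the helper name is illustrative, not an Optimum-TPU API.

def xla_device_or_hint() -> str:
    """Return the XLA device string, or an install hint if torch_xla is missing."""
    try:
        # torch_xla is pulled in by the PyTorch/XLA install step above
        import torch_xla.core.xla_model as xm
    except ImportError:
        return "torch_xla not installed; re-run the pip install command above"
    # On a TPU VM this typically reports an "xla" device
    return str(xm.xla_device())

if __name__ == "__main__":
    print(xla_device_or_hint())
```

Running this on the TPU instance should report an XLA device; anywhere else it prints the install hint instead of raising.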

Example Training Scripts

You can now follow one of the example scripts below to get started:

  1. Gemma Fine-tuning:

  2. LLaMA Fine-tuning: