FlashAttention can only be used for models with the fp16 or bf16 torch type, so make sure to cast your model to the appropriate type first. The memory-efficient attention backend is able to handle fp32 models. By default, SDPA selects the most performant kernel available but you can check whether a backend is available in a given setting (hardware, problem size) with torch.backends.cuda.sdp_kernel as a context manager:
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", torch_dtype=torch.float16).to("cuda")

# convert the model to BetterTransformer
model.to_bettertransformer()

input_text = "Hello my dog is cute and"
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")

with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False):
    outputs = model.generate(**inputs)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

If you see a bug with the traceback below, try using the nightly version of PyTorch which may have broader coverage for FlashAttention:

```bash
RuntimeError: No available kernel. Aborting execution.

# install PyTorch nightly
pip3 install -U --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu118
```

BetterTransformer

Some BetterTransformer features are being upstreamed to Transformers with default support for native torch.nn.scaled_dot_product_attention. BetterTransformer still has wider coverage than the Transformers SDPA integration, but you can expect more and more architectures to natively support SDPA in Transformers.
Check out our benchmarks with BetterTransformer and scaled dot product attention in the Out of the box acceleration and memory savings of 🤗 decoder models with PyTorch 2.0 blog post, and learn more about the fastpath execution in the BetterTransformer blog post.

BetterTransformer accelerates inference with its fastpath (native PyTorch specialized implementation of Transformer functions) execution. The two optimizations in the fastpath execution are:

1. fusion, which combines multiple sequential operations into a single "kernel" to reduce the number of computation steps
2. skipping the inherent sparsity of padding tokens to avoid unnecessary computation with nested tensors
BetterTransformer also converts all attention operations to use the more memory-efficient scaled dot product attention (SDPA), and it calls optimized kernels like FlashAttention under the hood.

Before you start, make sure you have 🤗 Optimum installed.

Then you can enable BetterTransformer with the [PreTrainedModel.to_bettertransformer] method:

```python
model = model.to_bettertransformer()
```

You can return the original Transformers model with the [~PreTrainedModel.reverse_bettertransformer] method. You should use this before saving your model to use the canonical Transformers modeling:

```py
model = model.reverse_bettertransformer()
model.save_pretrained("saved_model")
```

bitsandbytes

bitsandbytes is a quantization library that includes support for 4-bit and 8-bit quantization. Quantization reduces your model size compared to its native full precision version, making it easier to fit large models onto GPUs with limited memory.

Make sure you have bitsandbytes and 🤗 Accelerate installed:

```bash
# these versions support 8-bit and 4-bit
pip install bitsandbytes>=0.39.0 accelerate>=0.20.0

# install Transformers
pip install transformers
```
4-bit

To load a model in 4-bit for inference, use the load_in_4bit parameter. The device_map parameter is optional, but we recommend setting it to "auto" to allow 🤗 Accelerate to automatically and efficiently allocate the model given the available resources in the environment.

```py
from transformers import AutoModelForCausalLM

model_name = "bigscience/bloom-2b5"
model_4bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_4bit=True)
```
To load a model in 4-bit for inference with multiple GPUs, you can control how much GPU RAM you want to allocate to each GPU. For example, to distribute 600MB of memory to the first GPU and 1GB of memory to the second GPU:

```py
max_memory_mapping = {0: "600MB", 1: "1GB"}
model_name = "bigscience/bloom-3b"
model_4bit = AutoModelForCausalLM.from_pretrained(
    model_name, device_map="auto", load_in_4bit=True, max_memory=max_memory_mapping
)
```

8-bit
If you're curious and interested in learning more about the concepts underlying 8-bit quantization, read the Gentle Introduction to 8-bit Matrix Multiplication for transformers at scale using Hugging Face Transformers, Accelerate and bitsandbytes blog post.
To load a model in 8-bit for inference, use the load_in_8bit parameter. The device_map parameter is optional, but we recommend setting it to "auto" to allow 🤗 Accelerate to automatically and efficiently allocate the model given the available resources in the environment:

```py
from transformers import AutoModelForCausalLM

model_name = "bigscience/bloom-2b5"
model_8bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True)
```
If you're loading a model in 8-bit for text generation, you should use the [~transformers.GenerationMixin.generate] method instead of the [Pipeline] function which is not optimized for 8-bit models and will be slower. Some sampling strategies, like nucleus sampling, are also not supported by the [Pipeline] for 8-bit models. You should also place all inputs on the same device as the model:
```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-2b5"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model_8bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True)

prompt = "Hello, my llama is cute"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
generated_ids = model_8bit.generate(**inputs)
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
```
To load a model in 8-bit for inference with multiple GPUs, you can control how much GPU RAM you want to allocate to each GPU. For example, to distribute 1GB of memory to the first GPU and 2GB of memory to the second GPU:

```py
max_memory_mapping = {0: "1GB", 1: "2GB"}
model_name = "bigscience/bloom-3b"
model_8bit = AutoModelForCausalLM.from_pretrained(
    model_name, device_map="auto", load_in_8bit=True, max_memory=max_memory_mapping
)
```
Feel free to try running an 11 billion parameter T5 model or the 3 billion parameter BLOOM model for inference on Google Colab's free tier GPUs!

🤗 Optimum

Learn more details about using ORT with 🤗 Optimum in the Accelerated inference on NVIDIA GPUs and Accelerated inference on AMD GPUs guides. This section only provides a brief and simple example.
ONNX Runtime (ORT) is a model accelerator that supports accelerated inference on NVIDIA GPUs and on AMD GPUs that use the ROCm stack. ORT uses optimization techniques like fusing common operations into a single node and constant folding to reduce the number of computations performed and speed up inference. ORT also places the most computationally intensive operations on the GPU and the rest on the CPU to intelligently distribute the workload between the two devices.

ORT is supported by 🤗 Optimum, which can be used in 🤗 Transformers. You'll need to use an [~optimum.onnxruntime.ORTModel] for the task you're solving, and specify the provider parameter which can be set to either CUDAExecutionProvider, ROCMExecutionProvider or TensorrtExecutionProvider. If you want to load a model that was not yet exported to ONNX, you can set export=True to convert your model on-the-fly to the ONNX format:
```py
from optimum.onnxruntime import ORTModelForSequenceClassification

ort_model = ORTModelForSequenceClassification.from_pretrained(
    "distilbert/distilbert-base-uncased-finetuned-sst-2-english",
    export=True,
    provider="CUDAExecutionProvider",
)
```

Now you're free to use the model for inference:

```py
from optimum.pipelines import pipeline
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased-finetuned-sst-2-english")

pipeline = pipeline(task="text-classification", model=ort_model, tokenizer=tokenizer, device="cuda:0")
result = pipeline("Both the music and visual were astounding, not to mention the actors performance.")
```
Combine optimizations

It is often possible to combine several of the optimization techniques described above to get the best inference performance possible for your model. For example, you can load a model in 4-bit, and then enable BetterTransformer with FlashAttention:
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# load model in 4-bit
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16
)

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", quantization_config=quantization_config)

# enable BetterTransformer
model = model.to_bettertransformer()

input_text = "Hello my dog is cute and"
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")

# enable FlashAttention
with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False):
    outputs = model.generate(**inputs)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Efficient Training on Multiple CPUs

When training on a single CPU is too slow, we can use multiple CPUs. This guide focuses on PyTorch-based DDP enabling distributed CPU training efficiently on bare metal and Kubernetes.

Intel® oneCCL Bindings for PyTorch

Intel® oneCCL (collective communications library) is a library for efficient distributed deep learning training implementing collectives like allreduce, allgather, and alltoall. For more information on oneCCL, please refer to the oneCCL documentation and oneCCL specification.

The module oneccl_bindings_for_pytorch (torch_ccl before version 1.12) implements the PyTorch C10D ProcessGroup API and can be dynamically loaded as an external ProcessGroup. It only works on the Linux platform for now. Check more detailed information for oneccl_bind_pt.

Intel® oneCCL Bindings for PyTorch installation

Wheel files are available for the following Python versions:

| Extension Version | Python 3.6 | Python 3.7 | Python 3.8 | Python 3.9 | Python 3.10 |
| :---------------: | :--------: | :--------: | :--------: | :--------: | :---------: |
| 2.1.0             |            | √          | √          | √          | √           |
| 2.0.0             |            | √          | √          | √          | √           |
| 1.13.0            |            | √          | √          | √          | √           |
| 1.12.100          |            | √          | √          | √          | √           |
| 1.12.0             |            | √          | √          | √          | √           |

Please run `pip list | grep torch` to get your pytorch_version.
```bash
pip install oneccl_bind_pt=={pytorch_version} -f https://developer.intel.com/ipex-whl-stable-cpu
```

where {pytorch_version} should be your PyTorch version, for instance 2.1.0. Check more approaches for oneccl_bind_pt installation. Versions of oneCCL and PyTorch must match: the oneccl_bindings_for_pytorch 1.12.0 prebuilt wheel does not work with PyTorch 1.12.1 (it is for PyTorch 1.12.0), while PyTorch 1.12.1 should work with oneccl_bindings_for_pytorch 1.12.100.
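If you prefer to check the installed PyTorch version from Python rather than with pip list, here is a minimal sketch (it only assumes PyTorch is already installed in your environment):

```py
# print the installed PyTorch version to pick a matching oneccl_bind_pt wheel
import torch

print(torch.__version__)  # e.g. "2.1.0" -> install oneccl_bind_pt==2.1.0
```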
Intel® MPI library

Use this standards-based MPI implementation to deliver flexible, efficient, scalable cluster messaging on Intel® architecture. This component is part of the Intel® oneAPI HPC Toolkit.

oneccl_bindings_for_pytorch is installed along with the MPI tool set. You need to source the environment before using it.

For Intel® oneCCL >= 1.12.0:
```bash
oneccl_bindings_for_pytorch_path=$(python -c "from oneccl_bindings_for_pytorch import cwd; print(cwd)")
source $oneccl_bindings_for_pytorch_path/env/setvars.sh
```

For Intel® oneCCL whose version < 1.12.0:
```bash
torch_ccl_path=$(python -c "import torch; import torch_ccl; import os; print(os.path.abspath(os.path.dirname(torch_ccl.__file__)))")
source $torch_ccl_path/env/setvars.sh
```

Intel® Extension for PyTorch installation

Intel Extension for PyTorch (IPEX) provides performance optimizations for CPU training with both Float32 and BFloat16 (refer to the single CPU section to learn more).

The following "Usage in Trainer" takes mpirun in the Intel® MPI library as an example.

Usage in Trainer

To enable multi CPU distributed training in the Trainer with the ccl backend, users should add --ddp_backend ccl in the command arguments.

Let's see an example with the question-answering example.

The following command enables training with 2 processes on one Xeon node, with one process running per socket. The variables OMP_NUM_THREADS/CCL_WORKER_COUNT can be tuned for optimal performance.

```bash
export CCL_WORKER_COUNT=1
export MASTER_ADDR=127.0.0.1
mpirun -n 2 -genv OMP_NUM_THREADS=23 \
 python3 run_qa.py \
 --model_name_or_path google-bert/bert-large-uncased \
 --dataset_name squad \
 --do_train \
 --do_eval \
 --per_device_train_batch_size 12 \
 --learning_rate 3e-5 \
 --num_train_epochs 2 \
 --max_seq_length 384 \
 --doc_stride 128 \
 --output_dir /tmp/debug_squad/ \
 --no_cuda \
 --ddp_backend ccl \
 --use_ipex
```

The following command enables training with a total of four processes on two Xeons (node0 and node1, taking node0 as the main process), ppn (processes per node) is set to 2, with one process running per socket. The variables OMP_NUM_THREADS/CCL_WORKER_COUNT can be tuned for optimal performance.

In node0, you need to create a configuration file which contains the IP addresses of each node (for example hostfile) and pass that configuration file path as an argument.

```bash
 cat hostfile
 xxx.xxx.xxx.xxx #node0 ip
 xxx.xxx.xxx.xxx #node1 ip
```

Now, run the following command in node0 and 4DDP will be enabled in node0 and node1 with BF16 auto mixed precision:

```bash
export CCL_WORKER_COUNT=1
export MASTER_ADDR=xxx.xxx.xxx.xxx #node0 ip
mpirun -f hostfile -n 4 -ppn 2 \
 -genv OMP_NUM_THREADS=23 \
 python3 run_qa.py \
 --model_name_or_path google-bert/bert-large-uncased \
 --dataset_name squad \
 --do_train \
 --do_eval \
 --per_device_train_batch_size 12 \
 --learning_rate 3e-5 \
 --num_train_epochs 2 \
 --max_seq_length 384 \
 --doc_stride 128 \
 --output_dir /tmp/debug_squad/ \
 --no_cuda \
 --ddp_backend ccl \
 --use_ipex \
 --bf16
```

Usage with Kubernetes

The same distributed training job from the previous section can be deployed to a Kubernetes cluster using the Kubeflow PyTorchJob training operator.

Setup

This example assumes that you have:
* Access to a Kubernetes cluster with Kubeflow installed
* kubectl installed and configured to access the Kubernetes cluster
* A Persistent Volume Claim (PVC) that can be used to store datasets and model files. There are multiple options for setting up the PVC, including using an NFS storage class or a cloud storage bucket.
* A Docker container that includes your model training script and all the dependencies needed to run the script. For distributed CPU training jobs, this typically includes PyTorch, Transformers, Intel Extension for PyTorch, Intel oneCCL Bindings for PyTorch, and OpenSSH to communicate between the containers.
The snippet below is an example of a Dockerfile that uses a base image that supports distributed CPU training and then extracts a Transformers release to the /workspace directory, so that the example scripts are included in the image:

```dockerfile
FROM intel/ai-workflows:torch-2.0.1-huggingface-multinode-py3.9

WORKDIR /workspace

# Download and extract the transformers code
ARG HF_TRANSFORMERS_VER="4.35.2"
RUN mkdir transformers && \
    curl -sSL --retry 5 https://github.com/huggingface/transformers/archive/refs/tags/v${HF_TRANSFORMERS_VER}.tar.gz | tar -C transformers --strip-components=1 -xzf -
```
The image needs to be built and copied to the cluster's nodes or pushed to a container registry prior to deploying the PyTorchJob to the cluster.

PyTorchJob Specification File

The Kubeflow PyTorchJob is used to run the distributed training job on the cluster. The yaml file for the PyTorchJob defines parameters such as:
* The name of the PyTorchJob
* The number of replicas (workers)
* The python script and its parameters that will be used to run the training job
* The types of resources (node selector, memory, and CPU) needed for each worker
* The image/tag for the Docker container to use
* Environment variables
* A volume mount for the PVC

The volume mount defines a path where the PVC will be mounted in the container for each worker pod. This location can be used for the dataset, checkpoint files, and the saved model after training completes.

The snippet below is an example of a yaml file for a PyTorchJob with 4 workers running the question-answering example.

```yaml
apiVersion: "kubeflow.org/v1"
kind: PyTorchJob
metadata:
  name: transformers-pytorchjob
  namespace: kubeflow
spec:
  elasticPolicy:
    rdzvBackend: c10d
    minReplicas: 1
    maxReplicas: 4
    maxRestarts: 10
  pytorchReplicaSpecs:
    Worker:
      replicas: 4  # The number of worker pods
      restartPolicy: OnFailure
      template:
        spec:
          containers:
            - name: pytorch
              image: <image name>:<tag>  # Specify the docker image to use for the worker pods
              imagePullPolicy: IfNotPresent
              command:
                - torchrun
                - /workspace/transformers/examples/pytorch/question-answering/run_qa.py
                - --model_name_or_path
                - "google-bert/bert-large-uncased"
                - --dataset_name
                - "squad"
                - --do_train
                - --do_eval
                - --per_device_train_batch_size
                - "12"
                - --learning_rate
                - "3e-5"
                - --num_train_epochs
                - "2"
                - --max_seq_length
                - "384"
                - --doc_stride
                - "128"
                - --output_dir
                - "/tmp/pvc-mount/output"
                - --no_cuda
                - --ddp_backend
                - "ccl"
                - --use_ipex
                - --bf16  # Specify --bf16 if your hardware supports bfloat16
              env:
                - name: LD_PRELOAD
                  value: "/usr/lib/x86_64-linux-gnu/libtcmalloc.so.4.5.9:/usr/local/lib/libiomp5.so"
                - name: TRANSFORMERS_CACHE
                  value: "/tmp/pvc-mount/transformers_cache"
                - name: HF_DATASETS_CACHE
                  value: "/tmp/pvc-mount/hf_datasets_cache"
                - name: LOGLEVEL
                  value: "INFO"
                - name: CCL_WORKER_COUNT
                  value: "1"
                - name: OMP_NUM_THREADS  # Can be tuned for optimal performance
              resources:
                limits:
                  cpu: 200  # Update the CPU and memory limit values based on your nodes
                  memory: 128Gi
                requests:
                  cpu: 200  # Update the CPU and memory request values based on your nodes
                  memory: 128Gi
              volumeMounts:
                - name: pvc-volume
                  mountPath: /tmp/pvc-mount
                - mountPath: /dev/shm
                  name: dshm
          restartPolicy: Never
          nodeSelector:  # Optionally use the node selector to specify what types of nodes to use for the workers
            node-type: spr
          volumes:
            - name: pvc-volume
              persistentVolumeClaim:
                claimName: transformers-pvc
            - name: dshm
              emptyDir:
                medium: Memory
```

To run this example, update the yaml based on your training script and the nodes in your cluster.
The CPU resource limits/requests in the yaml are defined in cpu units where 1 CPU unit is equivalent to 1 physical CPU core or 1 virtual core (depending on whether the node is a physical host or a VM). The amount of CPU and memory limits/requests defined in the yaml should be less than the amount of available CPU/memory capacity on a single machine. It is usually a good idea to not use the entire machine's capacity in order to leave some resources for the kubelet and OS. In order to get "guaranteed" quality of service for the worker pods, set the same CPU and memory amounts for both the resource limits and requests.
Deploy

After the PyTorchJob spec has been updated with values appropriate for your cluster and training job, it can be deployed to the cluster using:
```bash
kubectl create -f pytorchjob.yaml
```

The kubectl get pods -n kubeflow command can then be used to list the pods in the kubeflow namespace. You should see the worker pods for the PyTorchJob that was just deployed. At first, they will probably have a status of "Pending" as the containers get pulled and created, then the status should change to "Running".

```bash
NAME                                 READY   STATUS    RESTARTS   AGE
transformers-pytorchjob-worker-0     1/1     Running   0          7m37s
transformers-pytorchjob-worker-1     1/1     Running   0          7m37s
transformers-pytorchjob-worker-2     1/1     Running   0          7m37s
transformers-pytorchjob-worker-3     1/1     Running   0          7m37s
```
The logs for a worker can be viewed using kubectl logs -n kubeflow <pod name>. Add -f to stream the logs, for example:

```bash
kubectl logs -n kubeflow transformers-pytorchjob-worker-0 -f
```

After the training job completes, the trained model can be copied from the PVC or storage location. When you are done with the job, the PyTorchJob resource can be deleted from the cluster using kubectl delete -f pytorchjob.yaml.

Summary

This guide covered running distributed PyTorch training jobs using multiple CPUs on bare metal and on a Kubernetes cluster. Both cases utilize Intel Extension for PyTorch and Intel oneCCL Bindings for PyTorch for optimal training performance, and can be used as a template to run your own workload on multiple nodes.
Custom Tools and Prompts

If you are not aware of what tools and agents are in the context of transformers, we recommend you read the Transformers Agents page first.

Transformers Agents is an experimental API that is subject to change at any time. Results returned by the agents can vary as the APIs or underlying models are prone to change.

Creating and using custom tools and prompts is paramount to empowering the agent and having it perform new tasks. In this guide we'll take a look at:

- How to customize the prompt
- How to use custom tools
- How to create custom tools
Customizing the prompt

As explained in Transformers Agents, agents can run in [~Agent.run] and [~Agent.chat] mode. Both the run and chat modes use the same underlying logic: the language model powering the agent is conditioned on a long prompt and completes the prompt by generating the next tokens until the stop token is reached. The only difference between the two modes is that during the chat mode the prompt is extended with previous user inputs and model generations. This allows the agent to have access to past interactions, seemingly giving the agent some kind of memory.

Structure of the prompt

Let's take a closer look at how the prompt is structured to understand how it can be best customized. The prompt is structured broadly into four parts.
1. Introduction: how the agent should behave, explanation of the concept of tools.
2. Description of all the tools. This is defined by a <<all_tools>> token that is dynamically replaced at runtime with the tools defined/chosen by the user.
3. A set of examples of tasks and their solution.
4. Current example, and request for solution.
To better understand each part, let's look at a shortened version of how the run prompt can look like:

````text
I will ask you to perform a task, your job is to come up with a series of simple commands in Python that will perform the task.
[...]
You can print intermediate results if it makes sense to do so.

Tools:
- document_qa: This is a tool that answers a question about a document (pdf). It takes an input named `document` which should be the document containing the information, as well as a `question` that is the question about the document. It returns a text that contains the answer to the question.
- image_captioner: This is a tool that generates a description of an image. It takes an input named `image` which should be the image to caption and returns a text that contains the description in English.
[...]

Task: "Answer the question in the variable `question` about the image stored in the variable `image`. The question is in French."

I will use the following tools: translator to translate the question into English and then image_qa to answer the question on the input image.

Answer:
```py
translated_question = translator(question=question, src_lang="French", tgt_lang="English")
print(f"The translated question is {translated_question}.")
answer = image_qa(image=image, question=translated_question)
print(f"The answer is {answer}")
```

Task: "Identify the oldest person in the `document` and create an image showcasing the result as a banner."

I will use the following tools: document_qa to find the oldest person in the document, then image_generator to generate an image according to the answer.

Answer:
```py
answer = document_qa(document, question="What is the oldest person?")
print(f"The answer is {answer}.")
image = image_generator("A banner showing " + answer)
```

[...]

Task: "Draw me a picture of rivers and lakes"

I will use the following
````

The introduction (the text before "Tools:") explains precisely how the model shall behave and what it should do. This part most likely does not need to be customized as the agent shall always behave the same way.

The second part (the bullet points below "Tools") is dynamically added upon calling run or chat. There are exactly as many bullet points as there are tools in agent.toolbox and each bullet point consists of the name and description of the tool:

```text
- <tool.name>: <tool.description>
```

Let's verify this quickly by loading the document_qa tool and printing out the name and description.
```py
from transformers import load_tool

document_qa = load_tool("document-question-answering")
print(f"- {document_qa.name}: {document_qa.description}")
```
which gives:

```text
- document_qa: This is a tool that answers a question about a document (pdf). It takes an input named `document` which should be the document containing the information, as well as a `question` that is the question about the document. It returns a text that contains the answer to the question.
```

We can see that the tool name is short and precise. The description includes two parts: the first explains what the tool does and the second states what input arguments and return values are expected.

A good tool name and tool description are very important for the agent to correctly use it. Note that the only information the agent has about the tool is its name and description, so one should make sure that both are precisely written and match the style of the existing tools in the toolbox. In particular, make sure the description mentions all the arguments expected by name in code-style, along with the expected type and a description of what they are.
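For illustration, here is a hypothetical description string written in that style (this is not one of the curated tools, just a sketch of the expected phrasing):

```py
# a hypothetical description for an imaginary image-rotation tool
description = (
    "This is a tool that rotates an image. It takes an input named `image` which should be the image to rotate, "
    "and an input named `angle` which should be the rotation angle in degrees. It returns the rotated image."
)
```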
Check the naming and description of the curated Transformers tools to better understand what name and description a tool is expected to have. You can see all tools with the [Agent.toolbox] property.
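If it helps, here is a minimal sketch (assuming you have already instantiated an agent, e.g. an HfAgent) that prints every tool name and description the agent will see in its prompt:

```py
# assumes `agent` is an already instantiated agent, e.g. HfAgent(...)
for name, tool in agent.toolbox.items():
    print(f"- {name}: {tool.description}")
```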
The third part includes a set of curated examples that show the agent exactly what code it should produce for what kind of user request. The large language models empowering the agent are extremely good at recognizing patterns in a prompt and repeating the pattern with new data. Therefore, it is very important that the examples are written in a way that maximizes the likelihood of the agent generating correct, executable code in practice.

Let's have a look at one example:

````text
Task: "Identify the oldest person in the `document` and create an image showcasing the result as a banner."

I will use the following tools: document_qa to find the oldest person in the document, then image_generator to generate an image according to the answer.

Answer:
```py
answer = document_qa(document, question="What is the oldest person?")
print(f"The answer is {answer}.")
image = image_generator("A banner showing " + answer)
```
````

The pattern the model is prompted to repeat has three parts: the task statement, the agent's explanation of what it intends to do, and finally the generated code. Every example that is part of the prompt has this exact pattern, thus making sure that the agent will reproduce exactly the same pattern when generating new tokens.

The prompt examples are curated by the Transformers team and rigorously evaluated on a set of problem statements to ensure that the agent's prompt is as good as possible to solve real use cases of the agent.

The final part of the prompt corresponds to:

```text
Task: "Draw me a picture of rivers and lakes"

I will use the following
```
is a final and unfinished example that the agent is tasked to complete. The unfinished example is dynamically created based on the actual user input. For the above example, the user ran:

```py
agent.run("Draw me a picture of rivers and lakes")
```

The user input - a.k.a the task: "Draw me a picture of rivers and lakes" - is cast into the prompt template: "Task: <task> \n\n I will use the following". This sentence makes up the final lines of the prompt the agent is conditioned on, therefore strongly influencing the agent to finish the example exactly in the same way it was previously done in the examples.

Without going into too much detail, the chat template has the same prompt structure with the examples having a slightly different style, e.g.:

````text
[...]
=====
Human: Answer the question in the variable `question` about the image stored in the variable `image`.

Assistant: I will use the tool image_qa to answer the question on the input image.

```py
answer = image_qa(text=question, image=image)
print(f"The answer is {answer}")
```

Human: I tried this code, it worked but didn't give me a good result. The question is in French

Assistant: In this case, the question needs to be translated first. I will use the tool translator to do this.

```py
translated_question = translator(question=question, src_lang="French", tgt_lang="English")
print(f"The translated question is {translated_question}.")
answer = image_qa(text=translated_question, image=image)
print(f"The answer is {answer}")
```
=====
[...]
````

Contrary to the examples of the run prompt, each chat prompt example has one or more exchanges between the Human and the Assistant. Every exchange is structured similarly to the example of the run prompt. The user's input is appended behind Human: and the agent is prompted to first generate what needs to be done before generating code. An exchange can be based on previous exchanges, therefore allowing the user to refer to past exchanges, as is done above where the user's input of "I tried this code" refers to the previously generated code of the agent.

Upon running .chat, the user's input or task is cast into an unfinished example of the form:

```text
Human: <user-input>\n\nAssistant:
```

which the agent completes. Contrary to the run command, the chat command then appends the completed example to the prompt, thus giving the agent more context for the next chat turn.

Great, now that we know how the prompt is structured, let's see how we can customize it!

Writing good user inputs

While large language models are getting better and better at understanding users' intentions, it helps enormously to be as precise as possible to help the agent pick the correct task. What does it mean to be as precise as possible?

The agent sees a list of tool names and their description in its prompt. The more tools are added, the more difficult it becomes for the agent to choose the correct tool, and it's even more difficult to choose the correct sequence of tools to run. Let's look at a common failure case; here we will only return the code to analyze it.
```py
from transformers import HfAgent

agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")

agent.run("Show me a tree", return_code=True)
```

gives:

```text
==Explanation from the agent==
I will use the following tool: `image_segmenter` to create a segmentation mask for the image.

==Code generated by the agent==
mask = image_segmenter(image, prompt="tree")
```
which is probably not what we wanted. Instead, it is more likely that we want an image of a tree to be generated. To steer the agent more towards using a specific tool it can therefore be very helpful to use important keywords that are present in the tool's name and description. Let's have a look.

```py
agent.toolbox["image_generator"].description
```

```text
'This is a tool that creates an image according to a prompt, which is a text description. It takes an input named `prompt` which contains the image description and outputs an image.'
```

The name and description make use of the keywords "image", "prompt", "create" and "generate". Using these words will most likely work better here. Let's refine our prompt a bit.

```py
agent.run("Create an image of a tree", return_code=True)
```

gives:

```text
==Explanation from the agent==
I will use the following tool `image_generator` to generate an image of a tree.

==Code generated by the agent==
image = image_generator(prompt="tree")
```
Much better! That looks more like what we want. In short, when you notice that the agent struggles to correctly map your task to the correct tools, try looking up the most pertinent keywords of the tool's name and description and try refining your task request with them.

Customizing the tool descriptions

As we've seen before, the agent has access to each of the tools' names and descriptions. The base tools should have very precise names and descriptions; however, you might find that it could help to change the description or name of a tool for your specific use case. This might become especially important when you've added multiple tools that are very similar or if you want to use your agent only for a certain domain, e.g. image generation and transformations.

A common problem is that the agent confuses image generation with image transformation/modification when used a lot for image generation tasks, e.g.

```py
agent.run("Make an image of a house and a car", return_code=True)
```

returns

```text
==Explanation from the agent==
I will use the following tools `image_generator` to generate an image of a house and `image_transformer` to transform the image of a car into the image of a house.

==Code generated by the agent==
house_image = image_generator(prompt="A house")
car_image = image_generator(prompt="A car")
house_car_image = image_transformer(image=car_image, prompt="A house")
```
which is probably not exactly what we want here. It seems like the agent has a difficult time understanding the difference between image_generator and image_transformer and often uses the two together.

We can help the agent here by changing the tool name and description of image_transformer. Let's instead call it modifier to disassociate it a bit from "image" and "prompt":

```py
agent.toolbox["modifier"] = agent.toolbox.pop("image_transformer")
agent.toolbox["modifier"].description = agent.toolbox["modifier"].description.replace(
    "transforms an image according to a prompt", "modifies an image"
)
```

Now "modify" is a strong cue to use the new image processor which should help with the above prompt. Let's run it again.

```py
agent.run("Make an image of a house and a car", return_code=True)
```

Now we're getting:

```text
==Explanation from the agent==
I will use the following tools: `image_generator` to generate an image of a house, then `image_generator` to generate an image of a car.

==Code generated by the agent==
house_image = image_generator(prompt="A house")
car_image = image_generator(prompt="A car")
```
which is definitely closer to what we had in mind! However, we want to have both the house and car in the same image. Steering the task more toward single image generation should help:

```py
agent.run("Create image: 'A house and car'", return_code=True)
```

```text
==Explanation from the agent==
I will use the following tool: `image_generator` to generate an image.

==Code generated by the agent==
image = image_generator(prompt="A house and car")
```
Agents are still brittle for many use cases, especially when it comes to slightly more complex use cases like generating an image of multiple objects. Both the agent itself and the underlying prompt will be further improved in the coming months making sure that agents become more robust to a variety of user inputs.
Customizing the whole prompt

To give the user maximum flexibility, the whole prompt template as explained above can be overwritten by the user. In this case make sure that your custom prompt includes an introduction section, a tool section, an example section, and an unfinished example section. If you want to overwrite the run prompt template, you can do as follows:

```py
template = """ [...] """

agent = HfAgent(your_endpoint, run_prompt_template=template)
```

Please make sure to have the <<all_tools>> string and the <<prompt>> defined somewhere in the template so that the agent can be aware of the tools it has available to it as well as correctly insert the user's prompt.

Similarly, one can overwrite the chat prompt template. Note that the chat mode always uses the following format for the exchanges:

```text
Human: <>

Assistant:
```

Therefore it is important that the examples of the custom chat prompt template also make use of this format. You can overwrite the chat template at instantiation as follows.

```python
template = """ [...] """

agent = HfAgent(url_endpoint=your_endpoint, chat_prompt_template=template)
```
Please make sure to have the <<all_tools>> string defined somewhere in the template so that the agent can be aware of the tools it has available to it.
In both cases, you can pass a repo ID instead of the prompt template if you would like to use a template hosted by someone in the community. The default prompts live in this repo as an example.

To upload your custom prompt on a repo on the Hub and share it with the community just make sure:
- to use a dataset repository
- to put the prompt template for the run command in a file named run_prompt_template.txt
- to put the prompt template for the chat command in a file named chat_prompt_template.txt

Using custom tools

In this section, we'll be leveraging two existing custom tools that are specific to image generation:
- We replace huggingface-tools/image-transformation with diffusers/controlnet-canny-tool to allow for more image modifications.
- We add a new tool for image upscaling to the default toolbox: diffusers/latent-upscaler-tool.

We'll start by loading the custom tools with the convenient [load_tool] function:

```py
from transformers import load_tool

controlnet_transformer = load_tool("diffusers/controlnet-canny-tool")
upscaler = load_tool("diffusers/latent-upscaler-tool")
```
Upon adding custom tools to an agent, the tools' descriptions and names are automatically included in the agent's prompts. Thus, it is imperative that custom tools have a well-written description and name in order for the agent to understand how to use them.

Let's take a look at the description and name of controlnet_transformer:

```py
print(f"Description: '{controlnet_transformer.description}'")
print(f"Name: '{controlnet_transformer.name}'")
```

gives

```text
Description: 'This is a tool that transforms an image with ControlNet according to a prompt. It takes two inputs: `image`, which should be the image to transform, and `prompt`, which should be the prompt to use to change it. It returns the modified image.'
Name: 'image_transformer'
```

The name and description are accurate and fit the style of the curated set of tools.

Next, let's instantiate an agent with controlnet_transformer and upscaler:

```py
tools = [controlnet_transformer, upscaler]
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder", additional_tools=tools)
```

This command should give you the following info:

```text
image_transformer has been replaced by <transformers_modules.diffusers.controlnet-canny-tool.bd76182c7777eba9612fc03c08718a60c0aa6312.image_transformation.ControlNetTransformationTool object at 0x7f1d3bfa3a00> as provided in `additional_tools`
```

The set of curated tools already has an image_transformer tool which is hereby replaced with our custom tool.
Overwriting existing tools can be beneficial if we want to use a custom tool for exactly the same task as an existing tool, because the agent is well-versed in using that specific task. Beware that the custom tool should follow the exact same API as the overwritten tool in this case, or you should adapt the prompt template to make sure all examples using that tool are updated.
The upscaler tool was given the name image_upscaler which is not yet present in the default toolbox and is therefore simply added to the list of tools. You can always have a look at the toolbox that is currently available to the agent via the agent.toolbox attribute:

```py
print("\n".join([f"- {a}" for a in agent.toolbox.keys()]))
```

```text
- document_qa
- image_captioner
- image_qa
- image_segmenter
- transcriber
- summarizer
- text_classifier
- text_qa
- text_reader
- translator
- image_transformer
- text_downloader
- image_generator
- video_generator
- image_upscaler
```

Note how image_upscaler is now part of the agent's toolbox.

Let's now try out the new tools! We will re-use the image we generated in Transformers Agents Quickstart.
```py
from diffusers.utils import load_image

image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes.png"
)
```
Let's transform the image into a beautiful winter landscape:

```py
image = agent.run("Transform the image: 'A frozen lake and snowy forest'", image=image)
```

```text
==Explanation from the agent==
I will use the following tool: `image_transformer` to transform the image.

==Code generated by the agent==
image = image_transformer(image, prompt="A frozen lake and snowy forest")
```
The new image processing tool is based on ControlNet which can make very strong modifications to the image. By default the image processing tool returns an image of size 512x512 pixels. Let's see if we can upscale it.

```py
image = agent.run("Upscale the image", image=image)
```

```text
==Explanation from the agent==
I will use the following tool: `image_upscaler` to upscale the image.

==Code generated by the agent==
upscaled_image = image_upscaler(image)
```
The agent automatically mapped our prompt "Upscale the image" to the just added upscaler tool purely based on the description and name of the upscaler tool and was able to correctly run it.

Next, let's have a look at how you can create a new custom tool.

Adding new tools

In this section, we show how to create a new tool that can be added to the agent.

Creating a new tool

We'll first start by creating a tool. We'll add the not-so-useful yet fun task of fetching the model on the Hugging Face Hub with the most downloads for a given task.

We can do that with the following code:

```python
from huggingface_hub import list_models

task = "text-classification"

model = next(iter(list_models(filter=task, sort="downloads", direction=-1)))
print(model.id)
```
For the task text-classification, this returns 'facebook/bart-large-mnli', for translation it returns 'google-t5/t5-base'.

How do we convert this to a tool that the agent can leverage? All tools depend on the superclass Tool that holds the main attributes necessary. We'll create a class that inherits from it:

```python
from transformers import Tool

class HFModelDownloadsTool(Tool):
    pass
```
This class has a few needs:
- An attribute name, which corresponds to the name of the tool itself. To be in tune with other tools which have a performative name, we'll name it model_download_counter.
- An attribute description, which will be used to populate the prompt of the agent.
- inputs and outputs attributes. Defining these will help the python interpreter make educated choices about types, and will allow for a gradio-demo to be spawned when we push our tool to the Hub. They're both a list of expected values, which can be text, image, or audio.
- A __call__ method which contains the inference code. This is the code we've played with above!

Here's what our class looks like now:

```python
from transformers import Tool
from huggingface_hub import list_models

class HFModelDownloadsTool(Tool):
    name = "model_download_counter"
    description = (
        "This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub. "
        "It takes the name of the category (such as text-classification, depth-estimation, etc), and "
        "returns the name of the checkpoint."
    )

    inputs = ["text"]
    outputs = ["text"]

    def __call__(self, task: str):
        model = next(iter(list_models(filter=task, sort="downloads", direction=-1)))
        return model.id
```

We now have our tool handy. Save it in a file and import it from your main script. Let's name this file model_downloads.py, so the resulting import code looks like this:

```python
from model_downloads import HFModelDownloadsTool

tool = HFModelDownloadsTool()
```
In order to let others benefit from it and for simpler initialization, we recommend pushing it to the Hub under your namespace. To do so, just call push_to_hub on the tool variable:

```python
tool.push_to_hub("hf-model-downloads")
```

You now have your code on the Hub! Let's take a look at the final step, which is to have the agent use it.

Having the agent use the tool

We now have our tool that lives on the Hub which can be instantiated as such (change the user name for your tool):

```python
from transformers import load_tool

tool = load_tool("lysandre/hf-model-downloads")
```
In order to use it in the agent, simply pass it in the additional_tools parameter of the agent initialization method:

```python
from transformers import HfAgent

agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder", additional_tools=[tool])
agent.run(
    "Can you read out loud the name of the model that has the most downloads in the 'text-to-video' task on the Hugging Face Hub?"
)
```

which outputs the following:

```text
==Code generated by the agent==
model = model_download_counter(task="text-to-video")
print(f"The model with the most downloads is {model}.")
audio_model = text_reader(model)

==Result==
The model with the most downloads is damo-vilab/text-to-video-ms-1.7b.
```
and generates the following audio.
Depending on the LLM, some are quite brittle and require very exact prompts in order to work well. Having a well-defined name and description of the tool is paramount to having it be leveraged by the agent.
Replacing existing tools

Replacing existing tools can be done simply by assigning a new item to the agent's toolbox. Here's how one would do so:

```python
from transformers import HfAgent, load_tool

agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")
agent.toolbox["image-transformation"] = load_tool("diffusers/controlnet-canny-tool")
```
Beware when replacing tools with others! This will also adjust the agent's prompt. This can be good if you have a better prompt suited for the task, but it can also result in your tool being selected way more than others or for other tools to be selected instead of the one you have defined.
Leveraging gradio-tools

gradio-tools is a powerful library that allows using Hugging Face Spaces as tools. It supports many existing Spaces as well as custom Spaces to be designed with it.

We offer support for gradio_tools by using the Tool.from_gradio method. For example, we want to take advantage of the StableDiffusionPromptGeneratorTool tool offered in the gradio-tools toolkit so as to improve our prompts and generate better images.

We first import the tool from gradio_tools and instantiate it:

```python
from gradio_tools import StableDiffusionPromptGeneratorTool

gradio_tool = StableDiffusionPromptGeneratorTool()
```
We pass that instance to the Tool.from_gradio method:

```python
from transformers import Tool

tool = Tool.from_gradio(gradio_tool)
```

Now we can manage it exactly as we would a usual custom tool. We leverage it to improve our prompt "a rabbit wearing a space suit":

```python
from transformers import HfAgent

agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder", additional_tools=[tool])

agent.run("Generate an image of the prompt after improving it.", prompt="A rabbit wearing a space suit")
```
The model adequately leverages the tool:

```text
==Explanation from the agent==
I will use the following tools: `StableDiffusionPromptGenerator` to improve the prompt, then `image_generator` to generate an image according to the improved prompt.

==Code generated by the agent==
improved_prompt = StableDiffusionPromptGenerator(prompt)
print(f"The improved prompt is {improved_prompt}.")
image = image_generator(improved_prompt)
```

Before finally generating the image:

gradio-tools requires textual inputs and outputs, even when working with different modalities. This implementation works with image and audio objects. The two are currently incompatible, but will rapidly become compatible as we work to improve the support.
Future compatibility with Langchain We love Langchain and think it has a very compelling suite of tools. In order to handle these tools, Langchain requires textual inputs and outputs, even when working with different modalities. This is often the serialized version (i.e., saved to disk) of the objects. This difference means that multi-modality isn't handled between transformers-agents and langchain. We aim for this limitation to be resolved in future versions, and welcome any help from avid langchain users to help us achieve this compatibility. We would love to have better support. If you would like to help, please open an issue and share what you have in mind.
Padding and truncation

Batched inputs are often different lengths, so they can't be converted to fixed-size tensors. Padding and truncation are strategies for dealing with this problem, to create rectangular tensors from batches of varying lengths. Padding adds a special padding token to ensure shorter sequences will have the same length as either the longest sequence in a batch or the maximum length accepted by the model. Truncation works in the other direction by truncating long sequences.

In most cases, padding your batch to the length of the longest sequence and truncating to the maximum length a model can accept works pretty well. However, the API supports more strategies if you need them. The three arguments you need to know are: padding, truncation and max_length.

The padding argument controls padding. It can be a boolean or a string:
- True or 'longest': pad to the longest sequence in the batch (no padding is applied if you only provide a single sequence).
- 'max_length': pad to a length specified by the max_length argument or the maximum length accepted by the model if no max_length is provided (max_length=None). Padding will still be applied if you only provide a single sequence.
- False or 'do_not_pad': no padding is applied. This is the default behavior.
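For instance, here is a minimal sketch of the first option (padding=True), using a hypothetical two-sentence batch and bert-base-uncased as an assumed checkpoint:

```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch_sentences = ["A short sentence.", "A noticeably longer sentence that sets the padding target."]

# pad every sequence to the longest sequence in the batch
encoded = tokenizer(batch_sentences, padding=True)
print([len(ids) for ids in encoded["input_ids"]])  # both entries now have the same length
```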
The truncation argument controls truncation. It can be a boolean or a string:
- True or 'longest_first': truncate to a maximum length specified by the max_length argument or the maximum length accepted by the model if no max_length is provided (max_length=None). This will truncate token by token, removing a token from the longest sequence in the pair until the proper length is reached.
- 'only_second': truncate to a maximum length specified by the max_length argument or the maximum length accepted by the model if no max_length is provided (max_length=None). This will only truncate the second sentence of a pair if a pair of sequences (or a batch of pairs of sequences) is provided.
- 'only_first': truncate to a maximum length specified by the max_length argument or the maximum length accepted by the model if no max_length is provided (max_length=None). This will only truncate the first sentence of a pair if a pair of sequences (or a batch of pairs of sequences) is provided.
- False or 'do_not_truncate': no truncation is applied. This is the default behavior.
The max_length argument controls the length of the padding and truncation. It can be an integer or None, in which case it will default to the maximum length the model can accept. If the model has no specific maximum input length, truncation or padding to max_length is deactivated.

The following table summarizes the recommended way to setup padding and truncation. If you use pairs of input sequences in any of the following examples, you can replace truncation=True by a STRATEGY selected in ['only_first', 'only_second', 'longest_first'], i.e. truncation='only_second' or truncation='longest_first' to control how both sequences in the pair are truncated as detailed before.

| Truncation                           | Padding                           | Instruction                                                                          |
|--------------------------------------|-----------------------------------|--------------------------------------------------------------------------------------|
| no truncation                        | no padding                        | tokenizer(batch_sentences)                                                            |
|                                      | padding to max sequence in batch  | tokenizer(batch_sentences, padding=True) or                                           |
|                                      |                                   | tokenizer(batch_sentences, padding='longest')                                         |
|                                      | padding to max model input length | tokenizer(batch_sentences, padding='max_length')                                      |
|                                      | padding to specific length        | tokenizer(batch_sentences, padding='max_length', max_length=42)                       |
|                                      | padding to a multiple of a value  | tokenizer(batch_sentences, padding=True, pad_to_multiple_of=8)                        |
| truncation to max model input length | no padding                        | tokenizer(batch_sentences, truncation=True) or                                        |
|                                      |                                   | tokenizer(batch_sentences, truncation=STRATEGY)                                       |
|                                      | padding to max sequence in batch  | tokenizer(batch_sentences, padding=True, truncation=True) or                          |
|                                      |                                   | tokenizer(batch_sentences, padding=True, truncation=STRATEGY)                         |
|                                      | padding to max model input length | tokenizer(batch_sentences, padding='max_length', truncation=True) or                  |
|                                      |                                   | tokenizer(batch_sentences, padding='max_length', truncation=STRATEGY)                 |
|                                      | padding to specific length        | Not possible                                                                          |
| truncation to specific length        | no padding                        | tokenizer(batch_sentences, truncation=True, max_length=42) or                         |
|                                      |                                   | tokenizer(batch_sentences, truncation=STRATEGY, max_length=42)                        |
|                                      | padding to max sequence in batch  | tokenizer(batch_sentences, padding=True, truncation=True, max_length=42) or           |
|                                      |                                   | tokenizer(batch_sentences, padding=True, truncation=STRATEGY, max_length=42)          |
|                                      | padding to max model input length | Not possible                                                                          |
|                                      | padding to specific length        | tokenizer(batch_sentences, padding='max_length', truncation=True, max_length=42) or   |
|                                      |                                   | tokenizer(batch_sentences, padding='max_length', truncation=STRATEGY, max_length=42)  |
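As a concrete sketch of one row of the table (padding to a specific length combined with truncation), again assuming bert-base-uncased as the checkpoint and using max_length=16 purely as an illustration value:

```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch_sentences = [
    "A short sentence.",
    "A much longer sentence that will be truncated because it exceeds the chosen maximum length for this batch.",
]

# pad everything to max_length and truncate anything longer
encoded = tokenizer(batch_sentences, padding="max_length", truncation=True, max_length=16)
print([len(ids) for ids in encoded["input_ids"]])  # every entry has length 16
```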
Distributed training with 🤗 Accelerate

As models get bigger, parallelism has emerged as a strategy for training larger models on limited hardware and accelerating training speed by several orders of magnitude. At Hugging Face, we created the 🤗 Accelerate library to help users easily train a 🤗 Transformers model on any type of distributed setup, whether it is multiple GPUs on one machine or multiple GPUs across several machines. In this tutorial, learn how to customize your native PyTorch training loop to enable training in a distributed environment.

Setup

Get started by installing 🤗 Accelerate:
```bash
pip install accelerate
```

Then import and create an [~accelerate.Accelerator] object. The [~accelerate.Accelerator] will automatically detect your type of distributed setup and initialize all the necessary components for training. You don't need to explicitly place your model on a device.

```py
from accelerate import Accelerator

accelerator = Accelerator()
```

Prepare to accelerate

The next step is to pass all the relevant training objects to the [~accelerate.Accelerator.prepare] method. This includes your training and evaluation DataLoaders, a model and an optimizer:

```py
train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare(
    train_dataloader, eval_dataloader, model, optimizer
)
```
Backward

The last addition is to replace the typical loss.backward() in your training loop with 🤗 Accelerate's [~accelerate.Accelerator.backward] method:

```py
for epoch in range(num_epochs):
    for batch in train_dataloader:
        outputs = model(**batch)
        loss = outputs.loss
        accelerator.backward(loss)

        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()
        progress_bar.update(1)
```

As you can see in the following code, you only need to add four additional lines of code to your training loop to enable distributed training!
```diff
+ from accelerate import Accelerator
  from transformers import AdamW, AutoModelForSequenceClassification, get_scheduler

+ accelerator = Accelerator()

  model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
  optimizer = AdamW(model.parameters(), lr=3e-5)

- device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
- model.to(device)

+ train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare(
+     train_dataloader, eval_dataloader, model, optimizer
+ )

  num_epochs = 3
  num_training_steps = num_epochs * len(train_dataloader)
  lr_scheduler = get_scheduler(
      "linear",
      optimizer=optimizer,
      num_warmup_steps=0,
      num_training_steps=num_training_steps
  )

  progress_bar = tqdm(range(num_training_steps))

  model.train()
  for epoch in range(num_epochs):
      for batch in train_dataloader:
          outputs = model(**batch)
          loss = outputs.loss
-         loss.backward()
+         accelerator.backward(loss)

          optimizer.step()
          lr_scheduler.step()
          optimizer.zero_grad()
          progress_bar.update(1)
```

Train

Once you've added the relevant lines of code, launch your training in a script or a notebook like Colaboratory.

Train with a script

If you are running your training from a script, run the following command to create and save a configuration file:
```bash
accelerate config
```

Then launch your training with:

```bash
accelerate launch train.py
```

Train with a notebook
🤗 Accelerate can also run in a notebook if you're planning on using Colaboratory's TPUs. Wrap all the code responsible for training in a function, and pass it to [~accelerate.notebook_launcher]:

```py
from accelerate import notebook_launcher

notebook_launcher(training_function)
```

For more information about 🤗 Accelerate and its rich features, refer to the documentation.
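If it helps to see the notebook path end to end, here is a minimal sketch of what such a training_function might look like; the function body and the num_processes value are illustrative assumptions, not part of the original tutorial:

```py
from accelerate import Accelerator, notebook_launcher

def training_function():
    # build the Accelerator inside the function so each spawned process gets its own copy
    accelerator = Accelerator()
    # create your model, optimizer and dataloaders here, then call accelerator.prepare(...)
    # and run the training loop shown above, using accelerator.backward(loss)

# num_processes is an assumption: set it to the number of TPU cores or GPUs you want to use
notebook_launcher(training_function, num_processes=8)
```

Everything the function needs (model construction, data loading, the training loop) has to happen inside it, because each launched process executes the function independently.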
Community
This page gathers resources around 🤗 Transformers developed by the community.

Community resources:

| Resource | Description | Author |
|:----------|:-------------|------:|
| Hugging Face Transformers Glossary Flashcards | A set of flashcards based on the Transformers Docs Glossary that has been put into a form which can be easily learned/revised using Anki, an open-source, cross-platform app specifically designed for long-term knowledge retention. See this Introductory video on how to use the flashcards. | Darigov Research |

Community notebooks:

| Notebook | Description | Author |
|:----------|:-------------|:-------------|
| Fine-tune a pre-trained Transformer to generate lyrics | How to generate lyrics in the style of your favorite artist by fine-tuning a GPT-2 model | Aleksey Korshuk |
| Train T5 in Tensorflow 2 | How to train T5 for any task using Tensorflow 2. This notebook demonstrates a Question & Answer task implemented in Tensorflow 2 using SQUAD | Muhammad Harris |
| Train T5 on TPU | How to train T5 on SQUAD with Transformers and Nlp | Suraj Patil |
| Fine-tune T5 for Classification and Multiple Choice | How to fine-tune T5 for classification and multiple choice tasks using a text-to-text format with PyTorch Lightning | Suraj Patil |
| Fine-tune DialoGPT on New Datasets and Languages | How to fine-tune the DialoGPT model on a new dataset for open-dialog conversational chatbots | Nathan Cooper |
| Long Sequence Modeling with Reformer | How to train on sequences as long as 500,000 tokens with Reformer | Patrick von Platen |
| Fine-tune BART for Summarization | How to fine-tune BART for summarization with fastai using blurr | Wayde Gilliam |
| Fine-tune a pre-trained Transformer on anyone's tweets | How to generate tweets in the style of your favorite Twitter account by fine-tuning a GPT-2 model | Boris Dayma |
| Optimize 🤗 Hugging Face models with Weights & Biases | A complete tutorial showcasing W&B integration with Hugging Face | Boris Dayma |
| Pretrain Longformer | How to build a "long" version of existing pretrained models | Iz Beltagy |
| Fine-tune Longformer for QA | How to fine-tune longformer model for QA task | Suraj Patil |
| Evaluate Model with 🤗nlp | How to evaluate longformer on TriviaQA with nlp | Patrick von Platen |
| Fine-tune T5 for Sentiment Span Extraction | How to fine-tune T5 for sentiment span extraction using a text-to-text format with PyTorch Lightning | Lorenzo Ampil |
| Fine-tune DistilBert for Multiclass Classification | How to fine-tune DistilBert for multiclass classification with PyTorch | Abhishek Kumar Mishra |
| Fine-tune BERT for Multi-label Classification | How to fine-tune BERT for multi-label classification using PyTorch | Abhishek Kumar Mishra |
| Fine-tune T5 for Summarization | How to fine-tune T5 for summarization in PyTorch and track experiments with WandB | Abhishek Kumar Mishra |
| Speed up Fine-Tuning in Transformers with Dynamic Padding / Bucketing | How to speed up fine-tuning by a factor of 2 using dynamic padding / bucketing | Michael Benesty |
| Pretrain Reformer for Masked Language Modeling | How to train a Reformer model with bi-directional self-attention layers | Patrick von Platen |
| Expand and Fine Tune Sci-BERT | How to increase vocabulary of a pretrained SciBERT model from AllenAI on the CORD dataset and pipeline it. | Tanmay Thakur |
| Fine Tune BlenderBotSmall for Summarization using the Trainer API | How to fine-tune BlenderBotSmall for summarization on a custom dataset, using the Trainer API. | Tanmay Thakur |
| Fine-tune Electra and interpret with Integrated Gradients | How to fine-tune Electra for sentiment analysis and interpret predictions with Captum Integrated Gradients | Eliza Szczechla |
| Fine-tune a non-English GPT-2 Model with Trainer class | How to fine-tune a non-English GPT-2 Model with Trainer class | Philipp Schmid |
| Fine-tune a DistilBERT Model for Multi Label Classification task | How to fine-tune a DistilBERT Model for Multi Label Classification task | Dhaval Taunk |
| Fine-tune ALBERT for sentence-pair classification | How to fine-tune an ALBERT model or another BERT-based model for the sentence-pair classification task | Nadir El Manouzi |
| Fine-tune Roberta for sentiment analysis | How to fine-tune a Roberta model for sentiment analysis | Dhaval Taunk |
| Evaluating Question Generation Models | How accurate are the answers to questions generated by your seq2seq transformer model? | Pascal Zoleko |
| Classify text with DistilBERT and Tensorflow | How to fine-tune DistilBERT for text classification in TensorFlow | Peter Bayerle |
| Leverage BERT for Encoder-Decoder Summarization on CNN/Dailymail | How to warm-start an EncoderDecoderModel with a google-bert/bert-base-uncased checkpoint for summarization on CNN/Dailymail | Patrick von Platen |
| Leverage RoBERTa for Encoder-Decoder Summarization on BBC XSum | How to warm-start a shared EncoderDecoderModel with a FacebookAI/roberta-base checkpoint for summarization on BBC/XSum | Patrick von Platen |
| Fine-tune TAPAS on Sequential Question Answering (SQA) | How to fine-tune TapasForQuestionAnswering with a tapas-base checkpoint on the Sequential Question Answering (SQA) dataset | Niels Rogge |
| Evaluate TAPAS on Table Fact Checking (TabFact) | How to evaluate a fine-tuned TapasForSequenceClassification with a tapas-base-finetuned-tabfact checkpoint using a combination of the 🤗 datasets and 🤗 transformers libraries | Niels Rogge |
| Fine-tuning mBART for translation | How to fine-tune mBART using Seq2SeqTrainer for Hindi to English translation | Vasudev Gupta |
| Fine-tune LayoutLM on FUNSD (a form understanding dataset) | How to fine-tune LayoutLMForTokenClassification on the FUNSD dataset for information extraction from scanned documents | Niels Rogge |
| Fine-Tune DistilGPT2 and Generate Text | How to fine-tune DistilGPT2 and generate text | Aakash Tripathi |
| Fine-Tune LED on up to 8K tokens | How to fine-tune LED on pubmed for long-range summarization | Patrick von Platen |
| Evaluate LED on Arxiv | How to effectively evaluate LED on long-range summarization | Patrick von Platen |
| Fine-tune LayoutLM on RVL-CDIP (a document image classification dataset) | How to fine-tune LayoutLMForSequenceClassification on the RVL-CDIP dataset for scanned document classification | Niels Rogge |
| Wav2Vec2 CTC decoding with GPT2 adjustment | How to decode CTC sequence with language model adjustment | Eric Lam |
| Fine-tune BART for summarization in two languages with Trainer class | How to fine-tune BART for summarization in two languages with Trainer class | Eliza Szczechla |
| Evaluate Big Bird on Trivia QA | How to evaluate BigBird on long document question answering on Trivia QA | Patrick von Platen |
| Create video captions using Wav2Vec2 | How to create YouTube captions from any video by transcribing the audio with Wav2Vec | Niklas Muennighoff |
| Fine-tune the Vision Transformer on CIFAR-10 using PyTorch Lightning | How to fine-tune the Vision Transformer (ViT) on CIFAR-10 using HuggingFace Transformers, Datasets and PyTorch Lightning | Niels Rogge |
| Fine-tune the Vision Transformer on CIFAR-10 using the 🤗 Trainer | How to fine-tune the Vision Transformer (ViT) on CIFAR-10 using HuggingFace Transformers, Datasets and the 🤗 Trainer | Niels Rogge |
| Evaluate LUKE on Open Entity, an entity typing dataset | How to evaluate LukeForEntityClassification on the Open Entity dataset | Ikuya Yamada |
| Evaluate LUKE on TACRED, a relation extraction dataset | How to evaluate LukeForEntityPairClassification on the TACRED dataset | Ikuya Yamada |
| Evaluate LUKE on CoNLL-2003, an important NER benchmark | How to evaluate LukeForEntitySpanClassification on the CoNLL-2003 dataset | Ikuya Yamada |
| Evaluate BigBird-Pegasus on PubMed dataset | How to evaluate BigBirdPegasusForConditionalGeneration on PubMed dataset | Vasudev Gupta |
| Speech Emotion Classification with Wav2Vec2 | How to leverage a pretrained Wav2Vec2 model for Emotion Classification on the MEGA dataset | Mehrdad Farahani |
| Detect objects in an image with DETR | How to use a trained DetrForObjectDetection model to detect objects in an image and visualize attention | Niels Rogge |
| Fine-tune DETR on a custom object detection dataset | How to fine-tune DetrForObjectDetection on a custom object detection dataset | Niels Rogge |
| Finetune T5 for Named Entity Recognition | How to fine-tune T5 on a Named Entity Recognition Task | Ogundepo Odunayo |
Troubleshoot
Sometimes errors occur, but we are here to help! This guide covers some of the most common issues we've seen and how you can resolve them. However, this guide isn't meant to be a comprehensive collection of every 🤗 Transformers issue. For more help with troubleshooting your issue, try:
Asking for help on the forums. There are specific categories you can post your question to, like Beginners or 🤗 Transformers. Make sure you write a good descriptive forum post with some reproducible code to maximize the likelihood that your problem is solved!

Creating an Issue on the 🤗 Transformers repository if it is a bug related to the library. Try to include as much information describing the bug as possible to help us better figure out what's wrong and how we can fix it.

Checking the Migration guide if you use an older version of 🤗 Transformers, since some important changes have been introduced between versions.
For more details about troubleshooting and getting help, take a look at Chapter 8 of the Hugging Face course.
Firewalled environments
Some GPU instances on cloud and intranet setups are firewalled to external connections, resulting in a connection error. When your script attempts to download model weights or datasets, the download will hang and then time out with the following message:

```
ValueError: Connection error, and we cannot find the requested files in the cached path.
Please try again or make sure your Internet connection is on.
```

In this case, you should try to run 🤗 Transformers in offline mode to avoid the connection error.
CUDA out of memory
Training large models with millions of parameters can be challenging without the appropriate hardware. A common error you may encounter when the GPU runs out of memory is:

```
CUDA out of memory. Tried to allocate 256.00 MiB (GPU 0; 11.17 GiB total capacity; 9.70 GiB already allocated; 179.81 MiB free; 9.85 GiB reserved in total by PyTorch)
```

Here are some potential solutions you can try to lessen memory use:
Reduce the per_device_train_batch_size value in [TrainingArguments].
Try using gradient_accumulation_steps in [TrainingArguments] to effectively increase overall batch size, as shown in the sketch below.

Refer to the Performance guide for more details about memory-saving techniques.
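As a rough illustration of the first two suggestions, here is a minimal sketch; the output_dir, batch size, and accumulation values are placeholders for the example, not recommendations from this guide:

```py
from transformers import TrainingArguments

# Lowering the per-device batch size and accumulating gradients over 4 steps keeps the
# effective batch size at 4 * 4 = 16 while using less GPU memory per forward/backward pass.
training_args = TrainingArguments(
    output_dir="my_model",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
)
```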
Unable to load a saved TensorFlow model
TensorFlow's model.save method will save the entire model (architecture, weights, training configuration) in a single file. However, when you load the model file again, you may run into an error because 🤗 Transformers may not load all the TensorFlow-related objects in the model file. To avoid issues with saving and loading TensorFlow models, we recommend you: