{ "cells": [ { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "# 使用 DeepSpeed 和 Hugging Face Transformer 微调 FLAN-T5 XL/XXL\n", "\n", "[Scaling Instruction-Finetuned Language Models](https://arxiv.org/pdf/2210.11416.pdf) 论文发布了 FLAN-T5 模型,它是 T5 模型的增强版。FLAN-T5 由很多各种各样的任务微调而得,因此,简单来讲,它就是个方方面面都更优的 T5 模型。相同参数量的条件下,FLAN-T5 的性能相比 T5 而言有两位数的提高。Google 在 Hugging Face 上开源了 [5 个 FLAN-T5 的 checkpoints](https://huggingface.co/models?other=arxiv:2210.11416),参数量范围从 8000 万 到 110 亿。\n", "\n", "在之前的一篇博文中,我们已经学习了如何 [针对聊天对话数据摘要生成任务微调 FLAN-T5](https://www.philschmid.de/fine-tune-flan-t5),那时我们使用的是 [Base (250M参数)](https://huggingface.co/google/flan-t5-base)模型。本文,我们将研究如何将训练从 Base 扩展到 [XL (30 亿参数)](https://huggingface.co/google/flan-t5-xl) 或 [XXL (110 亿参数)](https://huggingface.co/google/flan-t5-xxl)。\n", "\n", "这意味着我们将学习如何利用模型并行、多 GPU 以及 [DeepSpeed ZeRO](https://www.deepspeed.ai/tutorials/zero/) 来微调 FLAN-T5 XL 和 XXL。\n", "\n", "除了作为教程的部分之外,我们还跑了一系列实验,这些实验数据可以帮助你选择正确的硬件设置。你可以在*结果和实验*部分找到详细信息。" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "# install git lfs for pushing artifacts\n", "!sudo apt install git-lfs\n", "# install torch with the correct cuda version, check nvcc --version\n", "!pip install torch --extra-index-url https://download.pytorch.org/whl/cu116 --upgrade\n", "# install Hugging Face Libraries\n", "!pip install \"transformers==4.26.0\" \"datasets==2.9.0\" \"accelerate==0.16.0\" \"evaluate==0.4.0\" --upgrade\n", "# install deepspeed and ninja for jit compilations of kernels\n", "!pip install \"deepspeed==0.8.0\" ninja --upgrade\n", "# install additional dependencies needed for training\n", "!pip install rouge-score nltk py7zr tensorboard" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "# 处理数据集" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "与[针对聊天对话的摘要生成任务微调 FLAN-T5](https://www.philschmid.de/fine-tune-flan-t5)一文中类似,我们需要先准备一个用于微调的数据集。本文,我们将在 [CNN Dailymail 数据集](https://huggingface.co/datasets/cnn_dailymail) 上微调 [FLAN-T5-XXL](https://huggingface.co/google/flan-t5-xxl)。我们不会赘述如何生成数据集,如果你想了解数据集生成的详细步骤,请参阅[上一篇文章](https://www.philschmid.de/fine-tune-flan-t5)。\n", "\n", "我们定义了一些参数,本文的示例都会基于这些参数,但你可以根据实际需要进行调整。" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "# 实验配置\n", "model_id = \"google/flan-t5-xxl\" # Hugging Face 模型 Id\n", "dataset_id = \"cnn_dailymail\" # Hugging Face 数据集 Id\n", "dataset_config = \"3.0.0\" # 数据集版本\n", "save_dataset_path = \"data\" # 存放处理后数据的本地路径\n", "text_column = \"article\" # 输入文本所属列\n", "summary_column = \"highlights\" # 输出文本所属列\n", "# 定制指令提示格式\n", "prompt_template = f\"Summarize the following news article:\\n{{input}}\\nSummary:\\n\"" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "与 [之前的示例](https://www.philschmid.de/fine-tune-flan-t5) 不同,这次我们把预处理和训练分开。这样我们就可以在非 GPU 实例上运行预处理。我们先对数据集进行预处理(即分词)并将其保存到磁盘,然后训练脚本再从磁盘中加载预处理后的数据集。" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from datasets import load_dataset\n", "from transformers import AutoTokenizer\n", "import numpy as np \n", "\n", "# Load dataset from the hub\n", "dataset = load_dataset(dataset_id,name=dataset_config)\n", "# Load tokenizer of FLAN-t5-base\n", "tokenizer = AutoTokenizer.from_pretrained(model_id)\n", "\n", "print(f\"Train dataset size: {len(dataset['train'])}\")\n", "print(f\"Test dataset size: {len(dataset['test'])}\")\n", "\n", "# Train dataset size: 
287113\n", "# Test dataset size: 11490" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "我们在配置文件中定义了一个 `prompt_template`,其可用于来构建指令提示,以提高我们模型的性能。 `prompt_template` 有“固定”的开始词和结束词,文档放在中间。这意味着我们需要确保 *“固定”模板词 + 文档* 总长不超过模型支持的最大序列长度。因此我们需要计算模型支持的最大文档长度,稍后我们会根据它来填充或截断模板中的文档。" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Prompt length: 12\n", "Max input length: 500\n" ] } ], "source": [ "prompt_length = len(tokenizer(prompt_template.format(input=\"\"))[\"input_ids\"])\n", "max_sample_length = tokenizer.model_max_length - prompt_length\n", "print(f\"Prompt length: {prompt_length}\")\n", "print(f\"Max input length: {max_sample_length}\")\n", "\n", "# Prompt length: 12\n", "# Max input length: 500" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "现在我们知道,模型支持的最大输入文档长度为 500。除了输入之外,我们还需要知道最大“目标”序列长度,我们可以通过遍历数据集中的摘要长度来得到。(代码需要运行几分钟)" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "application/json": { "ascii": false, "bar_format": null, "colour": null, "elapsed": 0.012465238571166992, "initial": 0, "n": 0, "ncols": null, "nrows": null, "postfix": null, "prefix": "", "rate": null, "total": 299, "unit": "ba", "unit_divisor": 1000, "unit_scale": false }, "application/vnd.jupyter.widget-view+json": { "model_id": "32577879b38640f898e798ea8f88a801", "version_major": 2, "version_minor": 0 }, "text/plain": [ " 0%| | 0/299 [00:00 在微调 `T5` 模型时,不能使用 `fp16`,因为它会导致精度溢出问题,参见:[#4586](https://github.com/huggingface/transformers/issues/4586),[#10830](https://github.com/huggingface/transformers/issues/10830), [#10956](https://github.com/huggingface/transformers/pull/10956)\n", ">\n", "\n", "如开头所述,我们使用的是 p4dn.24xlarge AWS EC2 实例,该实例包含 8 张显存为 40GB 的 NVIDIA A100。这意味着我们可以使用 `bf16`,它将减少近一半的模型显存占用,使我们能够在不卸载的情况下高效训练。\n", "\n", "我们将使用 [ds_flan_t5_z3_config_bf16.json](https://github.com/philschmid/deep-learning-pytorch-huggingface/blob/main/training/configs/ds_flan_t5_z3_config_bf16.json)。如果你不想用 `auto` 值,可以查看 [文档](https://huggingface.co/docs/transformers/v4.26.1/en/main_classes/deepspeed#configuration)。" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "```\n", "{\n", " \"bf16\": {\n", " \"enabled\": \"auto\"\n", " },\n", " \"optimizer\": {\n", " \"type\": \"AdamW\",\n", " \"params\": {\n", " \"lr\": \"auto\",\n", " \"betas\": \"auto\",\n", " \"eps\": \"auto\",\n", " \"weight_decay\": \"auto\"\n", " }\n", " },\n", " \"scheduler\": {\n", " \"type\": \"WarmupLR\",\n", " \"params\": {\n", " \"warmup_min_lr\": \"auto\",\n", " \"warmup_max_lr\": \"auto\",\n", " \"warmup_num_steps\": \"auto\"\n", " }\n", " },\n", " \"zero_optimization\": {\n", " \"stage\": 3,\n", " \"overlap_comm\": true,\n", " \"contiguous_gradients\": true,\n", " \"sub_group_size\": 1e9,\n", " \"reduce_bucket_size\": \"auto\",\n", " \"stage3_prefetch_bucket_size\": \"auto\",\n", " \"stage3_param_persistence_threshold\": \"auto\",\n", " \"stage3_max_live_parameters\": 1e9,\n", " \"stage3_max_reuse_distance\": 1e9,\n", " \"stage3_gather_16bit_weights_on_model_save\": false\n", " },\n", " \"gradient_accumulation_steps\": \"auto\",\n", " \"gradient_clipping\": \"auto\",\n", " \"steps_per_print\": 2000,\n", " \"train_batch_size\": \"auto\",\n", " \"train_micro_batch_size_per_gpu\": \"auto\",\n", " \"wall_clock_breakdown\": false\n", "}\n", "```" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ 
"现在,该训练脚本上场了。我们根据[之前的博文](https://www.philschmid.de/fine-tune-flan-t5)准备了一个 [run_seq2seq_deepspeed.py](https://github.com/philschmid/deep-learning-pytorch-huggingface/blob/main/training/scripts/run_seq2seq_deepspeed.py) 训练脚本,它支持我们配置 deepspeed 和其他超参数,包括 `google/flan-t5-xxl` 的模型 ID。\n", "\n", "我们使用 `deepspeed` 启动器触发训练,输入给启动器的参数包括 GPU 数量、deepspeed 配置及其它超参数(如 `google/flan-t5-xxl` 的模型 ID)。" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\n", "To disable this warning, you can either:\n", "\t- Avoid using `tokenizers` before the fork if possible\n", "\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\n", "deepspeed --num_gpus=8 scripts/run_seq2seq_deepspeed.py --model_id google/flan-t5-xxl --dataset_path data --epochs 3 --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --generation_max_length 129 --lr 1e-4 --deepspeed configs/ds_flan_t5_z3_config_bf16.json\n" ] } ], "source": [ "!deepspeed --num_gpus=8 scripts/run_seq2seq_deepspeed.py \\\n", " --model_id $model_id \\\n", " --dataset_path $save_dataset_path \\\n", " --epochs 3 \\\n", " --per_device_train_batch_size 8 \\\n", " --per_device_eval_batch_size 8 \\\n", " --generation_max_length $max_target_length \\\n", " --lr 1e-4 \\\n", " --deepspeed configs/ds_flan_t5_z3_config_bf16.json " ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "DeepSpeed 先将模型加载到 CPU 上,然后将其拆分到 8 张 A100 上然后开始训练。使用 [CNN Dailymail 数据集](https://huggingface.co/datasets/cnn_dailymail)进行训练大约需要 10 个小时,费用约为 `322 美元`。" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "# 结果与实验\n", "\n", "为了更好地了解硬件要求,我们对 FLAN-T5 XL 和 XXL 进行了一系列实验,以帮助我们评估和了解硬件需求以及训练这些模型的成本。\n", "\n", "下表列出了实验和相关设置的详细信息。\n", "\n", "数据集: `\"cnn_dailymail\"`\n", "- 训练样本数: `287113`\n", "- 验证样本数: `13368`\n", "\n", "超参:\n", "- epochs: `3`\n", "- 学习率: `1e-4`\n", "\n", "运行环境设置: \n", "- 4x V100 16GB: p3.8xlarge\n", "- 4x A10G 24GB: g5.24xlarge\n", "- 8x V100 16GB: p3.16xlarge\n", "- 8x A100 40GB: p4dn.24xlarge\n", "\n", "\n", "| 模型 | DeepSpeed 卸载 | 硬件 | GPU每卡batch size | 精度 | 时长 | 费用 |\n", "|-------------------|------------|--------------|--------------------|-----------|----------|--------|\n", "| FLAN-T5-XL (3B) | No | 4x V100 16GB | OOM | fp32 | - | - |\n", "| FLAN-T5-XL (3B) | No | 8x V100 16GB | 1 | fp32 | 105h | ~$2570 |\n", "| FLAN-T5-XL (3B) | No | 8x A100 40GB | 72 | bf16 | 2.5h | ~$81 |\n", "| FLAN-T5-XL (3B) | Yes | 4x V100 16GB | 8 | fp32 | 69h | ~$828 |\n", "| FLAN-T5-XL (3B) | Yes | 8x V100 16GB | 8 | fp32 | 32h | ~$768 |\n", "| FLAN-T5-XXL (11B) | No | 8x A100 40GB | 8 | bf16 | 10h | ~$322 |\n", "| FLAN-T5-XXL (11B) | Yes | 4x V100 16GB | OOM | fp32 | - | - |\n", "| FLAN-T5-XXL (11B) | Yes | 8x V100 16GB | OOM | fp32 | - | - |\n", "| FLAN-T5-XXL (11B) | Yes | 4x A10G 24GB | 24 | bf16 | 90h | ~$732 |\n", "| FLAN-T5-XXL (11B) | Yes | 8x A100 40GB | 48 | bf16 | 19h | ~$613 |\n", "\n", "我们可以看到 `bf16` 与 `fp32` 相比具有显著优势。FLAN-T5-XXL 能放进 4 张 A10G (24GB),但放不进 8 张 V100 16GB。\n", "\n", "我们的实验还表明,如果模型可以无需卸载同时以 batch size 大于 4 的配置跑在 GPU 上,其速度将比卸载模型和减小 batch size 的配置快约 2 倍且更具成本效益。" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "> 英文原文: https://www.philschmid.de/fine-tune-flan-t5-deepspeed \n", "> 原文作者:Philipp Schmid\n", "> 译者: Matrix Yao (姚伟峰),英特尔深度学习工程师,工作方向为 
 { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "> English original: https://www.philschmid.de/fine-tune-flan-t5-deepspeed \n", "> Original author: Philipp Schmid\n", "> Translator: Matrix Yao (姚伟峰), Deep Learning Engineer at Intel, working on the application of transformer-family models to data of various modalities and on large-scale model training and inference." ] }
 ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.8" }, "orig_nbformat": 4, "vscode": { "interpreter": { "hash": "916dbcbb3f70747c44a77c7bcd40155683ae19c65e1c03b4aa3499c5328201f1" } } }, "nbformat": 4, "nbformat_minor": 2 }