{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "WE5GJ6s7y0Xo" }, "source": [ "## Fine-tune large models using 🤗 `peft` adapters, `transformers` & `bitsandbytes`\n", "\n", "In this tutorial we will cover how we can fine-tune large language models using the very recent `peft` library and `bitsandbytes` for loading large models in 8-bit.\n", "The fine-tuning method will rely on a recent method called \"Low Rank Adapters\" (LoRA), instead of fine-tuning the entire model you just have to fine-tune these adapters and load them properly inside the model. \n", "After fine-tuning the model you can also share your adapters on the 🤗 Hub and load them very easily. Let's get started!" ] }, { "cell_type": "markdown", "metadata": { "id": "TfBzP8gWzkpv" }, "source": [ "### Install requirements\n", "\n", "First, run the cells below to install the requirements:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "otj46qRbtpnd", "outputId": "2aa109f6-3f4e-4887-a16e-336f51e7cc9a" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m76.3/76.3 MB\u001b[0m \u001b[31m10.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m462.8/462.8 KB\u001b[0m \u001b[31m25.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m199.7/199.7 KB\u001b[0m \u001b[31m25.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m190.3/190.3 KB\u001b[0m \u001b[31m23.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m213.0/213.0 KB\u001b[0m \u001b[31m26.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m132.0/132.0 KB\u001b[0m \u001b[31m18.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m140.6/140.6 KB\u001b[0m \u001b[31m20.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[?25h Installing build dependencies ... \u001b[?25l\u001b[?25hdone\n", " Getting requirements to build wheel ... \u001b[?25l\u001b[?25hdone\n", " Preparing metadata (pyproject.toml) ... \u001b[?25l\u001b[?25hdone\n", " Installing build dependencies ... \u001b[?25l\u001b[?25hdone\n", " Getting requirements to build wheel ... \u001b[?25l\u001b[?25hdone\n", " Preparing metadata (pyproject.toml) ... \u001b[?25l\u001b[?25hdone\n", "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m7.6/7.6 MB\u001b[0m \u001b[31m72.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", "\u001b[?25h Building wheel for transformers (pyproject.toml) ... \u001b[?25l\u001b[?25hdone\n", " Building wheel for peft (pyproject.toml) ... \u001b[?25l\u001b[?25hdone\n" ] } ], "source": [ "!pip install -q bitsandbytes datasets accelerate\n", "!pip install -q git+https://github.com/huggingface/transformers.git@main git+https://github.com/huggingface/peft.git" ] }, { "cell_type": "markdown", "metadata": { "id": "FOtwYRI3zzXI" }, "source": [ "### Model loading\n", "\n", "Here let's load the `opt-6.7b` model, its weights in half-precision (float16) are about 13GB on the Hub! 
If we load them in 8-bit we would require around 7GB of memory instead." ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 408, "referenced_widgets": [ "d4de260ffd8a440eb87eb900fc1bb1d3", "8602b545a9f8474dbb3cc178ac0b8e60", "b46919912ee54f6f9f2ce9080be1c61a", "50374e3ab81c4626a182e61fc03b94ce", "2144bc2897dc40b29f060e30ace12275", "949ca70002ca4472bbc21fea4d7ac745", "49943c9dadca43a584b3f354ba45280c", "6123e53fb26b41f0af9a3a3348ae1afd", "285ef943d540400ab827c462945a259c", "95727290446244ccb9626f4594949675", "61b54aa6c9e94ee1bc45c15a9e3f7917", "fc2d5ffe254d425b939252ec46ec27cc", "f65af2e868244edeb0cc9402534874a8", "e466054f08004bbcabb24e400cb3c7fc", "6ea40800dfd849e3b106bae71fc53ae3", "722a01f42b7d4c38836a4546ecb38108", "1bd5179cdb474b65aa06eca3520ad37b", "04d367124a3b419ab1fa1dfd4f9004c3", "4ea667f48b9f4e1f9da7c5a0d3025b85", "9b96c63630654773acd38b8b88371f28", "cb69ae47666a4603a07a8778e2ae7d6e", "8f5e9f2d11d54fd2a08dcbff9f6da05c", "c6f712eadc4d49019b2bd355968cc155", "5fd979c05fd34311af877ce1a988ead8", "179165ff4c0e4586aaf3a40b8502f428", "cad1d8326e474a4f9ee13db9005121dd", "b049ac0b3d2e44ba9b1fc1cb2dd3de62", "921c268ecd724b6a8869dae8f81d558a", "4cf85a85c0764f15b5134c81b360e910", "5936d9eb59e147eea6482006decfe0ee", "fad61c68edc84c8197afeafded84280d", "451309d8b3ca4cd9b6f538640477039e", "c7621e14aa16421d9758321e433b92e4", "5aa74b9b30614172b07f88873cf89471", "cf39fb025e3d4635a5695135a56d9f64", "deaf9732f33446a3be015d2ec16aba76", "bdfd856dd8ee4ae09205ad9d1b9cc806", "7a49f7a55b054b6d829f290a1a426a7a", "5b695486a0ac442b8b6a8ef2ebf4e57a", "03be7cb91cba4afab795aab7aa242ee1", "8f7890f54d514f80b1eb17905cf0f964", "ae660d51267544a89f0ad199cc12b6fa", "d5bb8d7359274c8e9cc79df563175137", "98d5a80ec50c46c18f9ee991e2982115", "e73e5388182040a8937ccf1748171a87", "f797f7b8b13f4b3cb871522d34498631", "41f47c864e094cacb1c550d37ddfe80e", "104d982b444947fe8e4fbb2c2f082616", "2cf8581583c641fb95d1e16aff7d4cd1", "11808e9097424dd0b22a5af6c77813f3", "dc08d237860e4788a8ceeff4518c2612", "7bbf2c40a4ae46fe9e5885db08975263", "f00cbc4a89f5492787ec489da65ff70b", "8a7cd2194113493a841b00d034b5f1ba", "e9673363e85448d494ac9ac5d7ba0efb", "a994beafbf3f4c20880a7bbe3898db36", "7769c261781f4f5483c7e9d58c1a5573", "4a713a8fe16f4e81aa841c69711fa136", "08d036904ecf47ed88e129ba6e2b285c", "9f1182fdddab43b59ee98bc701965a17", "c2302f535b114ad780bfa440445c2e28", "8fe032f285ba4858b8efd68119c217b0", "4ca695fca3d140c6ab4e1e1d34df807b", "b77ceaf55dc04810963cdd01126478f6", "0ccdefcb25e14d229b2634ffae4a6d3d", "11d6b952503c4824b36b66e228f87599", "1e9391f6c89c4d08859ef3413edb19be", "039bbda2402f469eb21ba7ec7ea589a6", "5da6eef8fb0048219159f38a68727b64", "c006e62ee6d04831b2b89273ef04a8ac", "af0634c2539a43989902daef47776901", "62fea00ef0364af287e6097b964d00c4", "9e5ef73f8b244845a6b9002fa5c35d15", "5cb09c84e1e144e0a92580fb5e1ce2fe", "0f561c1660744251a8f710b69d434c87", "9e5f870ef80242f5af09fad70f84ea62", "6f173ca73dd545deb22b8cd0470d925f", "4e6d5943bc374b388b93ed115e44b6a5", "cbe2a6ea41834e95a27d6f02c3c0eeec", "7beb5f4efefc4593abac253df74d1405", "370cefbaeebf41f582b7507ad493055f", "e3df9dda16e244aa9b61d54f9e21ef40", "23d2ad64a17041b7a006dad1e041e0a1", "c1fd6a1234274a44b838a09f3f5380c6", "81bb51d088374394becd9a45ec3b17d4", "9e762779e5434bb7afcc295b61c2f4e4", "f539b7a4665449de9eac209a20629969", "32d528db79ad4f6f836ab2e0df5ac426", "1ca7684b79c5438fa06b047bd2b3283f", "a07688185bff4c4b8cbed3af3b4cf802", "0272f1d9f93f4dd788363a8409cdfd69", "27b41d23d2c64127ba3ae8464958f855", 
"2b54032c0d8e4a2897aed1ac1c79af14", "ed8fa1048e814f2fa3666899fc42e55a", "97daf559100c44ac983562fea93c5fac", "72e511b775604d899ff5b3fa2ebe9fc4", "da946f86590447d2ab98b9da468fa66b", "54fe79d5c7254117a2209927a7248dd4", "1d122e4eaad54e06961288484f31e18b", "d46b5725c35142a89617e46c0e8d3679", "c5493c23fd5542738ffd1ff5f09a6a67", "a1a80d3460984c2496ada5a634875934", "598c5584ffba4f26815c4e87bb1595c4", "fe93f25323604447be0bb1d24a0c2c59", "00c2e2d3ee8b45818ba84da12c6b11e2", "6ccea64c2e614a9fbdcc2f716cecaea0", "63fc9a9eebca4f2db2ed8a385fc5e204", "9a4860dfeac944db85e6e532599bc1cb", "3b946e1bbab24629b98307275fbe7cbb", "d9d36f8ff5f747bf90fbc8a7d35a6664" ] }, "id": "cg3fiQOvmI3Q", "outputId": "135a7675-6a4d-4786-b5dc-34cb867f40c7" }, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "bee2f575b3e64c30b2f3afa137802406", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Loading checkpoint shards: 0%| | 0/2 [00:00, ?it/s]" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "import os\n", "\n", "import torch\n", "import torch.nn as nn\n", "import bitsandbytes as bnb\n", "from transformers import AutoTokenizer, AutoConfig, AutoModelForCausalLM\n", "\n", "model = AutoModelForCausalLM.from_pretrained(\"facebook/opt-6.7b\", load_in_8bit=True)\n", "\n", "tokenizer = AutoTokenizer.from_pretrained(\"facebook/opt-6.7b\")" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": { "id": "9fTSZntA1iUG" }, "source": [ "### Prepare model for training\n", "\n", "Some pre-processing needs to be done before training such an int8 model using `peft`, therefore let's import an utiliy function `prepare_model_for_int8_training` that will: \n", "- Casts all the non `int8` modules to full precision (`fp32`) for stability\n", "- Add a `forward_hook` to the input embedding layer to enable gradient computation of the input hidden states\n", "- Enable gradient checkpointing for more memory-efficient training" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "id": "T-gy-LxM0yAi" }, "outputs": [], "source": [ "from peft import prepare_model_for_int8_training\n", "\n", "model = prepare_model_for_int8_training(model)" ] }, { "cell_type": "markdown", "metadata": { "id": "KwOTr7B3NlM3" }, "source": [ "### Apply LoRA\n", "\n", "Here comes the magic with `peft`! Let's load a `PeftModel` and specify that we are going to use low-rank adapters (LoRA) using `get_peft_model` utility function from `peft`." 
] }, { "cell_type": "code", "execution_count": 6, "metadata": { "id": "4W1j6lxaNnxC" }, "outputs": [], "source": [ "def print_trainable_parameters(model):\n", " \"\"\"\n", " Prints the number of trainable parameters in the model.\n", " \"\"\"\n", " trainable_params = 0\n", " all_param = 0\n", " for _, param in model.named_parameters():\n", " all_param += param.numel()\n", " if param.requires_grad:\n", " trainable_params += param.numel()\n", " print(\n", " f\"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param}\"\n", " )" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "4iwHGzKBN6wk", "outputId": "039f7175-14c9-42b4-a078-80d27aab161c" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "trainable params: 8388608 || all params: 6666862592 || trainable%: 0.12582542214183376\n" ] } ], "source": [ "from peft import LoraConfig, get_peft_model\n", "\n", "config = LoraConfig(\n", " r=16, lora_alpha=32, target_modules=[\"q_proj\", \"v_proj\"], lora_dropout=0.05, bias=\"none\", task_type=\"CAUSAL_LM\"\n", ")\n", "\n", "model = get_peft_model(model, config)\n", "print_trainable_parameters(model)" ] }, { "cell_type": "markdown", "metadata": { "id": "QdjWif4CVXR6" }, "source": [ "### Training" ] }, { "cell_type": "markdown", "metadata": { "id": "b_RPiO7f3ClX" }, "source": [] }, { "cell_type": "code", "execution_count": 9, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 1000, "referenced_widgets": [ "f357166c6e5f43f39d0a287ca6d6f60e", "805a10c2fd794ff692e8ebeefd65f2eb", "3730f843399d4ba48e98383563283e94", "f855aced7ca2485ea720604359deaa18", "6d728366de1a4bacb1ba1939c5e0146f", "c57c7cf35bf04f3bb3b2b0d8ce7feb31", "2113970533604749a808fa1b98269b0e", "fc15c6d6eb3049a3b8542b332dd8a3f2", "b6c06e70d7ed48f1840086489300cb3d", "49504531deaf4449938bea751d1ec4e7", "7dfe540a75864cf390b3bed20ab1dcd9", "c81d20fe47ce4b7594427830d71504d7", "a14542c8431c48b48a614cfd0d41f03c", "856f3dcf949741acb394f252186a1d7e", "865bae11c917492a9a1ef7286a493bd5", "2561b7a7c1694f229d30d2b1eeb14b2f", "96bd12acd63a4232b2f4ac159cc4a768", "746c8a2fd0ee458bad85cc75ac333e43", "ce6de6f9ddde4a6d8094a2b96eac3a4e", "62ceac028f144120a24e75afbaccf306", "2f953abe023240558580dcff3f0c033e", "c2b42681b8bb47e3895d6105240c5812", "ae58b3f129644729aa2f890b1712fc6e", "74a968ce2dc34189b7dfbb39276f5138", "d4f5b19f75e246df9c688f625792e8ba", "5cd7a5bbab5f4f2daa53d984568ea630", "7288f4fd6bb44089baef9f20f50e0e04", "57ff7d85e56f49239d1eeedd43b88d22", "4d5cbca8d5e647669faeea32de761dcd", "a6ce291698ad460394433a49000c1d25", "aa54b2a9b43848b0902d135beffb806b", "8cd2073cd3304b2ca4b997127aa88bd1", "f51c7f18977447e2bca36e1da3e1be4f", "b8c29c1accf4479da91d393dd59ca82b", "5fa269b8704e4509bf8cd918b657ae23", "ac91fc78fb04436b80a0a2cbe4380c2b", "5db87509bf56435eafaea4cf1153a5d3", "840981ef419a48918cf000f89807890e", "2c27ab87b28946e39681a29cc6f14ea9", "c4ab408eb1344da0bb15a9a6760818e4", "d54e8d69575f49eb977da64abc5ceb0c", "548299d96190406391cab49ad29cc5d6", "c41a9d785e884ab0a58117d17ac7d228", "256c56849e8545018514fd52f1247501", "041944011c204558a4315c27ec5dabce", "1247ee7aeac5406e93e22313d54ef54a", "ed2736d862a94d8f9db9ba6037016071", "ddc333530c13446a91cf332846bfa22f", "3d2325879cfa48048e81c44e2c2444d5", "7573358a50bd446486b4c5528a298fae", "663c338f84c94b63bc05b0fb6835d99f", "3185bd8ecbde4f26b8ed0f92cf79e14f", "ddc36fdbdd634dc489f658bead61e7ee", "d7e33c29d410414eb452d121edd9920e", 
"d3511ce1754b41969e5c36a5b33ac466", "9225dd50bcea435289fa2eba64087076", "a23144916f02459b966b9830f0a1d64c", "c9b718882fec4254bce1f33fa9373921", "f27905d0073e493cb9dcd174c0f15e35", "0f634721a70d4e749cb616e55ab747b3", "e1f62cbd805d4b8aa9aba7e345c21c82", "893d8cb7857147ba82ac86d140f69a5c", "4884bf82f5814e049c47cf6d496aab08", "432fc8277ebc492f91d6b46ed073ccb4", "c423a3cc0b504828b11077c77268ed92", "668dd47eec9942bcad0af209772cf8e6", "a618f430e06645eb9c95bf16bb6ea59a", "bba20d9bf1974a7f8d1ebcb9f5c4cd49", "5cec06ccf9074e019ea1f7daf17a0319", "faf24b3ed994422f8dd806ae0cc30531", "79f7929f423d493caf86b548e91e8a42", "23cb4d44c4534bae80357c8a9f6b86ba", "4fa82dbe7d3a4b3996b1fa1cf9c23498", "6eee224b244a4cafb3fc0f3b16eb0d03", "73344836205f4569a4113c24985bd5b1", "86dff0987bf040da99c8f2846da26d86", "63f8ad255d2147128bdc26f47fdf2528" ] }, "id": "AQ_HCYruWIHU", "outputId": "e9baadaf-e202-413d-87f0-74c730d78408" }, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "f357166c6e5f43f39d0a287ca6d6f60e", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading readme: 0%| | 0.00/5.55k [00:00, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stderr", "output_type": "stream", "text": [ "WARNING:datasets.builder:Using custom data configuration Abirate--english_quotes-6e72855d06356857\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Downloading and preparing dataset json/Abirate--english_quotes to /root/.cache/huggingface/datasets/Abirate___json/Abirate--english_quotes-6e72855d06356857/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51...\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "c81d20fe47ce4b7594427830d71504d7", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading data files: 0%| | 0/1 [00:00, ?it/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "ae58b3f129644729aa2f890b1712fc6e", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading data: 0%| | 0.00/647k [00:00, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "b8c29c1accf4479da91d393dd59ca82b", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Extracting data files: 0%| | 0/1 [00:00, ?it/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "041944011c204558a4315c27ec5dabce", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Generating train split: 0 examples [00:00, ? examples/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "Dataset json downloaded and prepared to /root/.cache/huggingface/datasets/Abirate___json/Abirate--english_quotes-6e72855d06356857/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51. 
Subsequent calls will reuse this data.\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "9225dd50bcea435289fa2eba64087076", "version_major": 2, "version_minor": 0 }, "text/plain": [ " 0%| | 0/1 [00:00, ?it/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "a618f430e06645eb9c95bf16bb6ea59a", "version_major": 2, "version_minor": 0 }, "text/plain": [ " 0%| | 0/3 [00:00, ?ba/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stderr", "output_type": "stream", "text": [ "The model is loaded in 8-bit precision. To train this model you need to add additional modules inside the model such as adapters using `peft` library and freeze the model weights. Please check the examples in https://github.com/huggingface/peft for more details.\n", "max_steps is given, it will override any value given in num_train_epochs\n", "Using cuda_amp half precision backend\n", "The following columns in the training set don't have a corresponding argument in `PeftModelForCausalLM.forward` and have been ignored: author, tags, quote. If author, tags, quote are not expected by `PeftModelForCausalLM.forward`, you can safely ignore this message.\n", "/usr/local/lib/python3.8/dist-packages/transformers/optimization.py:306: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning\n", " warnings.warn(\n", "***** Running training *****\n", " Num examples = 2508\n", " Num Epochs = 2\n", " Instantaneous batch size per device = 4\n", " Total train batch size (w. parallel, distributed & accumulation) = 16\n", " Gradient Accumulation steps = 4\n", " Total optimization steps = 200\n", " Number of trainable parameters = 8388608\n", "/usr/local/lib/python3.8/dist-packages/bitsandbytes/autograd/_functions.py:298: UserWarning: MatMul8bitLt: inputs will be cast from torch.float32 to float16 during quantization\n", " warnings.warn(f\"MatMul8bitLt: inputs will be cast from {A.dtype} to float16 during quantization\")\n" ] }, { "data": { "text/html": [ "\n", "
Step | Training Loss\n",
"---|---\n",
"1 | 2.364400\n",
"2 | 2.200400\n",
"3 | 2.302300\n",
"4 | 2.184700\n",
"5 | 1.878700\n",
"6 | 2.307200\n",
"7 | 2.193800\n",
"8 | 2.446200\n",
"9 | 2.458900\n",
"10 | 2.020000\n",
"11 | 1.941200\n",
"12 | 1.931000\n",
"13 | 2.055900\n",
"14 | 1.975100\n",
"15 | 2.015100\n",
"16 | 2.095600\n",
"17 | 1.768300\n",
"18 | 2.155700\n",
"19 | 2.402300\n",
"20 | 2.124600\n",
"21 | 2.314900\n",
"22 | 1.908500\n",
"23 | 2.078800\n",
"24 | 1.941900\n",
"25 | 1.879800\n",
"26 | 1.927500\n",
"27 | 1.371400\n",
"28 | 1.977600\n",
"29 | 2.055000\n",
"30 | 1.915800\n",
"31 | 1.958100\n",
"32 | 2.195900\n",
"33 | 2.001000\n",
"34 | 2.025000\n",
"35 | 1.576900\n",
"36 | 1.879800\n",
"37 | 1.821600\n",
"38 | 1.727800\n",
"39 | 1.995700\n",
"40 | 1.698600\n",
"41 | 2.129300\n",
"42 | 2.025800\n",
"43 | 1.696500\n",
"44 | 1.984700\n",
"45 | 2.051100\n",
"46 | 2.054400\n",
"47 | 1.765600\n",
"48 | 2.063100\n",
"49 | 1.746900\n",
"50 | 1.873000\n",
"51 | 2.391300\n",
"52 | 2.494100\n",
"53 | 2.072300\n",
"54 | 1.808000\n",
"55 | 1.911900\n",
"56 | 2.168100\n",
"57 | 2.166100\n",
"58 | 1.921500\n",
"59 | 1.856000\n",
"60 | 1.652800\n",
"61 | 1.605000\n",
"62 | 2.032500\n",
"63 | 1.822100\n",
"64 | 1.623600\n",
"65 | 1.923200\n",
"66 | 2.053200\n",
"67 | 2.114300\n",
"68 | 1.807700\n",
"69 | 1.857800\n",
"70 | 1.854600\n",
"71 | 2.023000\n",
"72 | 1.864900\n",
"73 | 1.769300\n",
"74 | 1.837700\n",
"75 | 1.742200\n",
"76 | 1.895900\n",
"77 | 1.922800\n",
"78 | 2.325300\n",
"79 | 2.231200\n",
"80 | 2.309500\n",
"81 | 1.945700\n",
"82 | 2.072100\n",
"83 | 1.917400\n",
"84 | 2.004600\n",
"85 | 1.951700\n",
"86 | 1.450600\n",
"87 | 1.785600\n",
"88 | 1.668000\n",
"89 | 1.903100\n",
"90 | 1.709800\n",
"91 | 2.312900\n",
"92 | 2.092100\n",
"93 | 2.319600\n",
"94 | 1.603100\n",
"95 | 1.740000\n",
"96 | 1.670500\n",
"97 | 1.611600\n",
"98 | 1.728900\n",
"99 | 2.285200\n",
"100 | 1.957800\n",
"101 | 1.676700\n",
"102 | 1.656300\n",
"103 | 1.612400\n",
"104 | 1.848900\n",
"105 | 1.870000\n",
"106 | 1.954000\n",
"107 | 2.192200\n",
"108 | 1.637600\n",
"109 | 1.208700\n",
"110 | 2.254200\n",
"111 | 1.832100\n",
"112 | 2.119600\n",
"113 | 2.126400\n",
"114 | 1.915700\n",
"115 | 1.587500\n",
"116 | 1.564800\n",
"117 | 1.742700\n",
"118 | 1.712600\n",
"119 | 1.727900\n",
"120 | 2.361500\n",
"121 | 2.070300\n",
"122 | 1.878500\n",
"123 | 1.846600\n",
"124 | 2.061700\n",
"125 | 2.149700\n",
"126 | 1.940600\n",
"127 | 2.098300\n",
"128 | 1.734100\n",
"129 | 2.111700\n",
"130 | 1.887600\n",
"131 | 1.716300\n",
"132 | 2.070000\n",
"133 | 1.782200\n",
"134 | 1.955200\n",
"135 | 1.762900\n",
"136 | 1.954700\n",
"137 | 1.687100\n",
"138 | 1.979100\n",
"139 | 1.634600\n",
"140 | 1.801200\n",
"141 | 1.954100\n",
"142 | 1.833900\n",
"143 | 2.051400\n",
"144 | 1.921200\n",
"145 | 1.787500\n",
"146 | 1.825400\n",
"147 | 1.363400\n",
"148 | 1.977400\n",
"149 | 1.768300\n",
"150 | 2.226700\n",
"151 | 1.945500\n"
],
"text/plain": [
"
Copy a token from your Hugging Face\ntokens page and paste it below.
Immediately click login after copying\nyour token or it might be stored in plain text in this notebook file.