// File only needed for VSCode users to have proper Docker based interpreters { "name": "accelerate_dev_environment", "build": { // ACTION NEEDED: comment/uncomment the relevant line depending on whether you are in a CPU/GPU environment "dockerfile": "../docker/accelerate-cpu/Dockerfile" // "dockerfile": "../docker/accelerate-gpu/Dockerfile" }, "runArgs": [ // ACTION NEEDED: uncomment the next line if your local machine has GPUs available // "--gpus", "all", // Enable the docker container to access system resources "--ipc", "host" ], "remoteEnv": { "PYTHONPATH": "${containerEnv:PATH}:${containerWorkspaceFolder}" }, "customizations": { "vscode": { "extensions": [ // Ensure we have IntelliSense in VSCode when running inside container "ms-python.python" ] } }, "workspaceFolder": "/workspaces/accelerate", // Need git for VSCode to color code modifications. Only runs when building environment. "onCreateCommand": "apt-get update && apt-get install -y git && pip install -e '.[dev]'" }
accelerate/.devcontainer/devcontainer.json/0
{ "file_path": "accelerate/.devcontainer/devcontainer.json", "repo_id": "accelerate", "token_count": 459 }
0
<!--- Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <p align="center"> <br> <img src="https://raw.githubusercontent.com/huggingface/accelerate/main/docs/source/imgs/accelerate_logo.png" width="400"/> <br> <p> <p align="center"> <!-- Uncomment when CircleCI is set up <a href="https://circleci.com/gh/huggingface/accelerate"> <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/master"> </a> --> <a href="https://github.com/huggingface/accelerate/blob/main/LICENSE"> <img alt="License" src="https://img.shields.io/github/license/huggingface/accelerate.svg?color=blue"> </a> <a href="https://huggingface.co/docs/accelerate/index.html"> <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/accelerate/index.html.svg?down_color=red&down_message=offline&up_message=online"> </a> <a href="https://github.com/huggingface/accelerate/releases"> <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/accelerate.svg"> </a> <a href="https://github.com/huggingface/accelerate/blob/main/CODE_OF_CONDUCT.md"> <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg"> </a> </p> <h3 align="center"> <p>Run your *raw* PyTorch training script on any kind of device </h3> <h3 align="center"> <a href="https://hf.co/course"><img src="https://raw.githubusercontent.com/huggingface/accelerate/main/docs/source/imgs/course_banner.png"></a> </h3> ## Easy to integrate 🤗 Accelerate was created for PyTorch users who like to write the training loop of PyTorch models but are reluctant to write and maintain the boilerplate code needed to use multi-GPUs/TPU/fp16. 🤗 Accelerate abstracts exactly and only the boilerplate code related to multi-GPUs/TPU/fp16 and leaves the rest of your code unchanged. Here is an example: ```diff import torch import torch.nn.functional as F from datasets import load_dataset + from accelerate import Accelerator + accelerator = Accelerator() - device = 'cpu' + device = accelerator.device model = torch.nn.Transformer().to(device) optimizer = torch.optim.Adam(model.parameters()) dataset = load_dataset('my_dataset') data = torch.utils.data.DataLoader(dataset, shuffle=True) + model, optimizer, data = accelerator.prepare(model, optimizer, data) model.train() for epoch in range(10): for source, targets in data: source = source.to(device) targets = targets.to(device) optimizer.zero_grad() output = model(source) loss = F.cross_entropy(output, targets) - loss.backward() + accelerator.backward(loss) optimizer.step() ``` As you can see in this example, by adding 5-lines to any standard PyTorch training script you can now run on any kind of single or distributed node setting (single CPU, single GPU, multi-GPUs and TPUs) as well as with or without mixed precision (fp8, fp16, bf16). In particular, the same code can then be run without modification on your local machine for debugging or your training environment. 
🤗 Accelerate even handles the device placement for you (which requires a few more changes to your code, but is safer in general), so you can even simplify your training loop further: ```diff import torch import torch.nn.functional as F from datasets import load_dataset + from accelerate import Accelerator - device = 'cpu' + accelerator = Accelerator() - model = torch.nn.Transformer().to(device) + model = torch.nn.Transformer() optimizer = torch.optim.Adam(model.parameters()) dataset = load_dataset('my_dataset') data = torch.utils.data.DataLoader(dataset, shuffle=True) + model, optimizer, data = accelerator.prepare(model, optimizer, data) model.train() for epoch in range(10): for source, targets in data: - source = source.to(device) - targets = targets.to(device) optimizer.zero_grad() output = model(source) loss = F.cross_entropy(output, targets) - loss.backward() + accelerator.backward(loss) optimizer.step() ``` Want to learn more? Check out the [documentation](https://huggingface.co/docs/accelerate) or have a look at our [examples](https://github.com/huggingface/accelerate/tree/main/examples). ## Launching script 🤗 Accelerate also provides an optional CLI tool that allows you to quickly configure and test your training environment before launching the scripts. No need to remember how to use `torch.distributed.run` or to write a specific launcher for TPU training! On your machine(s) just run: ```bash accelerate config ``` and answer the questions asked. This will generate a config file that will be used automatically to properly set the default options when doing ```bash accelerate launch my_script.py --args_to_my_script ``` For instance, here is how you would run the GLUE example on the MRPC task (from the root of the repo): ```bash accelerate launch examples/nlp_example.py ``` This CLI tool is **optional**, and you can still use `python my_script.py` or `torchrun my_script.py` at your convenience. You can also directly pass in the arguments you would normally pass to `torchrun` as arguments to `accelerate launch` if you wish to not run `accelerate config`. For example, here is how to launch on two GPUs: ```bash accelerate launch --multi_gpu --num_processes 2 examples/nlp_example.py ``` To learn more, check the CLI documentation available [here](https://huggingface.co/docs/accelerate/package_reference/cli). ## Launching multi-CPU run using MPI 🤗 Here is another way to launch a multi-CPU run using MPI. You can learn how to install Open MPI on [this page](https://www.open-mpi.org/faq/?category=building#easy-build). You can use Intel MPI or MVAPICH as well. Once you have MPI set up on your cluster, just run: ```bash accelerate config ``` Answer the questions that are asked, selecting to run using multi-CPU, and answer "yes" when asked if you want accelerate to launch mpirun. Then, use `accelerate launch` with your script like: ```bash accelerate launch examples/nlp_example.py ``` Alternatively, you can use mpirun directly, without using the CLI, like: ```bash mpirun -np 2 python examples/nlp_example.py ``` ## Launching training using DeepSpeed 🤗 Accelerate supports training on single/multiple GPUs using DeepSpeed. To use it, you don't need to change anything in your training code; you can set everything using just `accelerate config`. However, if you desire to tweak your DeepSpeed-related args from your Python script, we provide you the `DeepSpeedPlugin`. 
```python from accelerate import Accelerator, DeepSpeedPlugin # deepspeed needs to know your gradient accumulation steps beforehand, so don't forget to pass it # Remember you still need to do gradient accumulation by yourself, just like you would have done without deepspeed deepspeed_plugin = DeepSpeedPlugin(zero_stage=2, gradient_accumulation_steps=2) accelerator = Accelerator(mixed_precision='fp16', deepspeed_plugin=deepspeed_plugin) # How to save your 🤗 Transformer? accelerator.wait_for_everyone() unwrapped_model = accelerator.unwrap_model(model) unwrapped_model.save_pretrained(save_dir, save_function=accelerator.save, state_dict=accelerator.get_state_dict(model)) ``` Note: DeepSpeed support is experimental for now. In case you get into some problem, please open an issue. ## Launching your training from a notebook 🤗 Accelerate also provides a `notebook_launcher` function you can use in a notebook to launch a distributed training. This is especially useful for Colab or Kaggle notebooks with a TPU backend. Just define your training loop in a `training_function` then in your last cell, add: ```python from accelerate import notebook_launcher notebook_launcher(training_function) ``` An example can be found in [this notebook](https://github.com/huggingface/notebooks/blob/main/examples/accelerate_examples/simple_nlp_example.ipynb). [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/accelerate_examples/simple_nlp_example.ipynb) ## Why should I use 🤗 Accelerate? You should use 🤗 Accelerate when you want to easily run your training scripts in a distributed environment without having to renounce full control over your training loop. This is not a high-level framework above PyTorch, just a thin wrapper so you don't have to learn a new library. In fact, the whole API of 🤗 Accelerate is in one class, the `Accelerator` object. ## Why shouldn't I use 🤗 Accelerate? You shouldn't use 🤗 Accelerate if you don't want to write a training loop yourself. There are plenty of high-level libraries above PyTorch that will offer you that, 🤗 Accelerate is not one of them. ## Frameworks using 🤗 Accelerate If you like the simplicity of 🤗 Accelerate but would prefer a higher-level abstraction around its capabilities, some frameworks and libraries that are built on top of 🤗 Accelerate are listed below: * [Amphion](https://github.com/open-mmlab/Amphion) is a toolkit for Audio, Music, and Speech Generation. Its purpose is to support reproducible research and help junior researchers and engineers get started in the field of audio, music, and speech generation research and development. * [Animus](https://github.com/Scitator/animus) is a minimalistic framework to run machine learning experiments. Animus highlights common "breakpoints" in ML experiments and provides a unified interface for them within [IExperiment](https://github.com/Scitator/animus/blob/main/animus/core.py#L76). * [Catalyst](https://github.com/catalyst-team/catalyst#getting-started) is a PyTorch framework for Deep Learning Research and Development. It focuses on reproducibility, rapid experimentation, and codebase reuse so you can create something new rather than write yet another train loop. Catalyst provides a [Runner](https://catalyst-team.github.io/catalyst/api/core.html#runner) to connect all parts of the experiment: hardware backend, data transformations, model training, and inference logic. 
* [fastai](https://github.com/fastai/fastai#installing) is a PyTorch framework for Deep Learning that simplifies training fast and accurate neural nets using modern best practices. fastai provides a [Learner](https://docs.fast.ai/learner.html#Learner) to handle the training, fine-tuning, and inference of deep learning algorithms. * [Finetuner](https://github.com/jina-ai/finetuner) is a service that enables models to create higher-quality embeddings for semantic search, visual similarity search, cross-modal text<->image search, recommendation systems, clustering, duplication detection, anomaly detection, or other uses. * [InvokeAI](https://github.com/invoke-ai/InvokeAI) is a creative engine for Stable Diffusion models, offering an industry-leading WebUI and terminal usage support, and serving as the foundation for many commercial products. * [Kornia](https://kornia.readthedocs.io/en/latest/get-started/introduction.html) is a differentiable library that allows classical computer vision to be integrated into deep learning models. Kornia provides a [Trainer](https://kornia.readthedocs.io/en/latest/x.html#kornia.x.Trainer) with the specific purpose of training and fine-tuning the supported deep learning algorithms within the library. * [Open Assistant](https://projects.laion.ai/Open-Assistant/) is a chat-based assistant that understands tasks, can interact with third-party systems, and can retrieve information dynamically to do so. * [pytorch-accelerated](https://github.com/Chris-hughes10/pytorch-accelerated) is a lightweight training library, with a streamlined feature set centered around a general-purpose [Trainer](https://pytorch-accelerated.readthedocs.io/en/latest/trainer.html), that places a huge emphasis on simplicity and transparency, enabling users to understand exactly what is going on under the hood without having to write and maintain the boilerplate themselves. * [Stable Diffusion web UI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) is an open-source, browser-based, easy-to-use interface for Stable Diffusion based on the Gradio library. * [torchkeras](https://github.com/lyhue1991/torchkeras) is a simple tool for training PyTorch models in a Keras style; a dynamic and beautiful plot is provided in the notebook to monitor your loss or metrics. * [transformers](https://github.com/huggingface/transformers) is a tool that helps train state-of-the-art machine learning models in PyTorch, TensorFlow, and JAX (Accelerate is the backend for the PyTorch side). ## Installation This repository is tested on Python 3.8+ and PyTorch 1.10.0+. You should install 🤗 Accelerate in a [virtual environment](https://docs.python.org/3/library/venv.html). If you're unfamiliar with Python virtual environments, check out the [user guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/). First, create a virtual environment with the version of Python you're going to use and activate it. Then, you will need to install PyTorch: refer to the [official installation page](https://pytorch.org/get-started/locally/#start-locally) regarding the specific install command for your platform. 
Then 🤗 Accelerate can be installed using pip as follows: ```bash pip install accelerate ``` ## Supported integrations - CPU only - multi-CPU on one node (machine) - multi-CPU on several nodes (machines) - single GPU - multi-GPU on one node (machine) - multi-GPU on several nodes (machines) - TPU - FP16/BFloat16 mixed precision - FP8 mixed precision with [Transformer Engine](https://github.com/NVIDIA/TransformerEngine) - DeepSpeed support (Experimental) - PyTorch Fully Sharded Data Parallel (FSDP) support (Experimental) - Megatron-LM support (Experimental) ## Citing 🤗 Accelerate If you use 🤗 Accelerate in your publication, please cite it by using the following BibTeX entry. ```bibtex @Misc{accelerate, title = {Accelerate: Training and inference at scale made simple, efficient and adaptable.}, author = {Sylvain Gugger and Lysandre Debut and Thomas Wolf and Philipp Schmid and Zachary Mueller and Sourab Mangrulkar and Marc Sun and Benjamin Bossan}, howpublished = {\url{https://github.com/huggingface/accelerate}}, year = {2022} } ```
accelerate/README.md/0
{ "file_path": "accelerate/README.md", "repo_id": "accelerate", "token_count": 4493 }
1
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # TPU training A [TPU (Tensor Processing Unit)](https://cloud.google.com/tpu/docs/intro-to-tpu) is a type of hardware specifically designed for training models efficiently. Accelerate supports TPU training, but there are a few things you should be aware of, namely graph compilation. This tutorial briefly discusses compilation, and for more details, take a look at the [Training on TPUs with Accelerate](../concept_guides/training_tpu) guide. ## Compilation A TPU creates a graph of all the operations in the training step, such as the forward pass, backward pass and optimizer step. This is why the first training step always takes a while: building and compiling this graph takes time. But once compilation is complete, it is cached and all subsequent steps are much faster. The key is to avoid compiling your code again, otherwise training is very slow. This means all your operations must be exactly the same: * all tensors in your batches must have the same length (for example, no dynamic padding for NLP tasks) * your code must be static (for example, no layers with for loops that have different lengths depending on the input, such as an LSTM) ## Weight tying A common language model design is to tie the weights of the embedding and softmax layers. However, moving the model to a TPU (either yourself or passing it to the [`~Accelerator.prepare`] method) breaks the weight tying and you'll need to retie the weights. To add special behavior (like weight tying) in your script for TPUs, check whether [`~Accelerator.distributed_type`] is `DistributedType.TPU` first. Then you can use the [`~transformers.PreTrainedModel.tie_weights`] method to tie the weights. ```py if accelerator.distributed_type == DistributedType.TPU: model.tie_weights() ```
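To make the first bullet above concrete, here is a minimal sketch (not part of the original guide) of a collate function that pads every batch to a fixed length so the compiled graph can be reused; the tokenizer checkpoint, the `"text"` key, and the `max_length` value are illustrative assumptions.

```py
# Sketch only: fixed-length padding keeps tensor shapes identical across steps,
# which avoids triggering a new compilation on every batch.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")  # illustrative checkpoint

def collate_fn(examples):
    # padding="max_length" (rather than "longest") gives every batch the same shape
    return tokenizer(
        [example["text"] for example in examples],
        padding="max_length",
        max_length=128,
        truncation=True,
        return_tensors="pt",
    )
```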
accelerate/docs/source/basic_tutorials/tpu.md/0
{ "file_path": "accelerate/docs/source/basic_tutorials/tpu.md", "repo_id": "accelerate", "token_count": 629 }
2
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Fully Sharded Data Parallel To accelerate training huge models on larger batch sizes, we can use a fully sharded data parallel model. This type of data parallel paradigm enables fitting more data and larger models by sharding the optimizer states, gradients and parameters. To read more about it and the benefits, check out the [Fully Sharded Data Parallel blog](https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/). We have integrated PyTorch's latest Fully Sharded Data Parallel (FSDP) training feature. All you need to do is enable it through the config. ## How it works out of the box On your machine(s) just run: ```bash accelerate config ``` and answer the questions asked. This will generate a config file that will be used automatically to properly set the default options when doing ```bash accelerate launch my_script.py --args_to_my_script ``` For instance, here is how you would run `examples/nlp_example.py` (from the root of the repo) with FSDP enabled: ```yaml compute_environment: LOCAL_MACHINE debug: false distributed_type: FSDP downcast_bf16: 'no' fsdp_config: fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP fsdp_backward_prefetch_policy: BACKWARD_PRE fsdp_forward_prefetch: false fsdp_cpu_ram_efficient_loading: true fsdp_offload_params: false fsdp_sharding_strategy: FULL_SHARD fsdp_state_dict_type: SHARDED_STATE_DICT fsdp_sync_module_states: true fsdp_transformer_layer_cls_to_wrap: BertLayer fsdp_use_orig_params: true machine_rank: 0 main_training_function: main mixed_precision: bf16 num_machines: 1 num_processes: 2 rdzv_backend: static same_network: true tpu_env: [] tpu_use_cluster: false tpu_use_sudo: false use_cpu: false ``` ```bash accelerate launch examples/nlp_example.py ``` Currently, `Accelerate` supports the following config through the CLI: `fsdp_sharding_strategy`: [1] FULL_SHARD (shards optimizer states, gradients and parameters), [2] SHARD_GRAD_OP (shards optimizer states and gradients), [3] NO_SHARD (DDP), [4] HYBRID_SHARD (shards optimizer states, gradients and parameters within each node while each node has a full copy), [5] HYBRID_SHARD_ZERO2 (shards optimizer states and gradients within each node while each node has a full copy). For more information, please refer to the official [PyTorch docs](https://pytorch.org/docs/stable/fsdp.html#torch.distributed.fsdp.ShardingStrategy). `fsdp_offload_params`: Decides whether to offload parameters and gradients to the CPU. `fsdp_auto_wrap_policy`: [1] TRANSFORMER_BASED_WRAP, [2] SIZE_BASED_WRAP, [3] NO_WRAP `fsdp_transformer_layer_cls_to_wrap`: Only applicable for 🤗 Transformers. 
When using `fsdp_auto_wrap_policy=TRANSFORMER_BASED_WRAP`, a user may provide a comma-separated string of transformer layer class names (case-sensitive) to wrap, e.g., `BertLayer`, `GPTJBlock`, `T5Block`, `BertLayer,BertEmbeddings,BertSelfOutput`. This is important because submodules that share weights (e.g., embedding layers) should not end up in different FSDP wrapped units. Using this policy, wrapping happens for each block containing Multi-Head Attention followed by a couple of MLP layers. Remaining layers, including the shared embeddings, are conveniently wrapped in the same outermost FSDP unit. Therefore, use this for transformer-based models. For 🤗 Transformers models, you can rely on `model._no_split_modules` by answering `yes` to the question "Do you want to use the model's `_no_split_modules` to wrap". It will try to use `model._no_split_modules` when possible. `fsdp_min_num_params`: Minimum number of parameters when using `fsdp_auto_wrap_policy=SIZE_BASED_WRAP`. `fsdp_backward_prefetch_policy`: [1] BACKWARD_PRE, [2] BACKWARD_POST, [3] NO_PREFETCH `fsdp_forward_prefetch`: If True, then FSDP explicitly prefetches the next upcoming all-gather while executing in the forward pass. Should only be used for static-graph models since the prefetching follows the first iteration's execution order, i.e., if the sub-modules' order changes dynamically during the model's execution, do not enable this feature. `fsdp_state_dict_type`: [1] FULL_STATE_DICT, [2] LOCAL_STATE_DICT, [3] SHARDED_STATE_DICT `fsdp_use_orig_params`: If True, allows non-uniform `requires_grad` during init, which means support for interspersed frozen and trainable parameters. This setting is useful in cases such as parameter-efficient fine-tuning, as discussed in [this post](https://dev-discuss.pytorch.org/t/rethinking-pytorch-fully-sharded-data-parallel-fsdp-from-first-principles/1019). This option also allows one to have multiple optimizer param groups. This should be `True` when creating an optimizer before preparing/wrapping the model with FSDP. `fsdp_cpu_ram_efficient_loading`: Only applicable for 🤗 Transformers models. If True, only the first process loads the pretrained model checkpoint while all other processes have empty weights. This should be set to False if you experience errors when loading the pretrained 🤗 Transformers model via the `from_pretrained` method. When this setting is True, `fsdp_sync_module_states` must also be True, otherwise all processes except the main process would have random weights, leading to unexpected behaviour during training. For this to work, make sure the distributed process group is initialized before calling the Transformers `from_pretrained` method. When using the 🤗 Trainer API, the distributed process group is initialized when you create an instance of the `TrainingArguments` class. `fsdp_sync_module_states`: If True, each individually wrapped FSDP unit will broadcast module parameters from rank 0. For additional and more nuanced control, you can specify other FSDP parameters via `FullyShardedDataParallelPlugin`. When creating the `FullyShardedDataParallelPlugin` object, pass it the parameters that weren't part of the accelerate config or that you want to override. The FSDP parameters will be picked based on the accelerate config file or launch command arguments, and the parameters you pass directly through the `FullyShardedDataParallelPlugin` object will set/override those. 
Below is an example: ```py from accelerate import FullyShardedDataParallelPlugin from torch.distributed.fsdp.fully_sharded_data_parallel import FullOptimStateDictConfig, FullStateDictConfig fsdp_plugin = FullyShardedDataParallelPlugin( state_dict_config=FullStateDictConfig(offload_to_cpu=False, rank0_only=False), optim_state_dict_config=FullOptimStateDictConfig(offload_to_cpu=False, rank0_only=False), ) accelerator = Accelerator(fsdp_plugin=fsdp_plugin) ``` ## Saving and loading The new recommended way of checkpointing when using FSDP models is to use `SHARDED_STATE_DICT` as `StateDictType` when setting up the accelerate config. Below is the code snippet to save using `save_state` utility of accelerate. ```py accelerator.save_state("ckpt") ``` Inspect the checkpoint folder to see model and optimizer as shards per process: ``` ls ckpt # optimizer_0 pytorch_model_0 random_states_0.pkl random_states_1.pkl scheduler.bin cd ckpt ls optimizer_0 # __0_0.distcp __1_0.distcp ls pytorch_model_0 # __0_0.distcp __1_0.distcp ``` To load them back for resuming the training, use the `load_state` utility of accelerate ```py accelerator.load_state("ckpt") ``` When using transformers `save_pretrained`, pass `state_dict=accelerator.get_state_dict(model)` to save the model state dict. Below is an example: ```diff unwrapped_model.save_pretrained( args.output_dir, is_main_process=accelerator.is_main_process, save_function=accelerator.save, + state_dict=accelerator.get_state_dict(model), ) ``` ### State Dict `accelerator.get_state_dict` will call the underlying `model.state_dict` implementation using `FullStateDictConfig(offload_to_cpu=True, rank0_only=True)` context manager to get the state dict only for rank 0 and it will be offloaded to CPU. You can then pass `state` into the `save_pretrained` method. There are several modes for `StateDictType` and `FullStateDictConfig` that you can use to control the behavior of `state_dict`. For more information, see the [PyTorch documentation](https://pytorch.org/docs/stable/fsdp.html). ## Mapping between FSDP sharding strategies and DeepSpeed ZeRO Stages * `FULL_SHARD` maps to the DeepSpeed `ZeRO Stage-3`. Shards optimizer states, gradients and parameters. * `SHARD_GRAD_OP` maps to the DeepSpeed `ZeRO Stage-2`. Shards optimizer states and gradients. * `NO_SHARD` maps to `ZeRO Stage-0`. No sharding wherein each GPU has full copy of model, optimizer states and gradients. * `HYBRID_SHARD` maps to `ZeRO++ Stage-3` wherein `zero_hpz_partition_size=<num_gpus_per_node>`. Here, this will shard optimizer states, gradients and parameters within each node while each node has full copy. ## A few caveats to be aware of - In case of multiple models, pass the optimizers to the prepare call in the same order as corresponding models else `accelerator.save_state()` and `accelerator.load_state()` will result in wrong/unexpected behaviour. - This feature is incompatible with `--predict_with_generate` in the `run_translation.py` script of 🤗 `Transformers` library. For more control, users can leverage the `FullyShardedDataParallelPlugin`. After creating an instance of this class, users can pass it to the Accelerator class instantiation. For more information on these options, please refer to the PyTorch [FullyShardedDataParallel](https://github.com/pytorch/pytorch/blob/0df2e863fbd5993a7b9e652910792bd21a516ff3/torch/distributed/fsdp/fully_sharded_data_parallel.py#L236) code.
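To illustrate the first caveat above, here is a small sketch (the two `torch.nn.Linear` modules are stand-ins for real models) showing models and optimizers passed to `prepare` in a matching order:

```py
import torch
from accelerate import Accelerator

accelerator = Accelerator()
model_a, model_b = torch.nn.Linear(8, 8), torch.nn.Linear(8, 8)  # stand-ins for real models
opt_a = torch.optim.AdamW(model_a.parameters(), lr=1e-4)
opt_b = torch.optim.AdamW(model_b.parameters(), lr=1e-4)

# Keep the ordering consistent: (model_a, model_b) pairs with (opt_a, opt_b),
# so that accelerator.save_state()/load_state() can match each optimizer back to its model.
model_a, model_b, opt_a, opt_b = accelerator.prepare(model_a, model_b, opt_a, opt_b)
```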
accelerate/docs/source/usage_guides/fsdp.md/0
{ "file_path": "accelerate/docs/source/usage_guides/fsdp.md", "repo_id": "accelerate", "token_count": 3064 }
3
# Copyright 2022 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import argparse from typing import List import evaluate import numpy as np import torch from datasets import DatasetDict, load_dataset # New Code # # We'll be using StratifiedKFold for this example from sklearn.model_selection import StratifiedKFold from torch.optim import AdamW from torch.utils.data import DataLoader from transformers import AutoModelForSequenceClassification, AutoTokenizer, get_linear_schedule_with_warmup, set_seed from accelerate import Accelerator, DistributedType ######################################################################## # This is a fully working simple example to use Accelerate, # specifically showcasing how to perform Cross Validation, # and builds off the `nlp_example.py` script. # # This example trains a Bert base model on GLUE MRPC # in any of the following settings (with the same script): # - single CPU or single GPU # - multi GPUS (using PyTorch distributed mode) # - (multi) TPUs # - fp16 (mixed-precision) or fp32 (normal precision) # # To help focus on the differences in the code, building `DataLoaders` # was refactored into its own function. # New additions from the base script can be found quickly by # looking for the # New Code # tags # # To run it in each of these various modes, follow the instructions # in the readme for examples: # https://github.com/huggingface/accelerate/tree/main/examples # ######################################################################## MAX_GPU_BATCH_SIZE = 16 EVAL_BATCH_SIZE = 32 # New Code # # We need a different `get_dataloaders` function that will build dataloaders by index def get_fold_dataloaders( accelerator: Accelerator, dataset: DatasetDict, train_idxs: List[int], valid_idxs: List[int], batch_size: int = 16 ): """ Gets a set of train, valid, and test dataloaders for a particular fold Args: accelerator (`Accelerator`): The main `Accelerator` object train_idxs (list of `int`): The split indices for the training dataset valid_idxs (list of `int`): The split indices for the validation dataset batch_size (`int`): The size of the minibatch. 
Default is 16 """ tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") datasets = DatasetDict( { "train": dataset["train"].select(train_idxs), "validation": dataset["train"].select(valid_idxs), "test": dataset["validation"], } ) def tokenize_function(examples): # max_length=None => use the model max length (it's actually the default) outputs = tokenizer(examples["sentence1"], examples["sentence2"], truncation=True, max_length=None) return outputs # Apply the method we just defined to all the examples in all the splits of the dataset # starting with the main process first: with accelerator.main_process_first(): tokenized_datasets = datasets.map( tokenize_function, batched=True, remove_columns=["idx", "sentence1", "sentence2"], ) # We also rename the 'label' column to 'labels' which is the expected name for labels by the models of the # transformers library tokenized_datasets = tokenized_datasets.rename_column("label", "labels") def collate_fn(examples): # On TPU it's best to pad everything to the same length or training will be very slow. max_length = 128 if accelerator.distributed_type == DistributedType.XLA else None # When using mixed precision we want round multiples of 8/16 if accelerator.mixed_precision == "fp8": pad_to_multiple_of = 16 elif accelerator.mixed_precision != "no": pad_to_multiple_of = 8 else: pad_to_multiple_of = None return tokenizer.pad( examples, padding="longest", max_length=max_length, pad_to_multiple_of=pad_to_multiple_of, return_tensors="pt", ) # Instantiate dataloaders. train_dataloader = DataLoader( tokenized_datasets["train"], shuffle=True, collate_fn=collate_fn, batch_size=batch_size ) eval_dataloader = DataLoader( tokenized_datasets["validation"], shuffle=False, collate_fn=collate_fn, batch_size=EVAL_BATCH_SIZE ) test_dataloader = DataLoader( tokenized_datasets["test"], shuffle=False, collate_fn=collate_fn, batch_size=EVAL_BATCH_SIZE ) return train_dataloader, eval_dataloader, test_dataloader def training_function(config, args): # New Code # test_predictions = [] # Download the dataset datasets = load_dataset("glue", "mrpc") # Create our splits kfold = StratifiedKFold(n_splits=int(args.num_folds)) # Initialize accelerator accelerator = Accelerator(cpu=args.cpu, mixed_precision=args.mixed_precision) # Sample hyper-parameters for learning rate, batch size, seed and a few other HPs lr = config["lr"] num_epochs = int(config["num_epochs"]) seed = int(config["seed"]) batch_size = int(config["batch_size"]) metric = evaluate.load("glue", "mrpc") # If the batch size is too big we use gradient accumulation gradient_accumulation_steps = 1 if batch_size > MAX_GPU_BATCH_SIZE and accelerator.distributed_type != DistributedType.XLA: gradient_accumulation_steps = batch_size // MAX_GPU_BATCH_SIZE batch_size = MAX_GPU_BATCH_SIZE set_seed(seed) # New Code # # Create our folds: folds = kfold.split(np.zeros(datasets["train"].num_rows), datasets["train"]["label"]) test_references = [] # Iterate over them for i, (train_idxs, valid_idxs) in enumerate(folds): train_dataloader, eval_dataloader, test_dataloader = get_fold_dataloaders( accelerator, datasets, train_idxs, valid_idxs, ) # Instantiate the model (we build the model here so that the seed also control new weights initialization) model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", return_dict=True) # We could avoid this line since the accelerator is set with `device_placement=True` (default value). 
# Note that if you are placing tensors on devices manually, this line absolutely needs to be before the optimizer # creation otherwise training will not work on TPU (`accelerate` will kindly throw an error to make us aware of that). model = model.to(accelerator.device) # Instantiate optimizer optimizer = AdamW(params=model.parameters(), lr=lr) # Instantiate scheduler lr_scheduler = get_linear_schedule_with_warmup( optimizer=optimizer, num_warmup_steps=100, num_training_steps=(len(train_dataloader) * num_epochs) // gradient_accumulation_steps, ) # Prepare everything # There is no specific order to remember, we just need to unpack the objects in the same order we gave them to the # prepare method. model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare( model, optimizer, train_dataloader, eval_dataloader, lr_scheduler ) # Now we train the model for epoch in range(num_epochs): model.train() for step, batch in enumerate(train_dataloader): # We could avoid this line since we set the accelerator with `device_placement=True`. batch.to(accelerator.device) outputs = model(**batch) loss = outputs.loss loss = loss / gradient_accumulation_steps accelerator.backward(loss) if step % gradient_accumulation_steps == 0: optimizer.step() lr_scheduler.step() optimizer.zero_grad() model.eval() for step, batch in enumerate(eval_dataloader): # We could avoid this line since we set the accelerator with `device_placement=True`. batch.to(accelerator.device) with torch.no_grad(): outputs = model(**batch) predictions = outputs.logits.argmax(dim=-1) predictions, references = accelerator.gather_for_metrics((predictions, batch["labels"])) metric.add_batch( predictions=predictions, references=references, ) eval_metric = metric.compute() # Use accelerator.print to print only on the main process. accelerator.print(f"epoch {epoch}:", eval_metric) # New Code # # We also run predictions on the test set at the very end fold_predictions = [] for step, batch in enumerate(test_dataloader): # We could avoid this line since we set the accelerator with `device_placement=True`. batch.to(accelerator.device) with torch.no_grad(): outputs = model(**batch) predictions = outputs.logits predictions, references = accelerator.gather_for_metrics((predictions, batch["labels"])) fold_predictions.append(predictions.cpu()) if i == 0: # We need all of the test predictions test_references.append(references.cpu()) # Use accelerator.print to print only on the main process. test_predictions.append(torch.cat(fold_predictions, dim=0)) # We now need to release all our memory and get rid of the current model, optimizer, etc accelerator.free_memory() # New Code # # Finally we check the accuracy of our folded results: test_references = torch.cat(test_references, dim=0) preds = torch.stack(test_predictions, dim=0).sum(dim=0).div(int(args.num_folds)).argmax(dim=-1) test_metric = metric.compute(predictions=preds, references=test_references) accelerator.print("Average test metrics from all folds:", test_metric) def main(): parser = argparse.ArgumentParser(description="Simple example of training script.") parser.add_argument( "--mixed_precision", type=str, default=None, choices=["no", "fp16", "bf16", "fp8"], help="Whether to use mixed precision. Choose" "between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10." 
"and an Nvidia Ampere GPU.", ) parser.add_argument("--cpu", action="store_true", help="If passed, will train on the CPU.") # New Code # parser.add_argument("--num_folds", type=int, default=3, help="The number of splits to perform across the dataset") args = parser.parse_args() config = {"lr": 2e-5, "num_epochs": 3, "seed": 42, "batch_size": 16} training_function(config, args) if __name__ == "__main__": main()
accelerate/examples/by_feature/cross_validation.py/0
{ "file_path": "accelerate/examples/by_feature/cross_validation.py", "repo_id": "accelerate", "token_count": 4458 }
4
{ "fp16": { "enabled": true, "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "weight_decay": "auto" } }, "scheduler": { "type": "WarmupDecayLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto", "total_num_steps": "auto" } }, "zero_optimization": { "stage": 3, "overlap_comm": true, "contiguous_gradients": true, "reduce_bucket_size": "auto", "stage3_prefetch_bucket_size": "auto", "stage3_param_persistence_threshold": "auto", "sub_group_size": 1e9, "stage3_max_live_parameters": 1e9, "stage3_max_reuse_distance": 1e9, "stage3_gather_16bit_weights_on_model_save": "auto" }, "gradient_accumulation_steps": 1, "gradient_clipping": "auto", "steps_per_print": 2000, "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "wall_clock_breakdown": false }
accelerate/examples/deepspeed_config_templates/zero_stage3_config.json/0
{ "file_path": "accelerate/examples/deepspeed_config_templates/zero_stage3_config.json", "repo_id": "accelerate", "token_count": 657 }
5
# Copyright 2022 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from manim import * class Stage4(Scene): def construct(self): mem = Rectangle(height=0.5,width=0.5) fill = Rectangle(height=0.46,width=0.46).set_stroke(width=0) meta_mem = Rectangle(height=0.25,width=0.25) cpu_left_col_base = [mem.copy() for i in range(6)] cpu_right_col_base = [mem.copy() for i in range(6)] cpu_left_col = VGroup(*cpu_left_col_base).arrange(UP, buff=0) cpu_right_col = VGroup(*cpu_right_col_base).arrange(UP, buff=0) cpu_rects = VGroup(cpu_left_col,cpu_right_col).arrange(RIGHT, buff=0) cpu_text = Text("CPU", font_size=24) cpu = Group(cpu_rects,cpu_text).arrange(DOWN, buff=0.5, aligned_edge=DOWN) cpu.move_to([-2.5,-.5,0]) self.add(cpu) gpu_base = [mem.copy() for i in range(4)] gpu_rect = VGroup(*gpu_base).arrange(UP,buff=0) gpu_text = Text("GPU", font_size=24) gpu = Group(gpu_rect,gpu_text).arrange(DOWN, buff=0.5, aligned_edge=DOWN) gpu.move_to([-1,-1,0]) self.add(gpu) model_base = [mem.copy() for i in range(6)] model_rect = VGroup(*model_base).arrange(RIGHT,buff=0) model_text = Text("Model", font_size=24) model = Group(model_rect,model_text).arrange(DOWN, buff=0.5, aligned_edge=DOWN) model.move_to([3, -1., 0]) self.add(model) model_cpu_arr = [] model_meta_arr = [] for i,rect in enumerate(model_base): rect.set_stroke(YELLOW) cpu_target = Rectangle(height=0.46/4,width=0.46/3).set_stroke(width=0.).set_fill(YELLOW, opacity=0.7) if i == 0: cpu_target.next_to(cpu_left_col_base[0].get_corner(DOWN+LEFT), buff=0.02, direction=UP) cpu_target.set_x(cpu_target.get_x()+0.1) elif i == 3: cpu_target.next_to(model_cpu_arr[0], direction=UP, buff=0.) else: cpu_target.next_to(model_cpu_arr[i-1], direction=RIGHT, buff=0.) 
self.add(cpu_target) model_cpu_arr.append(cpu_target) self.add(*model_cpu_arr, *model_meta_arr) disk_left_col_base = [meta_mem.copy() for i in range(6)] disk_right_col_base = [meta_mem.copy() for i in range(6)] disk_left_col = VGroup(*disk_left_col_base).arrange(UP, buff=0) disk_right_col = VGroup(*disk_right_col_base).arrange(UP, buff=0) disk_rects = VGroup(disk_left_col,disk_right_col).arrange(RIGHT, buff=0) disk_text = Text("Disk", font_size=24) disk = Group(disk_rects,disk_text).arrange(DOWN, buff=0.5, aligned_edge=DOWN) disk.move_to([-4.,-1.25,0]) self.add(disk_text, disk_rects) cpu_disk_arr = [] for i in range(6): target = fill.copy().set_fill(BLUE, opacity=0.8) target.move_to(disk_left_col_base[i]).scale(0.5) cpu_disk_arr.append(target) self.add(*cpu_disk_arr) key = Square(side_length=2.2) key.move_to([-5, 2, 0]) key_text = MarkupText( f"<b>Key:</b>\n\n<span fgcolor='{YELLOW}'>●</span> Empty Model", font_size=18, ) key_text.move_to([-5, 2.4, 0]) self.add(key_text, key) blue_text = MarkupText( f"<span fgcolor='{BLUE}'>●</span> Checkpoint", font_size=18, ) blue_text.next_to(key_text, DOWN*2.4, aligned_edge=key_text.get_left()) self.add(blue_text) step_5 = MarkupText( f'The offloaded weights are all sent to the CPU.', font_size=24 ) step_5.move_to([2, 2, 0]) self.play(Write(step_5, run_time=3)) for i in range(6): rect = cpu_disk_arr[i] cp2 = rect.copy().set_fill(BLUE, opacity=0.8).scale(2.0) cp2.generate_target() cp2.target.move_to(model_base[i]) if i == 0: rect.set_fill(BLUE, opacity=0.8) rect.generate_target() rect.target.move_to(cpu_left_col_base[0]).scale(2.0) self.remove(*model_meta_arr, *model_cpu_arr, ) else: rect.generate_target() rect.target.move_to(cpu_left_col_base[i]).scale(2.0) self.play( MoveToTarget(rect), MoveToTarget(cp2), model_base[i].animate.set_stroke(WHITE) ) self.play(FadeOut(step_5)) step_5 = MarkupText( f'Finally, hooks are added to each weight in the model\nto transfer the weights from CPU to GPU\n\t\tand back when needed.', font_size=24 ) step_5.move_to([2, 2, 0]) self.play(Write(step_5, run_time=3)) arrows = [] animations = [] for i in range(6): a = Arrow(start=UP, end=DOWN, color=RED, buff=.5) a.next_to(model_base[i].get_left(), UP, buff=0.2) arrows.append(a) animations.append(Write(a)) self.play(*animations) self.wait()
accelerate/manim_animations/big_model_inference/stage_4.py/0
{ "file_path": "accelerate/manim_animations/big_model_inference/stage_4.py", "repo_id": "accelerate", "token_count": 2919 }
6
#!/usr/bin/env python # Copyright 2021 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import json import os from ...utils.constants import SAGEMAKER_PARALLEL_EC2_INSTANCES, TORCH_DYNAMO_MODES from ...utils.dataclasses import ComputeEnvironment, SageMakerDistributedType from ...utils.imports import is_boto3_available from .config_args import SageMakerConfig from .config_utils import ( DYNAMO_BACKENDS, _ask_field, _ask_options, _convert_dynamo_backend, _convert_mixed_precision, _convert_sagemaker_distributed_mode, _convert_yes_no_to_bool, ) if is_boto3_available(): import boto3 # noqa: F401 def _create_iam_role_for_sagemaker(role_name): iam_client = boto3.client("iam") sagemaker_trust_policy = { "Version": "2012-10-17", "Statement": [ {"Effect": "Allow", "Principal": {"Service": "sagemaker.amazonaws.com"}, "Action": "sts:AssumeRole"} ], } try: # create the role, associated with the chosen trust policy iam_client.create_role( RoleName=role_name, AssumeRolePolicyDocument=json.dumps(sagemaker_trust_policy, indent=2) ) policy_document = { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "sagemaker:*", "ecr:GetDownloadUrlForLayer", "ecr:BatchGetImage", "ecr:BatchCheckLayerAvailability", "ecr:GetAuthorizationToken", "cloudwatch:PutMetricData", "cloudwatch:GetMetricData", "cloudwatch:GetMetricStatistics", "cloudwatch:ListMetrics", "logs:CreateLogGroup", "logs:CreateLogStream", "logs:DescribeLogStreams", "logs:PutLogEvents", "logs:GetLogEvents", "s3:CreateBucket", "s3:ListBucket", "s3:GetBucketLocation", "s3:GetObject", "s3:PutObject", ], "Resource": "*", } ], } # attach policy to role iam_client.put_role_policy( RoleName=role_name, PolicyName=f"{role_name}_policy_permission", PolicyDocument=json.dumps(policy_document, indent=2), ) except iam_client.exceptions.EntityAlreadyExistsException: print(f"role {role_name} already exists. 
Using existing one") def _get_iam_role_arn(role_name): iam_client = boto3.client("iam") return iam_client.get_role(RoleName=role_name)["Role"]["Arn"] def get_sagemaker_input(): credentials_configuration = _ask_options( "How do you want to authorize?", ["AWS Profile", "Credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY) "], int, ) aws_profile = None if credentials_configuration == 0: aws_profile = _ask_field("Enter your AWS Profile name: [default] ", default="default") os.environ["AWS_PROFILE"] = aws_profile else: print( "Note you will need to provide AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY when you launch you training script with," "`accelerate launch --aws_access_key_id XXX --aws_secret_access_key YYY`" ) aws_access_key_id = _ask_field("AWS Access Key ID: ") os.environ["AWS_ACCESS_KEY_ID"] = aws_access_key_id aws_secret_access_key = _ask_field("AWS Secret Access Key: ") os.environ["AWS_SECRET_ACCESS_KEY"] = aws_secret_access_key aws_region = _ask_field("Enter your AWS Region: [us-east-1]", default="us-east-1") os.environ["AWS_DEFAULT_REGION"] = aws_region role_management = _ask_options( "Do you already have an IAM Role for executing Amazon SageMaker Training Jobs?", ["Provide IAM Role name", "Create new IAM role using credentials"], int, ) if role_management == 0: iam_role_name = _ask_field("Enter your IAM role name: ") else: iam_role_name = "accelerate_sagemaker_execution_role" print(f'Accelerate will create an iam role "{iam_role_name}" using the provided credentials') _create_iam_role_for_sagemaker(iam_role_name) is_custom_docker_image = _ask_field( "Do you want to use custom Docker image? [yes/NO]: ", _convert_yes_no_to_bool, default=False, error_message="Please enter yes or no.", ) docker_image = None if is_custom_docker_image: docker_image = _ask_field("Enter your Docker image: ", lambda x: str(x).lower()) is_sagemaker_inputs_enabled = _ask_field( "Do you want to provide SageMaker input channels with data locations? [yes/NO]: ", _convert_yes_no_to_bool, default=False, error_message="Please enter yes or no.", ) sagemaker_inputs_file = None if is_sagemaker_inputs_enabled: sagemaker_inputs_file = _ask_field( "Enter the path to the SageMaker inputs TSV file with columns (channel_name, data_location): ", lambda x: str(x).lower(), ) is_sagemaker_metrics_enabled = _ask_field( "Do you want to enable SageMaker metrics? [yes/NO]: ", _convert_yes_no_to_bool, default=False, error_message="Please enter yes or no.", ) sagemaker_metrics_file = None if is_sagemaker_metrics_enabled: sagemaker_metrics_file = _ask_field( "Enter the path to the SageMaker metrics TSV file with columns (metric_name, metric_regex): ", lambda x: str(x).lower(), ) distributed_type = _ask_options( "What is the distributed mode?", ["No distributed training", "Data parallelism"], _convert_sagemaker_distributed_mode, ) dynamo_config = {} use_dynamo = _ask_field( "Do you wish to optimize your script with torch dynamo?[yes/NO]:", _convert_yes_no_to_bool, default=False, error_message="Please enter yes or no.", ) if use_dynamo: prefix = "dynamo_" dynamo_config[prefix + "backend"] = _ask_options( "Which dynamo backend would you like to use?", [x.lower() for x in DYNAMO_BACKENDS], _convert_dynamo_backend, default=2, ) use_custom_options = _ask_field( "Do you want to customize the defaults sent to torch.compile? 
[yes/NO]: ", _convert_yes_no_to_bool, default=False, error_message="Please enter yes or no.", ) if use_custom_options: dynamo_config[prefix + "mode"] = _ask_options( "Which mode do you want to use?", TORCH_DYNAMO_MODES, lambda x: TORCH_DYNAMO_MODES[int(x)], default="default", ) dynamo_config[prefix + "use_fullgraph"] = _ask_field( "Do you want the fullgraph mode or it is ok to break model into several subgraphs? [yes/NO]: ", _convert_yes_no_to_bool, default=False, error_message="Please enter yes or no.", ) dynamo_config[prefix + "use_dynamic"] = _ask_field( "Do you want to enable dynamic shape tracing? [yes/NO]: ", _convert_yes_no_to_bool, default=False, error_message="Please enter yes or no.", ) ec2_instance_query = "Which EC2 instance type you want to use for your training?" if distributed_type != SageMakerDistributedType.NO: ec2_instance_type = _ask_options( ec2_instance_query, SAGEMAKER_PARALLEL_EC2_INSTANCES, lambda x: SAGEMAKER_PARALLEL_EC2_INSTANCES[int(x)] ) else: ec2_instance_query += "? [ml.p3.2xlarge]:" ec2_instance_type = _ask_field(ec2_instance_query, lambda x: str(x).lower(), default="ml.p3.2xlarge") debug = False if distributed_type != SageMakerDistributedType.NO: debug = _ask_field( "Should distributed operations be checked while running for errors? This can avoid timeout issues but will be slower. [yes/NO]: ", _convert_yes_no_to_bool, default=False, error_message="Please enter yes or no.", ) num_machines = 1 if distributed_type in (SageMakerDistributedType.DATA_PARALLEL, SageMakerDistributedType.MODEL_PARALLEL): num_machines = _ask_field( "How many machines do you want use? [1]: ", int, default=1, ) mixed_precision = _ask_options( "Do you wish to use FP16 or BF16 (mixed precision)?", ["no", "fp16", "bf16", "fp8"], _convert_mixed_precision, ) if use_dynamo and mixed_precision == "no": print( "Torch dynamo used without mixed precision requires TF32 to be efficient. Accelerate will enable it by default when launching your scripts." ) return SageMakerConfig( image_uri=docker_image, compute_environment=ComputeEnvironment.AMAZON_SAGEMAKER, distributed_type=distributed_type, use_cpu=False, dynamo_config=dynamo_config, ec2_instance_type=ec2_instance_type, profile=aws_profile, region=aws_region, iam_role_name=iam_role_name, mixed_precision=mixed_precision, num_machines=num_machines, sagemaker_inputs_file=sagemaker_inputs_file, sagemaker_metrics_file=sagemaker_metrics_file, debug=debug, )
accelerate/src/accelerate/commands/config/sagemaker.py/0
{ "file_path": "accelerate/src/accelerate/commands/config/sagemaker.py", "repo_id": "accelerate", "token_count": 4784 }
7
# Copyright 2024 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import math from types import MethodType from typing import Any, Dict, List, Optional, Tuple, Union from .state import PartialState from .utils import ( calculate_maximum_sizes, convert_bytes, copy_tensor_to_devices, ignorant_find_batch_size, infer_auto_device_map, is_pippy_available, pad_input_tensors, send_to_device, ) if is_pippy_available(): from pippy.IR import Pipe, PipeSplitWrapper, annotate_split_points from pippy.PipelineStage import PipelineStage def generate_device_map(model, num_processes: int = 1, no_split_module_classes=None, max_memory: dict = None): """ Calculates the device map for `model` with an offset for PiPPy """ if num_processes == 1: return infer_auto_device_map(model, no_split_module_classes=no_split_module_classes, clean_result=False) if max_memory is None: model_size, shared = calculate_maximum_sizes(model) # Split into `n` chunks for each GPU memory = (model_size + shared[0]) / num_processes memory = convert_bytes(memory) value, ending = memory.split(" ") # Add a chunk to deal with potential extra shared memory instances memory = math.ceil(float(value)) * 1.1 memory = f"{memory} {ending}" max_memory = {i: memory for i in range(num_processes)} device_map = infer_auto_device_map( model, max_memory=max_memory, no_split_module_classes=no_split_module_classes, clean_result=False, ) return device_map def find_pippy_batch_size(args, kwargs): found_batch_size = None if args is not None: for arg in args: found_batch_size = ignorant_find_batch_size(arg) if found_batch_size is not None: break if kwargs is not None and found_batch_size is None: for kwarg in kwargs.values(): found_batch_size = ignorant_find_batch_size(kwarg) if found_batch_size is not None: break return found_batch_size def build_pipeline(model, split_points, args, kwargs, num_chunks): """ Attaches the split points to the model based on `self.device_map` and generates a `PipelineStage`. Requires passing in needed `args` and `kwargs` as the model needs on the CPU. Users can pass in custom `num_chunks` as an optional hyper-parameter. 
By default will use `AcceleratorState.num_processes` """ # We need to annotate the split points in the model for PiPPy state = PartialState() annotate_split_points(model, {split_point: PipeSplitWrapper.SplitPoint.BEGINNING for split_point in split_points}) found_batch_size = find_pippy_batch_size(args, kwargs) if found_batch_size != num_chunks: if args is not None: args = pad_input_tensors(args, found_batch_size, num_chunks) if kwargs is not None: kwargs = pad_input_tensors(kwargs, found_batch_size, num_chunks) pipe = Pipe.from_tracing(model, num_chunks=num_chunks, example_args=args, example_kwargs=kwargs) stage = PipelineStage(pipe, state.local_process_index, device=state.device) return stage def pippy_forward(forward, num_chunks, gather_output, *args, **kwargs): state = PartialState() output = None if state.num_processes == 1: output = forward(*args, **kwargs) elif state.is_local_main_process: found_batch_size = find_pippy_batch_size(args, kwargs) if found_batch_size is None: raise ValueError("Could not find batch size from args or kwargs") else: if found_batch_size != num_chunks: args = pad_input_tensors(args, found_batch_size, num_chunks) kwargs = pad_input_tensors(kwargs, found_batch_size, num_chunks) forward(*args, **kwargs) elif state.is_last_process: output = forward() else: forward() if gather_output: # Each node will get a copy of the full output which is only on the last GPU output = copy_tensor_to_devices(output) return output def prepare_pippy( model, split_points: Optional[Union[str, List[str]]] = "auto", no_split_module_classes: Optional[List[str]] = None, example_args: Optional[Tuple[Any]] = (), example_kwargs: Optional[Dict[str, Any]] = None, num_chunks: Optional[int] = None, gather_output: Optional[bool] = False, ): """ Wraps `model` for pipeline parallel inference. Args: model (`torch.nn.Module`): A model we want to split for pipeline-parallel inference split_points (`str` or `List[str]`, defaults to 'auto'): How to generate the split points and chunk the model across each GPU. 'auto' will find the best balanced split given any model. Should be a list of layer names in the model to split by otherwise. no_split_module_classes (`List[str]`): A list of class names for layers we don't want to be split. example_args (tuple of model inputs): The expected inputs for the model that uses order-based inputs. Recommended to use this method if possible. example_kwargs (dict of model inputs) The expected inputs for the model that uses dictionary-based inputs. This is a *highly* limiting structure that requires the same keys be present at *all* inference calls. Not recommended unless the prior condition is true for all cases. num_chunks (`int`, defaults to the number of available GPUs): The number of different stages the Pipeline will have. By default it will assign one chunk per GPU, but this can be tuned and played with. In general one should have num_chunks >= num_gpus. gather_output (`bool`, defaults to `False`): If `True`, the output from the last GPU (which holds the true outputs) is sent across to all GPUs. """ if not is_pippy_available(): raise ImportError( "`pippy` was not found to be installed on your system. 
Please " "install using `pip install torchpippy` or ensure you have at least version 0.2.0" ) state = PartialState() example_args = send_to_device(example_args, "cpu") example_kwargs = send_to_device(example_kwargs, "cpu") if num_chunks is None: num_chunks = state.num_processes if split_points == "auto": device_map = generate_device_map(model, num_chunks, no_split_module_classes=no_split_module_classes) split_points = [] for i in range(1, num_chunks): split_points.append(next(k for k, v in device_map.items() if v == i)) model.hf_split_points = split_points stage = build_pipeline(model, split_points, example_args, example_kwargs, num_chunks) model._original_forward = model.forward model._original_call = model.__call__ model.pippy_stage = stage model.hf_split_points = split_points def forward(*args, **kwargs): return pippy_forward(stage.forward, num_chunks, gather_output, *args, **kwargs) # To act like a decorator so that it can be popped when doing `extract_model_from_parallel` # Note: creates an infinite recursion loop with `generate` model_forward = MethodType(forward, model) forward.__wrapped__ = model_forward model.forward = forward return model
accelerate/src/accelerate/inference.py/0
{ "file_path": "accelerate/src/accelerate/inference.py", "repo_id": "accelerate", "token_count": 2991 }
8
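Below is a short usage sketch (not part of the repository files) for the `prepare_pippy` API defined in `accelerate/src/accelerate/inference.py` above. It assumes a multi-GPU run started with `accelerate launch`; the "gpt2" checkpoint and the prompt are placeholders chosen for illustration.

```python
# Hedged sketch: wrap a transformers model for pipeline-parallel inference with
# `prepare_pippy`. Checkpoint name and prompt are assumptions, not from the source.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

from accelerate import PartialState
from accelerate.inference import prepare_pippy

state = PartialState()
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Pipeline parallelism splits a model across GPUs.", return_tensors="pt")

# Trace the model and split it into one stage per process ("auto" split points).
model = prepare_pippy(
    model,
    split_points="auto",
    example_args=(inputs["input_ids"],),
    gather_output=True,  # broadcast the last stage's output to every rank
)

with torch.no_grad():
    output = model(inputs["input_ids"].to(state.device))

if state.is_main_process:
    print(output)
```

The test file that follows (`test_pippy.py`) exercises the same pattern against GPT-2, T5, and a ResNet.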
# Copyright 2024 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import torch from torchvision.models import resnet34 from transformers import ( BertConfig, BertForMaskedLM, GPT2Config, GPT2ForSequenceClassification, T5Config, T5ForConditionalGeneration, ) from accelerate import PartialState from accelerate.inference import prepare_pippy from accelerate.utils import DistributedType, send_to_device, set_seed model_to_config = { "t5": (T5ForConditionalGeneration, T5Config, 1024), "bert": (BertForMaskedLM, BertConfig, 512), "gpt2": (GPT2ForSequenceClassification, GPT2Config, 1024), } def get_model_and_data_for_text(model_name, device, num_processes: int = 2): initializer, config, seq_len = model_to_config[model_name] config_args = {} # Eventually needed for batch inference tests on gpt-2 when bs != 1 # if model_name == "gpt2": # config_args["pad_token_id"] = 0 model_config = config(**config_args) model = initializer(model_config) return model, torch.randint( low=0, high=model_config.vocab_size, size=(num_processes, seq_len), device=device, dtype=torch.int64, requires_grad=False, ) def test_gpt2(batch_size: int = 2): set_seed(42) state = PartialState() model, inputs = get_model_and_data_for_text("gpt2", "cpu", batch_size) model = prepare_pippy(model, example_args=(inputs,), no_split_module_classes=model._no_split_modules) # For inference args need to be a tuple inputs = inputs.to("cuda") with torch.no_grad(): output = model(inputs) # Zach: Check that we just grab the real outputs we need at the end if not state.is_last_process: assert output is None, "Output was not generated on just the last process!" else: assert output is not None, "Output was not generated in the last process!" def test_t5(batch_size: int = 2): set_seed(42) state = PartialState() model, inputs = get_model_and_data_for_text("t5", "cpu", batch_size) example_inputs = {"input_ids": inputs, "decoder_input_ids": inputs} model = prepare_pippy( model, no_split_module_classes=model._no_split_modules, example_kwargs=example_inputs, ) # For inference args need to be a tuple inputs = send_to_device(example_inputs, "cuda:0") with torch.no_grad(): output = model(*inputs.values()) # Zach: Check that we just grab the real outputs we need at the end if not state.is_last_process: assert output is None, "Output was not generated on just the last process!" else: assert output is not None, "Output was not generated in the last process!" def test_resnet(batch_size: int = 2): set_seed(42) state = PartialState() model = resnet34() input_tensor = torch.rand(batch_size, 3, 224, 224) model = prepare_pippy( model, example_args=(input_tensor,), ) inputs = send_to_device(input_tensor, "cuda:0") with torch.no_grad(): output = model(inputs) # Zach: Check that we just grab the real outputs we need at the end if not state.is_last_process: assert output is None, "Output was not generated on just the last process!" else: assert output is not None, "Output was not generated in the last process!" 
if __name__ == "__main__": state = PartialState() state.print("Testing pippy integration...") if state.distributed_type == DistributedType.MULTI_GPU: state.print("Testing GPT2...") test_gpt2() # Issue: When modifying the tokenizer for batch GPT2 inference, there's an issue # due to references # NameError: cannot access free variable 'chunk_args_list' where it is not associated with a value in enclosing scope # test_gpt2(3) state.print("Testing T5...") test_t5() test_t5(1) test_t5(3) state.print("Testing CV model...") test_resnet() test_resnet(3) else: print("Less than two GPUs found, not running tests!")
accelerate/src/accelerate/test_utils/scripts/external_deps/test_pippy.py/0
{ "file_path": "accelerate/src/accelerate/test_utils/scripts/external_deps/test_pippy.py", "repo_id": "accelerate", "token_count": 1729 }
9
# Copyright 2023 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import os import torch from ..logging import get_logger from .constants import FSDP_MODEL_NAME, FSDP_PYTORCH_VERSION, OPTIMIZER_NAME from .imports import is_torch_distributed_available from .modeling import is_peft_model from .versions import is_torch_version if is_torch_version(">=", FSDP_PYTORCH_VERSION) and is_torch_distributed_available(): import torch.distributed.checkpoint as dist_cp from torch.distributed.checkpoint.default_planner import DefaultLoadPlanner, DefaultSavePlanner from torch.distributed.checkpoint.optimizer import load_sharded_optimizer_state_dict from torch.distributed.fsdp.fully_sharded_data_parallel import FullyShardedDataParallel as FSDP from torch.distributed.fsdp.fully_sharded_data_parallel import StateDictType logger = get_logger(__name__) def _get_model_state_dict(model, adapter_only=False): if adapter_only and is_peft_model(model): from peft import get_peft_model_state_dict return get_peft_model_state_dict(model, adapter_name=model.active_adapter) else: return model.state_dict() def _set_model_state_dict(model, state_dict, adapter_only=False): if adapter_only and is_peft_model(model): from peft import set_peft_model_state_dict return set_peft_model_state_dict(model, state_dict, adapter_name=model.active_adapter) else: return model.load_state_dict(state_dict) def save_fsdp_model(fsdp_plugin, accelerator, model, output_dir, model_index=0, adapter_only=False): os.makedirs(output_dir, exist_ok=True) if fsdp_plugin.state_dict_type == StateDictType.FULL_STATE_DICT: # FSDP raises error when single GPU is used with `offload_to_cpu=True` for FULL_STATE_DICT # so, only enable it when num_processes>1 is_multi_process = accelerator.num_processes > 1 fsdp_plugin.state_dict_config.offload_to_cpu = is_multi_process fsdp_plugin.state_dict_config.rank0_only = is_multi_process with FSDP.state_dict_type( model, fsdp_plugin.state_dict_type, fsdp_plugin.state_dict_config, fsdp_plugin.optim_state_dict_config ): state_dict = _get_model_state_dict(model, adapter_only=adapter_only) if fsdp_plugin.state_dict_type == StateDictType.FULL_STATE_DICT: weights_name = f"{FSDP_MODEL_NAME}.bin" if model_index == 0 else f"{FSDP_MODEL_NAME}_{model_index}.bin" output_model_file = os.path.join(output_dir, weights_name) if accelerator.process_index == 0: logger.info(f"Saving model to {output_model_file}") torch.save(state_dict, output_model_file) logger.info(f"Model saved to {output_model_file}") elif fsdp_plugin.state_dict_type == StateDictType.LOCAL_STATE_DICT: weights_name = ( f"{FSDP_MODEL_NAME}_rank{accelerator.process_index}.bin" if model_index == 0 else f"{FSDP_MODEL_NAME}_{model_index}_rank{accelerator.process_index}.bin" ) output_model_file = os.path.join(output_dir, weights_name) logger.info(f"Saving model to {output_model_file}") torch.save(state_dict, output_model_file) logger.info(f"Model saved to {output_model_file}") elif fsdp_plugin.state_dict_type == StateDictType.SHARDED_STATE_DICT: ckpt_dir 
= os.path.join(output_dir, f"{FSDP_MODEL_NAME}_{model_index}") os.makedirs(ckpt_dir, exist_ok=True) logger.info(f"Saving model to {ckpt_dir}") state_dict = {"model": state_dict} dist_cp.save_state_dict( state_dict=state_dict, storage_writer=dist_cp.FileSystemWriter(ckpt_dir), planner=DefaultSavePlanner(), ) logger.info(f"Model saved to {ckpt_dir}") def load_fsdp_model(fsdp_plugin, accelerator, model, input_dir, model_index=0, adapter_only=False): accelerator.wait_for_everyone() if fsdp_plugin.state_dict_type == StateDictType.FULL_STATE_DICT: # FSDP raises error when single GPU is used with `offload_to_cpu=True` for FULL_STATE_DICT # so, only enable it when num_processes>1 is_multi_process = accelerator.num_processes > 1 fsdp_plugin.state_dict_config.offload_to_cpu = is_multi_process fsdp_plugin.state_dict_config.rank0_only = is_multi_process with FSDP.state_dict_type( model, fsdp_plugin.state_dict_type, fsdp_plugin.state_dict_config, fsdp_plugin.optim_state_dict_config ): if fsdp_plugin.state_dict_type == StateDictType.FULL_STATE_DICT: if type(model) != FSDP and accelerator.process_index != 0: if not fsdp_plugin.sync_module_states: raise ValueError( "Set the `sync_module_states` flag to `True` so that model states are synced across processes when " "initializing FSDP object" ) return weights_name = f"{FSDP_MODEL_NAME}.bin" if model_index == 0 else f"{FSDP_MODEL_NAME}_{model_index}.bin" input_model_file = os.path.join(input_dir, weights_name) logger.info(f"Loading model from {input_model_file}") state_dict = torch.load(input_model_file) logger.info(f"Model loaded from {input_model_file}") elif fsdp_plugin.state_dict_type == StateDictType.LOCAL_STATE_DICT: weights_name = ( f"{FSDP_MODEL_NAME}_rank{accelerator.process_index}.bin" if model_index == 0 else f"{FSDP_MODEL_NAME}_{model_index}_rank{accelerator.process_index}.bin" ) input_model_file = os.path.join(input_dir, weights_name) logger.info(f"Loading model from {input_model_file}") state_dict = torch.load(input_model_file) logger.info(f"Model loaded from {input_model_file}") elif fsdp_plugin.state_dict_type == StateDictType.SHARDED_STATE_DICT: ckpt_dir = ( os.path.join(input_dir, f"{FSDP_MODEL_NAME}_{model_index}") if f"{FSDP_MODEL_NAME}" not in input_dir else input_dir ) logger.info(f"Loading model from {ckpt_dir}") state_dict = {"model": _get_model_state_dict(model, adapter_only=adapter_only)} dist_cp.load_state_dict( state_dict=state_dict, storage_reader=dist_cp.FileSystemReader(ckpt_dir), planner=DefaultLoadPlanner(), ) state_dict = state_dict["model"] logger.info(f"Model loaded from {ckpt_dir}") load_result = _set_model_state_dict(model, state_dict, adapter_only=adapter_only) return load_result def save_fsdp_optimizer(fsdp_plugin, accelerator, optimizer, model, output_dir, optimizer_index=0): os.makedirs(output_dir, exist_ok=True) with FSDP.state_dict_type( model, fsdp_plugin.state_dict_type, fsdp_plugin.state_dict_config, fsdp_plugin.optim_state_dict_config ): optim_state = FSDP.optim_state_dict(model, optimizer) if fsdp_plugin.state_dict_type == StateDictType.FULL_STATE_DICT: if accelerator.process_index == 0: optim_state_name = ( f"{OPTIMIZER_NAME}.bin" if optimizer_index == 0 else f"{OPTIMIZER_NAME}_{optimizer_index}.bin" ) output_optimizer_file = os.path.join(output_dir, optim_state_name) logger.info(f"Saving Optimizer state to {output_optimizer_file}") torch.save(optim_state, output_optimizer_file) logger.info(f"Optimizer state saved in {output_optimizer_file}") else: ckpt_dir = os.path.join(output_dir, 
f"{OPTIMIZER_NAME}_{optimizer_index}") os.makedirs(ckpt_dir, exist_ok=True) logger.info(f"Saving Optimizer state to {ckpt_dir}") dist_cp.save_state_dict( state_dict={"optimizer": optim_state}, storage_writer=dist_cp.FileSystemWriter(ckpt_dir), planner=DefaultSavePlanner(), ) logger.info(f"Optimizer state saved in {ckpt_dir}") def load_fsdp_optimizer(fsdp_plugin, accelerator, optimizer, model, input_dir, optimizer_index=0, adapter_only=False): accelerator.wait_for_everyone() with FSDP.state_dict_type( model, fsdp_plugin.state_dict_type, fsdp_plugin.state_dict_config, fsdp_plugin.optim_state_dict_config ): if fsdp_plugin.state_dict_type == StateDictType.FULL_STATE_DICT: optim_state = None if accelerator.process_index == 0 or not fsdp_plugin.optim_state_dict_config.rank0_only: optimizer_name = ( f"{OPTIMIZER_NAME}.bin" if optimizer_index == 0 else f"{OPTIMIZER_NAME}_{optimizer_index}.bin" ) input_optimizer_file = os.path.join(input_dir, optimizer_name) logger.info(f"Loading Optimizer state from {input_optimizer_file}") optim_state = torch.load(input_optimizer_file) logger.info(f"Optimizer state loaded from {input_optimizer_file}") else: ckpt_dir = ( os.path.join(input_dir, f"{OPTIMIZER_NAME}_{optimizer_index}") if f"{OPTIMIZER_NAME}" not in input_dir else input_dir ) logger.info(f"Loading Optimizer from {ckpt_dir}") optim_state = load_sharded_optimizer_state_dict( model_state_dict=_get_model_state_dict(model, adapter_only=adapter_only), optimizer_key="optimizer", storage_reader=dist_cp.FileSystemReader(ckpt_dir), ) optim_state = optim_state["optimizer"] logger.info(f"Optimizer loaded from {ckpt_dir}") flattened_osd = FSDP.optim_state_dict_to_load(model=model, optim=optimizer, optim_state_dict=optim_state) optimizer.load_state_dict(flattened_osd)
accelerate/src/accelerate/utils/fsdp_utils.py/0
{ "file_path": "accelerate/src/accelerate/utils/fsdp_utils.py", "repo_id": "accelerate", "token_count": 4830 }
10
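A hedged sketch of driving the FSDP checkpoint helpers above directly. In normal use `Accelerator.save_state`/`load_state` call them for you; the tiny model, learning rate, checkpoint directory, and the `accelerator.state.fsdp_plugin` access are illustrative assumptions, and the snippet only makes sense in a run launched with an FSDP config.

```python
# Sketch: save and reload FSDP model/optimizer state with the helpers from
# accelerate.utils.fsdp_utils. All concrete values below are placeholders.
import torch
import torch.nn as nn

from accelerate import Accelerator
from accelerate.utils.fsdp_utils import (
    load_fsdp_model,
    load_fsdp_optimizer,
    save_fsdp_model,
    save_fsdp_optimizer,
)

accelerator = Accelerator()  # launched with an FSDP config via `accelerate launch`
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
model, optimizer = accelerator.prepare(model, optimizer)

fsdp_plugin = accelerator.state.fsdp_plugin  # assumed attribute when FSDP is active
ckpt_dir = "checkpoints/step_100"

# Write weights and optimizer state in the layout the load_* helpers expect.
save_fsdp_model(fsdp_plugin, accelerator, model, ckpt_dir)
save_fsdp_optimizer(fsdp_plugin, accelerator, optimizer, model, ckpt_dir)

# Later, with the same (prepared) model and optimizer objects:
load_fsdp_model(fsdp_plugin, accelerator, model, ckpt_dir)
load_fsdp_optimizer(fsdp_plugin, accelerator, optimizer, model, ckpt_dir)
```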
{ "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "bf16": { "enabled": "auto" }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "weight_decay": "auto", "torch_adam": true, "adam_w_mode": true } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto" } }, "zero_optimization": { "stage": 3, "offload_optimizer": { "device": "cpu", "pin_memory": true }, "offload_param": { "device": "cpu", "pin_memory": true }, "overlap_comm": true, "contiguous_gradients": true, "sub_group_size": 1e9, "reduce_bucket_size": "auto", "stage3_prefetch_bucket_size": "auto", "stage3_param_persistence_threshold": "auto", "stage3_max_live_parameters": 1e9, "stage3_max_reuse_distance": 1e9, "stage3_gather_16bit_weights_on_model_save": "auto" }, "gradient_accumulation_steps": 1, "gradient_clipping": "auto", "steps_per_print": 2000, "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "wall_clock_breakdown": false }
accelerate/tests/deepspeed/ds_config_zero3.json/0
{ "file_path": "accelerate/tests/deepspeed/ds_config_zero3.json", "repo_id": "accelerate", "token_count": 825 }
11
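A sketch (not from the original files) of pointing Accelerate at the ZeRO-3 JSON above through a `DeepSpeedPlugin`. The config path is a placeholder, and the `"auto"` entries in the JSON are expected to be resolved by the training script (for example the HF Trainer) before DeepSpeed initializes.

```python
# Hedged sketch: wire the ZeRO-3 config file into an Accelerator. Path is assumed.
from accelerate import Accelerator
from accelerate.utils import DeepSpeedPlugin

deepspeed_plugin = DeepSpeedPlugin(hf_ds_config="tests/deepspeed/ds_config_zero3.json")
accelerator = Accelerator(deepspeed_plugin=deepspeed_plugin)
# model, optimizer, dataloader, scheduler = accelerator.prepare(...)
```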
# Copyright 2022 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import inspect import unittest import torch import torch.nn as nn from torch.fx import symbolic_trace from accelerate.hooks import ( AlignDevicesHook, ModelHook, SequentialHook, add_hook_to_module, attach_align_device_hook, remove_hook_from_module, remove_hook_from_submodules, ) from accelerate.test_utils import require_multi_gpu class ModelForTest(nn.Module): def __init__(self): super().__init__() self.linear1 = nn.Linear(3, 4) self.batchnorm = nn.BatchNorm1d(4) self.linear2 = nn.Linear(4, 5) def forward(self, x): return self.linear2(self.batchnorm(self.linear1(x))) class PreForwardHook(ModelHook): def pre_forward(self, module, *args, **kwargs): return (args[0] + 1,) + args[1:], kwargs class PostForwardHook(ModelHook): def post_forward(self, module, output): return output + 1 class HooksModelTester(unittest.TestCase): def test_add_and_remove_hooks(self): test_model = ModelForTest() test_hook = ModelHook() add_hook_to_module(test_model, test_hook) assert test_model._hf_hook == test_hook assert hasattr(test_model, "_old_forward") # Check adding the hook did not change the name or the signature assert test_model.forward.__name__ == "forward" assert list(inspect.signature(test_model.forward).parameters) == ["x"] remove_hook_from_module(test_model) assert not hasattr(test_model, "_hf_hook") assert not hasattr(test_model, "_old_forward") def test_append_and_remove_hooks(self): test_model = ModelForTest() test_hook = ModelHook() add_hook_to_module(test_model, test_hook) add_hook_to_module(test_model, test_hook, append=True) assert isinstance(test_model._hf_hook, SequentialHook) is True assert len(test_model._hf_hook.hooks) == 2 assert hasattr(test_model, "_old_forward") # Check adding the hook did not change the name or the signature assert test_model.forward.__name__ == "forward" assert list(inspect.signature(test_model.forward).parameters) == ["x"] remove_hook_from_module(test_model) assert not hasattr(test_model, "_hf_hook") assert not hasattr(test_model, "_old_forward") def test_pre_forward_hook_is_executed(self): test_model = ModelForTest() x = torch.randn(2, 3) expected = test_model(x + 1) expected2 = test_model(x + 2) test_hook = PreForwardHook() add_hook_to_module(test_model, test_hook) output1 = test_model(x) assert torch.allclose(output1, expected, atol=1e-5) # Attaching a hook to a model when it already has one replaces, does not chain test_hook = PreForwardHook() add_hook_to_module(test_model, test_hook) output1 = test_model(x) assert torch.allclose(output1, expected, atol=1e-5) # You need to use the sequential hook to chain two or more hooks test_hook = SequentialHook(PreForwardHook(), PreForwardHook()) add_hook_to_module(test_model, test_hook) output2 = test_model(x) assert torch.allclose(output2, expected2, atol=1e-5) def test_post_forward_hook_is_executed(self): test_model = ModelForTest() x = torch.randn(2, 3) output = test_model(x) test_hook = PostForwardHook() 
add_hook_to_module(test_model, test_hook) output1 = test_model(x) assert torch.allclose(output1, (output + 1), atol=1e-5) # Attaching a hook to a model when it already has one replaces, does not chain test_hook = PostForwardHook() add_hook_to_module(test_model, test_hook) output1 = test_model(x) assert torch.allclose(output1, (output + 1), atol=1e-5) # You need to use the sequential hook to chain two or more hooks test_hook = SequentialHook(PostForwardHook(), PostForwardHook()) add_hook_to_module(test_model, test_hook) output2 = test_model(x) assert torch.allclose(output2, output + 2, atol=1e-5) def test_no_grad_in_hook(self): test_model = ModelForTest() x = torch.randn(2, 3) output = test_model(x) test_hook = PostForwardHook() add_hook_to_module(test_model, test_hook) output1 = test_model(x) assert torch.allclose(output1, (output + 1)) assert output1.requires_grad test_hook.no_grad = True output1 = test_model(x) assert not output1.requires_grad @require_multi_gpu def test_align_devices_as_model_parallelism(self): model = ModelForTest() # Everything is on CPU assert model.linear1.weight.device == torch.device("cpu") assert model.batchnorm.weight.device == torch.device("cpu") assert model.linear2.weight.device == torch.device("cpu") # This will move each submodule on different devices add_hook_to_module(model.linear1, AlignDevicesHook(execution_device=0)) add_hook_to_module(model.batchnorm, AlignDevicesHook(execution_device=0)) add_hook_to_module(model.linear2, AlignDevicesHook(execution_device=1)) assert model.linear1.weight.device == torch.device(0) assert model.batchnorm.weight.device == torch.device(0) assert model.batchnorm.running_mean.device == torch.device(0) assert model.linear2.weight.device == torch.device(1) # We can still make a forward pass. The input does not need to be on any particular device x = torch.randn(2, 3) output = model(x) assert output.device == torch.device(1) # We can add a general hook to put back output on same device as input. add_hook_to_module(model, AlignDevicesHook(io_same_device=True)) x = torch.randn(2, 3).to(0) output = model(x) assert output.device == torch.device(0) def test_align_devices_as_cpu_offload(self): model = ModelForTest() # Everything is on CPU assert model.linear1.weight.device == torch.device("cpu") assert model.batchnorm.weight.device == torch.device("cpu") assert model.linear2.weight.device == torch.device("cpu") # This will move each submodule on different devices hook_kwargs = {"execution_device": 0 if torch.cuda.is_available() else "cpu", "offload": True} add_hook_to_module(model.linear1, AlignDevicesHook(**hook_kwargs)) add_hook_to_module(model.batchnorm, AlignDevicesHook(**hook_kwargs)) add_hook_to_module(model.linear2, AlignDevicesHook(**hook_kwargs)) # Parameters have been offloaded, so on the meta device assert model.linear1.weight.device == torch.device("meta") assert model.batchnorm.weight.device == torch.device("meta") assert model.linear2.weight.device == torch.device("meta") # Buffers are not included in the offload by default, so are on the execution device device = torch.device(hook_kwargs["execution_device"]) assert model.batchnorm.running_mean.device == device x = torch.randn(2, 3) output = model(x) assert output.device == device # Removing hooks loads back the weights in the model. 
remove_hook_from_module(model.linear1) remove_hook_from_module(model.batchnorm) remove_hook_from_module(model.linear2) assert model.linear1.weight.device == torch.device("cpu") assert model.batchnorm.weight.device == torch.device("cpu") assert model.linear2.weight.device == torch.device("cpu") # Now test with buffers included in the offload hook_kwargs = { "execution_device": 0 if torch.cuda.is_available() else "cpu", "offload": True, "offload_buffers": True, } add_hook_to_module(model.linear1, AlignDevicesHook(**hook_kwargs)) add_hook_to_module(model.batchnorm, AlignDevicesHook(**hook_kwargs)) add_hook_to_module(model.linear2, AlignDevicesHook(**hook_kwargs)) # Parameters have been offloaded, so on the meta device, buffers included assert model.linear1.weight.device == torch.device("meta") assert model.batchnorm.weight.device == torch.device("meta") assert model.linear2.weight.device == torch.device("meta") assert model.batchnorm.running_mean.device == torch.device("meta") x = torch.randn(2, 3) output = model(x) assert output.device == device # Removing hooks loads back the weights in the model. remove_hook_from_module(model.linear1) remove_hook_from_module(model.batchnorm) remove_hook_from_module(model.linear2) assert model.linear1.weight.device == torch.device("cpu") assert model.batchnorm.weight.device == torch.device("cpu") assert model.linear2.weight.device == torch.device("cpu") def test_attach_align_device_hook_as_cpu_offload(self): model = ModelForTest() # Everything is on CPU assert model.linear1.weight.device == torch.device("cpu") assert model.batchnorm.weight.device == torch.device("cpu") assert model.linear2.weight.device == torch.device("cpu") # This will move each submodule on different devices execution_device = 0 if torch.cuda.is_available() else "cpu" attach_align_device_hook(model, execution_device=execution_device, offload=True) # Parameters have been offloaded, so on the meta device assert model.linear1.weight.device == torch.device("meta") assert model.batchnorm.weight.device == torch.device("meta") assert model.linear2.weight.device == torch.device("meta") # Buffers are not included in the offload by default, so are on the execution device device = torch.device(execution_device) assert model.batchnorm.running_mean.device == device x = torch.randn(2, 3) output = model(x) assert output.device == device # Removing hooks loads back the weights in the model. remove_hook_from_submodules(model) assert model.linear1.weight.device == torch.device("cpu") assert model.batchnorm.weight.device == torch.device("cpu") assert model.linear2.weight.device == torch.device("cpu") # Now test with buffers included in the offload attach_align_device_hook(model, execution_device=execution_device, offload=True, offload_buffers=True) # Parameters have been offloaded, so on the meta device, buffers included assert model.linear1.weight.device == torch.device("meta") assert model.batchnorm.weight.device == torch.device("meta") assert model.linear2.weight.device == torch.device("meta") assert model.batchnorm.running_mean.device == torch.device("meta") x = torch.randn(2, 3) output = model(x) assert output.device == device # Removing hooks loads back the weights in the model. 
remove_hook_from_submodules(model) assert model.linear1.weight.device == torch.device("cpu") assert model.batchnorm.weight.device == torch.device("cpu") assert model.linear2.weight.device == torch.device("cpu") def test_attach_align_device_hook_as_cpu_offload_with_weight_map(self): model = ModelForTest() # Everything is on CPU assert model.linear1.weight.device == torch.device("cpu") assert model.batchnorm.weight.device == torch.device("cpu") assert model.linear2.weight.device == torch.device("cpu") # This will move each submodule on different devices execution_device = 0 if torch.cuda.is_available() else "cpu" attach_align_device_hook( model, execution_device=execution_device, offload=True, weights_map=model.state_dict() ) # Parameters have been offloaded, so on the meta device assert model.linear1.weight.device == torch.device("meta") assert model.batchnorm.weight.device == torch.device("meta") assert model.linear2.weight.device == torch.device("meta") # Buffers are not included in the offload by default, so are on the execution device device = torch.device(execution_device) assert model.batchnorm.running_mean.device == device x = torch.randn(2, 3) output = model(x) assert output.device == device # Removing hooks loads back the weights in the model. remove_hook_from_submodules(model) assert model.linear1.weight.device == torch.device("cpu") assert model.batchnorm.weight.device == torch.device("cpu") assert model.linear2.weight.device == torch.device("cpu") # Now test with buffers included in the offload attach_align_device_hook( model, execution_device=execution_device, offload=True, weights_map=model.state_dict(), offload_buffers=True, ) # Parameters have been offloaded, so on the meta device, buffers included assert model.linear1.weight.device == torch.device("meta") assert model.batchnorm.weight.device == torch.device("meta") assert model.linear2.weight.device == torch.device("meta") assert model.batchnorm.running_mean.device == torch.device("meta") x = torch.randn(2, 3) output = model(x) assert output.device == device # Removing hooks loads back the weights in the model. remove_hook_from_submodules(model) assert model.linear1.weight.device == torch.device("cpu") assert model.batchnorm.weight.device == torch.device("cpu") assert model.linear2.weight.device == torch.device("cpu") def test_add_remove_hook_fx_graph_module(self): with torch.no_grad(): test_model = ModelForTest() test_hook = ModelHook() x = torch.randn(2, 3) output1 = test_model(x) graph_model = symbolic_trace(test_model) output2 = graph_model(x) assert torch.allclose(output1, output2) add_hook_to_module(graph_model, test_hook) remove_hook_from_module(graph_model, recurse=True) # We want to make sure that `add_hook_to_module` and `remove_hook_from_module` yields back an fx.GraphModule # that behaves correctly (for example that is not frozen, see https://github.com/huggingface/accelerate/pull/2369). # For that, we add a sigmoid node to the FX graph and make sure that the new output (output3 below) is different than # the original model's output. 
linear2_node = None for node in graph_model.graph.nodes: if node.name == "linear2": linear2_node = node assert linear2_node is not None graph_model.graph.inserting_after(linear2_node) new_node = graph_model.graph.create_node( op="call_function", target=torch.sigmoid, args=(linear2_node,), name="relu" ) output_node = None for node in graph_model.graph.nodes: if node.name == "output": output_node = node assert output_node is not None output_node.replace_input_with(linear2_node, new_node) graph_model.graph.lint() graph_model.recompile() output3 = graph_model(x) # Now the output is expected to be different since we modified the graph. assert not torch.allclose(output1, output3)
accelerate/tests/test_hooks.py/0
{ "file_path": "accelerate/tests/test_hooks.py", "repo_id": "accelerate", "token_count": 6551 }
12
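A condensed sketch of the hook pattern the tests above exercise: a custom `ModelHook` that rescales inputs and shifts outputs, attached and removed with the public helpers. The class and factor values are illustrative.

```python
# Sketch: minimal custom ModelHook, mirroring the Pre/PostForwardHook tests above.
import torch
import torch.nn as nn

from accelerate.hooks import ModelHook, add_hook_to_module, remove_hook_from_module


class ScaleAndShiftHook(ModelHook):
    def pre_forward(self, module, *args, **kwargs):
        # Runs before the wrapped forward; must return (args, kwargs).
        return (args[0] * 2,) + args[1:], kwargs

    def post_forward(self, module, output):
        # Runs after the wrapped forward; must return the (possibly modified) output.
        return output + 1


layer = nn.Linear(3, 3)
add_hook_to_module(layer, ScaleAndShiftHook())

x = torch.randn(2, 3)
print(layer(x))  # pre_forward -> forward -> post_forward

remove_hook_from_module(layer)  # restores the original forward
```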
# Copyright 2022 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import csv import json import logging import os import re import subprocess import tempfile import unittest import zipfile from pathlib import Path from typing import Optional from unittest import mock import numpy as np import torch # We use TF to parse the logs from accelerate import Accelerator from accelerate.test_utils.testing import ( MockingTestCase, TempDirTestCase, require_clearml, require_comet_ml, require_dvclive, require_pandas, require_tensorboard, require_wandb, skip, ) from accelerate.tracking import CometMLTracker, GeneralTracker from accelerate.utils import ( ProjectConfiguration, is_comet_ml_available, is_dvclive_available, is_tensorboard_available, ) if is_comet_ml_available(): from comet_ml import OfflineExperiment if is_tensorboard_available(): import struct import tensorboard.compat.proto.event_pb2 as event_pb2 if is_dvclive_available(): from dvclive.plots.metric import Metric from dvclive.serialize import load_yaml from dvclive.utils import parse_metrics logger = logging.getLogger(__name__) @require_tensorboard class TensorBoardTrackingTest(unittest.TestCase): def test_init_trackers(self): project_name = "test_project_with_config" with tempfile.TemporaryDirectory() as dirpath: accelerator = Accelerator(log_with="tensorboard", project_dir=dirpath) config = {"num_iterations": 12, "learning_rate": 1e-2, "some_boolean": False, "some_string": "some_value"} accelerator.init_trackers(project_name, config) accelerator.end_training() for child in Path(f"{dirpath}/{project_name}").glob("*/**"): log = list(filter(lambda x: x.is_file(), child.iterdir()))[0] assert str(log) != "" def test_log(self): project_name = "test_project_with_log" with tempfile.TemporaryDirectory() as dirpath: accelerator = Accelerator(log_with="tensorboard", project_dir=dirpath) accelerator.init_trackers(project_name) values = {"total_loss": 0.1, "iteration": 1, "my_text": "some_value"} accelerator.log(values, step=0) accelerator.end_training() # Logged values are stored in the outermost-tfevents file and can be read in as a TFRecord # Names are randomly generated each time log = list(filter(lambda x: x.is_file(), Path(f"{dirpath}/{project_name}").iterdir()))[0] assert str(log) != "" def test_log_with_tensor(self): project_name = "test_project_with_log" with tempfile.TemporaryDirectory() as dirpath: accelerator = Accelerator(log_with="tensorboard", project_dir=dirpath) accelerator.init_trackers(project_name) values = {"tensor": torch.tensor(1)} accelerator.log(values, step=0) accelerator.end_training() # Logged values are stored in the outermost-tfevents file and can be read in as a TFRecord # Names are randomly generated each time log = list(filter(lambda x: x.is_file(), Path(f"{dirpath}/{project_name}").iterdir()))[0] # Reading implementation based on https://github.com/pytorch/pytorch/issues/45327#issuecomment-703757685 with open(log, "rb") as f: data = f.read() found_tensor = False while data: header = 
struct.unpack("Q", data[:8]) event_str = data[12 : 12 + int(header[0])] # 8+4 data = data[12 + int(header[0]) + 4 :] event = event_pb2.Event() event.ParseFromString(event_str) if event.HasField("summary"): for value in event.summary.value: if value.simple_value == 1.0 and value.tag == "tensor": found_tensor = True assert found_tensor, "Converted tensor was not found in the log file!" def test_project_dir(self): with self.assertRaisesRegex(ValueError, "Logging with `tensorboard` requires a `logging_dir`"): _ = Accelerator(log_with="tensorboard") with tempfile.TemporaryDirectory() as dirpath: _ = Accelerator(log_with="tensorboard", project_dir=dirpath) def test_project_dir_with_config(self): config = ProjectConfiguration(total_limit=30) with tempfile.TemporaryDirectory() as dirpath: _ = Accelerator(log_with="tensorboard", project_dir=dirpath, project_config=config) @require_wandb @mock.patch.dict(os.environ, {"WANDB_MODE": "offline"}) class WandBTrackingTest(TempDirTestCase, MockingTestCase): def setUp(self): super().setUp() # wandb let's us override where logs are stored to via the WANDB_DIR env var self.add_mocks(mock.patch.dict(os.environ, {"WANDB_DIR": self.tmpdir})) @staticmethod def parse_log(log: str, section: str, record: bool = True): """ Parses wandb log for `section` and returns a dictionary of all items in that section. Section names are based on the output of `wandb sync --view --verbose` and items starting with "Record" in that result """ # Big thanks to the W&B team for helping us parse their logs pattern = rf"{section} ([\S\s]*?)\n\n" if record: pattern = rf"Record: {pattern}" cleaned_record = re.findall(pattern, log)[0] # A config if section == "config" or section == "history": cleaned_record = re.findall(r'"([a-zA-Z0-9_.,]+)', cleaned_record) return {key: val for key, val in zip(cleaned_record[0::2], cleaned_record[1::2])} # Everything else else: return dict(re.findall(r'(\w+): "([^\s]+)"', cleaned_record)) @skip def test_wandb(self): project_name = "test_project_with_config" accelerator = Accelerator(log_with="wandb") config = {"num_iterations": 12, "learning_rate": 1e-2, "some_boolean": False, "some_string": "some_value"} kwargs = {"wandb": {"tags": ["my_tag"]}} accelerator.init_trackers(project_name, config, kwargs) values = {"total_loss": 0.1, "iteration": 1, "my_text": "some_value"} accelerator.log(values, step=0) accelerator.end_training() # The latest offline log is stored at wandb/latest-run/*.wandb for child in Path(f"{self.tmpdir}/wandb/latest-run").glob("*"): if child.is_file() and child.suffix == ".wandb": cmd = ["wandb", "sync", "--view", "--verbose", str(child)] content = subprocess.check_output(cmd, encoding="utf8", errors="ignore") break # Check HPS through careful parsing and cleaning logged_items = self.parse_log(content, "config") assert logged_items["num_iterations"] == "12" assert logged_items["learning_rate"] == "0.01" assert logged_items["some_boolean"] == "false" assert logged_items["some_string"] == "some_value" assert logged_items["some_string"] == "some_value" # Run tags logged_items = self.parse_log(content, "run", False) assert logged_items["tags"] == "my_tag" # Actual logging logged_items = self.parse_log(content, "history") assert logged_items["total_loss"] == "0.1" assert logged_items["iteration"] == "1" assert logged_items["my_text"] == "some_value" assert logged_items["_step"] == "0" # Comet has a special `OfflineExperiment` we need to use for testing def offline_init(self, run_name: str, tmpdir: str): self.run_name = run_name self.writer = 
OfflineExperiment(project_name=run_name, offline_directory=tmpdir) logger.info(f"Initialized offline CometML project {self.run_name}") logger.info("Make sure to log any initial configurations with `self.store_init_configuration` before training!") @require_comet_ml @mock.patch.object(CometMLTracker, "__init__", offline_init) class CometMLTest(unittest.TestCase): @staticmethod def get_value_from_key(log_list, key: str, is_param: bool = False): "Extracts `key` from Comet `log`" for log in log_list: j = json.loads(log)["payload"] if is_param and "param" in j.keys(): if j["param"]["paramName"] == key: return j["param"]["paramValue"] if "log_other" in j.keys(): if j["log_other"]["key"] == key: return j["log_other"]["val"] if "metric" in j.keys(): if j["metric"]["metricName"] == key: return j["metric"]["metricValue"] def test_init_trackers(self): with tempfile.TemporaryDirectory() as d: tracker = CometMLTracker("test_project_with_config", d) accelerator = Accelerator(log_with=tracker) config = {"num_iterations": 12, "learning_rate": 1e-2, "some_boolean": False, "some_string": "some_value"} accelerator.init_trackers(None, config) accelerator.end_training() log = os.listdir(d)[0] # Comet is nice, it's just a zip file here # We parse the raw logs p = os.path.join(d, log) archive = zipfile.ZipFile(p, "r") log = archive.open("messages.json").read().decode("utf-8") list_of_json = log.split("\n")[:-1] assert self.get_value_from_key(list_of_json, "num_iterations", True) == 12 assert self.get_value_from_key(list_of_json, "learning_rate", True) == 0.01 assert self.get_value_from_key(list_of_json, "some_boolean", True) is False assert self.get_value_from_key(list_of_json, "some_string", True) == "some_value" def test_log(self): with tempfile.TemporaryDirectory() as d: tracker = CometMLTracker("test_project_with_config", d) accelerator = Accelerator(log_with=tracker) accelerator.init_trackers(None) values = {"total_loss": 0.1, "iteration": 1, "my_text": "some_value"} accelerator.log(values, step=0) accelerator.end_training() log = os.listdir(d)[0] # Comet is nice, it's just a zip file here # We parse the raw logs p = os.path.join(d, log) archive = zipfile.ZipFile(p, "r") log = archive.open("messages.json").read().decode("utf-8") list_of_json = log.split("\n")[:-1] assert self.get_value_from_key(list_of_json, "curr_step", True) == 0 assert self.get_value_from_key(list_of_json, "total_loss") == 0.1 assert self.get_value_from_key(list_of_json, "iteration") == 1 assert self.get_value_from_key(list_of_json, "my_text") == "some_value" @require_clearml class ClearMLTest(TempDirTestCase, MockingTestCase): def setUp(self): super().setUp() # ClearML offline session location is stored in CLEARML_CACHE_DIR self.add_mocks(mock.patch.dict(os.environ, {"CLEARML_CACHE_DIR": self.tmpdir})) @staticmethod def _get_offline_dir(accelerator): from clearml.config import get_offline_dir return get_offline_dir(task_id=accelerator.get_tracker("clearml", unwrap=True).id) @staticmethod def _get_metrics(offline_dir): metrics = [] with open(os.path.join(offline_dir, "metrics.jsonl")) as f: json_lines = f.readlines() for json_line in json_lines: metrics.extend(json.loads(json_line)) return metrics def test_init_trackers(self): from clearml import Task from clearml.utilities.config import text_to_config_dict Task.set_offline(True) accelerator = Accelerator(log_with="clearml") config = {"num_iterations": 12, "learning_rate": 1e-2, "some_boolean": False, "some_string": "some_value"} accelerator.init_trackers("test_project_with_config", 
config) offline_dir = ClearMLTest._get_offline_dir(accelerator) accelerator.end_training() with open(os.path.join(offline_dir, "task.json")) as f: offline_session = json.load(f) clearml_offline_config = text_to_config_dict(offline_session["configuration"]["General"]["value"]) assert config == clearml_offline_config def test_log(self): from clearml import Task Task.set_offline(True) accelerator = Accelerator(log_with="clearml") accelerator.init_trackers("test_project_with_log") values_with_iteration = {"should_be_under_train": 1, "eval_value": 2, "test_value": 3.1, "train_value": 4.1} accelerator.log(values_with_iteration, step=1) single_values = {"single_value_1": 1.1, "single_value_2": 2.2} accelerator.log(single_values) offline_dir = ClearMLTest._get_offline_dir(accelerator) accelerator.end_training() metrics = ClearMLTest._get_metrics(offline_dir) assert (len(values_with_iteration) + len(single_values)) == len(metrics) for metric in metrics: if metric["metric"] == "Summary": assert metric["variant"] in single_values assert metric["value"] == single_values[metric["variant"]] elif metric["metric"] == "should_be_under_train": assert metric["variant"] == "train" assert metric["iter"] == 1 assert metric["value"] == values_with_iteration["should_be_under_train"] else: values_with_iteration_key = metric["variant"] + "_" + metric["metric"] assert values_with_iteration_key in values_with_iteration assert metric["iter"] == 1 assert metric["value"] == values_with_iteration[values_with_iteration_key] def test_log_images(self): from clearml import Task Task.set_offline(True) accelerator = Accelerator(log_with="clearml") accelerator.init_trackers("test_project_with_log_images") base_image = np.eye(256, 256, dtype=np.uint8) * 255 base_image_3d = np.concatenate((np.atleast_3d(base_image), np.zeros((256, 256, 2), dtype=np.uint8)), axis=2) images = { "base_image": base_image, "base_image_3d": base_image_3d, } accelerator.get_tracker("clearml").log_images(images, step=1) offline_dir = ClearMLTest._get_offline_dir(accelerator) accelerator.end_training() images_saved = Path(os.path.join(offline_dir, "data")).rglob("*.jpeg") assert len(list(images_saved)) == len(images) def test_log_table(self): from clearml import Task Task.set_offline(True) accelerator = Accelerator(log_with="clearml") accelerator.init_trackers("test_project_with_log_table") accelerator.get_tracker("clearml").log_table( "from lists with columns", columns=["A", "B", "C"], data=[[1, 3, 5], [2, 4, 6]] ) accelerator.get_tracker("clearml").log_table("from lists", data=[["A2", "B2", "C2"], [7, 9, 11], [8, 10, 12]]) offline_dir = ClearMLTest._get_offline_dir(accelerator) accelerator.end_training() metrics = ClearMLTest._get_metrics(offline_dir) assert len(metrics) == 2 for metric in metrics: assert metric["metric"] in ("from lists", "from lists with columns") plot = json.loads(metric["plot_str"]) if metric["metric"] == "from lists with columns": print(plot["data"][0]) self.assertCountEqual(plot["data"][0]["header"]["values"], ["A", "B", "C"]) self.assertCountEqual(plot["data"][0]["cells"]["values"], [[1, 2], [3, 4], [5, 6]]) else: self.assertCountEqual(plot["data"][0]["header"]["values"], ["A2", "B2", "C2"]) self.assertCountEqual(plot["data"][0]["cells"]["values"], [[7, 8], [9, 10], [11, 12]]) @require_pandas def test_log_table_pandas(self): import pandas as pd from clearml import Task Task.set_offline(True) accelerator = Accelerator(log_with="clearml") accelerator.init_trackers("test_project_with_log_table_pandas") 
accelerator.get_tracker("clearml").log_table( "from df", dataframe=pd.DataFrame({"A": [1, 2], "B": [3, 4], "C": [5, 6]}), step=1 ) offline_dir = ClearMLTest._get_offline_dir(accelerator) accelerator.end_training() metrics = ClearMLTest._get_metrics(offline_dir) assert len(metrics) == 1 assert metrics[0]["metric"] == "from df" plot = json.loads(metrics[0]["plot_str"]) self.assertCountEqual(plot["data"][0]["header"]["values"], [["A"], ["B"], ["C"]]) self.assertCountEqual(plot["data"][0]["cells"]["values"], [[1, 2], [3, 4], [5, 6]]) class MyCustomTracker(GeneralTracker): "Basic tracker that writes to a csv for testing" _col_names = [ "total_loss", "iteration", "my_text", "learning_rate", "num_iterations", "some_boolean", "some_string", ] name = "my_custom_tracker" requires_logging_directory = False def __init__(self, dir: str): self.f = open(f"{dir}/log.csv", "w+") self.writer = csv.DictWriter(self.f, fieldnames=self._col_names) self.writer.writeheader() @property def tracker(self): return self.writer def store_init_configuration(self, values: dict): logger.info("Call init") self.writer.writerow(values) def log(self, values: dict, step: Optional[int]): logger.info("Call log") self.writer.writerow(values) def finish(self): self.f.close() class CustomTrackerTestCase(unittest.TestCase): def test_init_trackers(self): with tempfile.TemporaryDirectory() as d: tracker = MyCustomTracker(d) accelerator = Accelerator(log_with=tracker) config = {"num_iterations": 12, "learning_rate": 1e-2, "some_boolean": False, "some_string": "some_value"} accelerator.init_trackers("Some name", config) accelerator.end_training() with open(f"{d}/log.csv") as f: data = csv.DictReader(f) data = next(data) truth = { "total_loss": "", "iteration": "", "my_text": "", "learning_rate": "0.01", "num_iterations": "12", "some_boolean": "False", "some_string": "some_value", } assert data == truth def test_log(self): with tempfile.TemporaryDirectory() as d: tracker = MyCustomTracker(d) accelerator = Accelerator(log_with=tracker) accelerator.init_trackers("Some name") values = {"total_loss": 0.1, "iteration": 1, "my_text": "some_value"} accelerator.log(values, step=0) accelerator.end_training() with open(f"{d}/log.csv") as f: data = csv.DictReader(f) data = next(data) truth = { "total_loss": "0.1", "iteration": "1", "my_text": "some_value", "learning_rate": "", "num_iterations": "", "some_boolean": "", "some_string": "", } assert data == truth @require_dvclive @mock.patch("dvclive.live.get_dvc_repo", return_value=None) class DVCLiveTrackingTest(unittest.TestCase): def test_init_trackers(self, mock_repo): project_name = "test_project_with_config" with tempfile.TemporaryDirectory() as dirpath: accelerator = Accelerator(log_with="dvclive") config = { "num_iterations": 12, "learning_rate": 1e-2, "some_boolean": False, "some_string": "some_value", } init_kwargs = {"dvclive": {"dir": dirpath, "save_dvc_exp": False, "dvcyaml": None}} accelerator.init_trackers(project_name, config, init_kwargs) accelerator.end_training() live = accelerator.trackers[0].live params = load_yaml(live.params_file) assert params == config def test_log(self, mock_repo): project_name = "test_project_with_log" with tempfile.TemporaryDirectory() as dirpath: accelerator = Accelerator(log_with="dvclive", project_dir=dirpath) init_kwargs = {"dvclive": {"dir": dirpath, "save_dvc_exp": False, "dvcyaml": None}} accelerator.init_trackers(project_name, init_kwargs=init_kwargs) values = {"total_loss": 0.1, "iteration": 1, "my_text": "some_value"} # Log step 0 
accelerator.log(values) # Log step 1 accelerator.log(values) # Log step 3 (skip step 2) accelerator.log(values, step=3) accelerator.end_training() live = accelerator.trackers[0].live logs, latest = parse_metrics(live) assert latest.pop("step") == 3 assert latest == values scalars = os.path.join(live.plots_dir, Metric.subfolder) for val in values.keys(): val_path = os.path.join(scalars, f"{val}.tsv") steps = [int(row["step"]) for row in logs[val_path]] assert steps == [0, 1, 3]
accelerate/tests/test_tracking.py/0
{ "file_path": "accelerate/tests/test_tracking.py", "repo_id": "accelerate", "token_count": 10034 }
13
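A minimal end-to-end sketch of the tracking API the tests above exercise; the project name, directory, and logged values are placeholders.

```python
# Sketch: log hyperparameters and metrics through the Accelerator tracking API.
from accelerate import Accelerator

accelerator = Accelerator(log_with="tensorboard", project_dir="runs")
accelerator.init_trackers("my_project", config={"learning_rate": 1e-4, "num_iterations": 12})

for step in range(3):
    accelerator.log({"total_loss": 1.0 / (step + 1)}, step=step)

accelerator.end_training()  # flushes and closes every tracker
```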
# Model arguments model_name_or_path: mistralai/Mistral-7B-v0.1 model_revision: main torch_dtype: bfloat16 use_flash_attention_2: true # Data training arguments chat_template: "{% for message in messages %}\n{% if message['role'] == 'user' %}\n{{ '<|user|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'system' %}\n{{ '<|system|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'assistant' %}\n{{ '<|assistant|>\n' + message['content'] + eos_token }}\n{% endif %}\n{% if loop.last and add_generation_prompt %}\n{{ '<|assistant|>' }}\n{% endif %}\n{% endfor %}" dataset_mixer: HuggingFaceH4/grok-conversation-harmless: 0.15 HuggingFaceH4/ultrachat_200k: 1.0 dataset_splits: - train_sft - test_sft preprocessing_num_workers: 12 # SFT trainer config bf16: true do_eval: true do_train: true evaluation_strategy: epoch # One of ["no", "steps", "epoch"] gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: False hub_model_id: mistral-7b-sft-constitutional-ai hub_strategy: every_save learning_rate: 2.0e-05 log_level: info logging_steps: 5 logging_strategy: steps lr_scheduler_type: cosine max_seq_length: 2048 max_steps: -1 num_train_epochs: 1 output_dir: data/mistral-7b-sft-constitutional-ai overwrite_output_dir: true per_device_eval_batch_size: 8 per_device_train_batch_size: 8 push_to_hub: true remove_unused_columns: true report_to: - tensorboard save_strategy: "steps" save_steps: 100 save_total_limit: 1 seed: 42 warmup_ratio: 0.1
alignment-handbook/recipes/constitutional-ai/sft/config_grok.yaml/0
{ "file_path": "alignment-handbook/recipes/constitutional-ai/sft/config_grok.yaml", "repo_id": "alignment-handbook", "token_count": 610 }
14
# Model arguments model_name_or_path: mistralai/Mistral-7B-v0.1 model_revision: main torch_dtype: bfloat16 use_flash_attention_2: true # Data training arguments chat_template: "{% for message in messages %}\n{% if message['role'] == 'user' %}\n{{ '<|user|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'system' %}\n{{ '<|system|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'assistant' %}\n{{ '<|assistant|>\n' + message['content'] + eos_token }}\n{% endif %}\n{% if loop.last and add_generation_prompt %}\n{{ '<|assistant|>' }}\n{% endif %}\n{% endfor %}" dataset_mixer: HuggingFaceH4/ultrachat_200k: 1.0 dataset_splits: - train_sft - test_sft preprocessing_num_workers: 12 # SFT trainer config bf16: true do_eval: true evaluation_strategy: epoch gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: False hub_model_id: zephyr-7b-sft-full hub_strategy: every_save learning_rate: 2.0e-05 log_level: info logging_steps: 5 logging_strategy: steps lr_scheduler_type: cosine max_seq_length: 2048 max_steps: -1 num_train_epochs: 1 output_dir: data/zephyr-7b-sft-full overwrite_output_dir: true per_device_eval_batch_size: 8 per_device_train_batch_size: 16 push_to_hub: true remove_unused_columns: true report_to: - tensorboard save_strategy: "steps" save_steps: 100 save_total_limit: 1 seed: 42 warmup_ratio: 0.1
alignment-handbook/recipes/zephyr-7b-beta/sft/config_full.yaml/0
{ "file_path": "alignment-handbook/recipes/zephyr-7b-beta/sft/config_full.yaml", "repo_id": "alignment-handbook", "token_count": 568 }
15
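A sketch (not taken from the recipes themselves) of what the `chat_template` field in the two SFT configs above does at preprocessing time: it is attached to the tokenizer and rendered with `apply_chat_template`. The checkpoint name matches the one the recipes train from; the example messages are placeholders.

```python
# Sketch: render the recipe's chat template with a transformers tokenizer.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
tokenizer.chat_template = (
    "{% for message in messages %}\n{% if message['role'] == 'user' %}\n"
    "{{ '<|user|>\n' + message['content'] + eos_token }}\n"
    "{% elif message['role'] == 'system' %}\n"
    "{{ '<|system|>\n' + message['content'] + eos_token }}\n"
    "{% elif message['role'] == 'assistant' %}\n"
    "{{ '<|assistant|>\n' + message['content'] + eos_token }}\n"
    "{% endif %}\n"
    "{% if loop.last and add_generation_prompt %}\n{{ '<|assistant|>' }}\n{% endif %}\n"
    "{% endfor %}"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is direct preference optimization?"},
]
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```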
# coding=utf-8 # Copyright 2023 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import argparse import re import packaging.version REPLACE_PATTERNS = { "init": (re.compile(r'^__version__\s+=\s+"([^"]+)"\s*$', re.MULTILINE), '__version__ = "VERSION"\n'), "setup": (re.compile(r'^(\s*)version\s*=\s*"[^"]+",', re.MULTILINE), r'\1version="VERSION",'), } REPLACE_FILES = { "init": "src/alignment/__init__.py", "setup": "setup.py", } README_FILE = "README.md" def update_version_in_file(fname, version, pattern): """Update the version in one file using a specific pattern.""" with open(fname, "r", encoding="utf-8", newline="\n") as f: code = f.read() re_pattern, replace = REPLACE_PATTERNS[pattern] replace = replace.replace("VERSION", version) code = re_pattern.sub(replace, code) with open(fname, "w", encoding="utf-8", newline="\n") as f: f.write(code) def global_version_update(version, patch=False): """Update the version in all needed files.""" for pattern, fname in REPLACE_FILES.items(): update_version_in_file(fname, version, pattern) def get_version(): """Reads the current version in the __init__.""" with open(REPLACE_FILES["init"], "r") as f: code = f.read() default_version = REPLACE_PATTERNS["init"][0].search(code).groups()[0] return packaging.version.parse(default_version) def pre_release_work(patch=False): """Do all the necessary pre-release steps.""" # First let's get the default version: base version if we are in dev, bump minor otherwise. default_version = get_version() if patch and default_version.is_devrelease: raise ValueError("Can't create a patch version from the dev branch, checkout a released version!") if default_version.is_devrelease: default_version = default_version.base_version elif patch: default_version = f"{default_version.major}.{default_version.minor}.{default_version.micro + 1}" else: default_version = f"{default_version.major}.{default_version.minor + 1}.0" # Now let's ask nicely if that's the right one. version = input(f"Which version are you releasing? [{default_version}]") if len(version) == 0: version = default_version print(f"Updating version to {version}.") global_version_update(version, patch=patch) def post_release_work(): """Do all the necessary post-release steps.""" # First let's get the current version current_version = get_version() dev_version = f"{current_version.major}.{current_version.minor + 1}.0.dev0" current_version = current_version.base_version # Check with the user we got that right. version = input(f"Which version are we developing now? 
[{dev_version}]") if len(version) == 0: version = dev_version print(f"Updating version to {version}.") global_version_update(version) if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument("--post_release", action="store_true", help="Whether this is pre or post release.") parser.add_argument("--patch", action="store_true", help="Whether or not this is a patch release.") args = parser.parse_args() if not args.post_release: pre_release_work(patch=args.patch) elif args.patch: print("Nothing to do after a patch :-)") else: post_release_work()
alignment-handbook/src/alignment/release.py/0
{ "file_path": "alignment-handbook/src/alignment/release.py", "repo_id": "alignment-handbook", "token_count": 1384 }
16
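A small illustration of the version-bumping rules implemented in the release script above, using the same `packaging.version` parsing it relies on; the version strings are made up.

```python
# Sketch: how the pre/post release steps derive the next version number.
import packaging.version

dev = packaging.version.parse("0.3.0.dev0")
print(dev.is_devrelease, dev.base_version)  # True 0.3.0 -> releasing from dev drops ".dev0"

released = packaging.version.parse("0.3.0")
patch_release = f"{released.major}.{released.minor}.{released.micro + 1}"  # "0.3.1"
minor_release = f"{released.major}.{released.minor + 1}.0"                 # "0.4.0"
next_dev_cycle = f"{released.major}.{released.minor + 1}.0.dev0"           # "0.4.0.dev0"
print(patch_release, minor_release, next_dev_cycle)
```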
Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. 
This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
candle/LICENSE-APACHE/0
{ "file_path": "candle/LICENSE-APACHE", "repo_id": "candle", "token_count": 3168 }
17
# Porting a custom kernel
candle/candle-book/src/cuda/porting.md/0
{ "file_path": "candle/candle-book/src/cuda/porting.md", "repo_id": "candle", "token_count": 7 }
18
# Simplified

## How it works

This program implements a neural network to predict the winner of the second round of an election based on the results of the first round.

Key points:

1. A multilayer perceptron with two hidden layers is used. The first hidden layer has 4 neurons, the second has 2 neurons.
2. The input is a vector of 2 numbers - the percentage of votes for the first and second candidates in the first round.
3. The output is the number 0 or 1, where 1 means that the first candidate will win the second round and 0 means that they will lose.
4. For training, samples with real data on the results of the first and second rounds of different elections are used.
5. The model is trained by backpropagation using gradient descent and the cross-entropy loss function.
6. The model parameters (neuron weights) are initialized randomly and then optimized during training.
7. After training, the model is tested on a held-out sample to evaluate its accuracy.
8. If the accuracy on the test set is below 100%, the model is considered underfit and the training process is repeated.

In this way, the neural network learns to find hidden relationships between the results of the first and second rounds of voting in order to make predictions for new data.

```rust,ignore
{{#include ../simplified.rs:book_training_simplified1}}
```

```rust,ignore
{{#include ../simplified.rs:book_training_simplified2}}
```

```rust,ignore
{{#include ../simplified.rs:book_training_simplified3}}
```

## Example output

```bash
Trying to train neural network.
Epoch: 1 Train loss: 4.42555 Test accuracy: 0.00%
Epoch: 2 Train loss: 0.84677 Test accuracy: 33.33%
Epoch: 3 Train loss: 2.54335 Test accuracy: 33.33%
Epoch: 4 Train loss: 0.37806 Test accuracy: 33.33%
Epoch: 5 Train loss: 0.36647 Test accuracy: 100.00%
real_life_votes: [13, 22]
neural_network_prediction_result: 0.0
```
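The architecture described above (2 inputs, a 4-neuron hidden layer, a 2-neuron hidden layer, one win/lose decision) can be written very compactly with `candle-nn`. The sketch below is illustrative only and makes a few assumptions not spelled out on this page: it uses a `VarMap`/`SGD` training setup, ReLU activations, and a final layer with two class logits fed to `candle_nn::loss::cross_entropy`; the exact `simplified.rs` code included above may differ in these details.

```rust
use candle::{DType, Device, Result, Tensor};
use candle_nn::{linear, Linear, Module, Optimizer, VarBuilder, VarMap, SGD};

// 2 inputs -> 4 hidden -> 2 hidden -> 2 class logits (lose / win).
struct Mlp {
    ln1: Linear,
    ln2: Linear,
    ln3: Linear,
}

impl Mlp {
    fn new(vb: VarBuilder) -> Result<Self> {
        Ok(Self {
            ln1: linear(2, 4, vb.pp("ln1"))?,
            ln2: linear(4, 2, vb.pp("ln2"))?,
            ln3: linear(2, 2, vb.pp("ln3"))?,
        })
    }

    fn forward(&self, xs: &Tensor) -> Result<Tensor> {
        let xs = self.ln1.forward(xs)?.relu()?;
        let xs = self.ln2.forward(&xs)?.relu()?;
        self.ln3.forward(&xs)
    }
}

fn main() -> Result<()> {
    let dev = Device::Cpu;
    let varmap = VarMap::new();
    let vb = VarBuilder::from_varmap(&varmap, DType::F32, &dev);
    let model = Mlp::new(vb)?;

    // Toy batch: first-round percentages for two elections and who won round two.
    let xs = Tensor::new(&[[45.0f32, 38.0], [30.0, 48.0]], &dev)?;
    let ys = Tensor::new(&[1u32, 0], &dev)?;

    let mut sgd = SGD::new(varmap.all_vars(), 0.05)?;
    for _epoch in 0..100 {
        let logits = model.forward(&xs)?;
        let loss = candle_nn::loss::cross_entropy(&logits, &ys)?;
        sgd.backward_step(&loss)?;
    }
    Ok(())
}
```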
candle/candle-book/src/training/simplified.md/0
{ "file_path": "candle/candle-book/src/training/simplified.md", "repo_id": "candle", "token_count": 530 }
19
use crate::op::{BinaryOpT, CmpOp, ReduceOp, UnaryOpT}; use crate::{CpuStorage, DType, Layout, Result, Shape}; pub trait BackendStorage: Sized { type Device: BackendDevice; fn try_clone(&self, _: &Layout) -> Result<Self>; fn dtype(&self) -> DType; fn device(&self) -> &Self::Device; // Maybe this should return a Cow instead so that no copy is done on the cpu case. fn to_cpu_storage(&self) -> Result<CpuStorage>; fn affine(&self, _: &Layout, _: f64, _: f64) -> Result<Self>; fn powf(&self, _: &Layout, _: f64) -> Result<Self>; fn elu(&self, _: &Layout, _: f64) -> Result<Self>; fn reduce_op(&self, _: ReduceOp, _: &Layout, _: &[usize]) -> Result<Self>; fn cmp(&self, _: CmpOp, _: &Self, _: &Layout, _: &Layout) -> Result<Self>; fn to_dtype(&self, _: &Layout, _: DType) -> Result<Self>; fn unary_impl<B: UnaryOpT>(&self, _: &Layout) -> Result<Self>; fn binary_impl<B: BinaryOpT>(&self, _: &Self, _: &Layout, _: &Layout) -> Result<Self>; fn where_cond(&self, _: &Layout, _: &Self, _: &Layout, _: &Self, _: &Layout) -> Result<Self>; fn conv1d( &self, _l: &Layout, _kernel: &Self, _kernel_l: &Layout, _params: &crate::conv::ParamsConv1D, ) -> Result<Self>; fn conv_transpose1d( &self, _l: &Layout, _kernel: &Self, _kernel_l: &Layout, _params: &crate::conv::ParamsConvTranspose1D, ) -> Result<Self>; fn conv2d( &self, _l: &Layout, _kernel: &Self, _kernel_l: &Layout, _params: &crate::conv::ParamsConv2D, ) -> Result<Self>; fn conv_transpose2d( &self, _l: &Layout, _kernel: &Self, _kernel_l: &Layout, _params: &crate::conv::ParamsConvTranspose2D, ) -> Result<Self>; fn avg_pool2d(&self, _: &Layout, _: (usize, usize), _: (usize, usize)) -> Result<Self>; fn max_pool2d(&self, _: &Layout, _: (usize, usize), _: (usize, usize)) -> Result<Self>; fn upsample_nearest1d(&self, _: &Layout, _: usize) -> Result<Self>; fn upsample_nearest2d(&self, _: &Layout, _: usize, _: usize) -> Result<Self>; fn gather(&self, _: &Layout, _: &Self, _: &Layout, _: usize) -> Result<Self>; fn scatter_add( &self, _: &Layout, _: &Self, _: &Layout, _: &Self, _: &Layout, _: usize, ) -> Result<Self>; fn index_select(&self, _: &Self, _: &Layout, _: &Layout, _: usize) -> Result<Self>; fn index_add( &self, _: &Layout, _: &Self, _: &Layout, _: &Self, _: &Layout, _: usize, ) -> Result<Self>; fn matmul( &self, _: &Self, _: (usize, usize, usize, usize), _: &Layout, _: &Layout, ) -> Result<Self>; fn copy_strided_src(&self, _: &mut Self, _: usize, _: &Layout) -> Result<()>; #[allow(clippy::too_many_arguments)] // Similar to cudaMemcpy2D, though values are in elements and not in bytes. fn copy2d( &self, _: &mut Self, _d1: usize, _d2: usize, _src_stride1: usize, _dst_stride1: usize, _src_offset: usize, _dst_offset: usize, ) -> Result<()>; } pub trait BackendDevice: Sized + std::fmt::Debug + Clone { type Storage: BackendStorage; // TODO: Make the usize generic and part of a generic DeviceLocation. fn new(_: usize) -> Result<Self>; fn location(&self) -> crate::DeviceLocation; fn same_device(&self, _: &Self) -> bool; fn zeros_impl(&self, _shape: &Shape, _dtype: DType) -> Result<Self::Storage>; fn ones_impl(&self, _shape: &Shape, _dtype: DType) -> Result<Self::Storage>; fn storage_from_cpu_storage(&self, _: &CpuStorage) -> Result<Self::Storage>; fn rand_uniform(&self, _: &Shape, _: DType, _: f64, _: f64) -> Result<Self::Storage>; fn rand_normal(&self, _: &Shape, _: DType, _: f64, _: f64) -> Result<Self::Storage>; fn set_seed(&self, _: u64) -> Result<()>; }
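// -- Illustrative sketch (not part of the trait above) -----------------------
// `copy2d` mirrors cudaMemcpy2D but counts in elements rather than bytes:
// copy `d1` rows of `d2` contiguous elements each, where consecutive rows are
// `src_stride1` / `dst_stride1` elements apart in the source / destination.
// On a plain CPU slice a backend would do roughly the following; the function
// name and slice-based signature are assumptions made for the illustration.
#[allow(dead_code)]
fn copy2d_reference<T: Copy>(
    src: &[T],
    dst: &mut [T],
    d1: usize,
    d2: usize,
    src_stride1: usize,
    dst_stride1: usize,
    src_offset: usize,
    dst_offset: usize,
) {
    for i in 0..d1 {
        let s = src_offset + i * src_stride1;
        let d = dst_offset + i * dst_stride1;
        dst[d..d + d2].copy_from_slice(&src[s..s + d2]);
    }
}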
candle/candle-core/src/backend.rs/0
{ "file_path": "candle/candle-core/src/backend.rs", "repo_id": "candle", "token_count": 1920 }
20
#![allow(dead_code)] use crate::op::{BinaryOpT, CmpOp, ReduceOp, UnaryOpT}; use crate::{CpuStorage, DType, Error, Layout, Result, Shape}; #[derive(Debug, Clone)] pub struct CudaDevice; #[derive(Debug)] pub struct CudaStorage; macro_rules! fail { () => { unimplemented!("cuda support has not been enabled, add `cuda` feature to enable.") }; } impl crate::backend::BackendStorage for CudaStorage { type Device = CudaDevice; fn try_clone(&self, _: &Layout) -> Result<Self> { Err(Error::NotCompiledWithCudaSupport) } fn dtype(&self) -> DType { fail!() } fn device(&self) -> &Self::Device { fail!() } fn to_cpu_storage(&self) -> Result<CpuStorage> { Err(Error::NotCompiledWithCudaSupport) } fn affine(&self, _: &Layout, _: f64, _: f64) -> Result<Self> { Err(Error::NotCompiledWithCudaSupport) } fn powf(&self, _: &Layout, _: f64) -> Result<Self> { Err(Error::NotCompiledWithCudaSupport) } fn elu(&self, _: &Layout, _: f64) -> Result<Self> { Err(Error::NotCompiledWithCudaSupport) } fn reduce_op(&self, _: ReduceOp, _: &Layout, _: &[usize]) -> Result<Self> { Err(Error::NotCompiledWithCudaSupport) } fn cmp(&self, _: CmpOp, _: &Self, _: &Layout, _: &Layout) -> Result<Self> { Err(Error::NotCompiledWithCudaSupport) } fn to_dtype(&self, _: &Layout, _: DType) -> Result<Self> { Err(Error::NotCompiledWithCudaSupport) } fn unary_impl<B: UnaryOpT>(&self, _: &Layout) -> Result<Self> { Err(Error::NotCompiledWithCudaSupport) } fn binary_impl<B: BinaryOpT>(&self, _: &Self, _: &Layout, _: &Layout) -> Result<Self> { Err(Error::NotCompiledWithCudaSupport) } fn where_cond(&self, _: &Layout, _: &Self, _: &Layout, _: &Self, _: &Layout) -> Result<Self> { Err(Error::NotCompiledWithCudaSupport) } fn conv1d( &self, _: &Layout, _: &Self, _: &Layout, _: &crate::conv::ParamsConv1D, ) -> Result<Self> { Err(Error::NotCompiledWithCudaSupport) } fn conv_transpose1d( &self, _: &Layout, _: &Self, _: &Layout, _: &crate::conv::ParamsConvTranspose1D, ) -> Result<Self> { Err(Error::NotCompiledWithCudaSupport) } fn conv2d( &self, _: &Layout, _: &Self, _: &Layout, _: &crate::conv::ParamsConv2D, ) -> Result<Self> { Err(Error::NotCompiledWithCudaSupport) } fn conv_transpose2d( &self, _l: &Layout, _kernel: &Self, _kernel_l: &Layout, _params: &crate::conv::ParamsConvTranspose2D, ) -> Result<Self> { Err(Error::NotCompiledWithCudaSupport) } fn index_select(&self, _: &Self, _: &Layout, _: &Layout, _: usize) -> Result<Self> { Err(Error::NotCompiledWithCudaSupport) } fn gather(&self, _: &Layout, _: &Self, _: &Layout, _: usize) -> Result<Self> { Err(Error::NotCompiledWithCudaSupport) } fn scatter_add( &self, _: &Layout, _: &Self, _: &Layout, _: &Self, _: &Layout, _: usize, ) -> Result<Self> { Err(Error::NotCompiledWithCudaSupport) } fn index_add( &self, _: &Layout, _: &Self, _: &Layout, _: &Self, _: &Layout, _: usize, ) -> Result<Self> { Err(Error::NotCompiledWithCudaSupport) } fn matmul( &self, _: &Self, _: (usize, usize, usize, usize), _: &Layout, _: &Layout, ) -> Result<Self> { Err(Error::NotCompiledWithCudaSupport) } fn copy_strided_src(&self, _: &mut Self, _: usize, _: &Layout) -> Result<()> { Err(Error::NotCompiledWithCudaSupport) } fn copy2d( &self, _: &mut Self, _: usize, _: usize, _: usize, _: usize, _: usize, _: usize, ) -> Result<()> { Err(Error::NotCompiledWithCudaSupport) } fn avg_pool2d(&self, _: &Layout, _: (usize, usize), _: (usize, usize)) -> Result<Self> { Err(Error::NotCompiledWithCudaSupport) } fn max_pool2d(&self, _: &Layout, _: (usize, usize), _: (usize, usize)) -> Result<Self> { Err(Error::NotCompiledWithCudaSupport) } fn 
upsample_nearest1d(&self, _: &Layout, _: usize) -> Result<Self> { Err(Error::NotCompiledWithCudaSupport) } fn upsample_nearest2d(&self, _: &Layout, _: usize, _: usize) -> Result<Self> { Err(Error::NotCompiledWithCudaSupport) } } impl crate::backend::BackendDevice for CudaDevice { type Storage = CudaStorage; fn new(_: usize) -> Result<Self> { Err(Error::NotCompiledWithCudaSupport) } fn set_seed(&self, _: u64) -> Result<()> { Err(Error::NotCompiledWithCudaSupport) } fn location(&self) -> crate::DeviceLocation { fail!() } fn same_device(&self, _: &Self) -> bool { fail!() } fn zeros_impl(&self, _shape: &Shape, _dtype: DType) -> Result<Self::Storage> { Err(Error::NotCompiledWithCudaSupport) } fn ones_impl(&self, _shape: &Shape, _dtype: DType) -> Result<Self::Storage> { Err(Error::NotCompiledWithCudaSupport) } fn storage_from_cpu_storage(&self, _: &CpuStorage) -> Result<Self::Storage> { Err(Error::NotCompiledWithCudaSupport) } fn rand_uniform(&self, _: &Shape, _: DType, _: f64, _: f64) -> Result<Self::Storage> { Err(Error::NotCompiledWithCudaSupport) } fn rand_normal(&self, _: &Shape, _: DType, _: f64, _: f64) -> Result<Self::Storage> { Err(Error::NotCompiledWithCudaSupport) } }
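// -- Illustrative note (not part of this file) -------------------------------
// These stubs only exist so that the rest of the crate can always name
// `CudaDevice` / `CudaStorage`: every method fails with
// `Error::NotCompiledWithCudaSupport` when the `cuda` feature is disabled.
// The crate root presumably switches between the real CUDA backend and this
// stub with a feature gate roughly like the following (module paths are an
// assumption, not copied from candle's lib.rs):
//
// #[cfg(feature = "cuda")]
// pub use crate::cuda_backend::{CudaDevice, CudaStorage};
// #[cfg(not(feature = "cuda"))]
// pub use crate::dummy_cuda_backend::{CudaDevice, CudaStorage};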
candle/candle-core/src/dummy_cuda_backend.rs/0
{ "file_path": "candle/candle-core/src/dummy_cuda_backend.rs", "repo_id": "candle", "token_count": 2782 }
21
//! Support for the GGUF file format. //! //! Spec: https://github.com/philpax/ggml/blob/gguf-spec/docs/gguf.md use super::{GgmlDType, QTensor}; use crate::{Device, Result}; use byteorder::{LittleEndian, ReadBytesExt, WriteBytesExt}; use std::collections::HashMap; pub const DEFAULT_ALIGNMENT: u64 = 32; #[derive(Debug, Clone, Copy, PartialEq, Eq)] enum Magic { Gguf, } impl TryFrom<u32> for Magic { type Error = crate::Error; fn try_from(value: u32) -> Result<Self> { let magic = match value { 0x46554747 | 0x47475546 => Self::Gguf, _ => crate::bail!("unknown magic 0x{value:08x}"), }; Ok(magic) } } #[derive(Debug, Clone, Copy, PartialEq, Eq)] pub enum VersionedMagic { GgufV1, GgufV2, GgufV3, } impl VersionedMagic { fn read<R: std::io::Read>(reader: &mut R) -> Result<Self> { let magic = reader.read_u32::<LittleEndian>()?; let magic = Magic::try_from(magic)?; let version = reader.read_u32::<LittleEndian>()?; let versioned_magic = match (magic, version) { (Magic::Gguf, 1) => Self::GgufV1, (Magic::Gguf, 2) => Self::GgufV2, (Magic::Gguf, 3) => Self::GgufV3, _ => crate::bail!("gguf: unsupported magic/version {magic:?}/{version}"), }; Ok(versioned_magic) } } #[derive(Debug)] pub struct TensorInfo { pub ggml_dtype: GgmlDType, pub shape: crate::Shape, pub offset: u64, } impl TensorInfo { pub fn read<R: std::io::Seek + std::io::Read>( &self, reader: &mut R, tensor_data_offset: u64, device: &Device, ) -> Result<QTensor> { let tensor_elems = self.shape.elem_count(); let block_size = self.ggml_dtype.block_size(); if tensor_elems % block_size != 0 { crate::bail!( "the number of elements {tensor_elems} is not divisible by the block size {block_size}" ) } let size_in_bytes = tensor_elems / block_size * self.ggml_dtype.type_size(); let mut raw_data = vec![0u8; size_in_bytes]; reader.seek(std::io::SeekFrom::Start(tensor_data_offset + self.offset))?; reader.read_exact(&mut raw_data)?; super::ggml_file::qtensor_from_ggml( self.ggml_dtype, &raw_data, self.shape.dims().to_vec(), device, ) } } #[derive(Debug)] pub struct Content { pub magic: VersionedMagic, pub metadata: HashMap<String, Value>, pub tensor_infos: HashMap<String, TensorInfo>, pub tensor_data_offset: u64, } fn read_string<R: std::io::Read>(reader: &mut R, magic: &VersionedMagic) -> Result<String> { let len = match magic { VersionedMagic::GgufV1 => reader.read_u32::<LittleEndian>()? as usize, VersionedMagic::GgufV2 | VersionedMagic::GgufV3 => { reader.read_u64::<LittleEndian>()? as usize } }; let mut v = vec![0u8; len]; reader.read_exact(&mut v)?; // GGUF strings are supposed to be non-null terminated but in practice this happens. while let Some(0) = v.last() { v.pop(); } // GGUF strings are utf8 encoded but there are cases that don't seem to be valid. Ok(String::from_utf8_lossy(&v).into_owned()) } #[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)] pub enum ValueType { // The value is a 8-bit unsigned integer. U8, // The value is a 8-bit signed integer. I8, // The value is a 16-bit unsigned little-endian integer. U16, // The value is a 16-bit signed little-endian integer. I16, // The value is a 32-bit unsigned little-endian integer. U32, // The value is a 32-bit signed little-endian integer. I32, // The value is a 64-bit unsigned little-endian integer. U64, // The value is a 64-bit signed little-endian integer. I64, // The value is a 32-bit IEEE754 floating point number. F32, // The value is a 64-bit IEEE754 floating point number. F64, // The value is a boolean. // 1-byte value where 0 is false and 1 is true. 
// Anything else is invalid, and should be treated as either the model being invalid or the reader being buggy. Bool, // The value is a UTF-8 non-null-terminated string, with length prepended. String, // The value is an array of other values, with the length and type prepended. /// // Arrays can be nested, and the length of the array is the number of elements in the array, not the number of bytes. Array, } #[derive(Debug, Clone)] pub enum Value { U8(u8), I8(i8), U16(u16), I16(i16), U32(u32), I32(i32), U64(u64), I64(i64), F32(f32), F64(f64), Bool(bool), String(String), Array(Vec<Value>), } impl Value { pub fn value_type(&self) -> ValueType { match self { Self::U8(_) => ValueType::U8, Self::I8(_) => ValueType::I8, Self::U16(_) => ValueType::U16, Self::I16(_) => ValueType::I16, Self::U32(_) => ValueType::U32, Self::I32(_) => ValueType::I32, Self::U64(_) => ValueType::U64, Self::I64(_) => ValueType::I64, Self::F32(_) => ValueType::F32, Self::F64(_) => ValueType::F64, Self::Bool(_) => ValueType::Bool, Self::String(_) => ValueType::String, Self::Array(_) => ValueType::Array, } } pub fn to_u8(&self) -> Result<u8> { match self { Self::U8(v) => Ok(*v), v => crate::bail!("not a u8 {v:?}"), } } pub fn to_i8(&self) -> Result<i8> { match self { Self::I8(v) => Ok(*v), v => crate::bail!("not a i8 {v:?}"), } } pub fn to_u16(&self) -> Result<u16> { match self { Self::U16(v) => Ok(*v), v => crate::bail!("not a u16 {v:?}"), } } pub fn to_i16(&self) -> Result<i16> { match self { Self::I16(v) => Ok(*v), v => crate::bail!("not a i16 {v:?}"), } } pub fn to_u32(&self) -> Result<u32> { match self { Self::U32(v) => Ok(*v), v => crate::bail!("not a u32 {v:?}"), } } pub fn to_i32(&self) -> Result<i32> { match self { Self::I32(v) => Ok(*v), v => crate::bail!("not a i32 {v:?}"), } } pub fn to_u64(&self) -> Result<u64> { match self { Self::U64(v) => Ok(*v), v => crate::bail!("not a u64 {v:?}"), } } pub fn to_i64(&self) -> Result<i64> { match self { Self::I64(v) => Ok(*v), v => crate::bail!("not a i64 {v:?}"), } } pub fn to_f32(&self) -> Result<f32> { match self { Self::F32(v) => Ok(*v), v => crate::bail!("not a f32 {v:?}"), } } pub fn to_f64(&self) -> Result<f64> { match self { Self::F64(v) => Ok(*v), v => crate::bail!("not a f64 {v:?}"), } } pub fn to_bool(&self) -> Result<bool> { match self { Self::Bool(v) => Ok(*v), v => crate::bail!("not a bool {v:?}"), } } pub fn to_vec(&self) -> Result<&Vec<Value>> { match self { Self::Array(v) => Ok(v), v => crate::bail!("not a vec {v:?}"), } } pub fn to_string(&self) -> Result<&String> { match self { Self::String(v) => Ok(v), v => crate::bail!("not a string {v:?}"), } } fn read<R: std::io::Read>( reader: &mut R, value_type: ValueType, magic: &VersionedMagic, ) -> Result<Self> { let v = match value_type { ValueType::U8 => Self::U8(reader.read_u8()?), ValueType::I8 => Self::I8(reader.read_i8()?), ValueType::U16 => Self::U16(reader.read_u16::<LittleEndian>()?), ValueType::I16 => Self::I16(reader.read_i16::<LittleEndian>()?), ValueType::U32 => Self::U32(reader.read_u32::<LittleEndian>()?), ValueType::I32 => Self::I32(reader.read_i32::<LittleEndian>()?), ValueType::U64 => Self::U64(reader.read_u64::<LittleEndian>()?), ValueType::I64 => Self::I64(reader.read_i64::<LittleEndian>()?), ValueType::F32 => Self::F32(reader.read_f32::<LittleEndian>()?), ValueType::F64 => Self::F64(reader.read_f64::<LittleEndian>()?), ValueType::Bool => match reader.read_u8()? 
{ 0 => Self::Bool(false), 1 => Self::Bool(true), b => crate::bail!("unexpected bool value {b}"), }, ValueType::String => Self::String(read_string(reader, magic)?), ValueType::Array => { let value_type = reader.read_u32::<LittleEndian>()?; let value_type = ValueType::from_u32(value_type)?; let len = match magic { VersionedMagic::GgufV1 => reader.read_u32::<LittleEndian>()? as usize, VersionedMagic::GgufV2 | VersionedMagic::GgufV3 => { reader.read_u64::<LittleEndian>()? as usize } }; let mut vs = Vec::with_capacity(len); for _ in 0..len { vs.push(Value::read(reader, value_type, magic)?) } Self::Array(vs) } }; Ok(v) } fn write<W: std::io::Write>(&self, w: &mut W) -> Result<()> { match self { &Self::U8(v) => w.write_u8(v)?, &Self::I8(v) => w.write_i8(v)?, &Self::U16(v) => w.write_u16::<LittleEndian>(v)?, &Self::I16(v) => w.write_i16::<LittleEndian>(v)?, &Self::U32(v) => w.write_u32::<LittleEndian>(v)?, &Self::I32(v) => w.write_i32::<LittleEndian>(v)?, &Self::U64(v) => w.write_u64::<LittleEndian>(v)?, &Self::I64(v) => w.write_i64::<LittleEndian>(v)?, &Self::F32(v) => w.write_f32::<LittleEndian>(v)?, &Self::F64(v) => w.write_f64::<LittleEndian>(v)?, &Self::Bool(v) => w.write_u8(u8::from(v))?, Self::String(v) => write_string(w, v.as_str())?, Self::Array(v) => { // The `Value` type does not enforce that all the values in an Array have the same // type. let value_type = if v.is_empty() { // Doesn't matter, the array is empty. ValueType::U32 } else { let value_type: std::collections::HashSet<_> = v.iter().map(|elem| elem.value_type()).collect(); if value_type.len() != 1 { crate::bail!("multiple value-types in the same array {value_type:?}") } value_type.into_iter().next().unwrap() }; w.write_u32::<LittleEndian>(value_type.to_u32())?; w.write_u64::<LittleEndian>(v.len() as u64)?; for elem in v.iter() { elem.write(w)? } } } Ok(()) } } impl ValueType { fn from_u32(v: u32) -> Result<Self> { let v = match v { 0 => Self::U8, 1 => Self::I8, 2 => Self::U16, 3 => Self::I16, 4 => Self::U32, 5 => Self::I32, 6 => Self::F32, 7 => Self::Bool, 8 => Self::String, 9 => Self::Array, 10 => Self::U64, 11 => Self::I64, 12 => Self::F64, v => crate::bail!("unrecognized value-type {v:#08x}"), }; Ok(v) } fn to_u32(self) -> u32 { match self { Self::U8 => 0, Self::I8 => 1, Self::U16 => 2, Self::I16 => 3, Self::U32 => 4, Self::I32 => 5, Self::F32 => 6, Self::Bool => 7, Self::String => 8, Self::Array => 9, Self::U64 => 10, Self::I64 => 11, Self::F64 => 12, } } } impl Content { pub fn read<R: std::io::Seek + std::io::Read>(reader: &mut R) -> Result<Self> { let magic = VersionedMagic::read(reader)?; let tensor_count = match magic { VersionedMagic::GgufV1 => reader.read_u32::<LittleEndian>()? as usize, VersionedMagic::GgufV2 | VersionedMagic::GgufV3 => { reader.read_u64::<LittleEndian>()? as usize } }; let metadata_kv_count = match magic { VersionedMagic::GgufV1 => reader.read_u32::<LittleEndian>()? as usize, VersionedMagic::GgufV2 | VersionedMagic::GgufV3 => { reader.read_u64::<LittleEndian>()? 
as usize } }; let mut metadata = HashMap::new(); for _idx in 0..metadata_kv_count { let key = read_string(reader, &magic)?; let value_type = reader.read_u32::<LittleEndian>()?; let value_type = ValueType::from_u32(value_type)?; let value = Value::read(reader, value_type, &magic)?; metadata.insert(key, value); } let mut tensor_infos = HashMap::new(); for _idx in 0..tensor_count { let tensor_name = read_string(reader, &magic)?; let n_dimensions = reader.read_u32::<LittleEndian>()?; let mut dimensions: Vec<usize> = match magic { VersionedMagic::GgufV1 => { let mut dimensions = vec![0; n_dimensions as usize]; reader.read_u32_into::<LittleEndian>(&mut dimensions)?; dimensions.into_iter().map(|c| c as usize).collect() } VersionedMagic::GgufV2 | VersionedMagic::GgufV3 => { let mut dimensions = vec![0; n_dimensions as usize]; reader.read_u64_into::<LittleEndian>(&mut dimensions)?; dimensions.into_iter().map(|c| c as usize).collect() } }; dimensions.reverse(); let ggml_dtype = reader.read_u32::<LittleEndian>()?; let ggml_dtype = GgmlDType::from_u32(ggml_dtype)?; let offset = reader.read_u64::<LittleEndian>()?; tensor_infos.insert( tensor_name, TensorInfo { shape: crate::Shape::from(dimensions), offset, ggml_dtype, }, ); } let position = reader.stream_position()?; let alignment = match metadata.get("general.alignment") { Some(Value::U8(v)) => *v as u64, Some(Value::U16(v)) => *v as u64, Some(Value::U32(v)) => *v as u64, Some(Value::I8(v)) if *v >= 0 => *v as u64, Some(Value::I16(v)) if *v >= 0 => *v as u64, Some(Value::I32(v)) if *v >= 0 => *v as u64, _ => DEFAULT_ALIGNMENT, }; let tensor_data_offset = (position + alignment - 1) / alignment * alignment; Ok(Self { magic, metadata, tensor_infos, tensor_data_offset, }) } pub fn tensor<R: std::io::Seek + std::io::Read>( &self, reader: &mut R, name: &str, device: &Device, ) -> Result<QTensor> { let tensor_info = match self.tensor_infos.get(name) { Some(tensor_info) => tensor_info, None => crate::bail!("cannot find tensor info for {name}"), }; tensor_info.read(reader, self.tensor_data_offset, device) } } fn write_string<W: std::io::Write>(w: &mut W, str: &str) -> Result<()> { let bytes = str.as_bytes(); w.write_u64::<LittleEndian>(bytes.len() as u64)?; w.write_all(bytes)?; Ok(()) } pub fn write<W: std::io::Seek + std::io::Write>( w: &mut W, metadata: &[(&str, &Value)], tensors: &[(&str, &QTensor)], ) -> Result<()> { w.write_u32::<LittleEndian>(0x46554747)?; w.write_u32::<LittleEndian>(2)?; // version 2. w.write_u64::<LittleEndian>(tensors.len() as u64)?; w.write_u64::<LittleEndian>(metadata.len() as u64)?; for (name, value) in metadata.iter() { write_string(w, name)?; w.write_u32::<LittleEndian>(value.value_type().to_u32())?; value.write(w)?; } let mut offset = 0usize; let mut offsets = Vec::with_capacity(tensors.len()); for (name, tensor) in tensors.iter() { write_string(w, name)?; let dims = tensor.shape().dims(); w.write_u32::<LittleEndian>(dims.len() as u32)?; for &dim in dims.iter().rev() { w.write_u64::<LittleEndian>(dim as u64)?; } w.write_u32::<LittleEndian>(tensor.dtype().to_u32())?; w.write_u64::<LittleEndian>(offset as u64)?; offsets.push(offset); let size_in_bytes = tensor.storage_size_in_bytes(); let padding = 31 - (31 + size_in_bytes) % 32; offset += size_in_bytes + padding; } let pos = w.stream_position()? as usize; let padding = 31 - (31 + pos) % 32; w.write_all(&vec![0u8; padding])?; let tensor_start_pos = w.stream_position()? as usize; for (offset, (_name, tensor)) in offsets.iter().zip(tensors.iter()) { let pos = w.stream_position()? 
as usize; if tensor_start_pos + offset != pos { crate::bail!( "internal error, unexpected current position {tensor_start_pos} {offset} {pos}" ) } let data = tensor.data()?; let size_in_bytes = data.len(); w.write_all(&data)?; let padding = 31 - (31 + size_in_bytes) % 32; w.write_all(&vec![0u8; padding])?; } Ok(()) }
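// -- Illustrative usage sketch (not part of this module) ---------------------
// Reading a GGUF file with the API above: parse the header, list the metadata
// and tensor infos, then load a single tensor on demand. The path handling and
// the choice of the CPU device are assumptions made for the example.
#[allow(dead_code)]
fn dump_gguf(path: &str) -> Result<()> {
    let mut file = std::fs::File::open(path)?;
    let content = Content::read(&mut file)?;
    for (key, value) in content.metadata.iter() {
        println!("metadata {key}: {:?}", value.value_type());
    }
    for (name, info) in content.tensor_infos.iter() {
        println!("tensor {name}: {:?} {:?}", info.shape, info.ggml_dtype);
    }
    // Tensor data lives after the (aligned) header, at
    // `tensor_data_offset + info.offset`, and is only read when requested.
    if let Some(name) = content.tensor_infos.keys().next() {
        let _qtensor = content.tensor(&mut file, name, &Device::Cpu)?;
    }
    Ok(())
}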
candle/candle-core/src/quantized/gguf_file.rs/0
{ "file_path": "candle/candle-core/src/quantized/gguf_file.rs", "repo_id": "candle", "token_count": 9397 }
22
// Variables are wrappers around tensors that can be modified, they are typically used for holding // weights and being modified by gradient descent. // We do not expose a public way to create variables as this would break the invariant that the // tensor within a variable is actually with `is_variable` set to `true`. use crate::{DType, Device, Error, Result, Shape, Tensor}; /// A variable is a wrapper around a tensor, however variables can have their content modified /// whereas tensors are immutable. #[derive(Clone, Debug)] pub struct Var(Tensor); impl std::fmt::Display for Var { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { std::fmt::Display::fmt(&self.0, f) } } impl std::ops::Deref for Var { type Target = Tensor; fn deref(&self) -> &Self::Target { self.0.as_ref() } } impl Var { pub fn zeros<S: Into<Shape>>(shape: S, dtype: DType, device: &Device) -> Result<Self> { let inner = Tensor::zeros_impl(shape, dtype, device, true)?; Ok(Self(inner)) } pub fn ones<S: Into<Shape>>(shape: S, dtype: DType, device: &Device) -> Result<Self> { let inner = Tensor::ones_impl(shape, dtype, device, true)?; Ok(Self(inner)) } pub fn from_tensor(t: &Tensor) -> Result<Self> { let inner = t.make_var()?; Ok(Self(inner)) } pub fn rand_f64<S: Into<Shape>>( lo: f64, up: f64, s: S, dtype: DType, device: &Device, ) -> Result<Self> { let inner = Tensor::rand_f64_impl(lo, up, s, dtype, device, true)?; Ok(Self(inner)) } pub fn randn_f64<S: Into<Shape>>( mean: f64, std: f64, s: S, dtype: DType, device: &Device, ) -> Result<Self> { let inner = Tensor::randn_f64_impl(mean, std, s, dtype, device, true)?; Ok(Self(inner)) } pub fn rand<S: Into<Shape>, T: crate::FloatDType>( lo: T, up: T, s: S, device: &Device, ) -> Result<Self> { let inner = Tensor::rand_impl(lo, up, s, device, true)?; Ok(Self(inner)) } pub fn randn<S: Into<Shape>, T: crate::FloatDType>( mean: T, std: T, s: S, device: &Device, ) -> Result<Self> { let inner = Tensor::randn_impl(mean, std, s, device, true)?; Ok(Self(inner)) } /// Creates a new tensor on the specified device using the content and shape of the input. /// This is similar to `new` but the resulting tensor is a variable. pub fn new<A: crate::device::NdArray>(array: A, device: &Device) -> Result<Self> { let shape = array.shape()?; let inner = Tensor::new_impl(array, shape, device, true)?; Ok(Self(inner)) } pub fn from_vec<S: Into<Shape>, D: crate::WithDType>( data: Vec<D>, shape: S, device: &Device, ) -> Result<Self> { let inner = Tensor::from_vec_impl(data, shape, device, true)?; Ok(Self(inner)) } pub fn from_slice<S: Into<Shape>, D: crate::WithDType>( array: &[D], shape: S, device: &Device, ) -> Result<Self> { let inner = Tensor::new_impl(array, shape.into(), device, true)?; Ok(Self(inner)) } pub fn as_detached_tensor(&self) -> Tensor { self.0.detach() } pub fn as_tensor(&self) -> &Tensor { &self.0 } /// Consumes this `Var` and return the underlying tensor. pub fn into_inner(self) -> Tensor { self.0 } /// Sets the content of the inner tensor, this does not require a mutable reference as inner /// mutability is used. pub fn set(&self, src: &Tensor) -> Result<()> { if self.same_storage(src) { let msg = "cannot set a variable to a tensor that is derived from its value"; Err(Error::CannotSetVar { msg }.bt())? } let (mut dst, layout) = self.storage_mut_and_layout(); if !layout.is_contiguous() { let msg = "cannot set a non-contiguous variable"; Err(Error::CannotSetVar { msg }.bt())? 
} let (src, src_l) = src.storage_and_layout(); if layout.shape() != src_l.shape() { Err(Error::ShapeMismatchBinaryOp { lhs: layout.shape().clone(), rhs: src_l.shape().clone(), op: "set", } .bt())? } src.copy_strided_src(&mut dst, layout.start_offset(), src_l)?; Ok(()) } }
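// -- Illustrative usage sketch (not part of this module) ---------------------
// Typical pattern: a `Var` holds a weight, the forward pass only sees it as a
// `&Tensor`, and an optimizer later overwrites its contents in place through
// `set`. The function name and the plain SGD update below are illustrative
// assumptions, not candle-nn's optimizer code.
#[allow(dead_code)]
fn sgd_like_step(w: &Var, grad: &Tensor, learning_rate: f64) -> Result<()> {
    // `affine` and `sub` build fresh tensors, so `set` does not trip the
    // "derived from its value" check implemented above.
    let new_w = w.as_tensor().sub(&grad.affine(learning_rate, 0.)?)?;
    w.set(&new_w)
}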
candle/candle-core/src/variable.rs/0
{ "file_path": "candle/candle-core/src/variable.rs", "repo_id": "candle", "token_count": 2057 }
23
# candle-bert

Bert is a general large language model. In this example it can be used for two
different tasks:

- Compute sentence embeddings for a prompt.
- Compute similarities between a set of sentences.

## Sentence embeddings

Bert is used to compute the sentence embeddings for a prompt. The model weights
are downloaded from the hub on the first run.

```bash
cargo run --example bert --release -- --prompt "Here is a test sentence"

> [[[ 0.0798, -0.0665, -0.0247, ..., -0.1082, -0.1000, -0.2751],
> [ 0.4218, 0.2690, 0.2740, ..., 0.3889, 1.3503, 0.9908],
> [ 0.0466, 0.3041, -0.1143, ..., 0.4427, 0.6926, -0.1515],
> ...
> [ 0.3396, 0.4320, -0.4408, ..., 0.9212, 0.2331, -0.6777],
> [ 0.2789, 0.7539, 0.4306, ..., -0.0095, 0.3375, -1.7529],
> [ 0.6737, 0.7882, 0.0548, ..., 0.1836, 0.7299, -0.6617]]]
> Tensor[[1, 7, 384], f32]
```

### Custom models

You can specify different models, such as BGE, with the `--model-id` flag:

```bash
cargo run --example bert --release -- \
--model-id BAAI/bge-large-zh-v1.5 \
--prompt "Here is a test sentence"

Loaded and encoded 435.70775ms
[[[ 3.0944e-1, -7.8455e-5, -1.2768e0, ..., 1.3755e-2, -3.2371e-1, 2.3819e-1],
[-2.8506e-1, 1.9953e-1, -1.3076e0, ..., 6.9819e-2, 1.0833e-2, -1.1512e0],
[ 3.9892e-1, 2.0000e-1, -9.3178e-1, ..., -4.1393e-1, -4.9644e-2, -3.3786e-1],
...
[ 6.0345e-1, 3.5744e-1, -1.2672e0, ..., -6.9165e-1, -3.4973e-3, -8.4214e-1],
[ 3.9218e-1, -3.2735e-1, -1.3123e0, ..., -4.9318e-1, -5.1334e-1, -3.6391e-1],
[ 3.0978e-1, 2.5662e-4, -1.2773e0, ..., 1.3357e-2, -3.2390e-1, 2.3858e-1]]]
Tensor[[1, 9, 1024], f32]
Took 176.744667ms
```

### Gelu approximation

You can get a speedup by using an approximation of the gelu activation, with a
small loss of precision, by passing the `--approximate-gelu` flag:

```bash
$ cargo run --example bert --release -- \
--model-id BAAI/bge-large-zh-v1.5 \
--prompt "Here is a test sentence" \
--approximate-gelu

Loaded and encoded 244.388042ms
[[[ 3.1048e-1, -6.0339e-4, -1.2758e0, ..., 1.3718e-2, -3.2362e-1, 2.3775e-1],
[-2.8354e-1, 1.9984e-1, -1.3077e0, ..., 6.9390e-2, 9.9681e-3, -1.1531e0],
[ 3.9947e-1, 1.9917e-1, -9.3178e-1, ..., -4.1301e-1, -5.0719e-2, -3.3955e-1],
...
[ 6.0499e-1, 3.5664e-1, -1.2642e0, ..., -6.9134e-1, -3.4581e-3, -8.4471e-1],
[ 3.9311e-1, -3.2812e-1, -1.3105e0, ..., -4.9291e-1, -5.1270e-1, -3.6543e-1],
[ 3.1082e-1, -2.6737e-4, -1.2762e0, ..., 1.3319e-2, -3.2381e-1, 2.3815e-1]]]
Tensor[[1, 9, 1024], f32]
Took 116.840791ms
```

## Similarities

In this example, Bert is used to compute the sentence embeddings for a set of
sentences (hardcoded in the examples). Then cosine similarities are computed
for each sentence pair and they are reported by decreasing values, hence the
first reported pair contains the two sentences that have the highest
similarity score. The sentence embeddings are computed using average pooling
through all the sentence tokens, including some potential padding.

```bash
cargo run --example bert --release

> score: 0.85 'The new movie is awesome' 'The new movie is so great'
> score: 0.61 'The cat sits outside' 'The cat plays in the garden'
> score: 0.52 'I love pasta' 'Do you like pizza?'
> score: 0.23 'The new movie is awesome' 'Do you like pizza?'
> score: 0.22 'I love pasta' 'The new movie is awesome'
```
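The "average pooling plus cosine similarity" step described above boils down to a couple of tensor operations. The snippet below is a minimal sketch of that computation with candle; the function names and the shape assumptions (one sentence per batch, no attention-mask weighting) are illustrative and not the exact code of the example.

```rust
use candle::{Result, Tensor};

/// Average the token embeddings of shape (1, n_tokens, hidden) into a single
/// sentence embedding of shape (1, hidden).
fn mean_pool(embeddings: &Tensor) -> Result<Tensor> {
    let (_batch, n_tokens, _hidden) = embeddings.dims3()?;
    embeddings.sum(1)? / (n_tokens as f64)
}

/// Cosine similarity between two sentence embeddings.
fn cosine_similarity(a: &Tensor, b: &Tensor) -> Result<f32> {
    let dot = (a * b)?.sum_all()?.to_scalar::<f32>()?;
    let norm_a = (a * a)?.sum_all()?.to_scalar::<f32>()?;
    let norm_b = (b * b)?.sum_all()?.to_scalar::<f32>()?;
    Ok(dot / (norm_a * norm_b).sqrt())
}
```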
candle/candle-examples/examples/bert/README.md/0
{ "file_path": "candle/candle-examples/examples/bert/README.md", "repo_id": "candle", "token_count": 1564 }
24
# candle-distilbert

DistilBert is a distilled version of the Bert model.

## Sentence embeddings

DistilBert is used to compute the sentence embeddings for a prompt. The model
weights are downloaded from the hub on the first run.

```bash
cargo run --example distilbert --release -- --prompt "Here is a test sentence"

> [[[ 0.5109, 0.1280, -0.2635, ..., 0.3462, -1.0434, 0.1441],
> [ 0.1735, 0.0818, -0.5549, ..., 0.3472, -0.8264, -0.0244],
> [ 0.0702, -0.1311, -0.4914, ..., 0.3483, -0.6194, 0.1829],
> ...
> [ 0.2993, -0.0106, -0.4640, ..., 0.2844, -0.6732, 0.0042],
> [ 0.1066, -0.0081, -0.4299, ..., 0.3435, -0.7729, 0.0190],
> [ 0.8903, 0.2055, -0.2541, ..., 0.3208, -0.6585, 0.0586]]]
> Tensor[[1, 7, 768], f32]
```
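The output above is one 768-dimensional vector per token (shape `[1, 7, 768]`). If a single sentence-level embedding is needed, one common option is to mean-pool over the token axis, for instance as in the following sketch (illustrative, not the example's own code):

```rust
use candle::{Result, Tensor};

/// Collapse a (1, n_tokens, 768) DistilBert output into one 768-d vector.
fn sentence_embedding(output: &Tensor) -> Result<Tensor> {
    let (_batch, n_tokens, _hidden) = output.dims3()?;
    (output.sum(1)? / (n_tokens as f64))?.squeeze(0)
}
```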
candle/candle-examples/examples/distilbert/README.md/0
{ "file_path": "candle/candle-examples/examples/distilbert/README.md", "repo_id": "candle", "token_count": 367 }
25
// https://github.com/karpathy/llama2.c #[cfg(feature = "accelerate")] extern crate accelerate_src; #[cfg(feature = "mkl")] extern crate intel_mkl_src; use candle_transformers::models::llama2_c as model; use candle_transformers::models::llama2_c_weights as weights; use candle_transformers::models::quantized_llama2_c as qmodel; mod training; use clap::{Parser, Subcommand}; use anyhow::{Error as E, Result}; use byteorder::{LittleEndian, ReadBytesExt}; use candle::{IndexOp, Tensor}; use candle_transformers::generation::LogitsProcessor; use std::io::Write; use tokenizers::Tokenizer; use model::{Cache, Config, Llama}; use qmodel::QLlama; use weights::TransformerWeights; #[derive(Parser, Debug, Clone)] struct InferenceCmd { /// The temperature used to generate samples. #[arg(long)] temperature: Option<f64>, /// Nucleus sampling probability cutoff. #[arg(long)] top_p: Option<f64>, #[arg(long, default_value = "")] prompt: String, /// Config file in binary or safetensors format. #[arg(long)] config: Option<String>, #[arg(long, default_value = "karpathy/tinyllamas")] model_id: String, /// The model to be used when getting it from the hub. Possible /// values are 'stories15M.bin', 'stories42M.bin', see more at: /// https://huggingface.co/karpathy/tinyllamas/tree/main #[arg(long, default_value = "stories15M.bin")] which_model: String, } #[derive(Parser, Debug, Clone)] struct EvaluationCmd { /// A directory with the pre-tokenized dataset in the format generated by the tinystories.py /// script from llama2.c https://github.com/karpathy/llama2.c #[arg(long)] pretokenized_dir: Option<String>, #[arg(long, default_value_t = 32)] batch_size: usize, /// Config file in binary format. #[arg(long)] config: Option<String>, #[arg(long, default_value = "karpathy/tinyllamas")] model_id: String, /// The model to be used when getting it from the hub. Possible /// values are 'stories15M.bin', 'stories42M.bin', see more at: /// https://huggingface.co/karpathy/tinyllamas/tree/main #[arg(long, default_value = "stories15M.bin")] which_model: String, } #[derive(Parser, Debug, Clone)] pub struct TrainingCmd { /// A directory with the pre-tokenized dataset in the format generated by the tinystories.py /// script from llama2.c https://github.com/karpathy/llama2.c #[arg(long)] pretokenized_dir: String, #[arg(long, default_value_t = 32)] batch_size: usize, #[arg(long, default_value_t = 0.001)] learning_rate: f64, } #[derive(Subcommand, Debug, Clone)] enum Task { Inference(InferenceCmd), Eval(EvaluationCmd), Train(TrainingCmd), } #[derive(Parser, Debug)] #[command(author, version, about, long_about = None)] pub struct Args { /// The task to be performed, inference, training or evaluation. #[command(subcommand)] task: Option<Task>, /// Run on CPU rather than on GPU. #[arg(long)] cpu: bool, /// Tokenizer config file. #[arg(long)] tokenizer: Option<String>, /// Penalty to be applied for repeating tokens, 1. means no penalty. #[arg(long, default_value_t = 1.1)] repeat_penalty: f32, /// The context size to consider for the repeat penalty. #[arg(long, default_value_t = 64)] repeat_last_n: usize, } impl Args { fn tokenizer(&self) -> Result<Tokenizer> { let tokenizer_path = match &self.tokenizer { Some(config) => std::path::PathBuf::from(config), None => { let api = hf_hub::api::sync::Api::new()?; let api = api.model("hf-internal-testing/llama-tokenizer".to_string()); api.get("tokenizer.json")? 
} }; Tokenizer::from_file(tokenizer_path).map_err(E::msg) } } fn main() -> anyhow::Result<()> { let args = Args::parse(); match &args.task { None => { let cmd = InferenceCmd { temperature: None, top_p: None, prompt: "".to_string(), config: None, model_id: "karpathy/tinyllamas".to_string(), which_model: "stories15M.bin".to_string(), }; run_inference(&cmd, &args)? } Some(Task::Inference(cmd)) => run_inference(cmd, &args)?, Some(Task::Eval(cmd)) => run_eval(cmd, &args)?, Some(Task::Train(cmd)) => training::run(cmd, &args)?, } Ok(()) } enum Model { Llama(Llama), QLlama(QLlama), } impl Model { fn forward(&self, xs: &Tensor, pos: usize, cache: &mut Cache) -> anyhow::Result<Tensor> { match self { Self::Llama(l) => Ok(l.forward(xs, pos, cache)?), Self::QLlama(l) => Ok(l.forward(xs, pos, cache)?), } } } fn run_eval(args: &EvaluationCmd, common_args: &Args) -> Result<()> { use std::io::BufRead; let config_path = match &args.config { Some(config) => std::path::PathBuf::from(config), None => { let api = hf_hub::api::sync::Api::new()?; println!("loading the model weights from {}", args.model_id); let api = api.model(args.model_id.clone()); api.get(&args.which_model)? } }; let tokenizer = common_args.tokenizer()?; let device = candle_examples::device(common_args.cpu)?; let mut file = std::fs::File::open(config_path)?; let config = Config::from_reader(&mut file)?; let weights = TransformerWeights::from_reader(&mut file, &config, &device)?; let vb = weights.var_builder(&config, &device)?; let mut cache = Cache::new(false, &config, vb.pp("rot"))?; let model = Llama::load(vb, config)?; let tokens = match &args.pretokenized_dir { None => { let api = hf_hub::api::sync::Api::new()?; let model_id = "roneneldan/TinyStories"; // TODO: Make this configurable. println!("loading the evaluation dataset from {}", model_id); let api = api.dataset(model_id.to_string()); let dataset_path = api.get("TinyStories-valid.txt")?; let file = std::fs::File::open(dataset_path)?; let file = std::io::BufReader::new(file); let mut tokens = vec![]; for line in file.lines() { let line = line?.replace("<|endoftext|>", "<s>"); let line = tokenizer.encode(line, false).map_err(E::msg)?; tokens.push(line.get_ids().to_vec()) } tokens.concat() } Some(pretokenized_dir) => { // Use shard 0 for the test split, similar to llama2.c // https://github.com/karpathy/llama2.c/blob/ce05cc28cf1e3560b873bb21837638a434520a67/tinystories.py#L121 let path = std::path::PathBuf::from(pretokenized_dir).join("data00.bin"); let bytes = std::fs::read(path)?; // Tokens are encoded as u16. 
let mut tokens = vec![0u16; bytes.len() / 2]; std::io::Cursor::new(bytes).read_u16_into::<LittleEndian>(&mut tokens)?; tokens.into_iter().map(|u| u as u32).collect::<Vec<u32>>() } }; println!("dataset loaded and encoded: {} tokens", tokens.len()); let seq_len = model.config.seq_len; let iter = (0..tokens.len()).step_by(seq_len).flat_map(|start_idx| { if start_idx + seq_len + 1 > tokens.len() { None } else { let tokens = &tokens[start_idx..start_idx + seq_len + 1]; let inputs = Tensor::new(&tokens[..seq_len], &device); let targets = Tensor::new(&tokens[1..], &device); Some(inputs.and_then(|inputs| targets.map(|targets| (inputs, targets)))) } }); let batch_iter = candle_datasets::Batcher::new_r2(iter).batch_size(args.batch_size); for inp_tgt in batch_iter { let (inp, tgt) = inp_tgt?; let logits = model.forward(&inp, 0, &mut cache)?; let loss = candle_nn::loss::cross_entropy(&logits.flatten_to(1)?, &tgt.flatten_to(1)?)?; println!("{}", loss.to_vec0::<f32>()?); } Ok(()) } fn run_inference(args: &InferenceCmd, common_args: &Args) -> Result<()> { let config_path = match &args.config { Some(config) => std::path::PathBuf::from(config), None => { let api = hf_hub::api::sync::Api::new()?; println!("loading the model weights from {}", args.model_id); let api = api.model(args.model_id.clone()); api.get(&args.which_model)? } }; let tokenizer = common_args.tokenizer()?; let device = candle_examples::device(common_args.cpu)?; let is_gguf = config_path.extension().map_or(false, |v| v == "gguf"); let is_safetensors = config_path .extension() .map_or(false, |v| v == "safetensors"); let (model, config, mut cache) = if is_gguf { let vb = qmodel::VarBuilder::from_gguf(config_path, &device)?; let (_vocab_size, dim) = vb .get_no_shape("model.embed_tokens.weight")? .shape() .dims2()?; let config = match dim { 64 => Config::tiny_260k(), 288 => Config::tiny_15m(), 512 => Config::tiny_42m(), 768 => Config::tiny_110m(), _ => anyhow::bail!("no config for dim {dim}"), }; let freq_cis_real = vb .get( (config.seq_len, config.head_size() / 2), "rot.freq_cis_real", )? .dequantize(&device)?; let freq_cis_imag = vb .get( (config.seq_len, config.head_size() / 2), "rot.freq_cis_imag", )? 
.dequantize(&device)?; let fake_vb = candle_nn::VarBuilder::from_tensors( [ ("freq_cis_real".to_string(), freq_cis_real), ("freq_cis_imag".to_string(), freq_cis_imag), ] .into_iter() .collect(), candle::DType::F32, &device, ); let cache = model::Cache::new(true, &config, fake_vb)?; let model = Model::QLlama(QLlama::load(vb, config.clone())?); (model, config, cache) } else if is_safetensors { let config = Config::tiny_15m(); let tensors = candle::safetensors::load(config_path, &device)?; let vb = candle_nn::VarBuilder::from_tensors(tensors, candle::DType::F32, &device); let cache = model::Cache::new(true, &config, vb.pp("rot"))?; let model = Model::Llama(Llama::load(vb, config.clone())?); (model, config, cache) } else { let mut file = std::fs::File::open(config_path)?; let config = Config::from_reader(&mut file)?; println!("{config:?}"); let weights = TransformerWeights::from_reader(&mut file, &config, &device)?; let vb = weights.var_builder(&config, &device)?; let cache = model::Cache::new(true, &config, vb.pp("rot"))?; let model = Model::Llama(Llama::load(vb, config.clone())?); (model, config, cache) }; println!("starting the inference loop"); let mut logits_processor = LogitsProcessor::new(299792458, args.temperature, args.top_p); let mut index_pos = 0; print!("{}", args.prompt); let mut tokens = tokenizer .encode(args.prompt.clone(), true) .map_err(E::msg)? .get_ids() .to_vec(); let mut tokenizer = candle_examples::token_output_stream::TokenOutputStream::new(tokenizer); let start_gen = std::time::Instant::now(); for index in 0.. { if tokens.len() >= config.seq_len { break; } let context_size = if index > 0 { 1 } else { tokens.len() }; let ctxt = &tokens[tokens.len().saturating_sub(context_size)..]; let input = Tensor::new(ctxt, &device)?.unsqueeze(0)?; let logits = model.forward(&input, index_pos, &mut cache)?; let logits = logits.i((0, logits.dim(1)? - 1))?; let logits = if common_args.repeat_penalty == 1. || tokens.is_empty() { logits } else { let start_at = tokens.len().saturating_sub(common_args.repeat_last_n); candle_transformers::utils::apply_repeat_penalty( &logits, common_args.repeat_penalty, &tokens[start_at..], )? }; index_pos += ctxt.len(); let next_token = logits_processor.sample(&logits)?; tokens.push(next_token); if let Some(t) = tokenizer.next_token(next_token)? { print!("{t}"); std::io::stdout().flush()?; } } if let Some(rest) = tokenizer.decode_rest().map_err(E::msg)? { print!("{rest}"); } let dt = start_gen.elapsed(); println!( "\n{} tokens generated ({:.2} token/s)\n", tokens.len(), tokens.len() as f64 / dt.as_secs_f64(), ); Ok(()) }
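// -- Illustrative sketch (not part of this example) --------------------------
// `apply_repeat_penalty` above down-weights tokens that already appear in the
// recent context before sampling. llama.cpp-style samplers do roughly the
// following on raw logits; this illustrates the idea and is not
// candle-transformers' exact implementation.
#[allow(dead_code)]
fn repeat_penalty_reference(logits: &mut [f32], penalty: f32, context: &[u32]) {
    for &token in context {
        if let Some(logit) = logits.get_mut(token as usize) {
            if *logit >= 0.0 {
                *logit /= penalty; // make already-seen tokens less likely
            } else {
                *logit *= penalty;
            }
        }
    }
}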
candle/candle-examples/examples/llama2-c/main.rs/0
{ "file_path": "candle/candle-examples/examples/llama2-c/main.rs", "repo_id": "candle", "token_count": 6004 }
26
# candle-mixtral: 8x7b LLM using a sparse mixture of experts.

Mixtral-8x7B-v0.1 is a pretrained generative LLM with 56 billion parameters.

- [Blog post](https://mistral.ai/news/mixtral-of-experts/) from Mistral announcing the model release.
- [Model card](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) on the HuggingFace Hub.

## Running the example

```bash
$ cargo run --example mixtral --release -- --prompt "def print_prime(n): "
def print_prime(n): # n is the number of prime numbers to be printed
    i = 2
    count = 0
    while (count < n):
        if (isPrime(i)):
            print(i)
            count += 1
        i += 1
def isPrime(n):
    for x in range(2, int(n**0.5)+1):
        if (n % x == 0):
...
```
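The "sparse mixture of experts" part means that each token is processed by only a small subset (two) of the eight expert feed-forward blocks, selected by a learned router. The snippet below is a plain-Rust sketch of the top-2 gating idea only; it is not candle's Mixtral implementation, and the exact normalization used by the real model may differ.

```rust
/// Pick the top-2 experts for one token from its 8 router logits and return
/// (expert index, gating weight) pairs, with the weights normalized over the
/// two selected experts.
fn route_top2(router_logits: [f32; 8]) -> [(usize, f32); 2] {
    let mut idx: Vec<usize> = (0..8).collect();
    idx.sort_by(|&a, &b| router_logits[b].total_cmp(&router_logits[a]));
    let (first, second) = (idx[0], idx[1]);
    // Softmax restricted to the two selected experts.
    let (l1, l2) = (router_logits[first], router_logits[second]);
    let m = l1.max(l2);
    let (e1, e2) = ((l1 - m).exp(), (l2 - m).exp());
    [(first, e1 / (e1 + e2)), (second, e2 / (e1 + e2))]
}
```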
candle/candle-examples/examples/mixtral/README.md/0
{ "file_path": "candle/candle-examples/examples/mixtral/README.md", "repo_id": "candle", "token_count": 322 }
27
#[cfg(feature = "mkl")] extern crate intel_mkl_src; #[cfg(feature = "accelerate")] extern crate accelerate_src; use clap::{Parser, ValueEnum}; use std::io::Write; use tokenizers::Tokenizer; use candle::quantized::{ggml_file, gguf_file}; use candle::Tensor; use candle_transformers::generation::LogitsProcessor; use candle_examples::token_output_stream::TokenOutputStream; use candle_transformers::models::quantized_llama as model; use model::ModelWeights; const DEFAULT_PROMPT: &str = "My favorite theorem is "; #[derive(Debug)] enum Prompt { Interactive, Chat, One(String), } #[derive(Clone, Debug, Copy, PartialEq, Eq, ValueEnum)] enum Which { #[value(name = "7b")] L7b, #[value(name = "13b")] L13b, #[value(name = "70b")] L70b, #[value(name = "7b-chat")] L7bChat, #[value(name = "13b-chat")] L13bChat, #[value(name = "70b-chat")] L70bChat, #[value(name = "7b-code")] L7bCode, #[value(name = "13b-code")] L13bCode, #[value(name = "32b-code")] L34bCode, #[value(name = "7b-leo")] Leo7b, #[value(name = "13b-leo")] Leo13b, #[value(name = "7b-mistral")] Mistral7b, #[value(name = "7b-mistral-instruct")] Mistral7bInstruct, #[value(name = "7b-mistral-instruct-v0.2")] Mistral7bInstructV02, #[value(name = "7b-zephyr-a")] Zephyr7bAlpha, #[value(name = "7b-zephyr-b")] Zephyr7bBeta, #[value(name = "7b-open-chat-3.5")] OpenChat35, #[value(name = "7b-starling-a")] Starling7bAlpha, #[value(name = "mixtral")] Mixtral, #[value(name = "mixtral-instruct")] MixtralInstruct, } impl Which { fn is_mistral(&self) -> bool { match self { Self::L7b | Self::L13b | Self::L70b | Self::L7bChat | Self::L13bChat | Self::L70bChat | Self::L7bCode | Self::L13bCode | Self::L34bCode | Self::Leo7b | Self::Leo13b => false, // Zephyr and OpenChat are fine tuned versions of mistral and should be treated in the // same way. Starling is a fine tuned version of OpenChat. 
Self::OpenChat35 | Self::Starling7bAlpha | Self::Zephyr7bAlpha | Self::Zephyr7bBeta | Self::Mixtral | Self::MixtralInstruct | Self::Mistral7b | Self::Mistral7bInstruct | Self::Mistral7bInstructV02 => true, } } fn is_zephyr(&self) -> bool { match self { Self::L7b | Self::L13b | Self::L70b | Self::L7bChat | Self::L13bChat | Self::L70bChat | Self::L7bCode | Self::L13bCode | Self::L34bCode | Self::Leo7b | Self::Leo13b | Self::Mixtral | Self::MixtralInstruct | Self::Mistral7b | Self::Mistral7bInstruct | Self::Mistral7bInstructV02 | Self::OpenChat35 | Self::Starling7bAlpha => false, Self::Zephyr7bAlpha | Self::Zephyr7bBeta => true, } } fn is_open_chat(&self) -> bool { match self { Self::L7b | Self::L13b | Self::L70b | Self::L7bChat | Self::L13bChat | Self::L70bChat | Self::L7bCode | Self::L13bCode | Self::L34bCode | Self::Leo7b | Self::Leo13b | Self::Mixtral | Self::MixtralInstruct | Self::Mistral7b | Self::Mistral7bInstruct | Self::Mistral7bInstructV02 | Self::Zephyr7bAlpha | Self::Zephyr7bBeta => false, Self::OpenChat35 | Self::Starling7bAlpha => true, } } fn tokenizer_repo(&self) -> &'static str { match self { Which::L7b | Which::L13b | Which::L70b | Which::L7bChat | Which::L13bChat | Which::L70bChat | Which::L7bCode | Which::L13bCode | Which::L34bCode => "hf-internal-testing/llama-tokenizer", Which::Leo7b => "LeoLM/leo-hessianai-7b", Which::Leo13b => "LeoLM/leo-hessianai-13b", Which::Mixtral => "mistralai/Mixtral-8x7B-v0.1", Which::MixtralInstruct => "mistralai/Mixtral-8x7B-Instruct-v0.1", Which::Mistral7b | Which::Mistral7bInstruct | Which::Mistral7bInstructV02 | Which::Zephyr7bAlpha | Which::Zephyr7bBeta => "mistralai/Mistral-7B-v0.1", Which::OpenChat35 => "openchat/openchat_3.5", Which::Starling7bAlpha => "berkeley-nest/Starling-LM-7B-alpha", } } } #[derive(Parser, Debug)] #[command(author, version, about, long_about = None)] struct Args { /// GGML/GGUF file to load, typically a .bin/.gguf file generated by the quantize command from llama.cpp #[arg(long)] model: Option<String>, /// The initial prompt, use 'interactive' for entering multiple prompts in an interactive way /// and 'chat' for an interactive model where history of previous prompts and generated tokens /// is preserved. #[arg(long)] prompt: Option<String>, /// The length of the sample to generate (in tokens). #[arg(short = 'n', long, default_value_t = 1000)] sample_len: usize, /// The tokenizer config in json format. #[arg(long)] tokenizer: Option<String>, /// The temperature used to generate samples, use 0 for greedy sampling. #[arg(long, default_value_t = 0.8)] temperature: f64, /// Nucleus sampling probability cutoff. #[arg(long)] top_p: Option<f64>, /// The seed to use when generating random samples. #[arg(long, default_value_t = 299792458)] seed: u64, /// Enable tracing (generates a trace-timestamp.json file). #[arg(long)] tracing: bool, /// Display the token for the specified prompt. #[arg(long)] verbose_prompt: bool, /// Process prompt elements separately. #[arg(long)] split_prompt: bool, /// Run on CPU rather than GPU even if a GPU is available. #[arg(long)] cpu: bool, /// Penalty to be applied for repeating tokens, 1. means no penalty. #[arg(long, default_value_t = 1.1)] repeat_penalty: f32, /// The context size to consider for the repeat penalty. #[arg(long, default_value_t = 64)] repeat_last_n: usize, /// The model size to use. #[arg(long, default_value = "7b")] which: Which, /// Group-Query Attention, use 8 for the 70B version of LLaMAv2. 
#[arg(long)] gqa: Option<usize>, } impl Args { fn tokenizer(&self) -> anyhow::Result<Tokenizer> { let tokenizer_path = match &self.tokenizer { Some(config) => std::path::PathBuf::from(config), None => { let api = hf_hub::api::sync::Api::new()?; let repo = self.which.tokenizer_repo(); let api = api.model(repo.to_string()); api.get("tokenizer.json")? } }; Tokenizer::from_file(tokenizer_path).map_err(anyhow::Error::msg) } fn model(&self) -> anyhow::Result<std::path::PathBuf> { let model_path = match &self.model { Some(config) => std::path::PathBuf::from(config), None => { let (repo, filename) = match self.which { Which::L7b => ("TheBloke/Llama-2-7B-GGML", "llama-2-7b.ggmlv3.q4_0.bin"), Which::L13b => ("TheBloke/Llama-2-13B-GGML", "llama-2-13b.ggmlv3.q4_0.bin"), Which::L70b => ("TheBloke/Llama-2-70B-GGML", "llama-2-70b.ggmlv3.q4_0.bin"), Which::L7bChat => ( "TheBloke/Llama-2-7B-Chat-GGML", "llama-2-7b-chat.ggmlv3.q4_0.bin", ), Which::L13bChat => ( "TheBloke/Llama-2-13B-Chat-GGML", "llama-2-13b-chat.ggmlv3.q4_0.bin", ), Which::L70bChat => ( "TheBloke/Llama-2-70B-Chat-GGML", "llama-2-70b-chat.ggmlv3.q4_0.bin", ), Which::L7bCode => ("TheBloke/CodeLlama-7B-GGUF", "codellama-7b.Q8_0.gguf"), Which::L13bCode => ("TheBloke/CodeLlama-13B-GGUF", "codellama-13b.Q8_0.gguf"), Which::L34bCode => ("TheBloke/CodeLlama-34B-GGUF", "codellama-34b.Q8_0.gguf"), Which::Leo7b => ( "TheBloke/leo-hessianai-7B-GGUF", "leo-hessianai-7b.Q4_K_M.gguf", ), Which::Leo13b => ( "TheBloke/leo-hessianai-13B-GGUF", "leo-hessianai-13b.Q4_K_M.gguf", ), Which::Mixtral => ( "TheBloke/Mixtral-8x7B-v0.1-GGUF", "mixtral-8x7b-v0.1.Q4_K_M.gguf", ), Which::MixtralInstruct => ( "TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF", "mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf", ), Which::Mistral7b => ( "TheBloke/Mistral-7B-v0.1-GGUF", "mistral-7b-v0.1.Q4_K_S.gguf", ), Which::Mistral7bInstruct => ( "TheBloke/Mistral-7B-Instruct-v0.1-GGUF", "mistral-7b-instruct-v0.1.Q4_K_S.gguf", ), Which::Mistral7bInstructV02 => ( "TheBloke/Mistral-7B-Instruct-v0.2-GGUF", "mistral-7b-instruct-v0.2.Q4_K_S.gguf", ), Which::Zephyr7bAlpha => ( "TheBloke/zephyr-7B-alpha-GGUF", "zephyr-7b-alpha.Q4_K_M.gguf", ), Which::Zephyr7bBeta => { ("TheBloke/zephyr-7B-beta-GGUF", "zephyr-7b-beta.Q4_K_M.gguf") } Which::OpenChat35 => ("TheBloke/openchat_3.5-GGUF", "openchat_3.5.Q4_K_M.gguf"), Which::Starling7bAlpha => ( "TheBloke/Starling-LM-7B-alpha-GGUF", "starling-lm-7b-alpha.Q4_K_M.gguf", ), }; let api = hf_hub::api::sync::Api::new()?; let api = api.model(repo.to_string()); api.get(filename)? } }; Ok(model_path) } } fn format_size(size_in_bytes: usize) -> String { if size_in_bytes < 1_000 { format!("{}B", size_in_bytes) } else if size_in_bytes < 1_000_000 { format!("{:.2}KB", size_in_bytes as f64 / 1e3) } else if size_in_bytes < 1_000_000_000 { format!("{:.2}MB", size_in_bytes as f64 / 1e6) } else { format!("{:.2}GB", size_in_bytes as f64 / 1e9) } } fn main() -> anyhow::Result<()> { use tracing_chrome::ChromeLayerBuilder; use tracing_subscriber::prelude::*; let args = Args::parse(); let temperature = if args.temperature == 0. 
{ None } else { Some(args.temperature) }; let _guard = if args.tracing { let (chrome_layer, guard) = ChromeLayerBuilder::new().build(); tracing_subscriber::registry().with(chrome_layer).init(); Some(guard) } else { None }; println!( "avx: {}, neon: {}, simd128: {}, f16c: {}", candle::utils::with_avx(), candle::utils::with_neon(), candle::utils::with_simd128(), candle::utils::with_f16c() ); println!( "temp: {:.2} repeat-penalty: {:.2} repeat-last-n: {}", args.temperature, args.repeat_penalty, args.repeat_last_n ); let model_path = args.model()?; let mut file = std::fs::File::open(&model_path)?; let start = std::time::Instant::now(); let device = candle_examples::device(args.cpu)?; let mut model = match model_path.extension().and_then(|v| v.to_str()) { Some("gguf") => { let model = gguf_file::Content::read(&mut file).map_err(|e| e.with_path(model_path))?; let mut total_size_in_bytes = 0; for (_, tensor) in model.tensor_infos.iter() { let elem_count = tensor.shape.elem_count(); total_size_in_bytes += elem_count * tensor.ggml_dtype.type_size() / tensor.ggml_dtype.block_size(); } println!( "loaded {:?} tensors ({}) in {:.2}s", model.tensor_infos.len(), &format_size(total_size_in_bytes), start.elapsed().as_secs_f32(), ); ModelWeights::from_gguf(model, &mut file, &device)? } Some("ggml" | "bin") | Some(_) | None => { let model = ggml_file::Content::read(&mut file, &device) .map_err(|e| e.with_path(model_path))?; let mut total_size_in_bytes = 0; for (_, tensor) in model.tensors.iter() { let elem_count = tensor.shape().elem_count(); total_size_in_bytes += elem_count * tensor.dtype().type_size() / tensor.dtype().block_size(); } println!( "loaded {:?} tensors ({}) in {:.2}s", model.tensors.len(), &format_size(total_size_in_bytes), start.elapsed().as_secs_f32(), ); println!("params: {:?}", model.hparams); let default_gqa = match args.which { Which::L7b | Which::L13b | Which::L7bChat | Which::L13bChat | Which::L7bCode | Which::L13bCode | Which::L34bCode | Which::Leo7b | Which::Leo13b => 1, Which::Mixtral | Which::MixtralInstruct | Which::Mistral7b | Which::Mistral7bInstruct | Which::Mistral7bInstructV02 | Which::Zephyr7bAlpha | Which::Zephyr7bBeta | Which::L70b | Which::L70bChat | Which::OpenChat35 | Which::Starling7bAlpha => 8, }; ModelWeights::from_ggml(model, args.gqa.unwrap_or(default_gqa))? } }; println!("model built"); let tokenizer = args.tokenizer()?; let mut tos = TokenOutputStream::new(tokenizer); let prompt = match args.prompt.as_deref() { Some("chat") => Prompt::Chat, Some("interactive") => Prompt::Interactive, Some(s) => Prompt::One(s.to_string()), None => Prompt::One(DEFAULT_PROMPT.to_string()), }; let mut pre_prompt_tokens = vec![]; for prompt_index in 0.. 
{ let prompt_str = match &prompt { Prompt::One(prompt) => prompt.clone(), Prompt::Interactive | Prompt::Chat => { let is_interactive = matches!(prompt, Prompt::Interactive); print!("> "); std::io::stdout().flush()?; let mut prompt = String::new(); std::io::stdin().read_line(&mut prompt)?; if prompt.ends_with('\n') { prompt.pop(); if prompt.ends_with('\r') { prompt.pop(); } } if args.which.is_open_chat() { format!("GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:") } else if args.which.is_zephyr() { if prompt_index == 0 || is_interactive { format!("<|system|>\n</s>\n<|user|>\n{prompt}</s>\n<|assistant|>",) } else { format!("<|user|>\n{prompt}</s>\n<|assistant|>") } } else if args.which.is_mistral() { format!("[INST] {prompt} [/INST]") } else { prompt } } }; print!("{}", &prompt_str); let tokens = tos .tokenizer() .encode(prompt_str, true) .map_err(anyhow::Error::msg)?; if args.verbose_prompt { for (token, id) in tokens.get_tokens().iter().zip(tokens.get_ids().iter()) { let token = token.replace('▁', " ").replace("<0x0A>", "\n"); println!("{id:7} -> '{token}'"); } } let prompt_tokens = [&pre_prompt_tokens, tokens.get_ids()].concat(); let to_sample = args.sample_len.saturating_sub(1); let prompt_tokens = if prompt_tokens.len() + to_sample > model::MAX_SEQ_LEN - 10 { let to_remove = prompt_tokens.len() + to_sample + 10 - model::MAX_SEQ_LEN; prompt_tokens[prompt_tokens.len().saturating_sub(to_remove)..].to_vec() } else { prompt_tokens }; let mut all_tokens = vec![]; let mut logits_processor = LogitsProcessor::new(args.seed, temperature, args.top_p); let start_prompt_processing = std::time::Instant::now(); let mut next_token = if !args.split_prompt { let input = Tensor::new(prompt_tokens.as_slice(), &device)?.unsqueeze(0)?; let logits = model.forward(&input, 0)?; let logits = logits.squeeze(0)?; logits_processor.sample(&logits)? } else { let mut next_token = 0; for (pos, token) in prompt_tokens.iter().enumerate() { let input = Tensor::new(&[*token], &device)?.unsqueeze(0)?; let logits = model.forward(&input, pos)?; let logits = logits.squeeze(0)?; next_token = logits_processor.sample(&logits)? } next_token }; let prompt_dt = start_prompt_processing.elapsed(); all_tokens.push(next_token); if let Some(t) = tos.next_token(next_token)? { print!("{t}"); std::io::stdout().flush()?; } let eos_token = if args.which.is_open_chat() { "<|end_of_turn|>" } else { "</s>" }; let eos_token = *tos.tokenizer().get_vocab(true).get(eos_token).unwrap(); let start_post_prompt = std::time::Instant::now(); let mut sampled = 0; for index in 0..to_sample { let input = Tensor::new(&[next_token], &device)?.unsqueeze(0)?; let logits = model.forward(&input, prompt_tokens.len() + index)?; let logits = logits.squeeze(0)?; let logits = if args.repeat_penalty == 1. { logits } else { let start_at = all_tokens.len().saturating_sub(args.repeat_last_n); candle_transformers::utils::apply_repeat_penalty( &logits, args.repeat_penalty, &all_tokens[start_at..], )? }; next_token = logits_processor.sample(&logits)?; all_tokens.push(next_token); if let Some(t) = tos.next_token(next_token)? { print!("{t}"); std::io::stdout().flush()?; } sampled += 1; if next_token == eos_token { break; }; } if let Some(rest) = tos.decode_rest().map_err(candle::Error::msg)? 
{ print!("{rest}"); } std::io::stdout().flush()?; let dt = start_post_prompt.elapsed(); println!( "\n\n{:4} prompt tokens processed: {:.2} token/s", prompt_tokens.len(), prompt_tokens.len() as f64 / prompt_dt.as_secs_f64(), ); println!( "{sampled:4} tokens generated: {:.2} token/s", sampled as f64 / dt.as_secs_f64(), ); match prompt { Prompt::One(_) => break, Prompt::Interactive => {} Prompt::Chat => { pre_prompt_tokens = [prompt_tokens.as_slice(), all_tokens.as_slice()].concat() } } } Ok(()) }
candle/candle-examples/examples/quantized/main.rs/0
{ "file_path": "candle/candle-examples/examples/quantized/main.rs", "repo_id": "candle", "token_count": 11416 }
28
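The quantized LLaMA example above wraps the user prompt in a different chat template depending on the model family (OpenChat, Zephyr, or a Mistral instruct model). The sketch below pulls that dispatch out into a stand-alone function; the `Family` enum is a made-up stand-in for the `is_open_chat`/`is_zephyr`/`is_mistral` helpers, and the template strings are copied from the example.

```rust
/// Made-up stand-in for the `is_open_chat`/`is_zephyr`/`is_mistral` helpers above.
#[allow(dead_code)]
enum Family {
    OpenChat,
    Zephyr { first_turn: bool },
    MistralInstruct,
    Raw,
}

/// Wrap a user prompt in the chat template expected by each model family,
/// using the same template strings as the example above.
fn format_prompt(family: &Family, prompt: &str) -> String {
    match family {
        Family::OpenChat => {
            format!("GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:")
        }
        Family::Zephyr { first_turn: true } => {
            format!("<|system|>\n</s>\n<|user|>\n{prompt}</s>\n<|assistant|>")
        }
        Family::Zephyr { first_turn: false } => {
            format!("<|user|>\n{prompt}</s>\n<|assistant|>")
        }
        Family::MistralInstruct => format!("[INST] {prompt} [/INST]"),
        Family::Raw => prompt.to_string(),
    }
}

fn main() {
    let p = format_prompt(&Family::MistralInstruct, "What is group-query attention?");
    assert_eq!(p, "[INST] What is group-query attention? [/INST]");
    println!("{p}");
}
```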
#[cfg(feature = "mkl")] extern crate intel_mkl_src; #[cfg(feature = "accelerate")] extern crate accelerate_src; use candle::{DType, IndexOp, D}; use candle_nn::{Module, VarBuilder}; use candle_transformers::models::resnet; use clap::{Parser, ValueEnum}; #[derive(Clone, Copy, Debug, ValueEnum)] enum Which { #[value(name = "18")] Resnet18, #[value(name = "34")] Resnet34, #[value(name = "50")] Resnet50, #[value(name = "101")] Resnet101, #[value(name = "152")] Resnet152, } #[derive(Parser)] struct Args { #[arg(long)] model: Option<String>, #[arg(long)] image: String, /// Run on CPU rather than on GPU. #[arg(long)] cpu: bool, /// Variant of the model to use. #[arg(value_enum, long, default_value_t = Which::Resnet18)] which: Which, } pub fn main() -> anyhow::Result<()> { let args = Args::parse(); let device = candle_examples::device(args.cpu)?; let image = candle_examples::imagenet::load_image224(args.image)?.to_device(&device)?; println!("loaded image {image:?}"); let model_file = match args.model { None => { let api = hf_hub::api::sync::Api::new()?; let api = api.model("lmz/candle-resnet".into()); let filename = match args.which { Which::Resnet18 => "resnet18.safetensors", Which::Resnet34 => "resnet34.safetensors", Which::Resnet50 => "resnet50.safetensors", Which::Resnet101 => "resnet101.safetensors", Which::Resnet152 => "resnet152.safetensors", }; api.get(filename)? } Some(model) => model.into(), }; let vb = unsafe { VarBuilder::from_mmaped_safetensors(&[model_file], DType::F32, &device)? }; let class_count = candle_examples::imagenet::CLASS_COUNT as usize; let model = match args.which { Which::Resnet18 => resnet::resnet18(class_count, vb)?, Which::Resnet34 => resnet::resnet34(class_count, vb)?, Which::Resnet50 => resnet::resnet50(class_count, vb)?, Which::Resnet101 => resnet::resnet101(class_count, vb)?, Which::Resnet152 => resnet::resnet152(class_count, vb)?, }; println!("model built"); let logits = model.forward(&image.unsqueeze(0)?)?; let prs = candle_nn::ops::softmax(&logits, D::Minus1)? .i(0)? .to_vec1::<f32>()?; let mut prs = prs.iter().enumerate().collect::<Vec<_>>(); prs.sort_by(|(_, p1), (_, p2)| p2.total_cmp(p1)); for &(category_idx, pr) in prs.iter().take(5) { println!( "{:24}: {:.2}%", candle_examples::imagenet::CLASSES[category_idx], 100. * pr ); } Ok(()) }
candle/candle-examples/examples/resnet/main.rs/0
{ "file_path": "candle/candle-examples/examples/resnet/main.rs", "repo_id": "candle", "token_count": 1288 }
29
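The ResNet example above ranks ImageNet classes by sorting `(index, probability)` pairs in descending order with `total_cmp` and keeping the first five. The same pattern can be exercised on a plain vector; this is a minimal sketch with made-up probabilities, independent of candle.

```rust
/// Return the indices of the `k` largest values, mirroring the
/// sort-by-descending-probability step in the ResNet example above.
fn top_k(probs: &[f32], k: usize) -> Vec<(usize, f32)> {
    let mut indexed: Vec<(usize, f32)> = probs.iter().copied().enumerate().collect();
    // `total_cmp` gives a total order over floats, so `sort_by` never panics here.
    indexed.sort_by(|(_, p1), (_, p2)| p2.total_cmp(p1));
    indexed.truncate(k);
    indexed
}

fn main() {
    let probs = [0.05f32, 0.70, 0.10, 0.15];
    for (idx, p) in top_k(&probs, 2) {
        println!("class {idx}: {:.2}%", 100. * p);
    }
}
```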
// https://github.com/openai/whisper/blob/main/whisper/model.py/rgs // TODO: // - Batch size greater than 1. // - More token filters (SuppressBlanks, ApplyTimestampRules). #[cfg(feature = "accelerate")] extern crate accelerate_src; #[cfg(feature = "mkl")] extern crate intel_mkl_src; use anyhow::{Error as E, Result}; use candle::{Device, IndexOp, Tensor}; use candle_nn::{ops::softmax, VarBuilder}; use clap::{Parser, ValueEnum}; use hf_hub::{api::sync::Api, Repo, RepoType}; use rand::{distributions::Distribution, SeedableRng}; use tokenizers::Tokenizer; mod multilingual; mod pcm_decode; use candle_transformers::models::whisper::{self as m, audio, Config}; pub enum Model { Normal(m::model::Whisper), Quantized(m::quantized_model::Whisper), } // Maybe we should use some traits rather than doing the dispatch for all these. impl Model { pub fn config(&self) -> &Config { match self { Self::Normal(m) => &m.config, Self::Quantized(m) => &m.config, } } pub fn encoder_forward(&mut self, x: &Tensor, flush: bool) -> candle::Result<Tensor> { match self { Self::Normal(m) => m.encoder.forward(x, flush), Self::Quantized(m) => m.encoder.forward(x, flush), } } pub fn decoder_forward( &mut self, x: &Tensor, xa: &Tensor, flush: bool, ) -> candle::Result<Tensor> { match self { Self::Normal(m) => m.decoder.forward(x, xa, flush), Self::Quantized(m) => m.decoder.forward(x, xa, flush), } } pub fn decoder_final_linear(&self, x: &Tensor) -> candle::Result<Tensor> { match self { Self::Normal(m) => m.decoder.final_linear(x), Self::Quantized(m) => m.decoder.final_linear(x), } } } #[allow(dead_code)] #[derive(Debug, Clone)] struct DecodingResult { tokens: Vec<u32>, text: String, avg_logprob: f64, no_speech_prob: f64, temperature: f64, compression_ratio: f64, } #[allow(dead_code)] #[derive(Debug, Clone)] struct Segment { start: f64, duration: f64, dr: DecodingResult, } struct Decoder { model: Model, rng: rand::rngs::StdRng, task: Option<Task>, timestamps: bool, verbose: bool, tokenizer: Tokenizer, suppress_tokens: Tensor, sot_token: u32, transcribe_token: u32, translate_token: u32, eot_token: u32, no_speech_token: u32, no_timestamps_token: u32, language_token: Option<u32>, } impl Decoder { #[allow(clippy::too_many_arguments)] fn new( model: Model, tokenizer: Tokenizer, seed: u64, device: &Device, language_token: Option<u32>, task: Option<Task>, timestamps: bool, verbose: bool, ) -> Result<Self> { let no_timestamps_token = token_id(&tokenizer, m::NO_TIMESTAMPS_TOKEN)?; // Suppress the notimestamps token when in timestamps mode. 
// https://github.com/openai/whisper/blob/e8622f9afc4eba139bf796c210f5c01081000472/whisper/decoding.py#L452 let suppress_tokens: Vec<f32> = (0..model.config().vocab_size as u32) .map(|i| { if model.config().suppress_tokens.contains(&i) || timestamps && i == no_timestamps_token { f32::NEG_INFINITY } else { 0f32 } }) .collect(); let suppress_tokens = Tensor::new(suppress_tokens.as_slice(), device)?; let sot_token = token_id(&tokenizer, m::SOT_TOKEN)?; let transcribe_token = token_id(&tokenizer, m::TRANSCRIBE_TOKEN)?; let translate_token = token_id(&tokenizer, m::TRANSLATE_TOKEN)?; let eot_token = token_id(&tokenizer, m::EOT_TOKEN)?; let no_speech_token = m::NO_SPEECH_TOKENS .iter() .find_map(|token| token_id(&tokenizer, token).ok()); let no_speech_token = match no_speech_token { None => anyhow::bail!("unable to find any non-speech token"), Some(n) => n, }; Ok(Self { model, rng: rand::rngs::StdRng::seed_from_u64(seed), tokenizer, task, timestamps, verbose, suppress_tokens, sot_token, transcribe_token, translate_token, eot_token, no_speech_token, language_token, no_timestamps_token, }) } fn decode(&mut self, mel: &Tensor, t: f64) -> Result<DecodingResult> { let model = &mut self.model; let audio_features = model.encoder_forward(mel, true)?; if self.verbose { println!("audio features: {:?}", audio_features.dims()); } let sample_len = model.config().max_target_positions / 2; let mut sum_logprob = 0f64; let mut no_speech_prob = f64::NAN; let mut tokens = vec![self.sot_token]; if let Some(language_token) = self.language_token { tokens.push(language_token); } match self.task { None | Some(Task::Transcribe) => tokens.push(self.transcribe_token), Some(Task::Translate) => tokens.push(self.translate_token), } if !self.timestamps { tokens.push(self.no_timestamps_token); } for i in 0..sample_len { let tokens_t = Tensor::new(tokens.as_slice(), mel.device())?; // The model expects a batch dim but this inference loop does not handle // it so we add it at this point. let tokens_t = tokens_t.unsqueeze(0)?; let ys = model.decoder_forward(&tokens_t, &audio_features, i == 0)?; // Extract the no speech probability on the first iteration by looking at the first // token logits and the probability for the according token. if i == 0 { let logits = model.decoder_final_linear(&ys.i(..1)?)?.i(0)?.i(0)?; no_speech_prob = softmax(&logits, 0)? .i(self.no_speech_token as usize)? .to_scalar::<f32>()? as f64; } let (_, seq_len, _) = ys.dims3()?; let logits = model .decoder_final_linear(&ys.i((..1, seq_len - 1..))?)? .i(0)? .i(0)?; // TODO: Besides suppress tokens, we should apply the heuristics from // ApplyTimestampRules, i.e.: // - Timestamps come in pairs, except before EOT. // - Timestamps should be non-decreasing. // - If the sum of the probabilities of timestamps is higher than any other tokens, // only consider timestamps when sampling. // https://github.com/openai/whisper/blob/e8622f9afc4eba139bf796c210f5c01081000472/whisper/decoding.py#L439 let logits = logits.broadcast_add(&self.suppress_tokens)?; let next_token = if t > 0f64 { let prs = softmax(&(&logits / t)?, 0)?; let logits_v: Vec<f32> = prs.to_vec1()?; let distr = rand::distributions::WeightedIndex::new(&logits_v)?; distr.sample(&mut self.rng) as u32 } else { let logits_v: Vec<f32> = logits.to_vec1()?; logits_v .iter() .enumerate() .max_by(|(_, u), (_, v)| u.total_cmp(v)) .map(|(i, _)| i as u32) .unwrap() }; tokens.push(next_token); let prob = softmax(&logits, candle::D::Minus1)? .i(next_token as usize)? .to_scalar::<f32>()? 
as f64; if next_token == self.eot_token || tokens.len() > model.config().max_target_positions { break; } sum_logprob += prob.ln(); } let text = self.tokenizer.decode(&tokens, true).map_err(E::msg)?; let avg_logprob = sum_logprob / tokens.len() as f64; Ok(DecodingResult { tokens, text, avg_logprob, no_speech_prob, temperature: t, compression_ratio: f64::NAN, }) } fn decode_with_fallback(&mut self, segment: &Tensor) -> Result<DecodingResult> { for (i, &t) in m::TEMPERATURES.iter().enumerate() { let dr: Result<DecodingResult> = self.decode(segment, t); if i == m::TEMPERATURES.len() - 1 { return dr; } // On errors, we try again with a different temperature. match dr { Ok(dr) => { let needs_fallback = dr.compression_ratio > m::COMPRESSION_RATIO_THRESHOLD || dr.avg_logprob < m::LOGPROB_THRESHOLD; if !needs_fallback || dr.no_speech_prob > m::NO_SPEECH_THRESHOLD { return Ok(dr); } } Err(err) => { println!("Error running at {t}: {err}") } } } unreachable!() } fn run(&mut self, mel: &Tensor) -> Result<Vec<Segment>> { let (_, _, content_frames) = mel.dims3()?; let mut seek = 0; let mut segments = vec![]; while seek < content_frames { let start = std::time::Instant::now(); let time_offset = (seek * m::HOP_LENGTH) as f64 / m::SAMPLE_RATE as f64; let segment_size = usize::min(content_frames - seek, m::N_FRAMES); let mel_segment = mel.narrow(2, seek, segment_size)?; let segment_duration = (segment_size * m::HOP_LENGTH) as f64 / m::SAMPLE_RATE as f64; let dr = self.decode_with_fallback(&mel_segment)?; seek += segment_size; if dr.no_speech_prob > m::NO_SPEECH_THRESHOLD && dr.avg_logprob < m::LOGPROB_THRESHOLD { println!("no speech detected, skipping {seek} {dr:?}"); continue; } let segment = Segment { start: time_offset, duration: segment_duration, dr, }; if self.timestamps { println!( "{:.1}s -- {:.1}s", segment.start, segment.start + segment.duration, ); let mut tokens_to_decode = vec![]; let mut prev_timestamp_s = 0f32; for &token in segment.dr.tokens.iter() { if token == self.sot_token || token == self.eot_token { continue; } // The no_timestamp_token is the last before the timestamp ones. 
if token > self.no_timestamps_token { let timestamp_s = (token - self.no_timestamps_token + 1) as f32 / 50.; if !tokens_to_decode.is_empty() { let text = self .tokenizer .decode(&tokens_to_decode, true) .map_err(E::msg)?; println!(" {:.1}s-{:.1}s: {}", prev_timestamp_s, timestamp_s, text); tokens_to_decode.clear() } prev_timestamp_s = timestamp_s; } else { tokens_to_decode.push(token) } } if !tokens_to_decode.is_empty() { let text = self .tokenizer .decode(&tokens_to_decode, true) .map_err(E::msg)?; if !text.is_empty() { println!(" {:.1}s-...: {}", prev_timestamp_s, text); } tokens_to_decode.clear() } } else { println!( "{:.1}s -- {:.1}s: {}", segment.start, segment.start + segment.duration, segment.dr.text, ) } if self.verbose { println!("{seek}: {segment:?}, in {:?}", start.elapsed()); } segments.push(segment) } Ok(segments) } } pub fn token_id(tokenizer: &Tokenizer, token: &str) -> candle::Result<u32> { match tokenizer.token_to_id(token) { None => candle::bail!("no token-id for {token}"), Some(id) => Ok(id), } } #[derive(Clone, Copy, Debug, ValueEnum)] enum Task { Transcribe, Translate, } #[derive(Clone, Copy, Debug, PartialEq, Eq, ValueEnum)] enum WhichModel { Tiny, #[value(name = "tiny.en")] TinyEn, Base, #[value(name = "base.en")] BaseEn, Small, #[value(name = "small.en")] SmallEn, Medium, #[value(name = "medium.en")] MediumEn, Large, LargeV2, LargeV3, #[value(name = "distil-medium.en")] DistilMediumEn, #[value(name = "distil-large-v2")] DistilLargeV2, } impl WhichModel { fn is_multilingual(&self) -> bool { match self { Self::Tiny | Self::Base | Self::Small | Self::Medium | Self::Large | Self::LargeV2 | Self::LargeV3 | Self::DistilLargeV2 => true, Self::TinyEn | Self::BaseEn | Self::SmallEn | Self::MediumEn | Self::DistilMediumEn => { false } } } fn model_and_revision(&self) -> (&'static str, &'static str) { match self { Self::Tiny => ("openai/whisper-tiny", "main"), Self::TinyEn => ("openai/whisper-tiny.en", "refs/pr/15"), Self::Base => ("openai/whisper-base", "refs/pr/22"), Self::BaseEn => ("openai/whisper-base.en", "refs/pr/13"), Self::Small => ("openai/whisper-small", "main"), Self::SmallEn => ("openai/whisper-small.en", "refs/pr/10"), Self::Medium => ("openai/whisper-medium", "main"), Self::MediumEn => ("openai/whisper-medium.en", "main"), Self::Large => ("openai/whisper-large", "refs/pr/36"), Self::LargeV2 => ("openai/whisper-large-v2", "refs/pr/57"), Self::LargeV3 => ("openai/whisper-large-v3", "main"), Self::DistilMediumEn => ("distil-whisper/distil-medium.en", "main"), Self::DistilLargeV2 => ("distil-whisper/distil-large-v2", "main"), } } } #[derive(Parser, Debug)] #[command(author, version, about, long_about = None)] struct Args { /// Run on CPU rather than on GPU. #[arg(long)] cpu: bool, #[arg(long)] model_id: Option<String>, /// The model to use, check out available models: /// https://huggingface.co/models?search=whisper #[arg(long)] revision: Option<String>, /// The model to be used, can be tiny, small, medium. #[arg(long, default_value = "tiny.en")] model: WhichModel, /// The input to be processed, in wav format, will default to `jfk.wav`. Alternatively /// this can be set to sample:jfk, sample:gb1, ... to fetch a sample from the following /// repo: https://huggingface.co/datasets/Narsil/candle_demo/ #[arg(long)] input: Option<String>, /// The seed to use when generating random samples. #[arg(long, default_value_t = 299792458)] seed: u64, /// Enable tracing (generates a trace-timestamp.json file). #[arg(long)] tracing: bool, #[arg(long)] quantized: bool, /// Language. 
#[arg(long)] language: Option<String>, /// Task, when no task is specified, the input tokens contain only the sot token which can /// improve things when in no-timestamp mode. #[arg(long)] task: Option<Task>, /// Timestamps mode, this is not fully implemented yet. #[arg(long)] timestamps: bool, /// Print the full DecodingResult structure rather than just the text. #[arg(long)] verbose: bool, } fn main() -> Result<()> { use tracing_chrome::ChromeLayerBuilder; use tracing_subscriber::prelude::*; let args = Args::parse(); let _guard = if args.tracing { let (chrome_layer, guard) = ChromeLayerBuilder::new().build(); tracing_subscriber::registry().with(chrome_layer).init(); Some(guard) } else { None }; let device = candle_examples::device(args.cpu)?; let (default_model, default_revision) = if args.quantized { ("lmz/candle-whisper", "main") } else { args.model.model_and_revision() }; let default_model = default_model.to_string(); let default_revision = default_revision.to_string(); let (model_id, revision) = match (args.model_id, args.revision) { (Some(model_id), Some(revision)) => (model_id, revision), (Some(model_id), None) => (model_id, "main".to_string()), (None, Some(revision)) => (default_model, revision), (None, None) => (default_model, default_revision), }; let (config_filename, tokenizer_filename, weights_filename, input) = { let api = Api::new()?; let dataset = api.dataset("Narsil/candle-examples".to_string()); let repo = api.repo(Repo::with_revision(model_id, RepoType::Model, revision)); let sample = if let Some(input) = args.input { if let Some(sample) = input.strip_prefix("sample:") { dataset.get(&format!("samples_{sample}.wav"))? } else { std::path::PathBuf::from(input) } } else { println!("No audio file submitted: Downloading https://huggingface.co/datasets/Narsil/candle_demo/blob/main/samples_jfk.wav"); dataset.get("samples_jfk.wav")? 
}; let (config, tokenizer, model) = if args.quantized { let ext = match args.model { WhichModel::TinyEn => "tiny-en", WhichModel::Tiny => "tiny", _ => unimplemented!("no quantized support for {:?}", args.model), }; ( repo.get(&format!("config-{ext}.json"))?, repo.get(&format!("tokenizer-{ext}.json"))?, repo.get(&format!("model-{ext}-q80.gguf"))?, ) } else { let config = repo.get("config.json")?; let tokenizer = repo.get("tokenizer.json")?; let model = repo.get("model.safetensors")?; (config, tokenizer, model) }; (config, tokenizer, model, sample) }; let config: Config = serde_json::from_str(&std::fs::read_to_string(config_filename)?)?; let tokenizer = Tokenizer::from_file(tokenizer_filename).map_err(E::msg)?; let mel_bytes = match config.num_mel_bins { 80 => include_bytes!("melfilters.bytes").as_slice(), 128 => include_bytes!("melfilters128.bytes").as_slice(), nmel => anyhow::bail!("unexpected num_mel_bins {nmel}"), }; let mut mel_filters = vec![0f32; mel_bytes.len() / 4]; <byteorder::LittleEndian as byteorder::ByteOrder>::read_f32_into(mel_bytes, &mut mel_filters); let (pcm_data, sample_rate) = pcm_decode::pcm_decode(input)?; if sample_rate != m::SAMPLE_RATE as u32 { anyhow::bail!("input file must have a {} sampling rate", m::SAMPLE_RATE) } println!("pcm data loaded {}", pcm_data.len()); let mel = audio::pcm_to_mel(&config, &pcm_data, &mel_filters); let mel_len = mel.len(); let mel = Tensor::from_vec( mel, (1, config.num_mel_bins, mel_len / config.num_mel_bins), &device, )?; println!("loaded mel: {:?}", mel.dims()); let mut model = if args.quantized { let vb = candle_transformers::quantized_var_builder::VarBuilder::from_gguf( &weights_filename, &device, )?; Model::Quantized(m::quantized_model::Whisper::load(&vb, config)?) } else { let vb = unsafe { VarBuilder::from_mmaped_safetensors(&[weights_filename], m::DTYPE, &device)? }; Model::Normal(m::model::Whisper::load(&vb, config)?) }; let language_token = match (args.model.is_multilingual(), args.language) { (true, None) => Some(multilingual::detect_language(&mut model, &tokenizer, &mel)?), (false, None) => None, (true, Some(language)) => match token_id(&tokenizer, &format!("<|{language}|>")) { Ok(token_id) => Some(token_id), Err(_) => anyhow::bail!("language {language} is not supported"), }, (false, Some(_)) => { anyhow::bail!("a language cannot be set for non-multilingual models") } }; let mut dc = Decoder::new( model, tokenizer, args.seed, &device, language_token, args.task, args.timestamps, args.verbose, )?; dc.run(&mel)?; Ok(()) }
candle/candle-examples/examples/whisper/main.rs/0
{ "file_path": "candle/candle-examples/examples/whisper/main.rs", "repo_id": "candle", "token_count": 10704 }
30
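In the Whisper decoder above, the next token is either drawn from a temperature-scaled softmax or chosen greedily by argmax when the temperature is zero. The sketch below shows both paths over a raw `Vec<f32>` of logits, without the tensor machinery; the max-subtraction inside the softmax is a standard numerical-stability step added for this sketch, not something the example needs since it relies on candle's softmax.

```rust
/// Softmax of temperature-scaled logits, as computed before sampling in the decoder above.
fn softmax_with_temperature(logits: &[f32], t: f32) -> Vec<f32> {
    let scaled: Vec<f32> = logits.iter().map(|l| l / t).collect();
    // Subtract the max before exponentiating to avoid overflow.
    let max = scaled.iter().copied().fold(f32::NEG_INFINITY, f32::max);
    let exps: Vec<f32> = scaled.iter().map(|l| (l - max).exp()).collect();
    let sum: f32 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}

/// Greedy path: index of the largest logit, mirroring the `max_by(total_cmp)` call above.
fn argmax(logits: &[f32]) -> usize {
    logits
        .iter()
        .enumerate()
        .max_by(|(_, a), (_, b)| a.total_cmp(b))
        .map(|(i, _)| i)
        .unwrap()
}

fn main() {
    let logits = [1.0f32, 3.0, 0.5];
    assert_eq!(argmax(&logits), 1);
    let probs = softmax_with_temperature(&logits, 0.7);
    println!("{probs:?}");
}
```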
/****************************************************************************** * Copyright (c) 2023, Tri Dao. ******************************************************************************/ #pragma once #include <cuda.h> #include <vector> constexpr int TOTAL_DIM = 0; constexpr int H_DIM = 1; constexpr int D_DIM = 2; //////////////////////////////////////////////////////////////////////////////////////////////////// struct Qkv_params { using index_t = uint32_t; // The QKV matrices. void *__restrict__ q_ptr; void *__restrict__ k_ptr; void *__restrict__ v_ptr; // The stride between rows of the Q, K and V matrices. index_t q_batch_stride; index_t k_batch_stride; index_t v_batch_stride; index_t q_row_stride; index_t k_row_stride; index_t v_row_stride; index_t q_head_stride; index_t k_head_stride; index_t v_head_stride; // The number of heads. int h, h_k; // In the case of multi-query and grouped-query attention (MQA/GQA), nheads_k could be // different from nheads (query). int h_h_k_ratio; // precompute h / h_k, }; //////////////////////////////////////////////////////////////////////////////////////////////////// struct Flash_fwd_params : public Qkv_params { // The O matrix (output). void * __restrict__ o_ptr; void * __restrict__ oaccum_ptr; // The stride between rows of O. index_t o_batch_stride; index_t o_row_stride; index_t o_head_stride; // The pointer to the P matrix. void * __restrict__ p_ptr; // The pointer to the softmax sum. void * __restrict__ softmax_lse_ptr; void * __restrict__ softmax_lseaccum_ptr; // The dimensions. int b, seqlen_q, seqlen_k, seqlen_knew, d, seqlen_q_rounded, seqlen_k_rounded, d_rounded, rotary_dim; // The scaling factors for the kernel. float scale_softmax; float scale_softmax_log2; // array of length b+1 holding starting offset of each sequence. int * __restrict__ cu_seqlens_q; int * __restrict__ cu_seqlens_k; // If provided, the actual length of each k sequence. int * __restrict__ seqused_k; int *__restrict__ blockmask; // The K_new and V_new matrices. void * __restrict__ knew_ptr; void * __restrict__ vnew_ptr; // The stride between rows of the Q, K and V matrices. index_t knew_batch_stride; index_t vnew_batch_stride; index_t knew_row_stride; index_t vnew_row_stride; index_t knew_head_stride; index_t vnew_head_stride; // The cos and sin matrices for rotary embedding. void * __restrict__ rotary_cos_ptr; void * __restrict__ rotary_sin_ptr; // The indices to index into the KV cache. int *__restrict__ cache_batch_idx; // The dropout probability (probability of keeping an activation). float p_dropout; // uint32_t p_dropout_in_uint; // uint16_t p_dropout_in_uint16_t; uint8_t p_dropout_in_uint8_t; // Scale factor of 1 / (1 - p_dropout). float rp_dropout; float scale_softmax_rp_dropout; // Local window size int window_size_left, window_size_right; bool is_bf16; bool is_causal; // If is_seqlens_k_cumulative, then seqlen_k is cu_seqlens_k[bidb + 1] - cu_seqlens_k[bidb]. // Otherwise it's cu_seqlens_k[bidb], i.e., we use cu_seqlens_k to store the sequence lengths of K. bool is_seqlens_k_cumulative; bool is_rotary_interleaved; int num_splits; // For split-KV version void * __restrict__ alibi_slopes_ptr; index_t alibi_slopes_batch_stride; }; //////////////////////////////////////////////////////////////////////////////////////////////////// struct Flash_bwd_params : public Flash_fwd_params { // The dO and dQKV matrices. 
void *__restrict__ do_ptr; void *__restrict__ dq_ptr; void *__restrict__ dk_ptr; void *__restrict__ dv_ptr; // To accumulate dQ void *__restrict__ dq_accum_ptr; void *__restrict__ dk_accum_ptr; void *__restrict__ dv_accum_ptr; // // To accumulate dK and dV in case we're splitting the bwd along seqlen_q // dimension void *__restrict__ dk_accum_ptr; void *__restrict__ // dv_accum_ptr; // The stride between rows of the dO, dQ, dK and dV matrices. // TD [2022-04-16]: We're using 32-bit indexing to save registers. // The code probably won't work for arrays larger than 2GB. index_t do_batch_stride; index_t do_row_stride; index_t do_head_stride; index_t dq_batch_stride; index_t dk_batch_stride; index_t dv_batch_stride; index_t dq_row_stride; index_t dk_row_stride; index_t dv_row_stride; index_t dq_head_stride; index_t dk_head_stride; index_t dv_head_stride; // The pointer to the softmax d sum. void *__restrict__ dsoftmax_sum; bool deterministic; index_t dq_accum_split_stride; }; //////////////////////////////////////////////////////////////////////////////////////////////////// template<typename T, int Headdim> void run_mha_fwd_(Flash_fwd_params &params, cudaStream_t stream); template<typename T, int Headdim> void run_mha_fwd_splitkv_dispatch(Flash_fwd_params &params, cudaStream_t stream); template<typename T, int Headdim> void run_mha_bwd_(Flash_bwd_params &params, cudaStream_t stream, const bool configure);
candle/candle-flash-attn/kernels/flash.h/0
{ "file_path": "candle/candle-flash-attn/kernels/flash.h", "repo_id": "candle", "token_count": 2033 }
31
#include "cuda_utils.cuh" #include<stdint.h> #define AFFINE_OP(TYPENAME, FN_NAME) \ extern "C" __global__ void FN_NAME( \ const size_t numel, \ const size_t num_dims, \ const size_t *info, \ const TYPENAME *inp, \ TYPENAME *out, \ const TYPENAME mul, \ const TYPENAME add \ ) { \ const size_t *dims = info; \ const size_t *strides = info + num_dims; \ if (info == nullptr || is_contiguous(num_dims, dims, strides)) { \ for (unsigned int i = blockIdx.x * blockDim.x + threadIdx.x; i < numel; i += blockDim.x * gridDim.x) { \ TYPENAME x = inp ? inp[i] : out[i]; \ out[i] = x * mul + add; \ } \ } \ else { \ for (unsigned int i = blockIdx.x * blockDim.x + threadIdx.x; i < numel; i += blockDim.x * gridDim.x) { \ unsigned strided_i = get_strided_index(i, num_dims, dims, strides); \ TYPENAME x = inp ? inp[strided_i] : out[i]; \ out[i] = x * mul + add; \ } \ } \ } \ #if __CUDA_ARCH__ >= 800 AFFINE_OP(__nv_bfloat16, affine_bf16) #endif #if __CUDA_ARCH__ >= 530 AFFINE_OP(__half, affine_f16) #endif AFFINE_OP(float, affine_f32) AFFINE_OP(double, affine_f64) AFFINE_OP(uint8_t, affine_u8) AFFINE_OP(uint32_t, affine_u32) AFFINE_OP(int64_t, affine_i64)
candle/candle-kernels/src/affine.cu/0
{ "file_path": "candle/candle-kernels/src/affine.cu", "repo_id": "candle", "token_count": 659 }
32
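The affine kernels above compute `y = x * mul + add` elementwise, with a contiguous fast path and a strided fallback driven by `get_strided_index`. A small CPU reference, sketched below, mirrors both the formula and the strided index computation; it is only meant for sanity-checking kernel outputs on toy shapes and is not part of the crate.

```rust
/// CPU mirror of the `get_strided_index` helper used by the kernels above:
/// walk the dimensions from innermost to outermost, accumulating offsets.
fn strided_index(mut idx: usize, dims: &[usize], strides: &[usize]) -> usize {
    let mut strided = 0;
    for d in (0..dims.len()).rev() {
        strided += (idx % dims[d]) * strides[d];
        idx /= dims[d];
    }
    strided
}

/// Reference affine transform over a strided view: y[i] = x[strided(i)] * mul + add.
fn affine_strided(input: &[f32], dims: &[usize], strides: &[usize], mul: f32, add: f32) -> Vec<f32> {
    let numel: usize = dims.iter().product();
    (0..numel)
        .map(|i| input[strided_index(i, dims, strides)] * mul + add)
        .collect()
}

fn main() {
    // A 2x2 view over a contiguous buffer, row stride 2 and column stride 1.
    let input = [0.0f32, 1.0, 2.0, 3.0];
    let out = affine_strided(&input, &[2, 2], &[2, 1], 2.0, 1.0);
    assert_eq!(out, vec![1.0, 3.0, 5.0, 7.0]);
    println!("{out:?}");
}
```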
#include <metal_stdlib>

METAL_FUNC uint get_strided_index(
    uint idx,
    constant size_t &num_dims,
    constant size_t *dims,
    constant size_t *strides
) {
    uint strided_i = 0;
    for (uint d = 0; d < num_dims; d++) {
        uint dim_idx = num_dims - 1 - d;
        strided_i += (idx % dims[dim_idx]) * strides[dim_idx];
        idx /= dims[dim_idx];
    }
    return strided_i;
}

using namespace metal;

#define AFFINE(FN_NAME, T) \
kernel void FN_NAME( \
    constant size_t &dim, \
    constant float &mul, \
    constant float &add, \
    device const T *input, \
    device T *output, \
    uint id [[ thread_position_in_grid ]] \
) { \
    if (id >= dim) { \
        return; \
    } \
    output[id] = T(fma(float(input[id]), mul, add)); \
} \
kernel void FN_NAME##_strided( \
    constant size_t &dim, \
    constant size_t &num_dims, \
    constant size_t *dims, \
    constant size_t *strides, \
    constant float &mul, \
    constant float &add, \
    device const T *input, \
    device T *output, \
    uint id [[ thread_position_in_grid ]] \
) { \
    if (id >= dim) { \
        return; \
    } \
    output[id] = T(fma(float(input[get_strided_index(id, num_dims, dims, strides)]), mul, add)); \
}

#define POWF(FN_NAME, TYPENAME) \
kernel void FN_NAME( \
    constant size_t &dim, \
    constant float &mul, \
    device const TYPENAME *input, \
    device TYPENAME *output, \
    uint id [[ thread_position_in_grid ]] \
) { \
    if (id >= dim) { \
        return; \
    } \
    output[id] = TYPENAME(pow(input[id], TYPENAME(mul))); \
} \
kernel void FN_NAME##_strided( \
    constant size_t &dim, \
    constant size_t &num_dims, \
    constant size_t *dims, \
    constant size_t *strides, \
    constant float &mul, \
    device const TYPENAME *input, \
    device TYPENAME *output, \
    uint id [[ thread_position_in_grid ]] \
) { \
    if (id >= dim) { \
        return; \
    } \
    output[id] = TYPENAME(pow(input[get_strided_index(id, num_dims, dims, strides)], TYPENAME(mul))); \
}

#define ELU(FN_NAME, TYPENAME) \
kernel void FN_NAME( \
    constant size_t &dim, \
    constant float &mul, \
    device const TYPENAME *input, \
    device TYPENAME *output, \
    uint id [[ thread_position_in_grid ]] \
) { \
    if (id >= dim) { \
        return; \
    } \
    const TYPENAME x = input[id]; \
    output[id] = TYPENAME((x > 0)?x: mul * (exp(x) - 1)); \
} \
kernel void FN_NAME##_strided( \
    constant size_t &dim, \
    constant size_t &num_dims, \
    constant size_t *dims, \
    constant size_t *strides, \
    constant float &mul, \
    device const TYPENAME *input, \
    device TYPENAME *output, \
    uint id [[ thread_position_in_grid ]] \
) { \
    if (id >= dim) { \
        return; \
    } \
    const TYPENAME x = input[get_strided_index(id, num_dims, dims, strides)]; \
    output[id] = TYPENAME((x > 0)?x: mul * (exp(x) - 1)); \
}

AFFINE(affine_f32, float)
AFFINE(affine_f16, half)
POWF(powf_f32, float)
POWF(powf_f16, half)
ELU(elu_f32, float)
ELU(elu_f16, half)

#if defined(__HAVE_BFLOAT__)
AFFINE(affine_bf16, bfloat);
POWF(powf_bf16, bfloat);
ELU(elu_bf16, bfloat);
#endif
candle/candle-metal-kernels/src/affine.metal/0
{ "file_path": "candle/candle-metal-kernels/src/affine.metal", "repo_id": "candle", "token_count": 1464 }
33
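The `ELU` kernels above implement the exponential linear unit, `f(x) = x` for `x > 0` and `alpha * (exp(x) - 1)` otherwise, with `alpha` passed in as `mul`. A tiny CPU reference of that definition, useful for spot-checking kernel results, might look like this:

```rust
/// Reference ELU: x for positive inputs, alpha * (exp(x) - 1) otherwise.
fn elu(x: f32, alpha: f32) -> f32 {
    if x > 0.0 {
        x
    } else {
        alpha * (x.exp() - 1.0)
    }
}

fn main() {
    assert_eq!(elu(2.0, 1.0), 2.0);
    // exp(-1) - 1 is roughly -0.6321
    assert!((elu(-1.0, 1.0) + 0.6321).abs() < 1e-3);
    println!("elu(-1) = {}", elu(-1.0, 1.0));
}
```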
use candle_metal_kernels::{call_unary_contiguous, call_unary_strided, unary, Kernels}; use half::{bf16, f16}; use metal::objc::rc::autoreleasepool; use metal::{Device, MTLResourceOptions}; use rand; use std::any::type_name; use std::time::Instant; fn main() { let device = Device::system_default().unwrap(); let kernels = Kernels::new(); let f32_1k = (0..1000).map(|_| rand::random::<f32>()).collect::<Vec<_>>(); let f32_10k = (0..10000) .map(|_| rand::random::<f32>()) .collect::<Vec<_>>(); let f32_100k = (0..100000) .map(|_| rand::random::<f32>()) .collect::<Vec<_>>(); let f16_map = |v: &[f32]| v.iter().map(|v| f16::from_f32(*v)).collect::<Vec<_>>(); let f16_1k = f16_map(&f32_1k); let f16_10k = f16_map(&f32_10k); let f16_100k = f16_map(&f32_100k); let bf16_map = |v: &[f32]| v.iter().map(|v| bf16::from_f32(*v)).collect::<Vec<_>>(); let bf16_1k = bf16_map(&f32_1k); let bf16_10k = bf16_map(&f32_10k); let bf16_100k = bf16_map(&f32_100k); let f32_ckernels = [ unary::contiguous::sin::FLOAT, unary::contiguous::cos::FLOAT, unary::contiguous::exp::FLOAT, unary::contiguous::sqr::FLOAT, unary::contiguous::sqrt::FLOAT, unary::contiguous::neg::FLOAT, unary::contiguous::copy::FLOAT, ]; let f32_skernels = [ unary::strided::sin::FLOAT, unary::strided::cos::FLOAT, unary::strided::exp::FLOAT, unary::strided::sqr::FLOAT, unary::strided::sqrt::FLOAT, unary::strided::neg::FLOAT, unary::strided::copy::FLOAT, ]; let f16_ckernels = [ unary::contiguous::sin::HALF, unary::contiguous::cos::HALF, unary::contiguous::exp::HALF, unary::contiguous::sqr::HALF, unary::contiguous::sqrt::HALF, unary::contiguous::neg::HALF, unary::contiguous::copy::HALF, ]; let f16_skernels = [ unary::strided::sin::HALF, unary::strided::cos::HALF, unary::strided::exp::HALF, unary::strided::sqr::HALF, unary::strided::sqrt::HALF, unary::strided::neg::HALF, unary::strided::copy::HALF, ]; let bf16_ckernels = [ unary::contiguous::sin::BFLOAT, unary::contiguous::cos::BFLOAT, unary::contiguous::exp::BFLOAT, unary::contiguous::sqr::BFLOAT, unary::contiguous::sqrt::BFLOAT, unary::contiguous::neg::BFLOAT, unary::contiguous::copy::BFLOAT, ]; let bf16_skernels = [ unary::strided::sin::BFLOAT, unary::strided::cos::BFLOAT, unary::strided::exp::BFLOAT, unary::strided::sqr::BFLOAT, unary::strided::sqrt::BFLOAT, unary::strided::neg::BFLOAT, unary::strided::copy::BFLOAT, ]; println!( "{0: <5} | {1: <19} | {2: <6} | {3: <5} | {4: <11} | {5: <11}", "dtype", "kernel", "size", "runs", "total time", "avg time" ); // f32 run_unary_bench(&device, &kernels, &f32_1k, f32_ckernels, f32_skernels); run_unary_bench(&device, &kernels, &f32_10k, f32_ckernels, f32_skernels); run_unary_bench(&device, &kernels, &f32_100k, f32_ckernels, f32_skernels); // f16 run_unary_bench(&device, &kernels, &f16_1k, f16_ckernels, f16_skernels); run_unary_bench(&device, &kernels, &f16_10k, f16_ckernels, f16_skernels); run_unary_bench(&device, &kernels, &f16_100k, f16_ckernels, f16_skernels); // bf16 run_unary_bench(&device, &kernels, &bf16_1k, bf16_ckernels, bf16_skernels); run_unary_bench(&device, &kernels, &bf16_10k, bf16_ckernels, bf16_skernels); run_unary_bench(&device, &kernels, &bf16_100k, bf16_ckernels, bf16_skernels); } fn run_unary_bench<T: Clone>( device: &Device, kernels: &Kernels, v: &[T], contiguous: [unary::contiguous::Kernel; 7], strided: [unary::strided::Kernel; 7], ) { let command_queue = device.new_command_queue(); let options = MTLResourceOptions::StorageModeManaged; let iterations = 10000; let input = device.new_buffer_with_data( v.as_ptr() as *const core::ffi::c_void, 
core::mem::size_of_val(v) as u64, options, ); let mut output = device.new_buffer(core::mem::size_of_val(v) as u64, options); // Contiguous for kernel_name in contiguous { let total_time = autoreleasepool(|| { let command_buffer = command_queue.new_command_buffer(); let start = Instant::now(); for _ in 0..iterations { call_unary_contiguous( device, &command_buffer, kernels, kernel_name, v.len(), &input, &mut output, ) .unwrap(); } command_buffer.commit(); command_buffer.wait_until_completed(); start.elapsed() }); println!( "{0: <5} | {1: <19} | {2: <6} | {3: <5} | {4: <11?} | {5: <11?}", type_name::<T>().split("::").last().unwrap(), kernel_name.0, v.len(), iterations, total_time, total_time / iterations ); } // Strided let shape = vec![2, 5_000]; let strides = vec![2, 1]; let offset = 0; for kernel_name in &strided { let total_time = autoreleasepool(|| { let command_buffer = command_queue.new_command_buffer(); let start = Instant::now(); for _ in 0..iterations { call_unary_strided( device, command_buffer, &kernels, kernel_name, &shape, &input, &strides, offset, &mut output, 0, ) .unwrap(); } command_buffer.commit(); command_buffer.wait_until_completed(); start.elapsed() }); println!( "{0: <5} | {1: <19} | {2: <6} | {3: <5} | {4: <11?} | {5: <11?}", type_name::<T>().split("::").last().unwrap(), kernel_name.0, v.len(), iterations, total_time, total_time / iterations ); } }
candle/candle-metal-kernels/tmp/unary.rs/0
{ "file_path": "candle/candle-metal-kernels/tmp/unary.rs", "repo_id": "candle", "token_count": 3489 }
34
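The benchmark above reports total and average wall-clock time per kernel over a fixed number of iterations. Throughput is often the easier number to compare across sizes; the helper below, a sketch rather than part of the benchmark, converts the measured duration into elements per second from the same quantities the table prints.

```rust
use std::time::Duration;

/// Elements processed per second, given the element count, the number of
/// iterations, and the total measured duration reported by the benchmark above.
fn throughput(elem_count: usize, iterations: u64, total: Duration) -> f64 {
    (elem_count as u64 * iterations) as f64 / total.as_secs_f64()
}

fn main() {
    let total = Duration::from_millis(250);
    let eps = throughput(100_000, 10_000, total);
    println!("{:.2} Gelem/s", eps / 1e9);
}
```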
use candle::{Result, Tensor};

/// The negative log likelihood loss.
///
/// Arguments
///
/// * [inp]: The input tensor of dimensions `N, C` where `N` is the batch size and `C` the number
/// of categories. This is expected to contain log probabilities.
/// * [target]: The ground truth labels as a tensor of u32 of dimension `N`.
///
/// The resulting tensor is a scalar containing the average value over the batch.
pub fn nll(inp: &Tensor, target: &Tensor) -> Result<Tensor> {
    let b_sz = match target.dims() {
        &[b_sz] => b_sz,
        dims => candle::bail!("the target tensor should have a single dimension ({dims:?})"),
    };
    match inp.dims() {
        &[inp_b_sz, _] => {
            if inp_b_sz != b_sz {
                candle::bail!("batch size mismatch between inp ({inp_b_sz}) and target ({b_sz})")
            }
        }
        dims => candle::bail!("the input tensor should have two dimensions ({dims:?})"),
    }
    inp.gather(&target.unsqueeze(1)?, 1)?
        .sum_all()?
        .affine(-1f64 / b_sz as f64, 0.)
}

/// The cross-entropy loss.
///
/// Arguments
///
/// * [inp]: The input tensor of dimensions `N, C` where `N` is the batch size and `C` the number
/// of categories. This is expected to contain raw logits.
/// * [target]: The ground truth labels as a tensor of u32 of dimension `N`.
///
/// The resulting tensor is a scalar containing the average value over the batch.
pub fn cross_entropy(inp: &Tensor, target: &Tensor) -> Result<Tensor> {
    if inp.rank() != 2 {
        candle::bail!("cross_entropy expects an input tensor of rank 2")
    }
    let inp = crate::ops::log_softmax(inp, 1)?;
    nll(&inp, target)
}

/// The mean squared error loss.
pub fn mse(inp: &Tensor, target: &Tensor) -> Result<Tensor> {
    (inp - target)?.sqr()?.mean_all()
}

/// The binary cross-entropy with logit loss.
///
/// Arguments
///
/// * [inp]: The input tensor of dimensions `N, C` where `N` is the batch size and `C` the number
/// of categories. This is expected to contain raw logits.
/// * [target]: The ground truth labels as a tensor with values between 0 and 1, of dimensions `N, C`
/// where `N` is the batch size and `C` the number of categories.
///
/// The resulting tensor is a scalar containing the average value over the batch.
pub fn binary_cross_entropy_with_logit(inp: &Tensor, target: &Tensor) -> Result<Tensor> {
    let inp = crate::ops::sigmoid(inp)?;

    let left_side = target * inp.log()?;
    let right_side = (target.affine(-1., 1.))? * inp.affine(-1., 1.)?.log()?;

    let loss = left_side? + right_side?;
    let loss = loss?.neg()?.mean_all()?;

    Ok(loss)
}
candle/candle-nn/src/loss.rs/0
{ "file_path": "candle/candle-nn/src/loss.rs", "repo_id": "candle", "token_count": 1040 }
35
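All the loss functions above take candle tensors and return a scalar tensor averaged over the batch. A short usage sketch follows; it assumes the workspace-style `candle` alias for `candle-core` and the `candle_nn::loss` module path used elsewhere in the crate, and the input values are arbitrary.

```rust
use candle::{Device, Result, Tensor};

fn main() -> Result<()> {
    let device = Device::Cpu;
    // Two samples, three classes: raw logits for `cross_entropy`.
    let logits = Tensor::new(&[[2.0f32, 0.5, 0.1], [0.2, 0.3, 3.0]], &device)?;
    // Class indices as u32, one per sample.
    let targets = Tensor::new(&[0u32, 2], &device)?;
    let ce = candle_nn::loss::cross_entropy(&logits, &targets)?;
    println!("cross-entropy: {}", ce.to_scalar::<f32>()?);

    // Mean squared error between two tensors of the same shape.
    let preds = Tensor::new(&[0.1f32, 0.9], &device)?;
    let truth = Tensor::new(&[0.0f32, 1.0], &device)?;
    let mse = candle_nn::loss::mse(&preds, &truth)?;
    println!("mse: {}", mse.to_scalar::<f32>()?);
    Ok(())
}
```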
# candle-onnx

This crate adds ONNX support to candle.

## FAQ

#### Missing protoc installation when compiling candle-onnx

The candle-onnx dependency `prost-build` no longer bundles a `protoc` binary. This can cause the following error when attempting to compile candle-onnx:

```
error: failed to run custom build command for `candle-onnx`

Caused by:
// (...)
Could not find `protoc` installation and this build crate cannot proceed without this knowledge.
```

To fix this issue, install `protoc` on your system and make it available in your `PATH`. See the [protoc documentation](https://grpc.io/docs/protoc-installation/) for more information.
candle/candle-onnx/README.md/0
{ "file_path": "candle/candle-onnx/README.md", "repo_id": "candle", "token_count": 180 }
36
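Since the build script only needs `protoc` to be reachable on the `PATH`, it can be convenient to verify the installation programmatically before building. The snippet below is a stand-alone sketch, not part of candle-onnx, and simply shells out to `protoc --version`.

```rust
use std::process::Command;

/// Returns the `protoc --version` output if the binary is reachable on PATH.
fn protoc_version() -> Option<String> {
    let output = Command::new("protoc").arg("--version").output().ok()?;
    if output.status.success() {
        Some(String::from_utf8_lossy(&output.stdout).trim().to_string())
    } else {
        None
    }
}

fn main() {
    match protoc_version() {
        Some(v) => println!("found {v}"),
        None => eprintln!("protoc not found on PATH, see https://grpc.io/docs/protoc-installation/"),
    }
}
```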
# Generated content DO NOT EDIT from typing import Any, Callable, Dict, List, Optional, Tuple, Union, Sequence from os import PathLike from candle.typing import _ArrayLike, Device, Scalar, Index, Shape from candle import Tensor, DType, QTensor @staticmethod def avg_pool2d(tensor: Tensor, ksize: int, stride: int = 1) -> Tensor: """ Applies the 2d avg-pool function to a given tensor.# """ pass @staticmethod def gelu(tensor: Tensor) -> Tensor: """ Applies the Gaussian Error Linear Unit (GELU) function to a given tensor. """ pass @staticmethod def max_pool2d(tensor: Tensor, ksize: int, stride: int = 1) -> Tensor: """ Applies the 2d max-pool function to a given tensor.# """ pass @staticmethod def relu(tensor: Tensor) -> Tensor: """ Applies the Rectified Linear Unit (ReLU) function to a given tensor. """ pass @staticmethod def silu(tensor: Tensor) -> Tensor: """ Applies the Sigmoid Linear Unit (SiLU) function to a given tensor. """ pass @staticmethod def softmax(tensor: Tensor, dim: int) -> Tensor: """ Applies the Softmax function to a given tensor.# """ pass @staticmethod def tanh(tensor: Tensor) -> Tensor: """ Applies the tanh function to a given tensor. """ pass
candle/candle-pyo3/py_src/candle/functional/__init__.pyi/0
{ "file_path": "candle/candle-pyo3/py_src/candle/functional/__init__.pyi", "repo_id": "candle", "token_count": 484 }
37
# This example shows how the candle Python api can be used to replicate llama.cpp. import sys from typing import Dict, Tuple, Any import candle from candle.models.llama import QuantizedLlama from candle import utils MAX_SEQ_LEN = 4096 def gguf_rename(tensor_name: str): if tensor_name == "token_embd.weight": return "tok_embeddings.weight" if tensor_name == "output_norm.weight": return "norm.weight" tensor_name = tensor_name.replace("blk.", "layers.") tensor_name = tensor_name.replace(".attn_q.", ".attention.wq.") tensor_name = tensor_name.replace(".attn_k.", ".attention.wk.") tensor_name = tensor_name.replace(".attn_v.", ".attention.wv.") tensor_name = tensor_name.replace(".attn_output.", ".attention.wo.") tensor_name = tensor_name.replace(".ffn_gate.", ".feed_forward.w1.") tensor_name = tensor_name.replace(".ffn_down.", ".feed_forward.w2.") tensor_name = tensor_name.replace(".ffn_up.", ".feed_forward.w3.") tensor_name = tensor_name.replace(".attn_norm.", ".attention_norm.") return tensor_name def main(): if len(sys.argv) < 2: raise ValueError("missing weight file argument") filename = sys.argv[1] print(f"reading model file {filename}") if filename.endswith("gguf"): all_tensors, metadata = utils.load_gguf(filename) vocab = metadata["tokenizer.ggml.tokens"] for i, v in enumerate(vocab): vocab[i] = "\n" if v == "<0x0A>" else v.replace("▁", " ") hparams = {k: v for (k, v) in metadata.items() if not k.startswith("tokenizer")} print(hparams) hparams = { "n_vocab": len(vocab), "n_embd": metadata["llama.embedding_length"], "n_mult": 256, "n_head": metadata["llama.attention.head_count"], "n_head_kv": metadata["llama.attention.head_count_kv"], "n_layer": metadata["llama.block_count"], "n_rot": metadata["llama.rope.dimension_count"], "rope_freq": metadata.get("llama.rope.freq_base", 10000.0), "ftype": metadata["general.file_type"], "context_length": metadata["llama.context_length"], } all_tensors = {gguf_rename(k): v for k, v in all_tensors.items()} else: all_tensors, hparams, vocab = utils.load_ggml(filename) hparams["context_length"] = 2048 print(hparams) model = QuantizedLlama(hparams, all_tensors) print("model built, starting inference") tokens = [1] for token_idx in range(500): last_token = tokens[-1] lt = candle.tensor([last_token]).unsqueeze(0) logits = model.forward(lt, len(tokens)) # Greedy sampling for now # pr = candle.nn.softmax(logits, -1) m = logits.get(0).argmax_keepdim(-1) next_token = m.values()[0] print(vocab[next_token], end="", flush=True) tokens.append(next_token) if __name__ == "__main__": main()
candle/candle-pyo3/quant-llama.py/0
{ "file_path": "candle/candle-pyo3/quant-llama.py", "repo_id": "candle", "token_count": 1318 }
38
# candle-transformers
candle/candle-transformers/README.md/0
{ "file_path": "candle/candle-transformers/README.md", "repo_id": "candle", "token_count": 6 }
39
use std::sync::Arc; use candle::{DType, Device, Module, Result, Tensor, D}; use candle_nn::{linear_b as linear, Linear, VarBuilder}; fn default_max_position_embeddings() -> usize { 4096 } #[derive(serde::Deserialize, Debug, Clone)] pub struct Config { pub attention_bias: bool, pub head_dim: usize, pub hidden_act: candle_nn::Activation, pub hidden_size: usize, pub intermediate_size: usize, pub num_attention_heads: usize, pub num_hidden_layers: usize, pub num_key_value_heads: usize, pub rms_norm_eps: f64, pub rope_theta: f64, pub vocab_size: usize, #[serde(default = "default_max_position_embeddings")] pub max_position_embeddings: usize, } #[derive(Debug, Clone)] struct RmsNorm { weight: Tensor, eps: f64, } impl RmsNorm { fn new(dim: usize, eps: f64, vb: VarBuilder) -> Result<Self> { let weight = vb.get(dim, "weight")?; Ok(Self { weight, eps }) } } impl Module for RmsNorm { fn forward(&self, x: &Tensor) -> Result<Tensor> { let x_dtype = x.dtype(); let internal_dtype = match x_dtype { DType::F16 | DType::BF16 => DType::F32, d => d, }; let hidden_size = x.dim(D::Minus1)?; let x = x.to_dtype(internal_dtype)?; let norm_x = (x.sqr()?.sum_keepdim(D::Minus1)? / hidden_size as f64)?; let x_normed = x.broadcast_div(&(norm_x + self.eps)?.sqrt()?)?; x_normed .to_dtype(x_dtype)? .broadcast_mul(&(&self.weight + 1.0)?) } } #[derive(Debug, Clone)] struct RotaryEmbedding { sin: Tensor, cos: Tensor, } fn rotate_half(xs: &Tensor) -> Result<Tensor> { let last_dim = xs.dim(D::Minus1)?; let xs1 = xs.narrow(D::Minus1, 0, last_dim / 2)?; let xs2 = xs.narrow(D::Minus1, last_dim / 2, last_dim - last_dim / 2)?; Tensor::cat(&[&xs2.neg()?, &xs1], D::Minus1) } impl RotaryEmbedding { fn new(dtype: DType, cfg: &Config, dev: &Device) -> Result<Self> { let dim = cfg.head_dim; let max_seq_len = cfg.max_position_embeddings; let inv_freq: Vec<_> = (0..dim) .step_by(2) .map(|i| 1f32 / cfg.rope_theta.powf(i as f64 / dim as f64) as f32) .collect(); let inv_freq_len = inv_freq.len(); let inv_freq = Tensor::from_vec(inv_freq, (1, inv_freq_len), dev)?.to_dtype(dtype)?; let t = Tensor::arange(0u32, max_seq_len as u32, dev)? .to_dtype(dtype)? .reshape((max_seq_len, 1))?; let freqs = t.matmul(&inv_freq)?; let freqs = Tensor::cat(&[&freqs, &freqs], D::Minus1)?; Ok(Self { sin: freqs.sin()?, cos: freqs.cos()?, }) } fn apply_rotary_emb_qkv( &self, q: &Tensor, k: &Tensor, seqlen_offset: usize, ) -> Result<(Tensor, Tensor)> { let (_b_sz, _h, seq_len, _n_embd) = q.dims4()?; let cos = self.cos.narrow(0, seqlen_offset, seq_len)?; let sin = self.sin.narrow(0, seqlen_offset, seq_len)?; let cos = cos.unsqueeze(0)?.unsqueeze(0)?; // (1, 1, seq_len, dim) let sin = sin.unsqueeze(0)?.unsqueeze(0)?; // (1, 1, seq_len, dim) let q_embed = (q.broadcast_mul(&cos)? + rotate_half(q)?.broadcast_mul(&sin))?; let k_embed = (k.broadcast_mul(&cos)? 
+ rotate_half(k)?.broadcast_mul(&sin))?; Ok((q_embed, k_embed)) } } #[derive(Debug, Clone)] #[allow(clippy::upper_case_acronyms)] struct MLP { gate_proj: Linear, up_proj: Linear, down_proj: Linear, act_fn: candle_nn::Activation, } impl MLP { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let hidden_sz = cfg.hidden_size; let intermediate_sz = cfg.intermediate_size; let gate_proj = linear(hidden_sz, intermediate_sz, false, vb.pp("gate_proj"))?; let up_proj = linear(hidden_sz, intermediate_sz, false, vb.pp("up_proj"))?; let down_proj = linear(intermediate_sz, hidden_sz, false, vb.pp("down_proj"))?; Ok(Self { gate_proj, up_proj, down_proj, act_fn: cfg.hidden_act, }) } } impl Module for MLP { fn forward(&self, xs: &Tensor) -> Result<Tensor> { let lhs = xs.apply(&self.gate_proj)?.apply(&self.act_fn)?; let rhs = xs.apply(&self.up_proj)?; (lhs * rhs)?.apply(&self.down_proj) } } #[derive(Debug, Clone)] struct Attention { q_proj: Linear, k_proj: Linear, v_proj: Linear, o_proj: Linear, num_heads: usize, num_kv_heads: usize, num_kv_groups: usize, head_dim: usize, rotary_emb: Arc<RotaryEmbedding>, kv_cache: Option<(Tensor, Tensor)>, } impl Attention { fn new(rotary_emb: Arc<RotaryEmbedding>, cfg: &Config, vb: VarBuilder) -> Result<Self> { let hidden_sz = cfg.hidden_size; let num_heads = cfg.num_attention_heads; let num_kv_heads = cfg.num_key_value_heads; let num_kv_groups = num_heads / num_kv_heads; let head_dim = cfg.head_dim; let bias = cfg.attention_bias; let q_proj = linear(hidden_sz, num_heads * head_dim, bias, vb.pp("q_proj"))?; let k_proj = linear(hidden_sz, num_kv_heads * head_dim, bias, vb.pp("k_proj"))?; let v_proj = linear(hidden_sz, num_kv_heads * head_dim, bias, vb.pp("v_proj"))?; let o_proj = linear(num_heads * head_dim, hidden_sz, bias, vb.pp("o_proj"))?; Ok(Self { q_proj, k_proj, v_proj, o_proj, num_heads, num_kv_heads, num_kv_groups, head_dim, rotary_emb, kv_cache: None, }) } fn repeat_kv(&self, xs: Tensor) -> Result<Tensor> { let n_rep = self.num_kv_groups; if n_rep == 1 { Ok(xs) } else { let (b_sz, num_kv_heads, seq_len, head_dim) = xs.dims4()?; xs.unsqueeze(2)? .expand((b_sz, num_kv_heads, n_rep, seq_len, head_dim))? .reshape((b_sz, num_kv_heads * n_rep, seq_len, head_dim)) } } fn forward( &mut self, xs: &Tensor, attention_mask: Option<&Tensor>, seqlen_offset: usize, ) -> Result<Tensor> { let (b_sz, q_len, _) = xs.dims3()?; let query_states = self.q_proj.forward(xs)?; let key_states = self.k_proj.forward(xs)?; let value_states = self.v_proj.forward(xs)?; let query_states = query_states .reshape((b_sz, q_len, self.num_heads, self.head_dim))? .transpose(1, 2)?; let key_states = key_states .reshape((b_sz, q_len, self.num_kv_heads, self.head_dim))? .transpose(1, 2)?; let value_states = value_states .reshape((b_sz, q_len, self.num_kv_heads, self.head_dim))? 
.transpose(1, 2)?; let (query_states, key_states) = self.rotary_emb .apply_rotary_emb_qkv(&query_states, &key_states, seqlen_offset)?; let (key_states, value_states) = match &self.kv_cache { None => (key_states, value_states), Some((prev_k, prev_v)) => { let key_states = Tensor::cat(&[prev_k, &key_states], 2)?; let value_states = Tensor::cat(&[prev_v, &value_states], 2)?; (key_states, value_states) } }; self.kv_cache = Some((key_states.clone(), value_states.clone())); let key_states = self.repeat_kv(key_states)?.contiguous()?; let value_states = self.repeat_kv(value_states)?.contiguous()?; let attn_output = { let scale = 1f64 / f64::sqrt(self.head_dim as f64); let attn_weights = (query_states.matmul(&key_states.transpose(2, 3)?)? * scale)?; let attn_weights = match attention_mask { None => attn_weights, Some(mask) => attn_weights.broadcast_add(mask)?, }; let attn_weights = candle_nn::ops::softmax_last_dim(&attn_weights)?; attn_weights.matmul(&value_states)? }; attn_output .transpose(1, 2)? .reshape((b_sz, q_len, ()))? .apply(&self.o_proj) } fn clear_kv_cache(&mut self) { self.kv_cache = None } } #[derive(Debug, Clone)] struct DecoderLayer { self_attn: Attention, mlp: MLP, input_layernorm: RmsNorm, post_attention_layernorm: RmsNorm, } impl DecoderLayer { fn new(rotary_emb: Arc<RotaryEmbedding>, cfg: &Config, vb: VarBuilder) -> Result<Self> { let self_attn = Attention::new(rotary_emb, cfg, vb.pp("self_attn"))?; let mlp = MLP::new(cfg, vb.pp("mlp"))?; let input_layernorm = RmsNorm::new(cfg.hidden_size, cfg.rms_norm_eps, vb.pp("input_layernorm"))?; let post_attention_layernorm = RmsNorm::new( cfg.hidden_size, cfg.rms_norm_eps, vb.pp("post_attention_layernorm"), )?; Ok(Self { self_attn, mlp, input_layernorm, post_attention_layernorm, }) } fn forward( &mut self, xs: &Tensor, attention_mask: Option<&Tensor>, seqlen_offset: usize, ) -> Result<Tensor> { let residual = xs; let xs = self.input_layernorm.forward(xs)?; let xs = self.self_attn.forward(&xs, attention_mask, seqlen_offset)?; let xs = (xs + residual)?; let residual = &xs; let xs = xs.apply(&self.post_attention_layernorm)?.apply(&self.mlp)?; residual + xs } fn clear_kv_cache(&mut self) { self.self_attn.clear_kv_cache() } } #[derive(Debug, Clone)] pub struct Model { embed_tokens: candle_nn::Embedding, layers: Vec<DecoderLayer>, norm: RmsNorm, lm_head: Linear, device: Device, dtype: DType, hidden_size: usize, } impl Model { pub fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let vb_m = vb.pp("model"); let embed_tokens = candle_nn::embedding(cfg.vocab_size, cfg.hidden_size, vb_m.pp("embed_tokens"))?; let rotary_emb = Arc::new(RotaryEmbedding::new(vb.dtype(), cfg, vb_m.device())?); let mut layers = Vec::with_capacity(cfg.num_hidden_layers); let vb_l = vb_m.pp("layers"); for layer_idx in 0..cfg.num_hidden_layers { let layer = DecoderLayer::new(rotary_emb.clone(), cfg, vb_l.pp(layer_idx))?; layers.push(layer) } let norm = RmsNorm::new(cfg.hidden_size, cfg.rms_norm_eps, vb_m.pp("norm"))?; let lm_head = Linear::new(embed_tokens.embeddings().clone(), None); Ok(Self { embed_tokens, layers, norm, lm_head, device: vb.device().clone(), dtype: vb.dtype(), hidden_size: cfg.hidden_size, }) } fn prepare_decoder_attention_mask( &self, b_size: usize, tgt_len: usize, seqlen_offset: usize, ) -> Result<Tensor> { let mask: Vec<_> = (0..tgt_len) .flat_map(|i| (0..tgt_len).map(move |j| if i < j { f32::NEG_INFINITY } else { 0. 
})) .collect(); let mask = Tensor::from_slice(&mask, (tgt_len, tgt_len), &self.device)?; let mask = if seqlen_offset > 0 { let mask0 = Tensor::zeros((tgt_len, seqlen_offset), DType::F32, &self.device)?; Tensor::cat(&[&mask0, &mask], D::Minus1)? } else { mask }; mask.expand((b_size, 1, tgt_len, tgt_len + seqlen_offset))? .to_dtype(self.dtype) } pub fn forward(&mut self, input_ids: &Tensor, seqlen_offset: usize) -> Result<Tensor> { let (b_size, seq_len) = input_ids.dims2()?; let attention_mask = if seq_len <= 1 { None } else { let mask = self.prepare_decoder_attention_mask(b_size, seq_len, seqlen_offset)?; Some(mask) }; let xs = self.embed_tokens.forward(input_ids)?; let mut xs = (xs * (self.hidden_size as f64).sqrt())?; for layer in self.layers.iter_mut() { xs = layer.forward(&xs, attention_mask.as_ref(), seqlen_offset)? } xs.narrow(1, seq_len - 1, 1)? .apply(&self.norm)? .apply(&self.lm_head) } pub fn clear_kv_cache(&mut self) { for layer in self.layers.iter_mut() { layer.clear_kv_cache() } } }
candle/candle-transformers/src/models/gemma.rs/0
{ "file_path": "candle/candle-transformers/src/models/gemma.rs", "repo_id": "candle", "token_count": 6582 }
40
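The gemma `Model` above exposes a stateful decoding API: `forward` takes a `seqlen_offset` so that, after the prompt pass, only the newest token needs to be fed while each `Attention` keeps its KV cache, and `clear_kv_cache` resets that state between prompts. Below is a minimal greedy-decoding driver sketch; it is not part of the source, and the device choice, tokenizer, and sampling strategy are assumptions.

// Hypothetical driver for the gemma `Model` above (not in gemma.rs).
// Assumes `model` was built with `Model::new(&cfg, vb)?` and that
// `prompt: &[u32]` already comes from a tokenizer.
use candle::{DType, Device, Tensor};

fn greedy_generate(model: &mut Model, prompt: &[u32], steps: usize) -> candle::Result<Vec<u32>> {
    let device = Device::Cpu; // use the device the weights were loaded on
    let mut tokens = prompt.to_vec();
    model.clear_kv_cache();
    for step in 0..steps {
        // The first pass feeds the whole prompt, later passes only the last token;
        // the cached keys/values supply the earlier context.
        let ctxt: &[u32] = if step == 0 {
            &tokens[..]
        } else {
            &tokens[tokens.len() - 1..]
        };
        let seqlen_offset = tokens.len() - ctxt.len();
        let input = Tensor::new(ctxt, &device)?.unsqueeze(0)?;
        let logits = model.forward(&input, seqlen_offset)?; // (1, 1, vocab_size)
        let logits = logits.squeeze(0)?.squeeze(0)?.to_dtype(DType::F32)?;
        // Greedy pick; a real sampler would apply temperature / top-p here.
        let v: Vec<f32> = logits.to_vec1()?;
        let next = v
            .iter()
            .enumerate()
            .max_by(|a, b| a.1.total_cmp(b.1))
            .map(|(i, _)| i as u32)
            .unwrap_or(0);
        tokens.push(next);
    }
    Ok(tokens)
}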
use super::quantized_blip_text as blip_text; use crate::quantized_nn::{layer_norm, linear, Linear}; pub use crate::quantized_var_builder::VarBuilder; use candle::{Module, Result, Tensor, D}; use candle_nn::{Conv2d, Conv2dConfig, LayerNorm}; pub type VisionConfig = super::blip::VisionConfig; pub type Config = super::blip::Config; #[derive(Debug, Clone)] struct VisionEmbeddings { class_embedding: Tensor, patch_embedding: Conv2d, position_embedding: Tensor, } impl VisionEmbeddings { fn new(cfg: &VisionConfig, vb: VarBuilder) -> Result<Self> { let class_embedding = vb .get((1, 1, cfg.hidden_size), "class_embedding")? .dequantize(vb.device())?; let conv_cfg = Conv2dConfig { stride: cfg.patch_size, ..Default::default() }; let pe_vb = vb.pp("patch_embedding"); let pe_weight = pe_vb .get( (cfg.hidden_size, 3, cfg.patch_size, cfg.patch_size), "weight", )? .dequantize(vb.device())?; let pe_bias = pe_vb .get(cfg.hidden_size, "bias")? .dequantize(vb.device())?; let patch_embedding = Conv2d::new(pe_weight, Some(pe_bias), conv_cfg); let num_patches1 = cfg.image_size / cfg.patch_size; let num_patches = num_patches1 * num_patches1; let num_positions = num_patches + 1; let position_embedding = vb .get((1, num_positions, cfg.hidden_size), "position_embedding")? .dequantize(vb.device())?; Ok(Self { class_embedding, patch_embedding, position_embedding, }) } } impl Module for VisionEmbeddings { fn forward(&self, xs: &Tensor) -> Result<Tensor> { let target_dtype = xs.dtype(); let b_size = xs.dim(0)?; let patch_embeds = xs.apply(&self.patch_embedding)?.flatten_from(2)?.t()?; let d = self.class_embedding.dim(D::Minus1)?; let class_embeds = self .class_embedding .broadcast_as((b_size, 1, d))? .to_dtype(target_dtype)?; let embeddings = Tensor::cat(&[&class_embeds, &patch_embeds], 1)?; let position_embedding = self.position_embedding.narrow(1, 0, embeddings.dim(1)?)?; embeddings.broadcast_add(&position_embedding) } } #[derive(Debug, Clone)] struct Attention { qkv: Linear, projection: Linear, scale: f64, num_heads: usize, } impl Attention { fn new(cfg: &VisionConfig, vb: VarBuilder) -> Result<Self> { let embed_dim = cfg.hidden_size; let num_heads = cfg.num_attention_heads; let head_dim = embed_dim / num_heads; let scale = 1f64 / (head_dim as f64).sqrt(); let qkv = linear(embed_dim, 3 * embed_dim, vb.pp("qkv"))?; let projection = linear(embed_dim, embed_dim, vb.pp("projection"))?; Ok(Self { qkv, projection, scale, num_heads, }) } fn forward(&self, xs: &Tensor, attn_mask: Option<&Tensor>) -> Result<Tensor> { let (b_sz, tgt_len, embed_dim) = xs.dims3()?; let mixed_qkv = xs .apply(&self.qkv)? .reshape((b_sz, tgt_len, 3, self.num_heads, embed_dim / self.num_heads))? .permute((2, 0, 3, 1, 4))?; let query = mixed_qkv.get(0)?; let key = mixed_qkv.get(1)?; let value = mixed_qkv.get(2)?; let attention_scores = query.matmul(&key.t()?)?; let attention_scores = (attention_scores * self.scale)?; let attention_probs = candle_nn::ops::softmax_last_dim(&attention_scores)?; let attention_probs = match attn_mask { None => attention_probs, Some(attn_mask) => (attention_probs * attn_mask)?, }; attention_probs .matmul(&value)? .permute((0, 2, 1, 3))? .flatten_from(D::Minus2)? 
.apply(&self.projection) } } #[derive(Debug, Clone)] #[allow(clippy::upper_case_acronyms)] struct MLP { activation_fn: candle_nn::Activation, fc1: Linear, fc2: Linear, } impl MLP { fn new(cfg: &VisionConfig, vb: VarBuilder) -> Result<Self> { let fc1 = linear(cfg.hidden_size, cfg.intermediate_size, vb.pp("fc1"))?; let fc2 = linear(cfg.intermediate_size, cfg.hidden_size, vb.pp("fc2"))?; Ok(Self { activation_fn: cfg.hidden_act, fc1, fc2, }) } } impl Module for MLP { fn forward(&self, xs: &Tensor) -> Result<Tensor> { xs.apply(&self.fc1)? .apply(&self.activation_fn)? .apply(&self.fc2) } } #[derive(Debug, Clone)] struct EncoderLayer { self_attn: Attention, layer_norm1: LayerNorm, mlp: MLP, layer_norm2: LayerNorm, } impl EncoderLayer { fn new(cfg: &VisionConfig, vb: VarBuilder) -> Result<Self> { let embed_dim = cfg.hidden_size; let self_attn = Attention::new(cfg, vb.pp("self_attn"))?; let layer_norm1 = layer_norm(embed_dim, cfg.layer_norm_eps, vb.pp("layer_norm1"))?; let layer_norm2 = layer_norm(embed_dim, cfg.layer_norm_eps, vb.pp("layer_norm2"))?; let mlp = MLP::new(cfg, vb.pp("mlp"))?; Ok(Self { self_attn, layer_norm1, mlp, layer_norm2, }) } fn forward(&self, xs: &Tensor, attention_mask: Option<&Tensor>) -> Result<Tensor> { let residual = xs; let xs = xs.apply(&self.layer_norm1)?; let xs = self.self_attn.forward(&xs, attention_mask)?; let xs = (xs + residual)?; let residual = &xs; let xs = xs.apply(&self.layer_norm2)?.apply(&self.mlp)?; xs + residual } } #[derive(Debug, Clone)] struct Encoder { layers: Vec<EncoderLayer>, } impl Encoder { fn new(cfg: &VisionConfig, vb: VarBuilder) -> Result<Self> { let mut layers = Vec::with_capacity(cfg.num_hidden_layers); let vb = vb.pp("layers"); for i in 0..cfg.num_hidden_layers { let layer = EncoderLayer::new(cfg, vb.pp(i))?; layers.push(layer) } Ok(Self { layers }) } fn forward(&self, xs: &Tensor, attention_mask: Option<&Tensor>) -> Result<Tensor> { let mut xs = xs.clone(); for layer in self.layers.iter() { xs = layer.forward(&xs, attention_mask)? } Ok(xs) } } #[derive(Debug, Clone)] pub struct VisionModel { embeddings: VisionEmbeddings, encoder: Encoder, post_layernorm: LayerNorm, } impl VisionModel { fn new(cfg: &VisionConfig, vb: VarBuilder) -> Result<Self> { let embeddings = VisionEmbeddings::new(cfg, vb.pp("embeddings"))?; let encoder = Encoder::new(cfg, vb.pp("encoder"))?; let post_layernorm = layer_norm(cfg.hidden_size, cfg.layer_norm_eps, vb.pp("post_layernorm"))?; Ok(Self { embeddings, encoder, post_layernorm, }) } } impl Module for VisionModel { fn forward(&self, xs: &Tensor) -> Result<Tensor> { let xs = xs.apply(&self.embeddings)?; let encoder_outputs = self.encoder.forward(&xs, None)?; // Return the last hidden state rather than pooled outputs. encoder_outputs.apply(&self.post_layernorm) } } #[derive(Debug, Clone)] pub struct BlipForConditionalGeneration { vision_model: VisionModel, text_decoder: blip_text::TextLMHeadModel, } impl BlipForConditionalGeneration { pub fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let vision_model = VisionModel::new(&cfg.vision_config, vb.pp("vision_model"))?; let text_decoder = blip_text::TextLMHeadModel::new(&cfg.text_config, vb.pp("text_decoder"))?; Ok(Self { vision_model, text_decoder, }) } pub fn vision_model(&self) -> &VisionModel { &self.vision_model } pub fn text_decoder(&mut self) -> &mut blip_text::TextLMHeadModel { &mut self.text_decoder } pub fn reset_kv_cache(&mut self) { self.text_decoder.reset_kv_cache(); } }
candle/candle-transformers/src/models/quantized_blip.rs/0
{ "file_path": "candle/candle-transformers/src/models/quantized_blip.rs", "repo_id": "candle", "token_count": 4013 }
41
use super::with_tracing::{layer_norm, linear_no_bias as linear, LayerNorm, Linear}; use candle::{IndexOp, Result, Tensor}; use candle_nn::{embedding, Embedding, Module, VarBuilder}; pub use crate::models::rwkv_v5::{Config, State, Tokenizer}; #[derive(Debug, Clone)] struct SelfAttention { key: Linear, receptance: Linear, value: Linear, gate: Linear, output: Linear, ln_x: candle_nn::GroupNorm, time_mix_x: Tensor, time_mix_w: Tensor, time_mix_key: Tensor, time_mix_value: Tensor, time_mix_receptance: Tensor, time_decay: Tensor, time_faaaa: Tensor, time_mix_gate: Tensor, time_decay_w1: Tensor, time_decay_w2: Tensor, time_mix_w1: Tensor, time_mix_w2: Tensor, layer_id: usize, n_attn_heads: usize, } impl SelfAttention { fn new(layer_id: usize, cfg: &Config, vb: VarBuilder) -> Result<Self> { let hidden_size = cfg.hidden_size; let attn_hidden_size = cfg.attention_hidden_size; let key = linear(hidden_size, attn_hidden_size, vb.pp("key"))?; let receptance = linear(hidden_size, attn_hidden_size, vb.pp("receptance"))?; let value = linear(hidden_size, attn_hidden_size, vb.pp("value"))?; let gate = linear(hidden_size, attn_hidden_size, vb.pp("gate"))?; let output = linear(attn_hidden_size, hidden_size, vb.pp("output"))?; let ln_x = candle_nn::group_norm( hidden_size / cfg.head_size, hidden_size, 1e-5, vb.pp("ln_x"), )?; let time_mix_x = vb.get((1, 1, cfg.hidden_size), "time_mix_x")?; let time_mix_w = vb.get((1, 1, cfg.hidden_size), "time_mix_w")?; let time_mix_key = vb.get((1, 1, cfg.hidden_size), "time_mix_key")?; let time_mix_value = vb.get((1, 1, cfg.hidden_size), "time_mix_value")?; let time_mix_receptance = vb.get((1, 1, cfg.hidden_size), "time_mix_receptance")?; let n_attn_heads = cfg.hidden_size / cfg.head_size; let time_decay = vb.get((1, 1, cfg.hidden_size), "time_decay")?; let time_faaaa = vb.get((n_attn_heads, cfg.head_size), "time_faaaa")?; let time_mix_gate = vb.get((1, 1, cfg.hidden_size), "time_mix_gate")?; let time_decay_w1 = vb.get((cfg.hidden_size, n_attn_heads * 2), "time_decay_w1")?; let time_decay_w2 = vb.get((n_attn_heads * 2, cfg.hidden_size), "time_decay_w2")?; let time_mix_w1 = vb.get((cfg.hidden_size, n_attn_heads * 5), "time_mix_w1")?; let time_mix_w2 = vb.get((5, n_attn_heads, cfg.hidden_size), "time_mix_w2")?; Ok(Self { key, value, receptance, gate, output, ln_x, time_mix_x, time_mix_w, time_mix_key, time_mix_value, time_mix_receptance, time_decay, time_faaaa, time_mix_gate, time_decay_w1, time_decay_w2, time_mix_w1, time_mix_w2, layer_id, n_attn_heads, }) } pub fn forward(&self, xs: &Tensor, state: &mut State) -> Result<Tensor> { let h = self.n_attn_heads; let (b, t, s) = xs.dims3()?; let s = s / h; let (receptance, key, value, gate, w) = { // extract key-value let shifted = state.per_layer[self.layer_id].extract_key_value.clone(); let shifted = if shifted.rank() == 2 { shifted.unsqueeze(1)? } else { shifted }; let sx = (&shifted - xs)?; let xxx = (xs + &sx * &self.time_mix_x)?; let xxx = xxx .broadcast_matmul(&self.time_mix_w1)? .tanh()? .reshape((b * t, 5, ()))? .transpose(0, 1)?; let xxx = xxx.matmul(&self.time_mix_w2)?.reshape((5, b, t, ()))?; let (mw, mk, mv, mr, mg) = (xxx.i(0)?, xxx.i(1)?, xxx.i(2)?, xxx.i(3)?, xxx.i(4)?); let xw = (xs + &sx * (&self.time_mix_w + &mw)?)?; let xk = (xs + &sx * (&self.time_mix_key + &mk)?)?; let xv = (xs + &sx * (&self.time_mix_value + &mv)?)?; let xr = (xs + &sx * (&self.time_mix_receptance + &mr)?)?; let xg = (xs + &sx * (&self.time_mix_gate + &mg)?)?; let w = (&self.time_decay + xw.broadcast_matmul(&self.time_decay_w1)? .tanh()? 
.broadcast_matmul(&self.time_decay_w2)?)? .reshape(((), 1, 1))? .reshape((self.n_attn_heads, (), 1))?; let key = self.key.forward(&xk)?; let value = self.value.forward(&xv)?; let receptance = self.receptance.forward(&xr)?; let gate = candle_nn::ops::silu(&self.gate.forward(&xg)?)?; state.per_layer[self.layer_id].extract_key_value = xs.i((.., t - 1))?; (receptance, key, value, gate, w) }; // linear attention let mut state_ = state.per_layer[self.layer_id].linear_attention.clone(); let key = key.reshape((b, t, h, s))?.permute((0, 2, 3, 1))?; let value = value.reshape((b, t, h, s))?.transpose(1, 2)?; let receptance = receptance.reshape((b, t, h, s))?.transpose(1, 2)?; let w = w.exp()?.neg()?.exp()?; let time_faaaa = self.time_faaaa .reshape(((), 1, 1))? .reshape((self.n_attn_heads, (), 1))?; let mut out: Vec<Tensor> = Vec::with_capacity(t); for t_ in 0..t { let rt = receptance.i((.., .., t_..t_ + 1))?.contiguous()?; let kt = key.i((.., .., .., t_..t_ + 1))?.contiguous()?; let vt = value.i((.., .., t_..t_ + 1))?.contiguous()?; let at = kt.matmul(&vt)?; let rhs = (time_faaaa.broadcast_mul(&at)? + &state_)?; let out_ = rt.matmul(&rhs)?.squeeze(2)?; state_ = (&at + w.broadcast_mul(&state_))?; out.push(out_) } let out = Tensor::cat(&out, 1)?.reshape((b * t, h * s, 1))?; let out = out.apply(&self.ln_x)?.reshape((b, t, h * s))?; let out = (out * gate)?.apply(&self.output)?; state.per_layer[self.layer_id].linear_attention = state_; Ok(out) } } #[derive(Debug, Clone)] struct FeedForward { time_mix_key: Tensor, time_mix_receptance: Tensor, key: Linear, receptance: Linear, value: Linear, layer_id: usize, } impl FeedForward { fn new(layer_id: usize, cfg: &Config, vb: VarBuilder) -> Result<Self> { let int_size = cfg .intermediate_size .unwrap_or(((cfg.hidden_size as f64 * 3.5) as usize) / 32 * 32); let key = linear(cfg.hidden_size, int_size, vb.pp("key"))?; let receptance = linear(cfg.hidden_size, cfg.hidden_size, vb.pp("receptance"))?; let value = linear(int_size, cfg.hidden_size, vb.pp("value"))?; let time_mix_key = vb.get((1, 1, cfg.hidden_size), "time_mix_key")?; let time_mix_receptance = vb.get((1, 1, cfg.hidden_size), "time_mix_receptance")?; Ok(Self { key, receptance, value, time_mix_key, time_mix_receptance, layer_id, }) } fn forward(&self, xs: &Tensor, state: &mut State) -> Result<Tensor> { let shifted = state.per_layer[self.layer_id] .feed_forward .broadcast_sub(xs)?; let key = (xs + shifted.broadcast_mul(&self.time_mix_key)?)?; let receptance = (xs + shifted.broadcast_mul(&self.time_mix_receptance)?)?; let key = key.apply(&self.key)?.relu()?.sqr()?; let value = key.apply(&self.value)?; let receptance = candle_nn::ops::sigmoid(&receptance.apply(&self.receptance)?)?; state.per_layer[self.layer_id].feed_forward = xs.i((.., xs.dim(1)? 
- 1))?; let xs = (receptance * value)?; Ok(xs) } } #[derive(Debug, Clone)] struct Block { pre_ln: Option<LayerNorm>, ln1: LayerNorm, ln2: LayerNorm, attention: SelfAttention, feed_forward: FeedForward, } impl Block { fn new(layer_id: usize, cfg: &Config, vb: VarBuilder) -> Result<Self> { let ln1 = layer_norm(cfg.hidden_size, cfg.layer_norm_epsilon, vb.pp("ln1"))?; let ln2 = layer_norm(cfg.hidden_size, cfg.layer_norm_epsilon, vb.pp("ln2"))?; let pre_ln = if layer_id == 0 { let ln = layer_norm(cfg.hidden_size, cfg.layer_norm_epsilon, vb.pp("pre_ln"))?; Some(ln) } else { None }; let attention = SelfAttention::new(layer_id, cfg, vb.pp("attention"))?; let feed_forward = FeedForward::new(layer_id, cfg, vb.pp("feed_forward"))?; Ok(Self { pre_ln, ln1, ln2, attention, feed_forward, }) } fn forward(&self, xs: &Tensor, state: &mut State) -> Result<Tensor> { let xs = match self.pre_ln.as_ref() { None => xs.clone(), Some(pre_ln) => xs.apply(pre_ln)?, }; let attention = self.attention.forward(&xs.apply(&self.ln1)?, state)?; let xs = (xs + attention)?; let feed_forward = self.feed_forward.forward(&xs.apply(&self.ln2)?, state)?; let xs = (xs + feed_forward)?; Ok(xs) } } #[derive(Debug, Clone)] pub struct Model { embeddings: Embedding, blocks: Vec<Block>, ln_out: LayerNorm, head: Linear, rescale_every: usize, layers_are_rescaled: bool, } impl Model { pub fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let vb_m = vb.pp("rwkv"); let embeddings = embedding(cfg.vocab_size, cfg.hidden_size, vb_m.pp("embeddings"))?; let mut blocks = Vec::with_capacity(cfg.num_hidden_layers); let vb_b = vb_m.pp("blocks"); for block_index in 0..cfg.num_hidden_layers { let block = Block::new(block_index, cfg, vb_b.pp(block_index))?; blocks.push(block) } let ln_out = layer_norm(cfg.hidden_size, 1e-5, vb_m.pp("ln_out"))?; let head = linear(cfg.hidden_size, cfg.vocab_size, vb.pp("head"))?; Ok(Self { embeddings, blocks, ln_out, head, rescale_every: cfg.rescale_every, layers_are_rescaled: false, // This seem to only happen for the f16/bf16 dtypes. }) } pub fn forward(&self, xs: &Tensor, state: &mut State) -> Result<Tensor> { let (_b_size, _seq_len) = xs.dims2()?; let mut xs = xs.apply(&self.embeddings)?; for (block_idx, block) in self.blocks.iter().enumerate() { xs = block.forward(&xs, state)?; if self.layers_are_rescaled && (block_idx + 1) % self.rescale_every == 0 { xs = (xs / 2.)? } } let xs = xs.apply(&self.ln_out)?.apply(&self.head)?; state.pos += 1; Ok(xs) } }
candle/candle-transformers/src/models/rwkv_v6.rs/0
{ "file_path": "candle/candle-transformers/src/models/rwkv_v6.rs", "repo_id": "candle", "token_count": 5859 }
42
//! ResNet Building Blocks
//!
//! Some Residual Network blocks used in UNet models.
//!
//! Deep Residual Learning for Image Recognition, K. He et al., 2015.
//! https://arxiv.org/abs/1512.03385
use crate::models::with_tracing::{conv2d, Conv2d};
use candle::{Result, Tensor, D};
use candle_nn as nn;
use candle_nn::Module;

/// Configuration for a ResNet block.
#[derive(Debug, Clone, Copy)]
pub struct ResnetBlock2DConfig {
    /// The number of output channels, defaults to the number of input channels.
    pub out_channels: Option<usize>,
    pub temb_channels: Option<usize>,
    /// The number of groups to use in group normalization.
    pub groups: usize,
    pub groups_out: Option<usize>,
    /// The epsilon to be used in the group normalization operations.
    pub eps: f64,
    /// Whether to use a 2D convolution in the skip connection. When using None,
    /// such a convolution is used if the number of input channels is different from
    /// the number of output channels.
    pub use_in_shortcut: Option<bool>,
    // non_linearity: silu
    /// The final output is scaled by dividing by this value.
    pub output_scale_factor: f64,
}

impl Default for ResnetBlock2DConfig {
    fn default() -> Self {
        Self {
            out_channels: None,
            temb_channels: Some(512),
            groups: 32,
            groups_out: None,
            eps: 1e-6,
            use_in_shortcut: None,
            output_scale_factor: 1.,
        }
    }
}

#[derive(Debug)]
pub struct ResnetBlock2D {
    norm1: nn::GroupNorm,
    conv1: Conv2d,
    norm2: nn::GroupNorm,
    conv2: Conv2d,
    time_emb_proj: Option<nn::Linear>,
    conv_shortcut: Option<Conv2d>,
    span: tracing::Span,
    config: ResnetBlock2DConfig,
}

impl ResnetBlock2D {
    pub fn new(
        vs: nn::VarBuilder,
        in_channels: usize,
        config: ResnetBlock2DConfig,
    ) -> Result<Self> {
        let out_channels = config.out_channels.unwrap_or(in_channels);
        let conv_cfg = nn::Conv2dConfig {
            stride: 1,
            padding: 1,
            groups: 1,
            dilation: 1,
        };
        let norm1 = nn::group_norm(config.groups, in_channels, config.eps, vs.pp("norm1"))?;
        let conv1 = conv2d(in_channels, out_channels, 3, conv_cfg, vs.pp("conv1"))?;
        let groups_out = config.groups_out.unwrap_or(config.groups);
        let norm2 = nn::group_norm(groups_out, out_channels, config.eps, vs.pp("norm2"))?;
        let conv2 = conv2d(out_channels, out_channels, 3, conv_cfg, vs.pp("conv2"))?;
        let use_in_shortcut = config
            .use_in_shortcut
            .unwrap_or(in_channels != out_channels);
        let conv_shortcut = if use_in_shortcut {
            let conv_cfg = nn::Conv2dConfig {
                stride: 1,
                padding: 0,
                groups: 1,
                dilation: 1,
            };
            Some(conv2d(
                in_channels,
                out_channels,
                1,
                conv_cfg,
                vs.pp("conv_shortcut"),
            )?)
        } else {
            None
        };
        let time_emb_proj = match config.temb_channels {
            None => None,
            Some(temb_channels) => Some(nn::linear(
                temb_channels,
                out_channels,
                vs.pp("time_emb_proj"),
            )?),
        };
        let span = tracing::span!(tracing::Level::TRACE, "resnet2d");
        Ok(Self {
            norm1,
            conv1,
            norm2,
            conv2,
            time_emb_proj,
            span,
            config,
            conv_shortcut,
        })
    }

    pub fn forward(&self, xs: &Tensor, temb: Option<&Tensor>) -> Result<Tensor> {
        let _enter = self.span.enter();
        let shortcut_xs = match &self.conv_shortcut {
            Some(conv_shortcut) => conv_shortcut.forward(xs)?,
            None => xs.clone(),
        };
        let xs = self.norm1.forward(xs)?;
        let xs = self.conv1.forward(&nn::ops::silu(&xs)?)?;
        let xs = match (temb, &self.time_emb_proj) {
            (Some(temb), Some(time_emb_proj)) => time_emb_proj
                .forward(&nn::ops::silu(temb)?)?
                .unsqueeze(D::Minus1)?
                .unsqueeze(D::Minus1)?
                .broadcast_add(&xs)?,
            _ => xs,
        };
        let xs = self
            .conv2
            .forward(&nn::ops::silu(&self.norm2.forward(&xs)?)?)?;
        (shortcut_xs + xs)? / self.config.output_scale_factor
    }
}
candle/candle-transformers/src/models/stable_diffusion/resnet.rs/0
{ "file_path": "candle/candle-transformers/src/models/stable_diffusion/resnet.rs", "repo_id": "candle", "token_count": 2284 }
43
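A `ResnetBlock2D` is built from a `VarBuilder` rooted at the block's weight prefix and then called with the feature map plus an optional time embedding. The sketch below is illustrative only; `vs`, `xs`, `temb`, the weight prefix, and the channel sizes are assumptions taken from typical UNet wiring, not from this file.

// Hypothetical usage of ResnetBlock2D (not part of resnet.rs).
// `vs` is a candle_nn::VarBuilder, `xs` has shape (batch, 320, h, w),
// `temb` has shape (batch, 1280).
let cfg = ResnetBlock2DConfig {
    out_channels: Some(320),
    temb_channels: Some(1280),
    ..Default::default()
};
let block = ResnetBlock2D::new(vs.pp("resnets.0"), 320, cfg)?;
let ys = block.forward(&xs, Some(&temb))?;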
use candle::{Module, Result, Tensor}; use candle_nn::VarBuilder; #[derive(Debug, Clone)] pub struct Embedding { inner: candle_nn::Embedding, span: tracing::Span, } impl Embedding { pub fn new(d1: usize, d2: usize, vb: VarBuilder) -> Result<Self> { let inner = candle_nn::embedding(d1, d2, vb)?; let span = tracing::span!(tracing::Level::TRACE, "embedding"); Ok(Self { inner, span }) } pub fn from_weights(weights: Tensor) -> Result<Self> { let (_in_size, out_size) = weights.dims2()?; let inner = candle_nn::Embedding::new(weights, out_size); let span = tracing::span!(tracing::Level::TRACE, "embedding"); Ok(Self { inner, span }) } pub fn embeddings(&self) -> &Tensor { self.inner.embeddings() } } impl Module for Embedding { fn forward(&self, xs: &Tensor) -> Result<Tensor> { let _enter = self.span.enter(); self.inner.forward(xs) } } #[derive(Debug, Clone)] pub struct Linear { inner: candle_nn::Linear, span: tracing::Span, } impl Linear { pub fn from_weights(weights: Tensor, bias: Option<Tensor>) -> Self { let inner = candle_nn::Linear::new(weights, bias); let span = tracing::span!(tracing::Level::TRACE, "linear"); Self { inner, span } } } pub fn linear_b(d1: usize, d2: usize, b: bool, vb: VarBuilder) -> Result<Linear> { let inner = candle_nn::linear_b(d1, d2, b, vb)?; let span = tracing::span!(tracing::Level::TRACE, "linear"); Ok(Linear { inner, span }) } pub fn linear(d1: usize, d2: usize, vb: VarBuilder) -> Result<Linear> { let inner = candle_nn::linear(d1, d2, vb)?; let span = tracing::span!(tracing::Level::TRACE, "linear"); Ok(Linear { inner, span }) } pub fn linear_no_bias(d1: usize, d2: usize, vb: VarBuilder) -> Result<Linear> { let inner = candle_nn::linear_no_bias(d1, d2, vb)?; let span = tracing::span!(tracing::Level::TRACE, "linear"); Ok(Linear { inner, span }) } impl Module for Linear { fn forward(&self, xs: &Tensor) -> Result<Tensor> { let _enter = self.span.enter(); self.inner.forward(xs) } } // Wrap the conv2d op to provide some tracing. #[derive(Debug, Clone)] pub struct Conv2d { inner: candle_nn::Conv2d, span: tracing::Span, } impl Module for Conv2d { fn forward(&self, x: &Tensor) -> Result<Tensor> { let _enter = self.span.enter(); self.inner.forward(x) } } pub fn conv2d( in_channels: usize, out_channels: usize, kernel_size: usize, cfg: candle_nn::Conv2dConfig, vs: candle_nn::VarBuilder, ) -> Result<Conv2d> { let span = tracing::span!(tracing::Level::TRACE, "conv2d"); let inner = candle_nn::conv2d(in_channels, out_channels, kernel_size, cfg, vs)?; Ok(Conv2d { inner, span }) } // QMatMul wrapper adding some tracing. 
#[derive(Clone)] pub struct QMatMul { inner: candle::quantized::QMatMul, span: tracing::Span, } impl QMatMul { pub fn new( out_dim: usize, in_dim: usize, vb: crate::quantized_var_builder::VarBuilder, ) -> Result<Self> { let ws = vb.get((in_dim, out_dim), "weight")?; let inner = candle::quantized::QMatMul::from_arc(ws)?; let span = tracing::span!(tracing::Level::TRACE, "qmatmul"); Ok(Self { inner, span }) } pub fn from_weights(ws: std::sync::Arc<candle::quantized::QTensor>) -> Result<Self> { let inner = candle::quantized::QMatMul::from_arc(ws)?; let span = tracing::span!(tracing::Level::TRACE, "qmatmul"); Ok(Self { inner, span }) } } impl Module for QMatMul { fn forward(&self, xs: &Tensor) -> Result<Tensor> { let _enter = self.span.enter(); self.inner.forward(xs) } } impl std::fmt::Debug for QMatMul { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { write!(f, "QMatMul") } } #[derive(Clone, Debug)] pub struct LayerNorm { inner: candle_nn::LayerNorm, span: tracing::Span, } impl LayerNorm { pub fn new(weight: Tensor, bias: Tensor, eps: f64) -> Self { let inner = candle_nn::LayerNorm::new(weight, bias, eps); let span = tracing::span!(tracing::Level::TRACE, "layer-norm"); Self { inner, span } } } impl Module for LayerNorm { fn forward(&self, xs: &Tensor) -> Result<Tensor> { let _enter = self.span.enter(); self.inner.forward(xs) } } pub fn layer_norm<C: Into<candle_nn::LayerNormConfig>>( size: usize, c: C, vb: VarBuilder, ) -> Result<LayerNorm> { let inner = candle_nn::layer_norm(size, c, vb)?; let span = tracing::span!(tracing::Level::TRACE, "layer-norm"); Ok(LayerNorm { inner, span }) } #[derive(Debug, Clone)] pub struct RmsNorm { inner: candle_nn::RmsNorm, span: tracing::Span, } impl RmsNorm { pub fn new(size: usize, eps: f64, vb: VarBuilder) -> Result<Self> { let span = tracing::span!(tracing::Level::TRACE, "rms-norm"); let inner = candle_nn::rms_norm(size, eps, vb)?; Ok(Self { inner, span }) } } impl Module for RmsNorm { fn forward(&self, x: &Tensor) -> Result<Tensor> { let _enter = self.span.enter(); self.inner.forward(x) } }
candle/candle-transformers/src/models/with_tracing.rs/0
{ "file_path": "candle/candle-transformers/src/models/with_tracing.rs", "repo_id": "candle", "token_count": 2315 }
44
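The wrappers above only emit `tracing` spans; they become useful once a subscriber is installed in the binary that drives the model. One possible setup (an assumption, not shown in this file) pairs `tracing-chrome` with `tracing-subscriber` so the spans end up in a trace file viewable in chrome://tracing or Perfetto:

// Sketch of a subscriber setup that records the spans emitted by the
// wrappers above; assumes the `tracing-chrome` and `tracing-subscriber`
// crates are listed as dependencies.
use tracing_chrome::ChromeLayerBuilder;
use tracing_subscriber::prelude::*;

fn install_tracing() -> tracing_chrome::FlushGuard {
    let (chrome_layer, guard) = ChromeLayerBuilder::new().build();
    tracing_subscriber::registry().with(chrome_layer).init();
    // Keep the guard alive for the duration of the run so the trace is flushed.
    guard
}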
[package] name = "candle-wasm-example-bert" version.workspace = true edition.workspace = true description.workspace = true repository.workspace = true keywords.workspace = true categories.workspace = true license.workspace = true [dependencies] candle = { workspace = true } candle-nn = { workspace = true } candle-transformers = { workspace = true } num-traits = { workspace = true } tokenizers = { workspace = true, features = ["unstable_wasm"] } # App crates. anyhow = { workspace = true } byteorder = { workspace = true } log = { workspace = true } rand = { workspace = true } serde = { workspace = true } serde_json = { workspace = true } safetensors = { workspace = true } # Wasm specific crates. console_error_panic_hook = "0.1.7" getrandom = { version = "0.2", features = ["js"] } gloo = "0.11" js-sys = "0.3.64" wasm-bindgen = "0.2.87" serde-wasm-bindgen = "0.6.0"
candle/candle-wasm-examples/bert/Cargo.toml/0
{ "file_path": "candle/candle-wasm-examples/bert/Cargo.toml", "repo_id": "candle", "token_count": 304 }
45
[package] name = "candle-wasm-example-llama2" version.workspace = true edition.workspace = true description.workspace = true repository.workspace = true keywords.workspace = true categories.workspace = true license.workspace = true [dependencies] candle = { workspace = true } candle-nn = { workspace = true } candle-transformers = { workspace = true } num-traits = { workspace = true } tokenizers = { workspace = true, features = ["unstable_wasm"] } # App crates. anyhow = { workspace = true } byteorder = { workspace = true } log = { workspace = true } rand = { workspace = true } serde = { workspace = true } serde_json = { workspace = true } # Wasm specific crates. console_error_panic_hook = "0.1.7" getrandom = { version = "0.2", features = ["js"] } gloo = "0.11" js-sys = "0.3.64" wasm-bindgen = "0.2.87" wasm-bindgen-futures = "0.4.37" wasm-logger = "0.2" yew-agent = "0.2.0" yew = { version = "0.20.0", features = ["csr"] } [dependencies.web-sys] version = "0.3.64" features = [ 'Blob', 'Document', 'Element', 'HtmlElement', 'Node', 'Window', 'Request', 'RequestCache', 'RequestInit', 'RequestMode', 'Response', 'Performance', ]
candle/candle-wasm-examples/llama2-c/Cargo.toml/0
{ "file_path": "candle/candle-wasm-examples/llama2-c/Cargo.toml", "repo_id": "candle", "token_count": 434 }
46
<html> <head> <meta content="text/html;charset=utf-8" http-equiv="Content-Type" /> <title>Candle Phi 1.5 / Phi 2.0 Rust/WASM</title> </head> <body></body> </html> <!DOCTYPE html> <html> <head> <meta charset="UTF-8" /> <meta name="viewport" content="width=device-width, initial-scale=1.0" /> <link rel="stylesheet" href="https://cdn.jsdelivr.net/gh/highlightjs/[email protected]/build/styles/default.min.css" /> <style> @import url("https://fonts.googleapis.com/css2?family=Source+Code+Pro:wght@200;300;400&family=Source+Sans+3:wght@100;200;300;400;500;600;700;800;900&display=swap"); html, body { font-family: "Source Sans 3", sans-serif; } code, output, select, pre { font-family: "Source Code Pro", monospace; } </style> <style type="text/tailwindcss"> .link { @apply underline hover:text-blue-500 hover:no-underline; } </style> <script src="https://cdn.tailwindcss.com"></script> <script type="module"> import snarkdown from "https://cdn.skypack.dev/snarkdown"; import hljs from "https://cdn.skypack.dev/highlight.js"; // models base url const MODELS = { phi_1_5_q4k: { base_url: "https://huggingface.co/lmz/candle-quantized-phi/resolve/main/", model: "model-q4k.gguf", tokenizer: "tokenizer.json", config: "phi-1_5.json", quantized: true, seq_len: 2048, size: "800 MB", }, phi_1_5_q80: { base_url: "https://huggingface.co/lmz/candle-quantized-phi/resolve/main/", model: "model-q80.gguf", tokenizer: "tokenizer.json", config: "phi-1_5.json", quantized: true, seq_len: 2048, size: "1.51 GB", }, phi_2_0_q4k: { base_url: "https://huggingface.co/radames/phi-2-quantized/resolve/main/", model: [ "model-v2-q4k.gguf_aa.part", "model-v2-q4k.gguf_ab.part", "model-v2-q4k.gguf_ac.part", ], tokenizer: "tokenizer.json", config: "config.json", quantized: true, seq_len: 2048, size: "1.57GB", }, puffin_phi_v2_q4k: { base_url: "https://huggingface.co/lmz/candle-quantized-phi/resolve/main/", model: "model-puffin-phi-v2-q4k.gguf", tokenizer: "tokenizer-puffin-phi-v2.json", config: "puffin-phi-v2.json", quantized: true, seq_len: 2048, size: "798 MB", }, puffin_phi_v2_q80: { base_url: "https://huggingface.co/lmz/candle-quantized-phi/resolve/main/", model: "model-puffin-phi-v2-q80.gguf", tokenizer: "tokenizer-puffin-phi-v2.json", config: "puffin-phi-v2.json", quantized: true, seq_len: 2048, size: "1.50 GB", }, }; const TEMPLATES = [ { title: "Simple prompt", prompt: `Sebastien is in London today, it’s the middle of July yet it’s raining, so Sebastien is feeling gloomy. He`, }, { title: "Think step by step", prompt: `Suppose Alice originally had 3 apples, then Bob gave Alice 7 apples, then Alice gave Cook 5 apples, and then Tim gave Alice 3x the amount of apples Alice had. How many apples does Alice have now? Let’s think step by step.`, }, { title: "Explaing a code snippet", prompt: `What does this script do? \`\`\`python s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.bind(('', 0)) s.listen(1) conn, addr = s.accept() print('Connected by', addr) return conn.getsockname()[1] \`\`\` Let’s think step by step.`, }, { title: "Question answering", prompt: `Instruct: What is the capital of France? Output:`, }, { title: "Chat mode", prompt: `Alice: Can you tell me how to create a python application to go through all the files in one directory where the file’s name DOES NOT end with '.json'? 
Bob:`, }, { title: "Python code completion", prompt: `"""write a python function called batch(function, list) which call function(x) for x in list in parallel""" Solution:`, }, { title: "Python Sample", prompt: `"""Can you make sure those histograms appear side by side on the same plot: \`\`\`python plt.hist(intreps_retrained[0][1].view(64,-1).norm(dim=1).detach().cpu().numpy(), bins = 20) plt.hist(intreps_pretrained[0][1].view(64,-1).norm(dim=1).detach().cpu().numpy(), bins = 20) \`\`\` """`, }, { title: "Write a Twitter post", prompt: `Write a twitter post for the discovery of gravitational wave. Twitter Post:`, }, { title: "Write a review", prompt: `Write a polite review complaining that the video game 'Random Game' was too badly optimized and it burned my laptop. Very polite review:`, }, ]; const phiWorker = new Worker("./phiWorker.js", { type: "module", }); async function generateSequence(controller) { const getValue = (id) => document.querySelector(`#${id}`).value; const modelID = getValue("model"); const model = MODELS[modelID]; const weightsURL = model.model instanceof Array ? model.model.map((m) => model.base_url + m) : model.base_url + model.model; const tokenizerURL = model.base_url + model.tokenizer; const configURL = model.base_url + model.config; const prompt = getValue("prompt").trim(); const temperature = getValue("temperature"); const topP = getValue("top-p"); const repeatPenalty = getValue("repeat_penalty"); const seed = getValue("seed"); const maxSeqLen = getValue("max-seq"); function updateStatus(data) { const outStatus = document.querySelector("#output-status"); const outGen = document.querySelector("#output-generation"); const outCounter = document.querySelector("#output-counter"); switch (data.status) { case "loading": outStatus.hidden = false; outStatus.textContent = data.message; outGen.hidden = true; outCounter.hidden = true; break; case "generating": const { message, prompt, sentence, tokensSec, totalTime } = data; outStatus.hidden = true; outCounter.hidden = false; outGen.hidden = false; outGen.innerHTML = snarkdown(prompt + sentence); outCounter.innerHTML = `${(totalTime / 1000).toFixed( 2 )}s (${tokensSec.toFixed(2)} tok/s)`; hljs.highlightAll(); break; case "complete": outStatus.hidden = true; outGen.hidden = false; break; } } return new Promise((resolve, reject) => { phiWorker.postMessage({ weightsURL, modelID, tokenizerURL, configURL, quantized: model.quantized, prompt, temp: temperature, top_p: topP, repeatPenalty, seed: seed, maxSeqLen, command: "start", }); const handleAbort = () => { phiWorker.postMessage({ command: "abort" }); }; const handleMessage = (event) => { const { status, error, message, prompt, sentence } = event.data; if (status) updateStatus(event.data); if (error) { phiWorker.removeEventListener("message", handleMessage); reject(new Error(error)); } if (status === "aborted") { phiWorker.removeEventListener("message", handleMessage); resolve(event.data); } if (status === "complete") { phiWorker.removeEventListener("message", handleMessage); resolve(event.data); } }; controller.signal.addEventListener("abort", handleAbort); phiWorker.addEventListener("message", handleMessage); }); } const form = document.querySelector("#form"); const prompt = document.querySelector("#prompt"); const clearBtn = document.querySelector("#clear-btn"); const runBtn = document.querySelector("#run"); const modelSelect = document.querySelector("#model"); const promptTemplates = document.querySelector("#prompt-templates"); let runController = new AbortController(); 
let isRunning = false; document.addEventListener("DOMContentLoaded", () => { for (const [id, model] of Object.entries(MODELS)) { const option = document.createElement("option"); option.value = id; option.innerText = `${id} (${model.size})`; modelSelect.appendChild(option); } const query = new URLSearchParams(window.location.search); const modelID = query.get("model"); if (modelID) { modelSelect.value = modelID; } else { modelSelect.value = "phi_1_5_q4k"; } for (const [i, { title, prompt }] of TEMPLATES.entries()) { const div = document.createElement("div"); const input = document.createElement("input"); input.type = "radio"; input.name = "task"; input.id = `templates-${i}`; input.classList.add("font-light", "cursor-pointer"); input.value = prompt; const label = document.createElement("label"); label.htmlFor = `templates-${i}`; label.classList.add("cursor-pointer"); label.innerText = title; div.appendChild(input); div.appendChild(label); promptTemplates.appendChild(div); } }); promptTemplates.addEventListener("change", (e) => { const template = e.target.value; prompt.value = template; prompt.style.height = "auto"; prompt.style.height = prompt.scrollHeight + "px"; runBtn.disabled = false; clearBtn.classList.remove("invisible"); }); modelSelect.addEventListener("change", (e) => { const query = new URLSearchParams(window.location.search); query.set("model", e.target.value); window.history.replaceState( {}, "", `${window.location.pathname}?${query}` ); window.parent.postMessage({ queryString: "?" + query }, "*"); const model = MODELS[e.target.value]; document.querySelector("#max-seq").max = model.seq_len; document.querySelector("#max-seq").nextElementSibling.value = 200; }); form.addEventListener("submit", async (e) => { e.preventDefault(); if (isRunning) { stopRunning(); } else { startRunning(); await generateSequence(runController); stopRunning(); } }); function startRunning() { isRunning = true; runBtn.textContent = "Stop"; } function stopRunning() { runController.abort(); runController = new AbortController(); runBtn.textContent = "Run"; isRunning = false; } clearBtn.addEventListener("click", (e) => { e.preventDefault(); prompt.value = ""; clearBtn.classList.add("invisible"); runBtn.disabled = true; stopRunning(); }); prompt.addEventListener("input", (e) => { runBtn.disabled = false; if (e.target.value.length > 0) { clearBtn.classList.remove("invisible"); } else { clearBtn.classList.add("invisible"); } }); </script> </head> <body class="container max-w-4xl mx-auto p-4 text-gray-800"> <main class="grid grid-cols-1 gap-8 relative"> <span class="absolute text-5xl -ml-[1em]"> 🕯️ </span> <div> <h1 class="text-5xl font-bold">Candle Phi 1.5 / Phi 2.0</h1> <h2 class="text-2xl font-bold">Rust/WASM Demo</h2> <p class="max-w-lg"> The <a href="https://huggingface.co/microsoft/phi-1_5" class="link" target="_blank" >Phi-1.5</a > and <a href="https://huggingface.co/microsoft/phi-2" class="link" target="_blank" >Phi-2</a > models achieve state-of-the-art performance with only 1.3 billion and 2.7 billion parameters, compared to larger models with up to 13 billion parameters. Here you can try the quantized versions. Additional prompt examples are available in the <a href="https://arxiv.org/pdf/2309.05463.pdf#page=8" class="link" target="_blank" > technical report </a >. 
</p> <p class="max-w-lg"> You can also try <a href="https://huggingface.co/teknium/Puffin-Phi-v2" class="link" target="_blank" >Puffin-Phi V2 </a> quantized version, a fine-tuned version of Phi-1.5 on the <a href="https://huggingface.co/datasets/LDJnr/Puffin" class="link" target="_blank" >Puffin dataset </a> </p> </div> <div> <p class="text-xs italic max-w-lg"> <b>Note:</b> When first run, the app will download and cache the model, which could take a few minutes. The models are <b>~800MB</b> or <b>~1.57GB</b> in size. </p> </div> <div> <label for="model" class="font-medium">Models Options: </label> <select id="model" class="border-2 border-gray-500 rounded-md font-light" ></select> </div> <div> <details> <summary class="font-medium cursor-pointer">Prompt Templates</summary> <form id="prompt-templates" class="grid grid-cols-1 sm:grid-cols-2 gap-1 my-2" ></form> </details> </div> <form id="form" class="flex text-normal px-1 py-1 border border-gray-700 rounded-md items-center" > <input type="submit" hidden /> <textarea type="text" id="prompt" class="font-light text-lg w-full px-3 py-2 mx-1 resize-none outline-none" oninput="this.style.height = 0;this.style.height = this.scrollHeight + 'px'" placeholder="Add your prompt here..." > Instruct: Write a detailed analogy between mathematics and a lighthouse. Output:</textarea > <button id="clear-btn"> <svg fill="none" xmlns="http://www.w3.org/2000/svg" width="40" viewBox="0 0 70 40" > <path opacity=".5" d="M39 .2v40.2" stroke="#1F2937" /> <path d="M1.5 11.5 19 29.1m0-17.6L1.5 29.1" opacity=".5" stroke="#1F2937" stroke-width="2" /> </svg> </button> <button id="run" class="bg-gray-700 hover:bg-gray-800 text-white font-normal py-2 w-16 rounded disabled:bg-gray-300 disabled:cursor-not-allowed" > Run </button> </form> <details> <summary class="font-medium cursor-pointer">Advanced Options</summary> <div class="grid grid-cols-3 max-w-md items-center gap-3 py-3"> <label class="text-sm font-medium" for="max-seq" >Maximum length </label> <input type="range" id="max-seq" name="max-seq" min="1" max="2048" step="1" value="200" oninput="this.nextElementSibling.value = Number(this.value)" /> <output class="text-xs w-[50px] text-center font-light px-1 py-1 border border-gray-700 rounded-md" > 200</output > <label class="text-sm font-medium" for="temperature" >Temperature</label > <input type="range" id="temperature" name="temperature" min="0" max="2" step="0.01" value="0.00" oninput="this.nextElementSibling.value = Number(this.value).toFixed(2)" /> <output class="text-xs w-[50px] text-center font-light px-1 py-1 border border-gray-700 rounded-md" > 0.00</output > <label class="text-sm font-medium" for="top-p">Top-p</label> <input type="range" id="top-p" name="top-p" min="0" max="1" step="0.01" value="1.00" oninput="this.nextElementSibling.value = Number(this.value).toFixed(2)" /> <output class="text-xs w-[50px] text-center font-light px-1 py-1 border border-gray-700 rounded-md" > 1.00</output > <label class="text-sm font-medium" for="repeat_penalty" >Repeat Penalty</label > <input type="range" id="repeat_penalty" name="repeat_penalty" min="1" max="2" step="0.01" value="1.10" oninput="this.nextElementSibling.value = Number(this.value).toFixed(2)" /> <output class="text-xs w-[50px] text-center font-light px-1 py-1 border border-gray-700 rounded-md" >1.10</output > <label class="text-sm font-medium" for="seed">Seed</label> <input type="number" id="seed" name="seed" value="299792458" class="font-light border border-gray-700 text-right rounded-md p-2" /> <button 
id="run" onclick="document.querySelector('#seed').value = Math.floor(Math.random() * Number.MAX_SAFE_INTEGER)" class="bg-gray-700 hover:bg-gray-800 text-white font-normal py-1 w-[50px] rounded disabled:bg-gray-300 disabled:cursor-not-allowed text-sm" > Rand </button> </div> </details> <div> <h3 class="font-medium">Generation:</h3> <div class="min-h-[250px] bg-slate-100 text-gray-500 p-4 rounded-md flex flex-col gap-2" > <div id="output-counter" hidden class="ml-auto font-semibold grid-rows-1" ></div> <p hidden id="output-generation" class="grid-rows-2 text-lg"></p> <span id="output-status" class="m-auto font-light" >No output yet</span > </div> </div> </main> </body> </html>
candle/candle-wasm-examples/phi/index.html/0
{ "file_path": "candle/candle-wasm-examples/phi/index.html", "repo_id": "candle", "token_count": 9818 }
47
<html> <head> <meta content="text/html;charset=utf-8" http-equiv="Content-Type" /> <title>Candle T5</title> </head> <body></body> </html> <!DOCTYPE html> <html> <head> <meta charset="UTF-8" /> <meta name="viewport" content="width=device-width, initial-scale=1.0" /> <style> @import url("https://fonts.googleapis.com/css2?family=Source+Code+Pro:wght@200;300;400&family=Source+Sans+3:wght@100;200;300;400;500;600;700;800;900&display=swap"); html, body { font-family: "Source Sans 3", sans-serif; } </style> <style type="text/tailwindcss"> .link { @apply underline hover:text-blue-500 hover:no-underline; } </style> <script src="https://cdn.tailwindcss.com"></script> <script type="module"> import { getModelInfo, MODELS, extractEmbeddings, generateText, } from "./utils.js"; const t5ModelEncoderWorker = new Worker("./T5ModelEncoderWorker.js", { type: "module", }); const t5ModelConditionalGeneration = new Worker( "./T5ModelConditionalGeneration.js", { type: "module" } ); const formEl = document.querySelector("#form"); const modelEl = document.querySelector("#model"); const promptEl = document.querySelector("#prompt"); const temperatureEl = document.querySelector("#temperature"); const toppEL = document.querySelector("#top-p"); const repeatPenaltyEl = document.querySelector("#repeat_penalty"); const seedEl = document.querySelector("#seed"); const outputEl = document.querySelector("#output-generation"); const tasksEl = document.querySelector("#tasks"); let selectedTaskID = ""; document.addEventListener("DOMContentLoaded", () => { for (const [id, model] of Object.entries(MODELS)) { const option = document.createElement("option"); option.value = id; option.innerText = `${id} (${model.size})`; modelEl.appendChild(option); } populateTasks(modelEl.value); modelEl.addEventListener("change", (e) => { populateTasks(e.target.value); }); tasksEl.addEventListener("change", (e) => { const task = e.target.value; const modelID = modelEl.value; promptEl.value = MODELS[modelID].tasks[task].prefix; selectedTaskID = task; }); }); function populateTasks(modelID) { const tasks = MODELS[modelID].tasks; tasksEl.innerHTML = ""; for (const [task, params] of Object.entries(tasks)) { const div = document.createElement("div"); div.innerHTML = ` <input type="radio" name="task" id="${task}" class="font-light cursor-pointer" value="${task}" /> <label for="${task}" class="cursor-pointer"> ${params.prefix} </label> `; tasksEl.appendChild(div); } selectedTaskID = Object.keys(tasks)[0]; tasksEl.querySelector(`#${selectedTaskID}`).checked = true; } form.addEventListener("submit", (e) => { e.preventDefault(); const promptText = promptEl.value; const modelID = modelEl.value; const { modelURL, configURL, tokenizerURL, maxLength } = getModelInfo( modelID, selectedTaskID ); const params = { temperature: Number(temperatureEl.value), top_p: Number(toppEL.value), repetition_penalty: Number(repeatPenaltyEl.value), seed: BigInt(seedEl.value), max_length: maxLength, }; generateText( t5ModelConditionalGeneration, modelURL, tokenizerURL, configURL, modelID, promptText, params, (status) => { if (status.status === "loading") { outputEl.innerText = "Loading model..."; } if (status.status === "decoding") { outputEl.innerText = "Generating..."; } } ).then(({ output }) => { outputEl.innerText = output.generation; }); }); </script> </head> <body class="container max-w-4xl mx-auto p-4"> <main class="grid grid-cols-1 gap-8 relative"> <span class="absolute text-5xl -ml-[1em]"> 🕯️ </span> <div> <h1 class="text-5xl font-bold">Candle T5 Transformer</h1> <h2 
class="text-2xl font-bold">Rust/WASM Demo</h2> <p class="max-w-lg"> This demo showcase Text-To-Text Transfer Transformer (<a href="https://blog.research.google/2020/02/exploring-transfer-learning-with-t5.html" target="_blank" class="link" >T5</a >) models right in your browser, thanks to <a href="https://github.com/huggingface/candle/" target="_blank" class="link"> Candle </a> ML framework and rust/wasm. You can choose from a range of available models, including <a href="https://huggingface.co/t5-small" target="_blank" class="link"> t5-small</a >, <a href="https://huggingface.co/t5-base" target="_blank" class="link" >t5-base</a >, <a href="https://huggingface.co/google/flan-t5-small" target="_blank" class="link" >flan-t5-small</a >, several <a href="https://huggingface.co/lmz/candle-quantized-t5/tree/main" target="_blank" class="link"> t5 quantized gguf models</a >, and also a quantized <a href="https://huggingface.co/jbochi/candle-coedit-quantized/tree/main" target="_blank" class="link"> CoEdIT model for text rewrite</a >. </p> </div> <div> <label for="model" class="font-medium">Models Options: </label> <select id="model" class="border-2 border-gray-500 rounded-md font-light"></select> </div> <div> <h3 class="font-medium">Task Prefix:</h3> <form id="tasks" class="flex flex-col gap-1 my-2"></form> </div> <form id="form" class="flex text-normal px-1 py-1 border border-gray-700 rounded-md items-center"> <input type="submit" hidden /> <input type="text" id="prompt" class="font-light w-full px-3 py-2 mx-1 resize-none outline-none" placeholder="Add prompt here, e.g. 'translate English to German: Today I'm going to eat Ice Cream'" value="translate English to German: Today I'm going to eat Ice Cream" /> <button class="bg-gray-700 hover:bg-gray-800 text-white font-normal py-2 w-16 rounded disabled:bg-gray-300 disabled:cursor-not-allowed"> Run </button> </form> <div class="grid grid-cols-3 max-w-md items-center gap-3"> <label class="text-sm font-medium" for="temperature">Temperature</label> <input type="range" id="temperature" name="temperature" min="0" max="2" step="0.01" value="0.00" oninput="this.nextElementSibling.value = Number(this.value).toFixed(2)" /> <output class="text-xs w-[50px] text-center font-light px-1 py-1 border border-gray-700 rounded-md"> 0.00</output > <label class="text-sm font-medium" for="top-p">Top-p</label> <input type="range" id="top-p" name="top-p" min="0" max="1" step="0.01" value="1.00" oninput="this.nextElementSibling.value = Number(this.value).toFixed(2)" /> <output class="text-xs w-[50px] text-center font-light px-1 py-1 border border-gray-700 rounded-md"> 1.00</output > <label class="text-sm font-medium" for="repeat_penalty" >Repeat Penalty</label > <input type="range" id="repeat_penalty" name="repeat_penalty" min="1" max="2" step="0.01" value="1.10" oninput="this.nextElementSibling.value = Number(this.value).toFixed(2)" /> <output class="text-xs w-[50px] text-center font-light px-1 py-1 border border-gray-700 rounded-md" >1.10</output > <label class="text-sm font-medium" for="seed">Seed</label> <input type="number" id="seed" name="seed" value="299792458" class="font-light border border-gray-700 text-right rounded-md p-2" /> <button id="run" onclick="document.querySelector('#seed').value = BigInt(Math.floor(Math.random() * 2**64-1))" class="bg-gray-700 hover:bg-gray-800 text-white font-normal py-1 w-[50px] rounded disabled:bg-gray-300 disabled:cursor-not-allowed text-sm"> Rand </button> </div> <div> <h3 class="font-medium">Generation:</h3> <div class="min-h-[250px] 
bg-slate-100 text-gray-500 p-4 rounded-md flex flex-col gap-2 text-lg"> <p id="output-generation" class="grid-rows-2">No output yet</p> </div> </div> </main> </body> </html>
candle/candle-wasm-examples/t5/index.html/0
{ "file_path": "candle/candle-wasm-examples/t5/index.html", "repo_id": "candle", "token_count": 4724 }
48
pub const LANGUAGES: [(&str, &str); 99] = [ ("en", "english"), ("zh", "chinese"), ("de", "german"), ("es", "spanish"), ("ru", "russian"), ("ko", "korean"), ("fr", "french"), ("ja", "japanese"), ("pt", "portuguese"), ("tr", "turkish"), ("pl", "polish"), ("ca", "catalan"), ("nl", "dutch"), ("ar", "arabic"), ("sv", "swedish"), ("it", "italian"), ("id", "indonesian"), ("hi", "hindi"), ("fi", "finnish"), ("vi", "vietnamese"), ("he", "hebrew"), ("uk", "ukrainian"), ("el", "greek"), ("ms", "malay"), ("cs", "czech"), ("ro", "romanian"), ("da", "danish"), ("hu", "hungarian"), ("ta", "tamil"), ("no", "norwegian"), ("th", "thai"), ("ur", "urdu"), ("hr", "croatian"), ("bg", "bulgarian"), ("lt", "lithuanian"), ("la", "latin"), ("mi", "maori"), ("ml", "malayalam"), ("cy", "welsh"), ("sk", "slovak"), ("te", "telugu"), ("fa", "persian"), ("lv", "latvian"), ("bn", "bengali"), ("sr", "serbian"), ("az", "azerbaijani"), ("sl", "slovenian"), ("kn", "kannada"), ("et", "estonian"), ("mk", "macedonian"), ("br", "breton"), ("eu", "basque"), ("is", "icelandic"), ("hy", "armenian"), ("ne", "nepali"), ("mn", "mongolian"), ("bs", "bosnian"), ("kk", "kazakh"), ("sq", "albanian"), ("sw", "swahili"), ("gl", "galician"), ("mr", "marathi"), ("pa", "punjabi"), ("si", "sinhala"), ("km", "khmer"), ("sn", "shona"), ("yo", "yoruba"), ("so", "somali"), ("af", "afrikaans"), ("oc", "occitan"), ("ka", "georgian"), ("be", "belarusian"), ("tg", "tajik"), ("sd", "sindhi"), ("gu", "gujarati"), ("am", "amharic"), ("yi", "yiddish"), ("lo", "lao"), ("uz", "uzbek"), ("fo", "faroese"), ("ht", "haitian creole"), ("ps", "pashto"), ("tk", "turkmen"), ("nn", "nynorsk"), ("mt", "maltese"), ("sa", "sanskrit"), ("lb", "luxembourgish"), ("my", "myanmar"), ("bo", "tibetan"), ("tl", "tagalog"), ("mg", "malagasy"), ("as", "assamese"), ("tt", "tatar"), ("haw", "hawaiian"), ("ln", "lingala"), ("ha", "hausa"), ("ba", "bashkir"), ("jw", "javanese"), ("su", "sundanese"), ];
candle/candle-wasm-examples/whisper/src/languages.rs/0
{ "file_path": "candle/candle-wasm-examples/whisper/src/languages.rs", "repo_id": "candle", "token_count": 1175 }
49
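The table above is keyed by the short (mostly two-letter, occasionally three-letter, e.g. "haw") codes used in Whisper's language tokens. A small lookup helper of the kind a caller might write on top of it (hypothetical, not in the source):

// Hypothetical helper over the LANGUAGES table above.
pub fn language_name(code: &str) -> Option<&'static str> {
    LANGUAGES
        .iter()
        .find(|(c, _)| *c == code)
        .map(|(_, name)| *name)
}

// e.g. language_name("haw") == Some("hawaiian"), language_name("xx") == None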
use crate::model::{report_detect, report_pose, Bbox, Multiples, YoloV8, YoloV8Pose}; use candle::{DType, Device, Result, Tensor}; use candle_nn::{Module, VarBuilder}; use serde::{Deserialize, Serialize}; use wasm_bindgen::prelude::*; use yew_agent::{HandlerId, Public, WorkerLink}; #[wasm_bindgen] extern "C" { // Use `js_namespace` here to bind `console.log(..)` instead of just // `log(..)` #[wasm_bindgen(js_namespace = console)] pub fn log(s: &str); } #[macro_export] macro_rules! console_log { // Note that this is using the `log` function imported above during // `bare_bones` ($($t:tt)*) => ($crate::worker::log(&format_args!($($t)*).to_string())) } // Communication to the worker happens through bincode, the model weights and configs are fetched // on the main thread and transferred via the following structure. #[derive(Serialize, Deserialize)] pub struct ModelData { pub weights: Vec<u8>, pub model_size: String, } #[derive(Serialize, Deserialize)] pub struct RunData { pub image_data: Vec<u8>, pub conf_threshold: f32, pub iou_threshold: f32, } pub struct Model { model: YoloV8, } impl Model { pub fn run( &self, image_data: Vec<u8>, conf_threshold: f32, iou_threshold: f32, ) -> Result<Vec<Vec<Bbox>>> { console_log!("image data: {}", image_data.len()); let image_data = std::io::Cursor::new(image_data); let original_image = image::io::Reader::new(image_data) .with_guessed_format()? .decode() .map_err(candle::Error::wrap)?; let (width, height) = { let w = original_image.width() as usize; let h = original_image.height() as usize; if w < h { let w = w * 640 / h; // Sizes have to be divisible by 32. (w / 32 * 32, 640) } else { let h = h * 640 / w; (640, h / 32 * 32) } }; let image_t = { let img = original_image.resize_exact( width as u32, height as u32, image::imageops::FilterType::CatmullRom, ); let data = img.to_rgb8().into_raw(); Tensor::from_vec( data, (img.height() as usize, img.width() as usize, 3), &Device::Cpu, )? .permute((2, 0, 1))? }; let image_t = (image_t.unsqueeze(0)?.to_dtype(DType::F32)? * (1. / 255.))?; let predictions = self.model.forward(&image_t)?.squeeze(0)?; console_log!("generated predictions {predictions:?}"); let bboxes = report_detect( &predictions, original_image, width, height, conf_threshold, iou_threshold, )?; Ok(bboxes) } pub fn load_(weights: Vec<u8>, model_size: &str) -> Result<Self> { let multiples = match model_size { "n" => Multiples::n(), "s" => Multiples::s(), "m" => Multiples::m(), "l" => Multiples::l(), "x" => Multiples::x(), _ => Err(candle::Error::Msg( "invalid model size: must be n, s, m, l or x".to_string(), ))?, }; let dev = &Device::Cpu; let vb = VarBuilder::from_buffered_safetensors(weights, DType::F32, dev)?; let model = YoloV8::load(vb, multiples, 80)?; Ok(Self { model }) } pub fn load(md: ModelData) -> Result<Self> { Self::load_(md.weights, &md.model_size.to_string()) } } pub struct ModelPose { model: YoloV8Pose, } impl ModelPose { pub fn run( &self, image_data: Vec<u8>, conf_threshold: f32, iou_threshold: f32, ) -> Result<Vec<Bbox>> { console_log!("image data: {}", image_data.len()); let image_data = std::io::Cursor::new(image_data); let original_image = image::io::Reader::new(image_data) .with_guessed_format()? .decode() .map_err(candle::Error::wrap)?; let (width, height) = { let w = original_image.width() as usize; let h = original_image.height() as usize; if w < h { let w = w * 640 / h; // Sizes have to be divisible by 32. 
(w / 32 * 32, 640) } else { let h = h * 640 / w; (640, h / 32 * 32) } }; let image_t = { let img = original_image.resize_exact( width as u32, height as u32, image::imageops::FilterType::CatmullRom, ); let data = img.to_rgb8().into_raw(); Tensor::from_vec( data, (img.height() as usize, img.width() as usize, 3), &Device::Cpu, )? .permute((2, 0, 1))? }; let image_t = (image_t.unsqueeze(0)?.to_dtype(DType::F32)? * (1. / 255.))?; let predictions = self.model.forward(&image_t)?.squeeze(0)?; console_log!("generated predictions {predictions:?}"); let bboxes = report_pose( &predictions, original_image, width, height, conf_threshold, iou_threshold, )?; Ok(bboxes) } pub fn load_(weights: Vec<u8>, model_size: &str) -> Result<Self> { let multiples = match model_size { "n" => Multiples::n(), "s" => Multiples::s(), "m" => Multiples::m(), "l" => Multiples::l(), "x" => Multiples::x(), _ => Err(candle::Error::Msg( "invalid model size: must be n, s, m, l or x".to_string(), ))?, }; let dev = &Device::Cpu; let vb = VarBuilder::from_buffered_safetensors(weights, DType::F32, dev)?; let model = YoloV8Pose::load(vb, multiples, 1, (17, 3))?; Ok(Self { model }) } pub fn load(md: ModelData) -> Result<Self> { Self::load_(md.weights, &md.model_size.to_string()) } } pub struct Worker { link: WorkerLink<Self>, model: Option<Model>, } #[derive(Serialize, Deserialize)] pub enum WorkerInput { ModelData(ModelData), RunData(RunData), } #[derive(Serialize, Deserialize)] pub enum WorkerOutput { ProcessingDone(std::result::Result<Vec<Vec<Bbox>>, String>), WeightsLoaded, } impl yew_agent::Worker for Worker { type Input = WorkerInput; type Message = (); type Output = std::result::Result<WorkerOutput, String>; type Reach = Public<Self>; fn create(link: WorkerLink<Self>) -> Self { Self { link, model: None } } fn update(&mut self, _msg: Self::Message) { // no messaging } fn handle_input(&mut self, msg: Self::Input, id: HandlerId) { let output = match msg { WorkerInput::ModelData(md) => match Model::load(md) { Ok(model) => { self.model = Some(model); Ok(WorkerOutput::WeightsLoaded) } Err(err) => Err(format!("model creation error {err:?}")), }, WorkerInput::RunData(rd) => match &mut self.model { None => Err("model has not been set yet".to_string()), Some(model) => { let result = model .run(rd.image_data, rd.conf_threshold, rd.iou_threshold) .map_err(|e| e.to_string()); Ok(WorkerOutput::ProcessingDone(result)) } }, }; self.link.respond(id, output); } fn name_of_resource() -> &'static str { "worker.js" } fn resource_path_is_relative() -> bool { true } }
candle/candle-wasm-examples/yolo/src/worker.rs/0
{ "file_path": "candle/candle-wasm-examples/yolo/src/worker.rs", "repo_id": "candle", "token_count": 4077 }
50
{ "editor.formatOnSave": true, "editor.defaultFormatter": "esbenp.prettier-vscode", "editor.codeActionsOnSave": { "source.fixAll": "explicit" }, "eslint.validate": ["javascript", "svelte"] }
chat-ui/.vscode/settings.json/0
{ "file_path": "chat-ui/.vscode/settings.json", "repo_id": "chat-ui", "token_count": 83 }
51
<!DOCTYPE html> <html lang="en" class="h-full"> <head> <meta charset="utf-8" /> <meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=no" /> <meta name="theme-color" content="rgb(249, 250, 251)" /> <script> if ( localStorage.theme === "dark" || (!("theme" in localStorage) && window.matchMedia("(prefers-color-scheme: dark)").matches) ) { document.documentElement.classList.add("dark"); document .querySelector('meta[name="theme-color"]') .setAttribute("content", "rgb(26, 36, 50)"); } // For some reason, Sveltekit doesn't let us load env variables from .env here, so we load it from hooks.server.ts window.gaId = "%gaId%"; </script> %sveltekit.head% </head> <body data-sveltekit-preload-data="hover" class="h-full dark:bg-gray-900"> <div id="app" class="contents h-full">%sveltekit.body%</div> <!-- Google Tag Manager --> <script> if (window.gaId) { const script = document.createElement("script"); script.src = "https://www.googletagmanager.com/gtag/js?id=" + window.gaId; script.async = true; document.head.appendChild(script); window.dataLayer = window.dataLayer || []; function gtag() { dataLayer.push(arguments); } gtag("js", new Date()); /// ^ See https://developers.google.com/tag-platform/gtagjs/install gtag("config", window.gaId); gtag("consent", "default", { ad_storage: "denied", analytics_storage: "denied" }); /// ^ See https://developers.google.com/tag-platform/gtagjs/reference#consent /// TODO: ask the user for their consent and update this with gtag('consent', 'update') } </script> </body> </html>
chat-ui/src/app.html/0
{ "file_path": "chat-ui/src/app.html", "repo_id": "chat-ui", "token_count": 677 }
52
<script lang="ts"> import { base } from "$app/paths"; import { page } from "$app/stores"; import { createEventDispatcher } from "svelte"; import CarbonCheckmark from "~icons/carbon/checkmark"; import CarbonTrashCan from "~icons/carbon/trash-can"; import CarbonClose from "~icons/carbon/close"; import CarbonEdit from "~icons/carbon/edit"; import type { ConvSidebar } from "$lib/types/ConvSidebar"; export let conv: ConvSidebar; let confirmDelete = false; const dispatch = createEventDispatcher<{ deleteConversation: string; editConversationTitle: { id: string; title: string }; }>(); </script> <a data-sveltekit-noscroll on:mouseleave={() => { confirmDelete = false; }} href="{base}/conversation/{conv.id}" class="group flex h-10 flex-none items-center gap-1.5 rounded-lg pl-2.5 pr-2 text-gray-600 hover:bg-gray-100 dark:text-gray-300 dark:hover:bg-gray-700 {conv.id === $page.params.id ? 'bg-gray-100 dark:bg-gray-700' : ''}" > <div class="flex flex-1 items-center truncate"> {#if confirmDelete} <span class="mr-1 font-semibold"> Delete </span> {/if} {#if conv.avatarHash} <img src="{base}/settings/assistants/{conv.assistantId}/avatar.jpg?hash={conv.avatarHash}" alt="Assistant avatar" class="mr-1.5 inline size-4 flex-none rounded-full object-cover" /> {conv.title.replace(/\p{Emoji}/gu, "")} {:else if conv.assistantId} <div class="mr-1.5 flex size-4 flex-none items-center justify-center rounded-full bg-gray-300 text-xs font-bold uppercase text-gray-500" /> {conv.title.replace(/\p{Emoji}/gu, "")} {:else} {conv.title} {/if} </div> {#if confirmDelete} <button type="button" class="flex h-5 w-5 items-center justify-center rounded md:hidden md:group-hover:flex" title="Confirm delete action" on:click|preventDefault={() => { confirmDelete = false; dispatch("deleteConversation", conv.id); }} > <CarbonCheckmark class="text-xs text-gray-400 hover:text-gray-500 dark:hover:text-gray-300" /> </button> <button type="button" class="flex h-5 w-5 items-center justify-center rounded md:hidden md:group-hover:flex" title="Cancel delete action" on:click|preventDefault={() => (confirmDelete = false)} > <CarbonClose class="text-xs text-gray-400 hover:text-gray-500 dark:hover:text-gray-300" /> </button> {:else} <button type="button" class="flex h-5 w-5 items-center justify-center rounded md:hidden md:group-hover:flex" title="Edit conversation title" on:click|preventDefault={() => { const newTitle = prompt("Edit this conversation title:", conv.title); if (!newTitle) return; dispatch("editConversationTitle", { id: conv.id, title: newTitle }); }} > <CarbonEdit class="text-xs text-gray-400 hover:text-gray-500 dark:hover:text-gray-300" /> </button> <button type="button" class="flex h-5 w-5 items-center justify-center rounded md:hidden md:group-hover:flex" title="Delete conversation" on:click|preventDefault={(event) => { if (event.shiftKey) { dispatch("deleteConversation", conv.id); } else { confirmDelete = true; } }} > <CarbonTrashCan class="text-xs text-gray-400 hover:text-gray-500 dark:hover:text-gray-300" /> </button> {/if} </a>
chat-ui/src/lib/components/NavConversationItem.svelte/0
{ "file_path": "chat-ui/src/lib/components/NavConversationItem.svelte", "repo_id": "chat-ui", "token_count": 1309 }
53
<script lang="ts"> import { isDesktop } from "$lib/utils/isDesktop"; import { createEventDispatcher, onMount } from "svelte"; export let value = ""; export let minRows = 1; export let maxRows: null | number = null; export let placeholder = ""; export let disabled = false; let textareaElement: HTMLTextAreaElement; let isCompositionOn = false; const dispatch = createEventDispatcher<{ submit: void }>(); $: minHeight = `${1 + minRows * 1.5}em`; $: maxHeight = maxRows ? `${1 + maxRows * 1.5}em` : `auto`; function handleKeydown(event: KeyboardEvent) { // submit on enter if (event.key === "Enter" && !event.shiftKey && !isCompositionOn) { event.preventDefault(); // blur to close keyboard on mobile textareaElement.blur(); // refocus so that user on desktop can start typing without needing to reclick on textarea if (isDesktop(window)) { textareaElement.focus(); } dispatch("submit"); // use a custom event instead of `event.target.form.requestSubmit()` as it does not work on Safari 14 } } onMount(() => { if (isDesktop(window)) { textareaElement.focus(); } }); </script> <div class="relative min-w-0 flex-1"> <pre class="scrollbar-custom invisible overflow-x-hidden overflow-y-scroll whitespace-pre-wrap break-words p-3" aria-hidden="true" style="min-height: {minHeight}; max-height: {maxHeight}">{(value || " ") + "\n"}</pre> <textarea enterkeyhint="send" tabindex="0" rows="1" class="scrollbar-custom absolute top-0 m-0 h-full w-full resize-none scroll-p-3 overflow-x-hidden overflow-y-scroll border-0 bg-transparent p-3 outline-none focus:ring-0 focus-visible:ring-0" class:text-gray-400={disabled} bind:value bind:this={textareaElement} {disabled} on:keydown={handleKeydown} on:compositionstart={() => (isCompositionOn = true)} on:compositionend={() => (isCompositionOn = false)} on:beforeinput {placeholder} /> </div> <style> pre, textarea { font-family: inherit; box-sizing: border-box; line-height: 1.5; } </style>
chat-ui/src/lib/components/chat/ChatInput.svelte/0
{ "file_path": "chat-ui/src/lib/components/chat/ChatInput.svelte", "repo_id": "chat-ui", "token_count": 748 }
54
import { client, collections } from "$lib/server/database";
import { migrations } from "./routines";
import { acquireLock, releaseLock, isDBLocked, refreshLock } from "./lock";
import { isHuggingChat } from "$lib/utils/isHuggingChat";

export async function checkAndRunMigrations() {
	// make sure all GUIDs are unique
	if (new Set(migrations.map((m) => m._id.toString())).size !== migrations.length) {
		throw new Error("Duplicate migration GUIDs found.");
	}

	// check if all migrations have already been run
	const migrationResults = await collections.migrationResults.find().toArray();

	// if all the migrations._id are in the migrationResults, we can exit early
	if (
		migrations.every((m) => migrationResults.some((m2) => m2._id.toString() === m._id.toString()))
	) {
		console.log("[MIGRATIONS] All migrations already applied.");
		return;
	}

	console.log("[MIGRATIONS] Begin check...");

	// connect to the database
	const connectedClient = await client.connect();

	const hasLock = await acquireLock();

	if (!hasLock) {
		// another instance already has the lock, so we exit early
		console.log(
			"[MIGRATIONS] Another instance already has the lock. Waiting for DB to be unlocked."
		);

		// block until the lock is released
		while (await isDBLocked()) {
			await new Promise((resolve) => setTimeout(resolve, 1000));
		}

		return;
	}

	// once here, we have the lock
	// make sure to refresh it regularly while it's running
	const refreshInterval = setInterval(async () => {
		await refreshLock();
	}, 1000 * 10);

	// iterate over all migrations
	for (const migration of migrations) {
		// check if the migration has already been applied
		const existingMigrationResult = migrationResults.find(
			(m) => m._id.toString() === migration._id.toString()
		);

		if (existingMigrationResult) {
			console.log(`[MIGRATIONS] "${migration.name}" already applied. Skipping...`);
		} else {
			// check the modifiers to see if some cases match
			if (
				(migration.runForHuggingChat === "only" && !isHuggingChat) ||
				(migration.runForHuggingChat === "never" && isHuggingChat)
			) {
				console.log(
					`[MIGRATIONS] "${migration.name}" should not be applied for this run. Skipping...`
				);
				continue;
			}

			// otherwise all is good and we can run the migration
			console.log(`[MIGRATIONS] "${migration.name}" not applied yet. Applying...`);

			await collections.migrationResults.updateOne(
				{ _id: migration._id },
				{
					$set: {
						name: migration.name,
						status: "ongoing",
					},
				},
				{ upsert: true }
			);

			const session = connectedClient.startSession();
			let result = false;

			try {
				await session.withTransaction(async () => {
					result = await migration.up(connectedClient);
				});
			} catch (e) {
				console.log(`[MIGRATIONS] "${migration.name}" failed!`);
				console.error(e);
			} finally {
				await session.endSession();
			}

			await collections.migrationResults.updateOne(
				{ _id: migration._id },
				{
					$set: {
						name: migration.name,
						status: result ? "success" : "failure",
					},
				},
				{ upsert: true }
			);
		}
	}

	console.log("[MIGRATIONS] All migrations applied. Releasing lock");

	clearInterval(refreshInterval);
	await releaseLock();
}
chat-ui/src/lib/migrations/migrations.ts/0
{ "file_path": "chat-ui/src/lib/migrations/migrations.ts", "repo_id": "chat-ui", "token_count": 1186 }
55
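The migration runner above relies on `acquireLock`, `refreshLock`, `releaseLock`, and `isDBLocked` from a `./lock` module that is not included in this dump. As a rough, hedged sketch of the pattern those names imply (a single MongoDB document used as a TTL-refreshed semaphore), the helpers could look like the following; the collection shape, lock key, and TTL value are assumptions made for illustration, not the project's actual implementation.

```ts
// Hypothetical sketch of the "./lock" helpers used by checkAndRunMigrations.
// A single document acts as a semaphore; a lock that has not been refreshed
// recently is treated as stale. Names and the 30s TTL are illustrative only.
import type { Collection } from "mongodb";

interface Semaphore {
	key: string;
	createdAt: Date;
	updatedAt: Date;
}

const LOCK_KEY = "migrations";
const LOCK_TTL_MS = 30_000; // consider the lock stale if not refreshed within 30s

export function createLockHelpers(semaphores: Collection<Semaphore>) {
	async function isDBLocked(): Promise<boolean> {
		const lock = await semaphores.findOne({ key: LOCK_KEY });
		// a lock counts only if it exists and was refreshed recently
		return lock !== null && Date.now() - lock.updatedAt.getTime() < LOCK_TTL_MS;
	}

	async function acquireLock(): Promise<boolean> {
		// check-then-set is enough for a handful of app instances racing at
		// startup; this sketch is not hardened against heavy contention
		if (await isDBLocked()) return false;
		await semaphores.updateOne(
			{ key: LOCK_KEY },
			{ $set: { updatedAt: new Date() }, $setOnInsert: { createdAt: new Date() } },
			{ upsert: true }
		);
		return true;
	}

	async function refreshLock(): Promise<void> {
		// called every ~10s by the migration runner while it holds the lock
		await semaphores.updateOne({ key: LOCK_KEY }, { $set: { updatedAt: new Date() } });
	}

	async function releaseLock(): Promise<void> {
		await semaphores.deleteOne({ key: LOCK_KEY });
	}

	return { acquireLock, refreshLock, releaseLock, isDBLocked };
}
```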
import { z } from "zod"; import { openAICompletionToTextGenerationStream } from "./openAICompletionToTextGenerationStream"; import { openAIChatToTextGenerationStream } from "./openAIChatToTextGenerationStream"; import { buildPrompt } from "$lib/buildPrompt"; import { OPENAI_API_KEY } from "$env/static/private"; import type { Endpoint } from "../endpoints"; export const endpointOAIParametersSchema = z.object({ weight: z.number().int().positive().default(1), model: z.any(), type: z.literal("openai"), baseURL: z.string().url().default("https://api.openai.com/v1"), apiKey: z.string().default(OPENAI_API_KEY ?? "sk-"), completion: z .union([z.literal("completions"), z.literal("chat_completions")]) .default("chat_completions"), defaultHeaders: z.record(z.string()).optional(), defaultQuery: z.record(z.string()).optional(), }); export async function endpointOai( input: z.input<typeof endpointOAIParametersSchema> ): Promise<Endpoint> { const { baseURL, apiKey, completion, model, defaultHeaders, defaultQuery } = endpointOAIParametersSchema.parse(input); let OpenAI; try { OpenAI = (await import("openai")).OpenAI; } catch (e) { throw new Error("Failed to import OpenAI", { cause: e }); } const openai = new OpenAI({ apiKey: apiKey ?? "sk-", baseURL, defaultHeaders, defaultQuery, }); if (completion === "completions") { return async ({ messages, preprompt, continueMessage }) => { const prompt = await buildPrompt({ messages, continueMessage, preprompt, model, }); return openAICompletionToTextGenerationStream( await openai.completions.create({ model: model.id ?? model.name, prompt, stream: true, max_tokens: model.parameters?.max_new_tokens, stop: model.parameters?.stop, temperature: model.parameters?.temperature, top_p: model.parameters?.top_p, frequency_penalty: model.parameters?.repetition_penalty, }) ); }; } else if (completion === "chat_completions") { return async ({ messages, preprompt }) => { let messagesOpenAI = messages.map((message) => ({ role: message.from, content: message.content, })); if (messagesOpenAI?.[0]?.role !== "system") { messagesOpenAI = [{ role: "system", content: "" }, ...messagesOpenAI]; } if (messagesOpenAI?.[0]) { messagesOpenAI[0].content = preprompt ?? ""; } return openAIChatToTextGenerationStream( await openai.chat.completions.create({ model: model.id ?? model.name, messages: messagesOpenAI, stream: true, max_tokens: model.parameters?.max_new_tokens, stop: model.parameters?.stop, temperature: model.parameters?.temperature, top_p: model.parameters?.top_p, frequency_penalty: model.parameters?.repetition_penalty, }) ); }; } else { throw new Error("Invalid completion type"); } }
chat-ui/src/lib/server/endpoints/openai/endpointOai.ts/0
{ "file_path": "chat-ui/src/lib/server/endpoints/openai/endpointOai.ts", "repo_id": "chat-ui", "token_count": 1108 }
56
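To make the factory above concrete, here is a hypothetical call site. The import path, the `model` object, and the message shape are simplified stand-ins for the project's real model and `Message` types, and the stream is assumed to yield TGI-style chunks (`token.text`, `generated_text`), so treat this as an illustration of the flow (parse config, build endpoint, stream a chat completion) rather than working app code.

```ts
// Illustrative only: field names below are trimmed-down stand-ins.
import { endpointOai } from "./endpointOai"; // path assumed

async function demo() {
	const endpoint = await endpointOai({
		type: "openai",
		baseURL: "https://api.openai.com/v1",
		apiKey: process.env.OPENAI_API_KEY ?? "sk-",
		completion: "chat_completions",
		// `model` is typed as z.any() in the schema; only the fields the endpoint
		// reads (id/name and generation parameters) matter here
		model: {
			id: "gpt-4o-mini",
			name: "gpt-4o-mini",
			parameters: { temperature: 0.7, top_p: 0.95, max_new_tokens: 512, stop: [] },
		},
	});

	// The returned Endpoint is an async function producing a text-generation stream
	// built from the OpenAI chat completion chunks.
	const stream = await endpoint({
		messages: [{ from: "user", content: "Write a haiku about the sea." }],
		preprompt: "You are a concise assistant.",
	});

	for await (const output of stream) {
		// print incremental tokens; the final chunk carries generated_text instead
		if (!output.generated_text) process.stdout.write(output.token.text);
	}
}

demo().catch(console.error);
```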
import { searchWeb } from "$lib/server/websearch/searchWeb"; import { generateQuery } from "$lib/server/websearch/generateQuery"; import { parseWeb } from "$lib/server/websearch/parseWeb"; import { chunk } from "$lib/utils/chunk"; import { findSimilarSentences } from "$lib/server/sentenceSimilarity"; import { getWebSearchProvider } from "./searchWeb"; import { defaultEmbeddingModel, embeddingModels } from "$lib/server/embeddingModels"; import { WEBSEARCH_ALLOWLIST, WEBSEARCH_BLOCKLIST, ENABLE_LOCAL_FETCH } from "$env/static/private"; import type { Conversation } from "$lib/types/Conversation"; import type { MessageUpdate } from "$lib/types/MessageUpdate"; import type { Message } from "$lib/types/Message"; import type { WebSearch, WebSearchSource } from "$lib/types/WebSearch"; import type { Assistant } from "$lib/types/Assistant"; import { z } from "zod"; import JSON5 from "json5"; import { isURLLocal } from "../isURLLocal"; const MAX_N_PAGES_SCRAPE = 10 as const; const MAX_N_PAGES_EMBED = 5 as const; const listSchema = z.array(z.string()).default([]); const allowList = listSchema.parse(JSON5.parse(WEBSEARCH_ALLOWLIST)); const blockList = listSchema.parse(JSON5.parse(WEBSEARCH_BLOCKLIST)); export async function runWebSearch( conv: Conversation, messages: Message[], updatePad: (upd: MessageUpdate) => void, ragSettings?: Assistant["rag"] ) { const prompt = messages[messages.length - 1].content; const webSearch: WebSearch = { prompt, searchQuery: "", results: [], context: "", contextSources: [], createdAt: new Date(), updatedAt: new Date(), }; function appendUpdate(message: string, args?: string[], type?: "error" | "update") { updatePad({ type: "webSearch", messageType: type ?? "update", message, args }); } try { // if the assistant specified direct links, skip the websearch if (ragSettings && ragSettings?.allowedLinks.length > 0) { appendUpdate("Using links specified in Assistant"); let linksToUse = [...ragSettings.allowedLinks]; if (ENABLE_LOCAL_FETCH !== "true") { const localLinks = await Promise.all( linksToUse.map(async (link) => { try { const url = new URL(link); return await isURLLocal(url); } catch (e) { return true; } }) ); linksToUse = linksToUse.filter((_, index) => !localLinks[index]); } webSearch.results = linksToUse.map((link) => { return { link, hostname: new URL(link).hostname, title: "", text: "" }; }); } else { webSearch.searchQuery = await generateQuery(messages); const searchProvider = getWebSearchProvider(); appendUpdate(`Searching ${searchProvider}`, [webSearch.searchQuery]); let filters = ""; if (ragSettings && ragSettings?.allowedDomains.length > 0) { appendUpdate("Filtering on specified domains"); filters += ragSettings.allowedDomains.map((item) => "site:" + item).join(" OR "); } // handle the global lists filters += allowList.map((item) => "site:" + item).join(" OR ") + " " + blockList.map((item) => "-site:" + item).join(" "); webSearch.searchQuery = filters + " " + webSearch.searchQuery; const results = await searchWeb(webSearch.searchQuery); webSearch.results = (results.organic_results && results.organic_results.map((el: { title?: string; link: string; text?: string }) => { try { const { title, link, text } = el; const { hostname } = new URL(link); return { title, link, hostname, text }; } catch (e) { // Ignore Errors return null; } })) ?? 
[]; } webSearch.results = webSearch.results.filter((value) => value !== null); webSearch.results = webSearch.results .filter(({ link }) => !blockList.some((el) => link.includes(el))) // filter out blocklist links .slice(0, MAX_N_PAGES_SCRAPE); // limit to first 10 links only // fetch the model const embeddingModel = embeddingModels.find((m) => m.id === conv.embeddingModel) ?? defaultEmbeddingModel; if (!embeddingModel) { throw new Error(`Embedding model ${conv.embeddingModel} not available anymore`); } let paragraphChunks: { source: WebSearchSource; text: string }[] = []; if (webSearch.results.length > 0) { appendUpdate("Browsing results"); const promises = webSearch.results.map(async (result) => { const { link } = result; let text = result.text ?? ""; if (!text) { try { text = await parseWeb(link); appendUpdate("Browsing webpage", [link]); } catch (e) { appendUpdate("Failed to parse webpage", [(e as Error).message, link], "error"); // ignore errors } } const MAX_N_CHUNKS = 100; const texts = chunk(text, embeddingModel.chunkCharLength).slice(0, MAX_N_CHUNKS); return texts.map((t) => ({ source: result, text: t })); }); const nestedParagraphChunks = (await Promise.all(promises)).slice(0, MAX_N_PAGES_EMBED); paragraphChunks = nestedParagraphChunks.flat(); if (!paragraphChunks.length) { throw new Error("No text found on the first 5 results"); } } else { throw new Error("No results found for this search query"); } appendUpdate("Extracting relevant information"); const topKClosestParagraphs = 8; const texts = paragraphChunks.map(({ text }) => text); const indices = await findSimilarSentences(embeddingModel, prompt, texts, { topK: topKClosestParagraphs, }); webSearch.context = indices.map((idx) => texts[idx]).join(""); const usedSources = new Set<string>(); for (const idx of indices) { const { source } = paragraphChunks[idx]; if (!usedSources.has(source.link)) { usedSources.add(source.link); webSearch.contextSources.push(source); } } updatePad({ type: "webSearch", messageType: "sources", message: "sources", sources: webSearch.contextSources, }); } catch (searchError) { if (searchError instanceof Error) { appendUpdate("An error occurred", [JSON.stringify(searchError.message)], "error"); } } return webSearch; }
chat-ui/src/lib/server/websearch/runWebSearch.ts/0
{ "file_path": "chat-ui/src/lib/server/websearch/runWebSearch.ts", "repo_id": "chat-ui", "token_count": 2199 }
57
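`runWebSearch` above splits each scraped page with a `chunk` helper from `$lib/utils/chunk` before embedding, but that helper is not shown in this file. A minimal character-window version, assuming it simply slices the text into `chunkCharLength`-sized pieces (the real utility may be smarter, for example about sentence boundaries), could look like this:

```ts
// Minimal sketch of a character-based chunker compatible with how it is called
// above: chunk(text, embeddingModel.chunkCharLength).slice(0, MAX_N_CHUNKS)
export function chunk(text: string, chunkSize: number): string[] {
	if (chunkSize <= 0) throw new Error("chunkSize must be positive");
	const chunks: string[] = [];
	for (let i = 0; i < text.length; i += chunkSize) {
		chunks.push(text.slice(i, i + chunkSize));
	}
	return chunks;
}

// e.g. chunk("abcdefgh", 3) -> ["abc", "def", "gh"]
```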
import type { ObjectId } from "mongodb"; import type { Message } from "./Message"; import type { Timestamps } from "./Timestamps"; import type { User } from "./User"; import type { Assistant } from "./Assistant"; export interface Conversation extends Timestamps { _id: ObjectId; sessionId?: string; userId?: User["_id"]; model: string; embeddingModel: string; title: string; rootMessageId?: Message["id"]; messages: Message[]; meta?: { fromShareId?: string; }; preprompt?: string; assistantId?: Assistant["_id"]; userAgent?: string; }
chat-ui/src/lib/types/Conversation.ts/0
{ "file_path": "chat-ui/src/lib/types/Conversation.ts", "repo_id": "chat-ui", "token_count": 182 }
58
import type { ObjectId } from "mongodb"; import type { Conversation } from "./Conversation"; import type { Timestamps } from "./Timestamps"; export interface WebSearch extends Timestamps { _id?: ObjectId; convId?: Conversation["_id"]; prompt: string; searchQuery: string; results: WebSearchSource[]; context: string; contextSources: WebSearchSource[]; } export interface WebSearchSource { title: string; link: string; hostname: string; text?: string; // You.com provides text of webpage right away } export type WebSearchMessageSources = { type: "sources"; sources: WebSearchSource[]; }; export interface YouWebSearch { hits: YouSearchHit[]; latency: number; } interface YouSearchHit { url: string; title: string; description: string; snippets: string[]; } // eslint-disable-next-line no-shadow export enum WebSearchProvider { GOOGLE = "Google", YOU = "You.com", SEARXNG = "SearXNG", }
chat-ui/src/lib/types/WebSearch.ts/0
{ "file_path": "chat-ui/src/lib/types/WebSearch.ts", "repo_id": "chat-ui", "token_count": 306 }
59
export function parseStringToList(links: unknown): string[] { if (typeof links !== "string") { throw new Error("Expected a string"); } return links .split(",") .map((link) => link.trim()) .filter((link) => link.length > 0); }
chat-ui/src/lib/utils/parseStringToList.ts/0
{ "file_path": "chat-ui/src/lib/utils/parseStringToList.ts", "repo_id": "chat-ui", "token_count": 86 }
60
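A quick usage note for the helper above: it is well suited to comma-separated environment variables such as a domain allow-list. The values below are made up for illustration.

```ts
import { parseStringToList } from "./parseStringToList"; // path assumed

// Whitespace around items is trimmed and empty entries are dropped,
// so trailing commas are harmless.
const allowList = parseStringToList("wikipedia.org, arxiv.org , example.com,");
console.log(allowList); // ["wikipedia.org", "arxiv.org", "example.com"]

// Anything that is not a string throws:
// parseStringToList(undefined) -> Error: "Expected a string"
```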
import type { Conversation } from "$lib/types/Conversation"; import type { Message } from "$lib/types/Message"; import { v4 } from "uuid"; export function convertLegacyConversation( conv: Pick<Conversation, "messages" | "rootMessageId" | "preprompt"> ): Pick<Conversation, "messages" | "rootMessageId" | "preprompt"> { if (conv.rootMessageId) return conv; // not a legacy conversation if (conv.messages.length === 0) return conv; // empty conversation const messages = [ { from: "system", content: conv.preprompt ?? "", createdAt: new Date(), updatedAt: new Date(), id: v4(), } satisfies Message, ...conv.messages, ]; const rootMessageId = messages[0].id; const newMessages = messages.map((message, index) => { return { ...message, ancestors: messages.slice(0, index).map((m) => m.id), children: index < messages.length - 1 ? [messages[index + 1].id] : [], }; }); return { ...conv, rootMessageId, messages: newMessages, }; }
chat-ui/src/lib/utils/tree/convertLegacyConversation.ts/0
{ "file_path": "chat-ui/src/lib/utils/tree/convertLegacyConversation.ts", "repo_id": "chat-ui", "token_count": 354 }
61
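To illustrate what the conversion above produces, here is a hedged sketch of calling it on a legacy conversation (one without a `rootMessageId`). The message objects are reduced to the fields visible in this file, so the real `Message` type may require more; the cast through `unknown` exists only to keep the illustration compact.

```ts
import { convertLegacyConversation } from "./convertLegacyConversation"; // path assumed

const legacy = {
	preprompt: "You are a helpful assistant.",
	messages: [
		{ from: "user", content: "Hi", id: "m1", createdAt: new Date(), updatedAt: new Date() },
		{ from: "assistant", content: "Hello!", id: "m2", createdAt: new Date(), updatedAt: new Date() },
	],
};

// A system message built from `preprompt` is prepended and becomes the root;
// every message gains `ancestors` (all previous ids) and `children` (the next id).
const converted = convertLegacyConversation(
	legacy as unknown as Parameters<typeof convertLegacyConversation>[0]
);

console.log(converted.rootMessageId === converted.messages[0].id); // true (system message)
console.log(converted.messages[1].ancestors); // [ <system message id> ]
console.log(converted.messages[1].children); // [ "m2" ]
```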
import ChatThumbnail from "./ChatThumbnail.svelte"; import { collections } from "$lib/server/database"; import { error, type RequestHandler } from "@sveltejs/kit"; import { ObjectId } from "mongodb"; import type { SvelteComponent } from "svelte"; import { Resvg } from "@resvg/resvg-js"; import satori from "satori"; import { html } from "satori-html"; import InterRegular from "../../../../../static/fonts/Inter-Regular.ttf"; import InterBold from "../../../../../static/fonts/Inter-Bold.ttf"; import sharp from "sharp"; export const GET: RequestHandler = (async ({ params }) => { const assistant = await collections.assistants.findOne({ _id: new ObjectId(params.assistantId), }); if (!assistant) { throw error(404, "Assistant not found."); } let avatar = ""; const fileId = collections.bucket.find({ filename: assistant._id.toString() }); const file = await fileId.next(); if (file) { avatar = await (async () => { const fileStream = collections.bucket.openDownloadStream(file?._id); const fileBuffer = await new Promise<Buffer>((resolve, reject) => { const chunks: Uint8Array[] = []; fileStream.on("data", (chunk) => chunks.push(chunk)); fileStream.on("error", reject); fileStream.on("end", () => resolve(Buffer.concat(chunks))); }); return fileBuffer; })() .then(async (buf) => sharp(buf).jpeg().toBuffer()) // convert to jpeg bc satori png is really slow .then(async (buf) => "data:image/jpeg;base64," + buf.toString("base64")); } const renderedComponent = (ChatThumbnail as unknown as SvelteComponent).render({ name: assistant.name, description: assistant.description, createdByName: assistant.createdByName, avatar, }); const reactLike = html( "<style>" + renderedComponent.css.code + "</style>" + renderedComponent.html ); const svg = await satori(reactLike, { width: 1200, height: 648, fonts: [ { name: "Inter", data: InterRegular as unknown as ArrayBuffer, weight: 500, }, { name: "Inter", data: InterBold as unknown as ArrayBuffer, weight: 700, }, ], }); const png = new Resvg(svg, { fitTo: { mode: "original" }, }) .render() .asPng(); return new Response(png, { headers: { "Content-Type": "image/png", }, }); }) satisfies RequestHandler;
chat-ui/src/routes/assistant/[assistantId]/thumbnail.png/+server.ts/0
{ "file_path": "chat-ui/src/routes/assistant/[assistantId]/thumbnail.png/+server.ts", "repo_id": "chat-ui", "token_count": 833 }
62
import { assert, it, describe, afterEach, vi, expect } from "vitest"; import type { Cookies } from "@sveltejs/kit"; import { collections } from "$lib/server/database"; import { updateUser } from "./updateUser"; import { ObjectId } from "mongodb"; import { DEFAULT_SETTINGS } from "$lib/types/Settings"; import { defaultModel } from "$lib/server/models"; import { findUser } from "$lib/server/auth"; import { defaultEmbeddingModel } from "$lib/server/embeddingModels"; const userData = { preferred_username: "new-username", name: "name", picture: "https://example.com/avatar.png", sub: "1234567890", }; Object.freeze(userData); const locals = { userId: "1234567890", sessionId: "1234567890", }; // @ts-expect-error SvelteKit cookies dumb mock const cookiesMock: Cookies = { set: vi.fn(), }; const insertRandomUser = async () => { const res = await collections.users.insertOne({ _id: new ObjectId(), createdAt: new Date(), updatedAt: new Date(), username: "base-username", name: userData.name, avatarUrl: userData.picture, hfUserId: userData.sub, }); return res.insertedId; }; const insertRandomConversations = async (count: number) => { const res = await collections.conversations.insertMany( new Array(count).fill(0).map(() => ({ _id: new ObjectId(), title: "random title", messages: [], model: defaultModel.id, embeddingModel: defaultEmbeddingModel.id, createdAt: new Date(), updatedAt: new Date(), sessionId: locals.sessionId, })) ); return res.insertedIds; }; describe("login", () => { it("should update user if existing", async () => { await insertRandomUser(); await updateUser({ userData, locals, cookies: cookiesMock }); const existingUser = await collections.users.findOne({ hfUserId: userData.sub }); assert.equal(existingUser?.name, userData.name); expect(cookiesMock.set).toBeCalledTimes(1); }); it("should migrate pre-existing conversations for new user", async () => { const insertedId = await insertRandomUser(); await insertRandomConversations(2); await updateUser({ userData, locals, cookies: cookiesMock }); const conversationCount = await collections.conversations.countDocuments({ userId: insertedId, sessionId: { $exists: false }, }); assert.equal(conversationCount, 2); await collections.conversations.deleteMany({ userId: insertedId }); }); it("should create default settings for new user", async () => { await updateUser({ userData, locals, cookies: cookiesMock }); const user = await findUser(locals.sessionId); assert.exists(user); const settings = await collections.settings.findOne({ userId: user?._id }); expect(settings).toMatchObject({ userId: user?._id, updatedAt: expect.any(Date), createdAt: expect.any(Date), ethicsModalAcceptedAt: expect.any(Date), ...DEFAULT_SETTINGS, }); await collections.settings.deleteOne({ userId: user?._id }); }); it("should migrate pre-existing settings for pre-existing user", async () => { const { insertedId } = await collections.settings.insertOne({ sessionId: locals.sessionId, ethicsModalAcceptedAt: new Date(), updatedAt: new Date(), createdAt: new Date(), ...DEFAULT_SETTINGS, shareConversationsWithModelAuthors: false, }); await updateUser({ userData, locals, cookies: cookiesMock }); const settings = await collections.settings.findOne({ _id: insertedId, sessionId: { $exists: false }, }); assert.exists(settings); const user = await collections.users.findOne({ hfUserId: userData.sub }); expect(settings).toMatchObject({ userId: user?._id, updatedAt: expect.any(Date), createdAt: expect.any(Date), ethicsModalAcceptedAt: expect.any(Date), ...DEFAULT_SETTINGS, 
shareConversationsWithModelAuthors: false, }); await collections.settings.deleteOne({ userId: user?._id }); }); }); afterEach(async () => { await collections.users.deleteMany({ hfUserId: userData.sub }); await collections.sessions.deleteMany({}); locals.userId = "1234567890"; locals.sessionId = "1234567890"; vi.clearAllMocks(); });
chat-ui/src/routes/login/callback/updateUser.spec.ts/0
{ "file_path": "chat-ui/src/routes/login/callback/updateUser.spec.ts", "repo_id": "chat-ui", "token_count": 1408 }
63
<script lang="ts"> import { enhance } from "$app/forms"; import { base } from "$app/paths"; import { page } from "$app/stores"; import { PUBLIC_ORIGIN, PUBLIC_SHARE_PREFIX } from "$env/static/public"; import { useSettingsStore } from "$lib/stores/settings"; import type { PageData } from "./$types"; import CarbonPen from "~icons/carbon/pen"; import CarbonTrash from "~icons/carbon/trash-can"; import CarbonCopy from "~icons/carbon/copy-file"; import CarbonFlag from "~icons/carbon/flag"; import CarbonLink from "~icons/carbon/link"; import CopyToClipBoardBtn from "$lib/components/CopyToClipBoardBtn.svelte"; import ReportModal from "./ReportModal.svelte"; import IconInternet from "$lib/components/icons/IconInternet.svelte"; export let data: PageData; $: assistant = data.assistants.find((el) => el._id.toString() === $page.params.assistantId); const settings = useSettingsStore(); $: isActive = $settings.activeModel === $page.params.assistantId; const prefix = PUBLIC_SHARE_PREFIX || `${PUBLIC_ORIGIN || $page.url.origin}${base}`; $: shareUrl = `${prefix}/assistant/${assistant?._id}`; let displayReportModal = false; $: hasRag = assistant?.rag?.allowAllDomains || !!assistant?.rag?.allowedDomains?.length || !!assistant?.rag?.allowedLinks?.length; </script> {#if displayReportModal} <ReportModal on:close={() => (displayReportModal = false)} /> {/if} <div class="flex h-full flex-col gap-2"> <div class="flex gap-6"> {#if assistant?.avatar} <!-- crop image if not square --> <img src={`${base}/settings/assistants/${assistant?._id}/avatar.jpg?hash=${assistant?.avatar}`} alt="Avatar" class="size-16 flex-none rounded-full object-cover sm:size-24" /> {:else} <div class="flex size-16 flex-none items-center justify-center rounded-full bg-gray-300 text-4xl font-semibold uppercase text-gray-500 sm:size-24" > {assistant?.name[0]} </div> {/if} <div class="flex-1"> <div class="mb-1.5"> <h1 class="mr-1 inline text-xl font-semibold"> {assistant?.name} </h1> {#if hasRag} <span class="inline-grid size-5 place-items-center rounded-full bg-blue-500/10" title="This assistant uses the websearch." > <IconInternet classNames="text-sm text-blue-600" /> </span> {/if} <span class="ml-1 rounded-full border px-2 py-0.5 text-sm leading-none text-gray-500" >public</span > </div> {#if assistant?.description} <p class="mb-2 line-clamp-2 text-sm text-gray-500"> {assistant.description} </p> {/if} <p class="text-sm text-gray-500"> Model: <span class="font-semibold"> {assistant?.modelId} </span> <span class="text-gray-300">•</span> Created by <a class="underline" href="{base}/assistants?user={assistant?.createdByName}"> {assistant?.createdByName} </a> </p> <div class="flex items-center gap-4 whitespace-nowrap text-sm text-gray-500 hover:*:text-gray-800" > <button class="{isActive ? 'bg-gray-100 text-gray-800' : 'bg-black !text-white'} my-2 flex w-fit items-center rounded-full px-3 py-1 text-base" disabled={isActive} name="Activate model" on:click|stopPropagation={() => { $settings.activeModel = $page.params.assistantId; }} > {isActive ? 
"Active" : "Activate"} </button> {#if assistant?.createdByMe} <a href="{base}/settings/assistants/{assistant?._id}/edit" class="underline" ><CarbonPen class="mr-1.5 inline text-xs" />Edit </a> <form method="POST" action="?/delete" use:enhance> <button type="submit" class="flex items-center underline"> <CarbonTrash class="mr-1.5 inline text-xs" />Delete</button > </form> {:else} <form method="POST" action="?/unsubscribe" use:enhance> <button type="submit" class="underline"> <CarbonTrash class="mr-1.5 inline text-xs" />Remove</button > </form> <form method="POST" action="?/edit" use:enhance class="hidden"> <button type="submit" class="underline"> <CarbonCopy class="mr-1.5 inline text-xs" />Duplicate</button > </form> {#if !assistant?.reported} <button type="button" on:click={() => { displayReportModal = true; }} class="underline" > <CarbonFlag class="mr-1.5 inline text-xs" />Report </button> {:else} <button type="button" disabled class="text-gray-700"> <CarbonFlag class="mr-1.5 inline text-xs" />Reported</button > {/if} {/if} </div> </div> </div> <div> <h2 class="text-lg font-semibold">Direct URL</h2> <p class="pb-2 text-sm text-gray-500">Share this link for people to use your assistant.</p> <div class="flex flex-row gap-2 rounded-lg border-2 border-gray-200 bg-gray-100 py-2 pl-3 pr-1.5" > <input disabled class="flex-1 truncate bg-inherit" value={shareUrl} /> <CopyToClipBoardBtn value={shareUrl} classNames="!border-none !shadow-none !py-0 !px-1 !rounded-md" > <div class="flex items-center gap-1.5 text-gray-500 hover:underline"> <CarbonLink />Copy </div> </CopyToClipBoardBtn> </div> </div> <!-- two columns for big screen, single column for small screen --> <div class="mb-12 mt-3"> <h2 class="mb-2 font-semibold">System Instructions</h2> <textarea disabled class="box-border h-full min-h-[8lh] w-full rounded-lg border-2 border-gray-200 bg-gray-100 p-2 disabled:cursor-not-allowed" >{assistant?.preprompt}</textarea > {#if hasRag} <div class="mt-4"> <h2 class=" font-semibold">Internet Access</h2> {#if assistant?.rag?.allowAllDomains} <p class="text-sm text-gray-500"> This Assistant uses Web Search to find information on Internet. </p> {:else if !!assistant?.rag?.allowedDomains && assistant?.rag?.allowedDomains.length} <p class="pb-4 text-sm text-gray-500"> This Assistant can use Web Search on the following domains: </p> <ul class="mr-2 flex flex-wrap gap-2.5 text-sm text-gray-800"> {#each assistant?.rag?.allowedDomains as domain} <li class="break-all rounded-lg border border-gray-200 bg-gray-100 px-2 py-0.5 leading-tight decoration-gray-400" > <a target="_blank" class="underline" href={domain}>{domain}</a> </li> {/each} </ul> {:else if !!assistant?.rag?.allowedLinks && assistant?.rag?.allowedLinks.length} <p class="pb-4 text-sm text-gray-500">This Assistant can browse the following links:</p> <ul class="mr-2 flex flex-wrap gap-2.5 text-sm text-gray-800"> {#each assistant?.rag?.allowedLinks as link} <li class="break-all rounded-lg border border-gray-200 bg-gray-100 px-2 py-0.5 leading-tight decoration-gray-400" > <a target="_blank" class="underline" href={link}>{link}</a> </li> {/each} </ul> {/if} </div> {/if} </div> </div>
chat-ui/src/routes/settings/(nav)/assistants/[assistantId]/+page.svelte/0
{ "file_path": "chat-ui/src/routes/settings/(nav)/assistants/[assistantId]/+page.svelte", "repo_id": "chat-ui", "token_count": 3055 }
64
{ "$schema": "https://vega.github.io/schema/vega-lite/v4.json", "data": { "values": "<DVC_METRIC_DATA>" }, "title": "<DVC_METRIC_TITLE>", "mark": { "type": "line" }, "encoding": { "x": { "field": "<DVC_METRIC_X>", "type": "quantitative", "title": "<DVC_METRIC_X_LABEL>" }, "y": { "field": "<DVC_METRIC_Y>", "type": "quantitative", "title": "<DVC_METRIC_Y_LABEL>", "scale": { "zero": false } }, "color": { "field": "rev", "type": "nominal" } } }
datasets/.dvc/plots/default.json/0
{ "file_path": "datasets/.dvc/plots/default.json", "repo_id": "datasets", "token_count": 419 }
65
# Contributor Covenant Code of Conduct ## Our Pledge We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation. We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community. ## Our Standards Examples of behavior that contributes to a positive environment for our community include: * Demonstrating empathy and kindness toward other people * Being respectful of differing opinions, viewpoints, and experiences * Giving and gracefully accepting constructive feedback * Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience * Focusing on what is best not just for us as individuals, but for the overall community Examples of unacceptable behavior include: * The use of sexualized language or imagery, and sexual attention or advances of any kind * Trolling, insulting or derogatory comments, and personal or political attacks * Public or private harassment * Publishing others' private information, such as a physical or email address, without their explicit permission * Other conduct which could reasonably be considered inappropriate in a professional setting ## Enforcement Responsibilities Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful. Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate. ## Scope This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. ## Enforcement Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at [email protected]. All complaints will be reviewed and investigated promptly and fairly. All community leaders are obligated to respect the privacy and security of the reporter of any incident. ## Enforcement Guidelines Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct: ### 1. Correction **Community Impact**: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community. **Consequence**: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested. ### 2. Warning **Community Impact**: A violation through a single incident or series of actions. **Consequence**: A warning with consequences for continued behavior. 
No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban. ### 3. Temporary Ban **Community Impact**: A serious violation of community standards, including sustained inappropriate behavior. **Consequence**: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban. ### 4. Permanent Ban **Community Impact**: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals. **Consequence**: A permanent ban from any sort of public interaction within the community. ## Attribution This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 2.0, available at [https://www.contributor-covenant.org/version/2/0/code_of_conduct.html][v2.0]. Community Impact Guidelines were inspired by [Mozilla's code of conduct enforcement ladder][Mozilla CoC]. For answers to common questions about this code of conduct, see the FAQ at [https://www.contributor-covenant.org/faq][FAQ]. Translations are available at [https://www.contributor-covenant.org/translations][translations]. [homepage]: https://www.contributor-covenant.org [v2.0]: https://www.contributor-covenant.org/version/2/0/code_of_conduct.html [Mozilla CoC]: https://github.com/mozilla/diversity [FAQ]: https://www.contributor-covenant.org/faq [translations]: https://www.contributor-covenant.org/translations
datasets/CODE_OF_CONDUCT.md/0
{ "file_path": "datasets/CODE_OF_CONDUCT.md", "repo_id": "datasets", "token_count": 1208 }
66
# Create an audio dataset You can share a dataset with your team or with anyone in the community by creating a dataset repository on the Hugging Face Hub: ```py from datasets import load_dataset dataset = load_dataset("<username>/my_dataset") ``` There are several methods for creating and sharing an audio dataset: * Create an audio dataset from local files in python with [`Dataset.push_to_hub`]. This is an easy way that requires only a few steps in python. * Create an audio dataset repository with the `AudioFolder` builder. This is a no-code solution for quickly creating an audio dataset with several thousand audio files. * Create an audio dataset by writing a loading script. This method is for advanced users and requires more effort and coding, but you have greater flexibility over how a dataset is defined, downloaded, and generated which can be useful for more complex or large scale audio datasets. <Tip> You can control access to your dataset by requiring users to share their contact information first. Check out the [Gated datasets](https://huggingface.co/docs/hub/datasets-gated) guide for more information about how to enable this feature on the Hub. </Tip> ## Local files You can load your own dataset using the paths to your audio files. Use the [`~Dataset.cast_column`] function to take a column of audio file paths, and cast it to the [`Audio`] feature: ```py >>> audio_dataset = Dataset.from_dict({"audio": ["path/to/audio_1", "path/to/audio_2", ..., "path/to/audio_n"]}).cast_column("audio", Audio()) >>> audio_dataset[0]["audio"] {'array': array([ 0. , 0.00024414, -0.00024414, ..., -0.00024414, 0. , 0. ], dtype=float32), 'path': 'path/to/audio_1', 'sampling_rate': 16000} ``` Then upload the dataset to the Hugging Face Hub using [`Dataset.push_to_hub`]: ```py audio_dataset.push_to_hub("<username>/my_dataset") ``` This will create a dataset repository containing your audio dataset: ``` my_dataset/ ├── README.md └── data/ └── train-00000-of-00001.parquet ``` ## AudioFolder The `AudioFolder` is a dataset builder designed to quickly load an audio dataset with several thousand audio files without requiring you to write any code. Any additional information about your dataset - such as transcription, speaker accent, or speaker intent - is automatically loaded by `AudioFolder` as long as you include this information in a metadata file (`metadata.csv`/`metadata.jsonl`). <Tip> 💡 Take a look at the [Split pattern hierarchy](repository_structure#split-pattern-hierarchy) to learn more about how `AudioFolder` creates dataset splits based on your dataset repository structure. </Tip> Create a dataset repository on the Hugging Face Hub and upload your dataset directory following the `AudioFolder` structure: ``` my_dataset/ ├── README.md ├── metadata.csv └── data/ ``` The `data` folder can be any name you want. <Tip> It can be helpful to store your metadata as a `jsonl` file if the data columns contain a more complex format (like a list of floats) to avoid parsing errors or reading complex values as strings. 
</Tip>

The metadata file should include a `file_name` column to link an audio file to its metadata:

```csv
file_name,transcription
data/first_audio_file.mp3,znowu się duch z ciałem zrośnie w młodocianej wstaniesz wiosnie i możesz skutkiem tych leków umierać wstawać wiek wieków dalej tam były przestrogi jak siekać głowę jak nogi
data/second_audio_file.mp3,już u źwierzyńca podwojów król zasiada przy nim książęta i panowie rada a gdzie wzniosły krążył ganek rycerze obok kochanek król skinął palcem zaczęto igrzysko
data/third_audio_file.mp3,pewnie kędyś w obłędzie ubite minęły szlaki zaczekajmy dzień jaki poślemy szukać wszędzie dziś jutro pewnie będzie posłali wszędzie sługi czekali dzień i drugi gdy nic nie doczekali z płaczem chcą jechać dali
```

Then you can store your dataset in a directory structure like this:

```
metadata.csv
data/first_audio_file.mp3
data/second_audio_file.mp3
data/third_audio_file.mp3
```

Users can now load your dataset and the associated metadata by specifying `audiofolder` in [`load_dataset`] and the dataset directory in `data_dir`:

```py
>>> from datasets import load_dataset
>>> dataset = load_dataset("audiofolder", data_dir="/path/to/data")
>>> dataset["train"][0]
{'audio':
    {'path': '/path/to/extracted/audio/first_audio_file.mp3',
    'array': array([ 0.00088501,  0.0012207 ,  0.00131226, ..., -0.00045776, -0.00054932, -0.00054932], dtype=float32),
    'sampling_rate': 16000},
 'transcription': 'znowu się duch z ciałem zrośnie w młodocianej wstaniesz wiosnie i możesz skutkiem tych leków umierać wstawać wiek wieków dalej tam były przestrogi jak siekać głowę jak nogi'
}
```

You can also use `audiofolder` to load datasets involving multiple splits. To do so, your dataset directory might have the following structure:

```
data/train/first_train_audio_file.mp3
data/train/second_train_audio_file.mp3
data/test/first_test_audio_file.mp3
data/test/second_test_audio_file.mp3
```

<Tip warning={true}>

Note that if the audio files are not located right next to the metadata file, the `file_name` column should be a full relative path to an audio file, not just its filename.

</Tip>

For audio datasets that don't have any associated metadata, `AudioFolder` automatically infers the class labels of the dataset based on the directory name. It might be useful for audio classification tasks. Your dataset directory might look like:

```
data/train/electronic/01.mp3
data/train/punk/01.mp3
data/test/electronic/09.mp3
data/test/punk/09.mp3
```

Load the dataset with `AudioFolder`, and it will create a `label` column from the directory name (here, the music genre):

```py
>>> from datasets import load_dataset
>>> dataset = load_dataset("audiofolder", data_dir="/path/to/data")
>>> dataset["train"][0]
{'audio':
    {'path': '/path/to/electronic/01.mp3',
     'array': array([ 3.9714024e-07,  7.3031038e-07,  7.5640685e-07, ...,
         -1.1963668e-01, -1.1681189e-01, -1.1244172e-01], dtype=float32),
     'sampling_rate': 44100},
 'label': 0  # "electronic"
}
>>> dataset["train"][-1]
{'audio':
    {'path': '/path/to/punk/01.mp3',
     'array': array([0.15237972, 0.13222949, 0.10627693, ..., 0.41940814, 0.37578005, 0.33717662], dtype=float32),
     'sampling_rate': 44100},
 'label': 1  # "punk"
}
```

<Tip warning={true}>

If all audio files are contained in a single directory, or if they are not on the same level of the directory structure, the `label` column won't be added automatically. If you need it, set `drop_labels=False` explicitly.
</Tip>

<Tip>

Some audio datasets, like those found in [Kaggle competitions](https://www.kaggle.com/competitions/kaggle-pog-series-s01e02/overview), have separate metadata files for each split. Provided the metadata features are the same for each split, `audiofolder` can be used to load all splits at once. If the metadata features differ across each split, you should load them with separate `load_dataset()` calls.

</Tip>

## Loading script

Write a dataset loading script to manually create a dataset. It defines a dataset's splits and configurations, and handles downloading and generating the dataset examples. The script should have the same name as your dataset folder or repository:

```
my_dataset/
├── README.md
├── my_dataset.py
└── data/
```

The `data` folder can be any name you want; it doesn't have to be `data`. This folder is optional, unless you're hosting your dataset on the Hub.

This directory structure allows your dataset to be loaded in one line:

```py
>>> from datasets import load_dataset
>>> dataset = load_dataset("path/to/my_dataset")
```

This guide will show you how to create a dataset loading script for audio datasets, which is a bit different from <a class="underline decoration-green-400 decoration-2 font-semibold" href="./dataset_script">creating a loading script for text datasets</a>. Audio datasets are commonly stored in `tar.gz` archives, which require a particular approach to support streaming mode. While streaming is not required, we highly encourage implementing streaming support in your audio dataset so that users without a lot of disk space can use your dataset without downloading it. Learn more about streaming in the [Stream](./stream) guide!

Here is an example using TAR archives:

```
my_dataset/
├── README.md
├── my_dataset.py
└── data/
    ├── train.tar.gz
    ├── test.tar.gz
    └── metadata.csv
```

In addition to learning how to create a streamable dataset, you'll also learn how to:

* Create a dataset builder class.
* Create dataset configurations.
* Add dataset metadata.
* Download and define the dataset splits.
* Generate the dataset.
* Upload the dataset to the Hub.

The best way to learn is to open up an existing audio dataset loading script, like [Vivos](https://huggingface.co/datasets/vivos/blob/main/vivos.py), and follow along!

<Tip warning={true}>

This guide shows how to process audio data stored in TAR archives, the most frequent case for audio datasets. Check out the [minds14](https://huggingface.co/datasets/PolyAI/minds14/blob/main/minds14.py) dataset for an example of an audio script which uses ZIP archives.

</Tip>

<Tip>

To help you get started, we created a loading script [template](https://github.com/huggingface/datasets/blob/main/templates/new_dataset_script.py) you can copy and use as a starting point!

</Tip>

### Create a dataset builder class

[`GeneratorBasedBuilder`] is the base class for datasets generated from a dictionary generator. Within this class, there are three methods to help create your dataset:

* `_info` stores information about your dataset like its description, license, and features.
* `_split_generators` downloads the dataset and defines its splits.
* `_generate_examples` generates the dataset's samples containing the audio data and other features specified in `info` for each split.

Start by creating your dataset class as a subclass of [`GeneratorBasedBuilder`] and add the three methods.
Don't worry about filling in each of these methods yet, you'll develop those over the next few sections: ```py class VivosDataset(datasets.GeneratorBasedBuilder): """VIVOS is a free Vietnamese speech corpus consisting of 15 hours of recording speech prepared for Vietnamese Automatic Speech Recognition task.""" def _info(self): def _split_generators(self, dl_manager): def _generate_examples(self, prompts_path, path_to_clips, audio_files): ``` #### Multiple configurations In some cases, a dataset may have more than one configuration. For example, [LibriVox Indonesia](https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia) dataset has several configurations corresponding to different languages. To create different configurations, use the [`BuilderConfig`] class to create a subclass of your dataset. The only required parameter is the `name` of the configuration, which must be passed to the configuration's superclass `__init__()`. Otherwise, you can specify any custom parameters you want in your configuration class. ```py class LibriVoxIndonesiaConfig(datasets.BuilderConfig): """BuilderConfig for LibriVoxIndonesia.""" def __init__(self, name, version, **kwargs): self.language = kwargs.pop("language", None) self.release_date = kwargs.pop("release_date", None) self.num_clips = kwargs.pop("num_clips", None) self.num_speakers = kwargs.pop("num_speakers", None) self.validated_hr = kwargs.pop("validated_hr", None) self.total_hr = kwargs.pop("total_hr", None) self.size_bytes = kwargs.pop("size_bytes", None) self.size_human = size_str(self.size_bytes) description = ( f"LibriVox-Indonesia speech to text dataset in {self.language} released on {self.release_date}. " f"The dataset comprises {self.validated_hr} hours of transcribed speech data" ) super(LibriVoxIndonesiaConfig, self).__init__( name=name, version=datasets.Version(version), description=description, **kwargs, ) ``` Define your configurations in the `BUILDER_CONFIGS` class variable inside [`GeneratorBasedBuilder`]. In this example, the author imports the languages from a separate `release_stats.py` [file](https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia/blob/main/release_stats.py) from their repository, and then loops through each language to create a configuration: ```py class LibriVoxIndonesia(datasets.GeneratorBasedBuilder): DEFAULT_CONFIG_NAME = "all" BUILDER_CONFIGS = [ LibriVoxIndonesiaConfig( name=lang, version=STATS["version"], language=LANGUAGES[lang], release_date=STATS["date"], num_clips=lang_stats["clips"], num_speakers=lang_stats["users"], total_hr=float(lang_stats["totalHrs"]) if lang_stats["totalHrs"] else None, size_bytes=int(lang_stats["size"]) if lang_stats["size"] else None, ) for lang, lang_stats in STATS["locales"].items() ] ``` <Tip> Typically, users need to specify a configuration to load in [`load_dataset`], otherwise a `ValueError` is raised. You can avoid this by setting a default dataset configuration to load in `DEFAULT_CONFIG_NAME`. </Tip> Now if users want to load the Balinese (`bal`) configuration, they can use the configuration name: ```py >>> from datasets import load_dataset >>> dataset = load_dataset("indonesian-nlp/librivox-indonesia", "bal", split="train") ``` ### Add dataset metadata Adding information about your dataset helps users to learn more about it. This information is stored in the [`DatasetInfo`] class which is returned by the `info` method. 
Users can access this information by:

```py
>>> from datasets import load_dataset_builder
>>> ds_builder = load_dataset_builder("vivos")
>>> ds_builder.info
```

There is a lot of information you can include about your dataset, but some important ones are:

1. `description` provides a concise description of the dataset.
2. `features` specify the dataset column types. Since you're creating an audio loading script, you'll need to include the [`Audio`] feature and the `sampling_rate` of the dataset.
3. `homepage` provides a link to the dataset homepage.
4. `license` specifies the permissions for using a dataset as defined by the license type.
5. `citation` is a BibTeX citation of the dataset.

<Tip>

You'll notice a lot of the dataset information is defined earlier in the loading script which can make it easier to read. There are also other [`~Dataset.Features`] you can input, so be sure to check out the full list and [features guide](./about_dataset_features) for more details.

</Tip>

```py
def _info(self):
    return datasets.DatasetInfo(
        description=_DESCRIPTION,
        features=datasets.Features(
            {
                "speaker_id": datasets.Value("string"),
                "path": datasets.Value("string"),
                "audio": datasets.Audio(sampling_rate=16_000),
                "sentence": datasets.Value("string"),
            }
        ),
        supervised_keys=None,
        homepage=_HOMEPAGE,
        license=_LICENSE,
        citation=_CITATION,
    )
```

### Download and define the dataset splits

Now that you've added some information about your dataset, the next step is to download the dataset and define the splits.

1. Use the [`~DownloadManager.download`] method to download the metadata file at `_PROMPTS_URLS` and the audio TAR archive at `_DATA_URL`. This method returns the path to the local file/archive. In streaming mode, it doesn't download the file(s) and just returns a URL to stream the data from. This method accepts:

   * a relative path to a file inside a Hub dataset repository (for example, in the `data/` folder)
   * a URL to a file hosted somewhere else
   * a (nested) list or dictionary of file names or URLs

2. After you've downloaded the dataset, use the [`SplitGenerator`] to organize the audio files and sentence prompts in each split. Name each split with a standard name like: `Split.TRAIN`, `Split.TEST`, and `Split.VALIDATION`.

   In the `gen_kwargs` parameter, specify the file path to the `prompts_path` and `path_to_clips`. For `audio_files`, you'll need to use [`~DownloadManager.iter_archive`] to iterate over the audio files in the TAR archive. This enables streaming for your dataset. All of these file paths are passed onto the next step where you'll actually generate the dataset.

```py
def _split_generators(self, dl_manager):
    """Returns SplitGenerators."""
    prompts_paths = dl_manager.download(_PROMPTS_URLS)
    archive = dl_manager.download(_DATA_URL)
    train_dir = "vivos/train"
    test_dir = "vivos/test"

    return [
        datasets.SplitGenerator(
            name=datasets.Split.TRAIN,
            gen_kwargs={
                "prompts_path": prompts_paths["train"],
                "path_to_clips": train_dir + "/waves",
                "audio_files": dl_manager.iter_archive(archive),
            },
        ),
        datasets.SplitGenerator(
            name=datasets.Split.TEST,
            gen_kwargs={
                "prompts_path": prompts_paths["test"],
                "path_to_clips": test_dir + "/waves",
                "audio_files": dl_manager.iter_archive(archive),
            },
        ),
    ]
```

<Tip warning={true}>

This implementation does not extract downloaded archives. If you want to extract files after download, you need to additionally use [`~DownloadManager.extract`], see the [(Advanced) Extract TAR archives](#advanced-extract-tar-archives-locally) section.
</Tip> ### Generate the dataset The last method in the [`GeneratorBasedBuilder`] class actually generates the samples in the dataset. It yields a dataset according to the structure specified in `features` from the `info` method. As you can see, `generate_examples` accepts the `prompts_path`, `path_to_clips`, and `audio_files` from the previous method as arguments. Files inside TAR archives are accessed and yielded sequentially. This means you need to have the metadata associated with the audio files in the TAR file in hand first so you can yield it with its corresponding audio file. ```py examples = {} with open(prompts_path, encoding="utf-8") as f: for row in f: data = row.strip().split(" ", 1) speaker_id = data[0].split("_")[0] audio_path = "/".join([path_to_clips, speaker_id, data[0] + ".wav"]) examples[audio_path] = { "speaker_id": speaker_id, "path": audio_path, "sentence": data[1], } ``` Finally, iterate over files in `audio_files` and yield them along with their corresponding metadata. [`~DownloadManager.iter_archive`] yields a tuple of (`path`, `f`) where `path` is a **relative** path to a file inside TAR archive and `f` is a file object itself. ```py inside_clips_dir = False id_ = 0 for path, f in audio_files: if path.startswith(path_to_clips): inside_clips_dir = True if path in examples: audio = {"path": path, "bytes": f.read()} yield id_, {**examples[path], "audio": audio} id_ += 1 elif inside_clips_dir: break ``` Put these two steps together, and the whole `_generate_examples` method looks like: ```py def _generate_examples(self, prompts_path, path_to_clips, audio_files): """Yields examples as (key, example) tuples.""" examples = {} with open(prompts_path, encoding="utf-8") as f: for row in f: data = row.strip().split(" ", 1) speaker_id = data[0].split("_")[0] audio_path = "/".join([path_to_clips, speaker_id, data[0] + ".wav"]) examples[audio_path] = { "speaker_id": speaker_id, "path": audio_path, "sentence": data[1], } inside_clips_dir = False id_ = 0 for path, f in audio_files: if path.startswith(path_to_clips): inside_clips_dir = True if path in examples: audio = {"path": path, "bytes": f.read()} yield id_, {**examples[path], "audio": audio} id_ += 1 elif inside_clips_dir: break ``` ### Upload the dataset to the Hub Once your script is ready, [create a dataset card](./dataset_card) and [upload it to the Hub](./share). Congratulations, you can now load your dataset from the Hub! 🥳 ```py >>> from datasets import load_dataset >>> load_dataset("<username>/my_dataset") ``` ### (Advanced) Extract TAR archives locally In the example above downloaded archives are not extracted and therefore examples do not contain information about where they are stored locally. To explain how to do the extraction in a way that it also supports streaming, we will briefly go through the [LibriVox Indonesia](https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia/blob/main/librivox-indonesia.py) loading script. #### Download and define the dataset splits 1. Use the [`~DownloadManager.download`] method to download the audio data at `_AUDIO_URL`. 2. To extract audio TAR archive locally, use the [`~DownloadManager.extract`]. You can use this method only in non-streaming mode (when `dl_manager.is_streaming=False`). This returns a local path to the extracted archive directory: ```py local_extracted_archive = dl_manager.extract(audio_path) if not dl_manager.is_streaming else None ``` 3. 
Use the [`~DownloadManager.iter_archive`] method to iterate over the archive at `audio_path`, just like in the Vivos example above. [`~DownloadManager.iter_archive`] doesn't provide any information about the full paths of files from the archive, even if it has been extracted. As a result, you need to pass the `local_extracted_archive` path to the next step in `gen_kwargs` to preserve information about where the archive was extracted to. This is required to construct the correct paths to the local files when you generate the examples.

<Tip warning={true}>

The reason you need to use a combination of [`~DownloadManager.download`] and [`~DownloadManager.iter_archive`] is because files in TAR archives can't be accessed directly by their paths. Instead, you'll need to iterate over the files within the archive! You can use [`~DownloadManager.download_and_extract`] and [`~DownloadManager.extract`] with TAR archives only in non-streaming mode, otherwise it would throw an error.

</Tip>

4. Use the [`~DownloadManager.download_and_extract`] method to download the metadata file specified in `_METADATA_URL`. This method returns a path to a local file in non-streaming mode. In streaming mode, it doesn't download the file locally and returns the same URL.

5. Now use the [`SplitGenerator`] to organize the audio files and metadata in each split. Name each split with a standard name like: `Split.TRAIN`, `Split.TEST`, and `Split.VALIDATION`. In the `gen_kwargs` parameter, specify the file paths to `local_extracted_archive`, `audio_files`, `metadata_path`, and `path_to_clips`. Remember, for `audio_files`, you need to use [`~DownloadManager.iter_archive`] to iterate over the audio files in the TAR archives. This enables streaming for your dataset! All of these file paths are passed on to the next step, where the dataset samples are generated.

```py
def _split_generators(self, dl_manager):
    """Returns SplitGenerators."""
    dl_manager.download_config.ignore_url_params = True
    audio_path = dl_manager.download(_AUDIO_URL)
    local_extracted_archive = dl_manager.extract(audio_path) if not dl_manager.is_streaming else None
    path_to_clips = "librivox-indonesia"

    return [
        datasets.SplitGenerator(
            name=datasets.Split.TRAIN,
            gen_kwargs={
                "local_extracted_archive": local_extracted_archive,
                "audio_files": dl_manager.iter_archive(audio_path),
                "metadata_path": dl_manager.download_and_extract(_METADATA_URL + "/metadata_train.csv.gz"),
                "path_to_clips": path_to_clips,
            },
        ),
        datasets.SplitGenerator(
            name=datasets.Split.TEST,
            gen_kwargs={
                "local_extracted_archive": local_extracted_archive,
                "audio_files": dl_manager.iter_archive(audio_path),
                "metadata_path": dl_manager.download_and_extract(_METADATA_URL + "/metadata_test.csv.gz"),
                "path_to_clips": path_to_clips,
            },
        ),
    ]
```

#### Generate the dataset

Here `_generate_examples` accepts `local_extracted_archive`, `audio_files`, `metadata_path`, and `path_to_clips` from the previous method as arguments.

1. TAR files are accessed and yielded sequentially.
This means you need to have the metadata in `metadata_path` associated with the audio files in the TAR file in hand first so that you can yield it with its corresponding audio file further: ```py with open(metadata_path, "r", encoding="utf-8") as f: reader = csv.DictReader(f) for row in reader: if self.config.name == "all" or self.config.name == row["language"]: row["path"] = os.path.join(path_to_clips, row["path"]) # if data is incomplete, fill with empty values for field in data_fields: if field not in row: row[field] = "" metadata[row["path"]] = row ``` 2. Now you can yield the files in `audio_files` archive. When you use [`~DownloadManager.iter_archive`], it yielded a tuple of (`path`, `f`) where `path` is a **relative path** to a file inside the archive, and `f` is the file object itself. To get the **full path** to the locally extracted file, join the path of the directory (`local_extracted_path`) where the archive is extracted to and the relative audio file path (`path`): ```py for path, f in audio_files: if path in metadata: result = dict(metadata[path]) # set the audio feature and the path to the extracted file path = os.path.join(local_extracted_archive, path) if local_extracted_archive else path result["audio"] = {"path": path, "bytes": f.read()} result["path"] = path yield id_, result id_ += 1 ```` Put both of these steps together, and the whole `_generate_examples` method should look like: ```py def _generate_examples( self, local_extracted_archive, audio_files, metadata_path, path_to_clips, ): """Yields examples.""" data_fields = list(self._info().features.keys()) metadata = {} with open(metadata_path, "r", encoding="utf-8") as f: reader = csv.DictReader(f) for row in reader: if self.config.name == "all" or self.config.name == row["language"]: row["path"] = os.path.join(path_to_clips, row["path"]) # if data is incomplete, fill with empty values for field in data_fields: if field not in row: row[field] = "" metadata[row["path"]] = row id_ = 0 for path, f in audio_files: if path in metadata: result = dict(metadata[path]) # set the audio feature and the path to the extracted file path = os.path.join(local_extracted_archive, path) if local_extracted_archive else path result["audio"] = {"path": path, "bytes": f.read()} result["path"] = path yield id_, result id_ += 1 ```
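Once the script works end-to-end, a quick sanity check is to load a few examples in both streaming and non-streaming mode, since the two code paths above (`iter_archive` alone vs. `local_extracted_archive`) behave differently. This is only a sketch and assumes the loading script has been uploaded to the Hub as `indonesian-nlp/librivox-indonesia` (recent versions of 🤗 Datasets may also require `trust_remote_code=True` for script-based datasets):

```py
>>> from datasets import load_dataset

>>> # streaming: the TAR archive is not extracted, audio is read from bytes on the fly
>>> streamed = load_dataset("indonesian-nlp/librivox-indonesia", "all", split="train", streaming=True)
>>> sample = next(iter(streamed))

>>> # non-streaming: the archive is downloaded and extracted, so "path" points to a local file
>>> ds = load_dataset("indonesian-nlp/librivox-indonesia", "all", split="train")
>>> ds[0]["path"]
```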
datasets/docs/source/audio_dataset.mdx/0
{ "file_path": "datasets/docs/source/audio_dataset.mdx", "repo_id": "datasets", "token_count": 9843 }
67
# Process image data This guide shows specific methods for processing image datasets. Learn how to: - Use [`~Dataset.map`] with image dataset. - Apply data augmentations to a dataset with [`~Dataset.set_transform`]. For a guide on how to process any type of dataset, take a look at the <a class="underline decoration-sky-400 decoration-2 font-semibold" href="./process">general process guide</a>. ## Map The [`~Dataset.map`] function can apply transforms over an entire dataset. For example, create a basic [`Resize`](https://pytorch.org/vision/stable/generated/torchvision.transforms.Resize.html) function: ```py >>> def transforms(examples): ... examples["pixel_values"] = [image.convert("RGB").resize((100,100)) for image in examples["image"]] ... return examples ``` Now use the [`~Dataset.map`] function to resize the entire dataset, and set `batched=True` to speed up the process by accepting batches of examples. The transform returns `pixel_values` as a cacheable `PIL.Image` object: ```py >>> dataset = dataset.map(transforms, remove_columns=["image"], batched=True) >>> dataset[0] {'label': 6, 'pixel_values': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=100x100 at 0x7F058237BB10>} ``` The cache file saves time because you don't have to execute the same transform twice. The [`~Dataset.map`] function is best for operations you only run once per training - like resizing an image - instead of using it for operations executed for each epoch, like data augmentations. [`~Dataset.map`] takes up some memory, but you can reduce its memory requirements with the following parameters: - [`batch_size`](./package_reference/main_classes#datasets.DatasetDict.map.batch_size) determines the number of examples that are processed in one call to the transform function. - [`writer_batch_size`](./package_reference/main_classes#datasets.DatasetDict.map.writer_batch_size) determines the number of processed examples that are kept in memory before they are stored away. Both parameter values default to 1000, which can be expensive if you are storing images. Lower these values to use less memory when you use [`~Dataset.map`]. ## Apply transforms 🤗 Datasets applies data augmentations from any library or package to your dataset. Transforms can be applied on-the-fly on batches of data with [`~Dataset.set_transform`], which consumes less disk space. <Tip> The following example uses [torchvision](https://pytorch.org/vision/stable/index.html), but feel free to use other data augmentation libraries like [Albumentations](https://albumentations.ai/docs/), [Kornia](https://kornia.readthedocs.io/en/latest/), and [imgaug](https://imgaug.readthedocs.io/en/latest/). </Tip> For example, if you'd like to change the color properties of an image randomly: ```py >>> from torchvision.transforms import Compose, ColorJitter, ToTensor >>> jitter = Compose( ... [ ... ColorJitter(brightness=0.25, contrast=0.25, saturation=0.25, hue=0.7), ... ToTensor(), ... ] ... ) ``` Create a function to apply the `ColorJitter` transform: ```py >>> def transforms(examples): ... examples["pixel_values"] = [jitter(image.convert("RGB")) for image in examples["image"]] ... return examples ``` Apply the transform with the [`~Dataset.set_transform`] function: ```py >>> dataset.set_transform(transforms) ```
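Because [`~Dataset.set_transform`] is applied on-the-fly, nothing new is written to disk: the `transforms` function only runs when you access an example. As a quick sanity check (a sketch — the exact tensor shape depends on your images), accessing the first example should now return a `torch.Tensor` produced by `ToTensor`:

```py
>>> sample = dataset[0]
>>> type(sample["pixel_values"])
<class 'torch.Tensor'>
```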
datasets/docs/source/image_process.mdx/0
{ "file_path": "datasets/docs/source/image_process.mdx", "repo_id": "datasets", "token_count": 1031 }
68
# Utilities

## Configure logging

🤗 Datasets strives to be transparent and explicit about how it works, but this can be quite verbose at times.
We have included a series of logging methods which allow you to easily adjust the level of verbosity of the entire library. Currently the default verbosity of the library is set to `WARNING`.

To change the level of verbosity, use one of the direct setters. For instance, here is how to change the verbosity to the `INFO` level:

```py
import datasets
datasets.logging.set_verbosity_info()
```

You can also use the environment variable `DATASETS_VERBOSITY` to override the default verbosity, and set it to one of the following: `debug`, `info`, `warning`, `error`, `critical`:

```bash
DATASETS_VERBOSITY=error ./myprogram.py
```

All the methods of this logging module are documented below. The main ones are:

- [`logging.get_verbosity`] to get the current level of verbosity in the logger
- [`logging.set_verbosity`] to set the verbosity to the level of your choice

In order from the least to the most verbose (with their corresponding `int` values):

1. `logging.CRITICAL` or `logging.FATAL` (int value, 50): only reports the most critical errors.
2. `logging.ERROR` (int value, 40): only reports errors.
3. `logging.WARNING` or `logging.WARN` (int value, 30): only reports errors and warnings. This is the default level used by the library.
4. `logging.INFO` (int value, 20): reports errors, warnings, and basic information.
5. `logging.DEBUG` (int value, 10): reports all information.

[[autodoc]] datasets.logging.get_verbosity

[[autodoc]] datasets.logging.set_verbosity

[[autodoc]] datasets.logging.set_verbosity_info

[[autodoc]] datasets.logging.set_verbosity_warning

[[autodoc]] datasets.logging.set_verbosity_debug

[[autodoc]] datasets.logging.set_verbosity_error

[[autodoc]] datasets.logging.disable_propagation

[[autodoc]] datasets.logging.enable_propagation

## Configure progress bars

By default, `tqdm` progress bars are displayed during dataset download and preprocessing. You can disable them globally by setting the `HF_DATASETS_DISABLE_PROGRESS_BARS` environment variable. You can also enable/disable them using [`~utils.enable_progress_bars`] and [`~utils.disable_progress_bars`]. If set, the environment variable takes priority over the helpers.

[[autodoc]] datasets.utils.enable_progress_bars

[[autodoc]] datasets.utils.disable_progress_bars

[[autodoc]] datasets.utils.are_progress_bars_disabled
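For example, a script that should run quietly can combine the two mechanisms documented above — raise the logging threshold and disable the progress bars (a minimal sketch):

```py
import datasets

datasets.logging.set_verbosity_error()  # only report errors
datasets.utils.disable_progress_bars()  # no tqdm bars during download and preprocessing
```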
datasets/docs/source/package_reference/utilities.mdx/0
{ "file_path": "datasets/docs/source/package_reference/utilities.mdx", "repo_id": "datasets", "token_count": 725 }
69
stages: benchmark_array_xd: cmd: python ./benchmarks/benchmark_array_xd.py deps: - ./benchmarks/benchmark_array_xd.py metrics: - ./benchmarks/results/benchmark_array_xd.json: cache: false benchmark_indices_mapping: cmd: python ./benchmarks/benchmark_indices_mapping.py deps: - ./benchmarks/benchmark_indices_mapping.py metrics: - ./benchmarks/results/benchmark_indices_mapping.json: cache: false benchmark_map_filter: cmd: python ./benchmarks/benchmark_map_filter.py deps: - ./benchmarks/benchmark_map_filter.py metrics: - ./benchmarks/results/benchmark_map_filter.json: cache: false benchmark_iterating: cmd: python ./benchmarks/benchmark_iterating.py deps: - ./benchmarks/benchmark_iterating.py metrics: - ./benchmarks/results/benchmark_iterating.json: cache: false benchmark_getitem_100B: cmd: python ./benchmarks/benchmark_getitem_100B.py deps: - ./benchmarks/benchmark_getitem_100B.py metrics: - ./benchmarks/results/benchmark_getitem_100B.json: cache: false
datasets/dvc.yaml/0
{ "file_path": "datasets/dvc.yaml", "repo_id": "datasets", "token_count": 456 }
70
# Metric Card for COMET

## Metric description

Crosslingual Optimized Metric for Evaluation of Translation (COMET) is an open-source framework used to train Machine Translation metrics that achieve high levels of correlation with different types of human judgments.

## How to use

COMET takes 3 lists of strings as input: `sources` (a list of source sentences), `predictions` (a list of candidate translations) and `references` (a list of reference translations).

```python
from datasets import load_metric
comet_metric = load_metric('comet')
source = ["Dem Feuer konnte Einhalt geboten werden", "Schulen und Kindergärten wurden eröffnet."]
hypothesis = ["The fire could be stopped", "Schools and kindergartens were open"]
reference = ["They were able to control the fire.", "Schools and kindergartens opened"]
comet_score = comet_metric.compute(predictions=hypothesis, references=reference, sources=source)
```

It has several configurations, named after the COMET model to be used. It defaults to `wmt20-comet-da` (previously known as `wmt-large-da-estimator-1719`). Alternate models that can be chosen include `wmt20-comet-qe-da`, `wmt21-comet-mqm`, `wmt21-cometinho-da`, `wmt21-comet-qe-mqm` and `emnlp20-comet-rank`.

It also has several optional arguments:

`gpus`: optional, an integer (number of GPUs to train on) or a list of integers (which GPUs to train on). Set to 0 to use CPU. The default value is `None` (uses one GPU if possible, else uses CPU).

`progress_bar`: a boolean -- if set to `True`, progress updates will be printed out. The default value is `False`.

More information about model characteristics can be found on the [COMET website](https://unbabel.github.io/COMET/html/models.html).

## Output values

The COMET metric outputs a dictionary with two fields:

`scores`: a list of COMET scores for each of the input sentences, ranging from 0-1.

`mean_score`: the mean value of the COMET `scores` over all the input sentences, ranging from 0-1.

### Values from popular papers

The [original COMET paper](https://arxiv.org/pdf/2009.09025.pdf) reported average COMET scores ranging from 0.4 to 0.6, depending on the language pairs used for evaluating translation models. They also illustrate that COMET correlates well with human judgements compared to other metrics such as [BLEU](https://huggingface.co/metrics/bleu) and [CHRF](https://huggingface.co/metrics/chrf).
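Before turning to the worked examples below, here is a small sketch of how these two fields are typically read, reusing the variables from the snippet in the *How to use* section (the `gpus` and `progress_bar` arguments are optional, and the printed numbers are purely illustrative):

```python
comet_score = comet_metric.compute(
    predictions=hypothesis, references=reference, sources=source,
    gpus=0,             # force CPU
    progress_bar=True,  # print progress updates
)
print(round(comet_score["mean_score"], 2))           # a single float, e.g. 0.55
print([round(s, 2) for s in comet_score["scores"]])  # one score per input sentence
```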
## Examples Full match: ```python from datasets import load_metric comet_metric = load_metric('comet') source = ["Dem Feuer konnte Einhalt geboten werden", "Schulen und Kindergärten wurden eröffnet."] hypothesis = ["They were able to control the fire.", "Schools and kindergartens opened"] reference = ["They were able to control the fire.", "Schools and kindergartens opened"] results = comet_metric.compute(predictions=hypothesis, references=reference, sources=source) print([round(v, 1) for v in results["scores"]]) [1.0, 1.0] ``` Partial match: ```python from datasets import load_metric comet_metric = load_metric('comet') source = ["Dem Feuer konnte Einhalt geboten werden", "Schulen und Kindergärten wurden eröffnet."] hypothesis = ["The fire could be stopped", "Schools and kindergartens were open"] reference = ["They were able to control the fire", "Schools and kindergartens opened"] results = comet_metric.compute(predictions=hypothesis, references=reference, sources=source) print([round(v, 2) for v in results["scores"]]) [0.19, 0.92] ``` No match: ```python from datasets import load_metric comet_metric = load_metric('comet') source = ["Dem Feuer konnte Einhalt geboten werden", "Schulen und Kindergärten wurden eröffnet."] hypothesis = ["The girl went for a walk", "The boy was sleeping"] reference = ["They were able to control the fire", "Schools and kindergartens opened"] results = comet_metric.compute(predictions=hypothesis, references=reference, sources=source) print([round(v, 2) for v in results["scores"]]) [0.00, 0.00] ``` ## Limitations and bias The models provided for calculating the COMET metric are built on top of XLM-R and cover the following languages: Afrikaans, Albanian, Amharic, Arabic, Armenian, Assamese, Azerbaijani, Basque, Belarusian, Bengali, Bengali Romanized, Bosnian, Breton, Bulgarian, Burmese, Burmese, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Hausa, Hebrew, Hindi, Hindi Romanized, Hungarian, Icelandic, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish (Kurmanji), Kyrgyz, Lao, Latin, Latvian, Lithuanian, Macedonian, Malagasy, Malay, Malayalam, Marathi, Mongolian, Nepali, Norwegian, Oriya, Oromo, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Sanskri, Scottish, Gaelic, Serbian, Sindhi, Sinhala, Slovak, Slovenian, Somali, Spanish, Sundanese, Swahili, Swedish, Tamil, Tamil Romanized, Telugu, Telugu Romanized, Thai, Turkish, Ukrainian, Urdu, Urdu Romanized, Uyghur, Uzbek, Vietnamese, Welsh, Western, Frisian, Xhosa, Yiddish. Thus, results for language pairs containing uncovered languages are unreliable, as per the [COMET website](https://github.com/Unbabel/COMET) Also, calculating the COMET metric involves downloading the model from which features are obtained -- the default model, `wmt20-comet-da`, takes over 1.79GB of storage space and downloading it can take a significant amount of time depending on the speed of your internet connection. If this is an issue, choose a smaller model; for instance `wmt21-cometinho-da` is 344MB. 
## Citation

```bibtex
@inproceedings{rei-EtAl:2020:WMT,
  author    = {Rei, Ricardo and Stewart, Craig and Farinha, Ana C and Lavie, Alon},
  title     = {Unbabel's Participation in the WMT20 Metrics Shared Task},
  booktitle = {Proceedings of the Fifth Conference on Machine Translation},
  month     = {November},
  year      = {2020},
  address   = {Online},
  publisher = {Association for Computational Linguistics},
  pages     = {909--918},
}
```

```bibtex
@inproceedings{rei-etal-2020-comet,
    title = "{COMET}: A Neural Framework for {MT} Evaluation",
    author = "Rei, Ricardo and Stewart, Craig and Farinha, Ana C and Lavie, Alon",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.emnlp-main.213",
    pages = "2685--2702",
}
```

## Further References

- [COMET website](https://unbabel.github.io/COMET/html/index.html)
- [Hugging Face Tasks - Machine Translation](https://huggingface.co/tasks/translation)
datasets/metrics/comet/README.md/0
{ "file_path": "datasets/metrics/comet/README.md", "repo_id": "datasets", "token_count": 2148 }
71
# Copyright 2020 The HuggingFace Datasets Authors. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """GLUE benchmark metric.""" from scipy.stats import pearsonr, spearmanr from sklearn.metrics import f1_score, matthews_corrcoef import datasets _CITATION = """\ @inproceedings{wang2019glue, title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding}, author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.}, note={In the Proceedings of ICLR.}, year={2019} } """ _DESCRIPTION = """\ GLUE, the General Language Understanding Evaluation benchmark (https://gluebenchmark.com/) is a collection of resources for training, evaluating, and analyzing natural language understanding systems. """ _KWARGS_DESCRIPTION = """ Compute GLUE evaluation metric associated to each GLUE dataset. Args: predictions: list of predictions to score. Each translation should be tokenized into a list of tokens. references: list of lists of references for each translation. Each reference should be tokenized into a list of tokens. Returns: depending on the GLUE subset, one or several of: "accuracy": Accuracy "f1": F1 score "pearson": Pearson Correlation "spearmanr": Spearman Correlation "matthews_correlation": Matthew Correlation Examples: >>> glue_metric = datasets.load_metric('glue', 'sst2') # 'sst2' or any of ["mnli", "mnli_mismatched", "mnli_matched", "qnli", "rte", "wnli", "hans"] >>> references = [0, 1] >>> predictions = [0, 1] >>> results = glue_metric.compute(predictions=predictions, references=references) >>> print(results) {'accuracy': 1.0} >>> glue_metric = datasets.load_metric('glue', 'mrpc') # 'mrpc' or 'qqp' >>> references = [0, 1] >>> predictions = [0, 1] >>> results = glue_metric.compute(predictions=predictions, references=references) >>> print(results) {'accuracy': 1.0, 'f1': 1.0} >>> glue_metric = datasets.load_metric('glue', 'stsb') >>> references = [0., 1., 2., 3., 4., 5.] >>> predictions = [0., 1., 2., 3., 4., 5.] 
>>> results = glue_metric.compute(predictions=predictions, references=references) >>> print({"pearson": round(results["pearson"], 2), "spearmanr": round(results["spearmanr"], 2)}) {'pearson': 1.0, 'spearmanr': 1.0} >>> glue_metric = datasets.load_metric('glue', 'cola') >>> references = [0, 1] >>> predictions = [0, 1] >>> results = glue_metric.compute(predictions=predictions, references=references) >>> print(results) {'matthews_correlation': 1.0} """ def simple_accuracy(preds, labels): return float((preds == labels).mean()) def acc_and_f1(preds, labels): acc = simple_accuracy(preds, labels) f1 = float(f1_score(y_true=labels, y_pred=preds)) return { "accuracy": acc, "f1": f1, } def pearson_and_spearman(preds, labels): pearson_corr = float(pearsonr(preds, labels)[0]) spearman_corr = float(spearmanr(preds, labels)[0]) return { "pearson": pearson_corr, "spearmanr": spearman_corr, } @datasets.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION) class Glue(datasets.Metric): def _info(self): if self.config_name not in [ "sst2", "mnli", "mnli_mismatched", "mnli_matched", "cola", "stsb", "mrpc", "qqp", "qnli", "rte", "wnli", "hans", ]: raise KeyError( "You should supply a configuration name selected in " '["sst2", "mnli", "mnli_mismatched", "mnli_matched", ' '"cola", "stsb", "mrpc", "qqp", "qnli", "rte", "wnli", "hans"]' ) return datasets.MetricInfo( description=_DESCRIPTION, citation=_CITATION, inputs_description=_KWARGS_DESCRIPTION, features=datasets.Features( { "predictions": datasets.Value("int64" if self.config_name != "stsb" else "float32"), "references": datasets.Value("int64" if self.config_name != "stsb" else "float32"), } ), codebase_urls=[], reference_urls=[], format="numpy", ) def _compute(self, predictions, references): if self.config_name == "cola": return {"matthews_correlation": matthews_corrcoef(references, predictions)} elif self.config_name == "stsb": return pearson_and_spearman(predictions, references) elif self.config_name in ["mrpc", "qqp"]: return acc_and_f1(predictions, references) elif self.config_name in ["sst2", "mnli", "mnli_mismatched", "mnli_matched", "qnli", "rte", "wnli", "hans"]: return {"accuracy": simple_accuracy(predictions, references)} else: raise KeyError( "You should supply a configuration name selected in " '["sst2", "mnli", "mnli_mismatched", "mnli_matched", ' '"cola", "stsb", "mrpc", "qqp", "qnli", "rte", "wnli", "hans"]' )
datasets/metrics/glue/glue.py/0
{ "file_path": "datasets/metrics/glue/glue.py", "repo_id": "datasets", "token_count": 2408 }
72
# Copyright 2020 The HuggingFace Datasets Authors. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """METEOR metric.""" import importlib.metadata import numpy as np from nltk.translate import meteor_score from packaging import version import datasets NLTK_VERSION = version.parse(importlib.metadata.version("nltk")) if NLTK_VERSION >= version.Version("3.6.4"): from nltk import word_tokenize _CITATION = """\ @inproceedings{banarjee2005, title = {{METEOR}: An Automatic Metric for {MT} Evaluation with Improved Correlation with Human Judgments}, author = {Banerjee, Satanjeev and Lavie, Alon}, booktitle = {Proceedings of the {ACL} Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization}, month = jun, year = {2005}, address = {Ann Arbor, Michigan}, publisher = {Association for Computational Linguistics}, url = {https://www.aclweb.org/anthology/W05-0909}, pages = {65--72}, } """ _DESCRIPTION = """\ METEOR, an automatic metric for machine translation evaluation that is based on a generalized concept of unigram matching between the machine-produced translation and human-produced reference translations. Unigrams can be matched based on their surface forms, stemmed forms, and meanings; furthermore, METEOR can be easily extended to include more advanced matching strategies. Once all generalized unigram matches between the two strings have been found, METEOR computes a score for this matching using a combination of unigram-precision, unigram-recall, and a measure of fragmentation that is designed to directly capture how well-ordered the matched words in the machine translation are in relation to the reference. METEOR gets an R correlation value of 0.347 with human evaluation on the Arabic data and 0.331 on the Chinese data. This is shown to be an improvement on using simply unigram-precision, unigram-recall and their harmonic F1 combination. """ _KWARGS_DESCRIPTION = """ Computes METEOR score of translated segments against one or more references. Args: predictions: list of predictions to score. Each prediction should be a string with tokens separated by spaces. references: list of reference for each prediction. Each reference should be a string with tokens separated by spaces. alpha: Parameter for controlling relative weights of precision and recall. default: 0.9 beta: Parameter for controlling shape of penalty as a function of fragmentation. default: 3 gamma: Relative weight assigned to fragmentation penalty. default: 0.5 Returns: 'meteor': meteor score. 
Examples: >>> meteor = datasets.load_metric('meteor') >>> predictions = ["It is a guide to action which ensures that the military always obeys the commands of the party"] >>> references = ["It is a guide to action that ensures that the military will forever heed Party commands"] >>> results = meteor.compute(predictions=predictions, references=references) >>> print(round(results["meteor"], 4)) 0.6944 """ @datasets.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION) class Meteor(datasets.Metric): def _info(self): return datasets.MetricInfo( description=_DESCRIPTION, citation=_CITATION, inputs_description=_KWARGS_DESCRIPTION, features=datasets.Features( { "predictions": datasets.Value("string", id="sequence"), "references": datasets.Value("string", id="sequence"), } ), codebase_urls=["https://github.com/nltk/nltk/blob/develop/nltk/translate/meteor_score.py"], reference_urls=[ "https://www.nltk.org/api/nltk.translate.html#module-nltk.translate.meteor_score", "https://en.wikipedia.org/wiki/METEOR", ], ) def _download_and_prepare(self, dl_manager): import nltk nltk.download("wordnet") if NLTK_VERSION >= version.Version("3.6.5"): nltk.download("punkt") if NLTK_VERSION >= version.Version("3.6.6"): nltk.download("omw-1.4") def _compute(self, predictions, references, alpha=0.9, beta=3, gamma=0.5): if NLTK_VERSION >= version.Version("3.6.5"): scores = [ meteor_score.single_meteor_score( word_tokenize(ref), word_tokenize(pred), alpha=alpha, beta=beta, gamma=gamma ) for ref, pred in zip(references, predictions) ] else: scores = [ meteor_score.single_meteor_score(ref, pred, alpha=alpha, beta=beta, gamma=gamma) for ref, pred in zip(references, predictions) ] return {"meteor": np.mean(scores)}
datasets/metrics/meteor/meteor.py/0
{ "file_path": "datasets/metrics/meteor/meteor.py", "repo_id": "datasets", "token_count": 1898 }
73
# Copyright 2020 The HuggingFace Datasets Authors. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """SACREBLEU metric.""" import sacrebleu as scb from packaging import version import datasets _CITATION = """\ @inproceedings{post-2018-call, title = "A Call for Clarity in Reporting {BLEU} Scores", author = "Post, Matt", booktitle = "Proceedings of the Third Conference on Machine Translation: Research Papers", month = oct, year = "2018", address = "Belgium, Brussels", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/W18-6319", pages = "186--191", } """ _DESCRIPTION = """\ SacreBLEU provides hassle-free computation of shareable, comparable, and reproducible BLEU scores. Inspired by Rico Sennrich's `multi-bleu-detok.perl`, it produces the official WMT scores but works with plain text. It also knows all the standard test sets and handles downloading, processing, and tokenization for you. See the [README.md] file at https://github.com/mjpost/sacreBLEU for more information. """ _KWARGS_DESCRIPTION = """ Produces BLEU scores along with its sufficient statistics from a source against one or more references. Args: predictions (`list` of `str`): list of translations to score. Each translation should be tokenized into a list of tokens. references (`list` of `list` of `str`): A list of lists of references. The contents of the first sub-list are the references for the first prediction, the contents of the second sub-list are for the second prediction, etc. Note that there must be the same number of references for each prediction (i.e. all sub-lists must be of the same length). smooth_method (`str`): The smoothing method to use, defaults to `'exp'`. Possible values are: - `'none'`: no smoothing - `'floor'`: increment zero counts - `'add-k'`: increment num/denom by k for n>1 - `'exp'`: exponential decay smooth_value (`float`): The smoothing value. Only valid when `smooth_method='floor'` (in which case `smooth_value` defaults to `0.1`) or `smooth_method='add-k'` (in which case `smooth_value` defaults to `1`). tokenize (`str`): Tokenization method to use for BLEU. If not provided, defaults to `'zh'` for Chinese, `'ja-mecab'` for Japanese and `'13a'` (mteval) otherwise. Possible values are: - `'none'`: No tokenization. - `'zh'`: Chinese tokenization. - `'13a'`: mimics the `mteval-v13a` script from Moses. - `'intl'`: International tokenization, mimics the `mteval-v14` script from Moses - `'char'`: Language-agnostic character-level tokenization. - `'ja-mecab'`: Japanese tokenization. Uses the [MeCab tokenizer](https://pypi.org/project/mecab-python3). lowercase (`bool`): If `True`, lowercases the input, enabling case-insensitivity. Defaults to `False`. force (`bool`): If `True`, insists that your tokenized input is actually detokenized. Defaults to `False`. use_effective_order (`bool`): If `True`, stops including n-gram orders for which precision is 0. This should be `True`, if sentence-level BLEU will be computed. Defaults to `False`. 
Returns: 'score': BLEU score, 'counts': Counts, 'totals': Totals, 'precisions': Precisions, 'bp': Brevity penalty, 'sys_len': predictions length, 'ref_len': reference length, Examples: Example 1: >>> predictions = ["hello there general kenobi", "foo bar foobar"] >>> references = [["hello there general kenobi", "hello there !"], ["foo bar foobar", "foo bar foobar"]] >>> sacrebleu = datasets.load_metric("sacrebleu") >>> results = sacrebleu.compute(predictions=predictions, references=references) >>> print(list(results.keys())) ['score', 'counts', 'totals', 'precisions', 'bp', 'sys_len', 'ref_len'] >>> print(round(results["score"], 1)) 100.0 Example 2: >>> predictions = ["hello there general kenobi", ... "on our way to ankh morpork"] >>> references = [["hello there general kenobi", "hello there !"], ... ["goodbye ankh morpork", "ankh morpork"]] >>> sacrebleu = datasets.load_metric("sacrebleu") >>> results = sacrebleu.compute(predictions=predictions, ... references=references) >>> print(list(results.keys())) ['score', 'counts', 'totals', 'precisions', 'bp', 'sys_len', 'ref_len'] >>> print(round(results["score"], 1)) 39.8 """ @datasets.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION) class Sacrebleu(datasets.Metric): def _info(self): if version.parse(scb.__version__) < version.parse("1.4.12"): raise ImportWarning( "To use `sacrebleu`, the module `sacrebleu>=1.4.12` is required, and the current version of `sacrebleu` doesn't match this condition.\n" 'You can install it with `pip install "sacrebleu>=1.4.12"`.' ) return datasets.MetricInfo( description=_DESCRIPTION, citation=_CITATION, homepage="https://github.com/mjpost/sacreBLEU", inputs_description=_KWARGS_DESCRIPTION, features=datasets.Features( { "predictions": datasets.Value("string", id="sequence"), "references": datasets.Sequence(datasets.Value("string", id="sequence"), id="references"), } ), codebase_urls=["https://github.com/mjpost/sacreBLEU"], reference_urls=[ "https://github.com/mjpost/sacreBLEU", "https://en.wikipedia.org/wiki/BLEU", "https://towardsdatascience.com/evaluating-text-output-in-nlp-bleu-at-your-own-risk-e8609665a213", ], ) def _compute( self, predictions, references, smooth_method="exp", smooth_value=None, force=False, lowercase=False, tokenize=None, use_effective_order=False, ): references_per_prediction = len(references[0]) if any(len(refs) != references_per_prediction for refs in references): raise ValueError("Sacrebleu requires the same number of references for each prediction") transformed_references = [[refs[i] for refs in references] for i in range(references_per_prediction)] output = scb.corpus_bleu( predictions, transformed_references, smooth_method=smooth_method, smooth_value=smooth_value, force=force, lowercase=lowercase, use_effective_order=use_effective_order, **({"tokenize": tokenize} if tokenize else {}), ) output_dict = { "score": output.score, "counts": output.counts, "totals": output.totals, "precisions": output.precisions, "bp": output.bp, "sys_len": output.sys_len, "ref_len": output.ref_len, } return output_dict
datasets/metrics/sacrebleu/sacrebleu.py/0
{ "file_path": "datasets/metrics/sacrebleu/sacrebleu.py", "repo_id": "datasets", "token_count": 3057 }
74
# Metric Card for TER ## Metric Description TER (Translation Edit Rate, also called Translation Error Rate) is a metric to quantify the edit operations that a hypothesis requires to match a reference translation. We use the implementation that is already present in [sacrebleu](https://github.com/mjpost/sacreBLEU#ter), which in turn is inspired by the [TERCOM implementation](https://github.com/jhclark/tercom). The implementation here is slightly different from sacrebleu in terms of the required input format. The length of the references and hypotheses lists need to be the same, so you may need to transpose your references compared to sacrebleu's required input format. See [this github issue](https://github.com/huggingface/datasets/issues/3154#issuecomment-950746534). See the README.md file at https://github.com/mjpost/sacreBLEU#ter for more information. ## How to Use This metric takes, at minimum, predicted sentences and reference sentences: ```python >>> predictions = ["does this sentence match??", ... "what about this sentence?", ... "What did the TER metric user say to the developer?"] >>> references = [["does this sentence match", "does this sentence match!?!"], ... ["wHaT aBoUt ThIs SeNtEnCe?", "wHaT aBoUt ThIs SeNtEnCe?"], ... ["Your jokes are...", "...TERrible"]] >>> ter = datasets.load_metric("ter") >>> results = ter.compute(predictions=predictions, ... references=references, ... case_sensitive=True) >>> print(results) {'score': 150.0, 'num_edits': 15, 'ref_length': 10.0} ``` ### Inputs This metric takes the following as input: - **`predictions`** (`list` of `str`): The system stream (a sequence of segments). - **`references`** (`list` of `list` of `str`): A list of one or more reference streams (each a sequence of segments). - **`normalized`** (`boolean`): If `True`, applies basic tokenization and normalization to sentences. Defaults to `False`. - **`ignore_punct`** (`boolean`): If `True`, applies basic tokenization and normalization to sentences. Defaults to `False`. - **`support_zh_ja_chars`** (`boolean`): If `True`, tokenization/normalization supports processing of Chinese characters, as well as Japanese Kanji, Hiragana, Katakana, and Phonetic Extensions of Katakana. Only applies if `normalized = True`. Defaults to `False`. - **`case_sensitive`** (`boolean`): If `False`, makes all predictions and references lowercase to ignore differences in case. Defaults to `False`. ### Output Values This metric returns the following: - **`score`** (`float`): TER score (num_edits / sum_ref_lengths * 100) - **`num_edits`** (`int`): The cumulative number of edits - **`ref_length`** (`float`): The cumulative average reference length The output takes the following form: ```python {'score': ter_score, 'num_edits': num_edits, 'ref_length': ref_length} ``` The metric can take on any value `0` and above. `0` is a perfect score, meaning the predictions exactly match the references and no edits were necessary. Higher scores are worse. Scores above 100 mean that the cumulative number of edits, `num_edits`, is higher than the cumulative length of the references, `ref_length`. #### Values from Popular Papers ### Examples Basic example with only predictions and references as inputs: ```python >>> predictions = ["does this sentence match??", ... "what about this sentence?"] >>> references = [["does this sentence match", "does this sentence match!?!"], ... ["wHaT aBoUt ThIs SeNtEnCe?", "wHaT aBoUt ThIs SeNtEnCe?"]] >>> ter = datasets.load_metric("ter") >>> results = ter.compute(predictions=predictions, ... 
references=references, ... case_sensitive=True) >>> print(results) {'score': 62.5, 'num_edits': 5, 'ref_length': 8.0} ``` Example with `normalization = True`: ```python >>> predictions = ["does this sentence match??", ... "what about this sentence?"] >>> references = [["does this sentence match", "does this sentence match!?!"], ... ["wHaT aBoUt ThIs SeNtEnCe?", "wHaT aBoUt ThIs SeNtEnCe?"]] >>> ter = datasets.load_metric("ter") >>> results = ter.compute(predictions=predictions, ... references=references, ... normalized=True, ... case_sensitive=True) >>> print(results) {'score': 57.14285714285714, 'num_edits': 6, 'ref_length': 10.5} ``` Example ignoring punctuation and capitalization, and everything matches: ```python >>> predictions = ["does this sentence match??", ... "what about this sentence?"] >>> references = [["does this sentence match", "does this sentence match!?!"], ... ["wHaT aBoUt ThIs SeNtEnCe?", "wHaT aBoUt ThIs SeNtEnCe?"]] >>> ter = datasets.load_metric("ter") >>> results = ter.compute(predictions=predictions, ... references=references, ... ignore_punct=True, ... case_sensitive=False) >>> print(results) {'score': 0.0, 'num_edits': 0, 'ref_length': 8.0} ``` Example ignoring punctuation and capitalization, but with an extra (incorrect) sample: ```python >>> predictions = ["does this sentence match??", ... "what about this sentence?", ... "What did the TER metric user say to the developer?"] >>> references = [["does this sentence match", "does this sentence match!?!"], ... ["wHaT aBoUt ThIs SeNtEnCe?", "wHaT aBoUt ThIs SeNtEnCe?"], ... ["Your jokes are...", "...TERrible"]] >>> ter = datasets.load_metric("ter") >>> results = ter.compute(predictions=predictions, ... references=references, ... ignore_punct=True, ... case_sensitive=False) >>> print(results) {'score': 100.0, 'num_edits': 10, 'ref_length': 10.0} ``` ## Limitations and Bias ## Citation ```bibtex @inproceedings{snover-etal-2006-study, title = "A Study of Translation Edit Rate with Targeted Human Annotation", author = "Snover, Matthew and Dorr, Bonnie and Schwartz, Rich and Micciulla, Linnea and Makhoul, John", booktitle = "Proceedings of the 7th Conference of the Association for Machine Translation in the Americas: Technical Papers", month = aug # " 8-12", year = "2006", address = "Cambridge, Massachusetts, USA", publisher = "Association for Machine Translation in the Americas", url = "https://aclanthology.org/2006.amta-papers.25", pages = "223--231", } @inproceedings{post-2018-call, title = "A Call for Clarity in Reporting {BLEU} Scores", author = "Post, Matt", booktitle = "Proceedings of the Third Conference on Machine Translation: Research Papers", month = oct, year = "2018", address = "Belgium, Brussels", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/W18-6319", pages = "186--191", } ``` ## Further References - See [the sacreBLEU github repo](https://github.com/mjpost/sacreBLEU#ter) for more information.
datasets/metrics/ter/README.md/0
{ "file_path": "datasets/metrics/ter/README.md", "repo_id": "datasets", "token_count": 2596 }
75
# Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Lint as: python3 """Arrow ArrowReader.""" import copy import math import os import re import shutil from dataclasses import dataclass from functools import partial from pathlib import Path from typing import TYPE_CHECKING, List, Optional, Union import pyarrow as pa import pyarrow.parquet as pq from tqdm.contrib.concurrent import thread_map from .download.download_config import DownloadConfig from .naming import _split_re, filenames_for_dataset_split from .table import InMemoryTable, MemoryMappedTable, Table, concat_tables from .utils import logging from .utils import tqdm as hf_tqdm from .utils.deprecation_utils import deprecated from .utils.file_utils import cached_path if TYPE_CHECKING: from .info import DatasetInfo # noqa: F401 from .splits import Split, SplitInfo # noqa: F401 logger = logging.get_logger(__name__) HF_GCP_BASE_URL = "https://storage.googleapis.com/huggingface-nlp/cache/datasets" _SUB_SPEC_RE = re.compile( rf""" ^ (?P<split>{_split_re[1:-1]}) (\[ ((?P<from>-?\d+) (?P<from_pct>%)?)? : ((?P<to>-?\d+) (?P<to_pct>%)?)? \])?(\((?P<rounding>[^\)]*)\))? $ """, # remove ^ and $ re.X, ) _ADDITION_SEP_RE = re.compile(r"\s*\+\s*") class DatasetNotOnHfGcsError(ConnectionError): """When you can't get the dataset from the Hf google cloud storage""" pass class MissingFilesOnHfGcsError(ConnectionError): """When some files are missing on the Hf oogle cloud storage""" pass @dataclass(frozen=True) class FileInstructions: """The file instructions associated with a split ReadInstruction. Attributes: num_examples: `int`, The total number of examples file_instructions: List[dict(filename, skip, take)], the files information. The filenames contains the relative path, not absolute. skip/take indicates which example read in the file: `ds.slice(skip, take)` """ num_examples: int file_instructions: List[dict] def make_file_instructions( name: str, split_infos: List["SplitInfo"], instruction: Union[str, "ReadInstruction"], filetype_suffix: Optional[str] = None, prefix_path: Optional[str] = None, ) -> FileInstructions: """Returns instructions of the split dict. Args: name (`str`): Name of the dataset. split_infos (`list` of `[SplitInfo]`): Dataset splits information. instruction ([`ReadInstruction`] or `str`): Reading instruction for a dataset. filetype_suffix (`str`, *optional*): Suffix of dataset files, e.g. 'arrow' or 'parquet'. prefix_path (`str`, *optional*): Prefix of dataset files, e.g. directory name. 
Returns: [`FileInstructions`] """ if not isinstance(name, str): raise TypeError(f"Expected str 'name', but got: {type(name).__name__}") elif not name: raise ValueError("Expected non-empty str 'name'") name2len = {info.name: info.num_examples for info in split_infos} name2shard_lengths = {info.name: info.shard_lengths for info in split_infos} name2filenames = { info.name: filenames_for_dataset_split( path=prefix_path, dataset_name=name, split=info.name, filetype_suffix=filetype_suffix, shard_lengths=name2shard_lengths[info.name], ) for info in split_infos } if not isinstance(instruction, ReadInstruction): instruction = ReadInstruction.from_spec(instruction) # Create the absolute instruction (per split) absolute_instructions = instruction.to_absolute(name2len) # For each split, return the files instruction (skip/take) file_instructions = [] num_examples = 0 for abs_instr in absolute_instructions: split_length = name2len[abs_instr.splitname] filenames = name2filenames[abs_instr.splitname] shard_lengths = name2shard_lengths[abs_instr.splitname] from_ = 0 if abs_instr.from_ is None else abs_instr.from_ to = split_length if abs_instr.to is None else abs_instr.to if shard_lengths is None: # not sharded for filename in filenames: take = to - from_ if take == 0: continue num_examples += take file_instructions.append({"filename": filename, "skip": from_, "take": take}) else: # sharded index_start = 0 # Beginning (included) of moving window. index_end = 0 # End (excluded) of moving window. for filename, shard_length in zip(filenames, shard_lengths): index_end += shard_length if from_ < index_end and to > index_start: # There is something to take. skip = from_ - index_start if from_ > index_start else 0 take = to - index_start - skip if to < index_end else -1 if take == 0: continue file_instructions.append({"filename": filename, "skip": skip, "take": take}) num_examples += shard_length - skip if take == -1 else take index_start += shard_length return FileInstructions( num_examples=num_examples, file_instructions=file_instructions, ) class BaseReader: """ Build a Dataset object out of Instruction instance(s). """ def __init__(self, path: str, info: Optional["DatasetInfo"]): """Initializes ArrowReader. Args: path (str): path where tfrecords are stored. info (DatasetInfo): info about the dataset. """ self._path: str = path self._info: Optional["DatasetInfo"] = info self._filetype_suffix: Optional[str] = None def _get_table_from_filename(self, filename_skip_take, in_memory=False) -> Table: """Returns a Dataset instance from given (filename, skip, take).""" raise NotImplementedError def _read_files(self, files, in_memory=False) -> Table: """Returns Dataset for given file instructions. Args: files: List[dict(filename, skip, take)], the files information. The filenames contain the absolute path, not relative. skip/take indicates which example read in the file: `ds.slice(skip, take)` in_memory (bool, default False): Whether to copy the data in-memory. 
""" if len(files) == 0 or not all(isinstance(f, dict) for f in files): raise ValueError("please provide valid file informations") files = copy.deepcopy(files) for f in files: f["filename"] = os.path.join(self._path, f["filename"]) pa_tables = thread_map( partial(self._get_table_from_filename, in_memory=in_memory), files, tqdm_class=hf_tqdm, desc="Loading dataset shards", # set `disable=None` rather than `disable=False` by default to disable progress bar when no TTY attached disable=len(files) <= 16 or None, ) pa_tables = [t for t in pa_tables if len(t) > 0] if not pa_tables and (self._info is None or self._info.features is None): raise ValueError( "Tried to read an empty table. Please specify at least info.features to create an empty table with the right type." ) pa_tables = pa_tables or [InMemoryTable.from_batches([], schema=pa.schema(self._info.features.type))] pa_table = concat_tables(pa_tables) if len(pa_tables) != 1 else pa_tables[0] return pa_table def get_file_instructions(self, name, instruction, split_infos): """Return list of dict {'filename': str, 'skip': int, 'take': int}""" file_instructions = make_file_instructions( name, split_infos, instruction, filetype_suffix=self._filetype_suffix, prefix_path=self._path ) files = file_instructions.file_instructions return files def read( self, name, instructions, split_infos, in_memory=False, ): """Returns Dataset instance(s). Args: name (str): name of the dataset. instructions (ReadInstruction): instructions to read. Instruction can be string and will then be passed to the Instruction constructor as it. split_infos (list of SplitInfo proto): the available splits for dataset. in_memory (bool, default False): Whether to copy the data in-memory. Returns: kwargs to build a single Dataset instance. """ files = self.get_file_instructions(name, instructions, split_infos) if not files: msg = f'Instruction "{instructions}" corresponds to no data!' raise ValueError(msg) return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory) def read_files( self, files: List[dict], original_instructions: Union[None, "ReadInstruction", "Split"] = None, in_memory=False, ): """Returns single Dataset instance for the set of file instructions. Args: files: List[dict(filename, skip, take)], the files information. The filenames contains the relative path, not absolute. skip/take indicates which example read in the file: `ds.skip().take()` original_instructions: store the original instructions used to build the dataset split in the dataset. in_memory (bool, default False): Whether to copy the data in-memory. Returns: kwargs to build a Dataset instance. """ # Prepend path to filename pa_table = self._read_files(files, in_memory=in_memory) # If original_instructions is not None, convert it to a human-readable NamedSplit if original_instructions is not None: from .splits import Split # noqa split = Split(str(original_instructions)) else: split = None dataset_kwargs = {"arrow_table": pa_table, "info": self._info, "split": split} return dataset_kwargs @deprecated() def download_from_hf_gcs(self, download_config: DownloadConfig, relative_data_dir): """ Download the dataset files from the Hf GCS Args: dl_cache_dir: `str`, the local cache directory used to download files relative_data_dir: `str`, the relative directory of the remote files from the `datasets` directory on GCS. 
""" remote_cache_dir = HF_GCP_BASE_URL + "/" + relative_data_dir.replace(os.sep, "/") try: remote_dataset_info = os.path.join(remote_cache_dir, "dataset_info.json") downloaded_dataset_info = cached_path( remote_dataset_info.replace(os.sep, "/"), download_config=download_config ) shutil.move(downloaded_dataset_info, os.path.join(self._path, "dataset_info.json")) if self._info is not None: self._info.update(self._info.from_directory(self._path)) except FileNotFoundError as err: raise DatasetNotOnHfGcsError(err) from None try: for split in self._info.splits: file_instructions = self.get_file_instructions( name=self._info.builder_name, instruction=split, split_infos=self._info.splits.values(), ) for file_instruction in file_instructions: file_to_download = str(Path(file_instruction["filename"]).relative_to(self._path)) remote_prepared_filename = os.path.join(remote_cache_dir, file_to_download) downloaded_prepared_filename = cached_path( remote_prepared_filename.replace(os.sep, "/"), download_config=download_config ) shutil.move(downloaded_prepared_filename, file_instruction["filename"]) except FileNotFoundError as err: raise MissingFilesOnHfGcsError(err) from None class ArrowReader(BaseReader): """ Build a Dataset object out of Instruction instance(s). This Reader uses either memory mapping or file descriptors (in-memory) on arrow files. """ def __init__(self, path: str, info: Optional["DatasetInfo"]): """Initializes ArrowReader. Args: path (str): path where Arrow files are stored. info (DatasetInfo): info about the dataset. """ super().__init__(path, info) self._filetype_suffix = "arrow" def _get_table_from_filename(self, filename_skip_take, in_memory=False) -> Table: """Returns a Dataset instance from given (filename, skip, take).""" filename, skip, take = ( filename_skip_take["filename"], filename_skip_take["skip"] if "skip" in filename_skip_take else None, filename_skip_take["take"] if "take" in filename_skip_take else None, ) table = ArrowReader.read_table(filename, in_memory=in_memory) if take == -1: take = len(table) - skip # here we don't want to slice an empty table, or it may segfault if skip is not None and take is not None and not (skip == 0 and take == len(table)): table = table.slice(skip, take) return table @staticmethod def read_table(filename, in_memory=False) -> Table: """ Read table from file. Args: filename (str): File name of the table. in_memory (bool, default=False): Whether to copy the data in-memory. Returns: pyarrow.Table """ table_cls = InMemoryTable if in_memory else MemoryMappedTable return table_cls.from_file(filename) class ParquetReader(BaseReader): """ Build a Dataset object out of Instruction instance(s). This Reader uses memory mapping on parquet files. """ def __init__(self, path: str, info: Optional["DatasetInfo"]): """Initializes ParquetReader. Args: path (str): path where tfrecords are stored. info (DatasetInfo): info about the dataset. 
""" super().__init__(path, info) self._filetype_suffix = "parquet" def _get_table_from_filename(self, filename_skip_take, **kwargs): """Returns a Dataset instance from given (filename, skip, take).""" filename, skip, take = ( filename_skip_take["filename"], filename_skip_take["skip"] if "skip" in filename_skip_take else None, filename_skip_take["take"] if "take" in filename_skip_take else None, ) # Parquet read_table always loads data in memory, independently of memory_map pa_table = pq.read_table(filename, memory_map=True) # here we don't want to slice an empty table, or it may segfault if skip is not None and take is not None and not (skip == 0 and take == len(pa_table)): pa_table = pa_table.slice(skip, take) return pa_table @dataclass(frozen=True) class _AbsoluteInstruction: """A machine friendly slice: defined absolute positive boundaries.""" splitname: str from_: int # uint (starting index). to: int # uint (ending index). @dataclass(frozen=True) class _RelativeInstruction: """Represents a single parsed slicing instruction, can use % and negatives.""" splitname: str from_: Optional[int] = None # int (starting index) or None if no lower boundary. to: Optional[int] = None # int (ending index) or None if no upper boundary. unit: Optional[str] = None rounding: Optional[str] = None def __post_init__(self): if self.unit is not None and self.unit not in ["%", "abs"]: raise ValueError("unit must be either % or abs") if self.rounding is not None and self.rounding not in ["closest", "pct1_dropremainder"]: raise ValueError("rounding must be either closest or pct1_dropremainder") if self.unit != "%" and self.rounding is not None: raise ValueError("It is forbidden to specify rounding if not using percent slicing.") if self.unit == "%" and self.from_ is not None and abs(self.from_) > 100: raise ValueError("Percent slice boundaries must be > -100 and < 100.") if self.unit == "%" and self.to is not None and abs(self.to) > 100: raise ValueError("Percent slice boundaries must be > -100 and < 100.") # Update via __dict__ due to instance being "frozen" self.__dict__["rounding"] = "closest" if self.rounding is None and self.unit == "%" else self.rounding def _str_to_read_instruction(spec): """Returns ReadInstruction for given string.""" res = _SUB_SPEC_RE.match(spec) if not res: raise ValueError(f"Unrecognized instruction format: {spec}") unit = "%" if res.group("from_pct") or res.group("to_pct") else "abs" return ReadInstruction( split_name=res.group("split"), rounding=res.group("rounding"), from_=int(res.group("from")) if res.group("from") else None, to=int(res.group("to")) if res.group("to") else None, unit=unit, ) def _pct_to_abs_pct1(boundary, num_examples): # Using math.trunc here, since -99.5% should give -99%, not -100%. if num_examples < 100: msg = ( 'Using "pct1_dropremainder" rounding on a split with less than 100 ' "elements is forbidden: it always results in an empty dataset." ) raise ValueError(msg) return boundary * math.trunc(num_examples / 100.0) def _pct_to_abs_closest(boundary, num_examples): return int(round(boundary * num_examples / 100.0)) def _rel_to_abs_instr(rel_instr, name2len): """Returns _AbsoluteInstruction instance for given RelativeInstruction. Args: rel_instr: RelativeInstruction instance. name2len: dict {split_name: num_examples}. """ pct_to_abs = _pct_to_abs_closest if rel_instr.rounding == "closest" else _pct_to_abs_pct1 split = rel_instr.splitname if split not in name2len: raise ValueError(f'Unknown split "{split}". 
Should be one of {list(name2len)}.') num_examples = name2len[split] from_ = rel_instr.from_ to = rel_instr.to if rel_instr.unit == "%": from_ = 0 if from_ is None else pct_to_abs(from_, num_examples) to = num_examples if to is None else pct_to_abs(to, num_examples) else: from_ = 0 if from_ is None else from_ to = num_examples if to is None else to if from_ < 0: from_ = max(num_examples + from_, 0) if to < 0: to = max(num_examples + to, 0) from_ = min(from_, num_examples) to = min(to, num_examples) return _AbsoluteInstruction(split, from_, to) class ReadInstruction: """Reading instruction for a dataset. Examples:: # The following lines are equivalent: ds = datasets.load_dataset('mnist', split='test[:33%]') ds = datasets.load_dataset('mnist', split=datasets.ReadInstruction.from_spec('test[:33%]')) ds = datasets.load_dataset('mnist', split=datasets.ReadInstruction('test', to=33, unit='%')) ds = datasets.load_dataset('mnist', split=datasets.ReadInstruction( 'test', from_=0, to=33, unit='%')) # The following lines are equivalent: ds = datasets.load_dataset('mnist', split='test[:33%]+train[1:-1]') ds = datasets.load_dataset('mnist', split=datasets.ReadInstruction.from_spec( 'test[:33%]+train[1:-1]')) ds = datasets.load_dataset('mnist', split=( datasets.ReadInstruction('test', to=33, unit='%') + datasets.ReadInstruction('train', from_=1, to=-1, unit='abs'))) # The following lines are equivalent: ds = datasets.load_dataset('mnist', split='test[:33%](pct1_dropremainder)') ds = datasets.load_dataset('mnist', split=datasets.ReadInstruction.from_spec( 'test[:33%](pct1_dropremainder)')) ds = datasets.load_dataset('mnist', split=datasets.ReadInstruction( 'test', from_=0, to=33, unit='%', rounding="pct1_dropremainder")) # 10-fold validation: tests = datasets.load_dataset( 'mnist', [datasets.ReadInstruction('train', from_=k, to=k+10, unit='%') for k in range(0, 100, 10)]) trains = datasets.load_dataset( 'mnist', [datasets.ReadInstruction('train', to=k, unit='%') + datasets.ReadInstruction('train', from_=k+10, unit='%') for k in range(0, 100, 10)]) """ def _init(self, relative_instructions): # Private initializer. self._relative_instructions = relative_instructions @classmethod def _read_instruction_from_relative_instructions(cls, relative_instructions): """Returns ReadInstruction obj initialized with relative_instructions.""" # Use __new__ to bypass __init__ used by public API and not conveniant here. result = cls.__new__(cls) result._init(relative_instructions) # pylint: disable=protected-access return result def __init__(self, split_name, rounding=None, from_=None, to=None, unit=None): """Initialize ReadInstruction. Args: split_name (str): name of the split to read. Eg: 'train'. rounding (str, optional): The rounding behaviour to use when percent slicing is used. Ignored when slicing with absolute indices. Possible values: - 'closest' (default): The specified percentages are rounded to the closest value. Use this if you want specified percents to be as much exact as possible. - 'pct1_dropremainder': the specified percentages are treated as multiple of 1%. Use this option if you want consistency. Eg: len(5%) == 5 * len(1%). Using this option, one might not be able to use the full set of examples, if the number of those is not a multiple of 100. from_ (int): to (int): alternative way of specifying slicing boundaries. If any of {from_, to, unit} argument is used, slicing cannot be specified as string. unit (str): optional, one of: '%': to set the slicing unit as percents of the split size. 
'abs': to set the slicing unit as absolute numbers. """ # This constructor is not always called. See factory method # `_read_instruction_from_relative_instructions`. Common init instructions # MUST be placed in the _init method. self._init([_RelativeInstruction(split_name, from_, to, unit, rounding)]) @classmethod def from_spec(cls, spec): """Creates a `ReadInstruction` instance out of a string spec. Args: spec (`str`): Split(s) + optional slice(s) to read + optional rounding if percents are used as the slicing unit. A slice can be specified, using absolute numbers (`int`) or percentages (`int`). Examples: ``` test: test split. test + validation: test split + validation split. test[10:]: test split, minus its first 10 records. test[:10%]: first 10% records of test split. test[:20%](pct1_dropremainder): first 10% records, rounded with the pct1_dropremainder rounding. test[:-5%]+train[40%:60%]: first 95% of test + middle 20% of train. ``` Returns: ReadInstruction instance. """ spec = str(spec) # Need to convert to str in case of NamedSplit instance. subs = _ADDITION_SEP_RE.split(spec) if not subs: raise ValueError(f"No instructions could be built out of {spec}") instruction = _str_to_read_instruction(subs[0]) return sum((_str_to_read_instruction(sub) for sub in subs[1:]), instruction) def to_spec(self): rel_instr_specs = [] for rel_instr in self._relative_instructions: rel_instr_spec = rel_instr.splitname if rel_instr.from_ is not None or rel_instr.to is not None: from_ = rel_instr.from_ to = rel_instr.to unit = rel_instr.unit rounding = rel_instr.rounding unit = unit if unit == "%" else "" from_ = str(from_) + unit if from_ is not None else "" to = str(to) + unit if to is not None else "" slice_str = f"[{from_}:{to}]" rounding_str = ( f"({rounding})" if unit == "%" and rounding is not None and rounding != "closest" else "" ) rel_instr_spec += slice_str + rounding_str rel_instr_specs.append(rel_instr_spec) return "+".join(rel_instr_specs) def __add__(self, other): """Returns a new ReadInstruction obj, result of appending other to self.""" if not isinstance(other, ReadInstruction): msg = "ReadInstruction can only be added to another ReadInstruction obj." raise TypeError(msg) self_ris = self._relative_instructions other_ris = other._relative_instructions # pylint: disable=protected-access if ( self_ris[0].unit != "abs" and other_ris[0].unit != "abs" and self._relative_instructions[0].rounding != other_ris[0].rounding ): raise ValueError("It is forbidden to sum ReadInstruction instances with different rounding values.") return self._read_instruction_from_relative_instructions(self_ris + other_ris) def __str__(self): return self.to_spec() def __repr__(self): return f"ReadInstruction({self._relative_instructions})" def to_absolute(self, name2len): """Translate instruction into a list of absolute instructions. Those absolute instructions are then to be added together. Args: name2len (`dict`): Associating split names to number of examples. Returns: list of _AbsoluteInstruction instances (corresponds to the + in spec). """ return [_rel_to_abs_instr(rel_instr, name2len) for rel_instr in self._relative_instructions]
datasets/src/datasets/arrow_reader.py/0
{ "file_path": "datasets/src/datasets/arrow_reader.py", "repo_id": "datasets", "token_count": 11429 }
76
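The `ReadInstruction` spec strings documented above can also be exercised directly, without going through `load_dataset`. Below is a minimal sketch of how a spec is parsed and then resolved against split sizes; the sizes in `name2len` are made up for illustration.

```python
from datasets import ReadInstruction

# Parse a spec string into a ReadInstruction (a sum of relative instructions).
ri = ReadInstruction.from_spec("test[:33%]+train[1:-1]")
print(ri.to_spec())  # test[:33%]+train[1:-1]

# Resolve the relative instructions into absolute row boundaries,
# given hypothetical split sizes.
name2len = {"train": 1000, "test": 300}
for abs_instr in ri.to_absolute(name2len):
    print(abs_instr.splitname, abs_instr.from_, abs_instr.to)
# test 0 99    (33% of 300 rows with the default "closest" rounding)
# train 1 999  (negative boundaries are counted from the end of the split)
```

These absolute boundaries are how `get_file_instructions` above ends up with per-file skip/take values when reading Arrow shards.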
import copy
import warnings
from dataclasses import InitVar, dataclass, field
from pathlib import Path
from typing import Any, Dict, Optional, Union

from .. import config


@dataclass
class DownloadConfig:
    """Configuration for our cached path manager.

    Attributes:
        cache_dir (`str` or `Path`, *optional*):
            Specify a cache directory to save the file to (overwrites the default cache dir).
        force_download (`bool`, defaults to `False`):
            If `True`, re-download the file even if it's already cached in the cache dir.
        resume_download (`bool`, defaults to `False`):
            If `True`, resume the download if an incompletely received file is found.
        local_files_only (`bool`, defaults to `False`):
            If `True`, only use locally cached files and do not attempt any download.
        proxies (`dict`, *optional*):
        user_agent (`str`, *optional*):
            Optional string or dict that will be appended to the user-agent on remote requests.
        extract_compressed_file (`bool`, defaults to `False`):
            If `True` and the path points to a zip or tar file, extract the compressed
            file in a folder alongside the archive.
        force_extract (`bool`, defaults to `False`):
            If `True` when `extract_compressed_file` is `True` and the archive was
            already extracted, re-extract the archive and override the folder where it was extracted.
        delete_extracted (`bool`, defaults to `False`):
            Whether to delete (or keep) the extracted files.
        use_etag (`bool`, defaults to `True`):
            Whether to use the ETag HTTP response header to validate the cached files.
        num_proc (`int`, *optional*):
            The number of processes to launch to download the files in parallel.
        max_retries (`int`, defaults to `1`):
            The number of times to retry an HTTP request if it fails.
        token (`str` or `bool`, *optional*):
            Optional string or boolean to use as Bearer token
            for remote files on the Datasets Hub. If `True`, or not specified, will get token from `~/.huggingface`.
        use_auth_token (`str` or `bool`, *optional*):
            Optional string or boolean to use as Bearer token
            for remote files on the Datasets Hub. If `True`, or not specified, will get token from `~/.huggingface`.

            <Deprecated version="2.14.0">

            `use_auth_token` was deprecated in favor of `token` in version 2.14.0 and will be removed in 3.0.0.

            </Deprecated>

        ignore_url_params (`bool`, defaults to `False`):
            Whether to strip all query parameters and fragments from
            the download URL before using it for caching the file.
        storage_options (`dict`, *optional*):
            Key/value pairs to be passed on to the dataset file-system backend, if any.
        download_desc (`str`, *optional*):
            A description to be displayed alongside the progress bar while downloading the files.
""" cache_dir: Optional[Union[str, Path]] = None force_download: bool = False resume_download: bool = False local_files_only: bool = False proxies: Optional[Dict] = None user_agent: Optional[str] = None extract_compressed_file: bool = False force_extract: bool = False delete_extracted: bool = False use_etag: bool = True num_proc: Optional[int] = None max_retries: int = 1 token: Optional[Union[str, bool]] = None use_auth_token: InitVar[Optional[Union[str, bool]]] = "deprecated" ignore_url_params: bool = False storage_options: Dict[str, Any] = field(default_factory=dict) download_desc: Optional[str] = None def __post_init__(self, use_auth_token): if use_auth_token != "deprecated": warnings.warn( "'use_auth_token' was deprecated in favor of 'token' in version 2.14.0 and will be removed in 3.0.0.\n" f"You can remove this warning by passing 'token={use_auth_token}' instead.", FutureWarning, ) self.token = use_auth_token if "hf" not in self.storage_options: self.storage_options["hf"] = {"token": self.token, "endpoint": config.HF_ENDPOINT} def copy(self) -> "DownloadConfig": return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()}) def __setattr__(self, name, value): if name == "token" and getattr(self, "storage_options", None) is not None: if "hf" not in self.storage_options: self.storage_options["hf"] = {"token": value, "endpoint": config.HF_ENDPOINT} elif getattr(self.storage_options["hf"], "token", None) is None: self.storage_options["hf"]["token"] = value super().__setattr__(name, value)
datasets/src/datasets/download/download_config.py/0
{ "file_path": "datasets/src/datasets/download/download_config.py", "repo_id": "datasets", "token_count": 1880 }
77
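To make the `DownloadConfig` behavior above concrete, here is a small usage sketch; the cache path and token are placeholders rather than values taken from the library.

```python
from datasets import DownloadConfig

# Hypothetical configuration: custom cache directory, retries and parallel downloads.
dl_config = DownloadConfig(
    cache_dir="/tmp/datasets_cache",  # placeholder path
    max_retries=3,
    num_proc=4,
    download_desc="Downloading source files",
)

# __post_init__ mirrors the token into the "hf" storage options, and
# __setattr__ keeps them in sync when the token is assigned afterwards.
dl_config.token = "hf_xxx"  # placeholder token
print(dl_config.storage_options["hf"]["token"])  # hf_xxx

# copy() returns a deep copy, so mutating the copy leaves the original intact.
other = dl_config.copy()
other.force_download = True
assert dl_config.force_download is False
```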
# Copyright 2021 The HuggingFace Authors. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Lint as: python3 import sys from collections.abc import Mapping from typing import TYPE_CHECKING, Dict, Optional import numpy as np import pyarrow as pa from .. import config from ..utils.logging import get_logger from ..utils.py_utils import map_nested from .formatting import TensorFormatter if TYPE_CHECKING: import jax import jaxlib logger = get_logger() DEVICE_MAPPING: Optional[dict] = None class JaxFormatter(TensorFormatter[Mapping, "jax.Array", Mapping]): def __init__(self, features=None, device=None, **jnp_array_kwargs): super().__init__(features=features) import jax from jaxlib.xla_client import Device if isinstance(device, Device): raise ValueError( f"Expected {device} to be a `str` not {type(device)}, as `jaxlib.xla_extension.Device` " "is not serializable neither with `pickle` nor with `dill`. Instead you can surround " "the device with `str()` to get its string identifier that will be internally mapped " "to the actual `jaxlib.xla_extension.Device`." ) self.device = device if isinstance(device, str) else str(jax.devices()[0]) # using global variable since `jaxlib.xla_extension.Device` is not serializable neither # with `pickle` nor with `dill`, so we need to use a global variable instead global DEVICE_MAPPING if DEVICE_MAPPING is None: DEVICE_MAPPING = self._map_devices_to_str() if self.device not in list(DEVICE_MAPPING.keys()): logger.warning( f"Device with string identifier {self.device} not listed among the available " f"devices: {list(DEVICE_MAPPING.keys())}, so falling back to the default " f"device: {str(jax.devices()[0])}." 
) self.device = str(jax.devices()[0]) self.jnp_array_kwargs = jnp_array_kwargs @staticmethod def _map_devices_to_str() -> Dict[str, "jaxlib.xla_extension.Device"]: import jax return {str(device): device for device in jax.devices()} def _consolidate(self, column): import jax import jax.numpy as jnp if isinstance(column, list) and column: if all( isinstance(x, jax.Array) and x.shape == column[0].shape and x.dtype == column[0].dtype for x in column ): return jnp.stack(column, axis=0) return column def _tensorize(self, value): import jax import jax.numpy as jnp if isinstance(value, (str, bytes, type(None))): return value elif isinstance(value, (np.character, np.ndarray)) and np.issubdtype(value.dtype, np.character): return value.tolist() default_dtype = {} if isinstance(value, (np.number, np.ndarray)) and np.issubdtype(value.dtype, np.integer): # the default int precision depends on the jax config # see https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html#double-64bit-precision if jax.config.jax_enable_x64: default_dtype = {"dtype": jnp.int64} else: default_dtype = {"dtype": jnp.int32} elif isinstance(value, (np.number, np.ndarray)) and np.issubdtype(value.dtype, np.floating): default_dtype = {"dtype": jnp.float32} elif config.PIL_AVAILABLE and "PIL" in sys.modules: import PIL.Image if isinstance(value, PIL.Image.Image): value = np.asarray(value) # using global variable since `jaxlib.xla_extension.Device` is not serializable neither # with `pickle` nor with `dill`, so we need to use a global variable instead global DEVICE_MAPPING if DEVICE_MAPPING is None: DEVICE_MAPPING = self._map_devices_to_str() with jax.default_device(DEVICE_MAPPING[self.device]): # calling jnp.array on a np.ndarray does copy the data # see https://github.com/google/jax/issues/4486 return jnp.array(value, **{**default_dtype, **self.jnp_array_kwargs}) def _recursive_tensorize(self, data_struct): import jax # support for torch, tf, jax etc. 
        if config.TORCH_AVAILABLE and "torch" in sys.modules:
            import torch

            if isinstance(data_struct, torch.Tensor):
                return self._tensorize(data_struct.detach().cpu().numpy()[()])
        if hasattr(data_struct, "__array__") and not isinstance(data_struct, jax.Array):
            data_struct = data_struct.__array__()
        # support for nested types like struct of list of struct
        if isinstance(data_struct, np.ndarray):
            if data_struct.dtype == object:  # jax arrays cannot be instantiated from an array of objects
                return self._consolidate([self.recursive_tensorize(substruct) for substruct in data_struct])
        elif isinstance(data_struct, (list, tuple)):
            return self._consolidate([self.recursive_tensorize(substruct) for substruct in data_struct])
        return self._tensorize(data_struct)

    def recursive_tensorize(self, data_struct: dict):
        return map_nested(self._recursive_tensorize, data_struct, map_list=False)

    def format_row(self, pa_table: pa.Table) -> Mapping:
        row = self.numpy_arrow_extractor().extract_row(pa_table)
        row = self.python_features_decoder.decode_row(row)
        return self.recursive_tensorize(row)

    def format_column(self, pa_table: pa.Table) -> "jax.Array":
        column = self.numpy_arrow_extractor().extract_column(pa_table)
        column = self.python_features_decoder.decode_column(column, pa_table.column_names[0])
        column = self.recursive_tensorize(column)
        column = self._consolidate(column)
        return column

    def format_batch(self, pa_table: pa.Table) -> Mapping:
        batch = self.numpy_arrow_extractor().extract_batch(pa_table)
        batch = self.python_features_decoder.decode_batch(batch)
        batch = self.recursive_tensorize(batch)
        for column_name in batch:
            batch[column_name] = self._consolidate(batch[column_name])
        return batch
datasets/src/datasets/formatting/jax_formatter.py/0
{ "file_path": "datasets/src/datasets/formatting/jax_formatter.py", "repo_id": "datasets", "token_count": 2858 }
78
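In practice the `JaxFormatter` above is rarely instantiated by hand; it is selected through the dataset formatting API. A minimal sketch, assuming `jax` is installed and using toy, made-up data:

```python
from datasets import Dataset

# Toy in-memory dataset.
ds = Dataset.from_dict({"x": [[1.0, 2.0], [3.0, 4.0]], "y": [0, 1]})

# Request JAX formatting; extra keyword arguments are forwarded to jnp.array
# via jnp_array_kwargs, and `device` (if given) must be a string identifier.
ds = ds.with_format("jax")

row = ds[0]      # dict of jax.Array values (format_row)
col = ds["x"]    # equal-shape/dtype values are stacked into one array (_consolidate)
batch = ds[:2]   # dict of jax.Array values (format_batch)
print(type(row["x"]), col.shape, batch["y"].dtype)
```

Integer columns default to `int32` (or `int64` when `jax_enable_x64` is enabled) and float columns to `float32`, following the dtype defaults in `_tensorize`.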
import copy import itertools import sys import warnings from collections import Counter from copy import deepcopy from dataclasses import dataclass from functools import partial from itertools import cycle, islice from typing import Any, Callable, Dict, Iterable, Iterator, List, Optional, Tuple, Union import fsspec.asyn import numpy as np import pyarrow as pa from . import config from .arrow_dataset import Dataset, DatasetInfoMixin from .features import Features from .features.features import FeatureType, _align_features, _check_if_features_can_be_aligned, cast_to_python_objects from .formatting import PythonFormatter, TensorFormatter, get_format_type_from_alias, get_formatter from .info import DatasetInfo from .splits import NamedSplit from .table import cast_table_to_features, read_schema_from_file, table_cast from .utils.logging import get_logger from .utils.py_utils import Literal from .utils.sharding import _merge_gen_kwargs, _number_of_shards_in_gen_kwargs, _shuffle_gen_kwargs, _split_gen_kwargs logger = get_logger(__name__) Key = Union[int, str] def identity_func(x): return x def _rename_columns_fn(example: Dict, column_mapping: Dict[str, str]): if any(col not in example for col in column_mapping): raise ValueError( f"Error when renaming {list(column_mapping)} to {list(column_mapping.values())}: columns {set(column_mapping) - set(example)} are not in the dataset." ) if any(col in example for col in column_mapping.values()): raise ValueError( f"Error when renaming {list(column_mapping)} to {list(column_mapping.values())}: columns {set(example) - set(column_mapping.values())} are already in the dataset." ) return { new_column_name: example[original_column_name] for original_column_name, new_column_name in column_mapping.items() } def add_column_fn(example: Dict, idx: int, name: str, column: List[Dict]): if name in example: raise ValueError(f"Error when adding {name}: column {name} is already in the dataset.") return {name: column[idx]} def _infer_features_from_batch(batch: Dict[str, list], try_features: Optional[Features] = None) -> Features: pa_table = pa.Table.from_pydict(batch) if try_features is not None: try: pa_table = table_cast(pa_table, pa.schema(try_features.type)) except (TypeError, pa.ArrowInvalid, pa.ArrowNotImplementedError): pass return Features.from_arrow_schema(pa_table.schema) def _examples_to_batch(examples: List[Dict[str, Any]]) -> Dict[str, list]: # we order the columns by order of appearance # to do so, we use a dict as an ordered set cols = {col: None for example in examples for col in example} # when an example is missing a column, we set the value to None with .get() arrays = [[example.get(col) for example in examples] for col in cols] return dict(zip(cols, arrays)) def _batch_to_examples(batch: Dict[str, list]) -> List[Dict[str, Any]]: """Convert a batch (dict of examples) to examples list""" n_examples = len(batch[next(iter(batch))]) for i in range(n_examples): yield {col: array[i] for col, array in batch.items()} class _HasNextIterator(Iterator): """Iterator with an hasnext() function. 
Taken from https://stackoverflow.com/questions/1966591/has-next-in-python-iterators.""" def __init__(self, it): self.it = iter(it) self._hasnext = None def __iter__(self): return self def __next__(self): if self._hasnext: result = self._thenext else: result = next(self.it) self._hasnext = None return result def hasnext(self): if self._hasnext is None: try: self._thenext = next(self.it) except StopIteration: self._hasnext = False else: self._hasnext = True return self._hasnext def _convert_to_arrow( iterable: Iterable[Tuple[Key, dict]], batch_size: int, drop_last_batch: bool = False, ) -> Iterator[Tuple[Key, pa.Table]]: """Convert and group examples in Arrow tables of size `batch_size`. Args: iterable (`Iterable[Tuple[Key, dict]]`): An examples iterable containing tuples (example_key, example) of type (int/str, dict) batch_size (`Optional[int]`): Size of each sub-table to yield. If None or <= 0, yields the full table. drop_last_batch (`bool`, defaults to `False`): Drop the last batch if it is smaller than `batch_size`. """ if batch_size is None or batch_size <= 0: yield ( "all", pa.Table.from_pylist(cast_to_python_objects([example for _, example in iterable], only_1d_for_numpy=True)), ) return iterator = iter(iterable) for key, example in iterator: iterator_batch = islice(iterator, batch_size - 1) key_examples_list = [(key, example)] + list(iterator_batch) if len(key_examples_list) < batch_size and drop_last_batch: return keys, examples = zip(*key_examples_list) new_key = "_".join(str(key) for key in keys) yield new_key, pa.Table.from_pylist(cast_to_python_objects(examples, only_1d_for_numpy=True)) def _batch_arrow_tables( iterable: Iterable[Tuple[Key, pa.Table]], batch_size: Optional[int], drop_last_batch: bool = False, ) -> Iterator[Tuple[Key, pa.Table]]: """Iterate over sub-tables of size `batch_size`. Args: iterable (`Iterable[Tuple[Key, pa.Table]]`): A tables iterable containing tuples (table_key, table) of type (int/str, pa.Table) batch_size (`Optional[int]`): Size of each sub-table to yield. If None or <= 0, yields the full table. drop_last_batch (`bool`, defaults to `False`): Drop the last batch if it is smaller than `batch_size`. 
""" if batch_size is None or batch_size <= 0: yield "all", pa.concat_tables([pa_table for _, pa_table in iterable]) return keys_buffer = [] chunks_buffer = [] chunks_buffer_size = 0 for key, pa_table in iterable: for chunk in pa_table.to_reader(max_chunksize=batch_size): if len(chunk) == 0: continue elif chunks_buffer_size + len(chunk) < batch_size: keys_buffer.append(key) chunks_buffer.append(chunk) chunks_buffer_size += len(chunk) continue elif chunks_buffer_size + len(chunk) == batch_size: keys_buffer.append(key) chunks_buffer.append(chunk) new_key = "_".join(str(_key) for _key in keys_buffer) yield new_key, pa.Table.from_batches(chunks_buffer) keys_buffer = [] chunks_buffer = [] chunks_buffer_size = 0 else: cropped_chunk_length = batch_size - chunks_buffer_size keys_buffer.append(f"{key}[:{cropped_chunk_length}]") chunks_buffer.append(chunk.slice(0, cropped_chunk_length)) new_key = "_".join(str(_key) for _key in keys_buffer) yield new_key, pa.Table.from_batches(chunks_buffer) keys_buffer = [f"{key}[{cropped_chunk_length}:]"] chunks_buffer = [chunk.slice(cropped_chunk_length, len(chunk) - cropped_chunk_length)] chunks_buffer_size = len(chunk) - cropped_chunk_length if not drop_last_batch and chunks_buffer: new_key = "_".join(str(_key) for _key in keys_buffer) yield new_key, pa.Table.from_batches(chunks_buffer) class _BaseExamplesIterable: """Base class for the examples iterable used by an IterableDataset""" def __init__(self) -> None: self.iter_arrow: Optional[Callable[[], Iterator[Tuple[Key, pa.Table]]]] = None def __iter__(self) -> Iterator[Tuple[Key, dict]]: """An examples iterable should yield tuples (example_key, example) of type (int/str, dict)""" raise NotImplementedError(f"{type(self)} doesn't implement __iter__ yet") def shuffle_data_sources(self, generator: np.random.Generator) -> "_BaseExamplesIterable": """ Either shuffle the shards/sources of the dataset, or propagate the shuffling to the underlying iterable. If the order of the shards must stay fixed (when using .skip or .take for example), then this method returns self. 
""" raise NotImplementedError(f"{type(self)} doesn't implement shuffle_data_sources yet") def shard_data_sources(self, worker_id: int, num_workers: int) -> "_BaseExamplesIterable": """Either keep only the requested shard, or propagate the request to the underlying iterable.""" raise NotImplementedError(f"{type(self)} doesn't implement shard_data_sources yet") def split_shard_indices_by_worker(self, worker_id: int, num_workers: int) -> List[int]: return list(range(worker_id, self.n_shards, num_workers)) @property def n_shards(self) -> int: raise NotImplementedError(f"{type(self)} doesn't implement n_shards yet") class ExamplesIterable(_BaseExamplesIterable): def __init__(self, generate_examples_fn: Callable[..., Tuple[Key, dict]], kwargs: dict): super().__init__() self.generate_examples_fn = generate_examples_fn self.kwargs = kwargs def __iter__(self): yield from self.generate_examples_fn(**self.kwargs) def shuffle_data_sources(self, generator: np.random.Generator) -> "ExamplesIterable": return ShuffledDataSourcesExamplesIterable(self.generate_examples_fn, self.kwargs, generator) def shard_data_sources(self, worker_id: int, num_workers: int) -> "ExamplesIterable": """Keep only the requested shard.""" gen_kwargs_list = _split_gen_kwargs(self.kwargs, max_num_jobs=self.n_shards) shard_indices = self.split_shard_indices_by_worker(worker_id, num_workers) requested_gen_kwargs = _merge_gen_kwargs([gen_kwargs_list[i] for i in shard_indices]) return ExamplesIterable(self.generate_examples_fn, requested_gen_kwargs) @property def n_shards(self) -> int: return _number_of_shards_in_gen_kwargs(self.kwargs) class ShuffledDataSourcesExamplesIterable(ExamplesIterable): def __init__( self, generate_examples_fn: Callable[..., Tuple[Key, dict]], kwargs: dict, generator: np.random.Generator ): super().__init__(generate_examples_fn, kwargs) self.generator = deepcopy(generator) def __iter__(self): """Shuffle the kwargs order to shuffle shards""" rng = deepcopy(self.generator) kwargs_with_shuffled_shards = _shuffle_gen_kwargs(rng, self.kwargs) yield from self.generate_examples_fn(**kwargs_with_shuffled_shards) def shard_data_sources(self, worker_id: int, num_workers: int) -> "ExamplesIterable": """Keep only the requested shard.""" rng = deepcopy(self.generator) kwargs_with_shuffled_shards = _shuffle_gen_kwargs(rng, self.kwargs) return ExamplesIterable(self.generate_examples_fn, kwargs_with_shuffled_shards).shard_data_sources( worker_id, num_workers ) class ArrowExamplesIterable(_BaseExamplesIterable): def __init__(self, generate_tables_fn: Callable[..., Tuple[Key, pa.Table]], kwargs: dict): super().__init__() self.generate_tables_fn = generate_tables_fn self.kwargs = kwargs self.iter_arrow = self._iter_arrow def __iter__(self): formatter = PythonFormatter() for key, pa_table in self.generate_tables_fn(**self.kwargs): for pa_subtable in pa_table.to_reader(max_chunksize=config.ARROW_READER_BATCH_SIZE_IN_DATASET_ITER): formatted_batch = formatter.format_batch(pa_subtable) for example in _batch_to_examples(formatted_batch): yield key, example def _iter_arrow(self): yield from self.generate_tables_fn(**self.kwargs) def shuffle_data_sources(self, generator: np.random.Generator) -> "ArrowExamplesIterable": return ShuffledDataSourcesArrowExamplesIterable(self.generate_tables_fn, self.kwargs, generator) def shard_data_sources(self, worker_id: int, num_workers: int) -> "ArrowExamplesIterable": """Keep only the requested shard.""" gen_kwargs_list = _split_gen_kwargs(self.kwargs, max_num_jobs=self.n_shards) shard_indices = 
self.split_shard_indices_by_worker(worker_id, num_workers) requested_gen_kwargs = _merge_gen_kwargs([gen_kwargs_list[i] for i in shard_indices]) return ArrowExamplesIterable(self.generate_tables_fn, requested_gen_kwargs) @property def n_shards(self) -> int: return _number_of_shards_in_gen_kwargs(self.kwargs) class ShuffledDataSourcesArrowExamplesIterable(ArrowExamplesIterable): def __init__( self, generate_tables_fn: Callable[..., Tuple[Key, pa.Table]], kwargs: dict, generator: np.random.Generator, ): super().__init__(generate_tables_fn, kwargs) self.generator = deepcopy(generator) def __iter__(self): """Shuffle the kwargs order to shuffle shards""" rng = deepcopy(self.generator) kwargs_with_shuffled_shards = _shuffle_gen_kwargs(rng, self.kwargs) formatter = PythonFormatter() for key, pa_table in self.generate_tables_fn(**kwargs_with_shuffled_shards): for pa_subtable in pa_table.to_reader(max_chunksize=config.ARROW_READER_BATCH_SIZE_IN_DATASET_ITER): formatted_batch = formatter.format_batch(pa_subtable) for example in _batch_to_examples(formatted_batch): yield key, example def _iter_arrow(self): rng = deepcopy(self.generator) kwargs_with_shuffled_shards = _shuffle_gen_kwargs(rng, self.kwargs) yield from self.generate_tables_fn(**kwargs_with_shuffled_shards) def shard_data_sources(self, worker_id: int, num_workers: int) -> "ArrowExamplesIterable": """Keep only the requested shard.""" rng = deepcopy(self.generator) kwargs_with_shuffled_shards = _shuffle_gen_kwargs(rng, self.kwargs) return ArrowExamplesIterable(self.generate_tables_fn, kwargs_with_shuffled_shards).shard_data_sources( worker_id, num_workers ) class SelectColumnsIterable(_BaseExamplesIterable): def __init__(self, ex_iterable: _BaseExamplesIterable, column_names: List[str]): super().__init__() self.ex_iterable = ex_iterable self.column_names = column_names if self.ex_iterable.iter_arrow: self.iter_arrow = self._iter_arrow def __iter__(self): for idx, row in self.ex_iterable: yield idx, {c: row[c] for c in self.column_names} def _iter_arrow(self) -> Iterator[Tuple[Key, pa.Table]]: for idx, pa_table in self.ex_iterable.iter_arrow(): yield idx, pa_table.select(self.column_names) def shuffle_data_sources(self, generator: np.random.Generator) -> "SelectColumnsIterable": return SelectColumnsIterable(self.ex_iterable.shuffle_data_sources(generator), self.column_names) def shard_data_sources(self, worker_id: int, num_workers: int) -> "SelectColumnsIterable": return SelectColumnsIterable(self.ex_iterable.shard_data_sources(worker_id, num_workers), self.column_names) @property def n_shards(self) -> int: return self.ex_iterable.n_shards class StepExamplesIterable(_BaseExamplesIterable): def __init__(self, ex_iterable: _BaseExamplesIterable, step: int, offset: int): super().__init__() self.ex_iterable = ex_iterable self.step = step self.offset = offset # TODO(QL): implement iter_arrow def __iter__(self): ex_iterator = iter(self.ex_iterable) while True: batch = list(islice(ex_iterator, self.step)) if len(batch) > self.offset: yield batch[self.offset] else: break def shuffle_data_sources(self, generator: np.random.Generator) -> "StepExamplesIterable": return StepExamplesIterable( self.ex_iterable.shuffle_data_sources(generator), step=self.step, offset=self.offset ) def shard_data_sources(self, worker_id: int, num_workers: int) -> "StepExamplesIterable": return StepExamplesIterable( self.ex_iterable.shard_data_sources(worker_id, num_workers), step=self.step, offset=self.offset ) @property def n_shards(self) -> int: return 
        self.ex_iterable.n_shards


class CyclingMultiSourcesExamplesIterable(_BaseExamplesIterable):
    def __init__(
        self,
        ex_iterables: List[_BaseExamplesIterable],
        stopping_strategy: Literal["first_exhausted", "all_exhausted"] = "first_exhausted",
    ):
        super().__init__()
        self.ex_iterables = ex_iterables
        self.stopping_strategy = stopping_strategy

        # if undersampling ("first_exhausted"), we stop as soon as one dataset is exhausted
        # if oversampling ("all_exhausted"), we stop as soon as every dataset is exhausted, i.e. as soon as every sample of every dataset has been visited at least once
        self.bool_strategy_func = np.all if (stopping_strategy == "all_exhausted") else np.any
        # TODO(QL): implement iter_arrow

    def _get_indices_iterator(self):
        # this is an infinite iterator to keep track of which iterator we want to pick examples from
        return cycle(range(len(self.ex_iterables)))

    def __iter__(self):
        iterators = [_HasNextIterator(ex_iterable) for ex_iterable in self.ex_iterables]

        indices_iterator = self._get_indices_iterator()

        is_exhausted = np.full(len(self.ex_iterables), False)
        for i in indices_iterator:
            try:  # let's pick one example from the iterator at index i
                yield next(iterators[i])
                # it will resume from the yield at the next call so that we can directly test if the iterable is exhausted and if we need to break out of the loop
                if not iterators[i].hasnext():
                    is_exhausted[i] = True

                    if self.bool_strategy_func(is_exhausted):
                        # if the stopping criterion is met, break the main for loop
                        break
                    # otherwise reinitialise the iterator and yield the first example
                    iterators[i] = _HasNextIterator(self.ex_iterables[i])
            except StopIteration:
                # here it means that the i-th iterable dataset is empty, i.e. we never have the occasion to yield an element of the i-th dataset.
                # we still check if the stopping criterion is met and if we break out of the loop in case of an oversampling strategy
                is_exhausted[i] = True

                if self.bool_strategy_func(is_exhausted):
                    # if the stopping criterion is met, break the main for loop
                    break

    def shuffle_data_sources(self, generator: np.random.Generator) -> "CyclingMultiSourcesExamplesIterable":
        """Shuffle each underlying examples iterable."""
        ex_iterables = [ex_iterable.shuffle_data_sources(generator) for ex_iterable in self.ex_iterables]
        return CyclingMultiSourcesExamplesIterable(ex_iterables, self.stopping_strategy)

    @property
    def n_shards(self) -> int:
        return min(ex_iterable.n_shards for ex_iterable in self.ex_iterables)

    def shard_data_sources(self, worker_id: int, num_workers: int) -> "CyclingMultiSourcesExamplesIterable":
        """Either keep only the requested shard, or propagate the request to the underlying iterable."""
        return CyclingMultiSourcesExamplesIterable(
            [iterable.shard_data_sources(worker_id, num_workers) for iterable in self.ex_iterables],
            stopping_strategy=self.stopping_strategy,
        )


class VerticallyConcatenatedMultiSourcesExamplesIterable(_BaseExamplesIterable):
    """
    VerticallyConcatenatedMultiSourcesExamplesIterable simply chains the input iterables.
    It doesn't require the examples iterables to always yield the same columns.
    Instead, this is handled by the `IterableDataset` class or `TypedExamplesIterable`.

    For information, `IterableDataset` merges the features of all the datasets to concatenate into one.
    We use `IterableDataset._resolve_features` to obtain the features of all the datasets to concatenate.

    Then for each example, `IterableDataset` and `TypedExamplesIterable` automatically fill missing columns with None.
    This is done with `_apply_feature_types_on_example`.
""" def __init__(self, ex_iterables: List[_BaseExamplesIterable]): super().__init__() self.ex_iterables = ex_iterables if all(ex_iterable.iter_arrow is not None for ex_iterable in ex_iterables): self.iter_arrow = self._iter_arrow def __iter__(self): for ex_iterable in self.ex_iterables: yield from ex_iterable def _iter_arrow(self): for ex_iterable in self.ex_iterables: yield from ex_iterable.iter_arrow() def shuffle_data_sources( self, generator: np.random.Generator ) -> "VerticallyConcatenatedMultiSourcesExamplesIterable": """Shuffle the list of examples iterable, as well as each underlying examples iterable.""" rng = deepcopy(generator) ex_iterables = list(self.ex_iterables) rng.shuffle(ex_iterables) ex_iterables = [ex_iterable.shuffle_data_sources(generator) for ex_iterable in ex_iterables] return VerticallyConcatenatedMultiSourcesExamplesIterable(ex_iterables) @property def n_shards(self) -> int: return min(ex_iterable.n_shards for ex_iterable in self.ex_iterables) def shard_data_sources( self, worker_id: int, num_workers: int ) -> "VerticallyConcatenatedMultiSourcesExamplesIterable": """Either keep only the requested shard, or propagate the request to the underlying iterable.""" return VerticallyConcatenatedMultiSourcesExamplesIterable( [iterable.shard_data_sources(worker_id, num_workers) for iterable in self.ex_iterables] ) def _check_column_names(column_names: List[str]): """Check the column names to make sure they don't contain duplicates.""" counter = Counter(column_names) if not all(count == 1 for count in counter.values()): duplicated_columns = [col for col in counter if counter[col] > 1] raise ValueError( f"The examples iterables can't have duplicated columns but columns {duplicated_columns} are duplicated." ) class HorizontallyConcatenatedMultiSourcesExamplesIterable(_BaseExamplesIterable): """ HorizontallyConcatenatedMultiSourcesExamplesIterable merges examples together for the input list of iterables. It also checks that there are no duplicate columns (otherwise we don't know which one to keep). This check is done once when yielding the first example. However it doesn't fill missing columns with None. Instead, this is handled by the `IterableDataset` class or `TypedExamplesIterable`. For information, `IterableDataset` merges the features of all the datasets to concatenate into one. We use `IterableDataset._resolve_features` to obtain the features of all the datasets to concatenate. Then for each example, `IterableDataset` and `TypedExamplesIterable` automatically fill missing columns with None. This is done with `_apply_feature_types_on_example`. 
""" def __init__(self, ex_iterables: List[_BaseExamplesIterable]): super().__init__() self.ex_iterables = ex_iterables # TODO(QL): implement iter_arrow def __iter__(self): ex_iterators = [iter(ex_iterable) for ex_iterable in self.ex_iterables] for i in itertools.count(): keys = [] examples = [] for ex_iterator in list(ex_iterators): try: key, example = next(ex_iterator) keys.append(key) examples.append(example) except StopIteration: ex_iterators.remove(ex_iterator) if ex_iterators: if i == 0: _check_column_names([column_name for example in examples for column_name in example]) new_example = {} for example in examples: new_example.update(example) new_key = "_".join(str(key) for key in keys) yield new_key, new_example else: break def shuffle_data_sources( self, generator: np.random.Generator ) -> "HorizontallyConcatenatedMultiSourcesExamplesIterable": """Doesn't shuffle the wrapped examples iterable since it would break the alignment between them.""" return self @property def n_shards(self) -> int: return 1 def shard_data_sources( self, worker_id: int, num_workers: int ) -> "HorizontallyConcatenatedMultiSourcesExamplesIterable": """Either keep only the requested shard, or propagate the request to the underlying iterable.""" return HorizontallyConcatenatedMultiSourcesExamplesIterable( [iterable.shard_data_sources(worker_id, num_workers) for iterable in self.ex_iterables] ) class RandomlyCyclingMultiSourcesExamplesIterable(CyclingMultiSourcesExamplesIterable): def __init__( self, ex_iterables: List[_BaseExamplesIterable], generator: np.random.Generator, probabilities: Optional[List[float]] = None, stopping_strategy: Literal["first_exhausted", "all_exhausted"] = "first_exhausted", ): super().__init__(ex_iterables, stopping_strategy) self.generator = deepcopy(generator) self.probabilities = probabilities # TODO(QL): implement iter_arrow @staticmethod def _iter_random_indices( rng: np.random.Generator, num_sources: int, random_batch_size=1000, p: Optional[List[float]] = None, ) -> Iterator[int]: """Get an infinite iterator that randomly samples the index of the source to pick examples from.""" if p is None: while True: yield from (int(i) for i in rng.integers(0, num_sources, size=random_batch_size)) else: while True: yield from (int(i) for i in rng.choice(num_sources, size=random_batch_size, p=p)) def _get_indices_iterator(self): rng = deepcopy(self.generator) # this is an infinite iterator that randomly samples the index of the source to pick examples from return self._iter_random_indices(rng, len(self.ex_iterables), p=self.probabilities) def shuffle_data_sources(self, generator: np.random.Generator) -> "RandomlyCyclingMultiSourcesExamplesIterable": """Shuffle the data sources of each wrapped examples iterable.""" ex_iterables = [ex_iterable.shuffle_data_sources(generator) for ex_iterable in self.ex_iterables] return RandomlyCyclingMultiSourcesExamplesIterable( ex_iterables, generator=generator, probabilities=self.probabilities, stopping_strategy=self.stopping_strategy, ) def shard_data_sources(self, worker_id: int, num_workers: int) -> "RandomlyCyclingMultiSourcesExamplesIterable": """Either keep only the requested shard, or propagate the request to the underlying iterable.""" return RandomlyCyclingMultiSourcesExamplesIterable( [iterable.shard_data_sources(worker_id, num_workers) for iterable in self.ex_iterables], self.generator, self.probabilities, self.stopping_strategy, ) class MappedExamplesIterable(_BaseExamplesIterable): def __init__( self, ex_iterable: _BaseExamplesIterable, function: 
Callable, with_indices: bool = False, input_columns: Optional[List[str]] = None, batched: bool = False, batch_size: Optional[int] = 1000, drop_last_batch: bool = False, remove_columns: Optional[List[str]] = None, fn_kwargs: Optional[dict] = None, formatting: Optional["FormattingConfig"] = None, format_type="deprecated", ): if format_type != "deprecated": warning_msg = "'format_type' is deprecated and will be removed in the next major version of datasets. " help_message = "Please use 'formatting=FormattingConfig(format_type=format_type)' instead." warnings.warn(warning_msg + help_message, category=FutureWarning, stacklevel=2) formatting = FormattingConfig(format_type=format_type) super().__init__() self.ex_iterable = ex_iterable self.function = function self.batched = batched self.batch_size = batch_size self.drop_last_batch = drop_last_batch self.remove_columns = remove_columns self.with_indices = with_indices self.input_columns = input_columns self.fn_kwargs = fn_kwargs or {} self.formatting = formatting if self.formatting and self.formatting.format_type == "arrow": self.iter_arrow = self._iter_arrow def __iter__(self): if self.formatting and self.formatting.format_type == "arrow": yield from ArrowExamplesIterable(self._iter_arrow, {}) else: yield from self._iter() def _iter(self): iterator = iter(self.ex_iterable) current_idx = 0 if self.formatting: formatter = get_formatter(self.formatting.format_type) format_dict = ( formatter.recursive_tensorize if isinstance(formatter, TensorFormatter) else cast_to_python_objects ) else: format_dict = None if self.batched: for key, example in iterator: # If `batched`, first build the batch, if `batch_size` is None or <=0, then the batch is the whole dataset iterator_batch = ( iterator if self.batch_size is None or self.batch_size <= 0 else islice(iterator, self.batch_size - 1) ) key_examples_list = [(key, example)] + list(iterator_batch) keys, examples = zip(*key_examples_list) if ( self.drop_last_batch and self.batch_size is not None and self.batch_size > 0 and len(examples) < self.batch_size ): # ignore last batch return batch = _examples_to_batch(examples) batch = format_dict(batch) if format_dict else batch # then apply the transform inputs = batch function_args = [inputs] if self.input_columns is None else [inputs[col] for col in self.input_columns] if self.with_indices: function_args.append([current_idx + i for i in range(len(key_examples_list))]) transformed_batch = dict(batch) # this will be updated with the function output transformed_batch.update(self.function(*function_args, **self.fn_kwargs)) # then remove the unwanted columns if self.remove_columns: for c in self.remove_columns: del transformed_batch[c] if transformed_batch: first_col = next(iter(transformed_batch)) bad_cols = [ col for col in transformed_batch if len(transformed_batch[col]) != len(transformed_batch[first_col]) ] if bad_cols: raise ValueError( f"Column lengths mismatch: columns {bad_cols} have length {[len(transformed_batch[col]) for col in bad_cols]} while {first_col} has length {len(transformed_batch[first_col])}." 
) # the new key is the concatenation of the examples keys from the batch new_key = "_".join(str(key) for key in keys) # yield one example at a time from the transformed batch for example in _batch_to_examples(transformed_batch): yield new_key, example current_idx += 1 else: for key, example in iterator: # If not batched, we can apply the transform and yield the example directly # first copy the example, since we might drop some keys example = dict(example) example = format_dict(example) if format_dict else example # then apply the transform inputs = example function_args = [inputs] if self.input_columns is None else [inputs[col] for col in self.input_columns] if self.with_indices: function_args.append(current_idx) transformed_example = dict(example) # this will be updated with the function output transformed_example.update(self.function(*function_args, **self.fn_kwargs)) # then we remove the unwanted columns if self.remove_columns: for c in self.remove_columns: del transformed_example[c] yield key, transformed_example current_idx += 1 def _iter_arrow(self) -> Iterator[Tuple[Key, pa.Table]]: if self.ex_iterable.iter_arrow: iterator = _batch_arrow_tables( self.ex_iterable.iter_arrow(), batch_size=self.batch_size if self.batched else 1, drop_last_batch=self.drop_last_batch, ) else: iterator = _convert_to_arrow( self.ex_iterable, batch_size=self.batch_size if self.batched else 1, drop_last_batch=self.drop_last_batch, ) current_idx = 0 for key, pa_table in iterator: # first build the batch function_args = [pa_table] if self.input_columns is None else [pa_table[col] for col in self.input_columns] if self.with_indices: if self.batched: function_args.append([current_idx + i for i in range(len(pa_table))]) else: function_args.append(current_idx) # then apply the transform output_table = self.function(*function_args, **self.fn_kwargs) if not isinstance(output_table, pa.Table): raise TypeError( f"Provided `function` which is applied to pyarrow tables returns a variable of type {type(output_table)}. Make sure provided `function` returns a a pyarrow table to update the dataset." 
) # we don't need to merge results for consistency with Dataset.map which merges iif both input and output are dicts # then remove the unwanted columns if self.remove_columns: for column in self.remove_columns: if column in output_table.column_names: output_table = output_table.remove_column(output_table.column_names.index(column)) # return output yield key, output_table current_idx += len(pa_table) def shuffle_data_sources(self, generator: np.random.Generator) -> "MappedExamplesIterable": """Shuffle the wrapped examples iterable.""" return MappedExamplesIterable( self.ex_iterable.shuffle_data_sources(generator), function=self.function, with_indices=self.with_indices, input_columns=self.input_columns, batched=self.batched, batch_size=self.batch_size, drop_last_batch=self.drop_last_batch, remove_columns=self.remove_columns, fn_kwargs=self.fn_kwargs, formatting=self.formatting, ) def shard_data_sources(self, worker_id: int, num_workers: int) -> "MappedExamplesIterable": """Keep only the requested shard.""" return MappedExamplesIterable( self.ex_iterable.shard_data_sources(worker_id, num_workers), function=self.function, with_indices=self.with_indices, input_columns=self.input_columns, batched=self.batched, batch_size=self.batch_size, drop_last_batch=self.drop_last_batch, remove_columns=self.remove_columns, fn_kwargs=self.fn_kwargs, formatting=self.formatting, ) @property def n_shards(self) -> int: return self.ex_iterable.n_shards class FilteredExamplesIterable(_BaseExamplesIterable): def __init__( self, ex_iterable: _BaseExamplesIterable, function: Callable, with_indices: bool = False, input_columns: Optional[List[str]] = None, batched: bool = False, batch_size: Optional[int] = 1000, fn_kwargs: Optional[dict] = None, formatting: Optional["FormattingConfig"] = None, format_type="deprecated", ): if format_type != "deprecated": warning_msg = "'format_type' is deprecated and will be removed in the next major version of datasets. " help_message = "Please use 'formatting=FormattingConfig(format_type=format_type)' instead." 
warnings.warn(warning_msg + help_message, category=FutureWarning, stacklevel=2) formatting = FormattingConfig(format_type=format_type) super().__init__() self.ex_iterable = ex_iterable self.function = function self.batched = batched self.batch_size = batch_size self.with_indices = with_indices self.input_columns = input_columns self.fn_kwargs = fn_kwargs or {} self.formatting = formatting if self.formatting and self.formatting.format_type == "arrow": self.iter_arrow = self._iter_arrow def __iter__(self): if self.formatting and self.formatting.format_type == "arrow": yield from ArrowExamplesIterable(self._iter_arrow, {}) else: yield from self._iter() def _iter(self): if self.formatting: formatter = get_formatter(self.formatting.format_type) format_dict = ( formatter.recursive_tensorize if isinstance(formatter, TensorFormatter) else cast_to_python_objects ) else: format_dict = None iterator = iter(self.ex_iterable) current_idx = 0 if self.batched: for key, example in iterator: # If `batched`, first build the batch, if `batch_size` is None or <=0, then the batch is the whole dataset iterator_batch = ( iterator if self.batch_size is None or self.batch_size <= 0 else islice(iterator, self.batch_size - 1) ) key_examples_list = [(key, example)] + list(iterator_batch) keys, examples = zip(*key_examples_list) batch = _examples_to_batch(examples) batch = format_dict(batch) if format_dict else batch # then compute the mask for the batch inputs = batch function_args = [inputs] if self.input_columns is None else [inputs[col] for col in self.input_columns] if self.with_indices: function_args.append([current_idx + i for i in range(len(key_examples_list))]) mask = self.function(*function_args, **self.fn_kwargs) # yield one example at a time from the batch for key_example, to_keep in zip(key_examples_list, mask): if to_keep: yield key_example current_idx += 1 else: for key, example in iterator: # If not batched, we can apply the filtering function direcly example = dict(example) inputs = format_dict(example) if format_dict else example function_args = [inputs] if self.input_columns is None else [inputs[col] for col in self.input_columns] if self.with_indices: function_args.append(current_idx) to_keep = self.function(*function_args, **self.fn_kwargs) if to_keep: yield key, example current_idx += 1 def _iter_arrow(self): if self.ex_iterable.iter_arrow: iterator = _batch_arrow_tables( self.ex_iterable.iter_arrow(), batch_size=self.batch_size if self.batched else 1 ) else: iterator = _convert_to_arrow(self.ex_iterable, batch_size=self.batch_size if self.batched else 1) current_idx = 0 for key, pa_table in iterator: # first build the batch function_args = [pa_table] if self.input_columns is None else [pa_table[col] for col in self.input_columns] if self.with_indices: if self.batched: function_args.append([current_idx + i for i in range(len(pa_table))]) else: function_args.append(current_idx) # then apply the transform mask = self.function(*function_args, **self.fn_kwargs) # yield the filtered table if self.batched: yield key, pa_table.filter(mask) elif mask.as_py() if isinstance(mask, pa.BooleanScalar) else mask: yield key, pa_table current_idx += len(pa_table) def shuffle_data_sources(self, seed: Optional[int]) -> "FilteredExamplesIterable": """Shuffle the wrapped examples iterable.""" return FilteredExamplesIterable( self.ex_iterable.shuffle_data_sources(seed), function=self.function, with_indices=self.with_indices, input_columns=self.input_columns, batched=self.batched, batch_size=self.batch_size, ) def 
shard_data_sources(self, worker_id: int, num_workers: int) -> "FilteredExamplesIterable": """Keep only the requested shard.""" return FilteredExamplesIterable( self.ex_iterable.shard_data_sources(worker_id, num_workers), function=self.function, with_indices=self.with_indices, input_columns=self.input_columns, batched=self.batched, batch_size=self.batch_size, ) @property def n_shards(self) -> int: return self.ex_iterable.n_shards class BufferShuffledExamplesIterable(_BaseExamplesIterable): def __init__(self, ex_iterable: _BaseExamplesIterable, buffer_size: int, generator: np.random.Generator): super().__init__() self.ex_iterable = ex_iterable self.buffer_size = buffer_size self.generator = generator # TODO(QL): implement iter_arrow @staticmethod def _iter_random_indices(rng: np.random.Generator, buffer_size: int, random_batch_size=1000) -> Iterator[int]: while True: yield from (int(i) for i in rng.integers(0, buffer_size, size=random_batch_size)) def __iter__(self): buffer_size = self.buffer_size rng = deepcopy(self.generator) indices_iterator = self._iter_random_indices(rng, buffer_size) # this is the shuffle buffer that we keep in memory mem_buffer = [] for x in self.ex_iterable: if len(mem_buffer) == buffer_size: # if the buffer is full, pick and example from it i = next(indices_iterator) yield mem_buffer[i] mem_buffer[i] = x # replace the picked example by a new one else: # otherwise, keep filling the buffer mem_buffer.append(x) # when we run out of examples, we shuffle the remaining examples in the buffer and yield them rng.shuffle(mem_buffer) yield from mem_buffer def shuffle_data_sources(self, generator: np.random.Generator) -> "BufferShuffledExamplesIterable": """Shuffle the wrapped examples iterable as well as the shuffling buffer.""" return BufferShuffledExamplesIterable( self.ex_iterable.shuffle_data_sources(generator), buffer_size=self.buffer_size, generator=generator ) def shard_data_sources(self, worker_id: int, num_workers: int) -> "BufferShuffledExamplesIterable": """Keep only the requested shard.""" return BufferShuffledExamplesIterable( self.ex_iterable.shard_data_sources(worker_id, num_workers), buffer_size=self.buffer_size, generator=self.generator, ) @property def n_shards(self) -> int: return self.ex_iterable.n_shards class SkipExamplesIterable(_BaseExamplesIterable): def __init__(self, ex_iterable: _BaseExamplesIterable, n: int): super().__init__() self.ex_iterable = ex_iterable self.n = n # TODO(QL): implement iter_arrow def __iter__(self): yield from islice(self.ex_iterable, self.n, None) def shuffle_data_sources(self, generator: np.random.Generator) -> "SkipExamplesIterable": """Doesn't shuffle the wrapped examples iterable since it would skip examples from other shards instead.""" return self @property def n_shards(self) -> int: return self.ex_iterable.n_shards class TakeExamplesIterable(_BaseExamplesIterable): def __init__(self, ex_iterable: _BaseExamplesIterable, n: int): super().__init__() self.ex_iterable = ex_iterable self.n = n # TODO(QL): implement iter_arrow def __iter__(self): yield from islice(self.ex_iterable, self.n) def shuffle_data_sources(self, generator: np.random.Generator) -> "TakeExamplesIterable": """Doesn't shuffle the wrapped examples iterable since it would take examples from other shards instead.""" return self @staticmethod def split_number(num, n): quotient = num // n remainder = num % n result = [quotient] * n for i in range(remainder): result[i] += 1 return result def shard_data_sources(self, worker_id: int, num_workers: int) -> 
"TakeExamplesIterable": """Keep only the requested shard.""" return TakeExamplesIterable( self.ex_iterable.shard_data_sources(worker_id, num_workers), n=self.split_number(self.n, num_workers)[worker_id], ) @property def n_shards(self) -> int: return self.ex_iterable.n_shards def _apply_feature_types_on_example( example: dict, features: Features, token_per_repo_id: Dict[str, Union[str, bool, None]] ) -> dict: example = dict(example) # add missing columns for column_name in features: if column_name not in example: example[column_name] = None # we encode the example for ClassLabel feature types for example encoded_example = features.encode_example(example) # Decode example for Audio feature, e.g. decoded_example = features.decode_example(encoded_example, token_per_repo_id=token_per_repo_id) return decoded_example def _apply_feature_types_on_batch( batch: dict, features: Features, token_per_repo_id: Dict[str, Union[str, bool, None]] ) -> dict: batch = dict(batch) # add missing columns n_examples = len(batch[next(iter(batch))]) for column_name in features: if column_name not in batch: batch[column_name] = [None] * n_examples # we encode the batch for ClassLabel feature types for example encoded_batch = features.encode_batch(batch) # Decode batch for Audio feature, e.g. decoded_batch = features.decode_batch(encoded_batch, token_per_repo_id=token_per_repo_id) return decoded_batch class TypedExamplesIterable(_BaseExamplesIterable): def __init__( self, ex_iterable: _BaseExamplesIterable, features: Features, token_per_repo_id: Dict[str, Union[str, bool, None]], ): super().__init__() self.ex_iterable = ex_iterable self.features = features self.token_per_repo_id = token_per_repo_id if self.ex_iterable.iter_arrow is not None: self.iter_arrow = self._iter_arrow def __iter__(self): # Then for each example, `TypedExamplesIterable` automatically fills missing columns with None. # This is done with `_apply_feature_types_on_example`. for key, example in self.ex_iterable: yield ( key, _apply_feature_types_on_example(example, self.features, token_per_repo_id=self.token_per_repo_id), ) def _iter_arrow(self) -> Iterator[Tuple[Key, pa.Table]]: schema = self.features.arrow_schema for key, pa_table in self.ex_iterable.iter_arrow(): columns = set(pa_table.column_names) # add missing columns for column_name in self.features: if column_name not in columns: col = pa.NullArray.from_buffers(pa.null(), len(pa_table), [None]) pa_table = pa_table.append_column(column_name, col) if pa_table.schema != schema: pa_table = cast_table_to_features(pa_table, self.features) yield key, pa_table def shuffle_data_sources(self, generator: np.random.Generator) -> "TypedExamplesIterable": """Shuffle the wrapped examples iterable.""" return TypedExamplesIterable( self.ex_iterable.shuffle_data_sources(generator), features=self.features, token_per_repo_id=self.token_per_repo_id, ) def shard_data_sources(self, worker_id: int, num_workers: int) -> "TypedExamplesIterable": """Keep only the requested shard.""" return TypedExamplesIterable( self.ex_iterable.shard_data_sources(worker_id, num_workers), features=self.features, token_per_repo_id=self.token_per_repo_id, ) @property def n_shards(self) -> int: return self.ex_iterable.n_shards @dataclass class FormattingConfig: format_type: Optional[str] def __post_init__(self): if self.format_type == "pandas": raise NotImplementedError( "The 'pandas' formatting is not implemented for iterable datasets. You can use 'numpy' or 'arrow' instead." 
) @dataclass class ShufflingConfig: generator: np.random.Generator _original_seed: Optional[int] = None @dataclass class DistributedConfig: rank: int world_size: int def _maybe_add_torch_iterable_dataset_parent_class(cls): """Add torch.utils.data.IterableDataset as a parent class if 'torch' is available""" if config.TORCH_AVAILABLE: import torch.utils.data if torch.utils.data.IterableDataset not in cls.__bases__: cls.__bases__ += (torch.utils.data.IterableDataset,) class IterableDataset(DatasetInfoMixin): """A Dataset backed by an iterable.""" def __init__( self, ex_iterable: _BaseExamplesIterable, info: Optional[DatasetInfo] = None, split: Optional[NamedSplit] = None, formatting: Optional[FormattingConfig] = None, shuffling: Optional[ShufflingConfig] = None, distributed: Optional[DistributedConfig] = None, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None, format_type="deprecated", ): if distributed and distributed.world_size > 1 and shuffling and shuffling._original_seed is None: raise RuntimeError( "The dataset doesn't have a fixed random seed across nodes to shuffle and split the list of dataset shards by node. " "Please pass e.g. `seed=42` in `.shuffle()` to make all the nodes use the same seed. " ) if format_type != "deprecated": warning_msg = "'format_type' is deprecated and will be removed in the next major version of datasets. " help_message = "Please use 'formatting=FormattingConfig(format_type=format_type)' instead." warnings.warn(warning_msg + help_message, category=FutureWarning, stacklevel=2) formatting = FormattingConfig(format_type=format_type) info = info.copy() if info is not None else DatasetInfo() DatasetInfoMixin.__init__(self, info=info, split=split) self._ex_iterable = ex_iterable self._formatting = formatting self._shuffling = shuffling self._distributed = distributed self._epoch = 0 self._token_per_repo_id: Dict[str, Union[str, bool, None]] = token_per_repo_id or {} _maybe_add_torch_iterable_dataset_parent_class(self.__class__) def __repr__(self): return f"IterableDataset({{\n features: {list(self._info.features.keys()) if self._info.features is not None else 'Unknown'},\n n_shards: {self.n_shards}\n}})" def __getstate__(self): return self.__dict__ def __setstate__(self, d): self.__dict__ = d # Re-add torch iterable dataset as a parent class, since dynamically added parent classes are not kept when pickling _maybe_add_torch_iterable_dataset_parent_class(self.__class__) def _head(self, n=5): return _examples_to_batch(list(self.take(n))) def _effective_generator(self): if self._shuffling and self._epoch == 0: return self._shuffling.generator elif self._shuffling: # Create effective seed using self._epoch (we subtract in order to avoir overflow in long_scalars) effective_seed = deepcopy(self._shuffling.generator).integers(0, 1 << 63) - self._epoch effective_seed = (1 << 63) + effective_seed if effective_seed < 0 else effective_seed return np.random.default_rng(effective_seed) else: raise ValueError("This dataset is not shuffled") @property def n_shards(self) -> int: if self._distributed and self._ex_iterable.n_shards % self._distributed.world_size == 0: return self._ex_iterable.n_shards // self._distributed.world_size return self._ex_iterable.n_shards def _iter_pytorch(self): ex_iterable = self._prepare_ex_iterable_for_iteration() # Fix for fsspec when using multiprocess to avoid hanging in the ML training loop. 
(only required for fsspec >= 0.9.0) # See https://github.com/fsspec/gcsfs/issues/379 fsspec.asyn.reset_lock() # check if there aren't too many workers import torch.utils.data worker_info = torch.utils.data.get_worker_info() if self._is_main_process() and ex_iterable.n_shards < worker_info.num_workers: logger.warning( f"Too many dataloader workers: {worker_info.num_workers} (max is dataset.n_shards={ex_iterable.n_shards}). " f"Stopping {worker_info.num_workers - ex_iterable.n_shards} dataloader workers." ) logger.info( f"To parallelize data loading, we give each process some shards (or data sources) to process. " f"Therefore it's unnecessary to have a number of workers greater than dataset.n_shards={ex_iterable.n_shards}. " f"To enable more parallelism, please split the dataset in more files than {ex_iterable.n_shards}." ) # split workload _log_prefix = f"node#{self._distributed.rank} " if self._distributed else "" shards_indices = ex_iterable.split_shard_indices_by_worker(worker_info.id, worker_info.num_workers) if shards_indices: logger.debug( f"{_log_prefix}dataloader worker#{worker_info.id}, ': Starting to iterate over {len(shards_indices)}/{ex_iterable.n_shards} shards." ) ex_iterable = ex_iterable.shard_data_sources(worker_id=worker_info.id, num_workers=worker_info.num_workers) if self._formatting: formatter = get_formatter(self._formatting.format_type, features=self.features) format_dict = ( formatter.recursive_tensorize if isinstance(formatter, TensorFormatter) else cast_to_python_objects ) else: format_dict = None if self._formatting and (ex_iterable.iter_arrow or self._formatting == "arrow"): if ex_iterable.iter_arrow: iterator = _batch_arrow_tables(ex_iterable.iter_arrow(), batch_size=1) else: iterator = _convert_to_arrow(ex_iterable, batch_size=1) for key, pa_table in iterator: yield formatter.format_row(pa_table) return else: for key, example in ex_iterable: if self.features: # `IterableDataset` automatically fills missing columns with None. # This is done with `_apply_feature_types_on_example`. example = _apply_feature_types_on_example( example, self.features, token_per_repo_id=self._token_per_repo_id ) yield format_dict(example) if format_dict else example logger.debug( f"{_log_prefix}dataloader worker#{worker_info.id}, ': Finished iterating over {len(shards_indices)}/{ex_iterable.n_shards} shards." ) else: logger.debug( f"{_log_prefix}dataloader worker#{worker_info.id}, ': Stopping... Number of dataset shards < num_workers ({ex_iterable.n_shards}<{worker_info.num_workers})." ) def _is_main_process(self): if self._distributed and self._distributed.rank > 0: return False if "torch" in sys.modules: import torch.utils.data worker_info = torch.utils.data.get_worker_info() if worker_info is not None and worker_info.id > 0: return False return True def _prepare_ex_iterable_for_iteration(self) -> _BaseExamplesIterable: if self._shuffling: ex_iterable = self._ex_iterable.shuffle_data_sources(self._effective_generator()) else: ex_iterable = self._ex_iterable if self._distributed: rank = self._distributed.rank world_size = self._distributed.world_size if ex_iterable.n_shards % world_size == 0: if self._is_main_process(): n_shards_per_node = ex_iterable.n_shards // world_size plural = "s" if n_shards_per_node > 1 else "" logger.info( f"Assigning {n_shards_per_node} shard{plural} (or data source{plural}) of the dataset to each node." 
) ex_iterable = ex_iterable.shard_data_sources(rank, world_size) else: if self._is_main_process(): logger.info( f"Assigning 1 out of {world_size} examples of the dataset to each node. The others are skipped during the iteration." ) logger.info( f"It is more optimized to distribute the dataset shards (or data sources) across nodes. " f"You can do that by using a dataset with number of shards that is a factor of world_size={world_size}. " f"The current dataset has {ex_iterable.n_shards} which is not a factor of {world_size}" ) ex_iterable = StepExamplesIterable(ex_iterable, step=world_size, offset=rank) return ex_iterable def __iter__(self): if "torch" in sys.modules: import torch.utils.data worker_info = torch.utils.data.get_worker_info() if isinstance(self, torch.utils.data.IterableDataset) and worker_info is not None: # We're a torch.utils.data.IterableDataset in a PyTorch worker process yield from self._iter_pytorch() return ex_iterable = self._prepare_ex_iterable_for_iteration() if self._formatting: formatter = get_formatter(self._formatting.format_type, features=self.features) format_dict = ( formatter.recursive_tensorize if isinstance(formatter, TensorFormatter) else cast_to_python_objects ) else: format_dict = None if self._formatting and (ex_iterable.iter_arrow or self._formatting.format_type == "arrow"): if ex_iterable.iter_arrow: iterator = _batch_arrow_tables(ex_iterable.iter_arrow(), batch_size=1) else: iterator = _convert_to_arrow(ex_iterable, batch_size=1) for key, pa_table in iterator: yield formatter.format_row(pa_table) return for key, example in ex_iterable: if self.features: # `IterableDataset` automatically fills missing columns with None. # This is done with `_apply_feature_types_on_example`. example = _apply_feature_types_on_example( example, self.features, token_per_repo_id=self._token_per_repo_id ) yield format_dict(example) if format_dict else example def iter(self, batch_size: int, drop_last_batch: bool = False): """Iterate through the batches of size `batch_size`. Args: batch_size (:obj:`int`): size of each batch to yield. drop_last_batch (:obj:`bool`, default `False`): Whether a last batch smaller than the batch_size should be dropped """ if self._formatting: formatter = get_formatter(self._formatting.format_type, features=self.features) format_dict = ( formatter.recursive_tensorize if isinstance(formatter, TensorFormatter) else cast_to_python_objects ) else: format_dict = None ex_iterable = self._prepare_ex_iterable_for_iteration() if self._formatting and (ex_iterable.iter_arrow or self._formatting == "arrow"): if ex_iterable.iter_arrow: iterator = _batch_arrow_tables( ex_iterable.iter_arrow(), batch_size=batch_size, drop_last_batch=drop_last_batch ) else: iterator = _convert_to_arrow(ex_iterable, batch_size=batch_size, drop_last_batch=drop_last_batch) for key, pa_table in iterator: yield formatter.format_batch(pa_table) return iterator = iter(ex_iterable) for key, example in iterator: # If batched, first build the batch examples = [example] + [example for key, example in islice(iterator, batch_size - 1)] if drop_last_batch and len(examples) < batch_size: # ignore last batch return batch = _examples_to_batch(examples) if self.features: # `IterableDataset` automatically fills missing columns with None. # This is done with `_apply_feature_types_on_batch`. 
batch = _apply_feature_types_on_batch(batch, self.features, token_per_repo_id=self._token_per_repo_id) yield format_dict(batch) if format_dict else batch @staticmethod def from_generator( generator: Callable, features: Optional[Features] = None, gen_kwargs: Optional[dict] = None, ) -> "IterableDataset": """Create an Iterable Dataset from a generator. Args: generator (`Callable`): A generator function that `yields` examples. features (`Features`, *optional*): Dataset features. gen_kwargs(`dict`, *optional*): Keyword arguments to be passed to the `generator` callable. You can define a sharded iterable dataset by passing the list of shards in `gen_kwargs`. This can be used to improve shuffling and when iterating over the dataset with multiple workers. Returns: `IterableDataset` Example: ```py >>> def gen(): ... yield {"text": "Good", "label": 0} ... yield {"text": "Bad", "label": 1} ... >>> ds = IterableDataset.from_generator(gen) ``` ```py >>> def gen(shards): ... for shard in shards: ... with open(shard) as f: ... for line in f: ... yield {"line": line} ... >>> shards = [f"data{i}.txt" for i in range(32)] >>> ds = IterableDataset.from_generator(gen, gen_kwargs={"shards": shards}) >>> ds = ds.shuffle(seed=42, buffer_size=10_000) # shuffles the shards order + uses a shuffle buffer >>> from torch.utils.data import DataLoader >>> dataloader = DataLoader(ds.with_format("torch"), num_workers=4) # give each worker a subset of 32/4=8 shards ``` """ from .io.generator import GeneratorDatasetInputStream return GeneratorDatasetInputStream( generator=generator, features=features, gen_kwargs=gen_kwargs, streaming=True, ).read() @staticmethod def from_spark( df: "pyspark.sql.DataFrame", split: Optional[NamedSplit] = None, features: Optional[Features] = None, **kwargs, ) -> "IterableDataset": """Create an IterableDataset from Spark DataFrame. The dataset is streamed to the driver in batches. Args: df (`pyspark.sql.DataFrame`): The DataFrame containing the desired data. split (`NamedSplit`, *optional*): Split name to be assigned to the dataset. features (`Features`, *optional*): Dataset features. Returns: [`IterableDataset`] Example: ```py >>> df = spark.createDataFrame( >>> data=[[1, "Elia"], [2, "Teo"], [3, "Fang"]], >>> columns=["id", "name"], >>> ) >>> ds = IterableDataset.from_spark(df) ``` """ from .io.spark import SparkDatasetReader if sys.platform == "win32": raise EnvironmentError("IterableDataset.from_spark is not currently supported on Windows") return SparkDatasetReader( df, split=split, features=features, streaming=True, **kwargs, ).read() @staticmethod def from_file(filename: str) -> "IterableDataset": """Instantiate a IterableDataset from Arrow table at filename. Args: filename (`str`): File name of the dataset. Returns: [`IterableDataset`] """ pa_table_schema = read_schema_from_file(filename) inferred_features = Features.from_arrow_schema(pa_table_schema) ex_iterable = ArrowExamplesIterable(Dataset._generate_tables_from_cache_file, kwargs={"filename": filename}) return IterableDataset(ex_iterable=ex_iterable, info=DatasetInfo(features=inferred_features)) def with_format( self, type: Optional[str] = None, ) -> "IterableDataset": """ Return a dataset with the specified format. Supported formats: "arrow", or None for regular python objects. The other formats are currently not implemented. 
Args: type (`str`, optional, default None): if set to "torch", the returned dataset will be a subclass of torch.utils.data.IterableDataset to be used in a DataLoader """ type = get_format_type_from_alias(type) # TODO(QL): add format_kwargs # TODO(QL): add format_columns and return_all_columns # TODO(QL): add pandas format return IterableDataset( ex_iterable=self._ex_iterable, info=self._info.copy(), split=self._split, formatting=FormattingConfig(format_type=type), shuffling=copy.deepcopy(self._shuffling), distributed=copy.deepcopy(self._distributed), token_per_repo_id=self._token_per_repo_id, ) def map( self, function: Optional[Callable] = None, with_indices: bool = False, input_columns: Optional[Union[str, List[str]]] = None, batched: bool = False, batch_size: Optional[int] = 1000, drop_last_batch: bool = False, remove_columns: Optional[Union[str, List[str]]] = None, features: Optional[Features] = None, fn_kwargs: Optional[dict] = None, ) -> "IterableDataset": """ Apply a function to all the examples in the iterable dataset (individually or in batches) and update them. If your function returns a column that already exists, then it overwrites it. The function is applied on-the-fly on the examples when iterating over the dataset. You can specify whether the function should be batched or not with the `batched` parameter: - If batched is `False`, then the function takes 1 example in and should return 1 example. An example is a dictionary, e.g. `{"text": "Hello there !"}`. - If batched is `True` and `batch_size` is 1, then the function takes a batch of 1 example as input and can return a batch with 1 or more examples. A batch is a dictionary, e.g. a batch of 1 example is {"text": ["Hello there !"]}. - If batched is `True` and `batch_size` is `n` > 1, then the function takes a batch of `n` examples as input and can return a batch with `n` examples, or with an arbitrary number of examples. Note that the last batch may have less than `n` examples. A batch is a dictionary, e.g. a batch of `n` examples is `{"text": ["Hello there !"] * n}`. Args: function (`Callable`, *optional*, defaults to `None`): Function applied on-the-fly on the examples when you iterate on the dataset. It must have one of the following signatures: - `function(example: Dict[str, Any]) -> Dict[str, Any]` if `batched=False` and `with_indices=False` - `function(example: Dict[str, Any], idx: int) -> Dict[str, Any]` if `batched=False` and `with_indices=True` - `function(batch: Dict[str, List]) -> Dict[str, List]` if `batched=True` and `with_indices=False` - `function(batch: Dict[str, List], indices: List[int]) -> Dict[str, List]` if `batched=True` and `with_indices=True` For advanced usage, the function can also return a `pyarrow.Table`. Moreover if your function returns nothing (`None`), then `map` will run your function and return the dataset unchanged. If no function is provided, default to identity function: `lambda x: x`. with_indices (`bool`, defaults to `False`): Provide example indices to `function`. Note that in this case the signature of `function` should be `def function(example, idx[, rank]): ...`. input_columns (`Optional[Union[str, List[str]]]`, defaults to `None`): The columns to be passed into `function` as positional arguments. If `None`, a dict mapping to all formatted columns is passed as one argument. batched (`bool`, defaults to `False`): Provide batch of examples to `function`. batch_size (`int`, *optional*, defaults to `1000`): Number of examples per batch provided to `function` if `batched=True`. 
`batch_size <= 0` or `batch_size == None` then provide the full dataset as a single batch to `function`. drop_last_batch (`bool`, defaults to `False`): Whether a last batch smaller than the batch_size should be dropped instead of being processed by the function. remove_columns (`[List[str]]`, *optional*, defaults to `None`): Remove a selection of columns while doing the mapping. Columns will be removed before updating the examples with the output of `function`, i.e. if `function` is adding columns with names in `remove_columns`, these columns will be kept. features (`[Features]`, *optional*, defaults to `None`): Feature types of the resulting dataset. fn_kwargs (`Dict`, *optional*, default `None`): Keyword arguments to be passed to `function`. Example: ```py >>> from datasets import load_dataset >>> ds = load_dataset("rotten_tomatoes", split="train", streaming=True) >>> def add_prefix(example): ... example["text"] = "Review: " + example["text"] ... return example >>> ds = ds.map(add_prefix) >>> list(ds.take(3)) [{'label': 1, 'text': 'Review: the rock is destined to be the 21st century\'s new " conan " and that he\'s going to make a splash even greater than arnold schwarzenegger , jean-claud van damme or steven segal .'}, {'label': 1, 'text': 'Review: the gorgeously elaborate continuation of " the lord of the rings " trilogy is so huge that a column of words cannot adequately describe co-writer/director peter jackson\'s expanded vision of j . r . r . tolkien\'s middle-earth .'}, {'label': 1, 'text': 'Review: effective but too-tepid biopic'}] ``` """ if isinstance(input_columns, str): input_columns = [input_columns] if isinstance(remove_columns, str): remove_columns = [remove_columns] if function is None: function = identity_func if fn_kwargs is None: fn_kwargs = {} ex_iterable = MappedExamplesIterable( TypedExamplesIterable(self._ex_iterable, self._info.features, token_per_repo_id=self._token_per_repo_id) if self._info.features is not None else self._ex_iterable, function=function, with_indices=with_indices, input_columns=input_columns, batched=batched, batch_size=batch_size, drop_last_batch=drop_last_batch, remove_columns=remove_columns, fn_kwargs=fn_kwargs, formatting=self._formatting, ) info = self.info.copy() info.features = features return IterableDataset( ex_iterable=ex_iterable, info=info, split=self._split, formatting=self._formatting, shuffling=copy.deepcopy(self._shuffling), distributed=copy.deepcopy(self._distributed), token_per_repo_id=self._token_per_repo_id, ) def filter( self, function: Optional[Callable] = None, with_indices=False, input_columns: Optional[Union[str, List[str]]] = None, batched: bool = False, batch_size: Optional[int] = 1000, fn_kwargs: Optional[dict] = None, ) -> "IterableDataset": """Apply a filter function to all the elements so that the dataset only includes examples according to the filter function. The filtering is done on-the-fly when iterating over the dataset. Args: function (`Callable`): Callable with one of the following signatures: - `function(example: Dict[str, Any]) -> bool` if `with_indices=False, batched=False` - `function(example: Dict[str, Any], indices: int) -> bool` if `with_indices=True, batched=False` - `function(example: Dict[str, List]) -> List[bool]` if `with_indices=False, batched=True` - `function(example: Dict[str, List], indices: List[int]) -> List[bool]` if `with_indices=True, batched=True` If no function is provided, defaults to an always True function: `lambda x: True`. 
with_indices (`bool`, defaults to `False`): Provide example indices to `function`. Note that in this case the signature of `function` should be `def function(example, idx): ...`. input_columns (`str` or `List[str]`, *optional*): The columns to be passed into `function` as positional arguments. If `None`, a dict mapping to all formatted columns is passed as one argument. batched (`bool`, defaults to `False`): Provide batch of examples to `function`. batch_size (`int`, *optional*, default `1000`): Number of examples per batch provided to `function` if `batched=True`. fn_kwargs (`Dict`, *optional*, default `None`): Keyword arguments to be passed to `function`. Example: ```py >>> from datasets import load_dataset >>> ds = load_dataset("rotten_tomatoes", split="train", streaming=True) >>> ds = ds.filter(lambda x: x["label"] == 0) >>> list(ds.take(3)) [{'label': 0, 'movie_review': 'simplistic , silly and tedious .'}, {'label': 0, 'movie_review': "it's so laddish and juvenile , only teenage boys could possibly find it funny ."}, {'label': 0, 'movie_review': 'exploitative and largely devoid of the depth or sophistication that would make watching such a graphic treatment of the crimes bearable .'}] ``` """ if isinstance(input_columns, str): input_columns = [input_columns] # TODO(QL): keep the features (right now if we keep it it would call decode_example again on an already decoded example) info = copy.deepcopy(self._info) info.features = None # We need the examples to be decoded for certain feature types like Image or Audio, so we use TypedExamplesIterable here ex_iterable = FilteredExamplesIterable( TypedExamplesIterable(self._ex_iterable, self._info.features, token_per_repo_id=self._token_per_repo_id) if self._info.features is not None else self._ex_iterable, function=function, with_indices=with_indices, input_columns=input_columns, batched=batched, batch_size=batch_size, fn_kwargs=fn_kwargs, formatting=self._formatting, ) return IterableDataset( ex_iterable=ex_iterable, info=info, split=self._split, formatting=self._formatting, shuffling=copy.deepcopy(self._shuffling), distributed=copy.deepcopy(self._distributed), token_per_repo_id=self._token_per_repo_id, ) def shuffle( self, seed=None, generator: Optional[np.random.Generator] = None, buffer_size: int = 1000 ) -> "IterableDataset": """ Randomly shuffles the elements of this dataset. This dataset fills a buffer with `buffer_size` elements, then randomly samples elements from this buffer, replacing the selected elements with new elements. For perfect shuffling, a buffer size greater than or equal to the full size of the dataset is required. For instance, if your dataset contains 10,000 elements but `buffer_size` is set to 1000, then `shuffle` will initially select a random element from only the first 1000 elements in the buffer. Once an element is selected, its space in the buffer is replaced by the next (i.e. 1,001-st) element, maintaining the 1000 element buffer. If the dataset is made of several shards, it also does shuffle the order of the shards. However if the order has been fixed by using [`~datasets.IterableDataset.skip`] or [`~datasets.IterableDataset.take`] then the order of the shards is kept unchanged. Args: seed (`int`, *optional*, defaults to `None`): Random seed that will be used to shuffle the dataset. It is used to sample from the shuffle buffer and also to shuffle the data shards. generator (`numpy.random.Generator`, *optional*): Numpy random Generator to use to compute the permutation of the dataset rows. 
If `generator=None` (default), uses `np.random.default_rng` (the default BitGenerator (PCG64) of NumPy). buffer_size (`int`, defaults to `1000`): Size of the buffer. Example: ```py >>> from datasets import load_dataset >>> ds = load_dataset("rotten_tomatoes", split="train", streaming=True) >>> list(ds.take(3)) [{'label': 1, 'text': 'the rock is destined to be the 21st century\'s new " conan " and that he\'s going to make a splash even greater than arnold schwarzenegger , jean-claud van damme or steven segal .'}, {'label': 1, 'text': 'the gorgeously elaborate continuation of " the lord of the rings " trilogy is so huge that a column of words cannot adequately describe co-writer/director peter jackson\'s expanded vision of j . r . r . tolkien\'s middle-earth .'}, {'label': 1, 'text': 'effective but too-tepid biopic'}] >>> shuffled_ds = ds.shuffle(seed=42) >>> list(shuffled_ds.take(3)) [{'label': 1, 'text': "a sports movie with action that's exciting on the field and a story you care about off it ."}, {'label': 1, 'text': 'at its best , the good girl is a refreshingly adult take on adultery . . .'}, {'label': 1, 'text': "sam jones became a very lucky filmmaker the day wilco got dropped from their record label , proving that one man's ruin may be another's fortune ."}] ``` """ if generator is None: generator = np.random.default_rng(seed) else: generator = deepcopy(generator) shuffling = ShufflingConfig(generator=generator, _original_seed=seed) return IterableDataset( ex_iterable=BufferShuffledExamplesIterable( self._ex_iterable, buffer_size=buffer_size, generator=generator ).shuffle_data_sources(generator), info=self._info.copy(), split=self._split, formatting=self._formatting, shuffling=shuffling, distributed=copy.deepcopy(self._distributed), token_per_repo_id=self._token_per_repo_id, ) def set_epoch(self, epoch: int): self._epoch = epoch def skip(self, n) -> "IterableDataset": """ Create a new [`IterableDataset`] that skips the first `n` elements. Args: n (`int`): Number of elements to skip. Example: ```py >>> from datasets import load_dataset >>> ds = load_dataset("rotten_tomatoes", split="train", streaming=True) >>> list(ds.take(3)) [{'label': 1, 'text': 'the rock is destined to be the 21st century\'s new " conan " and that he\'s going to make a splash even greater than arnold schwarzenegger , jean-claud van damme or steven segal .'}, {'label': 1, 'text': 'the gorgeously elaborate continuation of " the lord of the rings " trilogy is so huge that a column of words cannot adequately describe co-writer/director peter jackson\'s expanded vision of j . r . r . tolkien\'s middle-earth .'}, {'label': 1, 'text': 'effective but too-tepid biopic'}] >>> ds = ds.skip(1) >>> list(ds.take(3)) [{'label': 1, 'text': 'the gorgeously elaborate continuation of " the lord of the rings " trilogy is so huge that a column of words cannot adequately describe co-writer/director peter jackson\'s expanded vision of j . r . r . 
tolkien\'s middle-earth .'}, {'label': 1, 'text': 'effective but too-tepid biopic'}, {'label': 1, 'text': 'if you sometimes like to go to the movies to have fun , wasabi is a good place to start .'}] ``` """ ex_iterable = SkipExamplesIterable(self._ex_iterable, n) return IterableDataset( ex_iterable=ex_iterable, info=self._info.copy(), split=self._split, formatting=self._formatting, shuffling=copy.deepcopy(self._shuffling), distributed=copy.deepcopy(self._distributed), token_per_repo_id=self._token_per_repo_id, ) def take(self, n) -> "IterableDataset": """ Create a new [`IterableDataset`] with only the first `n` elements. Args: n (`int`): Number of elements to take. Example: ```py >>> from datasets import load_dataset >>> ds = load_dataset("rotten_tomatoes", split="train", streaming=True) >>> small_ds = ds.take(2) >>> list(small_ds) [{'label': 1, 'text': 'the rock is destined to be the 21st century\'s new " conan " and that he\'s going to make a splash even greater than arnold schwarzenegger , jean-claud van damme or steven segal .'}, {'label': 1, 'text': 'the gorgeously elaborate continuation of " the lord of the rings " trilogy is so huge that a column of words cannot adequately describe co-writer/director peter jackson\'s expanded vision of j . r . r . tolkien\'s middle-earth .'}] ``` """ ex_iterable = TakeExamplesIterable(self._ex_iterable, n) return IterableDataset( ex_iterable=ex_iterable, info=self._info.copy(), split=self._split, formatting=self._formatting, shuffling=copy.deepcopy(self._shuffling), distributed=copy.deepcopy(self._distributed), token_per_repo_id=self._token_per_repo_id, ) @property def column_names(self) -> Optional[List[str]]: """Names of the columns in the dataset. Example: ```py >>> from datasets import load_dataset >>> ds = load_dataset("rotten_tomatoes", split="validation", streaming=True) >>> ds.column_names ['text', 'label'] ``` """ return list(self._info.features.keys()) if self._info.features is not None else None def add_column(self, name: str, column: Union[list, np.array]) -> "IterableDataset": """Add column to Dataset. Args: name (str): Column name. column (list or np.array): Column data to be added. Returns: `IterableDataset` """ return self.map(partial(add_column_fn, name=name, column=column), with_indices=True) def rename_column(self, original_column_name: str, new_column_name: str) -> "IterableDataset": """ Rename a column in the dataset, and move the features associated to the original column under the new column name. Args: original_column_name (`str`): Name of the column to rename. new_column_name (`str`): New name for the column. Returns: `IterableDataset`: A copy of the dataset with a renamed column. 
Example: ```py >>> from datasets import load_dataset >>> ds = load_dataset("rotten_tomatoes", split="train", streaming=True) >>> next(iter(ds)) {'label': 1, 'text': 'the rock is destined to be the 21st century\'s new " conan " and that he\'s going to make a splash even greater than arnold schwarzenegger , jean-claud van damme or steven segal .'} >>> ds = ds.rename_column("text", "movie_review") >>> next(iter(ds)) {'label': 1, 'movie_review': 'the rock is destined to be the 21st century\'s new " conan " and that he\'s going to make a splash even greater than arnold schwarzenegger , jean-claud van damme or steven segal .'} ``` """ return self.rename_columns({original_column_name: new_column_name}) def rename_columns(self, column_mapping: Dict[str, str]) -> "IterableDataset": """ Rename several columns in the dataset, and move the features associated to the original columns under the new column names. Args: column_mapping (`Dict[str, str]`): A mapping of columns to rename to their new names Returns: `IterableDataset`: A copy of the dataset with renamed columns """ original_features = self._info.features.copy() if self._info.features else None ds_iterable = self.map( partial(_rename_columns_fn, column_mapping=column_mapping), remove_columns=list(column_mapping) ) if original_features is not None: ds_iterable._info.features = Features( { column_mapping[col] if col in column_mapping.keys() else col: feature for col, feature in original_features.items() } ) # check that it's still valid, especially with regard to task templates try: ds_iterable._info.copy() except ValueError: ds_iterable._info.task_templates = None return ds_iterable def remove_columns(self, column_names: Union[str, List[str]]) -> "IterableDataset": """ Remove one or several column(s) in the dataset and the features associated to them. The removal is done on-the-fly on the examples when iterating over the dataset. Args: column_names (`Union[str, List[str]]`): Name of the column(s) to remove. Returns: `IterableDataset`: A copy of the dataset object without the columns to remove. Example: ```py >>> from datasets import load_dataset >>> ds = load_dataset("rotten_tomatoes", split="train", streaming=True) >>> next(iter(ds)) {'text': 'the rock is destined to be the 21st century\'s new " conan " and that he\'s going to make a splash even greater than arnold schwarzenegger , jean-claud van damme or steven segal .', 'label': 1} >>> ds = ds.remove_columns("label") >>> next(iter(ds)) {'text': 'the rock is destined to be the 21st century\'s new " conan " and that he\'s going to make a splash even greater than arnold schwarzenegger , jean-claud van damme or steven segal .'} ``` """ original_features = self._info.features.copy() if self._info.features else None ds_iterable = self.map(remove_columns=column_names) if original_features is not None: ds_iterable._info.features = original_features.copy() for col, _ in original_features.items(): if col in column_names: del ds_iterable._info.features[col] # check that it's still valid, especially with regard to task templates try: ds_iterable._info.copy() except ValueError: ds_iterable._info.task_templates = None return ds_iterable def select_columns(self, column_names: Union[str, List[str]]) -> "IterableDataset": """Select one or several column(s) in the dataset and the features associated to them. The selection is done on-the-fly on the examples when iterating over the dataset. Args: column_names (`Union[str, List[str]]`): Name of the column(s) to select. 
Returns: `IterableDataset`: A copy of the dataset object with selected columns. Example: ```py >>> from datasets import load_dataset >>> ds = load_dataset("rotten_tomatoes", split="train", streaming=True) >>> next(iter(ds)) {'text': 'the rock is destined to be the 21st century\'s new " conan " and that he\'s going to make a splash even greater than arnold schwarzenegger , jean-claud van damme or steven segal .', 'label': 1} >>> ds = ds.select_columns("text") >>> next(iter(ds)) {'text': 'the rock is destined to be the 21st century\'s new " conan " and that he\'s going to make a splash even greater than arnold schwarzenegger , jean-claud van damme or steven segal .'} ``` """ if isinstance(column_names, str): column_names = [column_names] if self._info: info = copy.deepcopy(self._info) if self._info.features is not None: missing_columns = set(column_names) - set(self._info.features.keys()) if missing_columns: raise ValueError( f"Column name {list(missing_columns)} not in the " "dataset. Columns in the dataset: " f"{list(self._info.features.keys())}." ) info.features = Features({c: info.features[c] for c in column_names}) # check that it's still valid, especially with regard to task templates try: info.copy() except ValueError: info.task_templates = None ex_iterable = SelectColumnsIterable(self._ex_iterable, column_names) return IterableDataset( ex_iterable=ex_iterable, info=info, split=self._split, formatting=self._formatting, shuffling=self._shuffling, distributed=self._distributed, token_per_repo_id=self._token_per_repo_id, ) def cast_column(self, column: str, feature: FeatureType) -> "IterableDataset": """Cast column to feature for decoding. Args: column (`str`): Column name. feature (`Feature`): Target feature. Returns: `IterableDataset` Example: ```py >>> from datasets import load_dataset, Audio >>> ds = load_dataset("PolyAI/minds14", name="en-US", split="train", streaming=True) >>> ds.features {'audio': Audio(sampling_rate=8000, mono=True, decode=True, id=None), 'english_transcription': Value(dtype='string', id=None), 'intent_class': ClassLabel(num_classes=14, names=['abroad', 'address', 'app_error', 'atm_limit', 'balance', 'business_loan', 'card_issues', 'cash_deposit', 'direct_debit', 'freeze', 'high_value_payment', 'joint_account', 'latest_transactions', 'pay_bill'], id=None), 'lang_id': ClassLabel(num_classes=14, names=['cs-CZ', 'de-DE', 'en-AU', 'en-GB', 'en-US', 'es-ES', 'fr-FR', 'it-IT', 'ko-KR', 'nl-NL', 'pl-PL', 'pt-PT', 'ru-RU', 'zh-CN'], id=None), 'path': Value(dtype='string', id=None), 'transcription': Value(dtype='string', id=None)} >>> ds = ds.cast_column("audio", Audio(sampling_rate=16000)) >>> ds.features {'audio': Audio(sampling_rate=16000, mono=True, decode=True, id=None), 'english_transcription': Value(dtype='string', id=None), 'intent_class': ClassLabel(num_classes=14, names=['abroad', 'address', 'app_error', 'atm_limit', 'balance', 'business_loan', 'card_issues', 'cash_deposit', 'direct_debit', 'freeze', 'high_value_payment', 'joint_account', 'latest_transactions', 'pay_bill'], id=None), 'lang_id': ClassLabel(num_classes=14, names=['cs-CZ', 'de-DE', 'en-AU', 'en-GB', 'en-US', 'es-ES', 'fr-FR', 'it-IT', 'ko-KR', 'nl-NL', 'pl-PL', 'pt-PT', 'ru-RU', 'zh-CN'], id=None), 'path': Value(dtype='string', id=None), 'transcription': Value(dtype='string', id=None)} ``` """ info = self._info.copy() info.features[column] = feature # check that it's still valid, especially with regard to task templates try: info.copy() except ValueError: info.task_templates = None return 
IterableDataset( ex_iterable=self._ex_iterable, info=info, split=self._split, formatting=self._formatting, shuffling=copy.deepcopy(self._shuffling), distributed=copy.deepcopy(self._distributed), token_per_repo_id=self._token_per_repo_id, ) def cast( self, features: Features, ) -> "IterableDataset": """ Cast the dataset to a new set of features. Args: features ([`Features`]): New features to cast the dataset to. The name of the fields in the features must match the current column names. The type of the data must also be convertible from one type to the other. For non-trivial conversion, e.g. `string` <-> `ClassLabel` you should use [`~Dataset.map`] to update the Dataset. Returns: `IterableDataset`: A copy of the dataset with casted features. Example: ```py >>> from datasets import load_dataset >>> ds = load_dataset("rotten_tomatoes", split="train", streaming=True) >>> ds.features {'label': ClassLabel(num_classes=2, names=['neg', 'pos'], id=None), 'text': Value(dtype='string', id=None)} >>> new_features = ds.features.copy() >>> new_features["label"] = ClassLabel(names=["bad", "good"]) >>> new_features["text"] = Value("large_string") >>> ds = ds.cast(new_features) >>> ds.features {'label': ClassLabel(num_classes=2, names=['bad', 'good'], id=None), 'text': Value(dtype='large_string', id=None)} ``` """ info = self._info.copy() info.features = features # check that it's still valid, especially with regard to task templates try: info.copy() except ValueError: info.task_templates = None return IterableDataset( ex_iterable=self._ex_iterable, info=info, split=self._split, formatting=self._formatting, shuffling=copy.deepcopy(self._shuffling), distributed=copy.deepcopy(self._distributed), token_per_repo_id=self._token_per_repo_id, ) def _step(self, step: int, offset: int) -> "IterableDataset": ex_iterable = StepExamplesIterable(self._ex_iterable, step=step, offset=offset) return IterableDataset( ex_iterable=ex_iterable, info=self._info.copy(), split=self._split, formatting=self._formatting, shuffling=copy.deepcopy(self._shuffling), distributed=copy.deepcopy(self._distributed), token_per_repo_id=self._token_per_repo_id, ) def _resolve_features(self): if self.features is not None: return self elif isinstance(self._ex_iterable, TypedExamplesIterable): features = self._ex_iterable.features else: features = _infer_features_from_batch(self.with_format(None)._head()) info = self.info.copy() info.features = features return IterableDataset( ex_iterable=self._ex_iterable, info=info, split=self._split, formatting=self._formatting, shuffling=copy.deepcopy(self._shuffling), distributed=copy.deepcopy(self._distributed), token_per_repo_id=self._token_per_repo_id, ) def _concatenate_iterable_datasets( dsets: List[IterableDataset], info: Optional[DatasetInfo] = None, split: Optional[NamedSplit] = None, axis: int = 0, ) -> IterableDataset: """ Converts a list of `IterableDataset` with the same schema into a single `IterableDataset`. Missing data are filled with None values. <Added version="2.4.0"/> Args: dsets (`List[datasets.IterableDataset]`): List of Datasets to concatenate. info (`DatasetInfo`, optional): Dataset information, like description, citation, etc. split (`NamedSplit`, optional): Name of the dataset split. axis (``{0, 1}``, default ``0``, meaning over rows): Axis to concatenate over, where ``0`` means over rows (vertically) and ``1`` means over columns (horizontally). 
*New in version 1.6.0* Example: ```py >>> ds3 = _concatenate_iterable_datasets([ds1, ds2]) ``` """ dsets = [d._resolve_features() for d in dsets] # Perform checks (and a potentional cast if axis=0) if axis == 0: _check_if_features_can_be_aligned([dset.features for dset in dsets]) else: _check_column_names([col_name for dset in dsets for col_name in dset.features]) # TODO: improve this to account for a mix of ClassLabel and Value for example # right now it would keep the type of the first dataset in the list features = Features( {k: v for features in _align_features([dset.features for dset in dsets]) for k, v in features.items()} ) ex_iterables = [d._ex_iterable for d in dsets] if axis == 0: ex_iterable = VerticallyConcatenatedMultiSourcesExamplesIterable(ex_iterables) else: ex_iterable = HorizontallyConcatenatedMultiSourcesExamplesIterable(ex_iterables) # Set new info - we update the features # setting the features also ensures to fill missing columns with None if info is None: info = DatasetInfo.from_merge([d.info for d in dsets]) else: info = info.copy() info.features = features # Get all the auth tokens per repository - in case the datasets come from different private repositories token_per_repo_id = {repo_id: token for dataset in dsets for repo_id, token in dataset._token_per_repo_id.items()} # Return new daset return IterableDataset(ex_iterable=ex_iterable, info=info, split=split, token_per_repo_id=token_per_repo_id) def _interleave_iterable_datasets( datasets: List[IterableDataset], probabilities: Optional[List[float]] = None, seed: Optional[int] = None, info: Optional[DatasetInfo] = None, split: Optional[NamedSplit] = None, stopping_strategy: Literal["first_exhausted", "all_exhausted"] = "first_exhausted", ) -> IterableDataset: """ Interleave several iterable datasets (sources) into a single iterable dataset. The new iterable dataset alternates between the sources to yield examples. If `probabilities = None` (default) the iterable dataset will cycles through the sources in order for each next example in the iteration. If `probabilities` is not `None, the iterable dataset will sample a random source according to the provided probabilities for each next examples in the iteration. <Added version="2.4.0"/> Args: datasets (`List[IterableDataset]`): list of datasets to interleave probabilities (`List[float]`, optional, default None): If specified, the new iterable dataset samples examples from one source at a time according to these probabilities. seed (`int`, optional, default None): The random seed used to choose a source for each example. stopping_strategy (`str`, defaults to `first_exhausted`): Two strategies are proposed right now. By default, `first_exhausted` is an undersampling strategy, i.e the dataset construction is stopped as soon as one dataset has ran out of samples. If the strategy is `all_exhausted`, we use an oversampling strategy, i.e the dataset construction is stopped as soon as every samples of every dataset has been added at least once. Note that if the strategy is `all_exhausted`, the interleaved dataset size can get enormous: - with no probabilities, the resulting dataset will have max_length_datasets*nb_dataset samples. - with given probabilities, the resulting dataset will have more samples if some datasets have really low probability of visiting. 
Output: `datasets.IterableDataset` """ datasets = [d._resolve_features() for d in datasets] # Perform checks _check_if_features_can_be_aligned([dset.features for dset in datasets]) # TODO: improve this to account for a mix of ClassLabel and Value for example # right now it would keep the type of the first dataset in the list features = Features( {k: v for features in _align_features([dset.features for dset in datasets]) for k, v in features.items()} ) ex_iterables = [d._ex_iterable for d in datasets] # Use cycling or random cycling of sources if probabilities is None: ex_iterable = CyclingMultiSourcesExamplesIterable(ex_iterables, stopping_strategy=stopping_strategy) else: generator = np.random.default_rng(seed) ex_iterable = RandomlyCyclingMultiSourcesExamplesIterable( ex_iterables, generator=generator, probabilities=probabilities, stopping_strategy=stopping_strategy ) # Set new info - we update the features # setting the features also ensures to fill missing columns with None if info is None: info = DatasetInfo.from_merge([d.info for d in datasets]) else: info = info.copy() info.features = features # Get all the auth tokens per repository - in case the datasets come from different private repositories token_per_repo_id = { repo_id: token for dataset in datasets for repo_id, token in dataset._token_per_repo_id.items() } # Return new daset return IterableDataset(ex_iterable=ex_iterable, info=info, split=split, token_per_repo_id=token_per_repo_id) def _split_by_node_iterable_dataset(dataset: IterableDataset, rank: int, world_size: int) -> IterableDataset: """ Split an iterable dataset for the node at rank `rank` in a pool of nodes of size `world_size`. If the dataset has a number of shards that is a factor of `world_size` (i.e. if `dataset.n_shards % world_size == 0`), then the shards are evenly assigned across the nodes, which is the most optimized. Otherwise, each node keeps 1 example out of `world_size`, skipping the other examples. Args: dataset ([`IterableDataset`]): The iterable dataset to split by node. rank (`int`): Rank of the current node. world_size (`int`): Total number of nodes. Returns: [`IterableDataset`]: The iterable dataset to be used on the node at rank `rank`. """ if dataset._distributed: world_size = world_size * dataset._distributed.world_size rank = world_size * dataset._distributed.rank + rank distributed = DistributedConfig(rank=rank, world_size=world_size) return IterableDataset( ex_iterable=dataset._ex_iterable, info=dataset._info.copy(), split=dataset._split, formatting=dataset._formatting, shuffling=copy.deepcopy(dataset._shuffling), distributed=distributed, token_per_repo_id=dataset._token_per_repo_id, )
datasets/src/datasets/iterable_dataset.py/0
{ "file_path": "datasets/src/datasets/iterable_dataset.py", "repo_id": "datasets", "token_count": 46516 }
79
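The `IterableDataset` machinery in the record above (shuffle buffers, lazy `filter`, `take`/`skip` wrappers) can be hard to picture from the class definitions alone. Below is a minimal usage sketch that relies only on methods defined in that file; the shard names are hypothetical placeholders, not real files.

```py
# Illustrative sketch only: exercising from_generator, shuffle, filter and take as defined above.
# The shard names are hypothetical placeholders, not real files.
from datasets import IterableDataset

def gen(shards):
    for shard in shards:
        for line_idx in range(3):
            yield {"shard": shard, "line": line_idx}

shards = [f"shard_{i}.txt" for i in range(8)]
ds = IterableDataset.from_generator(gen, gen_kwargs={"shards": shards})

ds = ds.shuffle(seed=42, buffer_size=16)   # shuffles the shard order + uses BufferShuffledExamplesIterable
ds = ds.filter(lambda x: x["line"] > 0)    # applied lazily via FilteredExamplesIterable
for example in ds.take(5):                 # wrapped lazily in TakeExamplesIterable
    print(example)
```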
from dataclasses import dataclass, field from typing import ClassVar, Dict from ..features import Features, Value from .base import TaskTemplate @dataclass(frozen=True) class Summarization(TaskTemplate): # `task` is not a ClassVar since we want it to be part of the `asdict` output for JSON serialization task: str = field(default="summarization", metadata={"include_in_asdict_even_if_is_default": True}) input_schema: ClassVar[Features] = Features({"text": Value("string")}) label_schema: ClassVar[Features] = Features({"summary": Value("string")}) text_column: str = "text" summary_column: str = "summary" @property def column_mapping(self) -> Dict[str, str]: return {self.text_column: "text", self.summary_column: "summary"}
datasets/src/datasets/tasks/summarization.py/0
{ "file_path": "datasets/src/datasets/tasks/summarization.py", "repo_id": "datasets", "token_count": 254 }
80
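A quick sketch of how this frozen task template could be instantiated with non-default column names. It assumes `Summarization` is importable from `datasets.tasks`, as in the released package; only fields declared in the dataclass above are used.

```py
# Sketch only: non-default column names for the frozen Summarization template.
from datasets.tasks import Summarization  # assumes the usual re-export in datasets.tasks

task = Summarization(text_column="article", summary_column="highlights")
print(task.task)            # "summarization"
print(task.column_mapping)  # {"article": "text", "highlights": "summary"}
```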
# Copyright 2020 Optuna, Hugging Face # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Logging utilities.""" import logging import os from logging import ( CRITICAL, # NOQA DEBUG, # NOQA ERROR, # NOQA FATAL, # NOQA INFO, # NOQA NOTSET, # NOQA WARN, # NOQA WARNING, # NOQA ) from typing import Optional from .tqdm import ( # noqa: F401 # imported for backward compatibility disable_progress_bar, enable_progress_bar, is_progress_bar_enabled, tqdm, ) log_levels = { "debug": logging.DEBUG, "info": logging.INFO, "warning": logging.WARNING, "error": logging.ERROR, "critical": logging.CRITICAL, } _default_log_level = logging.WARNING def _get_default_logging_level(): """ If DATASETS_VERBOSITY env var is set to one of the valid choices return that as the new default level. If it is not - fall back to ``_default_log_level`` """ env_level_str = os.getenv("DATASETS_VERBOSITY", None) if env_level_str: if env_level_str in log_levels: return log_levels[env_level_str] else: logging.getLogger().warning( f"Unknown option DATASETS_VERBOSITY={env_level_str}, " f"has to be one of: { ', '.join(log_levels.keys()) }" ) return _default_log_level def _get_library_name() -> str: return __name__.split(".")[0] def _get_library_root_logger() -> logging.Logger: return logging.getLogger(_get_library_name()) def _configure_library_root_logger() -> None: # Apply our default configuration to the library root logger. library_root_logger = _get_library_root_logger() library_root_logger.addHandler(logging.StreamHandler()) library_root_logger.setLevel(_get_default_logging_level()) def _reset_library_root_logger() -> None: library_root_logger = _get_library_root_logger() library_root_logger.setLevel(logging.NOTSET) def get_logger(name: Optional[str] = None) -> logging.Logger: """Return a logger with the specified name. This function can be used in dataset scripts. """ if name is None: name = _get_library_name() return logging.getLogger(name) def get_verbosity() -> int: """Return the current level for the HuggingFace datasets library's root logger. Returns: Logging level, e.g., `datasets.logging.DEBUG` and `datasets.logging.INFO`. <Tip> HuggingFace datasets library has following logging levels: - `datasets.logging.CRITICAL`, `datasets.logging.FATAL` - `datasets.logging.ERROR` - `datasets.logging.WARNING`, `datasets.logging.WARN` - `datasets.logging.INFO` - `datasets.logging.DEBUG` </Tip> """ return _get_library_root_logger().getEffectiveLevel() def set_verbosity(verbosity: int) -> None: """Set the level for the Hugging Face Datasets library's root logger. Args: verbosity: Logging level, e.g., `datasets.logging.DEBUG` and `datasets.logging.INFO`. """ _get_library_root_logger().setLevel(verbosity) def set_verbosity_info(): """Set the level for the Hugging Face datasets library's root logger to `INFO`. This will display most of the logging information and tqdm bars. Shortcut to `datasets.logging.set_verbosity(datasets.logging.INFO)`. 
""" return set_verbosity(INFO) def set_verbosity_warning(): """Set the level for the Hugging Face datasets library's root logger to `WARNING`. This will display only the warning and errors logging information and tqdm bars. Shortcut to `datasets.logging.set_verbosity(datasets.logging.WARNING)`. """ return set_verbosity(WARNING) def set_verbosity_debug(): """Set the level for the Hugging Face datasets library's root logger to `DEBUG`. This will display all the logging information and tqdm bars. Shortcut to `datasets.logging.set_verbosity(datasets.logging.DEBUG)`. """ return set_verbosity(DEBUG) def set_verbosity_error(): """Set the level for the Hugging Face datasets library's root logger to `ERROR`. This will display only the errors logging information and tqdm bars. Shortcut to `datasets.logging.set_verbosity(datasets.logging.ERROR)`. """ return set_verbosity(ERROR) def disable_propagation() -> None: """Disable propagation of the library log outputs. Note that log propagation is disabled by default. """ _get_library_root_logger().propagate = False def enable_propagation() -> None: """Enable propagation of the library log outputs. Please disable the Hugging Face datasets library's default handler to prevent double logging if the root logger has been configured. """ _get_library_root_logger().propagate = True # Configure the library root logger at the module level (singleton-like) _configure_library_root_logger()
datasets/src/datasets/utils/logging.py/0
{ "file_path": "datasets/src/datasets/utils/logging.py", "repo_id": "datasets", "token_count": 1934 }
81
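A short sketch of the verbosity helpers defined in this logging module; every name used below is defined or re-exported in the file above.

```py
# Sketch only: raise the library verbosity and silence progress bars.
from datasets.utils.logging import (
    disable_progress_bar,  # re-exported from .tqdm for backward compatibility
    get_logger,
    get_verbosity,
    set_verbosity_info,
)

set_verbosity_info()      # the root "datasets" logger now emits INFO and above
disable_progress_bar()

logger = get_logger()     # no name -> the library root logger
logger.info("datasets verbosity is now %s", get_verbosity())
```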
import os from typing import Dict, List, Tuple, TypeVar, Union T = TypeVar("T") ListLike = Union[List[T], Tuple[T, ...]] NestedDataStructureLike = Union[T, List[T], Dict[str, T]] PathLike = Union[str, bytes, os.PathLike]
datasets/src/datasets/utils/typing.py/0
{ "file_path": "datasets/src/datasets/utils/typing.py", "repo_id": "datasets", "token_count": 84 }
82
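These aliases are plain `typing` unions, so they can be parameterized like any generic alias. A hypothetical helper (not part of the library) annotated with them might look like this:

```py
# Sketch only: count_files is a hypothetical helper, shown purely to illustrate the aliases.
from datasets.utils.typing import NestedDataStructureLike, PathLike

def count_files(data_files: NestedDataStructureLike[PathLike]) -> int:
    """Count paths in a value that may be a single path, a list of paths, or a dict of them."""
    if isinstance(data_files, (str, bytes)) or hasattr(data_files, "__fspath__"):
        return 1
    if isinstance(data_files, dict):
        return sum(count_files(v) for v in data_files.values())
    return sum(count_files(v) for v in data_files)

print(count_files("train.csv"))                                     # 1
print(count_files({"train": ["a.csv", "b.csv"], "test": "c.csv"}))  # 3
```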
import json import tarfile import numpy as np import pytest from datasets import Audio, DownloadManager, Features, Image, Value from datasets.packaged_modules.webdataset.webdataset import WebDataset from ..utils import require_pil, require_sndfile @pytest.fixture def image_wds_file(tmp_path, image_file): json_file = tmp_path / "data.json" filename = tmp_path / "file.tar" num_examples = 3 with json_file.open("w", encoding="utf-8") as f: f.write(json.dumps({"caption": "this is an image"})) with tarfile.open(str(filename), "w") as f: for example_idx in range(num_examples): f.add(json_file, f"{example_idx:05d}.json") f.add(image_file, f"{example_idx:05d}.jpg") return str(filename) @pytest.fixture def audio_wds_file(tmp_path, audio_file): json_file = tmp_path / "data.json" filename = tmp_path / "file.tar" num_examples = 3 with json_file.open("w", encoding="utf-8") as f: f.write(json.dumps({"transcript": "this is a transcript"})) with tarfile.open(str(filename), "w") as f: for example_idx in range(num_examples): f.add(json_file, f"{example_idx:05d}.json") f.add(audio_file, f"{example_idx:05d}.wav") return str(filename) @pytest.fixture def bad_wds_file(tmp_path, image_file, text_file): json_file = tmp_path / "data.json" filename = tmp_path / "bad_file.tar" with json_file.open("w", encoding="utf-8") as f: f.write(json.dumps({"caption": "this is an image"})) with tarfile.open(str(filename), "w") as f: f.add(image_file) f.add(json_file) return str(filename) @require_pil def test_image_webdataset(image_wds_file): import PIL.Image data_files = {"train": [image_wds_file]} webdataset = WebDataset(data_files=data_files) split_generators = webdataset._split_generators(DownloadManager()) assert webdataset.info.features == Features( { "__key__": Value("string"), "__url__": Value("string"), "json": {"caption": Value("string")}, "jpg": Image(), } ) assert len(split_generators) == 1 split_generator = split_generators[0] assert split_generator.name == "train" generator = webdataset._generate_examples(**split_generator.gen_kwargs) _, examples = zip(*generator) assert len(examples) == 3 assert isinstance(examples[0]["json"], dict) assert isinstance(examples[0]["json"]["caption"], str) assert isinstance(examples[0]["jpg"], dict) # keep encoded to avoid unecessary copies encoded = webdataset.info.features.encode_example(examples[0]) decoded = webdataset.info.features.decode_example(encoded) assert isinstance(decoded["json"], dict) assert isinstance(decoded["json"]["caption"], str) assert isinstance(decoded["jpg"], PIL.Image.Image) @require_sndfile def test_audio_webdataset(audio_wds_file): data_files = {"train": [audio_wds_file]} webdataset = WebDataset(data_files=data_files) split_generators = webdataset._split_generators(DownloadManager()) assert webdataset.info.features == Features( { "__key__": Value("string"), "__url__": Value("string"), "json": {"transcript": Value("string")}, "wav": Audio(), } ) assert len(split_generators) == 1 split_generator = split_generators[0] assert split_generator.name == "train" generator = webdataset._generate_examples(**split_generator.gen_kwargs) _, examples = zip(*generator) assert len(examples) == 3 assert isinstance(examples[0]["json"], dict) assert isinstance(examples[0]["json"]["transcript"], str) assert isinstance(examples[0]["wav"], dict) assert isinstance(examples[0]["wav"]["bytes"], bytes) # keep encoded to avoid unecessary copies encoded = webdataset.info.features.encode_example(examples[0]) decoded = webdataset.info.features.decode_example(encoded) assert 
isinstance(decoded["json"], dict) assert isinstance(decoded["json"]["transcript"], str) assert isinstance(decoded["wav"], dict) assert isinstance(decoded["wav"]["array"], np.ndarray) def test_webdataset_errors_on_bad_file(bad_wds_file): data_files = {"train": [bad_wds_file]} webdataset = WebDataset(data_files=data_files) with pytest.raises(ValueError): webdataset._split_generators(DownloadManager()) @require_pil def test_webdataset_with_features(image_wds_file): import PIL.Image data_files = {"train": [image_wds_file]} features = Features( { "__key__": Value("string"), "__url__": Value("string"), "json": {"caption": Value("string"), "additional_field": Value("int64")}, "jpg": Image(), } ) webdataset = WebDataset(data_files=data_files, features=features) split_generators = webdataset._split_generators(DownloadManager()) assert webdataset.info.features == features split_generator = split_generators[0] assert split_generator.name == "train" generator = webdataset._generate_examples(**split_generator.gen_kwargs) _, example = next(iter(generator)) encoded = webdataset.info.features.encode_example(example) decoded = webdataset.info.features.decode_example(encoded) assert decoded["json"]["additional_field"] is None assert isinstance(decoded["json"], dict) assert isinstance(decoded["json"]["caption"], str) assert isinstance(decoded["jpg"], PIL.Image.Image)
datasets/tests/packaged_modules/test_webdataset.py/0
{ "file_path": "datasets/tests/packaged_modules/test_webdataset.py", "repo_id": "datasets", "token_count": 2263 }
83
import json import os import pickle import subprocess from functools import partial from pathlib import Path from tempfile import gettempdir from textwrap import dedent from types import FunctionType from unittest import TestCase from unittest.mock import patch import numpy as np import pytest from multiprocess import Pool import datasets from datasets import config from datasets.fingerprint import Hasher, fingerprint_transform from datasets.table import InMemoryTable from .utils import ( require_not_windows, require_regex, require_spacy, require_spacy_model, require_tiktoken, require_torch, require_transformers, ) class Foo: def __init__(self, foo): self.foo = foo def __call__(self): return self.foo class DatasetChild(datasets.Dataset): @fingerprint_transform(inplace=False) def func1(self, new_fingerprint, *args, **kwargs): return DatasetChild(self.data, fingerprint=new_fingerprint) @fingerprint_transform(inplace=False) def func2(self, new_fingerprint, *args, **kwargs): return DatasetChild(self.data, fingerprint=new_fingerprint) class UnpicklableCallable: def __init__(self, callable): self.callable = callable def __call__(self, *args, **kwargs): if self.callable is not None: return self.callable(*args, **kwargs) def __getstate__(self): raise pickle.PicklingError() if config.TORCH_AVAILABLE: import torch import torch.nn as nn import torch.nn.functional as F class TorchModule(nn.Module): def __init__(self): super().__init__() self.conv1 = nn.Conv2d(1, 20, 5) self.conv2 = nn.Conv2d(20, 20, 5) def forward(self, x): x = F.relu(self.conv1(x)) return F.relu(self.conv2(x)) else: TorchModule = None class TokenizersHashTest(TestCase): @require_transformers @pytest.mark.integration def test_hash_tokenizer(self): from transformers import AutoTokenizer def encode(x): return tokenizer(x) # TODO: add hash consistency tests across sessions tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") hash1 = Hasher.hash(tokenizer) hash1_lambda = Hasher.hash(lambda x: tokenizer(x)) hash1_encode = Hasher.hash(encode) tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") hash2 = Hasher.hash(tokenizer) hash2_lambda = Hasher.hash(lambda x: tokenizer(x)) hash2_encode = Hasher.hash(encode) tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") hash3 = Hasher.hash(tokenizer) hash3_lambda = Hasher.hash(lambda x: tokenizer(x)) hash3_encode = Hasher.hash(encode) self.assertEqual(hash1, hash3) self.assertNotEqual(hash1, hash2) self.assertEqual(hash1_lambda, hash3_lambda) self.assertNotEqual(hash1_lambda, hash2_lambda) self.assertEqual(hash1_encode, hash3_encode) self.assertNotEqual(hash1_encode, hash2_encode) @require_transformers @pytest.mark.integration def test_hash_tokenizer_with_cache(self): from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("gpt2") hash1 = Hasher.hash(tokenizer) tokenizer("Hello world !") # call once to change the tokenizer's cache hash2 = Hasher.hash(tokenizer) self.assertEqual(hash1, hash2) @require_regex def test_hash_regex(self): import regex pat = regex.Regex("foo") hash1 = Hasher.hash(pat) pat = regex.Regex("bar") hash2 = Hasher.hash(pat) pat = regex.Regex("foo") hash3 = Hasher.hash(pat) self.assertEqual(hash1, hash3) self.assertNotEqual(hash1, hash2) class RecurseHashTest(TestCase): def test_recurse_hash_for_function(self): def func(): return foo foo = [0] hash1 = Hasher.hash(func) foo = [1] hash2 = Hasher.hash(func) foo = [0] hash3 = Hasher.hash(func) self.assertEqual(hash1, hash3) self.assertNotEqual(hash1, hash2) def 
test_hash_ignores_line_definition_of_function(self): def func(): pass hash1 = Hasher.hash(func) def func(): pass hash2 = Hasher.hash(func) self.assertEqual(hash1, hash2) def test_recurse_hash_for_class(self): hash1 = Hasher.hash(Foo([0])) hash2 = Hasher.hash(Foo([1])) hash3 = Hasher.hash(Foo([0])) self.assertEqual(hash1, hash3) self.assertNotEqual(hash1, hash2) def test_recurse_hash_for_method(self): hash1 = Hasher.hash(Foo([0]).__call__) hash2 = Hasher.hash(Foo([1]).__call__) hash3 = Hasher.hash(Foo([0]).__call__) self.assertEqual(hash1, hash3) self.assertNotEqual(hash1, hash2) def test_hash_ipython_function(self): def create_ipython_func(co_filename, returned_obj): def func(): return returned_obj code = func.__code__ # Use _create_code from dill in order to make it work for different python versions code = code.replace(co_filename=co_filename) return FunctionType(code, func.__globals__, func.__name__, func.__defaults__, func.__closure__) co_filename, returned_obj = "<ipython-input-2-e0383a102aae>", [0] hash1 = Hasher.hash(create_ipython_func(co_filename, returned_obj)) co_filename, returned_obj = "<ipython-input-2-e0383a102aae>", [1] hash2 = Hasher.hash(create_ipython_func(co_filename, returned_obj)) co_filename, returned_obj = "<ipython-input-5-713f6613acf3>", [0] hash3 = Hasher.hash(create_ipython_func(co_filename, returned_obj)) self.assertEqual(hash1, hash3) self.assertNotEqual(hash1, hash2) co_filename, returned_obj = os.path.join(gettempdir(), "ipykernel_12345", "321456789.py"), [0] hash4 = Hasher.hash(create_ipython_func(co_filename, returned_obj)) co_filename, returned_obj = os.path.join(gettempdir(), "ipykernel_12345", "321456789.py"), [1] hash5 = Hasher.hash(create_ipython_func(co_filename, returned_obj)) co_filename, returned_obj = os.path.join(gettempdir(), "ipykernel_12345", "654123987.py"), [0] hash6 = Hasher.hash(create_ipython_func(co_filename, returned_obj)) self.assertEqual(hash4, hash6) self.assertNotEqual(hash4, hash5) def test_recurse_hash_for_function_with_shuffled_globals(self): foo, bar = [0], [1] def func(): return foo, bar func.__module__ = "__main__" def globalvars_mock1_side_effect(func, *args, **kwargs): return {"foo": foo, "bar": bar} def globalvars_mock2_side_effect(func, *args, **kwargs): return {"bar": bar, "foo": foo} with patch("dill.detect.globalvars", side_effect=globalvars_mock1_side_effect) as globalvars_mock1: hash1 = Hasher.hash(func) self.assertGreater(globalvars_mock1.call_count, 0) with patch("dill.detect.globalvars", side_effect=globalvars_mock2_side_effect) as globalvars_mock2: hash2 = Hasher.hash(func) self.assertGreater(globalvars_mock2.call_count, 0) self.assertEqual(hash1, hash2) class HashingTest(TestCase): def test_hash_simple(self): hash1 = Hasher.hash("hello") hash2 = Hasher.hash("hello") hash3 = Hasher.hash("there") self.assertEqual(hash1, hash2) self.assertNotEqual(hash1, hash3) def test_hash_class_instance(self): hash1 = Hasher.hash(Foo("hello")) hash2 = Hasher.hash(Foo("hello")) hash3 = Hasher.hash(Foo("there")) self.assertEqual(hash1, hash2) self.assertNotEqual(hash1, hash3) def test_hash_update(self): hasher = Hasher() for x in ["hello", Foo("hello")]: hasher.update(x) hash1 = hasher.hexdigest() hasher = Hasher() for x in ["hello", Foo("hello")]: hasher.update(x) hash2 = hasher.hexdigest() hasher = Hasher() for x in ["there", Foo("there")]: hasher.update(x) hash3 = hasher.hexdigest() self.assertEqual(hash1, hash2) self.assertNotEqual(hash1, hash3) def test_hash_unpicklable(self): with self.assertRaises(pickle.PicklingError): 
Hasher.hash(UnpicklableCallable(Foo("hello"))) def test_hash_same_strings(self): string = "abc" obj1 = [string, string] # two strings have the same ids obj2 = [string, string] obj3 = json.loads(f'["{string}", "{string}"]') # two strings have different ids self.assertIs(obj1[0], string) self.assertIs(obj1[0], obj1[1]) self.assertIs(obj2[0], string) self.assertIs(obj2[0], obj2[1]) self.assertIsNot(obj3[0], string) self.assertIsNot(obj3[0], obj3[1]) hash1 = Hasher.hash(obj1) hash2 = Hasher.hash(obj2) hash3 = Hasher.hash(obj3) self.assertEqual(hash1, hash2) self.assertEqual(hash1, hash3) def test_set_stable(self): rng = np.random.default_rng(42) set_ = {rng.random() for _ in range(10_000)} expected_hash = Hasher.hash(set_) assert expected_hash == Pool(1).apply_async(partial(Hasher.hash, set(set_))).get() def test_set_doesnt_depend_on_order(self): set_ = set("abc") hash1 = Hasher.hash(set_) set_ = set("def") hash2 = Hasher.hash(set_) set_ = set("cba") hash3 = Hasher.hash(set_) self.assertEqual(hash1, hash3) self.assertNotEqual(hash1, hash2) @require_tiktoken def test_hash_tiktoken_encoding(self): import tiktoken enc = tiktoken.get_encoding("gpt2") hash1 = Hasher.hash(enc) enc = tiktoken.get_encoding("r50k_base") hash2 = Hasher.hash(enc) enc = tiktoken.get_encoding("gpt2") hash3 = Hasher.hash(enc) self.assertEqual(hash1, hash3) self.assertNotEqual(hash1, hash2) @require_torch def test_hash_torch_tensor(self): import torch t = torch.tensor([1.0]) hash1 = Hasher.hash(t) t = torch.tensor([2.0]) hash2 = Hasher.hash(t) t = torch.tensor([1.0]) hash3 = Hasher.hash(t) self.assertEqual(hash1, hash3) self.assertNotEqual(hash1, hash2) @require_torch def test_hash_torch_generator(self): import torch t = torch.Generator(device="cpu").manual_seed(42) hash1 = Hasher.hash(t) t = t = torch.Generator(device="cpu").manual_seed(50) hash2 = Hasher.hash(t) t = t = torch.Generator(device="cpu").manual_seed(42) hash3 = Hasher.hash(t) self.assertEqual(hash1, hash3) self.assertNotEqual(hash1, hash2) @require_spacy @require_spacy_model("en_core_web_sm") @require_spacy_model("fr_core_news_sm") @pytest.mark.integration def test_hash_spacy_model(self): import spacy nlp = spacy.load("en_core_web_sm") hash1 = Hasher.hash(nlp) nlp = spacy.load("fr_core_news_sm") hash2 = Hasher.hash(nlp) nlp = spacy.load("en_core_web_sm") hash3 = Hasher.hash(nlp) self.assertEqual(hash1, hash3) self.assertNotEqual(hash1, hash2) @require_not_windows @require_torch def test_hash_torch_compiled_function(self): import torch def f(x): return torch.sin(x) + torch.cos(x) hash1 = Hasher.hash(f) f = torch.compile(f) hash2 = Hasher.hash(f) self.assertEqual(hash1, hash2) @require_not_windows @require_torch def test_hash_torch_compiled_module(self): m = TorchModule() next(iter(m.parameters())).data.fill_(1.0) hash1 = Hasher.hash(m) m = torch.compile(m) hash2 = Hasher.hash(m) m = TorchModule() next(iter(m.parameters())).data.fill_(2.0) m = torch.compile(m) hash3 = Hasher.hash(m) self.assertEqual(hash1, hash2) self.assertNotEqual(hash1, hash3) self.assertNotEqual(hash2, hash3) @pytest.mark.integration def test_move_script_doesnt_change_hash(tmp_path: Path): dir1 = tmp_path / "dir1" dir2 = tmp_path / "dir2" dir1.mkdir() dir2.mkdir() script_filename = "script.py" code = dedent( """ from datasets.fingerprint import Hasher def foo(): pass print(Hasher.hash(foo)) """ ) script_path1 = dir1 / script_filename script_path2 = dir2 / script_filename with script_path1.open("w") as f: f.write(code) with script_path2.open("w") as f: f.write(code) fingerprint1 = 
subprocess.check_output(["python", str(script_path1)]) fingerprint2 = subprocess.check_output(["python", str(script_path2)]) assert fingerprint1 == fingerprint2 def test_fingerprint_in_multiprocessing(): data = {"a": [0, 1, 2]} dataset = DatasetChild(InMemoryTable.from_pydict(data)) expected_fingerprint = dataset.func1()._fingerprint assert expected_fingerprint == dataset.func1()._fingerprint assert expected_fingerprint != dataset.func2()._fingerprint with Pool(2) as p: assert expected_fingerprint == p.apply_async(dataset.func1).get()._fingerprint assert expected_fingerprint != p.apply_async(dataset.func2).get()._fingerprint def test_fingerprint_when_transform_version_changes(): data = {"a": [0, 1, 2]} class DummyDatasetChild(datasets.Dataset): @fingerprint_transform(inplace=False) def func(self, new_fingerprint): return DummyDatasetChild(self.data, fingerprint=new_fingerprint) fingeprint_no_version = DummyDatasetChild(InMemoryTable.from_pydict(data)).func() class DummyDatasetChild(datasets.Dataset): @fingerprint_transform(inplace=False, version="1.0.0") def func(self, new_fingerprint): return DummyDatasetChild(self.data, fingerprint=new_fingerprint) fingeprint_1 = DummyDatasetChild(InMemoryTable.from_pydict(data)).func() class DummyDatasetChild(datasets.Dataset): @fingerprint_transform(inplace=False, version="2.0.0") def func(self, new_fingerprint): return DummyDatasetChild(self.data, fingerprint=new_fingerprint) fingeprint_2 = DummyDatasetChild(InMemoryTable.from_pydict(data)).func() assert len({fingeprint_no_version, fingeprint_1, fingeprint_2}) == 3 def test_dependency_on_dill(): # AttributeError: module 'dill._dill' has no attribute 'stack' hasher = Hasher() hasher.update(lambda x: x)
datasets/tests/test_fingerprint.py/0
{ "file_path": "datasets/tests/test_fingerprint.py", "repo_id": "datasets", "token_count": 6783 }
84
import re import tempfile from pathlib import Path import pytest import yaml from datasets.utils.readme import ReadMe # @pytest.fixture # def example_yaml_structure(): example_yaml_structure = yaml.safe_load( """\ name: "" allow_empty: false allow_empty_text: true subsections: - name: "Dataset Card for X" # First-level markdown heading allow_empty: false allow_empty_text: true subsections: - name: "Table of Contents" allow_empty: false allow_empty_text: false subsections: null - name: "Dataset Description" allow_empty: false allow_empty_text: false subsections: - name: "Dataset Summary" allow_empty: false allow_empty_text: false subsections: null - name: "Supported Tasks and Leaderboards" allow_empty: true allow_empty_text: true subsections: null - name: Languages allow_empty: false allow_empty_text: true subsections: null """ ) CORRECT_DICT = { "name": "root", "text": "", "is_empty_text": True, "subsections": [ { "name": "Dataset Card for My Dataset", "text": "", "is_empty_text": True, "subsections": [ {"name": "Table of Contents", "text": "Some text here.", "is_empty_text": False, "subsections": []}, { "name": "Dataset Description", "text": "Some text here.", "is_empty_text": False, "subsections": [ { "name": "Dataset Summary", "text": "Some text here.", "is_empty_text": False, "subsections": [], }, { "name": "Supported Tasks and Leaderboards", "text": "", "is_empty_text": True, "subsections": [], }, {"name": "Languages", "text": "Language Text", "is_empty_text": False, "subsections": []}, ], }, ], } ], } README_CORRECT = """\ --- language: - zh - en --- # Dataset Card for My Dataset ## Table of Contents Some text here. ## Dataset Description Some text here. ### Dataset Summary Some text here. ### Supported Tasks and Leaderboards ### Languages Language Text """ README_CORRECT_FOUR_LEVEL = """\ --- language: - zh - en --- # Dataset Card for My Dataset ## Table of Contents Some text here. ## Dataset Description Some text here. ### Dataset Summary Some text here. #### Extra Ignored Subsection ### Supported Tasks and Leaderboards ### Languages Language Text """ CORRECT_DICT_FOUR_LEVEL = { "name": "root", "text": "", "is_empty_text": True, "subsections": [ { "name": "Dataset Card for My Dataset", "text": "", "is_empty_text": True, "subsections": [ {"name": "Table of Contents", "text": "Some text here.", "is_empty_text": False, "subsections": []}, { "name": "Dataset Description", "text": "Some text here.", "is_empty_text": False, "subsections": [ { "name": "Dataset Summary", "text": "Some text here.", "is_empty_text": False, "subsections": [ { "name": "Extra Ignored Subsection", "text": "", "is_empty_text": True, "subsections": [], } ], }, { "name": "Supported Tasks and Leaderboards", "text": "", "is_empty_text": True, "subsections": [], }, {"name": "Languages", "text": "Language Text", "is_empty_text": False, "subsections": []}, ], }, ], } ], } README_EMPTY_YAML = """\ --- --- # Dataset Card for My Dataset ## Table of Contents Some text here. ## Dataset Description Some text here. ### Dataset Summary Some text here. ### Supported Tasks and Leaderboards ### Languages Language Text """ EXPECTED_ERROR_README_EMPTY_YAML = ( "The following issues were found for the README at `{path}`:\n-\tEmpty YAML markers are present in the README." ) README_NO_YAML = """\ # Dataset Card for My Dataset ## Table of Contents Some text here. ## Dataset Description Some text here. ### Dataset Summary Some text here. 
### Supported Tasks and Leaderboards ### Languages Language Text """ EXPECTED_ERROR_README_NO_YAML = ( "The following issues were found for the README at `{path}`:\n-\tNo YAML markers are present in the README." ) README_INCORRECT_YAML = """\ --- # Dataset Card for My Dataset ## Table of Contents Some text here. ## Dataset Description Some text here. ### Dataset Summary Some text here. ### Supported Tasks and Leaderboards ### Languages Language Text """ EXPECTED_ERROR_README_INCORRECT_YAML = "The following issues were found for the README at `{path}`:\n-\tOnly the start of YAML tags present in the README." README_MISSING_TEXT = """\ --- language: - zh - en --- # Dataset Card for My Dataset ## Table of Contents Some text here. ## Dataset Description Some text here. ### Dataset Summary ### Supported Tasks and Leaderboards ### Languages Language Text """ EXPECTED_ERROR_README_MISSING_TEXT = "The following issues were found for the README at `{path}`:\n-\tExpected some content in section `Dataset Summary` but it is empty.\n-\tExpected some text in section `Dataset Summary` but it is empty (text in subsections are ignored)." README_NONE_SUBSECTION = """\ --- language: - zh - en --- # Dataset Card for My Dataset """ EXPECTED_ERROR_README_NONE_SUBSECTION = "The following issues were found for the README at `{path}`:\n-\tExpected some content in section `Dataset Card for My Dataset` but it is empty.\n-\tSection `Dataset Card for My Dataset` expected the following subsections: `Table of Contents`, `Dataset Description`. Found 'None'." README_MISSING_SUBSECTION = """\ --- language: - zh - en --- # Dataset Card for My Dataset ## Table of Contents Some text here. ## Dataset Description Some text here. ### Dataset Summary Some text here. ### Languages Language Text """ EXPECTED_ERROR_README_MISSING_SUBSECTION = "The following issues were found for the README at `{path}`:\n-\tSection `Dataset Description` is missing subsection: `Supported Tasks and Leaderboards`." README_MISSING_CONTENT = """\ --- language: - zh - en --- # Dataset Card for My Dataset ## Table of Contents Some text here. ## Dataset Description Some text here. ### Dataset Summary Some text here. ### Supported Tasks and Leaderboards ### Languages """ EXPECTED_ERROR_README_MISSING_CONTENT = "The following issues were found for the README at `{path}`:\n-\tExpected some content in section `Languages` but it is empty." README_MISSING_FIRST_LEVEL = """\ --- language: - zh - en --- ## Table of Contents Some text here. ## Dataset Description Some text here. ### Dataset Summary Some text here. ### Supported Tasks and Leaderboards ### Languages Language Text """ EXPECTED_ERROR_README_MISSING_FIRST_LEVEL = "The following issues were found for the README at `{path}`:\n-\tThe README has no first-level headings. One heading is expected. Skipping further validation for this README." README_MULTIPLE_WRONG_FIRST_LEVEL = """\ --- language: - zh - en --- # Dataset Card for My Dataset ## Table of Contents Some text here. ## Dataset Description Some text here. ### Dataset Summary Some text here. ### Supported Tasks and Leaderboards ### Languages Language Text # Dataset Card My Dataset """ EXPECTED_ERROR_README_MULTIPLE_WRONG_FIRST_LEVEL = "The following issues were found for the README at `{path}`:\n-\tThe README has several first-level headings: `Dataset Card for My Dataset`, `Dataset Card My Dataset`. Only one heading is expected. Skipping further validation for this README." 
README_WRONG_FIRST_LEVEL = """\ --- language: - zh - en --- # Dataset Card My Dataset ## Table of Contents Some text here. ## Dataset Description Some text here. ### Dataset Summary Some text here. ### Supported Tasks and Leaderboards ### Languages Language Text """ EXPECTED_ERROR_README_WRONG_FIRST_LEVEL = "The following issues were found for the README at `{path}`:\n-\tNo first-level heading starting with `Dataset Card for` found in README. Skipping further validation for this README." README_EMPTY = "" EXPECTED_ERROR_README_EMPTY = "The following issues were found for the README at `{path}`:\n-\tThe README has no first-level headings. One heading is expected. Skipping further validation for this README.\n-\tNo YAML markers are present in the README." README_MULTIPLE_SAME_HEADING_1 = """\ --- language: - zh - en --- # Dataset Card for My Dataset # Dataset Card for My Dataset ## Table of Contents Some text here. ## Dataset Description Some text here. ### Dataset Summary Some text here. ### Supported Tasks and Leaderboards ### Languages Language Text """ EXPECTED_ERROR_README_MULTIPLE_SAME_HEADING_1 = "The following issues were found while parsing the README at `{path}`:\n-\tMultiple sections with the same heading `Dataset Card for My Dataset` have been found. Please keep only one of these sections." @pytest.mark.parametrize( "readme_md, expected_dict", [ (README_CORRECT, CORRECT_DICT), (README_CORRECT_FOUR_LEVEL, CORRECT_DICT_FOUR_LEVEL), ], ) def test_readme_from_string_correct(readme_md, expected_dict): assert ReadMe.from_string(readme_md, example_yaml_structure).to_dict() == expected_dict @pytest.mark.parametrize( "readme_md, expected_error", [ (README_NO_YAML, EXPECTED_ERROR_README_NO_YAML), (README_EMPTY_YAML, EXPECTED_ERROR_README_EMPTY_YAML), (README_INCORRECT_YAML, EXPECTED_ERROR_README_INCORRECT_YAML), (README_EMPTY, EXPECTED_ERROR_README_EMPTY), (README_NONE_SUBSECTION, EXPECTED_ERROR_README_NONE_SUBSECTION), (README_MISSING_FIRST_LEVEL, EXPECTED_ERROR_README_MISSING_FIRST_LEVEL), (README_MISSING_SUBSECTION, EXPECTED_ERROR_README_MISSING_SUBSECTION), (README_MISSING_TEXT, EXPECTED_ERROR_README_MISSING_TEXT), (README_WRONG_FIRST_LEVEL, EXPECTED_ERROR_README_WRONG_FIRST_LEVEL), (README_MULTIPLE_WRONG_FIRST_LEVEL, EXPECTED_ERROR_README_MULTIPLE_WRONG_FIRST_LEVEL), (README_MISSING_CONTENT, EXPECTED_ERROR_README_MISSING_CONTENT), ], ) def test_readme_from_string_validation_errors(readme_md, expected_error): with pytest.raises(ValueError, match=re.escape(expected_error.format(path="root"))): readme = ReadMe.from_string(readme_md, example_yaml_structure) readme.validate() @pytest.mark.parametrize( "readme_md, expected_error", [ (README_MULTIPLE_SAME_HEADING_1, EXPECTED_ERROR_README_MULTIPLE_SAME_HEADING_1), ], ) def test_readme_from_string_parsing_errors(readme_md, expected_error): with pytest.raises(ValueError, match=re.escape(expected_error.format(path="root"))): ReadMe.from_string(readme_md, example_yaml_structure) @pytest.mark.parametrize( "readme_md,", [ (README_MULTIPLE_SAME_HEADING_1), ], ) def test_readme_from_string_suppress_parsing_errors(readme_md): ReadMe.from_string(readme_md, example_yaml_structure, suppress_parsing_errors=True) @pytest.mark.parametrize( "readme_md, expected_dict", [ (README_CORRECT, CORRECT_DICT), (README_CORRECT_FOUR_LEVEL, CORRECT_DICT_FOUR_LEVEL), ], ) def test_readme_from_readme_correct(readme_md, expected_dict): with tempfile.TemporaryDirectory() as tmp_dir: path = Path(tmp_dir) / "README.md" with open(path, "w+") as readme_file: 
readme_file.write(readme_md) out = ReadMe.from_readme(path, example_yaml_structure).to_dict() assert out["name"] == path assert out["text"] == "" assert out["is_empty_text"] assert out["subsections"] == expected_dict["subsections"] @pytest.mark.parametrize( "readme_md, expected_error", [ (README_NO_YAML, EXPECTED_ERROR_README_NO_YAML), (README_EMPTY_YAML, EXPECTED_ERROR_README_EMPTY_YAML), (README_INCORRECT_YAML, EXPECTED_ERROR_README_INCORRECT_YAML), (README_EMPTY, EXPECTED_ERROR_README_EMPTY), (README_NONE_SUBSECTION, EXPECTED_ERROR_README_NONE_SUBSECTION), (README_MISSING_FIRST_LEVEL, EXPECTED_ERROR_README_MISSING_FIRST_LEVEL), (README_MISSING_SUBSECTION, EXPECTED_ERROR_README_MISSING_SUBSECTION), (README_MISSING_TEXT, EXPECTED_ERROR_README_MISSING_TEXT), (README_WRONG_FIRST_LEVEL, EXPECTED_ERROR_README_WRONG_FIRST_LEVEL), (README_MULTIPLE_WRONG_FIRST_LEVEL, EXPECTED_ERROR_README_MULTIPLE_WRONG_FIRST_LEVEL), (README_MISSING_CONTENT, EXPECTED_ERROR_README_MISSING_CONTENT), ], ) def test_readme_from_readme_error(readme_md, expected_error): with tempfile.TemporaryDirectory() as tmp_dir: path = Path(tmp_dir) / "README.md" with open(path, "w+") as readme_file: readme_file.write(readme_md) expected_error = expected_error.format(path=path) with pytest.raises(ValueError, match=re.escape(expected_error)): readme = ReadMe.from_readme(path, example_yaml_structure) readme.validate() @pytest.mark.parametrize( "readme_md, expected_error", [ (README_MULTIPLE_SAME_HEADING_1, EXPECTED_ERROR_README_MULTIPLE_SAME_HEADING_1), ], ) def test_readme_from_readme_parsing_errors(readme_md, expected_error): with tempfile.TemporaryDirectory() as tmp_dir: path = Path(tmp_dir) / "README.md" with open(path, "w+") as readme_file: readme_file.write(readme_md) expected_error = expected_error.format(path=path) with pytest.raises(ValueError, match=re.escape(expected_error)): ReadMe.from_readme(path, example_yaml_structure) @pytest.mark.parametrize( "readme_md,", [ (README_MULTIPLE_SAME_HEADING_1), ], ) def test_readme_from_readme_suppress_parsing_errors(readme_md): with tempfile.TemporaryDirectory() as tmp_dir: path = Path(tmp_dir) / "README.md" with open(path, "w+") as readme_file: readme_file.write(readme_md) ReadMe.from_readme(path, example_yaml_structure, suppress_parsing_errors=True)
datasets/tests/test_readme_util.py/0
{ "file_path": "datasets/tests/test_readme_util.py", "repo_id": "datasets", "token_count": 6733 }
85
<jupyter_start><jupyter_text>Bonus Unit 1: Let's train Huggy the Dog 🐶 to fetch a stick In this notebook, we'll reinforce what we learned in the first Unit by **teaching Huggy the Dog to fetch the stick and then play with it directly in your browser**⬇️ Here is an example of what **you will achieve at the end of the unit.** ⬇️ (launch ▶ to see)<jupyter_code>%%html <video controls autoplay><source src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/unit-bonus1/huggy.mp4" type="video/mp4"></video><jupyter_output><empty_output><jupyter_text>The environment 🎮- Huggy the Dog, an environment created by [Thomas Simonini](https://twitter.com/ThomasSimonini) based on [Puppo The Corgi](https://blog.unity.com/technology/puppo-the-corgi-cuteness-overload-with-the-unity-ml-agents-toolkit) The library used 📚- [MLAgents](https://github.com/Unity-Technologies/ml-agents) We're constantly trying to improve our tutorials, so **if you find some issues in this notebook**, please [open an issue on the Github Repo](https://github.com/huggingface/deep-rl-class/issues). Objectives of this notebook 🏆At the end of the notebook, you will:- Understand **the state space, action space and reward function used to train Huggy**.- **Train your own Huggy** to fetch the stick.- Be able to play **with your trained Huggy directly in your browser**. This notebook is from Deep Reinforcement Learning Course In this free course, you will:- 📖 Study Deep Reinforcement Learning in **theory and practice**.- 🧑‍💻 Learn to **use famous Deep RL libraries** such as Stable Baselines3, RL Baselines3 Zoo, CleanRL and Sample Factory 2.0.- 🤖 Train **agents in unique environments**And more check 📚 the syllabus 👉 https://simoninithomas.github.io/deep-rl-courseDon’t forget to **sign up to the course** (we are collecting your email to be able to **send you the links when each Unit is published and give you information about the challenges and updates).**The best way to keep in touch is to join our discord server to exchange with the community and with us 👉🏻 https://discord.gg/ydHrjt3WP5 Prerequisites 🏗️Before diving into the notebook, you need to:🔲 📚 **Develop an understanding of the foundations of Reinforcement learning** (MC, TD, Rewards hypothesis...) by doing Unit 1🔲 📚 **Read the introduction to Huggy** by doing Bonus Unit 1 Set the GPU 💪- To **accelerate the agent's training, we'll use a GPU**. 
To do that, go to `Runtime > Change Runtime type` - `Hardware Accelerator > GPU` Clone the repository and install the dependencies 🔽- We need to clone the repository, that contains **ML-Agents.**<jupyter_code>%%capture # Clone the repository (can take 3min) !git clone --depth 1 https://github.com/Unity-Technologies/ml-agents %%capture # Go inside the repository and install the package (can take 3min) %cd ml-agents !pip3 install -e ./ml-agents-envs !pip3 install -e ./ml-agents<jupyter_output><empty_output><jupyter_text>Download and move the environment zip file in `./trained-envs-executables/linux/`- Our environment executable is in a zip file.- We need to download it and place it to `./trained-envs-executables/linux/`<jupyter_code>!mkdir ./trained-envs-executables !mkdir ./trained-envs-executables/linux<jupyter_output><empty_output><jupyter_text>We downloaded the file Huggy.zip from https://github.com/huggingface/Huggy using `wget`<jupyter_code>!wget "https://github.com/huggingface/Huggy/raw/main/Huggy.zip" -O ./trained-envs-executables/linux/Huggy.zip %%capture !unzip -d ./trained-envs-executables/linux/ ./trained-envs-executables/linux/Huggy.zip<jupyter_output><empty_output><jupyter_text>Make sure your file is accessible<jupyter_code>!chmod -R 755 ./trained-envs-executables/linux/Huggy<jupyter_output><empty_output><jupyter_text>Let's recap how this environment works The State Space: what Huggy "perceives."Huggy doesn't "see" his environment. Instead, we provide him information about the environment:- The target (stick) position- The relative position between himself and the target- The orientation of his legs.Given all this information, Huggy **can decide which action to take next to fulfill his goal**. The Action Space: what moves Huggy can do**Joint motors drive huggy legs**. It means that to get the target, Huggy needs to **learn to rotate the joint motors of each of his legs correctly so he can move**. The Reward FunctionThe reward function is designed so that **Huggy will fulfill his goal** : fetch the stick.Remember that one of the foundations of Reinforcement Learning is the *reward hypothesis*: a goal can be described as the **maximization of the expected cumulative reward**.Here, our goal is that Huggy **goes towards the stick but without spinning too much**. Hence, our reward function must translate this goal.Our reward function:- *Orientation bonus*: we **reward him for getting close to the target**.- *Time penalty*: a fixed-time penalty given at every action to **force him to get to the stick as fast as possible**.- *Rotation penalty*: we penalize Huggy if **he spins too much and turns too quickly**.- *Getting to the target reward*: we reward Huggy for **reaching the target**. Create the Huggy config file- In ML-Agents, you define the **training hyperparameters into config.yaml files.**- For the scope of this notebook, we're not going to modify the hyperparameters, but if you want to try as an experiment, you should also try to modify some other hyperparameters, Unity provides very [good documentation explaining each of them here](https://github.com/Unity-Technologies/ml-agents/blob/main/docs/Training-Configuration-File.md).- But we need to create a config file for Huggy. - To do that click on Folder logo on the left of your screen. 
- Go to `/content/ml-agents/config/ppo` - Right mouse click and create a new file called `Huggy.yaml` - Copy and paste the content below 🔽<jupyter_code>behaviors: Huggy: trainer_type: ppo hyperparameters: batch_size: 2048 buffer_size: 20480 learning_rate: 0.0003 beta: 0.005 epsilon: 0.2 lambd: 0.95 num_epoch: 3 learning_rate_schedule: linear network_settings: normalize: true hidden_units: 512 num_layers: 3 vis_encode_type: simple reward_signals: extrinsic: gamma: 0.995 strength: 1.0 checkpoint_interval: 200000 keep_checkpoints: 15 max_steps: 2e6 time_horizon: 1000 summary_freq: 50000<jupyter_output><empty_output><jupyter_text>- Don't forget to save the file! - **In the case you want to modify the hyperparameters**, in Google Colab notebook, you can click here to open the config.yaml: `/content/ml-agents/config/ppo/Huggy.yaml`- For instance **if you want to save more models during the training** (for now, we save every 200,000 training timesteps). You need to modify: - `checkpoint_interval`: The number of training timesteps collected between each checkpoint. - `keep_checkpoints`: The maximum number of model checkpoints to keep.=> Just keep in mind that **decreasing the `checkpoint_interval` means more models to upload to the Hub and so a longer uploading time**We’re now ready to train our agent 🔥. Train our agentTo train our agent, we just need to **launch mlagents-learn and select the executable containing the environment.**With ML Agents, we run a training script. We define four parameters:1. `mlagents-learn `: the path where the hyperparameter config file is.2. `--env`: where the environment executable is.3. `--run-id`: the name you want to give to your training run id.4. `--no-graphics`: to not launch the visualization during the training.Train the model and use the `--resume` flag to continue training in case of interruption.> It will fail first time when you use `--resume`, try running the block again to bypass the error. The training will take 30 to 45min depending on your machine (don't forget to **set up a GPU**), go take a ☕️you deserve it 🤗.<jupyter_code>!mlagents-learn ./config/ppo/Huggy.yaml --env=./trained-envs-executables/linux/Huggy/Huggy --run-id="Huggy2" --no-graphics<jupyter_output><empty_output><jupyter_text>Push the agent to the 🤗 Hub- Now that we trained our agent, we’re **ready to push it to the Hub to be able to play with Huggy on your browser🔥.** To be able to share your model with the community there are three more steps to follow:1️⃣ (If it's not already done) create an account to HF ➡ https://huggingface.co/join2️⃣ Sign in and then, you need to store your authentication token from the Hugging Face website.- Create a new token (https://huggingface.co/settings/tokens) **with write role**- Copy the token- Run the cell below and paste the token<jupyter_code>from huggingface_hub import notebook_login notebook_login()<jupyter_output><empty_output><jupyter_text>If you don't want to use a Google Colab or a Jupyter Notebook, you need to use this command instead: `huggingface-cli login` Then, we simply need to run `mlagents-push-to-hf`. And we define 4 parameters:1. `--run-id`: the name of the training run id.2. `--local-dir`: where the agent was saved, it’s results/, so in my case results/First Training.3. `--repo-id`: the name of the Hugging Face repo you want to create or update. It’s always /If the repo does not exist **it will be created automatically**4. 
`--commit-message`: since HF repos are git repositories, you need to define a commit message.<jupyter_code>!mlagents-push-to-hf --run-id="HuggyTraining" --local-dir="./results/Huggy" --repo-id="ThomasSimonini/ppo-Huggy" --commit-message="Huggy"<jupyter_output><empty_output>
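As an editorial aside on the reward function described earlier in this notebook: the shaped reward is computed inside the Unity environment, but a rough Python sketch of how such components could be combined might look like this (all coefficient values are invented placeholders, not Huggy's actual settings):

```python
def huggy_step_reward(prev_dist_to_stick, dist_to_stick, angular_speed, reached_stick,
                      orientation_coeff=0.01, time_penalty=0.001,
                      rotation_coeff=0.005, target_reward=1.0):
    """Illustrative shaped reward; every coefficient here is a made-up placeholder."""
    reward = orientation_coeff * (prev_dist_to_stick - dist_to_stick)  # orientation bonus: getting closer
    reward -= time_penalty                                             # fixed per-step time penalty
    reward -= rotation_coeff * abs(angular_speed)                      # rotation penalty: don't spin too much
    if reached_stick:
        reward += target_reward                                        # reward for reaching the target
    return reward
```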
deep-rl-class/notebooks/bonus-unit1/bonus_unit1.ipynb/0
{ "file_path": "deep-rl-class/notebooks/bonus-unit1/bonus_unit1.ipynb", "repo_id": "deep-rl-class", "token_count": 2886 }
86
# Live 1: How the course works, Q&A, and playing with Huggy

In this first live stream, we explained how the course works (scope, units, challenges, and more) and answered your questions.

And finally, we saw some LunarLander agents you've trained and played with your Huggies 🐶

<Youtube id="JeJIswxyrsM" />

To know when the next live stream is scheduled, **check the Discord server**. We will also send **you an email**.

If you can't participate, don't worry, we record the live sessions.
deep-rl-class/units/en/live1/live1.mdx/0
{ "file_path": "deep-rl-class/units/en/live1/live1.mdx", "repo_id": "deep-rl-class", "token_count": 131 }
87
# What is Reinforcement Learning? [[what-is-reinforcement-learning]]

To understand Reinforcement Learning, let’s start with the big picture.

## The big picture [[the-big-picture]]

The idea behind Reinforcement Learning is that an agent (an AI) will learn from the environment by **interacting with it** (through trial and error) and **receiving rewards** (negative or positive) as feedback for performing actions.

Learning from interactions with the environment **comes from our natural experiences.**

For instance, imagine putting your little brother in front of a video game he never played, giving him a controller, and leaving him alone.

<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit1/Illustration_1.jpg" alt="Illustration_1" width="100%">

Your brother will interact with the environment (the video game) by pressing the right button (action). He got a coin, that’s a +1 reward. It’s positive, he just understood that in this game **he must get the coins.**

<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit1/Illustration_2.jpg" alt="Illustration_2" width="100%">

But then, **he presses the right button again** and he touches an enemy. He just died, so that's a -1 reward.

<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit1/Illustration_3.jpg" alt="Illustration_3" width="100%">

By interacting with his environment through trial and error, your little brother understands that **he needs to get coins in this environment but avoid the enemies.**

**Without any supervision**, the child will get better and better at playing the game.

That’s how humans and animals learn, **through interaction.** Reinforcement Learning is just a **computational approach to learning from actions.**

### A formal definition [[a-formal-definition]]

We can now make a formal definition:

<Tip>
Reinforcement learning is a framework for solving control tasks (also called decision problems) by building agents that learn from the environment by interacting with it through trial and error and receiving rewards (positive or negative) as unique feedback.
</Tip>

But how does Reinforcement Learning work?
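To make the interaction loop concrete, here is a minimal sketch of the agent-environment loop using the Gymnasium API (an editorial illustration, not part of the course code; it assumes the `gymnasium` package and the `CartPole-v1` environment are available, and uses a random policy in place of a learned one):

```python
import gymnasium as gym

env = gym.make("CartPole-v1")
observation, info = env.reset()

episode_return = 0.0
done = False
while not done:
    # A real agent would pick the action from its learned policy;
    # here we simply sample a random action for illustration.
    action = env.action_space.sample()
    observation, reward, terminated, truncated, info = env.step(action)
    episode_return += reward  # positive/negative feedback accumulates into the return
    done = terminated or truncated

print(f"Episode return: {episode_return}")
env.close()
```

The loop is exactly the trial-and-error cycle described above: act, observe the new state, receive a reward, repeat.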
deep-rl-class/units/en/unit1/what-is-rl.mdx/0
{ "file_path": "deep-rl-class/units/en/unit1/what-is-rl.mdx", "repo_id": "deep-rl-class", "token_count": 624 }
88
# Additional Readings [[additional-readings]] These are **optional readings** if you want to go deeper. - [Foundations of Deep RL Series, L2 Deep Q-Learning by Pieter Abbeel](https://youtu.be/Psrhxy88zww) - [Playing Atari with Deep Reinforcement Learning](https://arxiv.org/abs/1312.5602) - [Double Deep Q-Learning](https://papers.nips.cc/paper/2010/hash/091d584fced301b442654dd8c23b3fc9-Abstract.html) - [Prioritized Experience Replay](https://arxiv.org/abs/1511.05952)
deep-rl-class/units/en/unit3/additional-readings.mdx/0
{ "file_path": "deep-rl-class/units/en/unit3/additional-readings.mdx", "repo_id": "deep-rl-class", "token_count": 163 }
89
# Diving deeper into policy-gradient methods ## Getting the big picture We just learned that policy-gradient methods aim to find parameters \\( \theta \\) that **maximize the expected return**. The idea is that we have a *parameterized stochastic policy*. In our case, a neural network outputs a probability distribution over actions. The probability of taking each action is also called the *action preference*. If we take the example of CartPole-v1: - As input, we have a state. - As output, we have a probability distribution over actions at that state. <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit6/policy_based.png" alt="Policy based" /> Our goal with policy-gradient is to **control the probability distribution of actions** by tuning the policy such that **good actions (that maximize the return) are sampled more frequently in the future.** Each time the agent interacts with the environment, we tweak the parameters such that good actions will be sampled more likely in the future. But **how are we going to optimize the weights using the expected return**? The idea is that we're going to **let the agent interact during an episode**. And if we win the episode, we consider that each action taken was good and must be more sampled in the future since they lead to win. So for each state-action pair, we want to increase the \\(P(a|s)\\): the probability of taking that action at that state. Or decrease if we lost. The Policy-gradient algorithm (simplified) looks like this: <figure class="image table text-center m-0 w-full"> <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit6/pg_bigpicture.jpg" alt="Policy Gradient Big Picture"/> </figure> Now that we got the big picture, let's dive deeper into policy-gradient methods. ## Diving deeper into policy-gradient methods We have our stochastic policy \\(\pi\\) which has a parameter \\(\theta\\). This \\(\pi\\), given a state, **outputs a probability distribution of actions**. <figure class="image table text-center m-0 w-full"> <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit6/stochastic_policy.png" alt="Policy"/> </figure> Where \\(\pi_\theta(a_t|s_t)\\) is the probability of the agent selecting action \\(a_t\\) from state \\(s_t\\) given our policy. **But how do we know if our policy is good?** We need to have a way to measure it. To know that, we define a score/objective function called \\(J(\theta)\\). ### The objective function The *objective function* gives us the **performance of the agent** given a trajectory (state action sequence without considering reward (contrary to an episode)), and it outputs the *expected cumulative reward*. <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit6/objective.jpg" alt="Return"/> Let's give some more details on this formula: - The *expected return* (also called expected cumulative reward), is the weighted average (where the weights are given by \\(P(\tau;\theta)\\) of all possible values that the return \\(R(\tau)\\) can take). <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit6/expected_reward.png" alt="Return"/> - \\(R(\tau)\\) : Return from an arbitrary trajectory. To take this quantity and use it to calculate the expected return, we need to multiply it by the probability of each possible trajectory. 
- \\(P(\tau;\theta)\\) : Probability of each possible trajectory \\(\tau\\) (that probability depends on \\( \theta\\) since it defines the policy that it uses to select the actions of the trajectory, which has an impact on the states visited).

<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit6/probability.png" alt="Probability"/>

- \\(J(\theta)\\) : Expected return, we calculate it by summing for all trajectories, the probability of taking that trajectory given \\(\theta \\) multiplied by the return of this trajectory.

Our objective then is to maximize the expected cumulative reward by finding the \\(\theta \\) that will output the best action probability distributions:

<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit6/max_objective.png" alt="Max objective"/>

## Gradient Ascent and the Policy-gradient Theorem

Policy-gradient is an optimization problem: we want to find the values of \\(\theta\\) that maximize our objective function \\(J(\theta)\\), so we need to use **gradient-ascent**. It's the inverse of *gradient-descent* since it gives the direction of the steepest increase of \\(J(\theta)\\).

(If you need a refresher on the difference between gradient descent and gradient ascent [check this](https://www.baeldung.com/cs/gradient-descent-vs-ascent) and [this](https://stats.stackexchange.com/questions/258721/gradient-ascent-vs-gradient-descent-in-logistic-regression)).

Our update step for gradient-ascent is:

\\( \theta \leftarrow \theta + \alpha * \nabla_\theta J(\theta) \\)

We can repeatedly apply this update in the hopes that \\(\theta \\) converges to the value that maximizes \\(J(\theta)\\).

However, there are two problems with computing the derivative of \\(J(\theta)\\):
1. We can't calculate the true gradient of the objective function since it requires calculating the probability of each possible trajectory, which is computationally super expensive. So we want to **calculate a gradient estimation with a sample-based estimate (collect some trajectories)**.
2. We have another problem that I explain in the next optional section. To differentiate this objective function, we need to differentiate the state distribution, called the Markov Decision Process dynamics. This is attached to the environment. It gives us the probability of the environment going into the next state, given the current state and the action taken by the agent. The problem is that we can't differentiate it because we might not know about it.

<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit6/probability.png" alt="Probability"/>

Fortunately, we're going to use a solution called the Policy Gradient Theorem that will help us to reformulate the objective function into a differentiable function that does not involve the differentiation of the state distribution.

<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit6/policy_gradient_theorem.png" alt="Policy Gradient"/>

If you want to understand how we derive this formula for approximating the gradient, check out the next (optional) section.

## The Reinforce algorithm (Monte Carlo Reinforce)

The Reinforce algorithm, also called Monte-Carlo policy-gradient, is a policy-gradient algorithm that **uses an estimated return from an entire episode to update the policy parameter** \\(\theta\\):

In a loop:
- Use the policy \\(\pi_\theta\\) to collect an episode \\(\tau\\)
- Use the episode to estimate the gradient \\(\hat{g} = \nabla_\theta J(\theta)\\)

<figure class="image table text-center m-0 w-full">
  <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit6/policy_gradient_one.png" alt="Policy Gradient"/>
</figure>

- Update the weights of the policy: \\(\theta \leftarrow \theta + \alpha \hat{g}\\)

We can interpret this update as follows:
- \\(\nabla_\theta log \pi_\theta(a_t|s_t)\\) is the direction of **steepest increase of the (log) probability** of selecting action \\(a_t\\) from state \\(s_t\\). This tells us **how we should change the weights of the policy** if we want to increase/decrease the log probability of selecting action \\(a_t\\) at state \\(s_t\\).
- \\(R(\tau)\\) is the scoring function:
  - If the return is high, it will **push up the probabilities** of the (state, action) combinations.
  - Otherwise, if the return is low, it will **push down the probabilities** of the (state, action) combinations.

We can also **collect multiple episodes (trajectories)** to estimate the gradient:

<figure class="image table text-center m-0 w-full">
  <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit6/policy_gradient_multiple.png" alt="Policy Gradient"/>
</figure>
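As an illustration of the Reinforce update described above, here is a minimal PyTorch-style sketch (an editorial sketch, not the course's reference implementation; it assumes you have already collected an episode as a list of `(log_prob, reward)` pairs with the current policy, where each `log_prob` is the tensor \\(log \pi_\theta(a_t|s_t)\\), and that `policy_optimizer` optimizes the policy network's parameters):

```python
import torch


def reinforce_update(policy_optimizer, episode, gamma=0.99):
    """One Monte Carlo policy-gradient (Reinforce) step computed from a single episode."""
    # Compute the discounted return G_t for every timestep of the episode
    returns = []
    g = 0.0
    for _, reward in reversed(episode):
        g = reward + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)

    # Gradient *ascent* on J(theta) is implemented as gradient descent
    # on the negated objective: -sum_t log pi_theta(a_t|s_t) * G_t
    loss = torch.stack([-log_prob * g_t for (log_prob, _), g_t in zip(episode, returns)]).sum()

    policy_optimizer.zero_grad()
    loss.backward()
    policy_optimizer.step()
```

The scoring term \\(R(\tau)\\) (here the discounted return) scales each log-probability gradient, so actions from high-return episodes are pushed up and actions from low-return episodes are pushed down.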
deep-rl-class/units/en/unit4/policy-gradient.mdx/0
{ "file_path": "deep-rl-class/units/en/unit4/policy-gradient.mdx", "repo_id": "deep-rl-class", "token_count": 2365 }
90
# Introduction [[introduction]] <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit8/thumbnail.png" alt="Thumbnail"/> In unit 4, we learned about our first Policy-Based algorithm called **Reinforce**. In Policy-Based methods, **we aim to optimize the policy directly without using a value function**. More precisely, Reinforce is part of a subclass of *Policy-Based Methods* called *Policy-Gradient methods*. This subclass optimizes the policy directly by **estimating the weights of the optimal policy using Gradient Ascent**. We saw that Reinforce worked well. However, because we use Monte-Carlo sampling to estimate return (we use an entire episode to calculate the return), **we have significant variance in policy gradient estimation**. Remember that the policy gradient estimation is **the direction of the steepest increase in return**. In other words, how to update our policy weights so that actions that lead to good returns have a higher probability of being taken. The Monte Carlo variance, which we will further study in this unit, **leads to slower training since we need a lot of samples to mitigate it**. So today we'll study **Actor-Critic methods**, a hybrid architecture combining value-based and Policy-Based methods that helps to stabilize the training by reducing the variance using: - *An Actor* that controls **how our agent behaves** (Policy-Based method) - *A Critic* that measures **how good the taken action is** (Value-Based method) We'll study one of these hybrid methods, Advantage Actor Critic (A2C), **and train our agent using Stable-Baselines3 in robotic environments**. We'll train: - A robotic arm 🦾 to move to the correct position. Sound exciting? Let's get started!
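As a minimal preview of what the hands-on will do, here is an illustrative Stable-Baselines3 A2C sketch (editorial example: it uses a placeholder Gymnasium task rather than the unit's actual robotics environment, and assumes a recent Stable-Baselines3 release with Gymnasium support):

```python
import gymnasium as gym
from stable_baselines3 import A2C

# Placeholder environment for illustration; the unit's hands-on uses a robotics task.
env = gym.make("Pendulum-v1")

model = A2C(policy="MlpPolicy", env=env, verbose=1)
model.learn(total_timesteps=10_000)  # kept deliberately tiny for the sketch

# Roll out the trained actor for one episode
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
env.close()
```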
deep-rl-class/units/en/unit6/introduction.mdx/0
{ "file_path": "deep-rl-class/units/en/unit6/introduction.mdx", "repo_id": "deep-rl-class", "token_count": 427 }
91
# Hands-on: advanced Deep Reinforcement Learning. Using Sample Factory to play Doom from pixels <CourseFloatingBanner classNames="absolute z-10 right-0 top-0" notebooks={[ {label: "Google Colab", value: "https://colab.research.google.com/github/huggingface/deep-rl-class/blob/main/notebooks/unit8/unit8_part2.ipynb"} ]} askForHelpUrl="http://hf.co/join/discord" /> The colab notebook: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/deep-rl-class/blob/master/notebooks/unit8/unit8_part2.ipynb) # Unit 8 Part 2: Advanced Deep Reinforcement Learning. Using Sample Factory to play Doom from pixels <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit9/thumbnail2.png" alt="Thumbnail"/> In this notebook, we will learn how to train a Deep Neural Network to collect objects in a 3D environment based on the game of Doom, a video of the resulting policy is shown below. We train this policy using [Sample Factory](https://www.samplefactory.dev/), an asynchronous implementation of the PPO algorithm. Please note the following points: * [Sample Factory](https://www.samplefactory.dev/) is an advanced RL framework and **only functions on Linux and Mac** (not Windows). * The framework performs best on a **GPU machine with many CPU cores**, where it can achieve speeds of 100k interactions per second. The resources available on a standard Colab notebook **limit the performance of this library**. So the speed in this setting **does not reflect the real-world performance**. * Benchmarks for Sample Factory are available in a number of settings, check out the [examples](https://github.com/alex-petrenko/sample-factory/tree/master/sf_examples) if you want to find out more. ```python from IPython.display import HTML HTML( """<video width="640" height="480" controls> <source src="https://huggingface.co/edbeeching/doom_health_gathering_supreme_3333/resolve/main/replay.mp4" type="video/mp4">Your browser does not support the video tag.</video>""" ) ``` To validate this hands-on for the [certification process](https://huggingface.co/deep-rl-course/en/unit0/introduction#certification-process), you need to push one model: - `doom_health_gathering_supreme` get a result of >= 5. To find your result, go to the [leaderboard](https://huggingface.co/spaces/huggingface-projects/Deep-Reinforcement-Learning-Leaderboard) and find your model, **the result = mean_reward - std of reward** If you don't find your model, **go to the bottom of the page and click on the refresh button** For more information about the certification process, check this section 👉 https://huggingface.co/deep-rl-course/en/unit0/introduction#certification-process ## Set the GPU 💪 - To **accelerate the agent's training, we'll use a GPU**. To do that, go to `Runtime > Change Runtime type` <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/gpu-step1.jpg" alt="GPU Step 1"> - `Hardware Accelerator > GPU` <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/gpu-step2.jpg" alt="GPU Step 2"> Before starting to train our agent, let's **study the library and environments we're going to use**. ## Sample Factory [Sample Factory](https://www.samplefactory.dev/) is one of the **fastest RL libraries focused on very efficient synchronous and asynchronous implementations of policy gradients (PPO)**. 
Sample Factory is thoroughly **tested, used by many researchers and practitioners**, and is actively maintained. Our implementation is known to **reach SOTA performance in a variety of domains while minimizing RL experiment training time and hardware requirements**.

<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit9/samplefactoryenvs.png" alt="Sample factory"/>

### Key features

- Highly optimized algorithm [architecture](https://www.samplefactory.dev/06-architecture/overview/) for maximum learning throughput
- [Synchronous and asynchronous](https://www.samplefactory.dev/07-advanced-topics/sync-async/) training regimes
- [Serial (single-process) mode](https://www.samplefactory.dev/07-advanced-topics/serial-mode/) for easy debugging
- Optimal performance in both CPU-based and [GPU-accelerated environments](https://www.samplefactory.dev/09-environment-integrations/isaacgym/)
- Single- & multi-agent training, self-play, supports [training multiple policies](https://www.samplefactory.dev/07-advanced-topics/multi-policy-training/) at once on one or many GPUs
- Population-Based Training ([PBT](https://www.samplefactory.dev/07-advanced-topics/pbt/))
- Discrete, continuous, hybrid action spaces
- Vector-based, image-based, dictionary observation spaces
- Automatically creates a model architecture by parsing action/observation space specification. Supports [custom model architectures](https://www.samplefactory.dev/03-customization/custom-models/)
- Designed to be imported into other projects, [custom environments](https://www.samplefactory.dev/03-customization/custom-environments/) are first-class citizens
- Detailed [WandB and Tensorboard summaries](https://www.samplefactory.dev/05-monitoring/metrics-reference/), [custom metrics](https://www.samplefactory.dev/05-monitoring/custom-metrics/)
- [HuggingFace 🤗 integration](https://www.samplefactory.dev/10-huggingface/huggingface/) (upload trained models and metrics to the Hub)
- [Multiple](https://www.samplefactory.dev/09-environment-integrations/mujoco/) [example](https://www.samplefactory.dev/09-environment-integrations/atari/) [environment](https://www.samplefactory.dev/09-environment-integrations/vizdoom/) [integrations](https://www.samplefactory.dev/09-environment-integrations/dmlab/) with tuned parameters and trained models

All of the above policies are available on the 🤗 hub. Search for the tag [sample-factory](https://huggingface.co/models?library=sample-factory&sort=downloads)

### How sample-factory works

Sample-factory is one of the **most highly optimized RL implementations available to the community**.

It works by **spawning multiple processes that run rollout workers, inference workers and a learner worker**. The *workers* **communicate through shared memory, which lowers the communication cost between processes**.

The *rollout workers* interact with the environment and send observations to the *inference workers*.

The *inference workers* query a fixed version of the policy and **send actions back to the rollout workers**.

After *k* steps, the rollout workers send a trajectory of experience to the learner worker, **which it uses to update the agent’s policy network**.
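To build intuition for this data flow, here is a deliberately simplified, single-process toy sketch (editorial illustration only: the real Sample Factory runs these roles in separate worker processes that exchange tensors through shared memory, and `env`, `policy` and `learner` below are assumed duck-typed placeholder objects, not Sample Factory APIs):

```python
from collections import deque


def toy_rollout_learner_loop(env, policy, learner, rollout_len=32, num_updates=100):
    """Single-process mental model of the rollout -> inference -> learner cycle."""
    trajectory = deque()
    obs, _ = env.reset()
    for _ in range(num_updates):
        for _ in range(rollout_len):
            action = policy.act(obs)  # "inference worker": query a fixed policy version
            next_obs, reward, terminated, truncated, _ = env.step(action)  # "rollout worker"
            trajectory.append((obs, action, reward))
            obs = next_obs
            if terminated or truncated:
                obs, _ = env.reset()
        # "learner worker": consume the collected trajectory and update the policy (e.g. a PPO step)
        learner.update(policy, list(trajectory))
        trajectory.clear()
```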
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit9/samplefactory.png" alt="Sample factory"/>

### Actor Critic models in Sample-factory

Actor Critic models in Sample Factory are composed of three components:

- **Encoder** - Processes input observations (images, vectors) and maps them to a vector. This is the part of the model you will most likely want to customize.
- **Core** - Integrates vectors from one or more encoders, and can optionally include a single- or multi-layer LSTM/GRU in a memory-based agent.
- **Decoder** - Applies additional layers to the output of the model core before computing the policy and value outputs.

The library has been designed to automatically support any observation and action spaces. Users can easily add their custom models. You can find out more in the [documentation](https://www.samplefactory.dev/03-customization/custom-models/#actor-critic-models-in-sample-factory).

## ViZDoom

[ViZDoom](https://vizdoom.cs.put.edu.pl/) is an **open-source Python interface for the Doom Engine**.

The library was created in 2016 by Marek Wydmuch and Michal Kempka at the Institute of Computing Science, Poznan University of Technology, Poland.

The library enables the **training of agents directly from the screen pixels in a number of scenarios**, including team deathmatch, shown in the video below. Because the ViZDoom environment is based on a game that was created in the 90s, it can be run on modern hardware at accelerated speeds, **allowing us to learn complex AI behaviors fairly quickly**.

The library includes features such as:

- Multi-platform (Linux, macOS, Windows),
- API for Python and C++,
- [OpenAI Gym](https://www.gymlibrary.dev/) environment wrappers,
- Easy-to-create custom scenarios (visual editors, scripting language, and examples available),
- Async and sync single-player and multiplayer modes,
- Lightweight (few MBs) and fast (up to 7000 fps in sync mode, single-threaded),
- Customizable resolution and rendering parameters,
- Access to the depth buffer (3D vision),
- Automatic labeling of game objects visible in the frame,
- Access to the audio buffer,
- Access to the list of actors/objects and map geometry,
- Off-screen rendering and episode recording,
- Time scaling in async mode.

## We first need to install some dependencies that are required for the ViZDoom environment

Now that our Colab runtime is set up, we can start by installing the dependencies required to run ViZDoom on Linux.

If you are following along on a Mac, you will want to follow the installation instructions on the [github page](https://github.com/Farama-Foundation/ViZDoom/blob/master/doc/Quickstart.md#-quickstart-for-macos-and-anaconda3-python-36).
```bash
# Install ViZDoom deps from
# https://github.com/mwydmuch/ViZDoom/blob/master/doc/Building.md#-linux

apt-get install build-essential zlib1g-dev libsdl2-dev libjpeg-dev \
nasm tar libbz2-dev libgtk2.0-dev cmake git libfluidsynth-dev libgme-dev \
libopenal-dev timidity libwildmidi-dev unzip ffmpeg

# Boost libraries
apt-get install libboost-all-dev

# Lua binding dependencies
apt-get install liblua5.1-dev
```

## Then we can install Sample Factory and ViZDoom
- This can take 7min

```bash
pip install sample-factory
pip install vizdoom
```

## Setting up the Doom Environment in sample-factory

```python
import functools

from sample_factory.algo.utils.context import global_model_factory
from sample_factory.cfg.arguments import parse_full_cfg, parse_sf_args
from sample_factory.envs.env_utils import register_env
from sample_factory.train import run_rl

from sf_examples.vizdoom.doom.doom_model import make_vizdoom_encoder
from sf_examples.vizdoom.doom.doom_params import add_doom_env_args, doom_override_defaults
from sf_examples.vizdoom.doom.doom_utils import DOOM_ENVS, make_doom_env_from_spec


# Registers all the ViZDoom environments
def register_vizdoom_envs():
    for env_spec in DOOM_ENVS:
        make_env_func = functools.partial(make_doom_env_from_spec, env_spec)
        register_env(env_spec.name, make_env_func)


# Sample Factory allows the registration of a custom Neural Network architecture
# See https://github.com/alex-petrenko/sample-factory/blob/master/sf_examples/vizdoom/doom/doom_model.py for more details
def register_vizdoom_models():
    global_model_factory().register_encoder_factory(make_vizdoom_encoder)


def register_vizdoom_components():
    register_vizdoom_envs()
    register_vizdoom_models()


# parse the command line args and create a config
def parse_vizdoom_cfg(argv=None, evaluation=False):
    parser, _ = parse_sf_args(argv=argv, evaluation=evaluation)
    # parameters specific to Doom envs
    add_doom_env_args(parser)
    # override Doom default values for algo parameters
    doom_override_defaults(parser)
    # second parsing pass yields the final configuration
    final_cfg = parse_full_cfg(parser, argv)
    return final_cfg
```

Now that the setup is complete, we can train the agent. We have chosen here to learn a ViZDoom task called `Health Gathering Supreme`.

### The scenario: Health Gathering Supreme

<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit9/Health-Gathering-Supreme.png" alt="Health-Gathering-Supreme"/>

The objective of this scenario is to **teach the agent how to survive without knowing what makes it survive**. The agent knows only that **life is precious** and death is bad, so **it must learn what prolongs its existence and that its health is connected with survival**.

The map is a rectangle with walls and a green, acidic floor which **hurts the player periodically**. Initially there are some medkits spread uniformly over the map. A new medkit falls from the skies every now and then. **Medkits heal some portions of the player's health** - to survive, the agent needs to pick them up. The episode finishes after the player's death or on timeout.

Further configuration:
- Living_reward = 1
- 3 available buttons: turn left, turn right, move forward
- 1 available game variable: HEALTH
- death penalty = 100

You can find out more about the scenarios available in ViZDoom [here](https://github.com/Farama-Foundation/ViZDoom/tree/master/scenarios).
There are also a number of more complex scenarios that have been created for ViZDoom, such as the ones detailed on [this github page](https://github.com/edbeeching/3d_control_deep_rl).

## Training the agent

- We're going to train the agent for 4000000 steps. It will take approximately 20min

```python
## Start the training, this should take around 15 minutes
register_vizdoom_components()

# The scenario we train on today is health gathering
# other scenarios include "doom_basic", "doom_two_colors_easy", "doom_dm", "doom_dwango5", "doom_my_way_home", "doom_deadly_corridor", "doom_defend_the_center", "doom_defend_the_line"
env = "doom_health_gathering_supreme"
cfg = parse_vizdoom_cfg(
    argv=[f"--env={env}", "--num_workers=8", "--num_envs_per_worker=4", "--train_for_env_steps=4000000"]
)

status = run_rl(cfg)
```

## Let's take a look at the performance of the trained policy and output a video of the agent.

```python
from sample_factory.enjoy import enjoy

cfg = parse_vizdoom_cfg(
    argv=[f"--env={env}", "--num_workers=1", "--save_video", "--no_render", "--max_num_episodes=10"], evaluation=True
)
status = enjoy(cfg)
```

## Now let's visualize the performance of the agent

```python
from base64 import b64encode
from IPython.display import HTML

mp4 = open("/content/train_dir/default_experiment/replay.mp4", "rb").read()
data_url = "data:video/mp4;base64," + b64encode(mp4).decode()
HTML(
    """
<video width=640 controls>
      <source src="%s" type="video/mp4">
</video>
"""
    % data_url
)
```

The agent has learned something, but its performance could be better. We would clearly need to train for longer. But let's upload this model to the Hub.

## Now let's upload your checkpoint and video to the Hugging Face Hub

To be able to share your model with the community, there are three more steps to follow:

1️⃣ (If it's not already done) create an account on HF ➡ https://huggingface.co/join

2️⃣ Sign in and get your authentication token from the Hugging Face website.
- Create a new token (https://huggingface.co/settings/tokens) **with write role**

<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/create-token.jpg" alt="Create HF Token">

- Copy the token
- Run the cell below and paste the token

If you don't want to use Google Colab or a Jupyter Notebook, you need to use this command instead: `huggingface-cli login`

```python
from huggingface_hub import notebook_login
notebook_login()
!git config --global credential.helper store
```

```python
from sample_factory.enjoy import enjoy

hf_username = "ThomasSimonini"  # insert your HuggingFace username here

cfg = parse_vizdoom_cfg(
    argv=[
        f"--env={env}",
        "--num_workers=1",
        "--save_video",
        "--no_render",
        "--max_num_episodes=10",
        "--max_num_frames=100000",
        "--push_to_hub",
        f"--hf_repository={hf_username}/rl_course_vizdoom_health_gathering_supreme",
    ],
    evaluation=True,
)
status = enjoy(cfg)
```

## Let's load another model

This agent's performance was good, but we can do better! Let's download and visualize an agent trained for 10B timesteps from the hub.
```bash
# download the agent from the hub
python -m sample_factory.huggingface.load_from_hub -r edbeeching/doom_health_gathering_supreme_2222 -d ./train_dir
```

```bash
ls train_dir/doom_health_gathering_supreme_2222
```

```python
env = "doom_health_gathering_supreme"
cfg = parse_vizdoom_cfg(
    argv=[
        f"--env={env}",
        "--num_workers=1",
        "--save_video",
        "--no_render",
        "--max_num_episodes=10",
        "--experiment=doom_health_gathering_supreme_2222",
        "--train_dir=train_dir",
    ],
    evaluation=True,
)
status = enjoy(cfg)
```

```python
mp4 = open("/content/train_dir/doom_health_gathering_supreme_2222/replay.mp4", "rb").read()
data_url = "data:video/mp4;base64," + b64encode(mp4).decode()
HTML(
    """
<video width=640 controls>
      <source src="%s" type="video/mp4">
</video>
"""
    % data_url
)
```

## Some additional challenges 🏆: Doom Deathmatch

Training an agent to play a Doom deathmatch **takes many hours on a more beefy machine than is available in Colab**.

Fortunately, we have **already trained an agent in this scenario and it is available in the 🤗 Hub!** Let's download the model and visualize the agent's performance.

```bash
# Download the agent from the hub
python -m sample_factory.huggingface.load_from_hub -r edbeeching/doom_deathmatch_bots_2222 -d ./train_dir
```

Given that the agent plays for a long time, the video generation can take **10 minutes**.

```python
from sample_factory.enjoy import enjoy

register_vizdoom_components()

env = "doom_deathmatch_bots"
cfg = parse_vizdoom_cfg(
    argv=[
        f"--env={env}",
        "--num_workers=1",
        "--save_video",
        "--no_render",
        "--max_num_episodes=1",
        "--experiment=doom_deathmatch_bots_2222",
        "--train_dir=train_dir",
    ],
    evaluation=True,
)
status = enjoy(cfg)

mp4 = open("/content/train_dir/doom_deathmatch_bots_2222/replay.mp4", "rb").read()
data_url = "data:video/mp4;base64," + b64encode(mp4).decode()
HTML(
    """
<video width=640 controls>
      <source src="%s" type="video/mp4">
</video>
"""
    % data_url
)
```

You **can try to train your agent in this environment** using the code above, but not on colab.
**Good luck 🤞**

If you prefer an easier scenario, **why not try training in another ViZDoom scenario such as `doom_deadly_corridor` or `doom_defend_the_center`?**

---

This concludes the last unit. But we are not finished yet! 🤗 The following **bonus section includes some of the most interesting, advanced, and cutting-edge work in Deep Reinforcement Learning**.

## Keep learning, stay awesome 🤗
deep-rl-class/units/en/unit8/hands-on-sf.mdx/0
{ "file_path": "deep-rl-class/units/en/unit8/hands-on-sf.mdx", "repo_id": "deep-rl-class", "token_count": 5955 }
92
# Generalization in Reinforcement Learning

Generalization plays a pivotal role in the realm of Reinforcement Learning. While **RL algorithms demonstrate good performance in controlled environments**, the real world presents a **unique challenge due to its non-stationary and open-ended nature**.

As a result, the development of RL algorithms that remain robust in the face of environmental variations, coupled with the capability to transfer and adapt to uncharted yet analogous tasks and settings, becomes fundamental for real-world applications of RL.

If you're interested in diving deeper into this research subject, we recommend exploring the following resources:

- [Generalization in Reinforcement Learning by Robert Kirk](https://robertkirk.github.io/2022/01/17/generalisation-in-reinforcement-learning-survey.html): this comprehensive survey provides an insightful **overview of the concept of generalization in RL**, making it an excellent starting point for your exploration.
- [Improving Generalization in Reinforcement Learning using Policy Similarity Embeddings](https://blog.research.google/2021/09/improving-generalization-in.html?m=1)
deep-rl-class/units/en/unitbonus3/generalisation.mdx/0
{ "file_path": "deep-rl-class/units/en/unitbonus3/generalisation.mdx", "repo_id": "deep-rl-class", "token_count": 250 }
93
cff-version: 1.2.0 title: 'Diffusers: State-of-the-art diffusion models' message: >- If you use this software, please cite it using the metadata from this file. type: software authors: - given-names: Patrick family-names: von Platen - given-names: Suraj family-names: Patil - given-names: Anton family-names: Lozhkov - given-names: Pedro family-names: Cuenca - given-names: Nathan family-names: Lambert - given-names: Kashif family-names: Rasul - given-names: Mishig family-names: Davaadorj - given-names: Dhruv family-names: Nair - given-names: Sayak family-names: Paul - given-names: Steven family-names: Liu - given-names: William family-names: Berman - given-names: Yiyi family-names: Xu - given-names: Thomas family-names: Wolf repository-code: 'https://github.com/huggingface/diffusers' abstract: >- Diffusers provides pretrained diffusion models across multiple modalities, such as vision and audio, and serves as a modular toolbox for inference and training of diffusion models. keywords: - deep-learning - pytorch - image-generation - hacktoberfest - diffusion - text2image - image2image - score-based-generative-modeling - stable-diffusion - stable-diffusion-diffusers license: Apache-2.0 version: 0.12.1
diffusers/CITATION.cff/0
{ "file_path": "diffusers/CITATION.cff", "repo_id": "diffusers", "token_count": 460 }
94
import argparse import sys sys.path.append(".") from base_classes import TextToImageBenchmark, TurboTextToImageBenchmark # noqa: E402 ALL_T2I_CKPTS = [ "runwayml/stable-diffusion-v1-5", "segmind/SSD-1B", "stabilityai/stable-diffusion-xl-base-1.0", "kandinsky-community/kandinsky-2-2-decoder", "warp-ai/wuerstchen", "stabilityai/sdxl-turbo", ] if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument( "--ckpt", type=str, default="runwayml/stable-diffusion-v1-5", choices=ALL_T2I_CKPTS, ) parser.add_argument("--batch_size", type=int, default=1) parser.add_argument("--num_inference_steps", type=int, default=50) parser.add_argument("--model_cpu_offload", action="store_true") parser.add_argument("--run_compile", action="store_true") args = parser.parse_args() benchmark_cls = None if "turbo" in args.ckpt: benchmark_cls = TurboTextToImageBenchmark else: benchmark_cls = TextToImageBenchmark benchmark_pipe = benchmark_cls(args) benchmark_pipe.benchmark(args)
diffusers/benchmarks/benchmark_text_to_image.py/0
{ "file_path": "diffusers/benchmarks/benchmark_text_to_image.py", "repo_id": "diffusers", "token_count": 480 }
95
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # ControlNet The ControlNet model was introduced in [Adding Conditional Control to Text-to-Image Diffusion Models](https://huggingface.co/papers/2302.05543) by Lvmin Zhang, Anyi Rao, Maneesh Agrawala. It provides a greater degree of control over text-to-image generation by conditioning the model on additional inputs such as edge maps, depth maps, segmentation maps, and keypoints for pose detection. The abstract from the paper is: *We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with "zero convolutions" (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, eg, edges, depth, segmentation, human pose, etc, with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models.* ## Loading from the original format By default the [`ControlNetModel`] should be loaded with [`~ModelMixin.from_pretrained`], but it can also be loaded from the original format using [`FromOriginalControlnetMixin.from_single_file`] as follows: ```py from diffusers import StableDiffusionControlNetPipeline, ControlNetModel url = "https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11p_sd15_canny.pth" # can also be a local path controlnet = ControlNetModel.from_single_file(url) url = "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned.safetensors" # can also be a local path pipe = StableDiffusionControlNetPipeline.from_single_file(url, controlnet=controlnet) ``` ## ControlNetModel [[autodoc]] ControlNetModel ## ControlNetOutput [[autodoc]] models.controlnet.ControlNetOutput ## FlaxControlNetModel [[autodoc]] FlaxControlNetModel ## FlaxControlNetOutput [[autodoc]] models.controlnet_flax.FlaxControlNetOutput
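The snippet above covers loading from the original single-file format. For the default [`~ModelMixin.from_pretrained`] path mentioned at the top of this page, a minimal sketch is shown below; the checkpoint ids (`lllyasviel/sd-controlnet-canny` and `runwayml/stable-diffusion-v1-5`) and the local conditioning image are assumptions, and any compatible ControlNet/Stable Diffusion pair and conditioning input can be substituted.

```py
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Assumed checkpoint ids; swap in any compatible ControlNet / Stable Diffusion pair
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# This checkpoint expects a canny edge map as the conditioning image (hypothetical local file)
canny_image = load_image("canny_edge_map.png")

image = pipe("a futuristic city at night", image=canny_image, num_inference_steps=30).images[0]
image.save("controlnet_out.png")
```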
diffusers/docs/source/en/api/models/controlnet.md/0
{ "file_path": "diffusers/docs/source/en/api/models/controlnet.md", "repo_id": "diffusers", "token_count": 770 }
96
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Kandinsky 3

Kandinsky 3 was created by [Vladimir Arkhipkin](https://github.com/oriBetelgeuse), [Anastasia Maltseva](https://github.com/NastyaMittseva), [Igor Pavlov](https://github.com/boomb0om), [Andrei Filatov](https://github.com/anvilarth), [Arseniy Shakhmatov](https://github.com/cene555), [Andrey Kuznetsov](https://github.com/kuznetsoffandrey), [Denis Dimitrov](https://github.com/denndimitrov), [Zein Shaheen](https://github.com/zeinsh)

The description from its GitHub page:

*Kandinsky 3.0 is an open-source text-to-image diffusion model built upon the Kandinsky2-x model family. In comparison to its predecessors, enhancements have been made to the text understanding and visual quality of the model, achieved by increasing the size of the text encoder and Diffusion U-Net models, respectively.*

Its architecture includes 3 main components:
1. [FLAN-UL2](https://huggingface.co/google/flan-ul2), which is an encoder-decoder model based on the T5 architecture.
2. A new U-Net architecture featuring BigGAN-deep blocks, which doubles the depth while maintaining the same number of parameters.
3. Sber-MoVQGAN, a decoder proven to have superior results in image restoration.

The original codebase can be found at [ai-forever/Kandinsky-3](https://github.com/ai-forever/Kandinsky-3).

<Tip>

Check out the [Kandinsky Community](https://huggingface.co/kandinsky-community) organization on the Hub for the official model checkpoints for tasks like text-to-image, image-to-image, and inpainting.

</Tip>

<Tip>

Make sure to check out the schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.

</Tip>

## Kandinsky3Pipeline

[[autodoc]] Kandinsky3Pipeline
	- all
	- __call__

## Kandinsky3Img2ImgPipeline

[[autodoc]] Kandinsky3Img2ImgPipeline
	- all
	- __call__
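As a rough usage sketch, the text-to-image pipeline follows the usual `DiffusionPipeline` loading pattern. The checkpoint id `kandinsky-community/kandinsky-3` and the `fp16` variant are assumptions here, so check the Kandinsky Community organization linked above for the exact repository names.

```py
import torch
from diffusers import Kandinsky3Pipeline

# Assumed checkpoint id and variant; see the Kandinsky Community org on the Hub
pipe = Kandinsky3Pipeline.from_pretrained(
    "kandinsky-community/kandinsky-3", variant="fp16", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()  # keeps memory usage manageable on smaller GPUs

prompt = "A photograph of the inside of a subway train, cinematic lighting"
image = pipe(prompt, num_inference_steps=25, guidance_scale=3.0).images[0]
image.save("kandinsky3.png")
```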
diffusers/docs/source/en/api/pipelines/kandinsky3.md/0
{ "file_path": "diffusers/docs/source/en/api/pipelines/kandinsky3.md", "repo_id": "diffusers", "token_count": 766 }
97
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # Super-resolution The Stable Diffusion upscaler diffusion model was created by the researchers and engineers from [CompVis](https://github.com/CompVis), [Stability AI](https://stability.ai/), and [LAION](https://laion.ai/). It is used to enhance the resolution of input images by a factor of 4. <Tip> Make sure to check out the Stable Diffusion [Tips](overview#tips) section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you're interested in using one of the official checkpoints for a task, explore the [CompVis](https://huggingface.co/CompVis), [Runway](https://huggingface.co/runwayml), and [Stability AI](https://huggingface.co/stabilityai) Hub organizations! </Tip> ## StableDiffusionUpscalePipeline [[autodoc]] StableDiffusionUpscalePipeline - all - __call__ - enable_attention_slicing - disable_attention_slicing - enable_xformers_memory_efficient_attention - disable_xformers_memory_efficient_attention ## StableDiffusionPipelineOutput [[autodoc]] pipelines.stable_diffusion.StableDiffusionPipelineOutput
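A minimal sketch of how the pipeline is typically used is shown below; the checkpoint id (`stabilityai/stable-diffusion-x4-upscaler`) and the local example image are assumptions, so substitute your own low-resolution input.

```py
import torch
from diffusers import StableDiffusionUpscalePipeline
from diffusers.utils import load_image

# Assumed checkpoint id for the 4x upscaler
pipeline = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

# Use any small RGB image (hypothetical local file); the output is 4x larger in each dimension
low_res_img = load_image("low_res_cat.png").resize((128, 128))

upscaled_image = pipeline(prompt="a white cat", image=low_res_img).images[0]
upscaled_image.save("upsampled_cat.png")
```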
diffusers/docs/source/en/api/pipelines/stable_diffusion/upscale.md/0
{ "file_path": "diffusers/docs/source/en/api/pipelines/stable_diffusion/upscale.md", "repo_id": "diffusers", "token_count": 475 }
98
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# DPMSolverSinglestepScheduler

`DPMSolverSinglestepScheduler` is a single-step scheduler from [DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps](https://huggingface.co/papers/2206.00927) and [DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models](https://huggingface.co/papers/2211.01095) by Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu.

DPMSolver (and the improved version DPMSolver++) is a fast dedicated high-order solver for diffusion ODEs with a convergence order guarantee. Empirically, DPMSolver sampling with only 20 steps can generate high-quality samples, and it can generate quite good samples even in 10 steps.

The original implementation can be found at [LuChengTHU/dpm-solver](https://github.com/LuChengTHU/dpm-solver).

## Tips

It is recommended to set `solver_order` to 2 for guided sampling, and `solver_order=3` for unconditional sampling.

Dynamic thresholding from [Imagen](https://huggingface.co/papers/2205.11487) is supported, and for pixel-space diffusion models, you can set both `algorithm_type="dpmsolver++"` and `thresholding=True` to use dynamic thresholding. This thresholding method is unsuitable for latent-space diffusion models such as Stable Diffusion.

## DPMSolverSinglestepScheduler
[[autodoc]] DPMSolverSinglestepScheduler

## SchedulerOutput
[[autodoc]] schedulers.scheduling_utils.SchedulerOutput
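As a hedged sketch of how this scheduler is typically swapped into an existing pipeline (the checkpoint id is an assumption; any Stable Diffusion-style pipeline works the same way):

```py
import torch
from diffusers import DiffusionPipeline, DPMSolverSinglestepScheduler

# Assumed checkpoint id; any Stable Diffusion-style pipeline can be used
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")

# Reuse the pipeline's existing scheduler config and switch to single-step DPM-Solver++
pipe.scheduler = DPMSolverSinglestepScheduler.from_config(pipe.scheduler.config)

# Few-step sampling is where DPM-Solver shines
image = pipe("an astronaut riding a horse", num_inference_steps=20).images[0]
image.save("dpmsolver_singlestep.png")
```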
diffusers/docs/source/en/api/schedulers/singlestep_dpm_solver.md/0
{ "file_path": "diffusers/docs/source/en/api/schedulers/singlestep_dpm_solver.md", "repo_id": "diffusers", "token_count": 574 }
99