---
title: Burmese Tokenizers Comparison
emoji: 🐬
colorFrom: blue
colorTo: pink
sdk: streamlit
sdk_version: 1.34.0
app_file: app.py
pinned: false
license: cc-by-nc-sa-4.0
---
# Burmese Tokenizers Comparison
This project is a tool for comparing the performance of various tokenizers on the Burmese language. It visualizes how different tokenizers process the same Burmese text, highlighting disparities in tokenization.
## Inspiration
This project is inspired by the insights from the article ["All languages are NOT created (tokenized) equal!"](https://www.artfish.ai/p/all-languages-are-not-created-tokenized) by Yennie Jun. The original article explores token length for various LLM tokenizers on many different languages and emphasizes the non-uniformity of tokenization across languages.
## Dataset
The project uses the Burmese subset of the [MASSIVE dataset](https://arxiv.org/abs/2204.08582), a parallel dataset introduced by Amazon consisting of one million realistic, parallel short texts spanning 51 languages and 18 domains. Specifically, the dev split, which contains 2,033 texts per language, is used. The dataset is available on [Hugging Face](https://huggingface.co/datasets/AmazonScience/massive) under the CC BY 4.0 license.
## Features
- Select from a wide range of tokenizers to compare their performance on Burmese text.
- Visualize the number of tokens generated by each tokenizer for a randomly sampled Burmese text.
- Explore the distribution density and variability of token counts across selected tokenizers.
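The comparison at the heart of the features above reduces to mapping each tokenizer to the number of tokens it emits for the same text. A minimal sketch of that idea, using toy stand-in tokenizers (whitespace and character splitting) purely for illustration; the app itself relies on Hugging Face tokenizers, not these stand-ins:

```python
from typing import Callable, Dict, List

def count_tokens(text: str, tokenizers: Dict[str, Callable[[str], List]]) -> Dict[str, int]:
    """Return the number of tokens each tokenizer produces for `text`."""
    return {name: len(encode(text)) for name, encode in tokenizers.items()}

# Illustrative stand-ins only; the real app would use, e.g.,
# AutoTokenizer.from_pretrained(...).encode from Hugging Face.
toy_tokenizers = {
    "whitespace": str.split,   # splits on spaces
    "character": list,         # one token per Unicode code point
}

sample = "မင်္ဂလာပါ ခင်ဗျာ"  # "Hello" in Burmese
result = count_tokens(sample, toy_tokenizers)
print(result)
```

The same `count_tokens` helper works unchanged with any callable that returns a token sequence, which is why swapping in real subword tokenizers requires no change to the comparison logic.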
## Distribution Density and Variability
The project provides two visualizations to analyze the distribution and variability of token counts across the selected tokenizers:
1. **Distribution Density Plot**: This plot shows the probability density function (PDF) of the token counts for each tokenizer. It allows you to compare the most likely token counts and the spread of the distribution across tokenizers. A higher peak indicates a more common range of token counts, while a wider curve suggests greater variability.
2. **Box Plot**: This plot displays the variability or dispersion of token counts for each tokenizer. It summarizes the distribution using the median, quartiles, and outliers. A taller box indicates greater variability in token counts, while the position of the median line provides information about the skewness of the distribution.
By examining these plots, you can gain insights into which tokenizers tend to generate more or fewer tokens, how the token counts are distributed, and the level of consistency or variability in tokenization across different tokenizers.
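The box-plot summary described above comes down to quartile statistics over per-text token counts. A minimal NumPy sketch, using synthetic Poisson-distributed counts in place of real tokenizer output (the tokenizer names and distributions here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-text token counts over 2,033 dev texts; in the app these
# would come from running each tokenizer on the sampled MASSIVE texts.
counts = {
    "tokenizer_a": rng.poisson(40, size=2033),
    "tokenizer_b": rng.poisson(120, size=2033),
}

stats = {}
for name, c in counts.items():
    # Quartiles: the box spans q1..q3, the line inside it is the median.
    q1, med, q3 = np.percentile(c, [25, 50, 75])
    stats[name] = {"median": med, "iqr": q3 - q1}
    print(f"{name}: median={med:.0f}, IQR={q3 - q1:.0f}")
```

A larger IQR corresponds to a taller box (more variability); a higher median means the tokenizer typically produces more tokens for the same text.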
## Usage
1. Select the desired tokenizers from the sidebar. You can choose to select all tokenizers or manually pick specific ones.
2. The app will display a randomly sampled Burmese text from the MASSIVE dataset.
3. The number of tokens generated by each selected tokenizer for the sampled text will be shown in a table.
4. A distribution density plot and a box plot will visualize the token count distribution and variability across the selected tokenizers.
## Acknowledgements
- The original inspiration and insights for this project come from the article ["All languages are NOT created (tokenized) equal!"](https://www.artfish.ai/p/all-languages-are-not-created-tokenized) by Yennie Jun.
- The MASSIVE dataset used in this project was introduced by Amazon and is available on [Hugging Face](https://huggingface.co/datasets/AmazonScience/massive).
- The code for the Streamlit app is adapted from the [Tokenizers Languages](https://huggingface.co/spaces/yenniejun/tokenizers-languages) Hugging Face Space by Yennie Jun.
## License
This project is licensed under the [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license.