---
license: odc-by
task_categories:
- text-generation
language:
- ar
size_categories:
- 10K<n<100K
---

# 📚 ArabicWeb24 Dataset
> **More than 39 billion tokens of high-quality Arabic web content 🌐.**

## What is ArabicWeb24?

The ArabicWeb24 dataset consists of more than 28 billion tokens of cleaned and deduplicated Arabic web data from a customized crawl. It was processed using the large-scale data processing library [datatrove](https://github.com/huggingface/datatrove).

## What is being released?

We are releasing two dataset versions:

- ***ArabicWeb24***: dataset version 1 (v1), which underwent extensive processing through all the pre-processing pipelines we had available. Check the blog for more details.
- ***ArabicWeb24-no-sentence-dedup***: dataset version 5 (v5), which went through the same processing except for the sentence deduplication step.

For more details about the preprocessing steps and data versions, you can check the blog post on our website [here](https://www.lighton.ai/lighton-blogs/arabicweb24) and our Hugging Face community blog post [here](https://huggingface.co/blog/MayFarhat/arabicweb24).

Along with the datasets, we are also sharing the [code](https://github.com/lightonai/datatrove/tree/arabic-web/arabicweb) needed to fully recreate the processing setup using the datatrove library. We are also publishing the small ablation models that were trained on the v1 and v5 datasets. You will find them in this [collection](https://huggingface.co/collections/lightonai/arabicweb24-ablation-models-66b0a7ccfe68a13a7893d74e).

## What does a sample from the ArabicWeb24 dataset look like?

### Data Samples

The following is an example sample from the dataset. It is part of the main `ArabicWeb` crawl and was crawled on `2024-02-11T00:13:40Z`.

```json
{
  "data_id": "urn:uid:bb4aad5a-38e5-55b9-247a-4f514f4cdcfc",
  "metadata": {
    "source": "https://alfaheedgroup.com/?page_id=586",
    "date": "2024-02-11T00:13:40Z",
    "labels": {
      "language": "ar",
      "language_score": 0.9973166584968567
    },
    "token_count": 184
  },
  "text": "فندق الحمرا تم تأسيس مجموعة الفهيد للتجارة في العام بهدف إنشاء كيان إستثماري تجاري عملاق يضم بين طياته العديد من الأنشطة التجارية المختلفة و مقرها بمدينة جدة بالمملكة العربية السعودية ، و كانت البداية بإنشاء قاعة … والتي سريعا ما كُلل المجهود المبذل من طاقمها بالنجاح الذي كان صداه محفزاً لتتابع سلسلة قاعات الإحتفالات فأتت رويال للإحتفالات و الفيصل للإحتفالات و من ثم كان مزيج الفخامة و العصرية في القاعة الكبرى للإحتفالات و المؤتمرات. و لكون التميز غايتنا و لما نمارسه من مسؤولة تقديم الأفضل . تم إنشاء بيت العروس لتموين الحفلات ، بتجهيزات حديثة دائماً و خبرات طهاة عالميين. و في مجال المراكز التجارية . كان موقع جازان مول المنفرد و الواقع بقلب المدينة سبباً في تميزه بجانب التصميم الفريد الذي روعي فيه توفير سهولة التسوق و تنوعه. أما في مجال إدارة و تشغيل الفنادق و المنتزهات."
}
```

### Data Fields

- `data_id` (string): A unique identifier for this sample, represented as a URN (Uniform Resource Name).
- `metadata` (object): Contains various metadata fields:
  - `source` (string): The URL of the original webpage where the text content was found.
  - `date` (string): The timestamp when this data was crawled, in ISO 8601 format (e.g., "2024-02-11T00:13:40Z").
  - `labels` (object):
    - `language` (string): The identified language of the text, represented by a language code (e.g., "ar" for Arabic).
    - `language_score` (float): A confidence score for the language identification, ranging from 0.0 to 1.0, as reported by the [fastText language classifier](https://github.com/huggingface/datatrove/blob/main/src/datatrove/pipeline/filters/language_filter.py).
  - `token_count` (integer): The number of tokens obtained when applying the `aragpt2` tokenizer to this sample.
- `text` (string): The main text content of the sample, preserved with its original formatting including line breaks.

## How to download and use ArabicWeb24?

To load the ArabicWeb24 dataset, use one of the following code snippets:

### Fully cleaned and deduplicated ArabicWeb24 (v1)

```python
from datasets import load_dataset

dataset = load_dataset('lightonai/ArabicWeb24', data_files='ArabicWeb24/**/*.arrow', split='train')
```

### ArabicWeb24 without sentence deduplication (v5)

```python
from datasets import load_dataset

dataset = load_dataset('lightonai/ArabicWeb24', data_files='ArabicWeb24-no-sentence-dedup/**/*.arrow', split='train')
```
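If you only want to peek at a few records without downloading all of the Arrow files first, you can also use the `datasets` streaming mode. The snippet below is a minimal sketch, not part of the official instructions: it assumes the Arrow files can be streamed and that each loaded row exposes the nested `metadata` fields exactly as in the JSON example above.

```python
from datasets import load_dataset

# Stream the fully processed version (v1) instead of downloading everything upfront.
# Assumption: the repository's Arrow files support streaming via the `datasets` library.
streamed = load_dataset(
    'lightonai/ArabicWeb24',
    data_files='ArabicWeb24/**/*.arrow',
    split='train',
    streaming=True,
)

# Inspect the first three samples.
# Assumption: rows mirror the nested structure shown in the Data Fields section.
for i, sample in enumerate(streamed):
    if i >= 3:
        break
    meta = sample["metadata"]
    print(meta["source"], meta["labels"]["language_score"], meta["token_count"])
    print(sample["text"][:200], "...")
```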
## Citation Information

To reference this publication in your work, please use the following BibTeX entry:

```
@misc{ArabicWeb24,
  title={ArabicWeb24: Creating a High Quality Arabic Web-only Pre-training Dataset},
  author={Farhat, May and Taghadouini, Said and Hallström, Oskar and Hajri-Gabouj, Sonja},
  organization={LightOn, INSAT},
  url={https://www.lighton.ai/lighton-blogs/arabicweb24},
  year={2024}
}
```

> May Farhat completed her work on the ArabicWeb24 project during her internship at LightOn. Throughout this period, she was under the academic supervision of Ms. Sonia Hajri-Gabouj from INSAT and the professional guidance of Mr. Oskar Hallström, her designated supervisor at LightOn. Said Taghadouini works at LightOn.