
DOLLcerberOOM: 3 x Dolly 🐑 + BLOOMz 💮

Adapter Description

This adapter was created with the PEFT library and allows the base model BigScience/BLOOMz 7B1 to be fine-tuned on the Dolly dataset (translated to Spanish, French, and German by Argilla) using the LoRA method.

Model Description

An instruction-tuned version of the BigScience Large Open-science Open-access Multilingual (BLOOM) language model.

BLOOMz 7B1 MT

Training data

This collection of datasets consists of machine-translated (and soon curated) versions of the databricks-dolly-15k dataset, originally created by Databricks, Inc. in 2023.

The goal is to give practitioners a starting point for training open-source instruction-following models beyond English. However, since the translation quality is not perfect, we highly recommend dedicating time to curating and fixing translation issues. Below we explain how to load the datasets into Argilla for data curation and fixing. Additionally, we'll keep improving the datasets made available here with the help of different communities.
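The curation workflow described above might look like the following sketch. The dataset repository id below is a placeholder (not the project's actual dataset name), and pushing the records to an Argilla instance is left as a comment since the exact Argilla API depends on your installed version:

```python
def to_curation_record(row):
    """Map a Dolly-style row (instruction/context/response) into a plain dict
    suitable for logging to an annotation tool such as Argilla for review."""
    return {
        "instruction": row.get("instruction", ""),
        "context": row.get("context", ""),
        "response": row.get("response", ""),
    }


def prepare_for_curation():
    """Sketch only: load a machine-translated Dolly split and prepare it
    for curation. Call this function explicitly to run it."""
    from datasets import load_dataset  # third-party; pip install datasets

    # Placeholder repository id -- replace with the real translated dataset.
    ds = load_dataset("argilla/dolly-translated-es", split="train")
    records = [to_curation_record(r) for r in ds]
    # From here, log `records` to your Argilla workspace for annotation
    # (see the Argilla documentation for the logging API of your version).
    return records
```

The mapping function is kept pure so translated splits for any language can be funneled through the same curation pipeline.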

We highly recommend dataset curation beyond proof-of-concept experiments.

Supported Tasks and Leaderboards

TBA

Training procedure

TBA
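While the official training procedure is still to be announced, a LoRA fine-tune with PEFT typically starts by flattening each Dolly record into a single training string and wrapping the base model with a LoRA configuration. The template and hyperparameter values below are common choices, not necessarily the ones used for this adapter:

```python
def format_example(instruction, response, context=""):
    """Flatten one Dolly-style record into a single training string.
    The template is an assumption; keep it consistent with inference."""
    if context:
        return (
            f"### Instruction:\n{instruction}\n\n"
            f"### Context:\n{context}\n\n"
            f"### Response:\n{response}"
        )
    return f"### Instruction:\n{instruction}\n\n### Response:\n{response}"


def build_lora_model():
    """Sketch only: attach a LoRA adapter to BLOOMz 7B1 MT.
    Call this function explicitly to run it (requires peft + transformers)."""
    from peft import LoraConfig, get_peft_model
    from transformers import AutoModelForCausalLM

    base = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-7b1-mt")
    # Illustrative LoRA hyperparameters (typical defaults, not confirmed values).
    config = LoraConfig(
        r=16,
        lora_alpha=32,
        target_modules=["query_key_value"],  # BLOOM's fused attention projection
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(base, config)
    model.print_trainable_parameters()
    return model
```

Only the LoRA matrices are trained; the 7B1 base weights stay frozen, which is what keeps the adapter small enough to distribute separately.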

How to use

TBA
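Until official usage instructions are published, a minimal inference sketch with PEFT might look like this. The prompt template is an assumption (match whatever template was used during training), and the base model id `bigscience/bloomz-7b1-mt` is the public Hub checkpoint of BLOOMz 7B1 MT:

```python
def build_prompt(instruction, context=""):
    """Build a Dolly-style prompt; the template is an assumption."""
    if context:
        return (
            f"### Instruction:\n{instruction}\n\n"
            f"### Context:\n{context}\n\n"
            f"### Response:\n"
        )
    return f"### Instruction:\n{instruction}\n\n### Response:\n"


def generate(instruction):
    """Sketch only: load the base model, attach this adapter, and generate.
    Call this function explicitly to run it (requires torch/transformers/peft)."""
    import torch
    from peft import PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    base_id = "bigscience/bloomz-7b1-mt"
    tokenizer = AutoTokenizer.from_pretrained(base_id)
    model = AutoModelForCausalLM.from_pretrained(
        base_id, torch_dtype=torch.float16, device_map="auto"
    )
    # Attach the LoRA adapter weights on top of the frozen base model.
    model = PeftModel.from_pretrained(model, "mrm8488/dollcerberoom")

    inputs = tokenizer(build_prompt(instruction), return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=128)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

For example, `generate("¿Quién fue Alan Turing?")` should answer in Spanish, since the adapter was tuned on the translated Dolly data.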

Citation

