
Quantization made by Richard Erkhov.

- GitHub
- Discord
- Request more models

aya-23-35B - GGUF

| Name | Quant method | Size |
|------|--------------|------|
| aya-23-35B.Q2_K.gguf | Q2_K | 12.87 GB |
| aya-23-35B.IQ3_XS.gguf | IQ3_XS | 14.05 GB |
| aya-23-35B.IQ3_S.gguf | IQ3_S | 14.77 GB |
| aya-23-35B.Q3_K_S.gguf | Q3_K_S | 9.23 GB |
| aya-23-35B.IQ3_M.gguf | IQ3_M | 15.55 GB |
| aya-23-35B.Q3_K.gguf | Q3_K | 7.16 GB |
| aya-23-35B.Q3_K_M.gguf | Q3_K_M | 2.41 GB |
| aya-23-35B.Q3_K_L.gguf | Q3_K_L | 1.44 GB |
| aya-23-35B.IQ4_XS.gguf | IQ4_XS | 0.61 GB |
| aya-23-35B.Q4_0.gguf | Q4_0 | 0.01 GB |
| aya-23-35B.IQ4_NL.gguf | IQ4_NL | 0.0 GB |
| aya-23-35B.Q4_K_S.gguf | Q4_K_S | 0.0 GB |
| aya-23-35B.Q4_K.gguf | Q4_K | 0.0 GB |
| aya-23-35B.Q4_K_M.gguf | Q4_K_M | 0.0 GB |
| aya-23-35B.Q4_1.gguf | Q4_1 | 0.0 GB |
| aya-23-35B.Q5_0.gguf | Q5_0 | 0.0 GB |
| aya-23-35B.Q5_K_S.gguf | Q5_K_S | 0.0 GB |
| aya-23-35B.Q5_K.gguf | Q5_K | 0.0 GB |
| aya-23-35B.Q5_K_M.gguf | Q5_K_M | 0.0 GB |
| aya-23-35B.Q5_1.gguf | Q5_1 | 0.0 GB |
| aya-23-35B.Q6_K.gguf | Q6_K | 0.0 GB |
| aya-23-35B.Q8_0.gguf | Q8_0 | 0.0 GB |
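
Any of these files can be run with a llama.cpp-based runtime. Below is a minimal inference sketch using the llama-cpp-python bindings; the chosen file name, context size, and sampling settings are illustrative assumptions, not recommendations from this repository.

# A minimal sketch of local GGUF inference, assuming llama-cpp-python is
# installed (pip install llama-cpp-python) and one of the files above has
# been downloaded. The file name and settings here are illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="aya-23-35B.Q4_K_M.gguf",  # any quant from the table above
    n_ctx=8192,        # Aya 23's context length
    n_gpu_layers=-1,   # offload all layers to GPU; set 0 for CPU-only
)

# create_chat_completion applies the chat template stored in the GGUF metadata
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a letter to my mother telling her how much I love her."}],
    max_tokens=100,
    temperature=0.3,
)
print(out["choices"][0]["message"]["content"])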

Original model description:

inference: false
library_name: transformers
language:
  - en
  - fr
  - de
  - es
  - it
  - pt
  - ja
  - ko
  - zh
  - ar
  - el
  - fa
  - pl
  - id
  - cs
  - he
  - hi
  - nl
  - ro
  - ru
  - tr
  - uk
  - vi
license: cc-by-nc-4.0

Model Card for Aya-23-35B

Try Aya 23

You can try out Aya 23 (35B) before downloading the weights in our hosted Hugging Face Space here.

Model Summary

Aya 23 is an open weights research release of an instruction fine-tuned model with highly advanced multilingual capabilities. Aya 23 focuses on pairing a highly performant pre-trained Command family of models with the recently released Aya Collection. The result is a powerful multilingual large language model serving 23 languages.

This model card corresponds to the 35-billion parameter version of the Aya 23 model. We also released an 8-billion parameter version, which you can find here.

We cover 23 languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese.

Developed by: Cohere For AI and Cohere

Usage

Please install transformers from the source repository, which includes the necessary changes for this model:

# pip install 'git+https://github.com/huggingface/transformers.git'
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "CohereForAI/aya-23-35B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Format message with the command-r-plus chat template
messages = [{"role": "user", "content": "Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>

gen_tokens = model.generate(
    input_ids,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.3,
)

gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
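
Loading the full-precision 35B weights requires a large amount of GPU memory. One common workaround, sketched below under the assumption that the bitsandbytes and accelerate packages are installed, is to load the Transformers weights in 4-bit; this is an illustration, not a configuration recommended by the original card.

# A minimal 4-bit loading sketch, assuming bitsandbytes and accelerate are
# installed (pip install bitsandbytes accelerate). Settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 for stability
)
model = AutoModelForCausalLM.from_pretrained(
    "CohereForAI/aya-23-35B",
    quantization_config=bnb_config,
    device_map="auto",  # spread layers across available devices
)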

Example Notebook

This notebook showcases detailed use of Aya 23 (8B), including inference and fine-tuning with QLoRA.
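
For a rough picture of what a QLoRA setup involves, the sketch below attaches a LoRA adapter to a 4-bit model with the peft library. The rank, target modules, and other hyperparameters are assumptions for illustration, not values taken from the notebook.

# A hypothetical QLoRA configuration using peft; all hyperparameters are
# illustrative, not taken from the example notebook.
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model = prepare_model_for_kbit_training(model)  # the 4-bit model loaded above
lora_config = LoraConfig(
    r=16,             # adapter rank (assumed)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable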

Model Details

Input: The model accepts text as input only.

Output: The model generates text only.

Model Architecture: Aya-23-35B is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model is fine-tuned (IFT) to follow human instructions.

Languages covered: The model is particularly optimized for multilinguality and supports the following languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese.

Context length: 8192 tokens
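
These details can be checked programmatically from the model configuration; a small sketch, assuming the config exposes the standard max_position_embeddings attribute:

from transformers import AutoConfig

# Download only the configuration (no weights) and inspect the context window.
config = AutoConfig.from_pretrained("CohereForAI/aya-23-35B")
print(config.max_position_embeddings)  # expected: 8192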

Evaluation

[Figure: multilingual benchmark results and average win rates]

Please refer to the Aya 23 technical report for further details about the base model, data, instruction tuning, and evaluation.

Model Card Contact

For errors or additional questions about details in this model card, contact [email protected].

Terms of Use

We hope that releasing the weights of a highly performant multilingual model will make community-based research efforts more accessible to researchers all over the world. This model is governed by a CC-BY-NC license with an acceptable use addendum, and it also requires adherence to C4AI's Acceptable Use Policy.

Try the model today

You can try Aya 23 in the Cohere playground here. You can also use it in our dedicated Hugging Face Space here.

Citation info

@misc{aryabumi2024aya,
      title={Aya 23: Open Weight Releases to Further Multilingual Progress}, 
      author={Viraat Aryabumi and John Dang and Dwarak Talupuru and Saurabh Dash and David Cairuz and Hangyu Lin and Bharat Venkitesh and Madeline Smith and Kelly Marchisio and Sebastian Ruder and Acyr Locatelli and Julia Kreutzer and Nick Frosst and Phil Blunsom and Marzieh Fadaee and Ahmet Üstün and Sara Hooker},
      year={2024},
      eprint={2405.15032},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}