GGUF quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


RakutenAI-7B-chat - GGUF
- Model creator: https://huggingface.co/Rakuten/
- Original model: https://huggingface.co/Rakuten/RakutenAI-7B-chat/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [RakutenAI-7B-chat.Q2_K.gguf](https://huggingface.co/RichardErkhov/RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.Q2_K.gguf) | Q2_K | 2.6GB |
| [RakutenAI-7B-chat.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.IQ3_XS.gguf) | IQ3_XS | 2.89GB |
| [RakutenAI-7B-chat.IQ3_S.gguf](https://huggingface.co/RichardErkhov/RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.IQ3_S.gguf) | IQ3_S | 3.04GB |
| [RakutenAI-7B-chat.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.Q3_K_S.gguf) | Q3_K_S | 3.02GB |
| [RakutenAI-7B-chat.IQ3_M.gguf](https://huggingface.co/RichardErkhov/RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.IQ3_M.gguf) | IQ3_M | 3.14GB |
| [RakutenAI-7B-chat.Q3_K.gguf](https://huggingface.co/RichardErkhov/RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.Q3_K.gguf) | Q3_K | 3.35GB |
| [RakutenAI-7B-chat.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.Q3_K_M.gguf) | Q3_K_M | 3.35GB |
| [RakutenAI-7B-chat.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.Q3_K_L.gguf) | Q3_K_L | 3.64GB |
| [RakutenAI-7B-chat.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.IQ4_XS.gguf) | IQ4_XS | 3.76GB |
| [RakutenAI-7B-chat.Q4_0.gguf](https://huggingface.co/RichardErkhov/RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.Q4_0.gguf) | Q4_0 | 3.91GB |
| [RakutenAI-7B-chat.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.IQ4_NL.gguf) | IQ4_NL | 3.95GB |
| [RakutenAI-7B-chat.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.Q4_K_S.gguf) | Q4_K_S | 3.94GB |
| [RakutenAI-7B-chat.Q4_K.gguf](https://huggingface.co/RichardErkhov/RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.Q4_K.gguf) | Q4_K | 4.15GB |
| [RakutenAI-7B-chat.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.Q4_K_M.gguf) | Q4_K_M | 4.15GB |
| [RakutenAI-7B-chat.Q4_1.gguf](https://huggingface.co/RichardErkhov/RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.Q4_1.gguf) | Q4_1 | 4.33GB |
| [RakutenAI-7B-chat.Q5_0.gguf](https://huggingface.co/RichardErkhov/RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.Q5_0.gguf) | Q5_0 | 4.75GB |
| [RakutenAI-7B-chat.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.Q5_K_S.gguf) | Q5_K_S | 4.75GB |
| [RakutenAI-7B-chat.Q5_K.gguf](https://huggingface.co/RichardErkhov/RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.Q5_K.gguf) | Q5_K | 4.87GB |
| [RakutenAI-7B-chat.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.Q5_K_M.gguf) | Q5_K_M | 4.87GB |
| [RakutenAI-7B-chat.Q5_1.gguf](https://huggingface.co/RichardErkhov/RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.Q5_1.gguf) | Q5_1 | 5.16GB |
| [RakutenAI-7B-chat.Q6_K.gguf](https://huggingface.co/RichardErkhov/RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.Q6_K.gguf) | Q6_K | 5.63GB |

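Each file in the table above can be downloaded individually. The snippet below is a minimal sketch (not part of the original card) that fetches one of the listed quantizations with the `huggingface_hub` library; the Q4_K_M file is chosen here only as an example.

```python
from huggingface_hub import hf_hub_download

# Download a single GGUF file from this repository; the filename must match
# one of the entries in the table above.
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/RakutenAI-7B-chat-gguf",
    filename="RakutenAI-7B-chat.Q4_K_M.gguf",
)
print(gguf_path)  # local path to the cached file
```

As a general rule of thumb, the smaller quantizations (Q2_K, Q3_K_*) use less memory but lose more quality, while the Q5/Q6 files stay closer to the original weights.
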
Original model description:
---
license: apache-2.0
---
# RakutenAI-7B-chat
## Model Description
RakutenAI-7B is a systematic initiative that brings the latest technologies to the world of Japanese LLMs. RakutenAI-7B achieves the best scores on Japanese language understanding benchmarks while maintaining competitive performance on English test sets among similar models such as OpenCalm, Elyza, Youri, Nekomata and Swallow. RakutenAI-7B leverages the Mistral model architecture and is based on the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) pre-trained checkpoint, exemplifying a successful retrofitting of the pre-trained model weights. Moreover, we extend Mistral's vocabulary from 32k to 48k to offer a better character-per-token rate for Japanese.

*The technical report can be accessed at [arXiv](https://arxiv.org/abs/2403.15484).*

*If you are looking for a foundation model, check [RakutenAI-7B](https://huggingface.co/Rakuten/RakutenAI-7B)*.

*If you are looking for an instruction-tuned model, check [RakutenAI-7B-instruct](https://huggingface.co/Rakuten/RakutenAI-7B-instruct)*.

An independent evaluation by Kamata et al. for the [Nejumi LLMリーダーボード Neo](https://wandb.ai/wandb-japan/llm-leaderboard/reports/Nejumi-LLM-Neo--Vmlldzo2MTkyMTU0#総合評価), using a weighted average of [llm-jp-eval](https://github.com/llm-jp/llm-jp-eval) and [Japanese MT-bench](https://github.com/Stability-AI/FastChat/tree/jp-stable/fastchat/llm_judge), also confirms that the instruct/chat versions of RakutenAI-7B achieve the highest performance.
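
To make the vocabulary extension described above concrete, the following illustrative snippet (not part of the original card; the sample sentence is an arbitrary Japanese example) compares how many tokens the base Mistral tokenizer and the extended RakutenAI tokenizer need for the same text:

```python
from transformers import AutoTokenizer

# Arbitrary Japanese example sentence, chosen only for illustration.
text = "楽天グループは日本語と英語に対応した大規模言語モデルを公開しました。"

for name in ["mistralai/Mistral-7B-v0.1", "Rakuten/RakutenAI-7B-chat"]:
    tok = AutoTokenizer.from_pretrained(name)
    n_tokens = len(tok.encode(text, add_special_tokens=False))
    print(f"{name}: {n_tokens} tokens for {len(text)} characters "
          f"({len(text) / n_tokens:.2f} characters per token)")
```

A higher characters-per-token ratio means the same Japanese text consumes fewer tokens of the context window.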

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "Rakuten/RakutenAI-7B-chat"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype="auto", device_map="auto")
model.eval()

requests = [
    "「馬が合う」はどう言う意味ですか",
    "How to make an authentic Spanish Omelette?",
]

system_message = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {user_input} ASSISTANT:"

for req in requests:
    input_req = system_message.format(user_input=req)
    input_ids = tokenizer.encode(input_req, return_tensors="pt").to(device=model.device)
    tokens = model.generate(
        input_ids,
        max_new_tokens=1024,
        do_sample=True,
        pad_token_id=tokenizer.eos_token_id,
    )
    out = tokenizer.decode(tokens[0][len(input_ids[0]):], skip_special_tokens=True)
    print("USER:\n" + req)
    print("ASSISTANT:\n" + out)
    print()
    print()
```
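
The GGUF files in this repository can also be run without `transformers`. Below is a minimal sketch using the `llama-cpp-python` bindings (an assumption of this card, not part of the original instructions), reusing the prompt template from the example above; the local path to the Q4_K_M file downloaded earlier is hypothetical.

```python
from llama_cpp import Llama

# Load one of the quantized files from the table above (local path assumed).
llm = Llama(model_path="RakutenAI-7B-chat.Q4_K_M.gguf", n_ctx=4096)

prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: How to make an authentic Spanish Omelette? ASSISTANT:"
)

# Stop generation when the model starts a new USER turn.
out = llm(prompt, max_tokens=512, stop=["USER:"])
print(out["choices"][0]["text"])
```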

## Model Details

* **Developed by**: [Rakuten Group, Inc.](https://ai.rakuten.com/)
* **Language(s)**: Japanese, English
* **License**: This model is licensed under [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
* **Instruction-Tuning Dataset**: We fine-tune our foundation model to create RakutenAI-7B-instruct and RakutenAI-7B-chat using a mix of open-source and internally hand-crafted datasets. We use the `train` split of the following datasets (CC BY-SA license) for the instruction-tuned and chat-tuned models:
  - [JSNLI](https://nlp.ist.i.kyoto-u.ac.jp/?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88)
  - [RTE](https://nlp.ist.i.kyoto-u.ac.jp/?Textual+Entailment+%E8%A9%95%E4%BE%A1%E3%83%87%E3%83%BC%E3%82%BF)
  - [KUCI](https://nlp.ist.i.kyoto-u.ac.jp/?KUCI)
  - [BELEBELE](https://huggingface.co/datasets/facebook/belebele)
  - [JCS](https://aclanthology.org/2022.lrec-1.317/)
  - [JNLI](https://aclanthology.org/2022.lrec-1.317/)
  - [Dolly-15K](https://huggingface.co/datasets/databricks/databricks-dolly-15k)
  - [OpenAssistant1](https://huggingface.co/datasets/OpenAssistant/oasst1)


### Limitations and Bias

The suite of RakutenAI-7B models is capable of generating human-like text on a wide range of topics. However, like all LLMs, these models have limitations and can produce biased, inaccurate, or unsafe outputs. Please exercise caution and judgement while interacting with them.

## Citation
For citing our work on the suite of RakutenAI-7B models, please use:

```
@misc{rakutengroup2024rakutenai7b,
  title={RakutenAI-7B: Extending Large Language Models for Japanese},
  author={{Rakuten Group, Inc.} and Aaron Levine and Connie Huang and Chenguang Wang and Eduardo Batista and Ewa Szymanska and Hongyi Ding and Hou Wei Chou and Jean-François Pessiot and Johanes Effendi and Justin Chiu and Kai Torben Ohlhus and Karan Chopra and Keiji Shinzato and Koji Murakami and Lee Xiong and Lei Chen and Maki Kubota and Maksim Tkachenko and Miroku Lee and Naoki Takahashi and Prathyusha Jwalapuram and Ryutaro Tatsushima and Saurabh Jain and Sunil Kumar Yadav and Ting Cai and Wei-Te Chen and Yandi Xia and Yuki Nakayama and Yutaka Higashiyama},
  year={2024},
  eprint={2403.15484},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```