SuperkingbasSKB committed on
Commit aa1dfab
1 Parent(s): 0a9615b

Update README.md

Files changed (1)
  1. README.md +4 -56
README.md CHANGED
@@ -18,32 +18,12 @@ tags:
  - medical
  - text-generation-inference
  ---
- # OpenThaiLLM-DoodNiLT: Thai & Chinese Large Language Model (Instruct)
- **OpenThaiLLM-DoodNiLT-Instruct** is a 7-billion-parameter instruct model designed for the Thai 🇹🇭 & Chinese 🇨🇳 languages.
- It demonstrates competitive performance with GPT-3.5-turbo and llama-3-typhoon-v1.5-8b-instruct, and is optimized for application use cases, Retrieval-Augmented Generation (RAG),
  constrained generation, and reasoning tasks. It is a Thai 🇹🇭 & Chinese 🇨🇳 large language model with 7 billion parameters, based on Qwen2-7B.
- ## Introduction

- Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
-
- - Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- - Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- - **Long-context support** up to 128K tokens, with generation of up to 8K tokens.
- - **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
-
- **This repo contains the base 7B Qwen2.5 model**, which has the following features:
- - Type: Causal Language Models
- - Training Stage: Pretraining
- - Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- - Number of Parameters: 7.61B
- - Number of Parameters (Non-Embedding): 6.53B
- - Number of Layers: 28
- - Number of Attention Heads (GQA): 28 for Q and 4 for KV
- - Context Length: 131,072 tokens
-
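As an aside (not part of the original README), the layer, head, and context-length figures above can be read back from the model configuration. A minimal sketch, assuming the standard `transformers` Qwen2 config fields and using `Qwen/Qwen2.5-7B` as an illustrative repo id:

```python
# Minimal sketch: read the architecture figures listed above from the model config.
# The repo id is illustrative; the field names assume the standard Qwen2 config.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Qwen/Qwen2.5-7B")

print(config.num_hidden_layers)        # number of layers (28 above)
print(config.num_attention_heads)      # query heads (28 above)
print(config.num_key_value_heads)      # key/value heads for GQA (4 above)
print(config.max_position_embeddings)  # context length (131,072 above)
```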
- **We do not recommend using base language models for conversations.** Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., on this model.
-
- For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).

  ## Requirements

@@ -61,38 +41,6 @@ We pretrained the models with a large amount of data, and we post-trained the mo

  Here is a code snippet with `apply_chat_template` that shows you how to load the tokenizer and model and how to generate content.

- ```python
- from transformers import AutoModelForCausalLM, AutoTokenizer
- device = "cuda" # the device to load the model onto
-
- model = AutoModelForCausalLM.from_pretrained(
-     "nectec/OpenThaiLLM-DoodNiLT-V1.0.0-Beta-7B-Instruct",
-     torch_dtype="auto",
-     device_map="auto"
- )
- tokenizer = AutoTokenizer.from_pretrained("nectec/OpenThaiLLM-DoodNiLT-V1.0.0-Beta-7B-Instruct")
-
- prompt = "บริษัท A มีต้นทุนคงที่ 100,000 บาท และต้นทุนผันแปรต่อหน่วย 50 บาท ขายสินค้าได้ในราคา 150 บาทต่อหน่วย ต้องขายสินค้าอย่างน้อยกี่หน่วยเพื่อให้ถึงจุดคุ้มทุน?"
- messages = [
-     {"role": "system", "content": "คุณคือ DoodNiLT Assistant จงตอบคำถามอธิบายเป็นภาษาไทย"},
-     {"role": "user", "content": prompt}
- ]
- text = tokenizer.apply_chat_template(
-     messages,
-     tokenize=False,
-     add_generation_prompt=True
- )
- model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
-
- generated_ids = model.generate(
-     model_inputs.input_ids,
-     max_new_tokens=4096,
-     repetition_penalty=1.2
- )
- response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
- print(response)
- ```
-
  ## Evaluation Performance
  | Model | ONET | IC | TGAT | TPAT-1 | A-Level | Average (ThaiExam) | M3Exam (1 shot) | MMLU |
  | :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
 
  - medical
  - text-generation-inference
  ---
+ # OpenThaiLLM-Prebuilt-7B: Thai, Chinese & English Large Language Model
+ **OpenThaiLLM-Prebuilt-7B** is a 7-billion-parameter instruct model designed for the Thai 🇹🇭 & Chinese 🇨🇳 languages.
+ It demonstrates competitive performance with llama-3-typhoon-v1.5-8b-instruct, and is optimized for application use cases, Retrieval-Augmented Generation (RAG),
  constrained generation, and reasoning tasks. It is a Thai 🇹🇭 & Chinese 🇨🇳 large language model with 7 billion parameters, based on Qwen2-7B.
 
+ For release notes, please see our [blog](https://medium.com/@superkingbasskb/openthaillm-prebuilt-release-f1b0e22be6a5).

  ## Requirements

  Here is a code snippet with `apply_chat_template` that shows you how to load the tokenizer and model and how to generate content.
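A minimal sketch of that usage, adapted from the snippet removed above (the repo id, Thai prompt, and generation settings come from that snippet and may not match the renamed Prebuilt checkpoint):

```python
# Minimal usage sketch, adapted from the snippet removed above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nectec/OpenThaiLLM-DoodNiLT-V1.0.0-Beta-7B-Instruct"  # taken from the removed snippet
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Thai prompt: "Company A has fixed costs of 100,000 baht and a variable cost of 50 baht
# per unit; it sells the product at 150 baht per unit. How many units must it sell at
# minimum to break even?"
prompt = "บริษัท A มีต้นทุนคงที่ 100,000 บาท และต้นทุนผันแปรต่อหน่วย 50 บาท ขายสินค้าได้ในราคา 150 บาทต่อหน่วย ต้องขายสินค้าอย่างน้อยกี่หน่วยเพื่อให้ถึงจุดคุ้มทุน?"
messages = [
    # System prompt: "You are DoodNiLT Assistant. Answer and explain in Thai."
    {"role": "system", "content": "คุณคือ DoodNiLT Assistant จงตอบคำถามอธิบายเป็นภาษาไทย"},
    {"role": "user", "content": prompt},
]

# Render the chat template, tokenize, and generate a Thai answer.
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(model_inputs.input_ids, max_new_tokens=4096, repetition_penalty=1.2)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0])
```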
  ## Evaluation Performance
  | Model | ONET | IC | TGAT | TPAT-1 | A-Level | Average (ThaiExam) | M3Exam (1 shot) | MMLU |
  | :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |