# Gugugo-koen-7B-V1.1
For details, see the repo: https://github.com/jwj7140/Gugugo
Base Model: Llama-2-ko-7b
Training Dataset: sharegpt_deepl_ko_translation.
I trained the model on a single A6000 GPU for 90 hours.
## Prompt Template

(In the templates below, 한국어 = Korean and 영어 = English; `</끝>` is the model's end-of-translation token.)

**KO->EN**

```
### 한국어: {sentence}</끝>
### 영어:
```

**EN->KO**

```
### 영어: {sentence}</끝>
### 한국어:
```
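The implementation code below builds EN->KO prompts; the KO->EN direction is symmetric. A minimal sketch (the helper name `make_koen_prompt` is my own, not from the repo):

```python
# Hypothetical helper (not in the original card): wraps Korean sentences
# in the KO->EN template shown above.
def make_koen_prompt(data):
    return [f"### 한국어: {line}</끝>\n### 영어:" for line in data]
```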
## Implementation Code

```python
from vllm import LLM, SamplingParams

def make_prompt(data):
    # Wrap each English sentence in the EN->KO prompt template.
    prompts = []
    for line in data:
        prompts.append(f"### 영어: {line}</끝>\n### 한국어:")
    return prompts

texts = [
    "Hello world!",
    "Nice to meet you!"
]

prompts = make_prompt(texts)
# Near-deterministic decoding; stop generation at the end token.
sampling_params = SamplingParams(temperature=0.01, stop=["</끝>"], max_tokens=700)

llm = LLM(model="squarelike/Gugugo-koen-7B-V1.1-AWQ", quantization="awq", dtype="half")
outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    print(output.outputs[0].text)
```
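vLLM returns one `RequestOutput` per prompt, in the same order as the input list, so each source sentence can be zipped with its translation; a small usage sketch:

```python
# Pair each English source sentence with its Korean translation.
# vLLM preserves the order of the input prompts in its output list.
for src, out in zip(texts, outputs):
    print(f"{src} -> {out.outputs[0].text.strip()}")
```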