Input: Models input text only.

Output: Models generate text only.

Base Model: beomi/Yi-Ko-6B

Training Dataset

Implementation Code

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "ifuseok/yi-ko-playtus-instruct-v0.2"

# Load the weights in half precision and place them automatically
# across the available devices.
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(repo)
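
As a usage illustration (not part of the original card), here is a minimal generation sketch with the objects loaded above. The example dialogue and the sampling parameters (max_new_tokens, temperature) are placeholder assumptions, not recommended settings.

# Build a prompt in the chat format documented under "Prompt Example" below.
# The dialogue content and sampling parameters are illustrative only.
prompt = (
    "<|system|>\nYou are a helpful assistant.<|endoftext|>\n"
    "<|user|>\nHello!<|endoftext|>\n"
    "<|assistant|>\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
)
# Decode only the tokens generated after the prompt.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))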

Prompt Example

<|system|>
This is the system message.<|endoftext|>
<|user|>
This is the user message.<|endoftext|>
<|assistant|>
This is the assistant message.<|endoftext|>
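
The template can also be assembled programmatically. The helper below is a hypothetical sketch (build_prompt is my own naming, not part of the model's tooling); it renders a list of (role, text) turns into the format above and leaves the final <|assistant|> tag open for the model to complete.

def build_prompt(turns):
    # turns: list of (role, text) pairs, e.g. [("system", "..."), ("user", "...")].
    # Hypothetical helper, not shipped with the model.
    prompt = ""
    for role, text in turns:
        prompt += f"<|{role}|>\n{text}<|endoftext|>\n"
    # Leave the assistant tag open so the model writes the reply.
    return prompt + "<|assistant|>\n"

prompt = build_prompt([
    ("system", "You are a helpful assistant."),
    ("user", "Introduce yourself briefly."),
])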
