Uploaded model
- Developed by: AashishKumar
- License: apache-2.0
- Finetuned from model: cognitivecomputations/dolphin-2.9.3-llama-3-8b
Load the model and tokenizer directly with Transformers:

```python
from transformers import AutoTokenizer, LlamaForCausalLM

model = LlamaForCausalLM.from_pretrained("otonomy/Cn_2_9_3_Hinglish_llama3_7b_8kAk")
tokenizer = AutoTokenizer.from_pretrained("otonomy/Cn_2_9_3_Hinglish_llama3_7b_8kAk")

prompt = "kya tumhe la la land pasand hai?"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate a short completion for the prompt
generate_ids = model.generate(inputs.input_ids, max_length=30)
print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0])
```
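The base Dolphin 2.9.3 model was trained on the ChatML prompt format, so chat-style inputs generally work better when routed through the tokenizer's chat template rather than passed as a raw string. A minimal sketch, assuming this finetune kept the ChatML template shipped with the base tokenizer:

```python
# Sketch only: assumes the tokenizer carries the ChatML chat template
# inherited from dolphin-2.9.3-llama-3-8b.
messages = [
    {"role": "user", "content": "kya tumhe la la land pasand hai?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
generate_ids = model.generate(input_ids, max_new_tokens=64)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(generate_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```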
Alternatively, use a pipeline as a high-level helper:

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="otonomy/Cn_2_9_3_Hinglish_llama3_7b_8kAk")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```
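The pipeline returns one dict per input; with chat-style message lists on recent Transformers versions, the `generated_text` field holds the conversation including the model's reply. Generation arguments can be passed on the call itself, e.g. `pipe(messages, max_new_tokens=128)`.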
Model tree for otonomy/Cn_2_9_3_Hinglish_llama3_7b_8kAk
- Base model: meta-llama/Meta-Llama-3-8B