
Prompting OLMo

#8
by herambpatil2004 - opened

Somehow, I am not able to get good responses when I run inference using the code given in the model card. Is there anything I can improve?

Ai2 org

Hey @herambpatil2004, this is a base model, not a chat-tuned one. It is expected not to do well when prompted.

soldni changed discussion status to closed

This is a base model, but you could try prompting approaches such as URIAL (https://github.com/Re-Align/URIAL?tab=readme-ov-file#content) to make it chat with you for now, @herambpatil2004
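For reference, here is a minimal sketch of what URIAL-style prompting of a base model might look like, assuming the allenai/OLMo-7B checkpoint and the standard transformers loading path from the model card; the few-shot prompt below is only an illustrative stand-in, not the official URIAL template.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/OLMo-7B"  # assumed checkpoint name; adjust to the model you are using
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# URIAL-style prompting: prepend a few in-context instruction -> answer pairs
# so the base model continues in the same "assistant" style instead of doing
# free-form text completion.
prompt = (
    "# Instruction: What is the capital of France?\n"
    "# Answer: The capital of France is Paris.\n\n"
    "# Instruction: Explain what a base language model is.\n"
    "# Answer:"
)

inputs = tokenizer(prompt, return_tensors="pt", return_token_type_ids=False)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.95)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

A chat-tuned variant would not need the in-context examples, but for a base model this kind of restyled prompt is usually the quickest way to get answer-shaped completions.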
