---
language: en
license: mit
tags:
- b1ade
datasets:
- Open-Orca/OpenOrca
- WizardLM/WizardLM_evol_instruct_V2_196k
widget:
- text: "context: \n question: \n answer: <"
  example_title: Math
- text: "context: \n question: \n answer: <"
  example_title: Sentiment
inference:
  parameters:
    max_new_tokens: 512
    top_p: 0.99
---

# B1ade

Stable revision:

```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("w601sxs/b1ade-1b")
model = AutoModelForCausalLM.from_pretrained(
    "w601sxs/b1ade-1b",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    revision="b4b0fd71589e6590089e1ec14a840ecab10894ae",
)
```
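The widget examples suggest the model expects prompts of the form `context: ... question: ... answer: <`, with generation continuing after the `<`. A minimal sketch of that template, using a hypothetical `build_prompt` helper (the field names come from the widget text; the exact spacing is an assumption, not confirmed by the card):

```python
def build_prompt(context: str, question: str) -> str:
    # Assumed prompt template, inferred from the widget examples above;
    # the model is expected to continue the text after "answer: <".
    return f"context: {context}\n question: {question}\n answer: <"

prompt = build_prompt("The sky is blue.", "What color is the sky?")
```

The resulting string can then be tokenized and passed to `model.generate`, e.g. with `max_new_tokens=512` and `top_p=0.99`, matching the card's inference parameters.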