---
license: mit
datasets:
- CreitinGameplays/r1_annotated_math-mistral
- CreitinGameplays/DeepSeek-R1-Distill-Qwen-32B_NUMINA_train_amc_aime-mistral
language:
- en
base_model:
- mistralai/Mistral-Nemo-Instruct-2407
pipeline_tag: text-generation
library_name: transformers
---
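
Mistral-Nemo-12B-R1-alpha is a fine-tune of mistralai/Mistral-Nemo-Instruct-2407 on R1-distilled reasoning datasets: it writes its chain of thought inside a `<think>` block and gives its final answer after the closing `</think>` tag. The example below needs `torch` and `transformers`, plus `bitsandbytes` for 8-bit loading on CUDA.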

Run the model:
```python
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TextStreamer,
)

def main():
    model_id = "CreitinGameplays/Mistral-Nemo-12B-R1-alpha"

    # Load the tokenizer.
    tokenizer = AutoTokenizer.from_pretrained(model_id, add_eos_token=True)

    # Load the model with bitsandbytes 8-bit quantization when CUDA is available.
    # (Passing load_in_8bit=True directly to from_pretrained is deprecated;
    # BitsAndBytesConfig is the current API.)
    if torch.cuda.is_available():
        model = AutoModelForCausalLM.from_pretrained(
            model_id,
            quantization_config=BitsAndBytesConfig(load_in_8bit=True),
            device_map="auto",
        )
        device = torch.device("cuda")
    else:
        model = AutoModelForCausalLM.from_pretrained(model_id)
        device = torch.device("cpu")

    # Generation parameters.
    generation_kwargs = {
        "max_new_tokens": 8192,
        "do_sample": True,
        "temperature": 0.8,
        "top_p": 0.9,
        "top_k": 40,
        "repetition_penalty": 1.12,
        "num_return_sequences": 1,
        "forced_eos_token_id": tokenizer.eos_token_id,
        "pad_token_id": tokenizer.eos_token_id,
    }

    print("Enter your prompt (type 'exit' to quit):")
    while True:
        user_input = input("Input> ")
        if user_input.lower().strip() in ("exit", "quit"):
            break

        # Build the prompt in the expected format: the [INST] block carries the
        # system message and user input, and the trailing <think> tag opens the
        # model's reasoning section.
        prompt = f"""
[INST]You are an AI assistant named Mistral Nemo. Always output your response after </think> block!\n\n{user_input}[/INST]<think>
"""

        # Tokenize the prompt and move it to the selected device.
        input_ids = tokenizer.encode(
            prompt, return_tensors="pt", add_special_tokens=True
        ).to(device)

        # Stream tokens to stdout as they are generated.
        streamer = TextStreamer(tokenizer)
        generation_kwargs["streamer"] = streamer

        print("\nAssistant Response:")
        # The streamer prints live; `outputs` also holds the full token ids.
        outputs = model.generate(input_ids, **generation_kwargs)

if __name__ == "__main__":
    main()
```
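
The streamer prints the reasoning and the answer as one stream; the final answer is whatever follows the closing `</think>` tag. A minimal sketch of how you might separate the two (the `extract_answer` helper below is illustrative, not part of the model's or library's API):

```python
# Illustrative helper: split the decoded generation on </think> to keep
# only the final answer. Assumes `outputs`, `input_ids`, and `tokenizer`
# come from the script above.
def extract_answer(outputs, input_ids, tokenizer):
    # Decode only the newly generated tokens, skipping the prompt.
    generated = tokenizer.decode(
        outputs[0][input_ids.shape[-1]:], skip_special_tokens=True
    )
    # Everything after the closing </think> tag is the final answer.
    if "</think>" in generated:
        return generated.split("</think>", 1)[1].strip()
    # Fall back to the full text if the tag never appeared.
    return generated.strip()
```

You could call it right after `model.generate(...)` in the loop to log or store only the answers.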