Triangle104 committed · Commit 77612cd · verified · 1 Parent(s): 995aad8

Update README.md

Files changed (1)
  1. README.md +49 -0
README.md CHANGED
@@ -18,6 +18,55 @@ tags:
  This model was converted to GGUF format from [`prithivMLmods/Bellatrix-Tiny-1B-R1`](https://huggingface.co/prithivMLmods/Bellatrix-Tiny-1B-R1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/prithivMLmods/Bellatrix-Tiny-1B-R1) for more details on the model.
 
+ ---
+ Bellatrix is a reasoning-focused model trained on DeepSeek-R1 synthetic dataset entries. The pipeline's instruction-tuned, text-only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks, and outperform many of the available open-source options. Bellatrix is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions utilize supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF).
+ ## Use with transformers
+
+ Starting with transformers >= 4.43.0, you can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function.
+
+ Make sure to update your transformers installation via `pip install --upgrade transformers`.
+
+ ```python
+ import torch
+ from transformers import pipeline
+
+ model_id = "prithivMLmods/Bellatrix-Tiny-1B-R1"
+ # Build a chat-capable text-generation pipeline on the best available device.
+ pipe = pipeline(
+     "text-generation",
+     model=model_id,
+     torch_dtype=torch.bfloat16,
+     device_map="auto",
+ )
+ messages = [
+     {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
+     {"role": "user", "content": "Who are you?"},
+ ]
+ outputs = pipe(
+     messages,
+     max_new_tokens=256,
+ )
+ # The last message in generated_text is the assistant's reply.
+ print(outputs[0]["generated_text"][-1])
+ ```
+
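+ As mentioned above, you can also use the Auto classes with the generate() function instead of the pipeline abstraction. The following is a minimal sketch of that route; it reuses the same chat, and max_new_tokens=256 is carried over from the pipeline example as an illustrative choice:
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "prithivMLmods/Bellatrix-Tiny-1B-R1"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     torch_dtype=torch.bfloat16,
+     device_map="auto",
+ )
+
+ messages = [
+     {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
+     {"role": "user", "content": "Who are you?"},
+ ]
+ # Render the chat with the model's chat template and tokenize it.
+ input_ids = tokenizer.apply_chat_template(
+     messages, add_generation_prompt=True, return_tensors="pt"
+ ).to(model.device)
+
+ outputs = model.generate(input_ids, max_new_tokens=256)
+ # Decode only the tokens generated after the prompt.
+ print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
+ ```
+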
+ Note: You can also find detailed recipes on how to use the model locally, with torch.compile(), assisted generation, quantization, and more at huggingface-llama-recipes.
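+
+ As one example of the quantized route, a 4-bit load via bitsandbytes might look like the sketch below; the configuration values are illustrative assumptions, not a recipe from the model card, and require the bitsandbytes package and a CUDA GPU:
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, BitsAndBytesConfig
+
+ # Illustrative 4-bit quantization config (assumed, not from the model card).
+ bnb_config = BitsAndBytesConfig(
+     load_in_4bit=True,
+     bnb_4bit_compute_dtype=torch.bfloat16,
+ )
+ model = AutoModelForCausalLM.from_pretrained(
+     "prithivMLmods/Bellatrix-Tiny-1B-R1",
+     quantization_config=bnb_config,
+     device_map="auto",
+ )
+ ```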
+
+ ## Intended Use
+
+ Bellatrix is designed for applications that require advanced reasoning and multilingual dialogue capabilities. It is particularly suitable for:
+
+ - Agentic Retrieval: Enabling intelligent retrieval of relevant information in a dialogue or query-response system.
+ - Summarization Tasks: Condensing large bodies of text into concise summaries for easier comprehension (see the sketch after this list).
+ - Multilingual Use Cases: Supporting conversations in multiple languages with high accuracy and coherence.
+ - Instruction-Based Applications: Following complex, context-aware instructions to generate precise outputs in a variety of scenarios.
+
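+ The summarization bullet above maps onto the same chat interface. A brief, hypothetical usage sketch (the system prompt and max_new_tokens=128 are illustrative assumptions):
+
+ ```python
+ import torch
+ from transformers import pipeline
+
+ pipe = pipeline(
+     "text-generation",
+     model="prithivMLmods/Bellatrix-Tiny-1B-R1",
+     torch_dtype=torch.bfloat16,
+     device_map="auto",
+ )
+ article = "..."  # any text you want condensed
+ messages = [
+     {"role": "system", "content": "Summarize the following text in two sentences."},
+     {"role": "user", "content": article},
+ ]
+ outputs = pipe(messages, max_new_tokens=128)
+ # The assistant message holds the summary.
+ print(outputs[0]["generated_text"][-1]["content"])
+ ```
+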
+ ## Limitations
+
+ Despite its capabilities, Bellatrix has some limitations:
+
+ - Domain Specificity: While it performs well on general tasks, its performance may degrade with highly specialized or niche datasets.
+ - Dependence on Training Data: It is only as good as the quality and diversity of its training data, which may lead to biases or inaccuracies.
+ - Computational Resources: The model’s optimized transformer architecture can be resource-intensive, requiring significant computational power for fine-tuning and inference.
+ - Language Coverage: While multilingual, some languages or dialects may have limited support or lower performance compared to widely used ones.
+ - Real-World Contexts: It may struggle with understanding nuanced or ambiguous real-world scenarios not covered during training.
+
+ ---
  ## Use with llama.cpp
  Install llama.cpp through brew (works on Mac and Linux)