---
language:
- en
---

![Zara-14b Banner](https://cdn-uploads.huggingface.co/production/uploads/66dcee3321f901b049f48002/uBEgIFXSUhUFwRaVuXYvV.png)

# Zara 14b v1.2 🧙‍♂️

Zara 14b aims to be the ideal companion for any chat involving multiple roles. It understands context remarkably well and excels in creativity and storytelling.
It is built on Lamarck 14B v0.7 and trained on a variety of datasets, along with some layer merges, to enhance its capabilities.

## Model Details 📊

- **Developed by:** Aixon Lab
- **Model type:** Causal Language Model
- **Language(s):** English (primarily); may support other languages
- **License:** Apache 2.0
- **Repository:** https://huggingface.co/aixonlab/Zara-14b-v1.2

## Quantization

- **GGUF:** https://huggingface.co/mradermacher/Zara-14b-v1.2-GGUF
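
The GGUF quants can be run locally with llama.cpp. A minimal command sketch — the quant filename below is hypothetical; substitute whichever file you download from the repository above:

```shell
# Hypothetical filename: replace with the quant you actually downloaded
# from the GGUF repository linked above.
llama-cli -m Zara-14b-v1.2.Q4_K_M.gguf --ctx-size 4096 -p "Hello, Zara!"
```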

## Model Architecture 🏗️

- **Base model:** aixonlab/Zara-14b-v1.1
- **Parameter count:** ~14 billion
- **Architecture specifics:** Transformer-based language model

## Intended Use 🎯

Zara 14b is intended as an advanced language model for a variety of natural language processing tasks, including but not limited to text generation (it excels in chat), question answering, and analysis.
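
A minimal chat-inference sketch using the standard transformers chat-template API (unofficial; the system prompt and generation settings here are illustrative assumptions, not the model's prescribed configuration):

```python
# Minimal inference sketch for Zara-14b-v1.2. Unofficial example: the
# system prompt and generation settings are illustrative assumptions.

MODEL_ID = "aixonlab/Zara-14b-v1.2"

def build_messages(system_prompt: str, user_prompt: str) -> list:
    """Assemble a conversation in the role/content format used by chat templates."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

def generate(user_prompt: str, max_new_tokens: int = 256) -> str:
    # Imported here so the pure helper above is usable without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, device_map="auto", torch_dtype="auto"
    )
    messages = build_messages(
        "You are Zara, a creative multi-role chat companion.", user_prompt
    )
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

# Example call (downloads ~28 GB of weights):
# print(generate("Write a short scene with a wizard and a dragon."))
```

On hardware with enough memory, `generate(...)` returns the model's reply; adjust `max_new_tokens` and sampling parameters to taste.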

## Ethical Considerations 🤔

As a model based on multiple sources, Zara 14b may inherit biases and limitations from its constituent models. Users should be aware of potential biases in generated content and use the model responsibly.

## Performance and Evaluation

Performance metrics and evaluation results for Zara 14b are yet to be determined. Users are encouraged to contribute their findings and benchmarks.

## Limitations and Biases

The model may exhibit biases present in its training data and constituent models. It is crucial to critically evaluate the model's outputs and use them in conjunction with human judgment.

## Additional Information

For more details on the base model and constituent models, please refer to their respective model cards and documentation.