aigeek0x0 committed
Commit b660644 · verified · Parent: bec6dac

Update README.md

Files changed (1): README.md (+2 -2)
README.md CHANGED
@@ -12,7 +12,7 @@ tags:
 <img src="https://huggingface.co/aigeek0x0/radiantloom-mixtral-8x7b-fusion/resolve/main/Radiantloom-Mixtral-8x7B-Fusion.png" alt="Radiantloom Mixtral 8X7B Fusion" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
 
 ## Radiantloom Mixtral 8X7B Fusion
-The Radiantloom Mixtral 8X7B Fusion, a large language model (LLM) developed by AI Geek Labs, features approximately 47 billion parameters and employs a Mixture of Experts (MoE) architecture. With a context length of 4096 tokens, this model is suitable for commercial use.
+The Radiantloom Mixtral 8X7B Fusion, a large language model (LLM) developed by Radiantloom AI, features approximately 47 billion parameters and employs a Mixture of Experts (MoE) architecture. With a context length of 4096 tokens, this model is suitable for commercial use.
 
 From vibes-check evaluations, the Radiantloom Mixtral 8X7B Fusion demonstrates exceptional performance in various applications like creative writing, multi-turn conversations, in-context learning through Retrieval Augmented Generation (RAG), and coding tasks. Its out-of-the-box performance already delivers impressive results, particularly in writing tasks. This model produces longer form content and provides detailed explanations of its actions. To maximize its potential, consider implementing instruction tuning and Reinforcement Learning with Human Feedback (RLHF) techniques for further refinement. Alternatively, you can utilize it in its current form.
 
@@ -156,7 +156,7 @@ We are encouraged by the initial assessments conducted using the [LLM-as-a-Judge
 ## Ethical Considerations and Limitations
 Radiantloom Mixtral 8X7B Fusion, a powerful AI language model, can produce factually incorrect output and content not suitable for work (NSFW). It should not be relied upon to provide factually accurate information and should be used with caution. Due to the limitations of its pre-trained model and the finetuning datasets, it may generate lewd, biased, or otherwise offensive content. Consequently, developers should conduct thorough safety testing prior to implementing any applications of this model.
 
-## About Radiantloom
+## About Radiantloom AI
 Radiantloom trains open-source large language models tailored for specific business tasks such as copilots, email assistance, customer support, and database operations.
 
 Learn more about Radiantloom by visiting our [website](https://radiantloom.com). Follow us on Twitter at [Radiantloom](https://twitter.com/radiantloom) to gain early access to upcoming Radiantloom large language models.