prithivMLmods committed
Update README.md
README.md CHANGED
@@ -80,12 +80,6 @@ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
 6. **Extended Content Generation**:
    Capable of generating long-form content (over 8K tokens), useful for writing reports, articles, and creating detailed instructional guides.

-7. **Interactive Role-Playing and Chatbots**:
-   Enhanced capabilities for role-playing and condition-setting, making it ideal for interactive chatbots, virtual assistants, and entertainment purposes.
-
-8. **Large-Context Tasks**:
-   With a context window of up to 128K tokens, it is ideal for analyzing or generating large documents, books, or datasets in a single session.
-
 # **Limitations**
 1. **Hardware Requirements**:
    Due to its 20B parameter size and support for long-context inputs, running the model requires significant computational resources, including high-memory GPUs or TPUs.
@@ -103,10 +97,4 @@ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
    In generating long texts, minor errors in early outputs can sometimes propagate, reducing the overall coherence and accuracy of the response.

 6. **Dependency on High-Quality Prompts**:
-   Performance may depend on the quality and specificity of the input prompt, requiring users to carefully design queries for optimal results.
-
-7. **Sensitivity to Adversarial Inputs**:
-   The model may struggle with adversarial or ambiguous inputs, leading to incorrect or irrelevant outputs.
-
-8. **Ethical and Safety Concerns**:
-   Potential misuse in generating misleading, harmful, or offensive content remains a concern, and guardrails must be implemented to ensure responsible use.
+   Performance may depend on the quality and specificity of the input prompt, requiring users to carefully design queries for optimal results.
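The removed large-context item claimed a 128K-token window for analyzing large documents in one session. In practice, documents that exceed even a large window are usually split into overlapping chunks before being fed to the model. A minimal, model-agnostic sketch of that chunking step — the 128,000-token limit and the word-list stand-in for real tokenizer output are illustrative assumptions, not taken from this model card:

```python
def chunk_for_context(tokens, max_tokens=128_000, overlap=256):
    """Split a long token list into overlapping chunks that fit a context window.

    `tokens` is any list (here, words standing in for tokenizer output);
    the limits are illustrative, not the model's real configuration.
    """
    if max_tokens <= overlap:
        raise ValueError("max_tokens must exceed overlap")
    chunks, start = [], 0
    while start < len(tokens):
        chunks.append(tokens[start:start + max_tokens])
        if start + max_tokens >= len(tokens):
            break  # last chunk reached the end of the document
        start += max_tokens - overlap  # step forward, keeping some overlap
    return chunks

# A document far larger than a single 128K window:
document = ["tok"] * 300_000
chunks = chunk_for_context(document)
print(len(chunks))  # 3 overlapping chunks cover 300K tokens
```

The overlap keeps a little shared context between consecutive chunks, which helps when the model must carry information (e.g., section headings or definitions) across chunk boundaries.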