- internlm/Agent-FLAN
---

# Dolphin 2.9.1 Llama 3 70b 🐬

Curated and trained by Eric Hartford, Lucas Atkins, Fernando Fernandes, and Cognitive Computations

Discord: https://discord.gg/cognitivecomputations

<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" />
We have retrained our Llama-3-70b fine-tune to address behavioral issues in the initial 2.9 dataset. Specifically, Systemchat was making the model *too* reliant on the system prompt, and it had an occasional quirk of over-referencing the system prompt. We also found that generation length was at times insufficient for the task, and identified Ultrachat as the culprit. To address these concerns, we removed Systemchat and Ultrachat from the dataset. It is otherwise identical to dolphin-2.9.
Our appreciation for the sponsors of Dolphin 2.9.1:
- [Crusoe Cloud](https://crusoe.ai/) - provided an excellent on-demand 8xL40S node
- [OnDemand](https://on-demand.io/) - provided inference sponsorship
This model is based on Llama-3-70b and is governed by the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](LICENSE).
The base model has 8k context, and the full-weight fine-tuning was with 4k sequence length.
It took 3 days on an 8x H100 node provided by Crusoe Cloud.
This model was trained FFT on parameters selected by [Laser Scanner](https://github.com/cognitivecomputations/laserRMT/blob/main/laser_scanner.py), using the ChatML prompt template format.

example:
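The example itself does not survive in this revision. As a minimal sketch of the ChatML format the card refers to (assuming the standard `<|im_start|>` / `<|im_end|>` special tokens; the system message below is illustrative, not taken from the card), a prompt can be assembled like so:

```python
# Sketch: assemble a ChatML-format prompt string.
# Assumes the standard ChatML tokens <|im_start|> and <|im_end|>;
# the system message is a placeholder, not from the model card.
def chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = chatml_prompt("You are Dolphin, a helpful AI assistant.", "Hello!")
print(prompt)
```

The trailing `<|im_start|>assistant\n` leaves the prompt open for the model to generate the assistant turn.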