Update README.md
README.md
---
license: unknown
---

This is a merge of [LongAlpaca-70B-lora](https://huggingface.co/Yukang/LongAlpaca-70B-lora) into royallab's [Aetheria-L2-70B](https://huggingface.co/royallab/Aetheria-L2-70B), replacing the embed and norm layers as described in the [LongLoRA repo](https://github.com/dvlab-research/LongLoRA), and removing the extra row and pad token so that the vocabularies match.
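The merge folds the low-rank LoRA update into the base weights. A minimal sketch of that arithmetic in plain PyTorch, with toy dimensions (this illustrates the standard LoRA merge formula, not the exact script used for this model):

```python
import torch

def merge_lora(W, A, B, alpha, r):
    # Fold a LoRA adapter into a base weight matrix:
    # W_merged = W + (alpha / r) * B @ A
    return W + (alpha / r) * (B @ A)

# Toy dimensions; the real 70B layers are far larger.
d_out, d_in, r = 8, 6, 2
W = torch.randn(d_out, d_in)
A = torch.randn(r, d_in)   # down-projection
B = torch.zeros(d_out, r)  # up-projection, zero-initialized

# A zero B matrix makes the adapter a no-op, so merging changes nothing.
assert torch.equal(merge_lora(W, A, B, alpha=16, r=r), W)
```

Note that LongLoRA additionally trains the embedding and norm layers, which is why the merge described above also replaces those layers rather than only applying the low-rank update.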
There is no additional fine-tuning. The resulting model does not appear to be broken; you can test whether it is truly the original model plus 32K context capability (use linear RoPE scaling with a factor of 8).
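The RoPE scaling note can be made concrete: linear scaling simply divides position indices by the factor, so a factor of 8 maps the extended 32K range back onto the 0–4K positions the base Llama-2 model was pretrained on. A small illustrative sketch (the function name is mine, not from any library):

```python
def rope_angles(position, dim, base=10000.0, scaling_factor=1.0):
    # Rotary-embedding angles for one position; linear RoPE scaling
    # divides the position index by the scaling factor.
    pos = position / scaling_factor
    return [pos / base ** (2 * i / dim) for i in range(dim // 2)]

# With factor 8, position 32760 lands on the same angles the base model
# produced for position 4095 during its 4K-context pretraining.
assert rope_angles(32760, 128, scaling_factor=8.0) == rope_angles(4095, 128)
```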