aws-kh committed
Commit a608366
1 Parent(s): 26912c3
corrected table formatting error for topic retrieval results
README.md CHANGED
@@ -26,9 +26,10 @@ Although the performance of the models on long context was fairly competitive on
 there were some limitations on its performance on longer context. Motivated by improving its performance on longer context, we finetuned the Mistral 7B model, and produced `Mistrallite`. The model managed to `significantly boost the performance of long context handling` over Mistral-7B-Instruct-v0.1. The detailed `long context evalutaion results` are as below:
 
 1. [Topic Retrieval](https://lmsys.org/blog/2023-06-29-longchat/)
+
 |Model Name|Input length| Input length | Input length| Input length| Input length|
 |----------|-------------:|-------------:|------------:|-----------:|-----------:|
-| | 2851| 5568 |8313 | 11044 | 13780
+| | 2851| 5568 |8313 | 11044 | 13780 |
 | Mistral-7B-Instruct-v0.1 | 100% | 50% | 2% | 0% | 0% |
 | MistralLite | **100%** | **100%** | **100%** | **100%** | **98%** |
 
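As context for the long-context evaluation results reported in the table above, below is a minimal, hedged sketch of probing MistralLite with a retrieval-style question over a long input using Hugging Face transformers. The model id `amazon/MistralLite` and the `<|prompter|>`/`<|assistant|>` prompt format are assumptions drawn from the model card, not from this diff, and this is an illustrative sketch rather than the evaluation harness behind the reported numbers.

```python
# Minimal sketch (assumptions noted above): load MistralLite and ask a
# retrieval-style question over a long input, similar in spirit to the
# LongChat topic-retrieval benchmark linked in the README.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "amazon/MistralLite"  # assumed Hugging Face model id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Placeholder long document; in the benchmark this would be a multi-topic
# conversation several thousand tokens long.
long_context = "..."
question = "What is the first topic we discussed?"
# Assumed prompt template; check the model card for the exact format.
prompt = f"<|prompter|>{long_context}\n\n{question}</s><|assistant|>"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=100, do_sample=False)
answer = tokenizer.decode(
    output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(answer)
```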