# Mistral-7B-Base-V1
---
license: apache-2.0
---

This model was trained on 60G of Chinese and English instruction data; it is a base model with enhanced Chinese capability. You can download the Moses25/Instruct-dataset11M dataset to train your own model with a 32K context length. My Moses25/Mistral-7B-chat-32k model was trained on the Moses25/Instruct-dataset11M dataset.
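A minimal sketch of pulling the instruction dataset named above for your own fine-tune, assuming the standard Hugging Face `datasets` library; the repo IDs come from this README, and the `split` parameter is an assumption (adjust to whatever splits the dataset actually exposes):

```python
# Repo IDs as referenced in this README.
DATASET_REPO = "Moses25/Instruct-dataset11M"
CHAT_MODEL_REPO = "Moses25/Mistral-7B-chat-32k"


def load_instruction_data(split="train"):
    """Download and return the instruction dataset from the Hugging Face Hub.

    Assumes the `datasets` library is installed (`pip install datasets`);
    imported lazily so this sketch stays importable without it.
    """
    from datasets import load_dataset

    return load_dataset(DATASET_REPO, split=split)
```

Usage: `ds = load_instruction_data()` fetches the data on first call and caches it locally; the resulting dataset can then be tokenized to 32K-token sequences for training.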