---
license: apache-2.0
---
This model was trained on 60 GB of Chinese and English instruction data. It is a base model with enhanced Chinese capability. You can download the Moses25/Instruct-dataset11M dataset to train your own model with a 32K context length. Moses25/Mistral-7B-chat-32k was trained on the Moses25/Instruct-dataset11M dataset.
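As a minimal sketch of using the dataset for your own training run, you might load it with the `datasets` library and format each record into a prompt. The field names `instruction`, `input`, and `output` below are assumptions about the dataset schema, not confirmed by this card; check the dataset page before training.

```python
# Sketch: load Moses25/Instruct-dataset11M and build training prompts.
# The field names (instruction/input/output) are assumed, not confirmed
# by this model card; verify them against the dataset card first.

def format_example(example: dict) -> str:
    """Turn one instruction record into a single training prompt string."""
    prompt = f"### Instruction:\n{example['instruction']}\n"
    if example.get("input"):
        prompt += f"### Input:\n{example['input']}\n"
    prompt += f"### Response:\n{example['output']}"
    return prompt

if __name__ == "__main__":
    from datasets import load_dataset  # pip install datasets
    ds = load_dataset("Moses25/Instruct-dataset11M", split="train")
    print(format_example(ds[0]))
```

With prompts formatted this way, any standard fine-tuning script (e.g. one built on `transformers` Trainer) can consume the dataset directly.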