
hongyin/chat-self-management-1.5b

Warning: This model's tokenizer has known problems; they will be corrected in the next version of the model (chat-informer-1b).

We are honored to introduce a lightweight Chinese-English conversation assistant designed to reduce the cost of inference. It is trained from scratch on the Llama 2 architecture, with 1.5 billion parameters and a completely new vocabulary. Training consists of two stages: (1) next-token prediction (NTP) pre-training and (2) instruction tuning, with improved data quality at both stages.

Human: Paraphrasing the sentence: I love you.
Assistant: Sure, I love you.
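
Below is a minimal inference sketch using the Hugging Face transformers library. The "Human:/Assistant:" prompt format follows the example above; the generation parameters are illustrative assumptions, not values specified by this card.

```python
# Minimal sketch: load the model from the Hub and generate a reply.
# Assumptions: the transformers library is installed, and the prompt
# format ("Human: ...\nAssistant:") matches the example in this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hongyin/chat-self-management-1.5b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Human: Paraphrasing the sentence: I love you.\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt")

# Generation settings here are illustrative; tune max_new_tokens and
# temperature for your use case.
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```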

BibTeX entry and citation info

Please cite the following paper if you find this model helpful.

@article{zhu2023metaaid,
  title={MetaAID 2.0: An Extensible Framework for Developing Metaverse Applications via Human-controllable Pre-trained Models},
  author={Zhu, Hongyin},
  journal={arXiv preprint arXiv:2302.13173},
  year={2023}
}

License: other
