---
license: cc-by-nc-4.0
language:
- en
pipeline_tag: text-generation
widget:
- text: |-
    Below is an instruction that describes a task.
    Write a response that appropriately completes the request.

    ### Instruction:
    how can I become more healthy?

    ### Response:
  example_title: example
---
# LaMini-GPT-774M
This model is one of our LaMini-LM model series presented in the paper "LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions".
This model is a fine-tuned version of gpt2-large on our LaMini-instruction dataset, which contains 2.58M samples for instruction fine-tuning. For more information about our dataset, please refer to our project repository.
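If you want to inspect the instruction data directly, it can be loaded with the `datasets` library. This is a minimal sketch; it assumes the data is hosted on the Hugging Face Hub under the id `MBZUAI/LaMini-instruction` (see the project repository for the authoritative location):

```python
# pip install -q datasets
from datasets import load_dataset

# Assumed Hub id for the LaMini instruction data; check the project repository.
dataset = load_dataset("MBZUAI/LaMini-instruction", split="train")

print(dataset.num_rows)  # roughly 2.58M instruction-response pairs
print(dataset[0])        # one sample: an instruction with its response
```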
You can view the other models of the LaMini-LM series below. Note that not all models perform equally well; models marked with ✩ achieve the best overall performance for their size/architecture. More details can be found in our paper.
| Base model | LaMini-LM series (#parameters) | | | |
|---|---|---|---|---|
| T5 | LaMini-T5-61M | LaMini-T5-223M | LaMini-T5-738M | |
| Flan-T5 | LaMini-Flan-T5-77M✩ | LaMini-Flan-T5-248M✩ | LaMini-Flan-T5-783M✩ | |
| Cerebras-GPT | LaMini-Cerebras-111M | LaMini-Cerebras-256M | LaMini-Cerebras-590M | LaMini-Cerebras-1.3B |
| GPT-2 | LaMini-GPT-124M✩ | LaMini-GPT-774M✩ | LaMini-GPT-1.5B✩ | |
| GPT-Neo | LaMini-Neo-125M | LaMini-Neo-1.3B | | |
| GPT-J | coming soon | | | |
| LLaMA | coming soon | | | |
## Use

### Intended use
We recommend using the model to respond to human instructions written in natural language. Since this decoder-only model is fine-tuned with wrapper text, we suggest using the same wrapper text to achieve the best performance. See the example on the right or the code below.
We now show you how to load and use our model with the HuggingFace `pipeline()` API.
```python
# pip install -q transformers
from transformers import pipeline

checkpoint = "MBZUAI/LaMini-GPT-774M"
generator = pipeline('text-generation', model=checkpoint)

instruction = 'Please let me know your thoughts on the given place and why you think it deserves to be visited: \n"Barcelona, Spain"'

# Wrap the instruction in the same wrapper text used during fine-tuning.
input_prompt = f"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"

generated_text = generator(input_prompt, max_length=512, do_sample=True)[0]['generated_text']

print("Response:", generated_text)
```
## Training Procedure

We initialize with gpt2-large and fine-tune it on our LaMini-instruction dataset. Its total number of parameters is 774M.
### Training Hyperparameters
## Evaluation

We conducted two sets of evaluations: automatic evaluation on downstream NLP tasks and human evaluation on user-oriented instructions. For more details, please refer to our paper.
## Limitations
More information needed
## Citation

```bibtex
@misc{lamini,
  title     = {LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions},
  author    = {Minghao Wu and Abdul Waheed and Chiyu Zhang and Muhammad Abdul-Mageed and Alham Fikri Aji},
  year      = {2023},
  publisher = {GitHub},
  journal   = {GitHub repository},
}
```