---
language:
- en
tags:
- en
- english
- gpt2
- gpt3
- text-generation
- lm
- nlp
datasets:
- cnn_dailymail
widget:
- text: "Ever noticed how plane seats appear to be getting smaller and smaller? "
inference:
parameters:
max_length: 120
do_sample: True
temperature: 0.8
---
# GPT-3 small
A pretrained GPT-3 small model, continuing the development of GPT-Neo with an architecture that purposefully mimics that of GPT-3. The model was fine-tuned on the CNN Daily Mail news dataset for text generation.
# How to use the model
~~~~
from transformers import GPT2Tokenizer, GPTNeoForCausalLM

# Load the tokenizer and the fine-tuned model from the Hub
tokenizer = GPT2Tokenizer.from_pretrained('minhtoan/gpt3-small-finetune-cnndaily-news')
model = GPTNeoForCausalLM.from_pretrained('minhtoan/gpt3-small-finetune-cnndaily-news')

# Encode the prompt and sample a continuation
text = "Ever noticed how plane seats appear to be getting smaller and smaller? "
input_ids = tokenizer.encode(text, return_tensors='pt')
max_length = 150

sample_outputs = model.generate(input_ids, do_sample=True, max_length=max_length, temperature=0.8)

for i, sample_output in enumerate(sample_outputs):
    print(">> Generated text {}\n\n{}".format(i + 1, tokenizer.decode(sample_output.tolist())))
    print('\n---')
~~~~
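The model can also be loaded through the high-level Transformers `pipeline` API. The sketch below is an illustrative alternative that assumes the same inference parameters as the widget configuration above (max_length 120, temperature 0.8).
~~~~
from transformers import pipeline

# Build a text-generation pipeline around the fine-tuned model
generator = pipeline('text-generation', model='minhtoan/gpt3-small-finetune-cnndaily-news')

# Sample a continuation with the widget's generation settings
outputs = generator(
    "Ever noticed how plane seats appear to be getting smaller and smaller? ",
    max_length=120,
    do_sample=True,
    temperature=0.8,
)
print(outputs[0]['generated_text'])
~~~~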
## Author
Phan Minh Toan