---
language: en
license: mit
tags:
- causal-lm
datasets:
- The Pile
---
|
### Quantized EleutherAI/gpt-neo-2.7B with 8-bit weights
|
This is a version of [EleutherAI's GPT-Neo 2.7B](https://huggingface.co/EleutherAI/gpt-neo-2.7B) with 2.7 billion parameters, modified so you can generate **and fine-tune the model in Colab or on an equivalent desktop GPU (e.g. a single 1080Ti)**. Inspired by [GPT-J 8bit](https://huggingface.co/hivemind/gpt-j-6B-8bit).
|
Here's how to run it: [Open in Colab](https://colab.research.google.com/drive/1ft6wQU0BhqG5PRlwgaZJv2VukKKjU4Es)
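
If you want a rough sense of what 8-bit inference looks like in code, here is a minimal sketch that loads the original `EleutherAI/gpt-neo-2.7B` checkpoint through the generic `transformers` + `bitsandbytes` 8-bit path. This illustrates the memory-saving idea only; it is not the exact quantization recipe used for this checkpoint (see the Colab above for that).

```python
# Minimal 8-bit inference sketch using the generic bitsandbytes path
# (an illustrative assumption, not this repo's exact quantization code).
# Requires: pip install transformers accelerate bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "EleutherAI/gpt-neo-2.7B"  # original checkpoint, quantized to 8-bit on load

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # let accelerate place the 8-bit weights on the available GPU
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
)

prompt = "In a shocking finding, scientists discovered"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```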
|
## Model Description
|
GPT-Neo 2.7B is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 2.7B represents the number of parameters of this particular pre-trained model.
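
As a quick sanity check on the parameter count mentioned above, you can load the base checkpoint as a standard `transformers` causal LM and count its parameters (a sketch, assuming enough CPU RAM for the full-precision weights):

```python
# Count parameters of the base GPT-Neo 2.7B checkpoint.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-2.7B")
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e9:.2f}B parameters")  # roughly 2.7B
```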
|
|
## Links

* [EleutherAI](https://www.eleuther.ai)
* [Hivemind](https://training-transformers-together.github.io/)
* [Gustave Cortal](https://twitter.com/gustavecortal)
|
## BibTeX entry and citation info

To cite this model, use

```bibtex
|
@software{gpt-neo,
  author       = {Black, Sid and
                  Gao, Leo and
                  Wang, Phil and
                  Leahy, Connor and
                  Biderman, Stella},
  title        = {{GPT-Neo: Large Scale Autoregressive Language
                   Modeling with Mesh-Tensorflow}},
  month        = mar,
  year         = 2021,
  note         = {{If you use this software, please cite it using
                   these metadata.}},
  publisher    = {Zenodo},
  version      = {1.0},
  doi          = {10.5281/zenodo.5297715},
  url          = {https://doi.org/10.5281/zenodo.5297715}
}
|
@article{gao2020pile,
  title={The Pile: An 800GB Dataset of Diverse Text for Language Modeling},
  author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and others},
  journal={arXiv preprint arXiv:2101.00027},
  year={2020}
}
```