---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
inference: false
datasets:
- the_pile_books3
---

# mpt-7b-storywriter: sharded

This is a version of the [mpt-7b-storywriter](https://huggingface.co/mosaicml/mpt-7b-storywriter) model, sharded into 2 GB chunks for low-RAM loading (e.g., Colab). The weights are stored in `bfloat16`, so in theory the model can run on CPU, though it may take a very long time.

This model was downloaded from the original repo under Apache-2.0 and is redistributed under the same license.

---

> More details/usage to be added later
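In the meantime, a minimal loading sketch (the repo id below is a placeholder, not this repository's actual id; assumes `transformers` with `accelerate` installed, since `low_cpu_mem_usage=True` loads the 2 GB shards one at a time instead of materializing the full model in RAM):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# NOTE: placeholder repo id -- replace with this repository's actual id.
MODEL_ID = "your-namespace/mpt-7b-storywriter-sharded"


def load_model(model_id: str = MODEL_ID):
    """Load the sharded checkpoint while keeping peak RAM usage low."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # weights are stored in bfloat16
        trust_remote_code=True,      # MPT uses custom modeling code
        low_cpu_mem_usage=True,      # stream shards instead of loading all at once
    )
    return tokenizer, model


if __name__ == "__main__":
    tokenizer, model = load_model()
```

CPU inference is possible in principle with the `bfloat16` weights, but expect it to be very slow; a GPU runtime (e.g., a Colab T4) is the more practical target.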