Text loading
Could you guys give us the Python code for loading 'The Great Gatsby' and printing the model's output?
It would also be nice if you specified what kind of resources (memory/CPU/GPU) one needs to run the models.
I would love a little demo as well!
I want to be able to load in TVTropes somehow so that the model knows how to apply a given trope.
Should have something for you sometime soon (days, not weeks).
Still planning on updating our library's generation tooling to support the Great Gatsby demo (assuming your hardware can handle it). Will keep this issue open until that's out.
In the meantime, you can have fun w/ StoryWriter up to 10k tokens at https://huggingface.co/spaces/mosaicml/mpt-7b-storywriter
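In the meantime, here is a minimal sketch (not the official demo) of loading mpt-7b-storywriter with Hugging Face `transformers` and feeding it a long text file. The `gatsby.txt` path and the generation settings are placeholders, and the heavy part is guarded behind `__main__` so the file stays importable; the small helper gives a rough answer to the resource question (bf16 weights alone are roughly 12-13 GB for the ~6.7B parameters, before activations and the KV cache for long contexts):

```python
def est_weight_memory_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Rough GPU memory needed just for the weights (bf16/fp16 = 2 bytes/param).
    Activations and the KV cache for long contexts add substantially more."""
    return n_params * bytes_per_param / 1024**3


def main() -> None:
    # Heavy dependencies imported here so the helper above works without them.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "mosaicml/mpt-7b-storywriter"
    tokenizer = AutoTokenizer.from_pretrained(name)
    # trust_remote_code is required: MPT ships a custom model class on the Hub.
    model = AutoModelForCausalLM.from_pretrained(
        name, torch_dtype=torch.bfloat16, trust_remote_code=True
    ).to("cuda").eval()

    # "gatsby.txt" is a placeholder path: supply your own copy of the text.
    with open("gatsby.txt", encoding="utf-8") as f:
        prompt = f.read()

    inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
    with torch.no_grad():
        out = model.generate(
            **inputs, max_new_tokens=256, do_sample=True, temperature=0.8
        )
    print(tokenizer.decode(out[0], skip_special_tokens=True))


if __name__ == "__main__":
    print(f"bf16 weights alone: ~{est_weight_memory_gb(6.7e9):.1f} GB")
    # Uncomment to run the full generation (downloads ~13 GB of weights):
    # main()
```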
Hi, may I ask for the same, that is, the 'Great Gatsby' input sample and the python code for loading it? Thank you!
If you'd like to try using mosaicml/mpt-7b-storywriter to ingest large documents, I would recommend the hf_generate.py script in our LLM Foundry: https://github.com/mosaicml/llm-foundry/tree/main/scripts/inference#interactive-generation-with-hf-models. You can pass a prompt file that will be ingested as a single document, like --prompt file::path/to/gatsby.txt. See the linked README for the code.
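To illustrate what the `file::` prefix does: `file::path` tells the script to read the prompt from disk rather than treat the argument as literal text. A minimal sketch of that convention (the helper name is mine, not from llm-foundry; see hf_generate.py for the real implementation):

```python
def resolve_prompt(arg: str) -> str:
    """Mimic the `file::` convention: `file::path` loads the file's
    contents as the prompt; anything else is used verbatim.
    Illustrative helper only, not llm-foundry's actual code."""
    prefix = "file::"
    if arg.startswith(prefix):
        with open(arg[len(prefix):], encoding="utf-8") as f:
            return f.read()
    return arg
```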