Adding `safetensors` variant of this model
#4, opened by SFconvertbot
This is an automated PR created with https://huggingface.co/spaces/safetensors/convert
This new file is equivalent to `pytorch_model.bin`, but safe in the sense that no arbitrary code can be put into it.
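For context on the safety claim: a `pytorch_model.bin` file is a Python pickle archive, and unpickling can execute arbitrary code at load time. A minimal stdlib-only sketch (the `Payload` class and its harmless string-building "attack" are hypothetical illustrations, not anything from an actual checkpoint):

```python
import pickle

# pickle serializes objects via __reduce__; the returned callable is
# invoked at *load* time, so a crafted file can run arbitrary code.
class Payload:
    def __reduce__(self):
        # Harmless here (builds a string), but this could be os.system(...).
        return (str.upper, ("arbitrary code ran",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # executes str.upper("arbitrary code ran")
print(result)  # → ARBITRARY CODE RAN
```

safetensors avoids this entirely: its files contain only a JSON header and raw tensor bytes, with no executable payloads.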
These files also happen to load much faster than their PyTorch counterparts:
https://colab.research.google.com/github/huggingface/notebooks/blob/main/safetensors_doc/en/speed.ipynb
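For intuition on why loading is fast: the safetensors format is just an 8-byte little-endian header length, a JSON header mapping tensor names to dtype/shape/byte offsets, and then a flat byte buffer, so a loader can read all metadata and memory-map tensor data without deserializing anything. A stdlib-only sketch that writes and re-parses a toy one-tensor file (the tensor name `w` and its values are made up for illustration; real files are produced by the `safetensors` library):

```python
import json
import struct

# Build a minimal .safetensors-style payload: one float32 "tensor" of shape (2,).
data = struct.pack("<2f", 1.0, 2.0)  # 8 bytes of raw tensor data
header = {"w": {"dtype": "F32", "shape": [2], "data_offsets": [0, len(data)]}}
header_bytes = json.dumps(header).encode("utf-8")
blob = struct.pack("<Q", len(header_bytes)) + header_bytes + data

# Parsing needs only the first 8 bytes plus the JSON header; tensor bytes
# can then be sliced (or mmap'd) lazily, which is why loading is fast.
(n,) = struct.unpack("<Q", blob[:8])
meta = json.loads(blob[8:8 + n])
start, end = meta["w"]["data_offsets"]  # offsets are relative to the byte buffer
values = struct.unpack("<2f", blob[8 + n + start:8 + n + end])
print(meta["w"]["shape"], values)  # → [2] (1.0, 2.0)
```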
The widgets on your model page will run using this file even before the PR is merged, confirming that it actually works.
If you find any issues, please report them here: https://huggingface.co/spaces/safetensors/convert/discussions
Feel free to ignore this PR.
I tested this PR with candle (requires https://github.com/huggingface/candle/pull/903):
```shell
$ cargo run --release --example t5 -- --model-id "grammarly/coedit-large" --revision "refs/pr/4" --prompt "Fix the grammar: When I grow up, I start to understand what he said is quite right." --temperature 0 --decode
   Compiling candle-transformers v0.2.3 (/Users/jbochi/dev/github/huggingface/candle/candle-transformers)
   Compiling candle-examples v0.2.3 (/Users/jbochi/dev/github/huggingface/candle/candle-examples)
    Finished release [optimized] target(s) in 6.58s
     Running `target/release/examples/t5 --model-id grammarly/coedit-large --revision refs/pr/4 --prompt 'Fix the grammar: When I grow up, I start to understand what he said is quite right.' --temperature 0 --decode`
Running on CPU, to run on GPU, build this example with `--features cuda`
When I grow up, I will start to understand what he said is quite right.
19 tokens generated (4.72 token/s)
```
machineteacher changed pull request status to merged