abhinavkulkarni committed on
Commit
b84a594
1 Parent(s): 4a02fc6

Update README.md

Files changed (1): README.md (+17 -0)
README.md CHANGED
@@ -83,6 +83,23 @@ output = model.generate(
  print(tokenizer.decode(output[0]))
  ```

+ ## Evaluation
+ [MPT-7B-Instruct](https://huggingface.co/mosaicml/mpt-7b-instruct)
+
+ | Task     | Version | Metric          |   Value | Stderr |
+ |----------|--------:|-----------------|--------:|--------|
+ | wikitext |       1 | word_perplexity | 10.8864 |        |
+ |          |         | byte_perplexity |  1.5628 |        |
+ |          |         | bits_per_byte   |  0.6441 |        |
+
+ [MPT-7B-Instruct (4-bit 128-group AWQ)](https://huggingface.co/abhinavkulkarni/mpt-7b-instruct-w4-g128-awq)
+
+ | Task     | Version | Metric          |   Value | Stderr |
+ |----------|--------:|-----------------|--------:|--------|
+ | wikitext |       1 | word_perplexity | 11.2696 |        |
+ |          |         | byte_perplexity |  1.5729 |        |
+ |          |         | bits_per_byte   |  0.6535 |        |
+
  ## Acknowledgements

  The MPT model was originally finetuned by Sam Havens and the MosaicML NLP team. Please cite this model using the following format:
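The `byte_perplexity` and `bits_per_byte` values in the tables above are redundant encodings of the same quantity, related by `byte_perplexity = 2 ** bits_per_byte` (this is how the two metrics are conventionally defined in lm-evaluation-harness-style wikitext reporting; treat that attribution as an assumption). A minimal sketch verifying that the reported numbers are internally consistent:

```python
# Check that byte_perplexity == 2 ** bits_per_byte for both rows,
# using the values reported in the Evaluation tables above.
rows = {
    "fp16": {"byte_perplexity": 1.5628, "bits_per_byte": 0.6441},
    "awq-4bit-g128": {"byte_perplexity": 1.5729, "bits_per_byte": 0.6535},
}

for name, r in rows.items():
    derived = 2 ** r["bits_per_byte"]
    # Agreement to ~3 decimal places, given the rounding in the table.
    assert abs(derived - r["byte_perplexity"]) < 1e-3, (name, derived)
```

The check shows the 4-bit AWQ model's quality loss is small: bits_per_byte rises only from 0.6441 to 0.6535 relative to the fp16 baseline.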