Ranjanunicode committed on
Commit
4f19d31
1 Parent(s): cd59429

Update README.md

Files changed (1)
  1. README.md +5 -22
README.md CHANGED
@@ -41,33 +41,16 @@ Output Models generate text only.
  - Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
  - To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the INST and <<SYS>> tags, BOS and EOS tokens, and the whitespace and line breaks in between (we recommend calling strip() on inputs to avoid double spaces). See our reference code on GitHub for details: chat_completion.
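As a quick illustration of the tag layout described in that note — this is a minimal sketch, not Meta's reference chat_completion code, and `build_prompt` is a hypothetical helper name:

```python
# Sketch of the Llama 2 chat formatting: INST and <<SYS>> tags with
# strip() applied to inputs. The tokenizer normally supplies the BOS
# (<s>) and EOS (</s>) tokens, so they are omitted here.

B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

def build_prompt(system_prompt: str, user_message: str) -> str:
    """Wrap a system prompt and user message in Llama 2 chat tags."""
    # strip() avoids the double spaces the note warns about
    system = B_SYS + system_prompt.strip() + E_SYS
    return f"{B_INST} {system}{user_message.strip()} {E_INST}"

prompt = build_prompt("You are a helpful assistant.", "Hi there!")
print(prompt)
```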
- ### Out-of-Scope Use
-
- - Out-of-scope uses: use in any manner that violates applicable laws or regulations (including trade compliance laws); use in languages other than English; use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
-
- ## Evaluation
-
- - In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
-
- | Model | Size | Code | Commonsense Reasoning | World Knowledge | Reading Comprehension | Math | MMLU | BBH | AGI Eval |
- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- |
- | Llama 1 | 7B | 14.1 | 60.8 | 46.2 | 58.5 | 6.95 | 35.1 | 30.3 | 23.9 |
- | Llama 1 | 13B | 18.9 | 66.1 | 52.6 | 62.3 | 10.9 | 46.9 | 37.0 | 33.9 |
- | Llama 1 | 33B | 26.0 | 70.0 | 58.4 | 67.6 | 21.4 | 57.8 | 39.8 | 41.7 |
- | Llama 1 | 65B | 30.7 | 70.7 | 60.5 | 68.6 | 30.8 | 63.4 | 43.5 | 47.6 |
- | Llama 2 | 7B | 16.8 | 63.9 | 48.9 | 61.3 | 14.6 | 45.3 | 32.6 | 29.3 |
- | Llama 2 | 13B | 24.5 | 66.9 | 55.4 | 65.8 | 28.7 | 54.8 | 39.4 | 39.1 |
- | Llama 2 | 70B | 37.5 | 71.9 | 63.6 | 69.4 | 35.2 | 68.9 | 51.2 | 54.2 |
  ### Compute Infrastructure
  - Google Colab Tesla T4 GPU.
 
  - Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
  - To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the INST and <<SYS>> tags, BOS and EOS tokens, and the whitespace and line breaks in between (we recommend calling strip() on inputs to avoid double spaces). See our reference code on GitHub for details: chat_completion.

+ ```
+ print("Hi there!")
+ ```

+ ### Out-of-Scope Use
+
+ - Out-of-scope uses: use in any manner that violates applicable laws or regulations (including trade compliance laws); use in languages other than English; use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
  ### Compute Infrastructure
  - Google Colab Tesla T4 GPU.