doberst committed
Commit f7d35e5
1 Parent(s): 236ad01

Update README.md

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -6,7 +6,7 @@ license: apache-2.0
 
  <!-- Provide a quick summary of what the model is/does. -->
 
- bling-sheared-llama-1.3b-0.1 is part of the BLING ("Best Little Instruction-following No-GPU-required") model series, instruct trained on top of a falcon-rw-1b base model.
+ bling-sheared-llama-1.3b-0.1 is part of the BLING ("Best Little Instruction-following No-GPU-required") model series, instruct trained on top of a Sheared-LLaMA-1.3B base model.
 
  BLING models are fine-tuned with distilled high-quality custom instruct datasets, targeted at a specific subset of instruct tasks with
  the objective of providing a high-quality Instruct model that is 'inference-ready' on a CPU laptop even
@@ -17,7 +17,7 @@ without using any advanced quantization optimizations.
 
  <!-- Provide a longer summary of what this model is. -->
 
  - **Developed by:** llmware
- - **Model type:** GPTNeoX instruct-trained decoder
+ - **Model type:** Instruct-trained decoder
  - **Language(s) (NLP):** English
  - **License:** Apache 2.0
  - **Finetuned from model [optional]:** princeton-nlp/Sheared-LLaMA-1.3B
@@ -53,7 +53,7 @@ without the need for a lot of complex instruction verbiage - provide a text pass
 
  <!-- This section is meant to convey both technical and sociotechnical limitations. -->
 
- BLING has not been designed for end consumer-oriented applications, and there has not been any focus in training on safeguards to mitigate potential bias. We would strongly discourage any use of BLING for any 'chatbot' use case.
+ Any model can provide inaccurate or incomplete information, and should be used in conjunction with appropriate safeguards and fact-checking mechanisms.
 
 
  ## How to Get Started with the Model
@@ -67,7 +67,7 @@ model = AutoModelForCausalLM.from_pretrained("llmware/bling-sheared-llama-1.3b-0
 
  The BLING model was fine-tuned with a simple "\<human> and \<bot> wrapper", so to get the best results, wrap inference entries as:
 
- full_prompt = "\<human>\: " + my_prompt + "\n" + "\<bot>\: "
+ full_prompt = "\<human>\: " + my_prompt + "\n" + "\<bot>\:"
 
  The BLING model was fine-tuned with closed-context samples, which assume generally that the prompt consists of two sub-parts:
 
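
The quick-start and prompt-wrapping hunks above show only fragments of the usage instructions. Below is a minimal sketch of how they fit together, assuming the full repo id is `llmware/bling-sheared-llama-1.3b-0.1` (the hunk header truncates it) and that a matching `AutoTokenizer` is loaded alongside the `AutoModelForCausalLM` call shown in the card; the example passage, question, and generation settings are illustrative placeholders, not values taken from the model card.

```python
# Sketch of closed-context inference with the <human>/<bot> wrapper described
# in the README diff above. Repo id, sample text, and generation settings are
# assumptions for illustration only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "llmware/bling-sheared-llama-1.3b-0.1"  # assumed full repo id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Closed-context usage: the prompt carries a text passage plus a question about it.
context = "The quarterly report shows revenue of $12.5 million, up 8% year over year."
question = "What was the revenue in the quarter?"
my_prompt = context + "\n" + question

# Wrap the entry exactly as the updated card specifies
# (note: no trailing space after "<bot>:").
full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"

inputs = tokenizer(full_prompt, return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=100)

# Decode only the newly generated tokens, skipping the prompt itself.
answer = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(answer.strip())
```

Per the closed-context design the card describes, the passage comes before the question inside `my_prompt`, and the final "\<bot>\:" marks where the model is expected to begin its answer.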