TheBloke committed
Commit 0f0b8c3
1 parent: 64991ef

Update README.md

Files changed (1):
  1. README.md (+29 −2)

README.md CHANGED
@@ -1,6 +1,13 @@
 ---
 inference: false
 license: other
+datasets:
+- bavest/fin-llama-dataset
+tags:
+- finance
+- llm
+- llama
+- trading
 ---
 
 <!-- header start -->
@@ -29,6 +36,26 @@ It is the result of quantising to 4bit using [AutoGPTQ](https://github.com/PanQi
 * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/fin-llama-33B-GGML)
 * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/bavest/fin-llama-33b-merged)
 
+## Prompt template
+
+Standard Alpaca prompting:
+
+```
+A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's question.
+### Instruction: prompt
+
+### Response:
+```
+or
+```
+A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's question.
+### Instruction: prompt
+
+### Input:
+
+### Response:
+```
+
 ## How to easily download and use this model in text-generation-webui
 
 Please make sure you're using the latest version of text-generation-webui
@@ -73,8 +100,8 @@ model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
 quantize_config=None)
 
 prompt = "Tell me about AI"
-prompt_template=f'''### Human: {prompt}
-### Assistant:'''
+prompt_template=f'''### Instruction: {prompt}
+### Response:'''
 
 print("\n\n*** Generate:")
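The two Alpaca-style variants added in the "Prompt template" section can be sketched as a small Python helper. `build_prompt` and its parameter names are hypothetical, not part of the README; only the system line and the `### Instruction:` / `### Input:` / `### Response:` markers are copied from the section above.

```python
from typing import Optional


def build_prompt(instruction: str, extra_input: Optional[str] = None) -> str:
    """Render the Alpaca-style template from the README's Prompt template section.

    The two variants differ only in whether an "### Input:" section is present.
    """
    system = (
        "A chat between a curious human and an artificial intelligence "
        "assistant. The assistant gives helpful, detailed, and polite "
        "answers to the user's question."
    )
    if extra_input is None:
        # First variant: Instruction followed directly by Response.
        return f"{system}\n### Instruction: {instruction}\n\n### Response:"
    # Second variant: an Input section sits between Instruction and Response.
    return (
        f"{system}\n### Instruction: {instruction}\n\n"
        f"### Input: {extra_input}\n\n### Response:"
    )


print(build_prompt("Tell me about AI"))
```

The second variant would be used when the instruction needs supporting context, e.g. a document to summarise.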
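The final hunk swaps the `### Human:` / `### Assistant:` markers in the README's AutoGPTQ example for `### Instruction:` / `### Response:`. As a quick sanity check, the corrected f-string from that example renders as below; the `prompt` value comes from the README, the `print` is added here for illustration.

```python
prompt = "Tell me about AI"
# Corrected template string from the updated README example.
prompt_template = f'''### Instruction: {prompt}
### Response:'''
print(prompt_template)
```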