doberst committed
Commit e6241ff · verified · 1 Parent(s): f4d28cb

Update README.md

Files changed (1)
  1. README.md +16 -9
README.md CHANGED
@@ -7,22 +7,22 @@ inference: false
 
 <!-- Provide a quick summary of what the model is/does. -->
 
- **slim-sentiment** is part of the SLIM ("**S**tructured **L**anguage **I**nstruction **M**odel") model series, consisting of small, specialized decoder-based models, fine-tuned for function-calling.
+ **slim-nli** is part of the SLIM ("**S**tructured **L**anguage **I**nstruction **M**odel") model series, consisting of small, specialized decoder-based models, fine-tuned for function-calling.
 
- slim-sentiment has been fine-tuned for **sentiment analysis** function calls, generating output consisting of a python dictionary corresponding to specified keys, e.g.:
+ slim-nli has been fine-tuned for **natural language inference (nli)** function calls, generating output consisting of a python dictionary corresponding to specified keys, e.g.:
 
- &nbsp;&nbsp;&nbsp;&nbsp;`{"sentiment": ["positive"]}`
+ &nbsp;&nbsp;&nbsp;&nbsp;`{"evidence": ["contradicts"]}`
 
 
 SLIM models are designed to provide a flexible natural language generative model that can be used as part of a multi-step, multi-model LLM-based automation workflow.
 
- Each slim model has a 'quantized tool' version, e.g., [**'slim-sentiment-tool'**](https://huggingface.co/llmware/slim-sentiment-tool).
+ Each slim model has a 'quantized tool' version, e.g., [**'slim-nli-tool'**](https://huggingface.co/llmware/slim-nli-tool).
 
 
 ## Prompt format:
 
 `function = "classify"`
- `params = "sentiment"`
+ `params = "evidence"`
 `prompt = "<human> " + {text} + "\n" + `
 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;&nbsp; &nbsp; &nbsp; &nbsp;`"<{function}> " + {params} + "</{function}>" + "\n<bot>:"`
 
 
@@ -30,13 +30,20 @@ Each slim model has a 'quantized tool' version, e.g., [**'slim-sentiment-tool'*
 <details>
 <summary>Transformers Script </summary>
 
- model = AutoModelForCausalLM.from_pretrained("llmware/slim-sentiment")
- tokenizer = AutoTokenizer.from_pretrained("llmware/slim-sentiment")
+ model = AutoModelForCausalLM.from_pretrained("llmware/slim-nli")
+ tokenizer = AutoTokenizer.from_pretrained("llmware/slim-nli")
 
 function = "classify"
- params = "sentiment"
+ params = "evidence"
 
- text = "The stock market declined yesterday as investors worried increasingly about the slowing economy."
+ # expects two statements - the first is evidence, and the second is a conclusion
+
+ text1 = "The stock market declined yesterday as investors worried increasingly about the slowing economy."
+ text2 = "Investors are positive about the market."
+
+ # the two statements are concatenated with optional/helpful "Evidence: " and "Conclusion: " added
+
+ text = "Evidence: " + text1 + "\n" + "Conclusion: " + text2
 
 prompt = "<human>: " + text + "\n" + f"<{function}> {params} </{function}>\n<bot>:"
 
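For readers who want to try the updated script end to end, here is a minimal sketch of how the snippet in the second hunk might be completed. It assumes the standard Hugging Face `transformers` generate/decode API; the `max_new_tokens` value and greedy decoding are illustrative choices, not settings published for slim-nli.

```python
# Sketch (not from the model card): completing the Transformers snippet above.
# Generation settings here are illustrative assumptions.

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("llmware/slim-nli")
tokenizer = AutoTokenizer.from_pretrained("llmware/slim-nli")

function = "classify"
params = "evidence"

# expects two statements - the first is evidence, and the second is a conclusion
text1 = "The stock market declined yesterday as investors worried increasingly about the slowing economy."
text2 = "Investors are positive about the market."
text = "Evidence: " + text1 + "\n" + "Conclusion: " + text2

prompt = "<human>: " + text + "\n" + f"<{function}> {params} </{function}>\n<bot>:"

# tokenize, generate, and decode only the tokens produced after the prompt
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=100, do_sample=False)
generated = output_ids[0][inputs["input_ids"].shape[1]:]
response = tokenizer.decode(generated, skip_special_tokens=True)
print(response)
```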
 
 
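The card describes the output as a python dictionary keyed by `params`. One way to recover that dictionary programmatically is sketched below; the use of `ast.literal_eval` and the fallback behavior are assumptions for illustration, not a documented llmware API.

```python
# Sketch (assumption, not from the model card): turning the decoded string
# into a Python dict, e.g. '{"evidence": ["contradicts"]}'.

import ast

def parse_function_call_output(response: str) -> dict:
    """Best-effort parse of a SLIM-style dictionary response."""
    try:
        result = ast.literal_eval(response.strip())
        return result if isinstance(result, dict) else {}
    except (ValueError, SyntaxError):
        # fall back to an empty dict if the model emitted malformed output
        return {}

print(parse_function_call_output('{"evidence": ["contradicts"]}'))
# {'evidence': ['contradicts']}
```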