doberst113080 committed on
Commit b00713f · 1 Parent(s): bb7ca2c

Update README.md

Files changed (1)
  1. README.md +21 -43
README.md CHANGED
@@ -1,59 +1,37 @@
  ---
- license: other
- license_link: https://huggingface.co/01-ai/Yi-6B/blob/main/LICENSE
- license_name: yi-license
- model_creator: 01-ai
- model_name: Yi 6B
- model_type: yi
  ---

  # Model Card for Model ID

  <!-- Provide a quick summary of what the model is/does. -->

- dragon-yi-6b-v0 is part of the dRAGon ("Delivering RAG On ...") model series, RAG-instruct trained on top of a Yi-6B base model.
-
- DRAGON models have been fine-tuned with the specific objective of fact-based question-answering over complex business and legal documents, with an emphasis on reducing hallucinations and providing short, clear answers for workflow automation.
-

  ### Benchmark Tests

- Evaluated against the benchmark test: [RAG-Instruct-Benchmark-Tester](https://www.huggingface.co/datasets/llmware/rag_instruct_benchmark_tester)
- Average of 2 test runs, with 1 point for a correct answer, 0.5 points for a partially correct or blank / "not found" (NF) answer, 0.0 points for an incorrect answer, and -1 point for a hallucination.

- --**Accuracy Score**: **99.5** correct out of 100
- --Not Found Classification: 90.0%
- --Boolean: 87.5%
- --Math/Logic: 77.5%
- --Complex Questions (1-5): 4 (Above Average)
- --Summarization Quality (1-5): 4 (Above Average)
- --Hallucinations: No hallucinations observed in test runs.
-
- For test run results (and a good indicator of target use cases), please see the files ("core_rag_test" and "answer_sheet") in this repo.
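
As a minimal sketch of how this rubric could be tallied (the label names are hypothetical, not from the test harness):

    # Hypothetical tally of the rubric above: 1 / 0.5 / 0 / -1 per question, averaged over 2 runs
    POINTS = {"correct": 1.0, "partial_or_not_found": 0.5, "incorrect": 0.0, "hallucination": -1.0}

    def run_score(labels):
        # labels: one rubric label per test question
        return sum(POINTS[label] for label in labels)

    def average_score(run_1_labels, run_2_labels):
        return (run_score(run_1_labels) + run_score(run_2_labels)) / 2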
 
  ### Model Description

  <!-- Provide a longer summary of what this model is. -->

  - **Developed by:** llmware
- - **Model type:** Yi
  - **Language(s) (NLP):** English
- - **License:** Yi License [Link](https://huggingface.co/01-ai/Yi-6B/blob/main/LICENSE)
- - **Finetuned from model:** Yi-6B

  ### Direct Use

  <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

- DRAGON is designed for enterprise automation use cases, especially in knowledge-intensive industries such as financial services
- and legal and regulatory industries with complex information sources.
-
- DRAGON models have been trained for common RAG scenarios, specifically question-answering, key-value extraction, and basic summarization as the core instruction types,
- without the need for a lot of complex instruction verbiage: provide a text passage context, ask questions, and get clear, fact-based responses.
-
- This model is licensed according to the terms of the license of the base model, Yi-6B, at this [link](https://huggingface.co/01-ai/Yi-6B/blob/main/LICENSE).
-

  ## Bias, Risks, and Limitations

@@ -64,27 +42,27 @@ Any model can provide inaccurate or incomplete information, and should be used i

  ## How to Get Started with the Model

- The fastest way to get started with DRAGON is through direct import in transformers:

  from transformers import AutoTokenizer, AutoModelForCausalLM
- tokenizer = AutoTokenizer.from_pretrained("llmware/dragon-yi-6b-v0")
- model = AutoModelForCausalLM.from_pretrained("llmware/dragon-yi-6b-v0")

- Please refer to the generation_test.py files in the Files repository, which include 200 samples and a script to test the model. The **generation_test_llmware_script.py** script includes built-in llmware capabilities for fact-checking, as well as easy integration with document parsing and retrieval, so you can swap out the test set for a RAG workflow over business documents.

- The DRAGON model was fine-tuned with a simple "\<human> and \<bot>" wrapper, so to get the best results, wrap inference entries as:

  full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"

- The DRAGON model was fine-tuned with closed-context samples, which generally assume that the prompt consists of two sub-parts:

- 1. Text Passage Context, and
  2. Specific question or instruction based on the text passage

- To get the best results, package "my_prompt" as follows:
-
- my_prompt = {{text_passage}} + "\n" + {{question/instruction}}

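A minimal end-to-end sketch of this packaging and wrapping pattern (assuming the llmware/dragon-yi-6b-v0 repo id; the passage and question are hypothetical):

    # Minimal sketch: package a passage + question, wrap with <human>/<bot>, and generate
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tokenizer = AutoTokenizer.from_pretrained("llmware/dragon-yi-6b-v0")
    model = AutoModelForCausalLM.from_pretrained("llmware/dragon-yi-6b-v0")

    text_passage = "The lease term is 36 months, beginning on January 1, 2024."
    question = "What is the length of the lease term?"
    my_prompt = text_passage + "\n" + question
    full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"

    inputs = tokenizer(full_prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=100, do_sample=False)
    # Decode only the newly generated tokens (the answer after "<bot>:")
    answer = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
    print(answer.strip())
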
  If you are using a HuggingFace generation script:
 
@@ -111,4 +89,4 @@ If you are using a HuggingFace generation script:

  ## Model Card Contact

- Darren Oberst & llmware team
 
  ---
+ license: apache-2.0
  ---
 
  # Model Card for Model ID

  <!-- Provide a quick summary of what the model is/does. -->

+ slim-sql-1b-v0 is part of the slim model series.

  ### Benchmark Tests

+ Evaluated against 100 test SQL queries, each under 100 characters: 1 point given for an exact string match, 0 points given for an incorrect answer.

+ --**Accuracy Score**: **86** correct out of 100
+ - 8 incorrect answers attributed to query structure ordering or naming convention differences
+ - 6 incorrect answers attributed to incorrect variable selection or aggregate function use

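As a minimal sketch of the exact-string-match scoring described above (function and variable names are hypothetical):

    def exact_match_score(predicted_sql, gold_sql):
        # 1 point for an exact string match, 0 points otherwise
        return sum(1 for p, g in zip(predicted_sql, gold_sql) if p.strip() == g.strip())

    # e.g., exact_match_score(model_outputs, reference_queries) over the 100 test queries
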
  ### Model Description

  <!-- Provide a longer summary of what this model is. -->

  - **Developed by:** llmware
+ - **Model type:** TinyLlama
  - **Language(s) (NLP):** English
+ - **License:** apache-2.0
+ - **Finetuned from model:** [TinyLlama-1.1b - 2.5T checkpoint](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T)

  ### Direct Use

  <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

+ slim is designed for...

  ## Bias, Risks, and Limitations

  ## How to Get Started with the Model

+ The fastest way to get started with slim is through direct import in transformers:

  from transformers import AutoTokenizer, AutoModelForCausalLM
+ tokenizer = AutoTokenizer.from_pretrained("llmware/slim-sql-1b-v0")
+ model = AutoModelForCausalLM.from_pretrained("llmware/slim-sql-1b-v0")

+ Please refer to the generation_test.py file in the Files repository, which includes 100 samples and a script to test the model.

+ The slim-sql model was fine-tuned with a simple "\<human> and \<bot>" wrapper, so to get the best results, wrap inference entries as:

  full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"

+ The prompt consists of two sub-parts:

+ 1. Table creation prompt providing the table name, variables, and variable types.
  2. Specific question or instruction based on the text passage

+ Training sample examples:
+ {"text": "<human>: CREATE TABLE table_name_8 ( partner VARCHAR, date VARCHAR )\nName the partner for may 2, 1993\n<bot>:SELECT partner FROM table_name_8 WHERE date = \"may 2, 1993\"</s>"}
+ {"text": "<human>: CREATE TABLE table_name_97 ( Id VARCHAR )\nName the 2012 when 2011 is qf\n<bot>:SELECT 2012 FROM table_name_97 WHERE 2011 = \"qf\"</s>"}
 
+ Test samples are provided in this repo ("sql-slim-1b_test_questions").

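A minimal sketch of this prompt pattern end to end (assuming the llmware/slim-sql-1b-v0 repo id; the table schema and question are hypothetical):

    # Minimal sketch: table-creation prompt + question, wrapped with <human>/<bot>
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tokenizer = AutoTokenizer.from_pretrained("llmware/slim-sql-1b-v0")
    model = AutoModelForCausalLM.from_pretrained("llmware/slim-sql-1b-v0")

    table_create = "CREATE TABLE customers ( name VARCHAR, balance INTEGER )"
    question = "Name the customer with the highest balance"
    full_prompt = "<human>: " + table_create + "\n" + question + "\n" + "<bot>:"

    inputs = tokenizer(full_prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=100, do_sample=False)
    # Decode only the newly generated tokens (the SQL after "<bot>:")
    sql = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
    print(sql.strip())
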
  If you are using a HuggingFace generation script:
 
 
  ## Model Card Contact
 
+ Dylan Oberst & llmware team