Update model card metadata: pipeline tag, license, and add GitHub link
#1 opened by nielsr (HF staff)

README.md CHANGED

@@ -1,10 +1,8 @@
-
 ---
-
-pipeline_tag: text-generation
 base_model: TableLLM-13b
 library_name: transformers
-
+pipeline_tag: table-question-answering
+license: llama2
 ---
 
 [](https://hf.co/QuantFactory)
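
For reference, the same front-matter change can also be made programmatically with `huggingface_hub`'s `metadata_update` helper. The sketch below only illustrates that route (the repo id is taken from this card, the flags are illustrative); it is not how this PR was opened.

```python
# Minimal sketch: apply the same front-matter changes via the Hub API.
from huggingface_hub import metadata_update

metadata_update(
    "QuantFactory/TableLLM-13b-GGUF",
    {
        "pipeline_tag": "table-question-answering",  # was: text-generation
        "license": "llama2",
    },
    overwrite=True,   # replace the existing pipeline_tag value
    create_pr=True,   # open a pull request instead of pushing to main
)
```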

@@ -17,7 +15,6 @@ This is quantized version of [RUCKBReasoning/TableLLM-13b](https://huggingface.c
 
 ---
 
-license: llama2
 datasets:
 - RUCKBReasoning/TableLLM-SFT
 language:
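
The `datasets` entry points at the SFT corpus used to train the base model; it should be loadable with the `datasets` library, assuming a default configuration and a `train` split:

```python
# Sketch: inspect the SFT dataset referenced in the card metadata.
# The split name is an assumption; check the dataset card if it differs.
from datasets import load_dataset

ds = load_dataset("RUCKBReasoning/TableLLM-SFT", split="train")
print(ds[0])
```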

@@ -29,7 +26,7 @@ tags:
 
 ---
 
-[](https://hf.co/QuantFactory)
+[](https://hf.co/QuantFactory)
 
 
 # QuantFactory/TableLLM-13b-GGUF

@@ -52,9 +49,9 @@
 | Model | WikiTQ | TAT-QA | FeTaQA | OTTQA | WikiSQL | Spider | Self-created | Average |
 | :------------------- | :----: | :----: | :----: | :-----: | :-----: | :----: | :----------: | :-----: |
 | TaPEX | 38.5 | - | - | - | 83.9 | 15.0 | / | 45.8 |
+| TaPas | 31.5 | - | - | - | 74.2 | 23.1 | / | 42.92 |
 | TableLlama | 24.0 | 22.2 | 20.5 | 6.4 | 43.7 | 9.0 | / | 20.7 |
+| GPT3.5 | 58.5 | 72.1 | 71.2 | 60.8 | 81.7 | 67.4 | 77.1 | 69.8 |
 | GPT4 | **74.1** | **77.1** | **78.4** | **69.5** | 84.0 | 69.5 | 77.8 | **75.8** |
 | Llama2-Chat (13B) | 48.8 | 49.6 | 67.7 | 61.5 | - | - | - | 56.9 |
 | CodeLlama (13B) | 43.4 | 47.2 | 57.2 | 49.7 | 38.3 | 21.9 | 47.6 | 43.6 |
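
Since this repo ships GGUF quantizations of TableLLM-13b, one quick way to try it locally is `hf_hub_download` plus `llama-cpp-python`. The filename below is a guess; pick an actual `.gguf` file from the repo's file list.

```python
# Minimal local-inference sketch for a GGUF quant (filename is hypothetical).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="QuantFactory/TableLLM-13b-GGUF",
    filename="TableLLM-13b.Q4_K_M.gguf",  # assumption: use a real file from the repo
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Answer the question based on the table below.\n", max_tokens=128)
print(out["choices"][0]["text"])
```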

@@ -62,8 +59,8 @@ We evaluate the code solution generation ability of TableLLM on three benchmarks
 | StructGPT (GPT3.5) | 52.5 | 27.5 | 11.8 | 14.0 | 67.8 | **84.8** | / | 48.9 |
 | Binder (GPT3.5) | 61.6 | 12.8 | 6.8 | 5.1 | 78.6 | 52.6 | / | 42.5 |
 | DATER (GPT3.5) | 53.4 | 28.4 | 18.3 | 13.0 | 58.2 | 26.5 | / | 37.0 |
-| TableLLM-7B (Ours) | 58.8 | 66.9 | 72.6
-| TableLLM-13B (Ours)
+| TableLLM-7B (Ours) | 58.8 | 66.9 | 72.6 | 63.1 | 86.6 | 82.6 | 78.8 | 72.8 |
+| TableLLM-13B (Ours) | 62.4 | 68.2 | 74.5 | 62.5 | **90.7** | 83.4 | **80.8** | 74.7 |
 
 ## Prompt Template
 The prompts we used for generating code solutions and text answers are introduced below.
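
The concrete `[INST]`/`[Solution]` templates referenced here are elided from this diff. Purely to illustrate the general shape, a table-QA prompt can be built by flattening the table into text and substituting it into a template string; the `TEMPLATE` wording below is a placeholder, not the card's actual prompt.

```python
# Illustrative only: TEMPLATE is a placeholder, not TableLLM's real prompt.
import csv
import io

TEMPLATE = (
    "[INST]Answer the question based on the table below.\n"
    "{table}\n"
    "Question: {question}[/INST]"
)

def build_prompt(csv_text: str, question: str) -> str:
    # Flatten the CSV rows into pipe-separated lines for the prompt.
    rows = list(csv.reader(io.StringIO(csv_text)))
    table = "\n".join(" | ".join(cells) for cells in rows)
    return TEMPLATE.format(table=table, question=question)

print(build_prompt("name,score\nAlice,91\nBob,84", "Who has the higher score?"))
```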

@@ -120,5 +117,4 @@ The prompt template for direct text answer generation on short tables.
 ### [Solution][INST/]
 ````
 
-For more details about how to use TableLLM, please refer to our GitHub page: <https://github.com/TableLLM/TableLLM>
-
+For more details about how to use TableLLM, please refer to our GitHub page: <https://github.com/TableLLM/TableLLM>