|
--- |
|
language: |
|
- en |
|
tags: |
|
- table-to-text |
|
- tabular |
|
datasets: |
|
- totto |
|
--- |
|
|
|
# BLOOM (0.56B) fine-tuned on ToTTo for table-to-text
|
|
|
This model is a fine-tuned version of [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) on the **ToTTo** [dataset](https://huggingface.co/datasets/totto). ToTTo is a table-to-text dataset: given an English Wikipedia table and a set of highlighted cells, the task is to generate a one-sentence description of those cells.
|
|
|
|
|
## Usage |
|
|
|
```py
from datasets import load_dataset
from transformers import BloomTokenizerFast, BloomForCausalLM

from preprocess import preprocess  # This file is included in the repo

valid_dataset = load_dataset('totto', split='validation')

# Linearize the tables into flat text strings the model can consume
valid_dataset = valid_dataset.map(preprocess)

model_ckpt = "mrm8488/bloom-560m-finetuned-totto-table-to-text"

tokenizer = BloomTokenizerFast.from_pretrained(model_ckpt)
model = BloomForCausalLM.from_pretrained(model_ckpt).to("cuda")


def explain_hl_cells(text):
    # Tokenize the linearized table and move the tensors to the GPU
    inputs = tokenizer(text, return_tensors='pt')
    input_ids = inputs.input_ids.to("cuda")
    attention_mask = inputs.attention_mask.to("cuda")
    # Greedy decoding; alternatives worth trying: num_beams=3, temperature=1.9
    output = model.generate(input_ids, attention_mask=attention_mask, max_length=2048, eos_token_id=tokenizer.eos_token_id)
    return tokenizer.decode(output[0], skip_special_tokens=False)


example = valid_dataset[1]

print(explain_hl_cells(example['linearized_table']))
```
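
The `preprocess.py` shipped with this repo is not reproduced here. As a rough illustration only, a linearization in the spirit of the ToTTo baselines might look like the sketch below; the exact tags and dataset fields used by the repo's `preprocess` function may differ.

```py
# Hypothetical sketch, NOT the repo's preprocess.py: a minimal ToTTo-style
# linearization that tags the page title, section title and highlighted cells.
def linearize_example(example):
    parts = [
        f"<page_title> {example['table_page_title']} </page_title>",
        f"<section_title> {example['table_section_title']} </section_title>",
        "<table>",
    ]
    # Keep only the highlighted cells, as in the ToTTo "subtable" setting
    for row, col in example['highlighted_cells']:
        cell = example['table'][row][col]
        parts.append(f"<cell> {cell['value']} </cell>")
    parts.append("</table>")
    example['linearized_table'] = " ".join(parts)
    return example
```

Passing only the highlighted cells keeps the prompt short while the page and section titles still give the model the context it needs to describe them.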
|
|
|
### Evaluation results |
|
|
|
| Metric | Value | |
|
|:-------:|:-----:| |
|
| rouge1 | 0.56 | |
|
| rouge2 | 0.33 | |
|
| rougeL | 0.48 | |
|
| rougeLsum | 0.48 | |
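
These scores can in principle be reproduced with the Hugging Face `evaluate` library. The sketch below assumes the ToTTo validation split and its `sentence_annotations` references; it is not necessarily the exact setup used to produce the reported numbers.

```py
# Hedged sketch: computing ROUGE over the validation split with `evaluate`.
import evaluate

rouge = evaluate.load("rouge")

predictions, references = [], []
for ex in valid_dataset:
    generated = explain_hl_cells(ex['linearized_table'])
    # The causal LM echoes its prompt, so strip the linearized table prefix
    predictions.append(generated[len(ex['linearized_table']):])
    # ToTTo ships several reference sentences per validation example
    references.append(ex['sentence_annotations']['final_sentence'])

print(rouge.compute(predictions=predictions, references=references))
```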
|
|
|
|
|
|
|
|
|
### Framework versions |
|
|
|
- Transformers 4.21.2 |
|
- Pytorch 1.12.1+cu113 |
|
- Datasets 2.4.0 |
|
- Tokenizers 0.12.1 |
|
|