---
library_name: transformers
tags:
- tapas
- table
- question
license: mit
language:
- en
base_model:
- google/tapas-base-finetuned-wtq
pipeline_tag: table-question-answering
---
This is an experimental model fine-tuned on balance sheets collected from financial services. The fine-tuning adapts the base TAPAS model to the large numeric values and complex table structures commonly found in balance sheets.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
# !pip install sugardata transformers
from transformers import TapasTokenizer, TapasForQuestionAnswering
from sugardata.utility.tapas import generate_financial_balance_sheet, get_real_tapas_answer

# Generate a synthetic financial balance sheet and pose a question
table = generate_financial_balance_sheet()
question = "What was the reported value of Total Debt in 2021?"

# Load the fine-tuned model and its tokenizer
model_name = "yeniguno/tapas-base-wtq-balance-sheet-tuned"
model = TapasForQuestionAnswering.from_pretrained(model_name)
tokenizer = TapasTokenizer.from_pretrained(model_name)

# Tokenize the table together with the question
inputs = tokenizer(table=table, queries=[question], padding="max_length", return_tensors="pt")

# Decode the model's prediction into a cell value
answer = get_real_tapas_answer(table, model, tokenizer, inputs)
# e.g. 8873000.0 (the generated table is random, so the value varies per run)
```
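If you prefer not to depend on the `sugardata` helpers, you can supply your own table. TAPAS tokenizers expect the table as a pandas `DataFrame` in which every cell is a string. The sketch below builds a small hypothetical balance sheet in that format (the item names and figures are illustrative, not from the training data):

```python
import pandas as pd

# Hypothetical balance-sheet table; all cells must be strings for TapasTokenizer
data = {
    "Item": ["Total Assets", "Total Debt", "Total Equity"],
    "2020": ["12500000", "8100000", "4400000"],
    "2021": ["13900000", "8873000", "5027000"],
}
table = pd.DataFrame(data)

# Sanity check: every column holds strings, as TAPAS requires
assert all(dtype == object for dtype in table.dtypes)
```

A table built this way can be passed directly as the `table=` argument of the tokenizer call shown above.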
## Training Details
| Epoch | Train Loss | Val Loss |
|-------|------------|----------|
| 1/5   | 0.1514     | 0.0107   |
| 2/5   | 0.0135     | 0.0098   |
| 3/5   | 0.0116     | 0.0081   |
| 4/5   | 0.0081     | 0.0071   |
| 5/5   | 0.0049     | 0.0043   |