## Jam-Contextsum

Jam-Contextsum is a GPT-2-like model fine-tuned to generate summaries of why a method exists.

## Jam-Contextsum Training Details

- ckpt_pretrain.pt is the pretrained checkpoint that we fine-tune to generate summaries of why a method exists (see the loading sketch below).
- Our [GitHub repo](https://github.com/apcl-research/jam-contextsum) contains the code for reproducing our results with the same [data](https://huggingface.co/datasets/apcl/jam_contextsum).
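
The snippet below is a minimal sketch of loading the checkpoint with a nanoGPT-style setup; the `GPT` and `GPTConfig` classes and the `model` import path are assumptions based on that style, so adjust them to match the actual layout in our GitHub repo.

```python
import torch

# Assumed nanoGPT-style classes; the real module layout may differ.
from model import GPT, GPTConfig

# Load the fine-tuning checkpoint on CPU.
checkpoint = torch.load("ckpt_pretrain.pt", map_location="cpu")

# Assumes the checkpoint follows nanoGPT's layout, which stores the
# model arguments under "model_args" and the weights under "model".
config = GPTConfig(**checkpoint["model_args"])
model = GPT(config)
model.load_state_dict(checkpoint["model"])
model.eval()
```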

## ckpt_pretrain.pt

| Hyperparameter | Description | Value |
| -------------- | ----------- | ----- |
| e | embedding dimensions | 512 |
| L | number of layers | 4 |
| h | attention heads | 4 |
| c | block size / context length | 1,024 |
| b | batch size | 4 |
| a | accumulation steps | 32 |
| d | dropout | 0.20 |
| r | learning rate | 3e-5 |
| y | iterations | 1e-5 |
| iter | number of iterations after pretraining | 137,900 |
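
For convenience, the table maps to a training configuration along the following lines. This is a sketch using nanoGPT-style variable names, not a verbatim copy of our config file; the actual names in the repo may differ.

```python
# Fine-tuning configuration implied by the hyperparameter table above.
n_embd = 512                       # e: embedding dimensions
n_layer = 4                        # L: number of layers
n_head = 4                         # h: attention heads
block_size = 1024                  # c: block size / context length
batch_size = 4                     # b: batch size
gradient_accumulation_steps = 32   # a: accumulation steps
dropout = 0.20                     # d: dropout
learning_rate = 3e-5               # r: learning rate
max_iters = 137_900                # iter: iterations after pretraining
```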