---
base_model:
- princeton-nlp/Llama-3-8B-ProLong-512k-Instruct
license: apache-2.0
language:
- en
datasets:
- chtmp223/CLIPPER
---

# ProLong-512k-8B-CLIPPER
ProLong-512k-8B-CLIPPER is a fine-tuned version of princeton-nlp/Llama-3-8B-ProLong-512k-Instruct, trained with supervised fine-tuning on the chtmp223/CLIPPER dataset.
Please see [our paper](https://arxiv.org/abs/2502.14854) for more details on the method.

## 📒 Model Details

### Model Description

- **Language(s) (NLP):** English
- **License:** Apache-2.0
- **Finetuned from model:** [princeton-nlp/Llama-3-8B-ProLong-512k-Instruct](https://huggingface.co/princeton-nlp/Llama-3-8B-ProLong-512k-Instruct)

### Model Sources

- **Repository:** [GitHub repository](https://github.com/chtmp223/CLIPPER)
- **Paper:** [https://arxiv.org/abs/2502.14854](https://arxiv.org/abs/2502.14854)

## 💻 Training Details

### Training Data

[chtmp223/CLIPPER](https://huggingface.co/datasets/chtmp223/CLIPPER)

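The dataset can be pulled directly from the Hugging Face Hub for inspection before fine-tuning. A minimal sketch (the available splits and columns are not listed in this card, so check them after loading):

```python
# Minimal sketch: load the CLIPPER training data from the Hub.
# Split/column names are not documented in this card, so inspect `ds` before use.
from datasets import load_dataset

ds = load_dataset("chtmp223/CLIPPER")
print(ds)
```
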
### Training Procedure

| **Configuration**                  | **Value**    |
|------------------------------------|--------------|
| Hardware (training and inference)  | 8x A100      |
| Tracking                           | wandb        |
| batch size                         | 16           |
| gradient_checkpointing             | True         |
| learning_rate                      | 1.0e-6       |
| lr_scheduler_type                  | cosine       |
| max_length                         | 131072       |
| num_train_epochs                   | 1            |
| optim                              | adamw_torch  |

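As a rough illustration, the hyperparameters above can be mapped onto Hugging Face `TrainingArguments`. The actual run used the ProLong training code referenced under Software below, so this is a sketch rather than the exact configuration; the per-device batch split, output path, and bf16 precision are assumptions.

```python
# Illustrative mapping of the table above onto transformers TrainingArguments.
# Assumptions: global batch size 16 split across 8 A100s, bf16 precision, hypothetical output_dir.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="prolong-512k-8b-clipper-sft",  # hypothetical output path
    per_device_train_batch_size=2,             # 2 x 8 GPUs = global batch size 16 (assumed split)
    gradient_checkpointing=True,
    learning_rate=1.0e-6,
    lr_scheduler_type="cosine",
    num_train_epochs=1,
    optim="adamw_torch",
    report_to="wandb",
    bf16=True,                                  # assumed precision; not stated in this card
)
# max_length=131072 applies to tokenizing/packing the training sequences, not to TrainingArguments.
```
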
#### Software

Training code is adapted from [https://github.com/princeton-nlp/ProLong](https://github.com/princeton-nlp/ProLong).

## 🤗 Inference
Inference is done with [vLLM](https://github.com/vllm-project/vllm) on a single A100-80GB, as sketched below.

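A minimal vLLM sketch follows. The repository id `chtmp223/ProLong-512k-8B-CLIPPER`, the prompt wording, and the greedy decoding settings are assumptions for illustration, not the exact setup used in the paper.

```python
# Minimal long-context inference sketch with vLLM on a single A100-80GB.
# Assumed model id and prompt; adjust to your task.
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

model_id = "chtmp223/ProLong-512k-8B-CLIPPER"  # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
llm = LLM(model=model_id, max_model_len=131072)

long_document = "..."  # book-length input text
claim = "..."          # statement to check against the document

messages = [
    {"role": "user",
     "content": f"{long_document}\n\nIs the following claim supported by the text above?\n{claim}"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

outputs = llm.generate([prompt], SamplingParams(temperature=0.0, max_tokens=512))
print(outputs[0].outputs[0].text)
```
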
## 📜 Citation

```bibtex
@misc{pham2025clippercompressionenableslongcontext,
      title={CLIPPER: Compression enables long-context synthetic data generation},
      author={Chau Minh Pham and Yapei Chang and Mohit Iyyer},
      year={2025},
      eprint={2502.14854},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.14854},
}
```