purewhite42 committed
Commit 6ff92ec · verified · 1 Parent(s): b5e9749

Update README.md

Files changed (1):
  1. README.md +60 -135

README.md CHANGED
@@ -1,144 +1,69 @@
  ---
- datasets: []
- language: []
- library_name: sentence-transformers
  pipeline_tag: sentence-similarity
  tags:
- - sentence-transformers
- - sentence-similarity
- - feature-extraction
- widget: []
  ---

- # SentenceTransformer
-
- This is a [sentence-transformers](https://www.SBERT.net) model trained. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
-
- ## Model Details
-
- ### Model Description
- - **Model Type:** Sentence Transformer
- <!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- - **Maximum Sequence Length:** 8192 tokens
- - **Output Dimensionality:** 1024 tokens
- - **Similarity Function:** Cosine Similarity
- <!-- - **Training Dataset:** Unknown -->
- <!-- - **Language:** Unknown -->
- <!-- - **License:** Unknown -->
-
- ### Model Sources
-
- - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
-
- ### Full Model Architecture
-
  ```
- SentenceTransformer(
-   (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
-   (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
-   (2): Normalize()
- )
  ```

- ## Usage
-
- ### Direct Usage (Sentence Transformers)
-
- First install the Sentence Transformers library:
-
- ```bash
- pip install -U sentence-transformers
- ```
-
- Then you can load this model and run inference.
- ```python
- from sentence_transformers import SentenceTransformer
-
- # Download from the 🤗 Hub
- model = SentenceTransformer("sentence_transformers_model_id")
- # Run inference
- sentences = [
-     'The weather is lovely today.',
-     "It's so sunny outside!",
-     'He drove to the stadium.',
- ]
- embeddings = model.encode(sentences)
- print(embeddings.shape)
- # [3, 1024]
-
- # Get the similarity scores for the embeddings
- similarities = model.similarity(embeddings, embeddings)
- print(similarities.shape)
- # [3, 3]
- ```
-
- <!--
- ### Direct Usage (Transformers)
-
- <details><summary>Click to see the direct usage in Transformers</summary>
-
- </details>
- -->
-
- <!--
- ### Downstream Usage (Sentence Transformers)
-
- You can finetune this model on your own dataset.
-
- <details><summary>Click to expand</summary>
-
- </details>
- -->
-
- <!--
- ### Out-of-Scope Use
-
- *List how the model may foreseeably be misused and address what users ought not to do with the model.*
- -->
-
- <!--
- ## Bias, Risks and Limitations
-
- *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
- -->
-
- <!--
- ### Recommendations
-
- *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
- -->
-
- ## Training Details
-
- ### Framework Versions
- - Python: 3.11.9
- - Sentence Transformers: 3.0.1
- - Transformers: 4.44.2
- - PyTorch: 2.4.0+cu121
- - Accelerate: 0.33.0
- - Datasets: 2.21.0
- - Tokenizers: 0.19.1
-
- ## Citation
-
- ### BibTeX
-
- <!--
- ## Glossary
-
- *Clearly define terms in order to be accessible across audiences.*
- -->
-
- <!--
- ## Model Card Authors
-
- *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
- -->
-
- <!--
- ## Model Card Contact
-
- *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
- -->
  ---
+ license: apache-2.0
+ language:
+ - en
+ base_model:
+ - BAAI/bge-m3
  pipeline_tag: sentence-similarity
+ library_name: transformers
  tags:
+ - lean4
+ - dependency-retrieval
+ - formal-mathematics
  ---

+ <div align="center">
+ <h1 style="font-size: 1.5em;">[ICLR'25 Spotlight] Rethinking and Improving Autoformalization: Towards a Faithful Metric and a Dependency Retrieval-based Approach</h1>
+ <div style="display: flex; justify-content: center; gap: 8px; flex-wrap: wrap;">
+ <a href="https://choosealicense.com/licenses/apache-2.0/"><img src="https://img.shields.io/badge/License-Apache%202.0-blue.svg" alt="License: Apache 2.0"></a>
+ <a href="https://github.com/leanprover-community/mathlib4"><img src="https://img.shields.io/badge/Lean-4-orange" alt="Lean 4"></a>
+ <a href="https://github.com/Purewhite2019/rethinking_autoformalization"><img src="https://img.shields.io/badge/GitHub-%23121011.svg?logo=github&logoColor=white" alt="GitHub"></a>
+ </div>
+ <h2>Qi Liu, Xinhao Zheng, Xudong Lu, Qinxiang Cao, Junchi Yan* (* indicates corresponding author)</h2>
+ <h2>Sch. of Computer Science & Sch. of Artificial Intelligence, Shanghai Jiao Tong University</h2>
+ </div>
+
+ Please refer to the [📺GitHub repo](https://github.com/Purewhite2019/rethinking_autoformalization) and the
+ [📃paper](https://openreview.net/pdf?id=hUb2At2DsQ) for more details.
+
+ ## 📈 Performance
+ Fmt denotes the retriever's input format (F: formal declarations only; F+IF: formal declarations plus informal descriptions; see Usage below). DR is the proposed dependency retriever; BM25 is the baseline.
+
+ | **Bench**    | **Fmt** | **Method** | **Recall@5** | **Precision@5** |
+ |--------------|---------|------------|--------------|-----------------|
+ | **ProofNet** | F       | BM25       | 0.16%        | 0.11%           |
+ |              | F       | DR         | 35.52%       | 22.89%          |
+ |              | F+IF    | BM25       | 0.00%        | 0.00%           |
+ |              | F+IF    | DR         | 32.47%       | 20.32%          |
+ | **Con-NF**   | F       | BM25       | 4.41%        | 2.37%           |
+ |              | F       | DR         | 24.32%       | 14.05%          |
+ |              | F+IF    | BM25       | 9.86%        | 6.95%           |
+ |              | F+IF    | DR         | 27.91%       | 17.57%          |
+
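+ For reference, a minimal sketch of the conventional top-k retrieval definitions of these metrics (generic formulas, not code from this repo; the paper's exact evaluation may differ in edge cases):
+ ```python
+ def recall_precision_at_k(retrieved, gold, k=5):
+     """Standard top-k retrieval metrics for a single query.
+
+     retrieved: ranked list of predicted dependencies (best first).
+     gold: set of ground-truth dependencies.
+     """
+     hits = len(set(retrieved[:k]) & set(gold))
+     return hits / len(gold), hits / k  # (recall@k, precision@k)
+
+ # Hypothetical example: 2 of the top-5 predictions are true dependencies.
+ r, p = recall_precision_at_k(
+     ["Nat.add_comm", "Nat.mul_comm", "Nat.zero_add", "Nat.succ_add", "Nat.one_mul"],
+     {"Nat.add_comm", "Nat.zero_add", "Nat.add_assoc"},
+ )
+ print(f"Recall@5 = {r:.2%}, Precision@5 = {p:.2%}")  # 66.67%, 40.00%
+ ```
+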
+ ## ⚙️ Usage
+ - [🤗`purewhite42/dependency_retriever_f`](https://huggingface.co/purewhite42/dependency_retriever_f): Dependency retriever whose inputs are formatted using only formal declarations, SFTed from [🤗`BAAI/bge-m3`](https://huggingface.co/BAAI/bge-m3)
+ - [🤗`purewhite42/dependency_retriever_f_if`](https://huggingface.co/purewhite42/dependency_retriever_f_if): Dependency retriever whose inputs are formatted using both formal declarations and informal descriptions, SFTed from [🤗`BAAI/bge-m3`](https://huggingface.co/BAAI/bge-m3)
+
+ To run retrieval with the repo's script (see the sketch after this block for loading the checkpoint directly):
+ ```shell
+ python -m retriever.retrieve_dr \
+     --model_path /path/to/the/model \
+     --save_path /path/to/output/results \
+     --eval_set ...  # {proofnet, connf}
  ```
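+
+ The previous revision of this card loaded the checkpoint with [sentence-transformers](https://www.SBERT.net) (an XLM-RoBERTa encoder with CLS pooling and normalized 1024-dim embeddings). Below is a minimal sketch along those lines; the query and candidate strings are hypothetical, and the exact input formatting used during SFT may differ:
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Assumption: the checkpoint loads like its base model BAAI/bge-m3.
+ model = SentenceTransformer("purewhite42/dependency_retriever_f")
+
+ # Hypothetical formal query and candidate Mathlib declarations.
+ query = "theorem add_comm (a b : Nat) : a + b = b + a"
+ candidates = [
+     "theorem Nat.add_comm (n m : Nat) : n + m = m + n",
+     "theorem Nat.mul_comm (n m : Nat) : n * m = m * n",
+ ]
+
+ q_emb = model.encode([query])     # shape: [1, 1024]
+ c_emb = model.encode(candidates)  # shape: [2, 1024]
+
+ # Embeddings are L2-normalized, so cosine similarity ranks candidates.
+ scores = model.similarity(q_emb, c_emb)  # shape: [1, 2]
+ print(scores)
+ ```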
+ ## 📚 Citation
+ If you find our work useful in your research, please cite:
+ ```bibtex
+ @inproceedings{liu2025rethinking,
+   title={Rethinking and improving autoformalization: towards a faithful metric and a Dependency Retrieval-based approach},
+   author={Qi Liu and Xinhao Zheng and Xudong Lu and Qinxiang Cao and Junchi Yan},
+   booktitle={The Thirteenth International Conference on Learning Representations},
+   year={2025},
+   url={https://openreview.net/forum?id=hUb2At2DsQ}
+ }
  ```

+ # License
+ This project is released under the Apache 2.0 license. Please see the [LICENSE](https://github.com/Purewhite2019/rethinking_autoformalization/blob/main/LICENSE) file for more information.
+
+ # Contact
+ Feel free to discuss the paper, data, or code with us through GitHub issues or email!
+ - Qi Liu: [email protected]