nielsr committed
Commit 8ef09e4 · verified · Parent(s): 3b068e5

Add library_name, pipeline_tag, and project page link


This PR adds the metadata `library_name: transformers` and `pipeline_tag: text-generation` to the model card.
I also added a link to the project page.
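
For context, `library_name: transformers` determines which library's loading snippet the Hub displays, and `pipeline_tag: text-generation` lists the model under the text-generation task filter. Below is a minimal sketch of the loading pattern this metadata points users to; the repo id is a placeholder and is not specified in this PR.

```python
# Minimal sketch: loading the model with transformers, as implied by the added
# `library_name` / `pipeline_tag` metadata. The repo id is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<org>/Universal-PRM-7B"  # placeholder repo id, not given in this PR
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
```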

Files changed (1)
  1. README.md +7 -2
README.md CHANGED
@@ -1,7 +1,13 @@
 ---
 license: apache-2.0
+library_name: transformers
+pipeline_tag: text-generation
 ---
+
 # Universal-PRM-7B
+
+Project page: https://auroraprm.github.io/
+
 ## 1. Overview
 Universal-PRM is trained using Qwen2.5-Math-7B-Instruct as the base. The training process incorporates diverse policy distributions, ensemble prompting, and reverse verification to enhance generalization and robustness. It achieves state-of-the-art performance on ProcessBench and the internally developed UniversalBench.
 ## 2. Experiments
@@ -75,5 +81,4 @@ with torch.no_grad():
 judge_list_infer.append(reward)
 
 print(judge_list_infer) # [0.73828125, 0.7265625, 0.73046875, 0.73828125, 0.734375]
-
-```
+```