samagra14wefi committed
Commit c1bd7b1 • Parent(s): 2e8e831
Update README.md
README.md CHANGED
@@ -7,6 +7,7 @@ language:
 library_name: keras
 tags:
 - evaluations
+pipeline_tag: text-classification
 ---
 
 # PreferED: Preference Evaluation DeBERTa Model
@@ -155,8 +156,4 @@ trainer.train()
 
 ### Loss Function Consideration
 
-Anthropic recommends using the loss function L<sub>PM</sub> = log(1 + e^(r<sub>bad</sub> - r<sub>good</sub>)) for preference models. However, this PreferED model was trained using binary cross-entropy loss, and therefore changing the loss functions might increase the training time to converge. For more details on preference models and loss functions, you may refer to the paper by Askell et al., 2021: [A General Language Assistant as a Laboratory for Alignment](https://arxiv.org/abs/2112.00861).
-
-
-
-
+Anthropic recommends using the loss function L<sub>PM</sub> = log(1 + e^(r<sub>bad</sub> - r<sub>good</sub>)) for preference models. However, this PreferED model was trained using binary cross-entropy loss, and therefore changing the loss functions might increase the training time to converge. For more details on preference models and loss functions, you may refer to the paper by Askell et al., 2021: [A General Language Assistant as a Laboratory for Alignment](https://arxiv.org/abs/2112.00861).
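As a reading aid for the paragraph added above, the snippet below is a minimal sketch (illustrative only, not code from this repository) of the two objectives it contrasts: the Anthropic-recommended preference loss L<sub>PM</sub> = log(1 + e^(r<sub>bad</sub> - r<sub>good</sub>)), written via `softplus` for numerical stability, and a binary cross-entropy objective of the kind the README says PreferED was trained with. The function names and the assumption that the model emits a single score logit per (context, response) pair are hypothetical, not taken from the model code.

```python
# Illustrative sketch only (not from this repository): the preference-model
# loss L_PM = log(1 + exp(r_bad - r_good)) from Askell et al., 2021, next to
# a plain binary cross-entropy objective like the one the README says
# PreferED was trained with. `r_good`/`r_bad` are assumed to be scalar score
# logits for the preferred and rejected responses.
import torch
import torch.nn.functional as F


def preference_loss(r_good: torch.Tensor, r_bad: torch.Tensor) -> torch.Tensor:
    # softplus(x) = log(1 + exp(x)), a numerically stable form of L_PM.
    return F.softplus(r_bad - r_good).mean()


def bce_loss(score_logit: torch.Tensor, label: torch.Tensor) -> torch.Tensor:
    # Binary cross-entropy on a single score logit per (context, response)
    # pair, with label 1.0 for good and 0.0 for bad responses (assumed setup).
    return F.binary_cross_entropy_with_logits(score_logit, label)


# Example: identical scores give the chance-level loss log(2) ≈ 0.693.
r = torch.zeros(4)
print(preference_loss(r, r))  # tensor(0.6931)
```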