re-bert-large / test_eval.txt
Default classification report:
               precision    recall  f1-score   support

        equal     0.7647    0.8667    0.8125        15
      greater     1.0000    0.9091    0.9524        11
greater-equal     0.6207    0.8571    0.7200        21
         less     1.0000    0.5000    0.6667         2
   less-equal     0.7647    0.9286    0.8387        14
    necessity     0.8062    0.8883    0.8453       206
         none     0.7834    0.6150    0.6891       200
  not-part-of     1.0000    1.0000    1.0000         2
      part-of     0.7143    0.8025    0.7558       162
    selection     0.8259    0.7940    0.8096       233

     accuracy                         0.7829       866
    macro avg     0.8280    0.8161    0.8090       866
 weighted avg     0.7865    0.7829    0.7805       866
mcc = 0.7252961407864995
precision(macro) = 0.8279886882633785
recall(macro) = 0.8161281928146451
f1_score(macro) = 0.8090040477874731
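For reference, the macro-averaged scores above weight every relation class equally, regardless of support. A minimal pure-Python sketch of how such macro precision/recall/F1 values are computed (not the original evaluation script, which likely used scikit-learn's classification_report):

```python
def macro_scores(y_true, y_pred):
    """Compute macro-averaged precision, recall, and F1.

    Each class contributes equally to the average, so rare classes
    (e.g. "less" with support 2) count as much as frequent ones
    (e.g. "selection" with support 233).
    """
    labels = sorted(set(y_true) | set(y_pred))
    precs, recs, f1s = [], [], []
    for lab in labels:
        # Per-class true positives, false positives, false negatives.
        tp = sum(t == lab and p == lab for t, p in zip(y_true, y_pred))
        fp = sum(t != lab and p == lab for t, p in zip(y_true, y_pred))
        fn = sum(t == lab and p != lab for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precs.append(prec)
        recs.append(rec)
        f1s.append(f1)
    n = len(labels)
    return sum(precs) / n, sum(recs) / n, sum(f1s) / n
```

The weighted averages in the report instead scale each class's score by its support before averaging, which is why they sit closer to the scores of the large classes (necessity, none, selection).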