jeevb committed
Commit 7c37490
1 Parent(s): f5c7989

Update README.md
Files changed (1)
  1. README.md +4 -0
README.md CHANGED
@@ -69,3 +69,7 @@ Each pair of models is subject to 2 matches, with the positions of the respective
 it wins both matches.
 
 For each prompt, we then compute Bradley-Terry scores for the respective models using the same [method](https://github.com/lm-sys/FastChat/blob/f2e6ca964af7ad0585cadcf16ab98e57297e2133/fastchat/serve/monitor/elo_analysis.py#L57) as that used in the [LMSYS Chatbot Arena Leaderboard](https://chat.lmsys.org/?leaderboard). Finally, we normalize all scores to a scale from 0 to 1 for interoperability with other weighted ranking systems.
+
+## Model
+
+The embedding model was generated by first fine-tuning [`BAAI/bge-base-en-v1.5`](https://huggingface.co/BAAI/bge-base-en-v1.5) with the intent categories from the dataset above, using contrastive learning with cosine similarity loss, and subsequently merging the resultant model with the base model at a 3:2 ratio.
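As an illustration of the scoring step described in the README context above, here is a minimal Bradley-Terry sketch in the spirit of the linked FastChat routine (a maximum-likelihood fit via logistic regression), followed by the 0-to-1 normalization. The `battles` DataFrame and its `model_a` / `model_b` / `winner` column names are assumptions for illustration, not part of this repository.

```python
# Minimal sketch: per-prompt Bradley-Terry scores via a maximum-likelihood
# logistic-regression fit (the same idea as the linked FastChat routine),
# then min-max normalization to [0, 1]. Ties are ignored for brevity.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def bradley_terry_scores(battles: pd.DataFrame) -> pd.Series:
    # battles: one row per match, with columns model_a, model_b, winner,
    # where winner is either "model_a" or "model_b".
    models = pd.unique(battles[["model_a", "model_b"]].values.ravel())
    idx = {m: i for i, m in enumerate(models)}

    # Design matrix: +1 in model_a's column, -1 in model_b's column.
    X = np.zeros((len(battles), len(models)))
    rows = np.arange(len(battles))
    X[rows, battles["model_a"].map(idx).to_numpy()] = 1.0
    X[rows, battles["model_b"].map(idx).to_numpy()] = -1.0
    y = (battles["winner"] == "model_a").astype(int).to_numpy()

    # Bradley-Terry strengths are the coefficients of a no-intercept logistic
    # fit (light regularization keeps the fit stable on small match counts).
    lr = LogisticRegression(fit_intercept=False, C=1e3)
    lr.fit(X, y)
    strength = pd.Series(lr.coef_[0], index=models)

    # Normalize to a 0-1 scale for use in other weighted ranking systems.
    return (strength - strength.min()) / (strength.max() - strength.min())
```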
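The new Model section describes two steps: contrastive fine-tuning on the intent categories with a cosine-similarity loss, then a 3:2 merge with the base model. The sketch below uses the `sentence-transformers` training API under assumed details; the pair construction, hyperparameters, and merge direction (3 parts fine-tuned to 2 parts base) are illustrative guesses, not the exact recipe behind this model.

```python
# Hedged sketch of the described recipe: contrastive fine-tuning of
# BAAI/bge-base-en-v1.5 with CosineSimilarityLoss, then a 3:2 weighted merge
# of the fine-tuned weights back into the base weights. Training pairs,
# hyperparameters, and merge direction are illustrative assumptions.
import torch
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

BASE = "BAAI/bge-base-en-v1.5"
model = SentenceTransformer(BASE)

# Hypothetical contrastive pairs: label 1.0 when two prompts share an intent
# category from the dataset, 0.0 when they do not.
train_examples = [
    InputExample(texts=["reset my password", "I forgot my login"], label=1.0),
    InputExample(texts=["reset my password", "write me a poem"], label=0.0),
]
loader = DataLoader(train_examples, shuffle=True, batch_size=2)
loss = losses.CosineSimilarityLoss(model)
model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=0)

# 3:2 linear merge (0.6 * fine-tuned + 0.4 * base) over floating-point weights;
# non-float buffers (e.g. position ids) are kept from the fine-tuned model.
base = SentenceTransformer(BASE)
base_sd = base.state_dict()
merged = {
    k: 0.6 * v + 0.4 * base_sd[k] if torch.is_floating_point(v) else v
    for k, v in model.state_dict().items()
}
base.load_state_dict(merged)
base.save("bge-base-en-v1.5-intent-merged")
```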