ronak committed
Commit 2b1309f
1 Parent(s): d527306

Create README.md

Files changed (1)
  1. README.md (+40, -0)
README.md ADDED
---
inference: false
license: llama2
language:
- en
tags:
- information retrieval
- reranker
---

# RankVicuna (No Data Augmentation) Model Card

## Model Details

RankVicuna is a listwise reranker fine-tuned from Vicuna, a chat assistant built by fine-tuning Llama 2 on user-shared conversations collected from ShareGPT.

- **Developed by:** [Castorini](https://github.com/castorini)
- **Model type:** An auto-regressive language model based on the transformer architecture
- **License:** Llama 2 Community License Agreement
- **Finetuned from base model:** [Llama 2](https://arxiv.org/abs/2307.09288)

This specific model is a 7B variant and is trained without data augmentation.

### Model Sources

- **Repository:** https://github.com/castorini/rank_llm
- **Paper:** https://arxiv.org/abs/2309.15088

## Uses

The primary use of RankVicuna is research at the intersection of large language models and retrieval.
The primary intended users of the model are researchers and hobbyists in natural language processing and information retrieval.

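As a rough, unofficial illustration, the snippet below loads the checkpoint with Hugging Face `transformers` and prompts it to reorder a handful of passages. The model id `castorini/rank_vicuna_7b_v1_noda` and the prompt wording are assumptions made for this sketch; the exact listwise prompt format and a full reranking pipeline live in the [rank_llm](https://github.com/castorini/rank_llm) repository.

```python
# Hedged sketch: load RankVicuna with Hugging Face transformers and ask it to
# reorder a few passages. The model id and prompt wording are assumptions; see
# https://github.com/castorini/rank_llm for the exact prompt format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "castorini/rank_vicuna_7b_v1_noda"  # assumed id; check the model page

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

query = "how do transformers handle long documents?"
passages = [
    "[1] Sparse attention patterns reduce the quadratic cost of self-attention.",
    "[2] The history of the steam engine dates back to the 18th century.",
    "[3] Sliding-window attention lets transformers process longer inputs.",
]

# Illustrative listwise prompt: ask the model to output a permutation such as "[3] > [1] > [2]".
prompt = (
    f"I will provide you with {len(passages)} passages, each indicated by a numerical identifier.\n"
    + "\n".join(passages)
    + f"\nRank the passages based on their relevance to the query: {query}.\n"
    "Output the ranking as identifiers, most relevant first, e.g., [2] > [1] > [3]."
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32, do_sample=False)
# Decode only the newly generated tokens (the predicted ordering).
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

The model is expected to emit an ordering string such as `[3] > [1] > [2]`, which downstream code would parse back into a permutation of the candidate passages.
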
## Training Details

RankVicuna is finetuned from `lmsys/vicuna-7b-v1.5` with supervised instruction fine-tuning.

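For orientation only, here is a generic sketch of what supervised instruction fine-tuning from the Vicuna checkpoint can look like with the Hugging Face `Trainer`. The toy dataset, hyperparameters, and prompt/target packing below are placeholders and are not the training recipe used for this model; see the paper and the rank_llm repository for the actual setup.

```python
# Generic SFT sketch, NOT the authors' training script: toy data, placeholder
# hyperparameters, and simple prompt+target packing for a causal LM objective.
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_id = "lmsys/vicuna-7b-v1.5"
tokenizer = AutoTokenizer.from_pretrained(base_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_id)

# Toy example: a single (ranking instruction, target ordering) pair packed into one sequence.
examples = [
    "Rank the passages [1], [2], [3] by relevance to the query 'what is bm25?'.\n"
    "Answer: [2] > [3] > [1]"
]

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=512)

train_dataset = Dataset.from_dict({"text": examples}).map(tokenize, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)  # causal-LM labels

args = TrainingArguments(
    output_dir="rank_vicuna_sft",      # placeholder output path
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=2,
)
Trainer(model=model, args=args, train_dataset=train_dataset, data_collator=collator).train()
```
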
## Evaluation

RankVicuna is currently evaluated on the TREC 2019 and 2020 Deep Learning Tracks (DL19/DL20). See more details in our [paper](https://arxiv.org/pdf/2309.15088.pdf).