---
inference: false
license: llama2
language:
- en
tags:
- information retrieval
- reranker
---

# RankVicuna Model Card

## Model Details

RankVicuna is a listwise document reranker obtained by fine-tuning Vicuna, a chat assistant that was itself trained by fine-tuning Llama 2 on user-shared conversations collected from ShareGPT.

- **Developed by:** [Castorini](https://github.com/castorini)
- **Model type:** An auto-regressive language model based on the transformer architecture
- **License:** Llama 2 Community License Agreement
- **Finetuned from base model:** [Llama 2](https://arxiv.org/abs/2307.09288)

This specific model is a 7B variant and is trained with data augmentation.

### Model Sources

- **Repository:** https://github.com/castorini/rank_llm
- **Paper:** https://arxiv.org/abs/2309.15088

## Uses

The primary use of RankVicuna is research at the intersection of large language models and retrieval.
The primary intended users of the model are researchers and hobbyists in natural language processing and information retrieval.
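
For intuition, the listwise reranking flow can be sketched as follows: the model is prompted with a query and a numbered list of candidate passages, and it responds with a permutation such as `[2] > [1] > [3]`. The prompt wording and helper names below are illustrative assumptions for this card, not the exact `rank_llm` implementation.

```python
# Hypothetical sketch of a RankGPT-style listwise reranking prompt and the
# parsing of the model's permutation output. Function names and exact prompt
# text are illustrative, not the actual rank_llm API.
import re


def build_prompt(query: str, passages: list[str]) -> str:
    """Assemble a listwise reranking prompt from a query and candidate passages."""
    lines = [
        f"I will provide you with {len(passages)} passages, each indicated by a "
        f"numerical identifier []. Rank the passages based on their relevance "
        f"to the search query: {query}."
    ]
    # Number each passage with the identifier the model will refer back to.
    for i, passage in enumerate(passages, start=1):
        lines.append(f"[{i}] {passage}")
    lines.append(f"Search Query: {query}.")
    lines.append(
        "Rank the passages above based on their relevance to the search query. "
        "The output format should be [] > [], e.g., [4] > [2]. "
        "Only respond with the ranking results."
    )
    return "\n".join(lines)


def parse_permutation(output: str, num_passages: int) -> list[int]:
    """Parse a response like '[2] > [1] > [3]' into 0-based passage indices,
    dropping duplicates and appending any identifiers the model omitted."""
    order: list[int] = []
    for match in re.findall(r"\[(\d+)\]", output):
        idx = int(match) - 1
        if 0 <= idx < num_passages and idx not in order:
            order.append(idx)
    # Malformed outputs may skip identifiers; keep the ranking total.
    order.extend(i for i in range(num_passages) if i not in order)
    return order
```

For example, `parse_permutation("[2] > [3] > [1]", 3)` yields `[1, 2, 0]`, i.e., the second passage is ranked first. The duplicate/omission handling reflects the kind of output repair a listwise reranker needs, since a generative model is not guaranteed to emit a valid permutation.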

## Training Details

RankVicuna is fine-tuned from `lmsys/vicuna-7b-v1.5` using supervised instruction fine-tuning.

## Evaluation

RankVicuna is currently evaluated on the TREC 2019 and 2020 Deep Learning Tracks (DL19/DL20). See our [paper](https://arxiv.org/pdf/2309.15088.pdf) for details.