DrRos committed on
Commit c260cdf · verified · 1 Parent(s): 4d32282

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +111 -0
README.md ADDED
@@ -0,0 +1,111 @@
+ ---
+ license: mit
+ language:
+ - en
+ - zh
+ tags:
+ - mteb
+ - llama-cpp
+ - gguf-my-repo
+ pipeline_tag: feature-extraction
+ base_model: BAAI/bge-reranker-large
+ model-index:
+ - name: bge-reranker-base
+   results:
+   - task:
+       type: Reranking
+     dataset:
+       name: MTEB CMedQAv1
+       type: C-MTEB/CMedQAv1-reranking
+       config: default
+       split: test
+       revision: None
+     metrics:
+     - type: map
+       value: 81.27206722525007
+     - type: mrr
+       value: 84.14238095238095
+   - task:
+       type: Reranking
+     dataset:
+       name: MTEB CMedQAv2
+       type: C-MTEB/CMedQAv2-reranking
+       config: default
+       split: test
+       revision: None
+     metrics:
+     - type: map
+       value: 84.10369934291236
+     - type: mrr
+       value: 86.79376984126984
+   - task:
+       type: Reranking
+     dataset:
+       name: MTEB MMarcoReranking
+       type: C-MTEB/Mmarco-reranking
+       config: default
+       split: dev
+       revision: None
+     metrics:
+     - type: map
+       value: 35.4600511272538
+     - type: mrr
+       value: 34.60238095238095
+   - task:
+       type: Reranking
+     dataset:
+       name: MTEB T2Reranking
+       type: C-MTEB/T2Reranking
+       config: default
+       split: dev
+       revision: None
+     metrics:
+     - type: map
+       value: 67.27728847727172
+     - type: mrr
+       value: 77.1315192743764
+ ---
+
+ # DrRos/bge-reranker-large-Q4_K_M-GGUF
+ This model was converted to GGUF format from [`BAAI/bge-reranker-large`](https://huggingface.co/BAAI/bge-reranker-large) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
+ Refer to the [original model card](https://huggingface.co/BAAI/bge-reranker-large) for more details on the model.
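+
+ If you want to reproduce the conversion yourself rather than use the space, the steps below are a rough local equivalent. Treat it as a sketch only: it assumes a llama.cpp checkout that has already been built (see the build steps further down in this card), its Python requirements installed, and a local copy of the original `BAAI/bge-reranker-large` weights; the conversion script name can differ between llama.cpp versions.
+ ```bash
+ # Sketch of a local conversion (assumes llama.cpp is cloned, built, and is the working directory).
+ pip install -r requirements.txt                      # deps for the conversion script
+ python convert_hf_to_gguf.py /path/to/bge-reranker-large \
+     --outfile bge-reranker-large-f16.gguf            # HF checkpoint -> f16 GGUF
+ ./llama-quantize bge-reranker-large-f16.gguf \
+     bge-reranker-large-q4_k_m.gguf Q4_K_M            # quantize to Q4_K_M
+ ```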
+
+ ## Use with llama.cpp
+ Install llama.cpp through brew (works on macOS and Linux):
+
+ ```bash
+ brew install llama.cpp
+ ```
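+
+ Optionally, run a quick sanity check that the binaries are on your `PATH` (in recent llama.cpp releases `--version` prints the build info; treat this as an assumption and fall back to `--help` if your build does not recognize it):
+ ```bash
+ # Sanity check: confirm the Homebrew install exposed the binaries.
+ llama-cli --version
+ llama-server --version
+ ```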
+ Invoke the llama.cpp server or the CLI.
+
+ ### CLI:
+ ```bash
+ llama-cli --hf-repo DrRos/bge-reranker-large-Q4_K_M-GGUF --hf-file bge-reranker-large-q4_k_m.gguf -p "The meaning to life and the universe is"
+ ```
+
+ ### Server:
+ ```bash
+ llama-server --hf-repo DrRos/bge-reranker-large-Q4_K_M-GGUF --hf-file bge-reranker-large-q4_k_m.gguf -c 2048
+ ```
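+
+ Note that `bge-reranker-large` is a cross-encoder reranker rather than a text-generation model, so the prompt-completion commands above are mainly smoke tests. A more natural way to query it is the server's reranking endpoint. The call below is a sketch: it assumes a llama.cpp build recent enough to ship reranking support (the `--reranking` flag and the `/v1/rerank` route); check `llama-server --help` for your version before relying on the exact names.
+ ```bash
+ # Sketch: start the server with reranking enabled, then score documents against a query.
+ llama-server --hf-repo DrRos/bge-reranker-large-Q4_K_M-GGUF --hf-file bge-reranker-large-q4_k_m.gguf \
+   --reranking -c 2048 &
+
+ curl http://localhost:8080/v1/rerank \
+   -H "Content-Type: application/json" \
+   -d '{
+         "query": "what is panda?",
+         "documents": [
+           "The giant panda is a bear species endemic to China.",
+           "Paris is the capital of France."
+         ]
+       }'
+ # The response lists each document index with a relevance_score (higher = more relevant).
+ ```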
+
+ Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
+
+ Step 1: Clone llama.cpp from GitHub.
+ ```bash
+ git clone https://github.com/ggerganov/llama.cpp
+ ```
+
+ Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
+ ```bash
+ cd llama.cpp && LLAMA_CURL=1 make
+ ```
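+
+ For instance, a CUDA-enabled build on a Linux machine with an NVIDIA GPU could look like the line below (build flag names have changed across llama.cpp releases, so check the repo's build docs for your version):
+ ```bash
+ # Sketch: build with CURL download support and CUDA offload enabled.
+ cd llama.cpp && LLAMA_CURL=1 LLAMA_CUDA=1 make
+ ```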
+
+ Step 3: Run inference through the main binary.
+ ```bash
+ ./llama-cli --hf-repo DrRos/bge-reranker-large-Q4_K_M-GGUF --hf-file bge-reranker-large-q4_k_m.gguf -p "The meaning to life and the universe is"
+ ```
+ or
+ ```bash
+ ./llama-server --hf-repo DrRos/bge-reranker-large-Q4_K_M-GGUF --hf-file bge-reranker-large-q4_k_m.gguf -c 2048
+ ```
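+
+ If you would rather manage the download yourself instead of using the `--hf-repo`/`--hf-file` convenience flags, the GGUF file can also be fetched explicitly and passed to the binaries with `-m` (a sketch; requires the `huggingface_hub` package for the `huggingface-cli` tool):
+ ```bash
+ # Sketch: download the quantized GGUF explicitly, then point llama-cli at the local file.
+ pip install -U huggingface_hub
+ huggingface-cli download DrRos/bge-reranker-large-Q4_K_M-GGUF bge-reranker-large-q4_k_m.gguf --local-dir .
+ ./llama-cli -m bge-reranker-large-q4_k_m.gguf -p "The meaning to life and the universe is"
+ ```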