Model Card for passage-ranker.chocolate

This model is a passage ranker developed by Sinequa. It produces a relevance score given a query-passage pair and is used to order search results.
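As a rough sketch of how such a ranker can be used outside of Sinequa, the snippet below scores query-passage pairs with Hugging Face `transformers` and sorts passages by score. The model id `sinequa/passage-ranker.chocolate` and the sequence-classification head are assumptions based on this card, not a documented API.

```python
# Hypothetical usage sketch: score query-passage pairs with a cross-encoder
# ranker and order passages by relevance. The model id below is an assumption.
from typing import List, Tuple


def score_pairs(query: str, passages: List[str]) -> List[float]:
    """Score each (query, passage) pair with the ranker (downloads the model)."""
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    name = "sinequa/passage-ranker.chocolate"  # assumed Hugging Face model id
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name)
    inputs = tokenizer(
        [query] * len(passages), passages,
        padding=True, truncation=True, return_tensors="pt",
    )
    with torch.no_grad():
        # One relevance logit per pair; higher means more relevant.
        return model(**inputs).logits.squeeze(-1).tolist()


def rank_by_score(passages: List[str], scores: List[float]) -> List[Tuple[str, float]]:
    # Order passages by descending relevance score, as a search engine would.
    return sorted(zip(passages, scores), key=lambda pair: pair[1], reverse=True)
```

For example, `rank_by_score(["a", "b"], [0.1, 0.9])` puts passage `"b"` first.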

Model name: passage-ranker.chocolate

Supported Languages

The model was trained and tested in the following languages:

  • English

Scores

| Metric              | Value |
|---------------------|-------|
| Relevance (NDCG@10) | 0.484 |

Note that the relevance score is computed as an average over 14 retrieval datasets (see details below).

Inference Times

| GPU        | Quantization type | Batch size 1 | Batch size 32 |
|------------|-------------------|--------------|---------------|
| NVIDIA A10 | FP16              | 1 ms         | 5 ms          |
| NVIDIA A10 | FP32              | 2 ms         | 22 ms         |
| NVIDIA T4  | FP16              | 1 ms         | 13 ms         |
| NVIDIA T4  | FP32              | 3 ms         | 66 ms         |
| NVIDIA L4  | FP16              | 2 ms         | 6 ms          |
| NVIDIA L4  | FP32              | 3 ms         | 30 ms         |

GPU Memory Usage

| Quantization type | Memory  |
|-------------------|---------|
| FP16              | 300 MiB |
| FP32              | 550 MiB |

Note that GPU memory usage only covers the memory consumed by the model itself on an NVIDIA T4 GPU with a batch size of 32. It does not include the fixed amount of memory consumed by the ONNX Runtime upon initialization, which can be around 0.5 to 1 GiB depending on the GPU used.

Requirements

  • Minimal Sinequa version: 11.10.0
  • Minimal Sinequa version for using FP16 models and GPUs with CUDA compute capability of 8.9+ (like NVIDIA L4): 11.11.0
  • CUDA compute capability: above 5.0 (above 6.0 for FP16 use)
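The compute-capability requirements above can be checked before deployment. The helper below mirrors the thresholds listed on this card; detecting the capability via `torch.cuda` is an assumption about the environment, not part of Sinequa's tooling.

```python
# Pre-flight check for the CUDA compute capability requirements listed above.
# Thresholds follow this card: above 5.0 in general, above 6.0 for FP16 use.
def meets_requirements(capability: tuple, use_fp16: bool) -> bool:
    minimum = (6, 0) if use_fp16 else (5, 0)
    return capability > minimum


def check_current_gpu() -> None:
    # Assumes PyTorch is available; prints whether the local GPU qualifies.
    import torch

    if not torch.cuda.is_available():
        print("No CUDA device detected.")
        return
    cap = torch.cuda.get_device_capability()
    print(f"Compute capability {cap[0]}.{cap[1]}:",
          "FP32 ok" if meets_requirements(cap, use_fp16=False) else "FP32 unsupported",
          "/",
          "FP16 ok" if meets_requirements(cap, use_fp16=True) else "FP16 unsupported")
```

For instance, an NVIDIA T4 (capability 7.5) passes both checks, while a capability 6.0 device does not qualify for FP16 under a strict reading of "above 6.0".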

Model Details

Overview

Training Data

Evaluation Metrics

To determine the relevance score, we averaged the results obtained by evaluating on the datasets of the BEIR benchmark. Note that all of these datasets are in English.

| Dataset           | NDCG@10 |
|-------------------|---------|
| Average           | 0.486   |
| Arguana           | 0.554   |
| CLIMATE-FEVER     | 0.209   |
| DBPedia Entity    | 0.367   |
| FEVER             | 0.744   |
| FiQA-2018         | 0.339   |
| HotpotQA          | 0.685   |
| MS MARCO          | 0.412   |
| NFCorpus          | 0.352   |
| NQ                | 0.454   |
| Quora             | 0.818   |
| SCIDOCS           | 0.158   |
| SciFact           | 0.658   |
| TREC-COVID        | 0.674   |
| Webis-Touche-2020 | 0.345   |
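For reference, NDCG@10, the metric reported throughout this card, can be sketched as follows. This is a minimal illustration of the standard formula, not the exact evaluation harness used for these scores.

```python
# Minimal sketch of NDCG@k: graded relevance of each ranked result is
# discounted by log2 of its rank position, then normalized by the ideal DCG.
import math
from typing import Sequence


def dcg_at_k(relevances: Sequence[float], k: int = 10) -> float:
    # DCG@k over the first k ranked results (rank i gets discount log2(i + 2)).
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))


def ndcg_at_k(relevances: Sequence[float], k: int = 10) -> float:
    # Normalize by the DCG of the ideal (descending-relevance) ordering.
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0
```

A perfect ranking scores 1.0; placing the only relevant document at rank 3 instead of rank 1, e.g. `ndcg_at_k([0, 0, 1])`, scores 0.5.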