Model Card for answer-finder.yuzu

This model is a question-answering model developed by Sinequa. It produces two lists of logit scores, one for the start token and one for the end token of the answer span.
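The start/end logits can be decoded into an answer span by picking the pair of positions with the highest combined score. A minimal sketch of that decoding step (the model itself is not loaded here; the logits and the `max_answer_len` limit are illustrative assumptions, not part of this model's API):

```python
def best_span(start_logits, end_logits, max_answer_len=30):
    """Return the (start, end) token indices maximizing the summed
    start and end logits, with end >= start and a bounded span length."""
    best = (0, 0)
    best_score = float("-inf")
    for s, s_logit in enumerate(start_logits):
        for e in range(s, min(s + max_answer_len, len(end_logits))):
            score = s_logit + end_logits[e]
            if score > best_score:
                best_score = score
                best = (s, e)
    return best, best_score

# Toy logits for a 5-token passage; the best span covers tokens 1..2.
start = [0.1, 4.0, 0.2, 0.3, 0.1]
end   = [0.2, 0.5, 3.5, 0.4, 0.1]
span, score = best_span(start, end)
print(span)  # (1, 2)
```

Production pipelines typically also mask special tokens and handle the "no answer" case, but the core span selection works as above.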

Model name: answer-finder.yuzu

Supported Languages

The model was trained and tested in the following languages:

  • Japanese

Besides the language above, basic support can be expected for the 104 languages used during pretraining of the base model (see the original repository).

Scores

| Metric                                                   | Value |
|----------------------------------------------------------|-------|
| F1 Score on JSQuAD with Hugging Face evaluation pipeline | 92.1  |
| F1 Score on JSQuAD with Haystack evaluation pipeline     | 91.5  |
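For context, the F1 metric above is the SQuAD-style token-overlap F1 between predicted and reference answers. A minimal illustrative version is sketched below; the actual evaluation scripts differ in details (notably tokenization for Japanese, where whitespace splitting as used here is only an assumption for the example):

```python
from collections import Counter

def token_f1(prediction, reference):
    """SQuAD-style F1: harmonic mean of token-level precision and recall."""
    pred_tokens = prediction.split()
    ref_tokens = reference.split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("the answer span", "answer span"))  # ≈ 0.8
```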

Inference Time

| GPU        | Quantization type | Batch size 1 | Batch size 32 |
|------------|-------------------|--------------|---------------|
| NVIDIA A10 | FP16              | 17 ms        | 27 ms         |
| NVIDIA A10 | FP32              | 4 ms         | 88 ms         |
| NVIDIA T4  | FP16              | 3 ms         | 64 ms         |
| NVIDIA T4  | FP32              | 15 ms        | 374 ms        |
| NVIDIA L4  | FP16              | 3 ms         | 39 ms         |
| NVIDIA L4  | FP32              | 5 ms         | 125 ms        |

Note that the Answer Finder models are only used at query time.

GPU Memory Usage

| Quantization type | Memory   |
|-------------------|----------|
| FP16              | 950 MiB  |
| FP32              | 1350 MiB |

Note that the GPU memory usage above covers only the memory consumed by the model itself on an NVIDIA T4 GPU with a batch size of 32. It does not include the fixed amount of memory consumed by the ONNX Runtime upon initialization, which can be around 0.5 to 1 GiB depending on the GPU used.

Requirements

  • Minimal Sinequa version: 11.10.0
  • Minimal Sinequa version for using FP16 models and GPUs with CUDA compute capability of 8.9+ (like NVIDIA L4): 11.11.0
  • CUDA compute capability: above 5.0 (above 6.0 for FP16 use)

Model Details

Overview

Training Data

  • JSQuAD (see paper)
  • Japanese translation of SQuAD v2 "impossible" query-passage pairs