Note

This is an updated version of KennethTM/MiniLM-L6-danish-reranker. This version is simply trained on more data (the GooAQ dataset machine translated to Danish) and is otherwise identical.

MiniLM-L6-danish-reranker-v2

This is a lightweight (~22M parameters) sentence-transformers cross-encoder model for Danish NLP. It takes two texts as input and outputs a relevance score, so the model can be used for information retrieval: given a query and a set of candidate passages, rank the candidates by their relevance to the query.

The maximum sequence length is 512 tokens for the query and passage combined.

The model was not pre-trained from scratch but adapted from the English version of cross-encoder/ms-marco-MiniLM-L-6-v2 with a Danish tokenizer.

Trained on ELI5, SQuAD, and GooAQ data machine translated from English to Danish.

Usage with Transformers

from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model = AutoModelForSequenceClassification.from_pretrained('KennethTM/MiniLM-L6-danish-reranker-v2')
tokenizer = AutoTokenizer.from_pretrained('KennethTM/MiniLM-L6-danish-reranker-v2')

# Two examples: the first pair is relevant (positive), the second is irrelevant (negative)
queries = ['Kører der cykler på vejen?', 
           'Kører der cykler på vejen?']
passages = ['I Danmark er cykler et almindeligt transportmiddel, og de har lige så stor ret til at bruge vejene som bilister. Cyklister skal dog følge færdselsreglerne og vise hensyn til andre trafikanter.', 
            'Solen skinner, og himlen er blå. Der er ingen vind, og temperaturen er perfekt. Det er den perfekte dag til at tage en tur på landet og nyde den friske luft.']

features = tokenizer(queries, passages, padding=True, truncation=True, return_tensors="pt")

model.eval()
with torch.no_grad():
    scores = model(**features).logits

# The scores are raw logits; apply the sigmoid function to map them to probabilities
# Higher values indicate higher relevance
print(scores)
print(torch.sigmoid(scores))
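Because the sigmoid function is monotonically increasing, applying it never changes the relative order of the scores, so candidates can be thresholded or ranked directly on the raw logits. A small sketch with hypothetical logit values (not real model output):

```python
import torch

# Hypothetical logits for four query-passage pairs (placeholders, not real model output)
logits = torch.tensor([2.3, -4.1, 0.7, -0.2])

# Sigmoid preserves ordering, so ranking by logits equals ranking by probabilities
order_logits = torch.argsort(logits, descending=True)
order_probs = torch.argsort(torch.sigmoid(logits), descending=True)
assert torch.equal(order_logits, order_probs)

print(order_logits)  # indices of pairs from most to least relevant
```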

Usage with SentenceTransformers

Usage is easier with the sentence-transformers package installed. You can then use the model like this:

from sentence_transformers import CrossEncoder
import numpy as np

def sigmoid_numpy(x):
    return 1 / (1 + np.exp(-x))

# Provide examples as a list of query-passage tuples
pairs = [('Kører der cykler på vejen?', 
          'I Danmark er cykler et almindeligt transportmiddel, og de har lige så stor ret til at bruge vejene som bilister. Cyklister skal dog følge færdselsreglerne og vise hensyn til andre trafikanter.'),
         ('Kører der cykler på vejen?', 
          'Solen skinner, og himlen er blå. Der er ingen vind, og temperaturen er perfekt. Det er den perfekte dag til at tage en tur på landet og nyde den friske luft.')]

model = CrossEncoder('KennethTM/MiniLM-L6-danish-reranker-v2', max_length=512)
scores = model.predict(pairs)

print(scores)
print(sigmoid_numpy(scores))
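For retrieval, the scores returned by model.predict can be used to rerank a list of candidate passages for a single query. A minimal sketch of that step, using placeholder scores in place of real model output:

```python
import numpy as np

# Candidate passages for a single query
candidates = ["passage A", "passage B", "passage C"]

# Placeholder scores; in practice these come from
# model.predict([(query, p) for p in candidates])
scores = np.array([-1.2, 3.4, 0.5])

# Sort candidates by descending relevance and keep the top k
top_k = 2
ranked = [candidates[i] for i in np.argsort(-scores)[:top_k]]
print(ranked)  # most relevant candidates first
```

Recent versions of sentence-transformers also expose a rank method on CrossEncoder that performs this query-against-candidates scoring and sorting in one call.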