Open-Source AI Cookbook documentation

Semantic reranking with Elasticsearch and Hugging Face



Authored by: Liam Thompson

In this notebook we will learn how to implement semantic reranking in Elasticsearch by uploading a model from Hugging Face into an Elasticsearch cluster. We’ll use the retriever abstraction, a simpler Elasticsearch syntax for crafting queries and combining different search operations.

You will:

  • Choose a cross-encoder model from Hugging Face to perform semantic reranking
  • Upload the model to your Elasticsearch deployment using Eland, a Python client for machine learning with Elasticsearch
  • Create an inference endpoint to manage your rerank task
  • Query your data using the text_similarity_rerank retriever

🧰 Requirements

For this example, you will need:

  • An Elastic deployment on version 8.15.0 or above (for non-serverless deployments)
  • Your deployment’s Cloud ID and an API key

Install and import packages

ℹ️ The eland installation will take a couple of minutes.

!pip install -qU elasticsearch
!pip install -qU "eland[pytorch]"
from elasticsearch import Elasticsearch, helpers

Initialize Elasticsearch Python client

First you need to connect to your Elasticsearch instance.

>>> from getpass import getpass

>>> # https://www.elastic.co/search-labs/tutorials/install-elasticsearch/elastic-cloud#finding-your-cloud-id
>>> ELASTIC_CLOUD_ID = getpass("Elastic Cloud ID: ")

>>> # https://www.elastic.co/search-labs/tutorials/install-elasticsearch/elastic-cloud#creating-an-api-key
>>> ELASTIC_API_KEY = getpass("Elastic Api Key: ")

>>> # Create the client instance
>>> client = Elasticsearch(
...     # For local development
...     # hosts=["http://localhost:9200"]
...     cloud_id=ELASTIC_CLOUD_ID,
...     api_key=ELASTIC_API_KEY,
... )
Elastic Cloud ID: ··········
Elastic Api Key: ··········

Test connection

Confirm that the Python client has connected to your Elasticsearch instance with this test.

print(client.info())

Index some test data

This example uses a small dataset of movies.

>>> from urllib.request import urlopen
>>> import json
>>> import time

>>> url = "https://huggingface.co/datasets/leemthompo/small-movies/raw/main/small-movies.json"
>>> response = urlopen(url)

>>> # Load the response data into a JSON object
>>> data_json = json.loads(response.read())

>>> # Prepare the documents to be indexed
>>> documents = []
>>> for doc in data_json:
...     documents.append(
...         {
...             "_index": "movies",
...             "_source": doc,
...         }
...     )

>>> # Use helpers.bulk to index
>>> helpers.bulk(client, documents)

>>> print("Done indexing documents into `movies` index!")
>>> time.sleep(3)
Done indexing documents into `movies` index!
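Before indexing a larger dataset, it can help to sanity-check that each prepared action carries the fields the queries below rely on (“title” and “plot”). A minimal sketch, using an inline sample in place of the downloaded dataset:

```python
# Sketch: verify each bulk action has the fields used by the later queries
# ("title" and "plot") before handing the actions to helpers.bulk.
# `sample_docs` is an inline stand-in for the downloaded movies dataset.
sample_docs = [
    {"title": "The Godfather", "plot": "An organized crime dynasty's aging patriarch..."},
    {"title": "Se7en", "plot": "Two detectives hunt a serial killer..."},
]

required_fields = {"title", "plot"}
actions = [{"_index": "movies", "_source": doc} for doc in sample_docs]

# Collect any actions whose _source is missing a required field.
missing = [a for a in actions if not required_fields <= a["_source"].keys()]
print(f"{len(actions)} actions prepared; {len(missing)} missing required fields")
```

Documents missing either field would still index, but the `plot` searches and the reranker’s `field: "plot"` lookup later in this notebook depend on those fields being present.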

Upload Hugging Face model using Eland

Now we’ll use Eland’s eland_import_hub_model command to upload the model to Elasticsearch. For this example we’ve chosen the cross-encoder/ms-marco-MiniLM-L-6-v2 text similarity model.

>>> !eland_import_hub_model \
...   --cloud-id $ELASTIC_CLOUD_ID \
...   --es-api-key $ELASTIC_API_KEY \
...   --hub-model-id cross-encoder/ms-marco-MiniLM-L-6-v2 \
...   --task-type text_similarity \
...   --clear-previous \
...   --start
2024-08-13 17:04:12,386 INFO : Establishing connection to Elasticsearch
2024-08-13 17:04:12,567 INFO : Connected to serverless cluster 'bd8c004c050e4654ad32fb86ab159889'
2024-08-13 17:04:12,568 INFO : Loading HuggingFace transformer tokenizer and model 'cross-encoder/ms-marco-MiniLM-L-6-v2'
/usr/local/lib/python3.10/dist-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
tokenizer_config.json: 100% 316/316 [00:00<00:00, 1.81MB/s]
config.json: 100% 794/794 [00:00<00:00, 4.09MB/s]
vocab.txt: 100% 232k/232k [00:00<00:00, 2.37MB/s]
special_tokens_map.json: 100% 112/112 [00:00<00:00, 549kB/s]
pytorch_model.bin: 100% 90.9M/90.9M [00:00<00:00, 135MB/s]
STAGE:2024-08-13 17:04:15 1454:1454 ActivityProfilerController.cpp:312] Completed Stage: Warm Up
STAGE:2024-08-13 17:04:15 1454:1454 ActivityProfilerController.cpp:318] Completed Stage: Collection
STAGE:2024-08-13 17:04:15 1454:1454 ActivityProfilerController.cpp:322] Completed Stage: Post Processing
2024-08-13 17:04:18,789 INFO : Creating model with id 'cross-encoder__ms-marco-minilm-l-6-v2'
2024-08-13 17:04:21,123 INFO : Uploading model definition
100% 87/87 [00:55<00:00,  1.57 parts/s]
2024-08-13 17:05:16,416 INFO : Uploading model vocabulary
2024-08-13 17:05:16,987 INFO : Starting model deployment
2024-08-13 17:05:18,238 INFO : Model successfully imported with id 'cross-encoder__ms-marco-minilm-l-6-v2'

Create inference endpoint

Next we’ll create an inference endpoint for the rerank task. The endpoint deploys and manages our model and, if needed, spins up the required ML resources behind the scenes.

client.inference.put(
    task_type="rerank",
    inference_id="my-msmarco-minilm-model",
    inference_config={
        "service": "elasticsearch",
        "service_settings": {
            "model_id": "cross-encoder__ms-marco-minilm-l-6-v2",
            "num_allocations": 1,
            "num_threads": 1,
        },
    },
)

Run the following command to confirm your inference endpoint is deployed.

client.inference.get()

⚠️ When you deploy your model, you might need to sync your ML saved objects in the Kibana (or Serverless) UI. Go to Trained Models and select Synchronize saved objects.

Lexical queries

First, let’s use a standard retriever to test out some lexical (full-text) searches, and then we’ll compare the improvements when we layer in semantic reranking.

Lexical match with query_string query

Let’s say we vaguely remember that there is a famous movie about a killer who eats his victims. For the sake of argument, pretend we’ve momentarily forgotten the word “cannibal”.

Let’s perform a query_string query to find the phrase “flesh-eating bad guy” in the plot fields of our Elasticsearch documents.

>>> resp = client.search(
...     index="movies",
...     retriever={
...         "standard": {
...             "query": {
...                 "query_string": {
...                     "query": "flesh-eating bad guy",
...                     "default_field": "plot",
...                 }
...             }
...         }
...     },
... )

>>> if resp["hits"]["hits"]:
...     for hit in resp["hits"]["hits"]:
...         title = hit["_source"]["title"]
...         plot = hit["_source"]["plot"]
...         print(f"Title: {title}\nPlot: {plot}\n")
... else:
...     print("No search results found")
No search results found

No results! Unfortunately, we don’t have any near-exact matches for “flesh-eating bad guy”. Because we don’t know the exact phrasing used in the Elasticsearch data, we’ll need to cast our search net wider.

Simple multi_match query

This lexical query performs a standard keyword search for the term “crime” within the “plot” and “genre” fields of our Elasticsearch documents.

>>> resp = client.search(
...     index="movies",
...     retriever={"standard": {"query": {"multi_match": {"query": "crime", "fields": ["plot", "genre"]}}}},
... )

>>> for hit in resp["hits"]["hits"]:
...     title = hit["_source"]["title"]
...     plot = hit["_source"]["plot"]
...     print(f"Title: {title}\nPlot: {plot}\n")
Title: The Godfather
Plot: An organized crime dynasty's aging patriarch transfers control of his clandestine empire to his reluctant son.

Title: Goodfellas
Plot: The story of Henry Hill and his life in the mob, covering his relationship with his wife Karen Hill and his mob partners Jimmy Conway and Tommy DeVito in the Italian-American crime syndicate.

Title: The Silence of the Lambs
Plot: A young F.B.I. cadet must receive the help of an incarcerated and manipulative cannibal killer to help catch another serial killer, a madman who skins his victims.

Title: Pulp Fiction
Plot: The lives of two mob hitmen, a boxer, a gangster and his wife, and a pair of diner bandits intertwine in four tales of violence and redemption.

Title: Se7en
Plot: Two detectives, a rookie and a veteran, hunt a serial killer who uses the seven deadly sins as his motives.

Title: The Departed
Plot: An undercover cop and a mole in the police attempt to identify each other while infiltrating an Irish gang in South Boston.

Title: The Usual Suspects
Plot: A sole survivor tells of the twisty events leading up to a horrific gun battle on a boat, which began when five criminals met at a seemingly random police lineup.

Title: The Dark Knight
Plot: When the menace known as the Joker wreaks havoc and chaos on the people of Gotham, Batman must accept one of the greatest psychological and physical tests of his ability to fight injustice.

That’s better! At least we’ve got some results now. We broadened our search criteria to increase the chances of finding relevant results.

But these results aren’t very precise in the context of our original query “flesh-eating bad guy”. We can see that “The Silence of the Lambs” is returned in the middle of the results set with this generic match query. Let’s see if we can use our semantic reranking model to get closer to the searcher’s original intent.
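Before running this in Elasticsearch, here is what the rerank step does conceptually: a cross-encoder scores each (query, document) pair, and the first-stage hits are reordered by that score. The sketch below is a toy illustration with a made-up word-overlap scorer standing in for the model; the real cross-encoder computes learned similarity scores server-side.

```python
# Toy sketch of semantic reranking: score each (query, doc) pair, sort by score.
# `toy_score` is a made-up heuristic, NOT the cross-encoder model.


def toy_score(query: str, doc: str) -> float:
    """Fraction of query words that appear in the document (stand-in scorer)."""
    query_words = set(query.lower().split())
    doc_words = set(doc.lower().split())
    return len(query_words & doc_words) / len(query_words)


def rerank(query: str, hits: list, field: str = "plot") -> list:
    """Reorder first-stage hits by pairwise (query, field) similarity."""
    return sorted(hits, key=lambda hit: toy_score(query, hit[field]), reverse=True)


hits = [
    {"title": "The Godfather", "plot": "an organized crime dynasty"},
    {"title": "The Silence of the Lambs", "plot": "a manipulative cannibal killer who skins his victims"},
]
reranked = rerank("cannibal killer", hits)
print([hit["title"] for hit in reranked])
```

With the toy scorer, the cannibal-themed plot outranks the generic crime plot, which is the same reordering effect we want the deployed model to produce at scale.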

Semantic reranker

In the following retriever syntax, we wrap our standard query retriever in a text_similarity_reranker. This allows us to leverage the NLP model we deployed to Elasticsearch to rerank the results based on the phrase “flesh-eating bad guy”.

>>> resp = client.search(
...     index="movies",
...     retriever={
...         "text_similarity_reranker": {
...             "retriever": {"standard": {"query": {"multi_match": {"query": "crime", "fields": ["plot", "genre"]}}}},
...             "field": "plot",
...             "inference_id": "my-msmarco-minilm-model",
...             "inference_text": "flesh-eating bad guy",
...         }
...     },
... )

>>> for hit in resp["hits"]["hits"]:
...     title = hit["_source"]["title"]
...     plot = hit["_source"]["plot"]
...     print(f"Title: {title}\nPlot: {plot}\n")
Title: The Silence of the Lambs
Plot: A young F.B.I. cadet must receive the help of an incarcerated and manipulative cannibal killer to help catch another serial killer, a madman who skins his victims.

Title: Pulp Fiction
Plot: The lives of two mob hitmen, a boxer, a gangster and his wife, and a pair of diner bandits intertwine in four tales of violence and redemption.

Title: Se7en
Plot: Two detectives, a rookie and a veteran, hunt a serial killer who uses the seven deadly sins as his motives.

Title: Goodfellas
Plot: The story of Henry Hill and his life in the mob, covering his relationship with his wife Karen Hill and his mob partners Jimmy Conway and Tommy DeVito in the Italian-American crime syndicate.

Title: The Dark Knight
Plot: When the menace known as the Joker wreaks havoc and chaos on the people of Gotham, Batman must accept one of the greatest psychological and physical tests of his ability to fight injustice.

Title: The Godfather
Plot: An organized crime dynasty's aging patriarch transfers control of his clandestine empire to his reluctant son.

Title: The Departed
Plot: An undercover cop and a mole in the police attempt to identify each other while infiltrating an Irish gang in South Boston.

Title: The Usual Suspects
Plot: A sole survivor tells of the twisty events leading up to a horrific gun battle on a boat, which began when five criminals met at a seemingly random police lineup.

Success! “The Silence of the Lambs” is our top result. Semantic reranking helped us surface the most relevant result by interpreting the natural-language query, overcoming the limitations of lexical search, which relies on exact matching.

Semantic reranking enables semantic search in a few steps, without the need to generate and store embeddings. Being able to use open-source models hosted on Hugging Face natively in your Elasticsearch cluster is great for prototyping, testing, and building search experiences.
