Doesn't work?

#7
by endolith - opened

It just says "Preparing Space" endlessly? Is there a static version somewhere?

It has been stuck like this since at least yesterday, and it's still not up as of now. (2023-05-02)

Massive Text Embedding Benchmark org

Sorry! A simple restart fixed it, no idea what the issue was.

For future reference, you can run the leaderboard locally via:

```
git clone https://huggingface.co/spaces/mteb/leaderboard
pip install gradio huggingface-hub pandas datasets  # install dependencies first
python leaderboard/app.py
```

How do you add the results from your model to be displayed?

Massive Text Embedding Benchmark org

> How do you add the results from your model to be displayed?

https://github.com/embeddings-benchmark/mteb/blob/main/docs/adding_a_model.md
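
For context, the guide boils down to producing result files with the `mteb` package and then adding them to your model's metadata. A minimal sketch of the evaluation step (the model path and task selection below are placeholders):

```python
# Hedged sketch of the evaluation step from adding_a_model.md: run the
# mteb package against your model; it writes one result JSON per task
# into the output folder.
from mteb import MTEB
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("path/to/your/model")  # placeholder model path
evaluation = MTEB(task_langs=["en"])  # or pass an explicit task list
evaluation.run(model, output_folder="results/your-model-name")
```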

> How do you add the results from your model to be displayed?
>
> https://github.com/embeddings-benchmark/mteb/blob/main/docs/adding_a_model.md

I am running the leaderboard locally, and following the steps in the above link doesn't work.

Massive Text Embedding Benchmark org

> How do you add the results from your model to be displayed?
>
> https://github.com/embeddings-benchmark/mteb/blob/main/docs/adding_a_model.md
>
> I am running the leaderboard locally, and following the steps in the above link doesn't work.

That's odd; it should work locally, too. Can you share the model where you added the metadata? Maybe there is a mistake in it.
Otherwise, there is the option of adding the results via a PR here: https://huggingface.co/datasets/mteb/results

The model is also local, not on Hugging Face. Is there no way to see your results locally? I just want to see the average, as it's unclear to me how it's calculated; I see different kinds of metrics.

Massive Text Embedding Benchmark org

If you just want to compute the average for MTEB, it is just a regular average across the 56 datasets; you can e.g. use this script: https://github.com/embeddings-benchmark/mteb/pull/858/files
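
As a minimal illustration of what "regular average" means here (the dataset names and scores below are made up):

```python
# Plain unweighted mean over one main score per dataset; for the full
# benchmark this dict would hold all 56 datasets. Values are illustrative.
scores = {
    "Banking77Classification": 0.80,
    "STSBenchmark": 0.85,
    "SciFact": 0.65,
}
average = sum(scores.values()) / len(scores)
print(f"MTEB average over {len(scores)} datasets: {average:.4f}")
```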

To run the leaderboard locally with your own results, you need to

  • Clone https://huggingface.co/datasets/mteb/results
  • Put your results in there and update the paths.json file as explained in the results.py file (see the sketch after this list)
  • Edit the app.py code of the leaderboard to point to your local clone of the results repo instead
  • Add your model configs in the leaderboard yaml files
  • Run the leaderboard & your model should show up
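
Here is a hedged sketch of the paths.json step, assuming the file maps each model name to the list of its result files; the authoritative schema is documented in results.py in the results repo, so treat this layout as illustrative:

```python
# Hypothetical helper: rebuild paths.json by scanning a local clone of
# https://huggingface.co/datasets/mteb/results. The assumed schema
# (model name -> list of result-file paths) is illustrative; check
# results.py in the repo for the real format.
import json
from pathlib import Path

results_root = Path("results")  # local clone of the results repo
paths = {
    model_dir.name: sorted(str(f) for f in model_dir.glob("*.json"))
    for model_dir in sorted(results_root.iterdir())
    if model_dir.is_dir()
}
(results_root / "paths.json").write_text(json.dumps(paths, indent=2))
```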
Massive Text Embedding Benchmark org

@Muennighoff it seems like we might want to create a CLI for computing averages across benchmarks - @daniwes if this is something you would be interested in, feel free to open an issue on GitHub
