Models not being evaluated?
I submitted six models (I think) to be evaluated, and they were never evaluated despite showing as being in the queue. They said "evaluating" for perhaps 10 days, and then I could not find them in the final evaluated list. What happened? Did you drop a batch of models?
I have now resubmitted 10 models. I have never submitted so many before. Is that too many? Was the limit 5 per week previously? The board seems overloaded with submissions from specific people, leaving it impossible to get a benchmark result.
Do you only benchmark paid models or enterprise accounts now?
I'm lost, please help!
Look here to see what happened to your evals:
https://huggingface.co/datasets/open-llm-leaderboard/requests/tree/main/LeroyDyer
For results: https://huggingface.co/datasets/open-llm-leaderboard/results/tree/main/LeroyDyer
It can take some time for results to appear on the leaderboard. In the meantime, you can use https://huggingface.co/spaces/open-llm-leaderboard/comparator right away once the results are ready.
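If you want to check your submissions programmatically instead of browsing the dataset pages, here is a minimal sketch using only the standard library. It assumes the Hub's public `tree` REST API for listing a dataset folder and that each request file in the requests dataset carries a `"status"` field (e.g. PENDING, RUNNING, FINISHED, FAILED); treat both as assumptions to verify against your own files.

```python
import json
import urllib.request
from collections import Counter

# Endpoints for the leaderboard's public requests dataset (assumed layout:
# one folder per submitting user, one JSON file per submitted model).
REQUESTS_API = "https://huggingface.co/api/datasets/open-llm-leaderboard/requests/tree/main"
RESOLVE_BASE = "https://huggingface.co/datasets/open-llm-leaderboard/requests/resolve/main"

def count_statuses(statuses):
    """Tally status strings, e.g. {'FINISHED': 4, 'PENDING': 2}."""
    return dict(Counter(statuses))

def fetch_statuses(user):
    """Read the status field of every request file under a user's folder."""
    with urllib.request.urlopen(f"{REQUESTS_API}/{user}") as resp:
        entries = json.load(resp)
    statuses = []
    for entry in entries:
        if not entry["path"].endswith(".json"):
            continue  # skip sub-folders and non-JSON files
        with urllib.request.urlopen(f"{RESOLVE_BASE}/{entry['path']}") as resp:
            # Each request file is assumed to contain a "status" field
            statuses.append(json.load(resp).get("status", "UNKNOWN"))
    return statuses

# Example usage (requires network access):
# print(count_statuses(fetch_statuses("LeroyDyer")))
```

This at least tells you quickly whether your models are still pending, failed, or simply finished but not yet shown on the leaderboard page.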
Thanks, friend, for those links. But the models I had submitted did not appear on the list. Currently the models I submitted show as evaluating on the submission page, as I resubmitted them, plus some more, to see whether the board had crashed. It accepted them, so they obviously did not get evaluated last time, and there was a massive rush on the board. So I hope these models will indeed be evaluated, so I can see whether this new rewards-style training works. I now understand how to make rewards for the target formats. They might still score quite low on benchmarks, but they are truly great performers.