I am really enjoying this version of Cinder. More information coming. The training data is similar to openhermes2.5, with added math, STEM, and reasoning data drawn mostly from OpenOrca, plus Cinder character-specific data: a mix of RAG-generated Q&A on world knowledge and STEM topics, and Cinder character data. I supplemented the Cinder character data with an abbreviated Samantha dataset edited for Cinder, with many of the negative responses removed.

Model Overview

Cinder is an AI chatbot tailored for engaging users in scientific and educational conversations, offering companionship, and sparking imaginative exploration.
Chat example from LM Studio:
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 58.86 |
| AI2 Reasoning Challenge (25-Shot) | 58.28 |
| HellaSwag (10-Shot) | 74.04 |
| MMLU (5-Shot) | 54.46 |
| TruthfulQA (0-shot) | 44.50 |
| Winogrande (5-shot) | 74.66 |
| GSM8k (5-shot) | 47.23 |
Open LLM Leaderboard 2 Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 10.86 |
| IFEval (0-Shot) | 23.57 |
| BBH (3-Shot) | 22.45 |
| MATH Lvl 5 (4-Shot) | 0.00 |
| GPQA (0-shot) | 4.25 |
| MuSR (0-shot) | 1.97 |
| MMLU-PRO (5-shot) | 12.90 |
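As a sanity check, each reported "Avg." is the plain (unweighted) mean of the six benchmark scores in its table. A quick sketch, with the scores copied from the tables above:

```python
# Verify the leaderboard averages as the unweighted mean of each table's scores.
# ARC, HellaSwag, MMLU, TruthfulQA, Winogrande, GSM8k
v1_scores = [58.28, 74.04, 54.46, 44.50, 74.66, 47.23]
# IFEval, BBH, MATH Lvl 5, GPQA, MuSR, MMLU-PRO
v2_scores = [23.57, 22.45, 0.00, 4.25, 1.97, 12.90]

v1_avg = round(sum(v1_scores) / len(v1_scores), 2)
v2_avg = round(sum(v2_scores) / len(v2_scores), 2)
print(v1_avg, v2_avg)  # 58.86 10.86
```

Both computed means match the reported averages (58.86 and 10.86).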