---
base_model:
  - Qwen/QwQ-32B-Preview
tags:
  - text-generation-inference
  - transformers
  - unsloth
  - qwen2
  - trl
  - Chain-of-thought
  - Reasoning
license: apache-2.0
language:
  - en
new_version: Daemontatox/CogitoZ
library_name: transformers
datasets:
  - PJMixers/Math-Multiturn-100K-ShareGPT
model-index:
  - name: CogitoZ
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: IFEval (0-Shot)
          type: wis-k/instruction-following-eval
          split: train
          args:
            num_few_shot: 0
        metrics:
          - type: inst_level_strict_acc and prompt_level_strict_acc
            value: 39.67
            name: averaged accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Daemontatox%2FCogitoZ
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: BBH (3-Shot)
          type: SaylorTwift/bbh
          split: test
          args:
            num_few_shot: 3
        metrics:
          - type: acc_norm
            value: 53.89
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Daemontatox%2FCogitoZ
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MATH Lvl 5 (4-Shot)
          type: lighteval/MATH-Hard
          split: test
          args:
            num_few_shot: 4
        metrics:
          - type: exact_match
            value: 46.3
            name: exact match
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Daemontatox%2FCogitoZ
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GPQA (0-shot)
          type: Idavidrein/gpqa
          split: train
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 19.35
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Daemontatox%2FCogitoZ
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MuSR (0-shot)
          type: TAUR-Lab/MuSR
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 19.94
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Daemontatox%2FCogitoZ
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU-PRO (5-shot)
          type: TIGER-Lab/MMLU-Pro
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 51.03
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Daemontatox%2FCogitoZ
          name: Open LLM Leaderboard
---


# CogitoZ - 32B

## Model Overview

CogitoZ - 32B is a large language model fine-tuned from Qwen/QwQ-32B-Preview to excel at advanced reasoning and real-time decision-making tasks. It was trained with Unsloth, roughly halving training time, and uses Hugging Face's TRL (Transformer Reinforcement Learning) library, combining training efficiency with strong reasoning performance.


## Key Features

  1. Fast Training: Optimized with Unsloth, achieving a 2x faster training cycle without compromising model quality.
  2. Enhanced Reasoning: Utilizes advanced chain-of-thought (CoT) reasoning for solving complex problems.
  3. Quantization Ready: Supports 8-bit and 4-bit quantization for deployment on resource-constrained devices.
  4. Scalable Inference: Seamless integration with text-generation-inference tools for real-time applications.
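The quantization point above is mostly about memory: weight storage scales linearly with bit width. A back-of-the-envelope estimate for a 32B-parameter model (weights only; activations, KV cache, and framework overhead come on top, so treat this purely as a sizing heuristic):

```python
def approx_weight_memory_gb(n_params_billion: float, bits_per_param: int) -> float:
    """Approximate memory needed to hold the model weights alone.

    Ignores activations, KV cache, and framework overhead, so real
    usage will be higher; this is only a rough sizing heuristic.
    """
    bytes_per_param = bits_per_param / 8
    # n_params_billion * 1e9 params * bytes / 1e9 bytes-per-GB = GB
    return n_params_billion * bytes_per_param

for bits in (16, 8, 4):
    print(f"{bits}-bit weights: ~{approx_weight_memory_gb(32, bits):.0f} GB")
```

By this estimate, 4-bit quantization brings the weights from roughly 64 GB (fp16) down to about 16 GB, which is what makes single-GPU or edge deployment feasible.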

## Intended Use

### Primary Use Cases

  • Education: Real-time assistance for complex problem-solving, especially in mathematics and logic.
  • Business: Supports decision-making, financial modeling, and operational strategy.
  • Healthcare: Enhances diagnostic accuracy and supports structured clinical reasoning.
  • Legal Analysis: Simplifies complex legal documents and constructs logical arguments.

### Limitations

  • May produce biased outputs if the input prompts contain prejudicial or harmful content.
  • Should not be used for real-time, high-stakes autonomous decisions (e.g., robotics or autonomous vehicles).

## Technical Details

  • Training Framework: Hugging Face's Transformers and TRL libraries.
  • Optimization Framework: Unsloth for faster and efficient training.
  • Language Support: English.
  • Quantization: Compatible with 8-bit and 4-bit inference modes for deployment on edge devices.

## Deployment Example

Using Hugging Face Transformers:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Daemontatox/CogitoZ"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# device_map="auto" places the 32B model across available GPUs;
# torch_dtype="auto" loads weights in their native precision.
model = AutoModelForCausalLM.from_pretrained(
    model_name, device_map="auto", torch_dtype="auto"
)

prompt = "Explain the Pythagorean theorem step-by-step:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# Without max_new_tokens, generate() falls back to a very short default.
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
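For chat-style use, QwQ-32B-Preview (the base model) follows the Qwen2-family ChatML conversation format. In real code, `tokenizer.apply_chat_template` builds this string for you; the manual sketch below just makes the wire format visible (the system prompt is illustrative, not an official one):

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML-style prompt as used by Qwen2-family models.

    Prefer tokenizer.apply_chat_template in production; this manual
    version only illustrates what the model actually sees.
    """
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a careful step-by-step reasoner.",  # illustrative system prompt
    "Explain the Pythagorean theorem step-by-step.",
)
print(prompt)
```

The trailing `<|im_start|>assistant\n` leaves the turn open so the model's generation continues as the assistant reply.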

Optimized Inference:

Install the `transformers` and `text-generation-inference` libraries, then deploy on servers or edge devices using quantized models for optimal performance.

## Training Data

The fine-tuning process utilized reasoning-specific datasets, including:

- **MATH Dataset**: focused on logical and mathematical problems.
- **Custom corpora**: tailored datasets for multi-domain reasoning and structured problem-solving.

## Ethical Considerations

- **Bias Awareness**: The model reflects biases present in its training data. Users should carefully evaluate outputs in sensitive contexts.
- **Safe Deployment**: Not recommended for generating harmful or unethical content.

## Acknowledgments

This model was developed with contributions from Daemontatox and the Unsloth team, utilizing state-of-the-art techniques in fine-tuning and optimization.

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here! Summarized results can be found here!

| Metric              | Value (%) |
|---------------------|-----------|
| Average             | 38.36     |
| IFEval (0-Shot)     | 39.67     |
| BBH (3-Shot)        | 53.89     |
| MATH Lvl 5 (4-Shot) | 46.30     |
| GPQA (0-shot)       | 19.35     |
| MuSR (0-shot)       | 19.94     |
| MMLU-PRO (5-shot)   | 51.03     |
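The reported average is the unweighted mean of the six benchmark scores, which is easy to sanity-check:

```python
# Per-benchmark scores as reported on the Open LLM Leaderboard.
scores = {
    "IFEval (0-Shot)": 39.67,
    "BBH (3-Shot)": 53.89,
    "MATH Lvl 5 (4-Shot)": 46.30,
    "GPQA (0-shot)": 19.35,
    "MuSR (0-shot)": 19.94,
    "MMLU-PRO (5-shot)": 51.03,
}
average = sum(scores.values()) / len(scores)
print(f"Average: {average:.2f}")  # matches the reported 38.36
```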