---
license: apache-2.0
language:
  - en
base_model:
  - Qwen/Qwen2.5-14B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
  - qwen
  - opus
---

![opus.gif](opus.gif)

# Calcium-Opus-14B-Elite

Calcium-Opus-14B-Elite is built on the Qwen 2.5 14B architecture and is designed to strengthen the reasoning capabilities of 14B-parameter models, which have proven effective at context understanding, reasoning, and mathematical problem-solving. It has been fine-tuned using a long chain-of-thought (CoT) reasoning model and specialized datasets, with a focus on CoT reasoning for problem-solving. The model is optimized for tasks requiring logical reasoning, detailed explanations, and multi-step problem-solving, making it well suited to instruction following, text generation, and complex reasoning tasks.

Key improvements include:

  1. Enhanced Knowledge and Expertise: The model demonstrates significantly more knowledge and greatly improved capabilities in coding and mathematics, thanks to specialized expert models in these domains.
  2. Improved Instruction Following: It shows significant advancements in following instructions, generating long texts (over 8K tokens), understanding structured data (e.g., tables), and producing structured outputs, especially in JSON format (see the sketch after this list).
  3. Better Adaptability: The model is more resilient to diverse system prompts, enabling enhanced role-playing implementations and condition-setting for chatbots.
  4. Long-Context Support: It offers long-context support of up to 128K tokens and can generate up to 8K tokens in a single output.
  5. Multilingual Proficiency: The model supports over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
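As a rough illustration of points 2 and 4, the sketch below prompts the model for JSON output and raises the generation budget toward the advertised 8K-token output ceiling. It reuses the standard `transformers` chat-template flow shown in the quickstart further down; the system-prompt wording and the `max_new_tokens` value are illustrative assumptions, not settings prescribed by this model card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Calcium-Opus-14B-Elite"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Ask for structured JSON output via the system prompt (illustrative wording).
messages = [
    {"role": "system", "content": "You are a helpful assistant. Respond only with valid JSON."},
    {"role": "user", "content": "List three prime numbers with a one-line fact about each."},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

# max_new_tokens can be raised toward the stated 8K-token output limit.
output_ids = model.generate(**inputs, max_new_tokens=8192)

# Decode only the newly generated tokens, not the prompt.
response = tokenizer.decode(output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(response)
```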

## Quickstart with transformers

The following code snippet shows how to load the tokenizer and model and generate content using `apply_chat_template`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Calcium-Opus-14B-Elite"

# Load the model with automatic dtype selection and device placement.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
    {"role": "user", "content": prompt}
]

# Render the chat messages into the model's expected prompt format.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated text remains.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
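To print tokens as they are produced rather than after generation finishes, `transformers` provides a `TextStreamer` that can be passed to `generate`. A minimal sketch, reusing `model`, `tokenizer`, and `model_inputs` from the snippet above:

```python
from transformers import TextStreamer

# Stream decoded text to stdout as tokens are generated,
# omitting the prompt and special tokens from the printed output.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(**model_inputs, max_new_tokens=512, streamer=streamer)
```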