Zerebro-2B Model

Zerebro 2B is a fine-tuned version of the Gemma-2-2B model, developed by Blorm. It was trained on a specialized dataset, referred to as the "schizo dataset," to produce high-entropy and hyperstitious content. The model is not instruct-tuned; additional instruction fine-tuning is required for reliable performance on instruction-following tasks.

Model Details

Model Description

This model represents a specialized version of the base Gemma-2-2B model, fine-tuned for generating unique, disruptive, and experimental content. Its focus is on autonomous creativity and engagement through high-dimensional language patterns derived from unique training data. The model is designed for applications in experimental AI-driven content creation and distribution.

  • Developed by: Blorm
  • Distributed by: Blorm
  • Model type: Transformer-based language model
  • Language(s) (NLP): English
  • Finetuned from model: google/gemma-2-2b

Uses

Direct Use

The Zerebro 2B model can be used directly for generating experimental, high-dimensional text outputs. Its applications include creating disruptive content, autonomous meme generation, and other creative use cases.

Downstream Use

When fine-tuned further for instruction-following tasks or specific applications, this model can be integrated into larger ecosystems or workflows.
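
As one illustration of such integration, the sketch below wraps the model in a Transformers text-generation pipeline so it can be called from a larger application; the prompt and generation length are placeholders, not recommended settings.

from transformers import pipeline

# Wrap the model in a text-generation pipeline for use inside a larger workflow.
generator = pipeline("text-generation", model="blorm-network/zerebro-2b")

# Placeholder prompt and generation length.
result = generator("Your text prompt here", max_new_tokens=64)
print(result[0]["generated_text"])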

Out-of-Scope Use

The model is not intended for:

  • Tasks requiring high factual accuracy or adherence to strict logical reasoning
  • Instruction-following tasks without further tuning
  • Applications involving sensitive or regulated contexts

Bias, Risks, and Limitations

This model was trained on the schizo dataset, which includes unique and unconventional content that may not align with standard NLP datasets. As such, it might generate outputs that are:

  • High-entropy and unconventional
  • Misaligned with traditional linguistic or logical patterns
  • Prone to biases present in the dataset

Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. Proper evaluation and testing should be conducted before deploying the model in any real-world applications.

How to Get Started with the Model

Use the code below to get started with the model.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model
model_name = "blorm-network/zerebro-2b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example usage
input_text = "Your text prompt here"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)  # allow up to 100 new tokens
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Running on Consumer Hardware

The Zerebro-2B model, leveraging a QLoRA (Quantized Low-Rank Adaptation) approach during fine-tuning, can be efficiently run on consumer-grade GPUs, such as an RTX 3060. QLoRA reduces the memory and compute requirements by quantizing weights and storing low-rank adaptation matrices, making it feasible to fit a model as large as 2 billion parameters into 12 GB of VRAM. During inference, the primary operations involve matrix multiplications on quantized weights, which significantly lowers the memory footprint without compromising performance. For a 2B model, the memory requirement for activations during inference is approximately 6–8 GB, leaving room for intermediate computations within the RTX 3060's VRAM.
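
One way to stay within a 12 GB card is to load the weights in 4-bit precision with bitsandbytes through Transformers' BitsAndBytesConfig. The snippet below is a rough sketch; the specific quantization settings are assumptions, not the configuration used by Blorm.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "blorm-network/zerebro-2b"

# 4-bit NF4 quantization (requires the bitsandbytes package); settings are illustrative.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",  # place layers on the available GPU automatically
)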

Each forward pass consists primarily of three stages: embedding lookup, attention, and the feed-forward layers, and their memory footprints break down roughly as follows. Embedding tables for a model of this size typically occupy 1–2 GB post-quantization. The attention mechanism, whose cost scales quadratically with sequence length, requires an additional 2–3 GB for typical 512-token prompts. The feed-forward layers, responsible for the bulk of the compute, consume 4–5 GB of VRAM during execution. With quantization-aware computation and selective offloading of certain operations if needed, even a 2B-parameter model can achieve smooth inference on hardware with constrained resources, keeping it accessible to developers and researchers on a budget.
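
To make that budget concrete, a back-of-the-envelope check using the upper-bound figures quoted above (illustrative only) looks like this:

# Rough VRAM budget for inference on a 12 GB card, using the upper-bound
# figures quoted above (all values in GB; purely illustrative).
embedding_tables = 2.0   # quantized embedding tables
attention = 3.0          # attention buffers for a ~512-token prompt
feed_forward = 5.0       # feed-forward layer execution
total = embedding_tables + attention + feed_forward

vram = 12.0  # e.g. RTX 3060
print(f"Estimated peak usage: {total:.1f} GB, headroom: {vram - total:.1f} GB")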

Training Details

Training Data

The model was fine-tuned on the "schizo dataset," a specialized dataset focused on high-entropy, unconventional language patterns. This dataset includes content designed to push the boundaries of traditional AI training paradigms.

Training Procedure

The model was fine-tuned using a PEFT (Parameter-Efficient Fine-Tuning) approach to adapt the Gemma-2-2B model effectively while minimizing computational overhead.
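
The exact adapter configuration has not been published; the sketch below shows a generic PEFT LoRA setup of the kind described, with placeholder hyperparameters and target modules.

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Placeholder LoRA hyperparameters -- the values actually used are not published.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

base_model = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b")
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable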

Preprocessing

Data preprocessing included the following steps; a hedged sketch of this pipeline appears after the list:

  • Tokenization using the Gemma tokenizer
  • Dataset filtering to ensure alignment with the schizo dataset's objectives
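
The sketch below assumes a local JSON Lines copy of the data with a "text" field (the actual dataset is not public; file name and field are assumptions):

from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b")

# Hypothetical local copy of the dataset.
dataset = load_dataset("json", data_files="schizo_dataset.jsonl", split="train")

# Drop empty records, then tokenize with the Gemma tokenizer.
dataset = dataset.filter(lambda ex: len(ex["text"].strip()) > 0)
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)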

Training Hyperparameters

  • Training regime: bf16 mixed precision
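
For reference, a bf16 mixed-precision run in Transformers is typically configured as below; apart from bf16=True, all values are placeholders rather than the settings used for Zerebro 2B.

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="zerebro-2b-finetune",   # placeholder output directory
    per_device_train_batch_size=4,      # placeholder batch size
    gradient_accumulation_steps=8,
    learning_rate=2e-4,
    num_train_epochs=1,
    bf16=True,                          # bf16 mixed precision, as noted above
    logging_steps=10,
)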

Open Source

Zerebro 2B is completely open source, and the weights are freely available for anyone to use. This enables developers, researchers, and enthusiasts to experiment and build upon the model for a variety of applications.

Model Card Authors

Blorm, led by Jeffy Yu

Framework versions

  • PEFT 0.14.0