This model was produced with a new kind of model optimization technique. It is based on Gemma-2-27b-it.

Update: This model is still in flux; it was last updated on the 12th of August with improved parameters.

A paper on the technique is currently being written.

This research was supported with hardware from the appliedAI Institute, whose goal is to generate and communicate high-quality knowledge about trustworthy AI.

Quickstart

# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("dnhkng/RYS-Gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
    "dnhkng/RYS-Gemma-2-27b-it",
    device_map="auto",
)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
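Since this is an instruction-tuned (-it) model, prompts are best wrapped in the Gemma chat format rather than passed as raw text. A minimal sketch of that format is below; the `build_gemma_prompt` helper is not part of the model card, just an illustration of the turn markers Gemma-2 expects.

```python
# Sketch of the Gemma-2 chat turn format: one user turn, then an
# opened model turn for the assistant to complete.
def build_gemma_prompt(user_message: str) -> str:
    # Hypothetical helper; tokenizer.apply_chat_template produces
    # the same layout (plus the leading <bos> token).
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = build_gemma_prompt("Write me a poem about Machine Learning.")
print(prompt)
```

In practice you would let the tokenizer do this, e.g. `tokenizer.apply_chat_template([{"role": "user", "content": input_text}], add_generation_prompt=True, return_tensors="pt")`, and pass the result to `model.generate`.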

SHAMELESS ADVERTISING BREAK

I'm on the hunt for new challenges and a chance to dive into some exciting research opportunities. Oh, and did I mention I just snagged a top spot on the Open LLM leaderboard? 🎉

Profile

Innovation enthusiast, AI strategist, and interdisciplinary-tech nerd – that's me! With over a decade of experience in research and project management, my professional journey has been largely shaped by my passion for artificial intelligence and its potential to transform various industries. With a solid background in artificial intelligence and machine learning, coupled with a knack for innovation and problem-solving (and a healthy dose of curiosity), I'm excited to bring my skills to a new team.

Originally from Australia, where I earned my degrees in Organic Chemistry and Biochemistry, I moved to Germany in 2004. My academic pursuit continued with a PhD in Chemistry at the Max Planck Institute of Biochemistry. Today, I leverage my robust educational background and diverse industry experience to drive AI innovations in a wide range of applications. Hobbies? Lots: I've also built the world's most powerful espresso machine and am working to bring GLaDOS to life.


I'm based out of Munich, Germany, but I would be interested in working remotely for a team with more compute than my 2x 4090s 🚀

Reach out via LinkedIn - Dr David Noel Ng
