GemMarketing: A Marketing Large Language Model
GemMarketing is a 2B-parameter, domain-specific Large Language Model (LLM).
It was adapted to the marketing domain from gemma-2b through continued pretraining on a meticulously curated, comprehensive marketing corpus of more than 43B tokens.
GemMarketing outperforms gemma-2b on specific marketing tasks. We are releasing this early checkpoint of the model to the AI community.
Model Description
GemMarketing can help generate high-quality marketing content and support research in the rapidly changing field of marketing.
While the model is designed to encode marketing knowledge, this checkpoint is not yet adapted to deliver knowledge appropriately, safely, or within professional actionable constraints.
We recommend against deploying GemMarketing in real-world practice settings.
Model Details
- Developed by: Marketeam
- Model type: Causal decoder-only transformer language model
- Continue-pretrained from model: gemma-2b
- Context length: 3K tokens
- Input & Output: Text-only
- Language: English
- Knowledge Cutoff: December 2023
Uses
GemMarketing has been developed for further research on LLMs for marketing applications.
Potential use cases are diverse, ranging from marketing question answering and general marketing information queries to actions (function calls) on marketing platforms.
GemMarketing is a Foundation Language Model (FLM) without fine-tuning or instruction tuning.
We recommend applying SFT or RLHF tuning for specific downstream tasks, or alternatively using in-context learning with 1,000-1,500 tokens of context added to the prompt, as sketched below.
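A minimal sketch of the in-context-learning option (the demonstrations below are short illustrative placeholders; in practice you would prepend 1,000-1,500 tokens of marketing context):

import transformers
import torch

pipe = transformers.pipeline(
    "text-generation",
    model="marketeam/GemMarketing",
    tokenizer="google/gemma-2b",
    model_kwargs={"torch_dtype": torch.bfloat16},
    token="hf_token",  # your Hugging Face access token
    device_map="auto",
)

# Few-shot demonstrations steer the base model toward Q&A-style completions.
# These two examples are placeholders, far shorter than the recommended 1,000-1,500 tokens.
context = (
    "Q: What does CTR measure?\n"
    "A: Click-through rate: the number of clicks divided by the number of impressions.\n\n"
    "Q: What is A/B testing used for?\n"
    "A: Comparing two variants of an asset to see which performs better on a chosen metric.\n\n"
)
query = "Q: How should I segment an email list for a product launch?\nA:"

out = pipe(context + query, max_new_tokens=128)
print(out[0]["generated_text"])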
Training Details
Training Data
Marketing data from publicly available and internal sources such as:
- Blogs
- Books
- Websites
- Podcasts
- Newsletters
- Publications
- Social Media
- Ad-Campaigns
- Landing Pages
- Press Releases
- Email-Campaigns
- Brochures & Flyers
- Product Descriptions
- Testimonials & Reviews
- ...
And ±10% of previously seen data to avoid catastrophic forgetting.
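As a rough sketch of how such a replay mixture could be assembled with the datasets library (both dataset names are placeholders, not the actual training corpus):

from datasets import load_dataset, interleave_datasets

# Placeholder datasets: the marketing corpus is internal, and the exact
# "previously seen" pretraining data for gemma-2b is not public.
marketing = load_dataset("your-org/marketing-corpus", split="train", streaming=True)
replay = load_dataset("allenai/c4", "en", split="train", streaming=True)

# Draw ~90% marketing-domain examples and ~10% replay examples,
# a standard recipe for mitigating catastrophic forgetting.
mixed = interleave_datasets([marketing, replay], probabilities=[0.9, 0.1], seed=42)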
Training Procedure
Training was performed with the AWS SageMaker framework on a p4de.24xlarge machine using 4 NVIDIA A100 GPUs,
with a total train time of ±250 hours and a total training cost of ±$10K.
This is an early checkpoint of the model that we are releasing to the community.
Training Hyperparameters
Param | Value |
---|---|
bf16 | true |
tf32 | true |
lr | 1e-4 |
optim | adamw |
epochs | 1 |
lr scheduler | constant |
warmup ratio | 0.03 |
max grad norm | 0.3 |
context length | 3072 |
attention | SDPA |
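For reference, these settings map onto Hugging Face TrainingArguments roughly as follows (a sketch only; batch size, data pipeline, and the actual SageMaker training script are not published):

from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="gemmarketing-cpt",             # hypothetical output path
    bf16=True,                                 # bf16 mixed precision
    tf32=True,                                 # TF32 matmuls on A100
    learning_rate=1e-4,
    optim="adamw_torch",
    num_train_epochs=1,
    lr_scheduler_type="constant_with_warmup",  # constant LR after a 3% warmup
    warmup_ratio=0.03,
    max_grad_norm=0.3,
)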
How to use
Using Transformers pipeline
import transformers
import torch
model_id = "marketeam/GemMarketing"
tokenizer_id = "google/gemma-2b"
token = "hf-token"
pipeline = transformers.pipeline("text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16},
tokenizer=tokenizer_id, token=token, device_map='auto')
pipeline("What are the key components of a digital marketing strategy?")
Using Transformers generate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "marketeam/GemMarketing"
tokenizer_id = "google/gemma-2b"
token = "hf_token"
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained(tokenizer_id, token=token)
model = AutoModelForCausalLM.from_pretrained(
model_id, torch_dtype=torch.bfloat16, token=token).to(device)
message = "How do I calculate customer lifetime value?"
inputs = tokenizer(message, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=128)  # the default max length would cut generation very short
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
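Note that GemMarketing is a base model without instruction tuning, so it may continue a prompt rather than answer it directly; the few-shot format sketched in the Uses section typically yields more direct answers.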
Intended Usage
GemMarketing is now available for further testing and assessment. Potential use cases include, but are not limited to:
- Text Generation: This model can produce creative text formats in the marketing domain.
- Knowledge Exploration: It can assist marketing researchers by generating valuable marketing information or answering questions about marketing-specific topics.
- Natural Language Processing (NLP) Research: This model can form the basis for researchers to experiment with NLP techniques, develop algorithms, and contribute to the advancement of the field.
Contributors