# =============================================================================
# LOCAL/API CONFIGURATION
# =============================================================================

# -----------------------------------------------------------------------------
# REQUIRED CONFIGURATION
# -----------------------------------------------------------------------------

# Hugging Face token (required for all setups)
HF_TOKEN=hf_...
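# Create the token at https://huggingface.co/settings/tokens; a read token is
# usually enough, but pushing generated datasets to the Hub needs write access.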

# Generation Settings
MAX_NUM_TOKENS=2048
MAX_NUM_ROWS=1000
DEFAULT_BATCH_SIZE=5
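# Rough meaning of the settings above (inferred from the names; tune as needed):
#   MAX_NUM_TOKENS     - cap on tokens generated per completion
#   MAX_NUM_ROWS       - cap on rows in the generated dataset
#   DEFAULT_BATCH_SIZE - rows requested per generation batch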

# Required for chat data generation with Llama or Qwen models
# Options: "llama3", "qwen2", or custom template string
MAGPIE_PRE_QUERY_TEMPLATE=llama3
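# Magpie-style generation prompts the model with just its chat-template prefix so
# it writes the user query itself, so this template must match the model family;
# "llama3" and "qwen2" are built-in shortcuts, and anything else is presumably
# passed through as a raw template string.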

# -----------------------------------------------------------------------------
# A. CLOUD API SERVICES
# -----------------------------------------------------------------------------

# 1. HUGGING FACE INFERENCE API (Default, Recommended)
MODEL=meta-llama/Llama-3.1-8B-Instruct
# MODEL=Qwen/Qwen2.5-1.5B-Instruct
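# No base URL or extra key should be needed for this option: requests presumably
# go through the serverless Inference API, authenticated with HF_TOKEN above.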

# 2. OPENAI API
# OPENAI_BASE_URL=https://api.openai.com/v1/
# MODEL=gpt-4
# API_KEY=sk-...
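# Quick sanity check for the key outside this app (standard OpenAI endpoint):
#   curl https://api.openai.com/v1/models -H "Authorization: Bearer $API_KEY"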

# 3. HUGGING FACE SPACE FOR ARGILLA (optional)
# ARGILLA_API_URL=https://your-space.hf.space/
# ARGILLA_API_KEY=your_key
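# Both values come from your own Argilla deployment (e.g. the Space's URL and the
# API key shown in its settings); leaving them unset presumably skips the Argilla
# review step.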

# -----------------------------------------------------------------------------
# B. LOCAL SERVICES (Requires Installation)
# -----------------------------------------------------------------------------

# 1. LOCAL OLLAMA
# OLLAMA_BASE_URL=http://127.0.0.1:11434/
# MODEL=llama3.2:1b
# TOKENIZER_ID=meta-llama/Llama-3.2-1B-Instruct
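# Typical Ollama setup (standard CLI; the server listens on 11434 by default):
#   ollama pull llama3.2:1b
#   ollama serve
# TOKENIZER_ID is presumably the matching Hugging Face tokenizer used for prompt
# templating, since Ollama model names don't map directly to Hub repos.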

# 2. LOCAL VLLM
# VLLM_BASE_URL=http://127.0.0.1:8000/
# MODEL=Qwen/Qwen2.5-1.5B-Instruct
# TOKENIZER_ID=Qwen/Qwen2.5-1.5B-Instruct
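# One way to start an OpenAI-compatible vLLM server on its default port 8000
# (assumes a recent vLLM release that ships the `vllm serve` CLI):
#   vllm serve Qwen/Qwen2.5-1.5B-Instruct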

# 3. LOCAL TGI
# HUGGINGFACE_BASE_URL=http://127.0.0.1:3000/
# MODEL=meta-llama/Llama-3.1-8B-Instruct
# TOKENIZER_ID=meta-llama/Llama-3.1-8B-Instruct
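# Example TGI launch matching the URL above (flags assumed from typical TGI
# Docker usage; check the TGI docs for your version):
#   docker run --gpus all --shm-size 1g -p 3000:80 -e HF_TOKEN=$HF_TOKEN \
#     ghcr.io/huggingface/text-generation-inference:latest \
#     --model-id meta-llama/Llama-3.1-8B-Instruct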