Fineweb train configuration #39

by nezhazheng

May I ask when the configuration and hyperparameters for training FineWeb will be open-sourced?

HuggingFaceFW org

We are currently waiting for some changes to the nanotron codebase to be completed, but here are the main arguments (not sure if this would run on the current state of nanotron):

import torch

# Import paths below assume a recent nanotron layout; they may differ from the
# internal version used for this run:
from nanotron.config import (
    LlamaConfig,
    LRSchedulerArgs,
    ModelArgs,
    OptimizerArgs,
    ParallelismArgs,
    RandomInit,
    TokensArgs,
)

model_config = LlamaConfig(
    bos_token_id=1,
    eos_token_id=2,
    hidden_act="silu",
    hidden_size=2048,
    initializer_range=0.02,
    intermediate_size=8192,
    max_position_embeddings=2048,
    num_attention_heads=32,
    num_hidden_layers=24,
    num_key_value_heads=32,
    pretraining_tp=1,
    rms_norm_eps=1e-05,
    rope_scaling=None,
    tie_word_embeddings=True,
    use_cache=True,
    vocab_size=50272,  # GPT2 tokenizer rounded to next multiple of 8
)
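
# Rough parameter count (arithmetic added for reference, not in the original post):
# tied embeddings 50272*2048 ≈ 0.10B, plus 24 layers of
# (4*2048^2 attention + 3*2048*8192 MLP + 2*2048 norms) ≈ 1.61B,
# i.e. roughly 1.71B parameters (≈1.82B if the output head is counted separately).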

parallelism = ParallelismArgs(
    dp=64,  # 64-way data parallelism only, no pipeline or tensor parallelism
    pp=1,
    tp=1,
    pp_engine="1f1b",
    tp_mode="REDUCE_SCATTER",
    tp_linear_async_communication=True,
)

tokens = TokensArgs(
    batch_accumulation_per_replica=4,
    micro_batch_size=4,
    sequence_length=2048,
    train_steps=args.train_steps,  # set by the surrounding launch script
    val_check_interval=100,
)
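
# Rough arithmetic (added for reference, not in the original post): with dp=64
# above, each optimizer step processes 64 * 4 * 4 = 1024 sequences of 2048
# tokens, i.e. ~2.1M tokens per step.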

model = ModelArgs(
    model_config=model_config,
    make_vocab_size_divisible_by=1,
    init_method=RandomInit(
        std=0.02,
        # std=1
        # / math.sqrt(model_config.hidden_size)  # 0.01275  # Basically 1/sqrt(N),
        # path="/fsx/shared-falcon-180B/brrr-falcon-180B"
    ),
    dtype=torch.bfloat16,
)
optimizer = OptimizerArgs(
    accumulate_grad_in_fp32=True,
    adam_beta1=0.9,
    adam_beta2=0.95,
    adam_eps=1.0e-8,
    clip_grad=1.0,
    torch_adam_is_fused=True,
    weight_decay=0.1,
    zero_stage=0,
    learning_rate_scheduler=LRSchedulerArgs(
        learning_rate=3e-4,
        lr_warmup_steps=500,
        lr_warmup_style="linear",
        lr_decay_style="cosine",
        # lr_decay_steps=10000-500,  # Keeping it to 10k for comparison for now
        min_decay_lr=3.0e-5,
    ),
)
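
For reference, here is a minimal sketch of the schedule those LRSchedulerArgs describe (linear warmup to 3e-4 over 500 steps, then cosine decay towards min_decay_lr). This is not nanotron's actual implementation, and the decay length is only a placeholder since lr_decay_steps is commented out above:

import math

def lr_at_step(step, peak_lr=3e-4, min_lr=3e-5, warmup_steps=500, decay_steps=9500):
    """Linear warmup followed by cosine decay to min_lr.

    decay_steps=9500 is a placeholder matching the commented-out 10000 - 500
    value above; the actual run may have used a different decay length.
    """
    if step < warmup_steps:
        # ramp linearly from 0 to peak_lr
        return peak_lr * step / warmup_steps
    # fraction of the decay phase completed, clamped to [0, 1]
    progress = min((step - warmup_steps) / decay_steps, 1.0)
    # cosine interpolation from peak_lr down to min_lr
    return min_lr + 0.5 * (peak_lr - min_lr) * (1.0 + math.cos(math.pi * progress))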

Is there a specific reason for the weight decay of 0.1?

Does lr_decay_steps being commented out mean you trained with a constant LR of 3e-4 after the warmup was complete?
