Training with FSDP: how should the Baichuan model be fsdp_wrap-ped? Following the official example, the model cannot be wrapped.

#23
by stonem - opened

This is what I currently get after wrapping the Baichuan model with FSDP:

FullyShardedDataParallel(
  (_fsdp_wrapped_module): BaichuanForCausalLM(
    (model): BaichuanModel(
      (embed_tokens): Embedding(64000, 5120, padding_idx=0)
      (layers): ModuleList(
        (0-39): 40 x BaichuanLayer(
          (self_attn): BaichuanAttention(
            (W_pack): Linear(in_features=5120, out_features=15360, bias=False)
            (o_proj): Linear(in_features=5120, out_features=5120, bias=False)
          )
          (mlp): MLP(
            (gate_proj): Linear(in_features=5120, out_features=13696, bias=False)
            (down_proj): Linear(in_features=13696, out_features=5120, bias=False)
            (up_proj): Linear(in_features=5120, out_features=13696, bias=False)
            (act_fn): SiLUActivation()
          )
          (input_layernorm): RMSNorm()
          (post_attention_layernorm): RMSNorm()
        )
      )
      (norm): RMSNorm()
    )
    (lm_head): Linear(in_features=5120, out_features=64000, bias=False)
  )
)

After FSDP wrapping, the model should instead look like this (a Llama model where per-layer wrapping works as expected):
FullyShardedDataParallel(
  (_fsdp_wrapped_module): LlamaForCausalLM(
    (model): LlamaModel(
      (embed_tokens): Embedding(55296, 5120)
      (layers): ModuleList(
        (0-39): 40 x FullyShardedDataParallel(
          (_fsdp_wrapped_module): LlamaDecoderLayer(
            (self_attn): LlamaAttention(
              (q_proj): Linear(in_features=5120, out_features=5120, bias=False)
              (k_proj): Linear(in_features=5120, out_features=5120, bias=False)
              (v_proj): Linear(in_features=5120, out_features=5120, bias=False)
              (o_proj): Linear(in_features=5120, out_features=5120, bias=False)
              (rotary_emb): LlamaRotaryEmbedding()
            )
            (mlp): LlamaMLP(
              (gate_proj): Linear(in_features=5120, out_features=13824, bias=False)
              (up_proj): Linear(in_features=5120, out_features=13824, bias=False)
              (down_proj): Linear(in_features=13824, out_features=5120, bias=False)
              (act_fn): SiLUActivation()
            )
            (input_layernorm): LlamaRMSNorm()
            (post_attention_layernorm): LlamaRMSNorm()
          )
        )
      )
      (norm): LlamaRMSNorm()
    )
    (lm_head): Linear(in_features=5120, out_features=55296, bias=False)
  )
)

Every decoder layer should end up inside its own (_fsdp_wrapped_module).
I set a wrap policy for the Baichuan model as follows:
import functools
from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy
# BaichuanLayer comes from the checkpoint's modeling_baichuan.py (assumed import path)
from modeling_baichuan import BaichuanLayer

def get_baichuan_wrapper():
    baichuan_auto_wrap_policy = functools.partial(
        transformer_auto_wrap_policy,
        recurse=True,
        transformer_layer_cls={
            BaichuanLayer,
        },
    )
    return baichuan_auto_wrap_policy
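
For context, this is roughly how the policy is passed to FSDP (a minimal sketch rather than my full training script; it assumes the process group is already initialized and `model` is the loaded BaichuanForCausalLM on the local rank's GPU):

import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

# Sketch: wrap the loaded Baichuan model with the auto-wrap policy above.
fsdp_model = FSDP(
    model,
    auto_wrap_policy=get_baichuan_wrapper(),
    device_id=torch.cuda.current_device(),
)
print(fsdp_model)

# Quick sanity check: besides the root, every BaichuanLayer should have become
# its own FSDP unit, so this should print 41 (1 root + 40 layers).
print(sum(isinstance(m, FSDP) for m in fsdp_model.modules()))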

I looked through modeling_baichuan.py; the implementation inherits from nn.Module throughout, just like the Llama model. I don't know where the problem is or why the FSDP wrapping is not applied.
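
One possibility I have not ruled out (only a guess, not verified): when the checkpoint is loaded with trust_remote_code=True, transformers builds the BaichuanLayer class dynamically under transformers_modules, so a BaichuanLayer imported from a local copy of modeling_baichuan.py can be a different class object, and the isinstance check inside transformer_auto_wrap_policy would then never match. A sketch that takes the class from the loaded model instead (assuming `model` is the instantiated BaichuanForCausalLM):

import functools
from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy

# Take the decoder-layer class from the instantiated model so it is guaranteed
# to be the same class object the model actually uses.
layer_cls = type(model.model.layers[0])
baichuan_auto_wrap_policy = functools.partial(
    transformer_auto_wrap_policy,
    transformer_layer_cls={layer_cls},
)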

stonem changed discussion status to closed

Hi, have you managed to solve this problem yet?
