/home/cfruan/.conda/envs/mlc-source-311/bin/python -m mlc_chat gen_config /ssd1/cfruan/models/stablelm-zephyr-3b --quantization q4f32_1 --conv-template stablelm-3b --output /tmp/tmpb2cdvwez --context-window-size 4096
[2024-02-02 20:03:24] INFO auto_config.py:115: Found model configuration: /ssd1/cfruan/models/stablelm-zephyr-3b/config.json
[2024-02-02 20:03:24] INFO auto_config.py:153: Found model type: stablelm_epoch. Use `--model-type` to override.
[2024-02-02 20:03:24] INFO stablelm_model.py:45: context_window_size not found in config.json. Falling back to max_position_embeddings (4096)
[2024-02-02 20:03:24] INFO stablelm_model.py:59: prefill_chunk_size defaults to context_window_size (4096)
[2024-02-02 20:03:24] INFO config.py:106: Overriding context_window_size from 4096 to 4096
[2024-02-02 20:03:24] WARNING config.py:99: Warning: Cannot override max_batch_size, because StableLMEpochConfig does not have this field
[2024-02-02 20:03:24] INFO gen_config.py:116: [generation_config.json] Setting bos_token_id: 0
[2024-02-02 20:03:24] INFO gen_config.py:116: [generation_config.json] Setting eos_token_id: 0
[2024-02-02 20:03:24] INFO gen_config.py:130: Not found tokenizer config: /ssd1/cfruan/models/stablelm-zephyr-3b/tokenizer.model
[2024-02-02 20:03:24] INFO gen_config.py:128: Found tokenizer config: /ssd1/cfruan/models/stablelm-zephyr-3b/tokenizer.json. Copying to /tmp/tmpb2cdvwez/tokenizer.json
[2024-02-02 20:03:24] INFO gen_config.py:130: Not found tokenizer config: /ssd1/cfruan/models/stablelm-zephyr-3b/vocab.json
[2024-02-02 20:03:24] INFO gen_config.py:130: Not found tokenizer config: /ssd1/cfruan/models/stablelm-zephyr-3b/merges.txt
[2024-02-02 20:03:24] INFO gen_config.py:130: Not found tokenizer config: /ssd1/cfruan/models/stablelm-zephyr-3b/added_tokens.json
[2024-02-02 20:03:24] INFO gen_config.py:128: Found tokenizer config: /ssd1/cfruan/models/stablelm-zephyr-3b/tokenizer_config.json. Copying to /tmp/tmpb2cdvwez/tokenizer_config.json
[2024-02-02 20:03:24] INFO gen_config.py:69: [System default] Setting pad_token_id: 0
[2024-02-02 20:03:24] INFO gen_config.py:69: [System default] Setting temperature: 0.7
[2024-02-02 20:03:24] INFO gen_config.py:69: [System default] Setting repetition_penalty: 1.0
[2024-02-02 20:03:24] INFO gen_config.py:69: [System default] Setting top_p: 0.95
[2024-02-02 20:03:24] INFO gen_config.py:69: [System default] Setting mean_gen_len: 128
[2024-02-02 20:03:24] INFO gen_config.py:69: [System default] Setting max_gen_len: 512
[2024-02-02 20:03:24] INFO gen_config.py:69: [System default] Setting shift_fill_factor: 0.3
[2024-02-02 20:03:24] INFO gen_config.py:158: Dumping configuration file to: /tmp/tmpb2cdvwez/mlc-chat-config.json
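(Note: the log above only reports the values gen_config writes. The following is a minimal, hypothetical reconstruction of what /tmp/tmpb2cdvwez/mlc-chat-config.json plausibly contains, built purely from the logged values; the real file includes additional fields not shown in the log.)

# Hypothetical sketch: mlc-chat-config.json reassembled from the log lines above only.
import json

mlc_chat_config = {
    "model_type": "stablelm_epoch",   # detected by auto_config.py:153
    "quantization": "q4f32_1",        # from the gen_config command line
    "conv_template": "stablelm-3b",   # from --conv-template
    "context_window_size": 4096,      # fell back to max_position_embeddings
    "prefill_chunk_size": 4096,       # defaults to context_window_size
    "bos_token_id": 0,
    "eos_token_id": 0,
    "pad_token_id": 0,
    "temperature": 0.7,
    "repetition_penalty": 1.0,
    "top_p": 0.95,
    "mean_gen_len": 128,
    "max_gen_len": 512,
    "shift_fill_factor": 0.3,
}
print(json.dumps(mlc_chat_config, indent=2))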
/home/cfruan/.conda/envs/mlc-source-311/bin/python -m mlc_chat convert_weight /ssd1/cfruan/models/stablelm-zephyr-3b --quantization q4f32_1 --source-format auto --output /tmp/tmpb2cdvwez
[2024-02-02 20:03:24] INFO auto_config.py:115: Found model configuration: /ssd1/cfruan/models/stablelm-zephyr-3b/config.json
[2024-02-02 20:03:25] INFO auto_device.py:76: Found device: cuda:0
[2024-02-02 20:03:25] INFO auto_device.py:76: Found device: cuda:1
[2024-02-02 20:03:25] INFO auto_device.py:85: Not found device: rocm:0
[2024-02-02 20:03:25] INFO auto_device.py:85: Not found device: metal:0
[2024-02-02 20:03:26] INFO auto_device.py:76: Found device: vulkan:0
[2024-02-02 20:03:26] INFO auto_device.py:76: Found device: vulkan:1
[2024-02-02 20:03:26] INFO auto_device.py:76: Found device: vulkan:2
[2024-02-02 20:03:26] INFO auto_device.py:85: Not found device: opencl:0
[2024-02-02 20:03:26] INFO auto_device.py:33: Using device: cuda:0
[2024-02-02 20:03:26] INFO auto_weight.py:70: Finding weights in: /ssd1/cfruan/models/stablelm-zephyr-3b
[2024-02-02 20:03:26] INFO auto_weight.py:136: Not found Huggingface PyTorch
[2024-02-02 20:03:26] INFO auto_weight.py:143: Found source weight format: huggingface-safetensor. Source configuration: /ssd1/cfruan/models/stablelm-zephyr-3b/model.safetensors.index.json
[2024-02-02 20:03:26] INFO auto_weight.py:106: Using source weight configuration: /ssd1/cfruan/models/stablelm-zephyr-3b/model.safetensors.index.json. Use `--source` to override.
[2024-02-02 20:03:26] INFO auto_weight.py:110: Using source weight format: huggingface-safetensor. Use `--source-format` to override.
[2024-02-02 20:03:26] INFO auto_config.py:153: Found model type: stablelm_epoch. Use `--model-type` to override.
[2024-02-02 20:03:26] INFO stablelm_model.py:45: context_window_size not found in config.json. Falling back to max_position_embeddings (4096)
[2024-02-02 20:03:26] INFO stablelm_model.py:59: prefill_chunk_size defaults to context_window_size (4096)
Weight conversion with arguments:
  --config          /ssd1/cfruan/models/stablelm-zephyr-3b/config.json
  --quantization    GroupQuantize(name='q4f32_1', kind='group-quant', group_size=32, quantize_dtype='int4', storage_dtype='uint32', model_dtype='float32', linear_weight_layout='NK', num_elem_per_storage=8, num_storage_per_group=4, max_int_value=7)
  --model-type      stablelm_epoch
  --device          cuda:0
  --source          /ssd1/cfruan/models/stablelm-zephyr-3b/model.safetensors.index.json
  --source-format   huggingface-safetensor
  --output          /tmp/tmpb2cdvwez
  0%|          | 0/260 [00:00<?, ?it/s]
UserWarning: The value of the smallest subnormal for <class 'numpy.float32'> type is zero.
  setattr(self, word, getattr(machar, word).flat[0])
/home/cfruan/.conda/envs/mlc-source-311/lib/python3.11/site-packages/numpy/core/getlimits.py:89: UserWarning: The value of the smallest subnormal for <class 'numpy.float32'> type is zero.
  return self._float_to_str(self.smallest_subnormal)
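(Note: the GroupQuantize arguments above pin down the q4f32_1 storage layout: every group of 32 consecutive float32 weights shares one float32 scale, each weight is rounded to a 4-bit integer of magnitude at most max_int_value=7, and 8 such integers are packed per uint32 word, so one group occupies num_storage_per_group=4 words. Below is a minimal, hypothetical NumPy sketch of this scheme; the function and variable names are illustrative, not MLC's actual kernel.)

import numpy as np

def group_quantize_q4f32_1(w, group_size=32, max_int=7):
    """Quantize a flat float32 array into packed 4-bit words plus per-group scales."""
    w = np.asarray(w, dtype=np.float32).reshape(-1, group_size)  # one row per group
    scale = np.abs(w).max(axis=1, keepdims=True) / max_int       # q_scale: one float32 per group
    scale[scale == 0] = 1.0                                      # guard all-zero groups
    q = np.clip(np.round(w / scale) + max_int, 0, 2 * max_int)   # shift signed int4 into [0, 14]
    q = q.astype(np.uint32).reshape(-1, 8)                       # 8 nibbles fill one uint32 word
    packed = np.zeros(q.shape[0], dtype=np.uint32)               # q_weight storage
    for i in range(8):
        packed |= q[:, i] << (4 * i)                             # pack nibble i into bits 4i..4i+3
    return packed, scale.ravel()                                 # 4 words + 1 scale per 32-weight group

This pairing is why every quantized tensor in the saving log below appears twice: once as *.q_weight (the packed uint32 words) and once as *.q_scale (the float32 per-group scales).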
Start storing to cache /tmp/tmpb2cdvwez
[0001/0390] saving lm_head.q_weight
[0002/0390] saving lm_head.q_scale
[0003/0390] saving model.embed_tokens.q_weight
[0004/0390] saving model.embed_tokens.q_scale
[0005/0390] saving model.layers.0.input_layernorm.bias
[0006/0390] saving model.layers.0.input_layernorm.weight
[0007/0390] saving model.layers.0.mlp.down_proj.q_weight
[0008/0390] saving model.layers.0.mlp.down_proj.q_scale
[0009/0390] saving model.layers.0.mlp.gate_up_proj.q_weight
[0010/0390] saving model.layers.0.mlp.gate_up_proj.q_scale
[0011/0390] saving model.layers.0.post_attention_layernorm.bias
[0012/0390] saving model.layers.0.post_attention_layernorm.weight
[0013/0390] saving model.layers.0.self_attn.qkv_proj.q_weight
[0014/0390] saving model.layers.0.self_attn.qkv_proj.q_scale
[0015/0390] saving model.layers.0.self_attn.o_proj.q_weight
[0016/0390] saving model.layers.0.self_attn.o_proj.q_scale
[0017/0390] saving model.layers.1.input_layernorm.bias
[0018/0390] saving model.layers.1.input_layernorm.weight
[0019/0390] saving model.layers.1.mlp.down_proj.q_weight
[0020/0390] saving model.layers.1.mlp.down_proj.q_scale
[0021/0390] saving model.layers.1.mlp.gate_up_proj.q_weight
[0022/0390] saving model.layers.1.mlp.gate_up_proj.q_scale
[0023/0390] saving model.layers.1.post_attention_layernorm.bias
[0024/0390] saving model.layers.1.post_attention_layernorm.weight
[0025/0390] saving model.layers.1.self_attn.qkv_proj.q_weight
[0026/0390] saving model.layers.1.self_attn.qkv_proj.q_scale
[0027/0390] saving model.layers.1.self_attn.o_proj.q_weight
[0028/0390] saving model.layers.1.self_attn.o_proj.q_scale
[0029/0390] saving model.layers.10.input_layernorm.bias
[0030/0390] saving model.layers.10.input_layernorm.weight
[0031/0390] saving model.layers.10.mlp.down_proj.q_weight
[0032/0390] saving model.layers.10.mlp.down_proj.q_scale
[0033/0390] saving model.layers.10.mlp.gate_up_proj.q_weight
[0034/0390] saving model.layers.10.mlp.gate_up_proj.q_scale
[0035/0390] saving model.layers.10.post_attention_layernorm.bias
[0036/0390] saving model.layers.10.post_attention_layernorm.weight
[0037/0390] saving model.layers.10.self_attn.qkv_proj.q_weight
[0038/0390] saving model.layers.10.self_attn.qkv_proj.q_scale
[0039/0390] saving model.layers.10.self_attn.o_proj.q_weight
[0040/0390] saving model.layers.10.self_attn.o_proj.q_scale
[0041/0390] saving model.layers.11.input_layernorm.bias
[0042/0390] saving model.layers.11.input_layernorm.weight
[0043/0390] saving model.layers.11.mlp.down_proj.q_weight
[0044/0390] saving model.layers.11.mlp.down_proj.q_scale
[0045/0390] saving model.layers.11.mlp.gate_up_proj.q_weight
[0046/0390] saving model.layers.11.mlp.gate_up_proj.q_scale
[0047/0390] saving model.layers.11.post_attention_layernorm.bias
[0048/0390] saving model.layers.11.post_attention_layernorm.weight
[0049/0390] saving model.layers.11.self_attn.qkv_proj.q_weight
[0050/0390] saving model.layers.11.self_attn.qkv_proj.q_scale
[0051/0390] saving model.layers.11.self_attn.o_proj.q_weight
[0052/0390] saving model.layers.11.self_attn.o_proj.q_scale
[0053/0390] saving model.layers.12.input_layernorm.bias
[0054/0390] saving model.layers.12.input_layernorm.weight
[0055/0390] saving model.layers.12.mlp.down_proj.q_weight
[0056/0390] saving model.layers.12.mlp.down_proj.q_scale
[0057/0390] saving model.layers.12.mlp.gate_up_proj.q_weight
[0058/0390] saving model.layers.12.mlp.gate_up_proj.q_scale
[0059/0390] saving model.layers.12.post_attention_layernorm.bias
[0060/0390] saving model.layers.12.post_attention_layernorm.weight
[0061/0390] saving model.layers.12.self_attn.qkv_proj.q_weight
[0062/0390] saving model.layers.12.self_attn.qkv_proj.q_scale
[0063/0390] saving model.layers.12.self_attn.o_proj.q_weight
[0064/0390] saving model.layers.12.self_attn.o_proj.q_scale
[0065/0390] saving model.layers.13.input_layernorm.bias
[0066/0390] saving model.layers.13.input_layernorm.weight
[0067/0390] saving model.layers.13.mlp.down_proj.q_weight
[0068/0390] saving model.layers.13.mlp.down_proj.q_scale
[0069/0390] saving model.layers.13.mlp.gate_up_proj.q_weight
[0070/0390] saving model.layers.13.mlp.gate_up_proj.q_scale
[0071/0390] saving model.layers.13.post_attention_layernorm.bias
[0072/0390] saving model.layers.13.post_attention_layernorm.weight
[0073/0390] saving model.layers.13.self_attn.qkv_proj.q_weight
[0074/0390] saving model.layers.13.self_attn.qkv_proj.q_scale
[0075/0390] saving model.layers.13.self_attn.o_proj.q_weight
[0076/0390] saving model.layers.13.self_attn.o_proj.q_scale
[0077/0390] saving model.layers.14.input_layernorm.bias
[0078/0390] saving model.layers.14.input_layernorm.weight
[0079/0390] saving model.layers.14.mlp.down_proj.q_weight
[0080/0390] saving model.layers.14.mlp.down_proj.q_scale
[0081/0390] saving model.layers.14.mlp.gate_up_proj.q_weight
[0082/0390] saving model.layers.14.mlp.gate_up_proj.q_scale
[0083/0390] saving model.layers.14.post_attention_layernorm.bias
[0084/0390] saving model.layers.14.post_attention_layernorm.weight
[0085/0390] saving model.layers.14.self_attn.qkv_proj.q_weight
[0086/0390] saving model.layers.14.self_attn.qkv_proj.q_scale
[0087/0390] saving model.layers.14.self_attn.o_proj.q_weight
[0088/0390] saving model.layers.14.self_attn.o_proj.q_scale
[0089/0390] saving model.layers.15.input_layernorm.bias
[0090/0390] saving model.layers.15.input_layernorm.weight
[0091/0390] saving model.layers.15.mlp.down_proj.q_weight
[0092/0390] saving model.layers.15.mlp.down_proj.q_scale
[0093/0390] saving model.layers.15.mlp.gate_up_proj.q_weight
[0094/0390] saving model.layers.15.mlp.gate_up_proj.q_scale
[0095/0390] saving model.layers.15.post_attention_layernorm.bias
[0096/0390] saving model.layers.15.post_attention_layernorm.weight
[0097/0390] saving model.layers.15.self_attn.qkv_proj.q_weight
[0098/0390] saving model.layers.15.self_attn.qkv_proj.q_scale
[0099/0390] saving model.layers.15.self_attn.o_proj.q_weight
[0100/0390] saving model.layers.15.self_attn.o_proj.q_scale
[0101/0390] saving model.layers.16.input_layernorm.bias
[0102/0390] saving model.layers.16.input_layernorm.weight
[0103/0390] saving model.layers.16.mlp.down_proj.q_weight
[0104/0390] saving model.layers.16.mlp.down_proj.q_scale
[0105/0390] saving model.layers.16.mlp.gate_up_proj.q_weight
[0106/0390] saving model.layers.16.mlp.gate_up_proj.q_scale
[0107/0390] saving model.layers.16.post_attention_layernorm.bias
[0108/0390] saving model.layers.16.post_attention_layernorm.weight
[0109/0390] saving model.layers.16.self_attn.qkv_proj.q_weight
[0110/0390] saving model.layers.16.self_attn.qkv_proj.q_scale
[0111/0390] saving model.layers.16.self_attn.o_proj.q_weight
[0112/0390] saving model.layers.16.self_attn.o_proj.q_scale
[0113/0390] saving model.layers.17.input_layernorm.bias
[0114/0390] saving model.layers.17.input_layernorm.weight
[0115/0390] saving model.layers.17.mlp.down_proj.q_weight
[0116/0390] saving model.layers.17.mlp.down_proj.q_scale
[0117/0390] saving model.layers.17.mlp.gate_up_proj.q_weight
[0118/0390] saving model.layers.17.mlp.gate_up_proj.q_scale
[0119/0390] saving model.layers.17.post_attention_layernorm.bias
[0120/0390] saving model.layers.17.post_attention_layernorm.weight
[0121/0390] saving model.layers.17.self_attn.qkv_proj.q_weight
[0122/0390] saving model.layers.17.self_attn.qkv_proj.q_scale
[0123/0390] saving model.layers.17.self_attn.o_proj.q_weight
[0124/0390] saving model.layers.17.self_attn.o_proj.q_scale
[0125/0390] saving model.layers.18.input_layernorm.bias
[0126/0390] saving model.layers.18.input_layernorm.weight
[0127/0390] saving model.layers.18.mlp.down_proj.q_weight
[0128/0390] saving model.layers.18.mlp.down_proj.q_scale
[0129/0390] saving model.layers.18.mlp.gate_up_proj.q_weight
[0130/0390] saving model.layers.18.mlp.gate_up_proj.q_scale
[0131/0390] saving model.layers.18.post_attention_layernorm.bias
[0132/0390] saving model.layers.18.post_attention_layernorm.weight
[0133/0390] saving model.layers.18.self_attn.qkv_proj.q_weight
[0134/0390] saving model.layers.18.self_attn.qkv_proj.q_scale
[0135/0390] saving model.layers.18.self_attn.o_proj.q_weight
[0136/0390] saving model.layers.18.self_attn.o_proj.q_scale
[0137/0390] saving model.layers.19.input_layernorm.bias
[0138/0390] saving model.layers.19.input_layernorm.weight
[0139/0390] saving model.layers.19.mlp.down_proj.q_weight
[0140/0390] saving model.layers.19.mlp.down_proj.q_scale
[0141/0390] saving model.layers.19.mlp.gate_up_proj.q_weight
[0142/0390] saving model.layers.19.mlp.gate_up_proj.q_scale
[0143/0390] saving model.layers.19.post_attention_layernorm.bias
[0144/0390] saving model.layers.19.post_attention_layernorm.weight
[0145/0390] saving model.layers.19.self_attn.qkv_proj.q_weight
[0146/0390] saving model.layers.19.self_attn.qkv_proj.q_scale
[0147/0390] saving model.layers.19.self_attn.o_proj.q_weight
[0148/0390] saving model.layers.19.self_attn.o_proj.q_scale
[0149/0390] saving model.layers.2.input_layernorm.bias
[0150/0390] saving model.layers.2.input_layernorm.weight
[0151/0390] saving model.layers.2.mlp.down_proj.q_weight
[0152/0390] saving model.layers.2.mlp.down_proj.q_scale
[0153/0390] saving model.layers.2.mlp.gate_up_proj.q_weight
[0154/0390] saving model.layers.2.mlp.gate_up_proj.q_scale
[0155/0390] saving model.layers.2.post_attention_layernorm.bias
[0156/0390] saving model.layers.2.post_attention_layernorm.weight
[0157/0390] saving model.layers.2.self_attn.qkv_proj.q_weight
[0158/0390] saving model.layers.2.self_attn.qkv_proj.q_scale
[0159/0390] saving model.layers.2.self_attn.o_proj.q_weight
[0160/0390] saving model.layers.2.self_attn.o_proj.q_scale
[0161/0390] saving model.layers.20.input_layernorm.bias
[0162/0390] saving model.layers.20.input_layernorm.weight
[0163/0390] saving model.layers.20.mlp.down_proj.q_weight
[0164/0390] saving model.layers.20.mlp.down_proj.q_scale
[0165/0390] saving model.layers.20.mlp.gate_up_proj.q_weight
[0166/0390] saving model.layers.20.mlp.gate_up_proj.q_scale
[0167/0390] saving model.layers.20.post_attention_layernorm.bias
[0168/0390] saving model.layers.20.post_attention_layernorm.weight
[0169/0390] saving model.layers.20.self_attn.qkv_proj.q_weight
[0170/0390] saving model.layers.20.self_attn.qkv_proj.q_scale
[0171/0390] saving model.layers.20.self_attn.o_proj.q_weight
[0172/0390] saving model.layers.20.self_attn.o_proj.q_scale
[0173/0390] saving model.layers.21.input_layernorm.bias
[0174/0390] saving model.layers.21.input_layernorm.weight
[0175/0390] saving model.layers.21.mlp.down_proj.q_weight
[0176/0390] saving model.layers.21.mlp.down_proj.q_scale
[0177/0390] saving model.layers.21.mlp.gate_up_proj.q_weight
[0178/0390] saving model.layers.21.mlp.gate_up_proj.q_scale
[0179/0390] saving model.layers.21.post_attention_layernorm.bias
[0180/0390] saving model.layers.21.post_attention_layernorm.weight
[0181/0390] saving model.layers.21.self_attn.qkv_proj.q_weight
[0182/0390] saving model.layers.21.self_attn.qkv_proj.q_scale
[0183/0390] saving model.layers.21.self_attn.o_proj.q_weight
[0184/0390] saving model.layers.21.self_attn.o_proj.q_scale
[0185/0390] saving model.layers.22.input_layernorm.bias
[0186/0390] saving model.layers.22.input_layernorm.weight
[0187/0390] saving model.layers.22.mlp.down_proj.q_weight
[0188/0390] saving model.layers.22.mlp.down_proj.q_scale
[0189/0390] saving model.layers.22.mlp.gate_up_proj.q_weight
[0190/0390] saving model.layers.22.mlp.gate_up_proj.q_scale
[0191/0390] saving model.layers.22.post_attention_layernorm.bias
[0192/0390] saving model.layers.22.post_attention_layernorm.weight
[0193/0390] saving model.layers.22.self_attn.qkv_proj.q_weight
[0194/0390] saving model.layers.22.self_attn.qkv_proj.q_scale
[0195/0390] saving model.layers.22.self_attn.o_proj.q_weight
[0196/0390] saving model.layers.22.self_attn.o_proj.q_scale
[0197/0390] saving model.layers.23.input_layernorm.bias
[0198/0390] saving model.layers.23.input_layernorm.weight
[0199/0390] saving model.layers.23.mlp.down_proj.q_weight
[0200/0390] saving model.layers.23.mlp.down_proj.q_scale
[0201/0390] saving model.layers.23.mlp.gate_up_proj.q_weight
[0202/0390] saving model.layers.23.mlp.gate_up_proj.q_scale
[0203/0390] saving model.layers.23.post_attention_layernorm.bias
[0204/0390] saving model.layers.23.post_attention_layernorm.weight
[0205/0390] saving model.layers.23.self_attn.qkv_proj.q_weight
[0206/0390] saving model.layers.23.self_attn.qkv_proj.q_scale
[0207/0390] saving model.layers.23.self_attn.o_proj.q_weight
[0208/0390] saving model.layers.23.self_attn.o_proj.q_scale
[0209/0390] saving model.layers.24.input_layernorm.bias
[0210/0390] saving model.layers.24.input_layernorm.weight
[0211/0390] saving model.layers.24.mlp.down_proj.q_weight
[0212/0390] saving model.layers.24.mlp.down_proj.q_scale
[0213/0390] saving model.layers.24.mlp.gate_up_proj.q_weight
[0214/0390] saving model.layers.24.mlp.gate_up_proj.q_scale
[0215/0390] saving model.layers.24.post_attention_layernorm.bias
[0216/0390] saving model.layers.24.post_attention_layernorm.weight
[0217/0390] saving model.layers.24.self_attn.qkv_proj.q_weight
[0218/0390] saving model.layers.24.self_attn.qkv_proj.q_scale
[0219/0390] saving model.layers.24.self_attn.o_proj.q_weight
[0220/0390] saving model.layers.24.self_attn.o_proj.q_scale
[0221/0390] saving model.layers.25.input_layernorm.bias
[0222/0390] saving model.layers.25.input_layernorm.weight
[0223/0390] saving model.layers.25.mlp.down_proj.q_weight
[0224/0390] saving model.layers.25.mlp.down_proj.q_scale
[0225/0390] saving model.layers.25.mlp.gate_up_proj.q_weight
[0226/0390] saving model.layers.25.mlp.gate_up_proj.q_scale
[0227/0390] saving model.layers.25.post_attention_layernorm.bias
[0228/0390] saving model.layers.25.post_attention_layernorm.weight
[0229/0390] saving model.layers.25.self_attn.qkv_proj.q_weight
[0230/0390] saving model.layers.25.self_attn.qkv_proj.q_scale
[0231/0390] saving model.layers.25.self_attn.o_proj.q_weight
[0232/0390] saving model.layers.25.self_attn.o_proj.q_scale
[0233/0390] saving model.layers.26.input_layernorm.bias
[0234/0390] saving model.layers.26.input_layernorm.weight
[0235/0390] saving model.layers.26.mlp.down_proj.q_weight
[0236/0390] saving model.layers.26.mlp.down_proj.q_scale
[0237/0390] saving model.layers.26.mlp.gate_up_proj.q_weight
[0238/0390] saving model.layers.26.mlp.gate_up_proj.q_scale
[0239/0390] saving model.layers.26.post_attention_layernorm.bias
[0240/0390] saving model.layers.26.post_attention_layernorm.weight
[0241/0390] saving model.layers.26.self_attn.qkv_proj.q_weight
[0242/0390] saving model.layers.26.self_attn.qkv_proj.q_scale
[0243/0390] saving model.layers.26.self_attn.o_proj.q_weight
[0244/0390] saving model.layers.26.self_attn.o_proj.q_scale
[0245/0390] saving model.layers.27.input_layernorm.bias
[0246/0390] saving model.layers.27.input_layernorm.weight
[0247/0390] saving model.layers.27.mlp.down_proj.q_weight
[0248/0390] saving model.layers.27.mlp.down_proj.q_scale
[0249/0390] saving model.layers.27.mlp.gate_up_proj.q_weight
[0250/0390] saving model.layers.27.mlp.gate_up_proj.q_scale
[0251/0390] saving model.layers.27.post_attention_layernorm.bias
[0252/0390] saving model.layers.27.post_attention_layernorm.weight
[0253/0390] saving model.layers.27.self_attn.qkv_proj.q_weight
[0254/0390] saving model.layers.27.self_attn.qkv_proj.q_scale
[0255/0390] saving model.layers.27.self_attn.o_proj.q_weight
[0256/0390] saving model.layers.27.self_attn.o_proj.q_scale
[0257/0390] saving model.layers.28.input_layernorm.bias
[0258/0390] saving model.layers.28.input_layernorm.weight
[0259/0390] saving model.layers.28.mlp.down_proj.q_weight
[0260/0390] saving model.layers.28.mlp.down_proj.q_scale
[0261/0390] saving model.layers.28.mlp.gate_up_proj.q_weight
[0262/0390] saving model.layers.28.mlp.gate_up_proj.q_scale
[0263/0390] saving model.layers.28.post_attention_layernorm.bias
[0264/0390] saving model.layers.28.post_attention_layernorm.weight
[0265/0390] saving model.layers.28.self_attn.qkv_proj.q_weight
[0266/0390] saving model.layers.28.self_attn.qkv_proj.q_scale
[0267/0390] saving model.layers.28.self_attn.o_proj.q_weight
[0268/0390] saving model.layers.28.self_attn.o_proj.q_scale
[0269/0390] saving model.layers.29.input_layernorm.bias
[0270/0390] saving model.layers.29.input_layernorm.weight
[0271/0390] saving model.layers.29.mlp.down_proj.q_weight
[0272/0390] saving model.layers.29.mlp.down_proj.q_scale
[0273/0390] saving model.layers.29.mlp.gate_up_proj.q_weight
[0274/0390] saving model.layers.29.mlp.gate_up_proj.q_scale
[0275/0390] saving model.layers.29.post_attention_layernorm.bias
[0276/0390] saving model.layers.29.post_attention_layernorm.weight
[0277/0390] saving model.layers.29.self_attn.qkv_proj.q_weight
[0278/0390] saving model.layers.29.self_attn.qkv_proj.q_scale
[0279/0390] saving model.layers.29.self_attn.o_proj.q_weight
[0280/0390] saving model.layers.29.self_attn.o_proj.q_scale
[0281/0390] saving model.layers.3.input_layernorm.bias
[0282/0390] saving model.layers.3.input_layernorm.weight
[0283/0390] saving model.layers.3.mlp.down_proj.q_weight
[0284/0390] saving model.layers.3.mlp.down_proj.q_scale
[0285/0390] saving model.layers.3.mlp.gate_up_proj.q_weight
[0286/0390] saving model.layers.3.mlp.gate_up_proj.q_scale
[0287/0390] saving model.layers.3.post_attention_layernorm.bias
[0288/0390] saving model.layers.3.post_attention_layernorm.weight
[0289/0390] saving model.layers.3.self_attn.qkv_proj.q_weight
[0290/0390] saving model.layers.3.self_attn.qkv_proj.q_scale
[0291/0390] saving model.layers.3.self_attn.o_proj.q_weight
[0292/0390] saving model.layers.3.self_attn.o_proj.q_scale
[0293/0390] saving model.layers.30.input_layernorm.bias
[0294/0390] saving model.layers.30.input_layernorm.weight
[0295/0390] saving model.layers.30.mlp.down_proj.q_weight
[0296/0390] saving model.layers.30.mlp.down_proj.q_scale
[0297/0390] saving model.layers.30.mlp.gate_up_proj.q_weight
[0298/0390] saving model.layers.30.mlp.gate_up_proj.q_scale
[0299/0390] saving model.layers.30.post_attention_layernorm.bias
[0300/0390] saving model.layers.30.post_attention_layernorm.weight
[0301/0390] saving model.layers.30.self_attn.qkv_proj.q_weight
[0302/0390] saving model.layers.30.self_attn.qkv_proj.q_scale
[0303/0390] saving model.layers.30.self_attn.o_proj.q_weight
[0304/0390] saving model.layers.30.self_attn.o_proj.q_scale
[0305/0390] saving model.layers.31.input_layernorm.bias
[0306/0390] saving model.layers.31.input_layernorm.weight
[0307/0390] saving model.layers.31.mlp.down_proj.q_weight
[0308/0390] saving model.layers.31.mlp.down_proj.q_scale
[0309/0390] saving model.layers.31.mlp.gate_up_proj.q_weight
[0310/0390] saving model.layers.31.mlp.gate_up_proj.q_scale
[0311/0390] saving model.layers.31.post_attention_layernorm.bias
[0312/0390] saving model.layers.31.post_attention_layernorm.weight
[0313/0390] saving model.layers.31.self_attn.qkv_proj.q_weight
[0314/0390] saving model.layers.31.self_attn.qkv_proj.q_scale
[0315/0390] saving model.layers.31.self_attn.o_proj.q_weight
[0316/0390] saving model.layers.31.self_attn.o_proj.q_scale
[0317/0390] saving model.layers.4.input_layernorm.bias
[0318/0390] saving model.layers.4.input_layernorm.weight
[0319/0390] saving model.layers.4.mlp.down_proj.q_weight
[0320/0390] saving model.layers.4.mlp.down_proj.q_scale
[0321/0390] saving model.layers.4.mlp.gate_up_proj.q_weight
[0322/0390] saving model.layers.4.mlp.gate_up_proj.q_scale
[0323/0390] saving model.layers.4.post_attention_layernorm.bias
[0324/0390] saving model.layers.4.post_attention_layernorm.weight
[0325/0390] saving model.layers.4.self_attn.qkv_proj.q_weight
[0326/0390] saving model.layers.4.self_attn.qkv_proj.q_scale
[0327/0390] saving model.layers.4.self_attn.o_proj.q_weight
[0328/0390] saving model.layers.4.self_attn.o_proj.q_scale
[0329/0390] saving model.layers.5.input_layernorm.bias
[0330/0390] saving model.layers.5.input_layernorm.weight
[0331/0390] saving model.layers.5.mlp.down_proj.q_weight
[0332/0390] saving model.layers.5.mlp.down_proj.q_scale
[0333/0390] saving model.layers.5.mlp.gate_up_proj.q_weight
[0334/0390] saving model.layers.5.mlp.gate_up_proj.q_scale
[0335/0390] saving model.layers.5.post_attention_layernorm.bias
[0336/0390] saving model.layers.5.post_attention_layernorm.weight
[0337/0390] saving model.layers.5.self_attn.qkv_proj.q_weight
[0338/0390] saving model.layers.5.self_attn.qkv_proj.q_scale
[0339/0390] saving model.layers.5.self_attn.o_proj.q_weight
[0340/0390] saving model.layers.5.self_attn.o_proj.q_scale
[0341/0390] saving model.layers.6.input_layernorm.bias
[0342/0390] saving model.layers.6.input_layernorm.weight
[0343/0390] saving model.layers.6.mlp.down_proj.q_weight
[0344/0390] saving model.layers.6.mlp.down_proj.q_scale
[0345/0390] saving model.layers.6.mlp.gate_up_proj.q_weight
[0346/0390] saving model.layers.6.mlp.gate_up_proj.q_scale
[0347/0390] saving model.layers.6.post_attention_layernorm.bias
[0348/0390] saving model.layers.6.post_attention_layernorm.weight
[0349/0390] saving model.layers.6.self_attn.qkv_proj.q_weight
[0350/0390] saving model.layers.6.self_attn.qkv_proj.q_scale
[0351/0390] saving model.layers.6.self_attn.o_proj.q_weight
[0352/0390] saving model.layers.6.self_attn.o_proj.q_scale
[0353/0390] saving model.layers.7.input_layernorm.bias
[0354/0390] saving model.layers.7.input_layernorm.weight
[0355/0390] saving model.layers.7.mlp.down_proj.q_weight
[0356/0390] saving model.layers.7.mlp.down_proj.q_scale
[0357/0390] saving model.layers.7.mlp.gate_up_proj.q_weight
[0358/0390] saving model.layers.7.mlp.gate_up_proj.q_scale
[0359/0390] saving model.layers.7.post_attention_layernorm.bias
[0360/0390] saving model.layers.7.post_attention_layernorm.weight
[0361/0390] saving model.layers.7.self_attn.qkv_proj.q_weight
[0362/0390] saving model.layers.7.self_attn.qkv_proj.q_scale
[0363/0390] saving model.layers.7.self_attn.o_proj.q_weight
[0364/0390] saving model.layers.7.self_attn.o_proj.q_scale
[0365/0390] saving model.layers.8.input_layernorm.bias
[0366/0390] saving model.layers.8.input_layernorm.weight
[0367/0390] saving model.layers.8.mlp.down_proj.q_weight
[0368/0390] saving model.layers.8.mlp.down_proj.q_scale
[0369/0390] saving model.layers.8.mlp.gate_up_proj.q_weight
[0370/0390] saving model.layers.8.mlp.gate_up_proj.q_scale
[0371/0390] saving model.layers.8.post_attention_layernorm.bias
[0372/0390] saving model.layers.8.post_attention_layernorm.weight
[0373/0390] saving model.layers.8.self_attn.qkv_proj.q_weight
[0374/0390] saving model.layers.8.self_attn.qkv_proj.q_scale
[0375/0390] saving model.layers.8.self_attn.o_proj.q_weight
[0376/0390] saving model.layers.8.self_attn.o_proj.q_scale
[0377/0390] saving model.layers.9.input_layernorm.bias
[0378/0390] saving model.layers.9.input_layernorm.weight
[0379/0390] saving model.layers.9.mlp.down_proj.q_weight
[0380/0390] saving model.layers.9.mlp.down_proj.q_scale
[0381/0390] saving model.layers.9.mlp.gate_up_proj.q_weight
[0382/0390] saving model.layers.9.mlp.gate_up_proj.q_scale
[0383/0390] saving model.layers.9.post_attention_layernorm.bias
[0384/0390] saving model.layers.9.post_attention_layernorm.weight
[0385/0390] saving model.layers.9.self_attn.qkv_proj.q_weight
[0386/0390] saving model.layers.9.self_attn.qkv_proj.q_scale
[2024-02-02 20:03:41] INFO convert_weight.py:143: Saved to directory: /tmp/tmpb2cdvwez
[0387/0390] saving model.layers.9.self_attn.o_proj.q_weight
[0388/0390] saving model.layers.9.self_attn.o_proj.q_scale
[0389/0390] saving model.norm.bias
[0390/0390] saving model.norm.weight
All finished, 67 total shards committed, record saved to /tmp/tmpb2cdvwez/ndarray-cache.json
Also saved a bf16 record to /tmp/tmpb2cdvwez/ndarray-cache-b16.json
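(Note: ndarray-cache.json is a JSON index over the 67 parameter shards committed above. Below is a hedged inspection sketch; it assumes the TVM/MLC ndarray-cache layout of a top-level "records" list of shard entries, each holding a nested "records" list of tensor metadata, so adjust the key names if the format differs.)

import json

# Load the shard record written at the end of the conversion run above.
with open("/tmp/tmpb2cdvwez/ndarray-cache.json") as f:
    cache = json.load(f)

shards = cache.get("records", [])
print(len(shards), "shards")          # the log above reports 67
for shard in shards[:2]:              # peek at the first two shards only
    for tensor in shard.get("records", []):
        print(tensor.get("name"), tensor.get("dtype"), tensor.get("shape"))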