build: 3787 (6026da52) with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
llama_model_loader: loaded meta data with 35 key-value pairs and 963 tensors from Qwen2.5-72B-Instruct-IMat-GGUF/Qwen2.5-72B-Instruct.Q8_0.gguf.hardlink.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Qwen2.5 72B Instruct
llama_model_loader: - kv   3:                           general.finetune str              = Instruct
llama_model_loader: - kv   4:                           general.basename str              = Qwen2.5
llama_model_loader: - kv   5:                         general.size_label str              = 72B
llama_model_loader: - kv   6:                            general.license str              = other
llama_model_loader: - kv   7:                       general.license.name str              = qwen
llama_model_loader: - kv   8:                       general.license.link str              = https://huggingface.co/Qwen/Qwen2.5-7...
llama_model_loader: - kv   9:                   general.base_model.count u32              = 1
llama_model_loader: - kv  10:                  general.base_model.0.name str              = Qwen2.5 72B
llama_model_loader: - kv  11:          general.base_model.0.organization str              = Qwen
llama_model_loader: - kv  12:              general.base_model.0.repo_url str              = https://huggingface.co/Qwen/Qwen2.5-72B
llama_model_loader: - kv  13:                               general.tags arr[str,2]       = ["chat", "text-generation"]
llama_model_loader: - kv  14:                          general.languages arr[str,1]       = ["en"]
llama_model_loader: - kv  15:                          qwen2.block_count u32              = 80
llama_model_loader: - kv  16:                       qwen2.context_length u32              = 32768
llama_model_loader: - kv  17:                     qwen2.embedding_length u32              = 8192
llama_model_loader: - kv  18:                  qwen2.feed_forward_length u32              = 29568
llama_model_loader: - kv  19:                 qwen2.attention.head_count u32              = 64
llama_model_loader: - kv  20:              qwen2.attention.head_count_kv u32              = 8
llama_model_loader: - kv  21:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  22:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  23:                          general.file_type u32              = 7
llama_model_loader: - kv  24:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  25:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  26:                      tokenizer.ggml.tokens arr[str,152064]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  27:                  tokenizer.ggml.token_type arr[i32,152064]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  28:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t", ...
llama_model_loader: - kv  29:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  30:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  31:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  32:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  33:                    tokenizer.chat_template str              = {%- if tools %}\n {{- '<|im_start|>...
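The 35 key-value pairs above are plain GGUF metadata and can be inspected offline, without loading the ~72 GiB of weights. A minimal sketch, assuming the `gguf` Python package published from the llama.cpp repo (`pip install gguf`); the value decoding below is deliberately simplified to scalar and string fields and may need adjusting across `gguf` versions:

```python
# Sketch: dump GGUF metadata keys/values offline, roughly mirroring the
# llama_model_loader output above. Assumes the `gguf` package from llama.cpp.
from gguf import GGUFReader, GGUFValueType

reader = GGUFReader("Qwen2.5-72B-Instruct.Q8_0.gguf")  # path is illustrative

for name, field in reader.fields.items():
    vtype = field.types[0] if field.types else None
    if vtype == GGUFValueType.STRING:
        # string payloads are raw bytes in the memory-mapped file
        value = bytes(field.parts[field.data[0]]).decode("utf-8")
    elif vtype == GGUFValueType.ARRAY:
        value = f"arr[..., {len(field.data)} items]"  # skip array contents
    else:
        value = field.parts[field.data[0]][0]  # numeric scalar / bool
    print(f"{name:45s} {value}")

print(f"{len(reader.tensors)} tensors")
```

Run against the file above, this should reproduce the kv dump (e.g. `qwen2.block_count = 80`) and the 963-tensor count.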
llama_model_loader: - kv  34:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  401 tensors
llama_model_loader: - type q8_0:  562 tensors
llm_load_vocab: special tokens cache size = 22
llm_load_vocab: token to piece cache size = 0.9310 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = qwen2
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 152064
llm_load_print_meta: n_merges         = 151387
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 8192
llm_load_print_meta: n_layer          = 80
llm_load_print_meta: n_head           = 64
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 8
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 29568
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = 70B
llm_load_print_meta: model ftype      = Q8_0
llm_load_print_meta: model params     = 72.71 B
llm_load_print_meta: model size       = 71.95 GiB (8.50 BPW)
llm_load_print_meta: general.name     = Qwen2.5 72B Instruct
llm_load_print_meta: BOS token        = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token        = 151645 '<|im_end|>'
llm_load_print_meta: PAD token        = 151643 '<|endoftext|>'
llm_load_print_meta: LF token         = 148848 'ÄĬ'
llm_load_print_meta: EOT token        = 151645 '<|im_end|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
llm_load_tensors: ggml ctx size = 0.85 MiB
llm_load_tensors: offloading 24 repeating layers to GPU
llm_load_tensors: offloaded 24/81 layers to GPU
llm_load_tensors:   CPU buffer size = 73677.66 MiB
llm_load_tensors: CUDA0 buffer size = 21345.94 MiB
...................................................................................................
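A quick sanity check on the loader's figures: the reported model size follows directly from the parameter count and bits-per-weight, and the CUDA0 weight buffer implies roughly 890 MiB of VRAM per offloaded layer. (The CPU buffer stays near the full 71.95 GiB because, with a memory-mapped load, it appears to cover the whole file even though 24 layers are also copied to VRAM.) All inputs below are values printed in the log:

```python
# Back-of-the-envelope check of the loader's size figures (values from the log).
params = 72.71e9      # llm_load_print_meta: model params = 72.71 B
bpw    = 8.50         # bits per weight reported for this Q8_0 file
GiB    = 1024**3

print(f"model size ~ {params * bpw / 8 / GiB:.2f} GiB")  # -> ~71.95 GiB, matches

# 24 of the 80 repeating layers were offloaded to the RTX 4090:
cuda0_mib = 21345.94  # llm_load_tensors: CUDA0 buffer size
print(f"~{cuda0_mib / 24:.0f} MiB of VRAM per offloaded layer")  # -> ~889 MiB
```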
llama_new_context_with_model: n_ctx      = 512
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA_Host KV buffer size = 112.00 MiB
llama_kv_cache_init:     CUDA0 KV buffer size =  48.00 MiB
llama_new_context_with_model: KV self size = 160.00 MiB, K (f16): 80.00 MiB, V (f16): 80.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size  =    0.58 MiB
llama_new_context_with_model:     CUDA0 compute buffer size = 1575.25 MiB
llama_new_context_with_model: CUDA_Host compute buffer size =   17.01 MiB
llama_new_context_with_model: graph nodes  = 2806
llama_new_context_with_model: graph splits = 788

system_info: n_threads = 25 (n_threads_batch = 25) / 32 | AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | AVX512_BF16 = 1 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 |
compute_imatrix: tokenizing the input ..
compute_imatrix: tokenization took 125.949 ms
compute_imatrix: computing over 128 chunks with batch_size 512
compute_imatrix: 9.39 seconds per pass - ETA 20.03 minutes
[1]4.0467,[2]2.9873,[3]2.8087,[4]3.0234,[5]2.9891,[6]2.7538,[7]2.9255,[8]2.9496,[9]3.3081,[10]3.2863,
[11]3.3080,[12]3.6255,[13]4.0152,[14]4.2567,[15]4.6175,[16]4.8913,[17]5.1014,[18]5.4834,[19]5.3059,[20]5.4344,
[21]5.4322,[22]5.4811,[23]5.4042,[24]5.5849,[25]5.7222,[26]5.6376,[27]5.4498,[28]5.1735,[29]5.0485,[30]5.0490,
[31]4.9413,[32]4.7763,[33]4.6960,[34]4.6488,[35]4.6337,[36]4.6216,[37]4.6191,[38]4.6656,[39]4.6504,[40]4.7819,
[41]4.8328,[42]4.6855,[43]4.5342,[44]4.4345,[45]4.3131,[46]4.3076,[47]4.2861,[48]4.3629,[49]4.4574,[50]4.5277,
[51]4.4941,[52]4.5846,[53]4.6879,[54]4.7710,[55]4.8245,[56]4.8975,[57]4.9536,[58]5.0200,[59]5.0692,[60]5.1019,
[61]5.1107,[62]5.1081,[63]5.1540,[64]5.2393,[65]5.2118,[66]5.2220,[67]5.2454,[68]5.2070,[69]5.1820,[70]5.1878,
[71]5.1802,[72]5.1822,[73]5.1937,[74]5.1624,[75]5.1332,[76]5.1107,[77]5.1126,[78]5.1093,[79]5.1015,[80]5.0576,
[81]5.0862,[82]5.0871,[83]5.0631,[84]5.0764,[85]5.0904,[86]5.0784,[87]5.0733,[88]5.0705,[89]5.0957,[90]5.1264,
[91]5.1309,[92]5.1142,[93]5.0933,[94]5.0644,[95]5.0430,[96]5.0226,[97]4.9991,[98]4.9784,[99]4.9671,[100]4.9869,
[101]5.0155,[102]5.0910,[103]5.1626,[104]5.2165,[105]5.3081,[106]5.3701,[107]5.3980,[108]5.4013,[109]5.4127,[110]5.4034,
[111]5.3518,[112]5.2911,[113]5.2462,[114]5.2897,[115]5.3100,[116]5.3263,[117]5.3479,[118]5.3805,[119]5.3877,[120]5.3941,
[121]5.4189,[122]5.3992,[123]5.4158,[124]5.3764,[125]5.3364,[126]5.2911,[127]5.2408,[128]5.1974,
Final estimate: PPL = 5.1974 +/- 0.06975

llama_perf_context_print:        load time = 39800.68 ms
llama_perf_context_print: prompt eval time = 809653.10 ms / 65536 tokens (12.35 ms per token, 80.94 tokens per second)
llama_perf_context_print:        eval time = 0.00 ms / 1 runs (0.00 ms per token, inf tokens per second)
llama_perf_context_print:       total time = 841419.99 ms / 65537 tokens
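The KV-cache numbers also line up: with an f16 cache, the size is n_layer × n_ctx × (n_embd_k_gqa + n_embd_v_gqa) × 2 bytes, and it splits between host and GPU in the same 56/24 ratio as the layers. A minimal sketch using the values printed above:

```python
# Reproduce the llama_kv_cache_init sizing from the log (f16 K and V).
n_layer      = 80    # qwen2.block_count
n_ctx        = 512   # llama_new_context_with_model: n_ctx
n_embd_k_gqa = 1024  # 8 KV heads x 128 head dim (GQA)
n_embd_v_gqa = 1024
MiB = 1024**2

kv = n_layer * n_ctx * (n_embd_k_gqa + n_embd_v_gqa) * 2  # 2 bytes per f16
print(f"KV self size = {kv / MiB:.2f} MiB")            # -> 160.00 MiB
print(f"CUDA0 KV     = {kv * 24 / 80 / MiB:.2f} MiB")  # -> 48.00 MiB (24 GPU layers)
print(f"CUDA_Host KV = {kv * 56 / 80 / MiB:.2f} MiB")  # -> 112.00 MiB (56 CPU layers)
```

The run itself covers 128 chunks × 512 tokens = 65,536 tokens, matching the prompt-eval token count in the perf summary; note the final PPL = 5.1974 ± 0.06975 is measured on the imatrix calibration input itself, not on a standard benchmark corpus.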