# QuantFactory/diffullama-GGUF
This is a quantized (GGUF) version of diffusionfamily/diffullama, created using llama.cpp.
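The card does not document the exact commands used, but GGUF quantizations like this one are typically produced with llama.cpp's standard two-step workflow (a sketch; paths, filenames, and the chosen quantization type below are assumptions):

```shell
# Sketch of the usual llama.cpp quantization workflow (assumed; not taken
# from this card). Run from a checkout of https://github.com/ggerganov/llama.cpp
# with the tools built.

# 1. Convert the original Hugging Face checkpoint to a full-precision GGUF.
#    "./diffullama" is a hypothetical local path to the original model.
python convert_hf_to_gguf.py ./diffullama \
  --outfile diffullama-f16.gguf --outtype f16

# 2. Quantize the f16 GGUF to a smaller format, e.g. Q4_K_M.
./llama-quantize diffullama-f16.gguf diffullama-Q4_K_M.gguf Q4_K_M
```

Step 2 can be repeated with different quantization types (Q8_0, Q5_K_M, etc.) to trade file size against quality.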
## Original Model Card

### diffullama
This model is a fine-tuned version of Llama 2.
#### Model description
Details and model-loading instructions are available at https://github.com/HKUNLP/DiffuLLaMA.
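To obtain a GGUF file from this repo programmatically, `huggingface_hub` can be used. A minimal sketch follows; the filename is an assumption (check the repo's file list), and diffusion-specific inference itself is handled by the DiffuLLaMA codebase linked above, not by this snippet:

```python
# Hypothetical sketch: download one GGUF file from this repo.
# The filename below is an assumption; inspect the repo for actual names.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="QuantFactory/diffullama-GGUF",
    filename="diffullama.Q4_K_M.gguf",  # assumed filename
)
print(path)  # local cache path of the downloaded file
```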
#### Framework versions
- Transformers 4.44.2
- PyTorch 2.1.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
#### Citation

    @misc{gong2024scalingdiffusionlanguagemodels,
      title={Scaling Diffusion Language Models via Adaptation from Autoregressive Models},
      author={Shansan Gong and Shivam Agarwal and Yizhe Zhang and Jiacheng Ye and Lin Zheng and Mukai Li and Chenxin An and Peilin Zhao and Wei Bi and Jiawei Han and Hao Peng and Lingpeng Kong},
      year={2024},
      eprint={2410.17891},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2410.17891},
    }
## Model tree for QuantFactory/diffullama-GGUF

Base model: meta-llama/Llama-2-7b-hf