---
library_name: transformers
tags:
- llama-factory
license: llama3
datasets:
- allenai/ValuePrism
- Value4AI/ValueBench
language:
- en
---
[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)
# QuantFactory/ValueLlama-3-8B-GGUF
This is a quantized version of [Value4AI/ValueLlama-3-8B](https://huggingface.co/Value4AI/ValueLlama-3-8B), created with llama.cpp.
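
Below is a minimal sketch of running one of the GGUF quants locally with llama-cpp-python. The `filename` glob is a placeholder assumption (check this repo's file list for the actual quant filenames), and `n_ctx` is an arbitrary choice for the demo:

```python
# Hedged sketch: load a GGUF quant from this repo via llama-cpp-python.
# The filename glob is a placeholder; substitute a real .gguf file
# from the repository's file list.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/ValueLlama-3-8B-GGUF",
    filename="*Q4_K_M.gguf",  # assumption: a Q4_K_M quant exists in the repo
    n_ctx=4096,               # context length chosen arbitrarily for this demo
)

out = llm.create_chat_completion(
    messages=[{
        "role": "user",
        "content": "Value: benevolence\nPerception: I enjoy helping my neighbors.\n"
                   "Is the perception relevant to the value?",
    }],
    max_tokens=16,
)
print(out["choices"][0]["message"]["content"])
```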
# Original Model Card
# Model Card for ValueLlama
## Model Description
ValueLlama is designed for perception-level value measurement in an open-ended value space. It covers two tasks: (1) relevance classification, which determines whether a perception is relevant to a value; and (2) valence classification, which determines whether a perception supports, opposes, or is neutral (context-dependent) toward a value. Both tasks are formulated as generating a label given a value and a perception, as sketched below.
- **Model type:** Language model
- **Language(s) (NLP):** en
- **Finetuned from model:** [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
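
As a rough illustration of the label-generation formulation, here is a hedged `transformers` sketch. The instruction wording and label sets are assumptions for illustration only; the prompt templates actually used are defined in the Value4AI/gpv codebase linked under Uses:

```python
# Hedged sketch of the two tasks as label generation with transformers.
# The instructions and labels here are illustrative assumptions, not the
# exact templates the model was trained with (see the Value4AI/gpv repo).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Value4AI/ValueLlama-3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

def classify(instruction: str, value: str, perception: str) -> str:
    """Generate a short label given a value and a perception."""
    messages = [{
        "role": "user",
        "content": f"{instruction}\nValue: {value}\nPerception: {perception}",
    }]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=8, do_sample=False)
    return tokenizer.decode(
        output[0][input_ids.shape[-1]:], skip_special_tokens=True
    ).strip()

# Task 1: relevance classification (e.g., relevant / irrelevant).
print(classify("Is the perception relevant to the value? Answer with one label.",
               "benevolence", "I enjoy helping my neighbors."))

# Task 2: valence classification (e.g., supports / opposes / context-dependent).
print(classify("Does the perception support or oppose the value, "
               "or is it context-dependent? Answer with one label.",
               "benevolence", "I enjoy helping my neighbors."))
```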
## Paper
For more information, please refer to our paper: [*Measuring Human and AI Values based on Generative Psychometrics with Large Language Models*](https://arxiv.org/abs/2409.12106).
## Uses
It is intended for use in **research** to measure human/AI values and conduct related analyses.
See our codebase for more details: [https://github.com/Value4AI/gpv](https://github.com/Value4AI/gpv).
## BibTeX:
If you find this model helpful, please consider citing our paper:
```bibtex
@misc{ye2024gpv,
      title={Measuring Human and AI Values based on Generative Psychometrics with Large Language Models},
      author={Haoran Ye and Yuhang Xie and Yuanyi Ren and Hanjun Fang and Xin Zhang and Guojie Song},
      year={2024},
      eprint={2409.12106},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2409.12106},
}
```