
A QuRater model fine-tuned from the 1.3B Sheared-LLaMA model.

From the paper: QuRating: Selecting High-Quality Data for Training Language Models

The model is a sequence classification model that predicts quality ratings across four criteria. The ratings (which are unnormalized) can be read from the four logits in the model's output, in the following order:

  • Logit 0: Writing Style
  • Logit 1: Required Expertise
  • Logit 2: Facts and Trivia
  • Logit 3: Educational Value
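As a sketch of how the four logits map to criteria (the helper function and the example logit values below are illustrative, not outputs of the model; in practice the logits would come from a forward pass, e.g. via transformers):

```python
# Order of the four quality criteria, matching the model's output logits.
CRITERIA = [
    "Writing Style",
    "Required Expertise",
    "Facts and Trivia",
    "Educational Value",
]

def label_ratings(logits):
    """Pair each of the four (unnormalized) logits with its criterion name."""
    assert len(logits) == len(CRITERIA), "expected exactly four logits"
    return dict(zip(CRITERIA, logits))

# Illustrative values only; real logits come from the model's forward pass.
ratings = label_ratings([1.2, -0.3, 0.8, 2.1])
```

Because the ratings are unnormalized, they are meaningful for ranking and comparing documents rather than as absolute scores.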

Note: The model was fine-tuned only on sequences of up to 512 tokens and should not be applied directly to longer documents. Instead, compute the quality ratings for windows of up to 512 tokens and average them, weighting each window by its length.
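The length-weighted averaging over windows can be sketched as follows (the `score_window` argument is a stand-in for a forward pass of the QuRater model returning one rating per window; the token IDs used below are illustrative):

```python
def chunk(token_ids, window=512):
    """Split a token-ID sequence into consecutive windows of at most `window` tokens."""
    return [token_ids[i:i + window] for i in range(0, len(token_ids), window)]

def weighted_rating(token_ids, score_window, window=512):
    """Average per-window ratings, weighting each window by its token count."""
    windows = chunk(token_ids, window)
    total = sum(len(w) for w in windows)
    return sum(score_window(w) * len(w) for w in windows) / total

# In practice, score_window would run the model on one window and return
# the logit for the criterion of interest (e.g. logit 3, Educational Value).
```

With a constant scorer this reduces to the constant itself; with varying window scores, shorter trailing windows correctly contribute less to the document-level rating.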

Guidance on Responsible Use:

In the paper, we document various types of bias present in the quality ratings from the QuRater model (biases related to domains, topics, social roles, regions, and languages; see Section 6 of the paper). Hence, be aware that data selection with QuRating could have unintended and harmful effects on the language model being trained. We strongly recommend a comprehensive evaluation of the language model for these and other types of bias, particularly before real-world deployment. We hope that releasing the data/models can facilitate future research aimed at uncovering and mitigating such biases. Note that the quality ratings do not measure the social or literary value of a text and should not be used for textual or demographic studies.

