---
dataset_info:
  features:
    - name: text
      dtype: string
    - name: annotations
      struct:
        - name: annotator 1
          struct:
            - name: category
              dtype: string
            - name: comment
              dtype: string
            - name: hate_speech
              dtype: string
            - name: misogyny
              dtype: string
            - name: rating
              dtype: string
        - name: annotator 2
          struct:
            - name: category
              dtype: string
            - name: comment
              dtype: string
            - name: hate_speech
              dtype: string
            - name: misogyny
              dtype: string
            - name: rating
              dtype: string
        - name: annotator 3
          struct:
            - name: category
              dtype: string
            - name: comment
              dtype: string
            - name: hate_speech
              dtype: string
            - name: misogyny
              dtype: string
            - name: rating
              dtype: string
        - name: annotator 4
          struct:
            - name: category
              dtype: string
            - name: comment
              dtype: string
            - name: hate_speech
              dtype: string
            - name: misogyny
              dtype: string
            - name: rating
              dtype: string
  splits:
    - name: train
      num_bytes: 153663
      num_examples: 150
    - name: val
      num_bytes: 182637
      num_examples: 150
    - name: test
      num_bytes: 176851
      num_examples: 150
  download_size: 308431
  dataset_size: 513151
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: val
        path: data/val-*
      - split: test
        path: data/test-*
---

## About BiaSWE

We present BiaSWE, a small dataset for misogyny detection in Swedish, annotated for hate speech, misogyny, misogyny category, and severity by a group of experts in the social sciences and humanities. The dataset is a proof of concept: it can be used to classify text as misogynistic vs. non-misogynistic, as well as to debias language models.

**Content warning:** Sensitive content might appear in this dataset. The language does not reflect the authors’ views.

## Data collection methodology

This dataset contains 450 datapoints extracted from the Swedish forum Flashback by scraping and keyword matching, using a list of keywords agreed on by our team of expert annotators. Each datapoint has been manually annotated by at least two experts.

The annotation task was divided into four sub-tasks:

- hate-speech detection ("yes" or "no");
- misogyny detection ("yes" or "no");
- category detection ("Stereotype", "Erasure and minimization", "Violence against women", "Sexualization and objectification", or "Anti-feminism and denial of discrimination");
- severity rating (on a scale of 1 to 10).

As a final step, the dataset was manually anonymized.
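
Since each datapoint carries two to four independent annotations, most downstream uses need an aggregation rule. The sketch below is our own illustration (the dataset does not prescribe an aggregation scheme, and `majority_vote` is a hypothetical helper); it collapses one sub-task by majority vote over the annotators that labeled the datapoint, using the per-datapoint structure described in the next section:

```python
from collections import Counter


def majority_vote(annotations: dict, field: str = "misogyny") -> str | None:
    """Majority-vote one sub-task across the available annotators.

    `annotations` maps "annotator 1" .. "annotator 4" to a label dict,
    or to None when that annotator did not label the datapoint.
    Returns None when no annotator provided the requested field.
    """
    votes = [
        ann[field]
        for ann in annotations.values()
        if ann is not None and ann.get(field) is not None
    ]
    if not votes:
        return None
    # most_common(1) returns [(label, count)] for the top label.
    return Counter(votes).most_common(1)[0][0]
```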

## Description of the format

Each datapoint's annotation is structured as follows:

{"text": "...", "annotations": {"annotator 1": {"hate_speech": "...", "misogyny": "...", "category": "...", "rating": "...", "comment": "..."}, "annotator 2": ...}}

Note that whenever an annotator labeled, for example, "misogyny" as "No", the dependent labels "category" and "rating" are left empty (NaN).

Note also that each datapoint's "annotations" object always has four keys ("annotator 1" through "annotator 4"), but each datapoint received between two and four annotations, so the value for any missing annotator is null.
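
A minimal loading sketch using the Hugging Face `datasets` library, reading the parquet files listed under "Description of the data" below (the paths assume a local clone of this repository):

```python
from datasets import load_dataset

# Parquet files as listed under "Description of the data";
# the paths assume a local clone of this repository.
data_files = {
    "train": "data/train-00000-of-00001.parquet",
    "val": "data/val-00000-of-00001.parquet",
    "test": "data/test-00000-of-00001.parquet",
}
ds = load_dataset("parquet", data_files=data_files)

# Keep only the annotators that actually labeled this datapoint;
# missing annotators come back as None.
example = ds["train"][0]
present = {k: v for k, v in example["annotations"].items() if v is not None}
for name, ann in present.items():
    print(name, ann["misogyny"], ann["category"], ann["rating"])
```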

## Description of the data

The annotation guidelines designed for the task, as well as the keywords used to retrieve the data, are provided in the `guidelines and keywords` folder.

The dataset, in the `data` folder, is split into three parquet files: a train, a validation, and a test set.

The repository is structured as follows:

```
BiaSWE/
├── README.md
├── guidelines and keywords/
│   ├── Annotation guidelines.pdf
│   └── Keywords.pdf
└── data/
    ├── train-00000-of-00001.parquet
    ├── val-00000-of-00001.parquet
    └── test-00000-of-00001.parquet
```
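
For a quick look without the `datasets` library, each split can also be read directly with pandas (again assuming a local clone):

```python
import pandas as pd

# Each split is a single parquet file with a "text" column and a
# nested "annotations" column (150 rows per split).
train = pd.read_parquet("data/train-00000-of-00001.parquet")
print(train.shape)
print(train.columns.tolist())
```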

## Authors

Kätriin Kukk, Judit Casademont Moner, Danila Petrelli.

## License

CC BY 4.0

## Acknowledgments

This work is a result of the “Interdisciplinary Expert Pool for NLU” project funded by Vinnova (Sweden’s innovation agency) under grant 2022-02870. Experts involved in the creation of the dataset:

- Annika Raapke, Researcher at Uppsala University, Department of History;
- Eric Orlowski, Doctoral Candidate at University College London/Uppsala University, Social and Cultural Anthropology;
- Michał Dzieliński, Assistant Professor at Stockholm Business School, International Finance;
- Maria Jacobson, Antidiskrimineringsbyrån Väst (Anti-Discrimination Agency West Sweden);
- Astrid Carsbrin, Sveriges Kvinnoorganisationer (Swedish Women's Lobby);
- Cia Bohlin, Internetstiftelsen (The Swedish Internet Foundation);
- Richard Brattlund, Internetstiftelsen (The Swedish Internet Foundation).

BiaSWE's multi-disciplinary engagement process was, in part, inspired by the Biasly project from Mila - Quebec AI Institute.

Special thanks to Francisca Hoyer at AI Sweden for making the Interdisciplinary Expert Pool possible from the start, to Magnus Sahlgren at AI Sweden for guidance, and to Allison Cohen at MILA AI for Humanity for participation and support during the experiment.