
Model Description

This is a spaCy model fine-tuned to extract names of cities and municipalities from German news articles. It was trained on 50,000 German news articles from the CommonCrawl news dataset that were annotated with Llama 3.1 8B-Instruct. This model is still a work in progress; please report any bugs in the community tab.

  • Developed by: Lukas Kriesch
  • Model type: Named Entity Recognition (NER)
  • Language(s) (NLP): German
  • License: Llama 3.1 Community License Agreement
  • Finetuned from model: spacy/de_core_news_lg

Uses

Direct Use

This model can be directly used to extract city and municipality names from any German-language text source, particularly news articles. Researchers and developers working in geospatial analysis or regional studies may find this useful for location-based analyses.
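For example, the sketch below collects the unique place names found in a single article. It assumes the model has already been downloaded to a local path and that it keeps the LOC label of the base pipeline; check nlp.get_pipe("ner").labels to confirm which label the model actually emits.

import spacy

# Minimal sketch (assumptions: local model path, "LOC" as the entity label).
nlp = spacy.load("path_to_model")
article = "Die Bürgermeisterin von Gießen traf sich mit Vertretern aus Wetzlar."
doc = nlp(article)

# Collect the unique city/municipality names mentioned in the article.
places = {ent.text for ent in doc.ents if ent.label_ == "LOC"}
print(places)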

Downstream Use

Downstream applications might integrate the model into larger workflows for geospatial data processing, population studies, regional analysis, or sentiment analysis in the context of location data.
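As an illustration, a hypothetical batch workflow could stream articles through nlp.pipe and aggregate mention counts per place as input for a regional analysis. The model path, the toy corpus, and the LOC label below are assumptions, not part of this model card.

import spacy
from collections import Counter

# Hypothetical aggregation sketch: count place mentions across a small corpus.
nlp = spacy.load("path_to_model")
articles = [
    "In Leipzig wurde ein neues Stadtarchiv eröffnet.",
    "Der Stadtrat von Leipzig debattierte über den Haushalt von Dresden.",
]

mention_counts = Counter()
for doc in nlp.pipe(articles, batch_size=64):
    mention_counts.update(ent.text for ent in doc.ents if ent.label_ == "LOC")

print(mention_counts.most_common())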

Out-of-Scope Use

The model should not be used to infer broader geographical trends or to analyze texts unrelated to city or municipal locations in Germany. Additionally, it may not perform well on non-news domains or texts that lack clear references to cities or municipalities.

Bias, Risks, and Limitations

The model is trained specifically on German news articles and may not generalize well to other domains such as social media, legal texts, or scientific literature. Additionally, there may be biases in the training data, particularly if certain regions are underrepresented in the dataset. Users should be cautious about the model's performance across different subpopulations of city or location mentions (e.g., historical names or rare municipalities).

How to Get Started with the Model

  • Install spaCy: https://spacy.io/usage
  • Install the pretrained German pipeline de_core_news_lg: https://spacy.io/models/de#de_core_news_lg
  • Download the model files: https://huggingface.co/LKriesch/LLAMA_fast_geotag/tree/main/spacy_lg_geo

import spacy

# Load the downloaded model from its local path.
nlp = spacy.load("path_to_model")

# Run the pipeline on a German sentence and print the recognized entities.
text = "Das Olympiastadion steht in Berlin."
doc = nlp(text)
for ent in doc.ents:
    print(f"Entity: {ent.text}, Label: {ent.label_}")

Training Hyperparameters

  • Batch size: 64
  • Epochs: up to 100, with early stopping (patience of 3 epochs)
  • Optimizer: default spaCy optimizer for NER fine-tuning
  • Training regime: mixed precision (fp16)
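The training script itself is not part of this card. Purely as an illustration, the hyperparameters above could be mapped onto spaCy's update loop roughly as in the sketch below; the toy annotation, the LOC label, and the early-stopping bookkeeping are assumptions, mixed precision is omitted, and the actual run may well have used spaCy's config-based CLI instead.

import random
import spacy
from spacy.training import Example
from spacy.util import minibatch

nlp = spacy.load("de_core_news_lg")

# Toy annotated example in spaCy's (start, end, label) character-offset format.
TRAIN_DATA = [
    ("Das Olympiastadion steht in Berlin.", {"entities": [(28, 34, "LOC")]}),
]
examples = [Example.from_dict(nlp.make_doc(text), ann) for text, ann in TRAIN_DATA]

optimizer = nlp.resume_training()
best_loss, patience, wait = float("inf"), 3, 0

for epoch in range(100):                         # up to 100 epochs
    random.shuffle(examples)
    losses = {}
    for batch in minibatch(examples, size=64):   # batch size 64
        nlp.update(batch, sgd=optimizer, losses=losses)
    if losses["ner"] < best_loss:
        best_loss, wait = losses["ner"], 0
    else:
        wait += 1
        if wait >= patience:                     # early stopping, patience 3
            break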
