---
language:
  - en
base_model:
  - distilbert/distilbert-base-uncased
license: apache-2.0
metrics:
  - accuracy
  - precision
  - recall
  - f1
library_name: adapter-transformers
tags:
  - resume-classification
  - multi-label-classification
  - human-resources
  - transformers
  - distilbert
  - career-guidance
  - fine-tuned
---

# Res-BERT

Fine-tuned DistilBERT model for multi-label resume classification.


## Model Overview

Res-BERT is a fine-tuned version of the DistilBERT base model, trained on a multi-label dataset of resumes (resume_corpus) annotated with occupation labels. The model can assign multiple occupation categories to a single resume, making it a useful tool for HR teams, recruitment platforms, and AI-powered career assistants.

### Base Model

- **DistilBERT (uncased)**: A smaller, faster, and cheaper version of BERT, pretrained on BookCorpus and English Wikipedia. It provides a balance of performance and efficiency for NLP tasks.

### Dataset

The resume_corpus dataset was used for training. It consists of resumes labeled with occupations. The dataset includes:

- `resumes_corpus.zip`: A collection of `.txt` files (resumes) with corresponding labels in `.lab` files.
- `resumes_sample.zip`: A consolidated text file where each line contains a resume ID, the occupation labels (separated by `;`), and the resume text (see the parsing sketch below).
- `normalized_classes`: Associations between raw and normalized occupation labels.
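
For reference, a minimal Python sketch for reading the consolidated sample file. The inner file name and the delimiter between the three fields are assumptions (the description above only fixes the field order), so adjust them to the actual layout:

```python
# Hypothetical parser for resumes_sample; the tab delimiter and the file name
# "resumes_sample.txt" are assumptions about the layout, not documented facts.
def parse_sample_line(line: str, delimiter: str = "\t"):
    resume_id, labels, text = line.rstrip("\n").split(delimiter, maxsplit=2)
    return {"id": resume_id, "labels": labels.split(";"), "text": text}

with open("resumes_sample.txt", encoding="utf-8") as f:
    resumes = [parse_sample_line(line) for line in f if line.strip()]
```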

### Dataset Citation

Jiechieu, K.F.F., Tsopze, N. Skills prediction based on multi-label resume classification using CNN with model predictions explanation. *Neural Computing and Applications* (2020). DOI: 10.1007/s00521-020-05302-x.


## Training Procedure

The model was fine-tuned using:

- **Input format**: Lowercased text, tokenized with WordPiece using a 30,000-token vocabulary.
- **Hyperparameters**: Default settings of the Hugging Face Trainer API for DistilBERT-based sequence classification (a hedged setup sketch follows this list).
- **Preprocessing**:
  - Masking: 15% of tokens were masked during pretraining.
  - Split: 80% training, 10% validation, 10% test.
- **Hardware**: 8 × 16 GB V100 GPUs, trained for 10 hours.
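
The sketch below shows how such a default-Trainer setup is typically wired for multi-label classification. The toy in-memory dataset, column names, and label set are illustrative assumptions, not the authors' actual training script or data:

```python
# Hedged sketch of a default-Trainer fine-tuning setup for multi-label resume
# classification; everything below the imports is an illustrative assumption.
from datasets import Dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

labels = ["software_developer", "mechanical_engineer"]  # assumption: the real label set is larger

tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert/distilbert-base-uncased",
    num_labels=len(labels),
    problem_type="multi_label_classification",  # BCE loss over independent labels
)

# Toy examples with multi-hot float label vectors (one slot per occupation).
raw = Dataset.from_dict({
    "text": [
        "Software developer with 5 years of experience in Java and Python.",
        "Mechanical engineer with expertise in CAD and manufacturing.",
    ],
    "labels": [[1.0, 0.0], [0.0, 1.0]],
})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

dataset = raw.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="res-bert-finetune"),  # default hyperparameters
    train_dataset=dataset,
)
trainer.train()
```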

## Intended Use

### Applications

- Resume screening for recruitment platforms.
- Career guidance and job-matching services.
- Analyzing skills and experience extracted from resumes.

### How to Use

Using the Transformers `pipeline`:

```python
from transformers import pipeline

# top_k=None returns a score for every label; function_to_apply="sigmoid"
# scores labels independently, as required for multi-label classification.
classifier = pipeline(
    "text-classification", model="Res-BERT", tokenizer="Res-BERT",
    top_k=None, function_to_apply="sigmoid",
)
resumes = ["Software developer with 5 years of experience in Java and Python."]
predictions = classifier(resumes)

print(predictions)
```
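
Each entry in `predictions` is a list of `{label, score}` dictionaries, one per occupation. A minimal follow-up sketch that keeps only the labels above a probability threshold (the 0.5 cutoff is an assumption; tune it on the validation set):

```python
THRESHOLD = 0.5  # assumption: pick the cutoff that works best on validation data

for resume, scores in zip(resumes, predictions):
    predicted_labels = [s["label"] for s in scores if s["score"] >= THRESHOLD]
    print(resume[:60], "->", predicted_labels)
```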

Loading the model and tokenizer directly:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Res-BERT")
model = AutoModelForSequenceClassification.from_pretrained("Res-BERT")

text = "Experienced mechanical engineer with expertise in CAD and manufacturing."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)

# Raw, unnormalized scores: one logit per occupation label.
print(outputs.logits)
```
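
For multi-label prediction, the logits should be passed through a sigmoid rather than a softmax. A minimal sketch of turning them into occupation names, assuming the fine-tuned checkpoint stores its label names in `id2label` and using an illustrative 0.5 threshold:

```python
import torch

# Sigmoid gives independent per-label probabilities; indices are mapped back
# to occupation names via the model config. The 0.5 threshold is an
# assumption; tune it on validation data.
probabilities = torch.sigmoid(outputs.logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probabilities.tolist()) if p >= 0.5]
print(predicted)
```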

## Citations

```bibtex
@article{Sanh2019DistilBERTAD,
  title   = {DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
  author  = {Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
  journal = {ArXiv},
  year    = {2019},
  volume  = {abs/1910.01108}
}

@article{Jiechieu2020ResumeClassification,
  title   = {Skills prediction based on multi-label resume classification using CNN with model predictions explanation},
  author  = {K. F. F. Jiechieu and N. Tsopze},
  journal = {Neural Computing and Applications},
  year    = {2020},
  doi     = {10.1007/s00521-020-05302-x}
}
```