---
language:
  - en
pipeline_tag: text-classification
tags:
  - text_classification
  - albert
widget:
  - text: >-
      NairiSoft is looking for a highly qualified person with deep knowledge and
      practical experience in Java programming. The selected candidate will be
      involved in all stages of the development life cycle.
    example_title: Current Position Requirements 1
  - text: >-
      Ogma Applications is seeking motivated Senior Developers to work on its
      worldwide projects. The projects are web applications utilizing latest
      technologies in video webcasting over internet for web browsers,
      Televisions and telephone systems. In order to succeed in this team, the
      incumbent must have the passion and energy to work in an entrepreneurial,
      and fast paced environment. In addition, the Senior Software Engineer must
      be an experienced senior architect and technical leader with in-depth
      knowledge of software development processes. As a senior member of the
      team in Armenia, Senior Software Engineer will be working closely with
      other developers and peers in the US and other teams around the globe, to
      analyze, design, develop, test and deliver the best in class software.
    example_title: Current Position Requirements 2
  - text: >-
      Armeconombank OJSC is looking for a .Net Developer to join its team. The
      Software Developer will take part in design and development projects.
    example_title: Current Position Requirements 3
---

This repository contains an ALBERT model designed for text classification. The model architecture is based on ALBERT Base v2.

## Library

```bash
pip install transformers
pip install sentencepiece
```

## Example

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('Apizhai/Albert-IT-JobRecommendation', use_fast=False)
model = AutoModel.from_pretrained('Apizhai/Albert-IT-JobRecommendation')
```
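
Since the card tags this checkpoint for text classification, a minimal inference sketch is shown below. It assumes the checkpoint includes a sequence-classification head and an `id2label` mapping in its config; the example text is one of the widget inputs above.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('Apizhai/Albert-IT-JobRecommendation', use_fast=False)
model = AutoModelForSequenceClassification.from_pretrained('Apizhai/Albert-IT-JobRecommendation')

text = "Armeconombank OJSC is looking for a .Net Developer to join its team."
inputs = tokenizer(text, truncation=True, max_length=128, return_tensors='pt')

with torch.no_grad():
    logits = model(**inputs).logits

# Label names come from the checkpoint's config, if id2label was saved with it.
predicted = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted])
```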

## Training hyperparameters

The following hyperparameters were used during training (a hypothetical configuration sketch follows the list):

- max_seq_length: 128
- max_length: 128
- train_batch_size: 4
- eval_batch_size: 32
- num_train_epochs: 10
- evaluate_during_training: False
- evaluate_during_training_steps: 100
- use_multiprocessing: False
- fp16: True
- save_steps: -1
- save_eval_checkpoints: False
- save_model_every_epoch: False
- no_cache: True
- reprocess_input_data: True
- overwrite_output_dir: True
- preprocess_inputs: False
- num_return_sequences: 1
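
These argument names match the simpletransformers library's model args, so the run could have been configured as in the hypothetical sketch below; the actual training script is not part of this repository, and generation-related entries (max_length, num_return_sequences, preprocess_inputs) are omitted since they do not apply to ClassificationArgs.

```python
# Hypothetical reconstruction with simpletransformers; argument names mirror
# the hyperparameter list above. Not necessarily the author's exact script.
from simpletransformers.classification import ClassificationArgs, ClassificationModel

NUM_LABELS = 2  # placeholder: set to the actual number of job-category labels

args = ClassificationArgs(
    max_seq_length=128,
    train_batch_size=4,
    eval_batch_size=32,
    num_train_epochs=10,
    evaluate_during_training=False,
    evaluate_during_training_steps=100,
    use_multiprocessing=False,
    fp16=True,
    save_steps=-1,
    save_eval_checkpoints=False,
    save_model_every_epoch=False,
    no_cache=True,
    reprocess_input_data=True,
    overwrite_output_dir=True,
)

model = ClassificationModel('albert', 'albert-base-v2', num_labels=NUM_LABELS, args=args)
# model.train_model(train_df)  # train_df: pandas DataFrame with 'text' and 'labels' columns
```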

## Score

- f1-score: 0.82951
- macro avg: 0.84748
- weighted avg: 0.81575