---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-bpmn
  results: []
widget:
- text: "The process starts when the customer enters the shop. The customer then takes the product from the shelf. The customer then pays for the product and leaves the store."
  example_title: "Example 1"
- text: "The process begins when the HR department hires the new employee. Next, the new employee completes necessary paperwork and provides documentation to the HR department. After the initial task, the HR department performs a decision to determine the employee's role and department assignment. The employee is trained on the company's sales processes and systems by the Sales department. After the training, the Sales department assigns the employee a sales quota and performance goals. Finally, the process ends with an 'End' event, when the employee begins their role in the Sales department."
  example_title: "Example 2"
- text: "The process begins with a 'Start' event, when a customer places an order for a product on the company's website. Next, the customer service department checks the availability of the product and confirms the order with the customer. After the initial task, the warehouse processes the order. If the order is eligible for same-day shipping, the warehouse staff picks and packs the order, and it is sent to the shipping department. After the order is packed, the shipping department arranges for the order to be delivered to the customer. Finally, the process ends with an 'End' event, when the customer receives their order."
  example_title: "Example 3"
---

# bert-finetuned-bpmn

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on a dataset containing textual process descriptions. The dataset contains 2 target labels:

* `AGENT`
* `TASK`

The dataset (and the notebook used for training) can be found on the following GitHub repo: https://github.com/jtlicardo/bert-finetuned-bpmn

Update: a model trained on 5 BPMN-specific labels can be found here: https://huggingface.co/jtlicardo/bpmn-information-extraction

The model achieves the following results on the evaluation set:

- Loss: 0.2656
- Precision: 0.7314
- Recall: 0.8366
- F1: 0.7805
- Accuracy: 0.8939

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 10   | 0.8437          | 0.1899    | 0.3203 | 0.2384 | 0.7005   |
| No log        | 2.0   | 20   | 0.4967          | 0.5421    | 0.7582 | 0.6322 | 0.8417   |
| No log        | 3.0   | 30   | 0.3403          | 0.6719    | 0.8431 | 0.7478 | 0.8867   |
| No log        | 4.0   | 40   | 0.2821          | 0.6923    | 0.8235 | 0.7522 | 0.8903   |
| No log        | 5.0   | 50   | 0.2656          | 0.7314    | 0.8366 | 0.7805 | 0.8939   |

### Framework versions

- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
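
## How to use

The model can be loaded through the standard `transformers` token-classification pipeline. The snippet below is a minimal sketch that runs the first widget example through the model; it assumes the checkpoint is published on the Hub as `jtlicardo/bert-finetuned-bpmn` (inferred from the model name and the linked GitHub repo).

```python
from transformers import pipeline

# Load the fine-tuned token-classification model.
# Repo id "jtlicardo/bert-finetuned-bpmn" is an assumption based on the model name.
token_classifier = pipeline(
    "token-classification",
    model="jtlicardo/bert-finetuned-bpmn",
    aggregation_strategy="simple",  # merge word pieces into whole AGENT / TASK spans
)

text = (
    "The process starts when the customer enters the shop. "
    "The customer then takes the product from the shelf. "
    "The customer then pays for the product and leaves the store."
)

# Print each extracted span with its label and confidence score
for entity in token_classifier(text):
    print(entity["entity_group"], "->", entity["word"], round(entity["score"], 3))
```

With `aggregation_strategy="simple"`, sub-word tokens are grouped so each printed span corresponds to a full `AGENT` or `TASK` mention rather than individual word pieces.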
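For reproducing the fine-tuning run, the hyperparameters listed under *Training hyperparameters* map roughly onto the `TrainingArguments` sketched below. The actual training notebook lives in the GitHub repo above; dataset loading, tokenization, and the `Trainer` setup are omitted, and the `output_dir` and per-epoch evaluation strategy are assumptions.

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed under "Training hyperparameters".
# Adam betas/epsilon and the linear LR schedule match the Trainer defaults.
training_args = TrainingArguments(
    output_dir="bert-finetuned-bpmn",      # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=5,
    seed=42,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",           # assumption: matches the per-epoch results table
)
```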