---
language: en
license: apache-2.0
datasets:
- ESGBERT/environmental_2k
tags:
- ESG
- environmental
---
# Model Card for EnvironmentalBERT-environmental
## Model Description
Based on [this paper](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4622514), this is the EnvironmentalBERT-environmental language model, trained to classify environmental texts in the ESG domain.
Starting from the [EnvironmentalBERT-base](https://huggingface.co/ESGBERT/EnvironmentalBERT-base) model, EnvironmentalBERT-environmental is additionally fine-tuned on a 2k environmental dataset to detect environmental text samples.
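For reference, the sketch below shows what such a fine-tuning setup could look like with the transformers Trainer API. It is a minimal illustration only: the column names (`text`, `label`), the `train` split, and the hyperparameters are assumptions and are not taken from the paper.
```python
# Illustrative sketch of fine-tuning EnvironmentalBERT-base on the
# ESGBERT/environmental_2k dataset. Column names, split name, and
# hyperparameters are assumptions, not the paper's exact recipe.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("ESGBERT/environmental_2k")
tokenizer = AutoTokenizer.from_pretrained("ESGBERT/EnvironmentalBERT-base")

def tokenize(batch):
    # Assumes the dataset exposes a "text" column with the raw sentences.
    return tokenizer(batch["text"], padding="max_length", truncation=True, max_length=512)

dataset = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "ESGBERT/EnvironmentalBERT-base", num_labels=2)  # binary: environmental vs. none

args = TrainingArguments(
    output_dir="environmental-classifier",
    num_train_epochs=3,
    per_device_train_batch_size=16,
)
trainer = Trainer(model=model, args=args, train_dataset=dataset["train"])
trainer.train()
```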
## How to Get Started With the Model
See these tutorials on Medium for a guide on [model usage](https://medium.com/@schimanski.tobi/analyzing-esg-with-ai-and-nlp-tutorial-1-report-analysis-towards-esg-risks-and-opportunities-8daa2695f6c5?source=friends_link&sk=423e30ac2f50ee4695d258c2c4d54aa5), [large-scale analysis](https://medium.com/@schimanski.tobi/analyzing-esg-with-ai-and-nlp-tutorial-2-large-scale-analyses-of-environmental-actions-0735cc8dc9c2?source=friends_link&sk=13a5aa1999fbb11e9eed4a0c26c40efa), and [fine-tuning](https://medium.com/@schimanski.tobi/analyzing-esg-with-ai-and-nlp-tutorial-3-fine-tune-your-own-models-e3692fc0b3c0?source=friends_link&sk=49dc9f00768e43242fc1a76aa0969c70).
You can use the model with a pipeline for text classification:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
tokenizer_name = "ESGBERT/EnvironmentalBERT-environmental"
model_name = "ESGBERT/EnvironmentalBERT-environmental"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(tokenizer_name, max_len=512)
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer) # set device=0 to use GPU
# See https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.pipeline
print(pipe("Scope 1 emissions are reported here on a like-for-like basis against the 2013 baseline and exclude emissions from additional vehicles used during repairs.", padding=True, truncation=True))
```
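For larger-scale analyses (see the second tutorial linked above), the same pipeline can be applied to a list of sentences. The snippet below reuses the `pipe` object created above; the sentences are made-up examples.
```python
# Hypothetical batch usage: classify several sentences at once.
sentences = [
    "We reduced water consumption across all manufacturing sites last year.",
    "The board approved a new share buyback program.",
]
results = pipe(sentences, padding=True, truncation=True)
for sentence, result in zip(sentences, results):
    print(result["label"], round(result["score"], 3), "-", sentence)
```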
## More details can be found in the paper
```bibtex
@article{Schimanski23ESGBERT,
title={{Bridging the Gap in ESG Measurement: Using NLP to Quantify Environmental, Social, and Governance Communication}},
author={Tobias Schimanski and Andrin Reding and Nico Reding and Julia Bingler and Mathias Kraus and Markus Leippold},
year={2023},
journal={Available on SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4622514},
}
```