arXiv:2011.00677

IndoLEM and IndoBERT: A Benchmark Dataset and Pre-trained Language Model for Indonesian NLP

Published on Nov 2, 2020
Authors: Fajri Koto, Afshin Rahimi, Jey Han Lau, Timothy Baldwin

Abstract

Although the Indonesian language is spoken by almost 200 million people and is the 10th most widely spoken language in the world, it is under-represented in NLP research. Previous work on Indonesian has been hampered by a lack of annotated datasets, a sparsity of language resources, and a lack of resource standardization. In this work, we release the IndoLEM dataset, comprising seven tasks for the Indonesian language spanning morpho-syntax, semantics, and discourse. We additionally release IndoBERT, a new pre-trained language model for Indonesian, and evaluate it over IndoLEM, in addition to benchmarking it against existing resources. Our experiments show that IndoBERT achieves state-of-the-art performance over most of the tasks in IndoLEM.
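
For readers who want to try the released model, the following is a minimal sketch of loading IndoBERT with the Hugging Face transformers library. The checkpoint identifier "indolem/indobert-base-uncased" is assumed here; confirm the exact name against the models listed on this page.

```python
# Minimal sketch: load the IndoBERT checkpoint and extract sentence-level
# embeddings. The model identifier below is an assumption; check the model
# hub listing linked from this page for the released name.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "indolem/indobert-base-uncased"  # assumed checkpoint identifier

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

# Encode an Indonesian sentence and run it through the model.
text = "Bahasa Indonesia dituturkan oleh hampir 200 juta orang."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# `last_hidden_state` has shape (batch, sequence_length, hidden_size).
# The [CLS] vector is a common sentence representation for fine-tuning
# on IndoLEM-style classification tasks.
cls_embedding = outputs.last_hidden_state[:, 0, :]
print(cls_embedding.shape)
```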


Models citing this paper 3

Datasets citing this paper 6


Spaces citing this paper 13

Collections including this paper 0
