TermEval

Shared Task on Monolingual Automatic Term Extraction

The TermEval 2020 shared task was to be organised during the CompuTerm workshop (alongside LREC 2020) in Marseille on 16 May 2020. However, due to the global pandemic, the workshop could not take place. Nevertheless, the shared task was still organised, and the results are published in the workshop proceedings at https://lrec2020.lrec-conf.org/media/proceedings/Workshops/Books/COMPUTERM2020book.pdf
These proceedings contain both an overview paper and separate papers from the participating teams.

Now that the results of the shared task are in, the ACTER dataset has been made publicly available at https://github.com/AylaRT/ACTER and at http://hdl.handle.net/20.500.12124/24
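For example, the GitHub version of the dataset can be obtained by cloning the repository:

    git clone https://github.com/AylaRT/ACTER.git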

Important dates

  • 16 December 2019: official CFP of CompuTerm (training data available the same week)
  • 3 February 2020: test/evaluation data available
  • 5 February 2020: deadline for participants to upload their results (midnight) EDIT: deadline extended until 12 February, 10am (GMT+1)!
  • 14 February 2020: announcement of results
  • 1 March 2020: paper submission deadline
  • 13 March 2020: notification of acceptance
  • 25 March 2020: camera-ready papers due

Despite all the research interest automatic term extraction has received, it remains a very challenging area. The lack of agreement among researchers on even the most basic characteristics of the task is a major hurdle for benchmarking and for comparative research in general. Moreover, the difficulty of term annotation means that few large, diverse resources are available for evaluation or for training supervised machine learning systems.

The aim of the TermEval shared task is to unite researchers by providing a large and varied dataset and having multiple teams work on the same problem. This will both provide a productive platform for discussion and innovation, and introduce a valuable new dataset, which will be made publicly available.

The dataset covers three languages (English, French, and Dutch) and offers plenty of training/development data in three different domains (corruption, dressage, wind energy), with a fourth domain (heart failure) used for evaluation. The elaborate annotation guidelines are available as well. Additionally, there are different tracks per language: open vs. closed (depending on the resources used), and final F1-scores are calculated both including and excluding Named Entities. All of this should accommodate a wide range of potential participants.
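To make the evaluation setup concrete, below is a minimal sketch of how such F1-scores could be computed over an extracted term list, with a switch to include or exclude Named Entities. It is written in Python for illustration only; the gold-data format, the "Named_Entity" label, and the lower-cased exact matching are assumptions and do not reflect the official TermEval evaluation script.

def evaluate_terms(extracted, gold, include_named_entities=True):
    """Exact-match precision/recall/F1 of extracted terms against gold (term, label) pairs."""
    # Lower-cased exact matching is an assumption; the official setup may differ.
    gold_terms = {
        term.lower()
        for term, label in gold
        if include_named_entities or label != "Named_Entity"
    }
    extracted_terms = {term.lower() for term in extracted}

    true_positives = len(extracted_terms & gold_terms)
    precision = true_positives / len(extracted_terms) if extracted_terms else 0.0
    recall = true_positives / len(gold_terms) if gold_terms else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical example: scoring a small candidate list with Named Entities excluded.
gold = [("heart failure", "Specific_Term"),
        ("ejection fraction", "Specific_Term"),
        ("New York Heart Association", "Named_Entity")]
extracted = ["heart failure", "ejection fraction", "patient"]
print(evaluate_terms(extracted, gold, include_named_entities=False))  # ≈ (0.67, 1.0, 0.80)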

All further information about TermEval can be found on this website. If any information is missing, feel free to contact the organisers.