Submission Guidelines

Submission

Submissions should be made through the CompuTerm 2020 Startconf site.

Questionnaire + Overview paper

Upon submitting the final results for the test set, we kindly request that all participants fill out a short questionnaire. In this questionnaire, you will be asked to provide a very short system description, to indicate the track(s) for which you wish to submit, to list the resources you used, etc.

This information helps the organisers write the overview paper that will appear in the proceedings. This paper will contain a thorough description of the dataset (similar to the current information on the website), as well as a short overview of all participating systems based on the descriptions provided in the questionnaire, so make sure the description you provide there is accurate. We intend to write a more elaborate journal paper with more thorough comparisons and evaluations after the workshop.

Participants can refer to the shared task and dataset by citing the short overview paper:

Rigouts Terryn, A., Drouin, P., Hoste, V., & Lefever, E. (2020). TermEval 2020: Shared Task on Automatic Term Extraction Using the Annotated Corpora for Term Extraction Research (ACTER) Dataset. Proceedings of CompuTerm 2020.

Guidelines

Submissions should be written in English, anonymised for review, and must use the LaTeX or Word template files provided by LREC 2020. All paper titles should start with "TermEval 2020: ", followed by the rest of the title.

Participants are free to choose between short or long papers, with the same guidelines as the other papers for the CompuTerm workshop:

  • Long papers: up to 8 pages of content, plus 2 pages for references; final versions of long papers may add one extra page, i.e. up to 9 pages of content, with unlimited pages for references;
  • Short papers: up to 4 pages of content, plus 2 pages for references; final versions of short papers: up to 5 pages of content, with unlimited pages for references.

When submitting through CompuTerm’s Startconf site, under “Submission Categories”, make sure to select submission type “TermEval shared task paper”.

Evaluation

While F1-scores are calculated in the test phase of the shared task, we encourage participants to perform more detailed evaluations of their systems as well. How well does the system perform on the different domains (and languages)? Are rare terms extracted as well as frequent terms? Is there a difference in performance between single-word and multi-word terms? What is the balance between precision and recall? How are different POS patterns handled?
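For reference, below is a minimal sketch (in Python) of how such scores can be computed for a single run. It compares a system's extracted term list against the gold-standard term list using exact, lower-cased string matching; the file names and the matching strategy are illustrative assumptions, not the official evaluation procedure.

    # Minimal sketch: precision, recall and F1 for one term extraction run.
    # Assumes one term per line in each file; file names are hypothetical.
    def load_terms(path):
        with open(path, encoding="utf-8") as f:
            return {line.strip().lower() for line in f if line.strip()}

    gold = load_terms("gold_terms.txt")         # manually annotated terms
    extracted = load_terms("system_terms.txt")  # terms returned by the system

    true_positives = len(gold & extracted)
    precision = true_positives / len(extracted) if extracted else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    print(f"P = {precision:.3f}  R = {recall:.3f}  F1 = {f1:.3f}")

The same comparison can be repeated per domain, per language, or per term type (e.g. single-word versus multi-word terms) to answer the questions above.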

The goal is to go beyond simply listing F1-scores and to identify the various strengths and weaknesses of the systems. These analyses will also be discussed in the overview paper after the workshop.