Survey of Active Learning Hyperparameters

Date
2025-06-02
Publisher
Technische Universität Dresden
Abstract

Annotating data is a time-consuming and costly task, but it is inherently required for supervised machine learning. Active Learning (AL) is an established method that minimizes human labeling effort by iteratively selecting the most informative unlabeled samples for expert annotation, thereby improving overall classification performance. Even though AL has been known for decades [1], it is still rarely used in real-world applications. As indicated by two web surveys among the NLP community about AL [2], [3], two main reasons continue to hold practitioners back from using AL: first, the complexity of setting AL up, and second, a lack of trust in its effectiveness. We hypothesize that both reasons share the same culprit: the large hyperparameter space of AL. This mostly unexplored hyperparameter space often leads to misleading and irreproducible AL experiment results. In this study, we first compiled a large hyperparameter grid of over 4.6 million hyperparameter combinations; second, recorded the performance of all combinations in the largest AL study conducted so far; and third, analyzed the impact of each hyperparameter on the experiment results. Alongside our publication, we are making our raw experiment results publicly available for other researchers to build upon.
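The pool-based AL loop described above can be sketched as follows. This is a minimal illustration only: the classifier, the query strategy (least-confidence uncertainty sampling), the seed-set size, and the query batch size are all assumptions chosen for the sketch, not hyperparameters fixed by the study.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for an unlabeled pool with a small labeled seed set.
rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

labeled = list(rng.choice(len(X), size=10, replace=False))  # initial seed set
pool = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for _ in range(5):  # 5 AL iterations (the labeling "budget")
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[pool])
    # Least-confidence uncertainty: query the samples whose highest
    # predicted class probability is smallest.
    uncertainty = 1 - proba.max(axis=1)
    query = [pool[i] for i in np.argsort(uncertainty)[-5:]]
    labeled += query                      # oracle labels arrive here
    pool = [i for i in pool if i not in query]

print(len(labeled))  # 10 seed + 5 iterations * 5 queries = 35
```

Each such design choice (model, query strategy, batch size, seed size, and more) is one axis of the hyperparameter grid the study enumerates.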

[1] B. Settles, “Active Learning Literature Survey,” University of Wisconsin-Madison Department of Computer Sciences, Technical Report, 2009.
[2] K. Tomanek and F. Olsson, “A web survey on the use of active learning to support annotation of text data,” in Proceedings of the NAACL HLT 2009 Workshop on Active Learning for Natural Language Processing, E. Ringger, R. Haertel, and K. Tomanek, Eds. Boulder, Colorado: Association for Computational Linguistics, 2009, pp. 45–48.
[3] J. Romberg, C. Schröder, J. Gonsior, K. Tomanek, and F. Olsson, “Have LLMs Made Active Learning Obsolete? Surveying the NLP Community,” 2025.

Attribution-NonCommercial 4.0 International