Keywords
active learning algorithm, scores, parameterless method
A practical concern for Active Learning (AL) is the amount of time human experts must wait for the next instance to label. We propose a method for eliminating this wait time, independently of the specific learning and scoring algorithms, by making scores always available for all instances, using old (stale) scores when necessary. The time during which the expert is annotating is used to train models and score instances, in parallel, to maximize the recency of the scores. Our method can be seen as a parameterless, dynamic batch AL algorithm. We analyze the amount of staleness introduced by various AL schemes and then examine the effect of the staleness on performance on a part-of-speech tagging task on the Wall Street Journal. Empirically, the parallel AL algorithm effectively has a batch size of one and a large candidate set size but eliminates the time an annotator would have to wait for a similarly parameterized batch scheme to select instances. The exact performance of our method on other tasks will depend on the relative ratios of time spent annotating, training, and scoring, but in general we expect our parameterless method to perform favorably compared to batch when accounting for wait time.
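The scheme the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: a background thread stands in for the training-and-scoring pipeline and continuously refreshes scores over the unlabeled pool, while the annotator thread is always served the best currently available (possibly stale) score and so never waits. The `ParallelScorer` class, the placeholder scoring function, and the toy pool are all invented for this sketch.

```python
import threading
import time
import random

class ParallelScorer:
    """Sketch of wait-free AL: scores are always available, possibly stale."""

    def __init__(self, pool):
        self.lock = threading.Lock()
        self.labeled = []                       # instances annotated so far
        self.scores = {x: 0.0 for x in pool}    # stale until first rescore
        self.stop = threading.Event()

    def background_loop(self):
        # Retrain and rescore continuously while annotation proceeds.
        while not self.stop.is_set():
            with self.lock:
                candidates = list(self.scores)  # snapshot under the lock
            # Stand-in for model training + scoring (e.g., uncertainty).
            new_scores = {x: random.random() for x in candidates}
            with self.lock:
                for x, s in new_scores.items():
                    if x in self.scores:        # skip items labeled meanwhile
                        self.scores[x] = s
            time.sleep(0.01)

    def next_instance(self):
        # Served immediately from current (possibly stale) scores: no wait.
        with self.lock:
            x = max(self.scores, key=self.scores.get)
            del self.scores[x]
            return x

    def record_label(self, x, label):
        with self.lock:
            self.labeled.append((x, label))

pool = [f"sent{i}" for i in range(20)]
scorer = ParallelScorer(pool)
worker = threading.Thread(target=scorer.background_loop, daemon=True)
worker.start()

for _ in range(5):                  # annotator labels five instances
    x = scorer.next_instance()      # returns instantly, scores may be stale
    scorer.record_label(x, "NN")    # placeholder annotation
    time.sleep(0.02)                # time spent annotating is used to rescore

scorer.stop.set()
worker.join()
```

Staleness arises exactly where the abstract says it does: a score served by `next_instance` may have been computed against an older labeled set, which is the price paid for eliminating annotator wait time.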
Original Publication Citation
Robbie Haertel, Paul Felt, Eric K. Ringger, Kevin Seppi. June 2010. "Parallel Active Learning: Eliminating Wait Time with Minimal Staleness". In Proceedings of the NAACL HLT 2010 Workshop on Active Learning for Natural Language Processing (ALNLP 2010). Los Angeles, California.
BYU ScholarsArchive Citation
Felt, Paul; Haertel, Robbie; Ringger, Eric K.; and Seppi, Kevin, "Parallel Active Learning: Eliminating Wait Time with Minimal Staleness" (2010). All Faculty Publications. 100.
Physical and Mathematical Sciences
© 2010 ACL.