Keywords

active learning algorithm, scores, parameterless method

Abstract

A practical concern for Active Learning (AL) is the amount of time human experts must wait for the next instance to label. We propose a method for eliminating this wait time, independently of the specific learning and scoring algorithms, by making scores always available for all instances and using old (stale) scores when necessary. The time during which the expert is annotating is used to train models and score instances, in parallel, in order to maximize the recency of the scores. Our method can be seen as a parameterless, dynamic batch AL algorithm. We analyze the amount of staleness introduced by various AL schemes and then examine the effect of this staleness on performance on a part-of-speech tagging task on the Wall Street Journal. Empirically, the parallel AL algorithm effectively has a batch size of one and a large candidate set size, but eliminates the time an annotator would have to wait for a similarly parameterized batch scheme to select instances. The exact performance of our method on other tasks will depend on the relative ratios of time spent annotating, training, and scoring, but in general we expect our parameterless method to compare favorably to batch schemes when wait time is taken into account.
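
The following is a minimal sketch, not the authors' implementation, of the parallel idea the abstract describes: a background thread retrains the model and refreshes instance scores while the annotator labels, so the next instance can always be served from the most recent (possibly stale) scores without any wait. The names `train`, `score`, and `annotate` are hypothetical placeholders for the task-specific components.

```python
import threading

class ParallelActiveLearner:
    """Sketch of wait-free AL: annotation never blocks on training or scoring."""

    def __init__(self, unlabeled, train, score, annotate):
        self.unlabeled = set(unlabeled)
        self.labeled = []
        self.train = train          # labeled data -> model
        self.score = score          # (model, instance) -> informativeness score
        self.annotate = annotate    # instance -> label (the human expert)
        self.scores = {x: 0.0 for x in unlabeled}  # stale scores are acceptable
        self.lock = threading.Lock()
        self.stop = threading.Event()

    def _background(self):
        # Runs concurrently with annotation: retrain on whatever labels exist,
        # then refresh scores; any score not yet refreshed simply stays stale.
        while not self.stop.is_set():
            with self.lock:
                labeled_snapshot = list(self.labeled)
                candidates = list(self.unlabeled)
            model = self.train(labeled_snapshot)
            for x in candidates:
                if self.stop.is_set():
                    break
                s = self.score(model, x)
                with self.lock:
                    if x in self.unlabeled:
                        self.scores[x] = s

    def run(self, budget):
        worker = threading.Thread(target=self._background, daemon=True)
        worker.start()
        for _ in range(budget):
            with self.lock:
                if not self.unlabeled:
                    break
                # Serve the best instance under the current (possibly stale)
                # scores; the annotator never waits for scoring to finish.
                x = max(self.unlabeled, key=self.scores.get)
                self.unlabeled.remove(x)
            label = self.annotate(x)
            with self.lock:
                self.labeled.append((x, label))
        self.stop.set()
        worker.join()
        return self.labeled
```

Under this sketch, the effective batch size is one (a fresh selection is made for every request) while staleness depends only on how quickly the background thread can retrain and rescore relative to annotation speed, mirroring the trade-off analyzed in the paper.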

Original Publication Citation

Robbie Haertel, Paul Felt, Eric K. Ringger, and Kevin Seppi. 2010. "Parallel Active Learning: Eliminating Wait Time with Minimal Staleness." In Proceedings of the NAACL HLT 2010 Workshop on Active Learning for Natural Language Processing (ALNLP 2010). Los Angeles, California.

Document Type

Peer-Reviewed Article

Publication Date

2010-06-01

Permanent URL

http://hdl.lib.byu.edu/1877/2595

Publisher

ACL Press

Language

English

College

Physical and Mathematical Sciences

Department

Computer Science
