Keywords

machine learning, annotation, active learning

Abstract

Traditional Active Learning (AL) techniques assume that the annotation of each datum costs the same. This is not the case when annotating sequences; some sequences take longer to annotate than others. We show that the AL technique which performs best depends on how cost is measured. Applying an hourly cost model based on the results of an annotation user study, we approximate the amount of time necessary to annotate a given sentence. This model allows us to evaluate the effectiveness of AL sampling methods in terms of time spent in annotation. We achieve a 77% reduction in annotation hours relative to a random baseline while reaching 96.5% tag accuracy on the Penn Treebank. More significantly, we make the case for measuring cost in assessing AL methods.
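
As an illustration of the kind of selection strategy the abstract describes, the sketch below shows cost-normalized uncertainty sampling: sentences are ranked by uncertainty per estimated annotation hour rather than by raw uncertainty. This is a minimal, hypothetical sketch, not the authors' actual method; the cost coefficients, the linear-in-length cost function, and the entropy score are all placeholder assumptions.

```python
import math
from typing import Sequence


def estimated_hours(sentence: Sequence[str],
                    base: float = 0.002,
                    per_token: float = 0.001) -> float:
    """Hypothetical hourly cost model: annotation time grows with
    sentence length. The coefficients are illustrative placeholders,
    not the values fit in the paper's user study."""
    return base + per_token * len(sentence)


def token_entropy(tag_probs: Sequence[Sequence[float]]) -> float:
    """Total entropy over per-token tag distributions, a common
    uncertainty score for sequence-labeling tasks like POS tagging."""
    total = 0.0
    for dist in tag_probs:
        total -= sum(p * math.log(p) for p in dist if p > 0.0)
    return total


def select_batch(pool, model_probs, budget_hours: float):
    """Greedy cost-normalized uncertainty sampling: rank sentences by
    entropy per estimated hour, then fill the time budget.

    `pool` is a list of token sequences; `model_probs[i]` holds the
    current model's per-token tag distributions for pool[i]."""
    ranked = sorted(
        range(len(pool)),
        key=lambda i: token_entropy(model_probs[i]) / estimated_hours(pool[i]),
        reverse=True,
    )
    batch, spent = [], 0.0
    for i in ranked:
        cost = estimated_hours(pool[i])
        if spent + cost > budget_hours:
            continue  # skip sentences that would exceed the time budget
        batch.append(i)
        spent += cost
    return batch
```

Ranking by uncertainty per unit cost, rather than raw uncertainty, is what allows a time-based evaluation to change the relative ordering of sampling methods, which is the abstract's central point.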

Original Publication Citation

Robbie Haertel, Eric Ringger, Kevin Seppi, James Carroll, and Peter McClanahan. June 2008. "Assessing the Costs of Sampling Methods in Active Learning for Annotation". In the Proceedings of the Conference of the Association for Computational Linguistics (ACL 2008). Columbus, Ohio.

Document Type

Peer-Reviewed Article

Publication Date

2008-06-01

Permanent URL

http://hdl.lib.byu.edu/1877/2641

Publisher

ACL Press

Language

English

College

Physical and Mathematical Sciences

Department

Computer Science
