Keywords

Markov Decision Processes, value iteration, policy iteration, prioritized sweeping, dynamic programming

Abstract

The performance of value and policy iteration can be dramatically improved by eliminating redundant or useless backups, and by backing up states in the right order. We study several methods designed to accelerate these iterative solvers, including prioritization, partitioning, and variable reordering. We generate a family of algorithms by combining several of the methods discussed, and present extensive empirical evidence demonstrating that performance can improve by several orders of magnitude for many problems, while preserving accuracy and convergence guarantees.
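To make the prioritization idea concrete, here is a minimal sketch of Bellman-error prioritized sweeping: states are kept in a priority queue keyed by the magnitude of their Bellman residual, and each backup reprioritizes the state's predecessors. The tiny three-state MDP (transition table `P`, rewards `R`, discount `gamma`) is invented purely for illustration and is not taken from the paper.

```python
import heapq

# Hypothetical 3-state MDP for illustration only: P[s][a] is a list of
# (next_state, probability) pairs; R[s][a] is the immediate reward.
P = {
    0: {'a': [(1, 1.0)], 'b': [(2, 1.0)]},
    1: {'a': [(2, 1.0)], 'b': [(0, 1.0)]},
    2: {'a': [(2, 1.0)], 'b': [(0, 1.0)]},
}
R = {
    0: {'a': 0.0, 'b': 0.0},
    1: {'a': 1.0, 'b': 0.0},
    2: {'a': 0.0, 'b': 0.0},
}
gamma = 0.9

# Predecessor sets: states whose backed-up value depends on s.
preds = {s: set() for s in P}
for s in P:
    for a in P[s]:
        for s2, _ in P[s][a]:
            preds[s2].add(s)

def backup(V, s):
    """One Bellman backup: max over actions of reward plus discounted value."""
    return max(R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
               for a in P[s])

def prioritized_vi(theta=1e-6):
    """Gauss-Seidel value iteration, backing up states in order of Bellman error."""
    V = {s: 0.0 for s in P}
    # Seed the max-heap (negated keys) with each state's initial Bellman error.
    heap = [(-abs(backup(V, s) - V[s]), s) for s in P]
    heapq.heapify(heap)
    while heap:
        neg_err, s = heapq.heappop(heap)
        if -neg_err < theta:
            break  # largest remaining error is below threshold
        V[s] = backup(V, s)
        # Reprioritize predecessors whose values may now be stale.
        for sp in preds[s]:
            err = abs(backup(V, sp) - V[sp])
            if err > theta:
                heapq.heappush(heap, (-err, sp))
    return V
```

Because high-error states are backed up first and only affected predecessors are requeued, the solver avoids the redundant full sweeps of classical value iteration; this is the flavor of speedup the paper studies systematically.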

Original Publication Citation

David Wingate and Kevin D. Seppi. "Prioritization Methods for Accelerating MDP Solvers." Journal of Machine Learning Research, 6:851-881, 2005. MIT Press, Cambridge, Massachusetts.

Document Type

Peer-Reviewed Article

Publication Date

2005-01-01

Permanent URL

http://hdl.lib.byu.edu/1877/2604

Publisher

MIT Press

Language

English

College

Physical and Mathematical Sciences

Department

Computer Science
