Markov Decision Processes, value iteration, policy iteration, prioritized sweeping, dynamic programming
The performance of value and policy iteration can be dramatically improved by eliminating redundant or useless backups, and by backing up states in the right order. We study several methods designed to accelerate these iterative solvers, including prioritization, partitioning, and variable reordering. We generate a family of algorithms by combining several of the methods discussed, and present extensive empirical evidence demonstrating that performance can improve by several orders of magnitude for many problems, while preserving accuracy and convergence guarantees.
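The core idea in the abstract — backing up states in priority order rather than sweeping them uniformly — can be illustrated with a minimal prioritized-sweeping-style value iteration sketch. This is not the authors' exact family of algorithms, just one common instantiation of the general idea: keep a priority queue keyed on each state's Bellman residual, back up the highest-priority state, and requeue its predecessors. The data layout (`transitions[s][a]` as a list of `(prob, next_state)` pairs, `rewards[s][a]` as a scalar) is an assumption made for this sketch.

```python
import heapq

def prioritized_value_iteration(transitions, rewards, gamma=0.95, tol=1e-10):
    """Back up states in order of Bellman residual instead of sweeping uniformly.

    transitions[s][a] -> list of (prob, next_state) pairs (hypothetical layout)
    rewards[s][a]     -> immediate reward for taking a in s
    Returns the value function V as a list indexed by state.
    """
    n = len(transitions)
    V = [0.0] * n

    # Predecessor sets: which states can reach s2 in one step.
    preds = [set() for _ in range(n)]
    for s in range(n):
        for a in transitions[s]:
            for p, s2 in transitions[s][a]:
                if p > 0:
                    preds[s2].add(s)

    def bellman(s):
        # One-step lookahead: max over actions of expected backed-up value.
        return max(
            rewards[s][a] + gamma * sum(p * V[s2] for p, s2 in transitions[s][a])
            for a in transitions[s]
        )

    # Seed the queue with every state's residual (negated: heapq is a min-heap).
    heap = [(-abs(bellman(s) - V[s]), s) for s in range(n)]
    heapq.heapify(heap)

    while heap:
        neg_pri, s = heapq.heappop(heap)
        if -neg_pri < tol:
            break  # largest outstanding residual is below tolerance
        V[s] = bellman(s)
        # A change at s can only raise the residual of s's predecessors,
        # so those are the only states that need requeueing.
        for sp in preds[s]:
            pri = abs(bellman(sp) - V[sp])
            if pri >= tol:
                heapq.heappush(heap, (-pri, sp))
    return V

# Tiny 3-state chain: state 1 pays reward 1 for entering absorbing state 2.
V = prioritized_value_iteration(
    transitions=[{"r": [(1.0, 1)]}, {"r": [(1.0, 2)]}, {"r": [(1.0, 2)]}],
    rewards=[{"r": 0.0}, {"r": 1.0}, {"r": 0.0}],
    gamma=0.95,
)
```

On this chain the rewarding state is backed up first and its value propagates backward in a single pass, whereas an uninformed sweep order can need multiple full passes; this ordering effect is what the prioritization methods studied in the paper exploit at scale.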
Original Publication Citation
David Wingate and Kevin D. Seppi. "Prioritization Methods for Accelerating MDP Solvers." Journal of Machine Learning Research, 6(25):851-881, 2005. MIT Press, Cambridge, Massachusetts.
BYU ScholarsArchive Citation
Seppi, Kevin and Wingate, David, "Prioritization Methods for Accelerating MDP Solvers" (2005). Faculty Publications. 1005.
Physical and Mathematical Sciences
© 2005 David Wingate and Kevin Seppi. Original publication may be found at http://jmlr.csail.mit.edu/.