Physical and Mathematical Sciences
Keywords: reinforcement learning, machine learning, multi-task, multi-objective, transfer
In the multi-objective reinforcement learning (MORL) paradigm, the relative importance of environment objectives is often unknown prior to training, so agents must learn to specialize their behavior to optimize different combinations of objectives that are specified post-training. These combinations are typically linear, so the agent is effectively parameterized by a weight vector that describes how to balance competing objectives. However, we show that behaviors can be successfully specified and learned using much more expressive non-linear logical specifications. We test our agent in several environments with various objectives and show that it generalizes to many never-before-seen specifications.
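As an illustrative sketch (not taken from the thesis), the standard linear setup described above collapses a vector of per-objective rewards into a scalar via a weight vector; the function name `scalarize` and the example objectives are hypothetical:

```python
import numpy as np

def scalarize(reward_vec, weights):
    """Linear scalarization: weighted sum of per-objective rewards.

    In MORL with linear preferences, a single weight vector specifies
    how competing objectives are traded off against each other.
    """
    reward_vec = np.asarray(reward_vec, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return float(weights @ reward_vec)

# Two hypothetical objectives (e.g. speed vs. safety), weighted 70/30.
r = scalarize([1.0, -0.5], [0.7, 0.3])  # 0.7*1.0 + 0.3*(-0.5) = 0.55
```

A non-linear logical specification, by contrast, cannot be captured by any single such weight vector, which is what motivates the more expressive specification language studied in the thesis.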
BYU ScholarsArchive Citation
Nottingham, Kolby, "Using Logical Specifications for Multi-Objective Reinforcement Learning" (2020). Undergraduate Honors Theses. 133.