Existing approaches to continuous optimization are essentially mechanisms for deciding which locations to sample in order to obtain information about a target function's global optimum. While often effective in particular domains, these methods generally base their decisions on heuristics built around ill-defined desiderata rather than on explicitly stated goals or models of the available information that may be used to achieve them. Numerical optimization is fundamentally a problem of deciding what information to gather and then using that information to infer the location of the global optimum, so it is natural to model the problem in the language of decision theory and Bayesian inference. The contribution of this work is precisely such a model of the optimization problem: one that explicitly describes information relationships, admits a clear expression of the target function class as dictated by the No Free Lunch theorems, and makes rational, mathematically principled use of utility and cost. The result is an algorithm that displays surprisingly sophisticated behavior when supplied with simple, straightforward declarations of the function class and of the utilities and costs of sampling. In short, this work suggests that continuous optimization is equivalent to statistical inference and decision theory, and that viewing the problem in this way yields concrete theoretical and practical benefits.
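The decision-theoretic view sketched above can be illustrated with a toy example (this is an assumption-laden illustration, not the model developed in this work): a crude Gaussian surrogate built from a handful of observations, an expected-improvement utility, and a fixed sampling cost, with the next sample location chosen to maximize net expected value. All function names, the surrogate form, and the parameter values here are illustrative assumptions.

```python
import math

def normal_pdf(z):
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def normal_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def surrogate(x, obs, length=0.15):
    """Crude stand-in for a Bayesian posterior over f(x): an RBF-weighted
    mean of the observations, with uncertainty that grows away from data."""
    ws = [math.exp(-((x - xi) / length) ** 2) for xi, _ in obs]
    mu = sum(w * yi for w, (_, yi) in zip(ws, obs)) / sum(ws)
    sigma = max(1e-9, 1.0 - max(ws))  # near zero at observed points
    return mu, sigma

def expected_improvement(x, obs):
    """Expected utility of sampling at x when minimising f."""
    best = min(y for _, y in obs)
    mu, sigma = surrogate(x, obs)
    z = (best - mu) / sigma
    return (best - mu) * normal_cdf(z) + sigma * normal_pdf(z)

def next_sample(obs, cost=0.001, grid=101):
    """Choose the candidate whose expected utility, net of the
    (here constant) cost of drawing a sample, is largest."""
    cands = [i / (grid - 1) for i in range(grid)]
    return max(cands, key=lambda x: expected_improvement(x, obs) - cost)

# Three observations of a hidden function, here f(x) = (x - 0.3)**2.
obs = [(0.0, 0.09), (0.5, 0.04), (1.0, 0.49)]
x_next = next_sample(obs)
```

Even this caricature shows the behavior the abstract alludes to: once the function class (via the surrogate) and the utility and cost of sampling are declared, "where to sample next" falls out of a single expected-value calculation rather than from ad hoc heuristics.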



College and Department

Physical and Mathematical Sciences; Computer Science



Keywords

optimization, evolutionary computation, value of information, utility, decision process, graphical model