Imagine being given an optimization problem where we can draw a sample $x$ and observe its cost $f(x)$. The problem is noisy, non-linear (i.e. it has multiple local and global minima), and costly to evaluate. In short, we have no idea what function to fit to such a problem, but we really want a global optimum! How do we approach such a problem?
Well, we could start by randomly sampling from the problem until we obtain an optimum. This approach is not very intelligent, interesting, or practical (sample evaluation is costly). Can we do better?
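To make this baseline concrete, here is a minimal sketch of naive random search, assuming a hypothetical cheap stand-in objective `noisy_objective` (in a real setting each evaluation would be expensive):

```python
import math
import random

def noisy_objective(x, rng):
    # Hypothetical noisy, multimodal objective (for illustration only).
    return math.sin(3 * x) + 0.1 * x ** 2 + rng.gauss(0, 0.05)

def random_search(n_samples=100, low=-5.0, high=5.0, seed=0):
    """Naive baseline: sample uniformly at random, keep the best cost seen."""
    rng = random.Random(seed)
    best_x, best_y = None, float("inf")
    for _ in range(n_samples):
        x = rng.uniform(low, high)
        y = noisy_objective(x, rng)
        if y < best_y:
            best_x, best_y = x, y
    return best_x, best_y

best_x, best_y = random_search()
```

Note that every sample here is spent blindly: no information from previous evaluations guides the next one, which is exactly what the methods below improve on.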
We could apply an intelligent sampling strategy that covers more of the optimization landscape, for example Monte Carlo methods. This makes things a little more interesting, but can we now exploit some structure in our samples (e.g. sample covariances), or could we evolve better samples from the good ones?
Sure, Evolutionary Algorithms, especially CMA-ES, can be applied. But wouldn't it be great if we had someone (an acquisition function) who tells us to sample more (explore) in those areas of the optimization landscape where we have no samples, and at the same time to sample more (exploit) in those areas where we have local optima? Yes, that would be great!
Keeping all these requirements in mind, we introduce Bayesian Optimization (BO). Since this is an elaborate topic, we divide it into a two-part series. This part builds up the basics required for BO, i.e. Bayes' Theorem and Gaussian Processes (GPs). Part II of the blog will introduce BO, the use of acquisition functions, exploration and exploitation strategies, and BO's applications.
Introduction to Bayes’ Theorem
Building on the context described in the Introduction, let us assume that our optimization problem can be modeled using a function $f$. Also, let's say we are given a few samples (data) $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^{n}$, where $y_i = f(x_i)$ and $\mathcal{D}$ is a collection of $n$ such samples. Then Bayes' Theorem suggests that:

$$p(f \mid \mathcal{D}) = \frac{p(\mathcal{D} \mid f)\, p(f)}{p(\mathcal{D})}$$
Here, although we do not know the true function $f$ which models our optimization problem, we assume one, and the prior $p(f)$ represents our confidence in this assumption. The posterior $p(f \mid \mathcal{D})$ represents our confidence about the function $f$, given (conditioned on) the samples $\mathcal{D}$.
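As a toy illustration of the posterior-proportional-to-likelihood-times-prior update, consider a hypothetical discrete set of candidate functions (simple slopes $f(x) = a x$, names and values invented for this sketch) instead of a full function space:

```python
import math

# Hypothetical candidate "functions" for f, each a simple slope: f(x) = a * x.
candidate_slopes = [0.5, 1.0, 2.0]
prior = {a: 1.0 / len(candidate_slopes) for a in candidate_slopes}  # uniform p(f)

# One observed sample (x, y), assumed corrupted by Gaussian noise of scale sigma.
x_obs, y_obs, sigma = 2.0, 2.1, 0.5

def likelihood(a):
    # p(D | f): Gaussian likelihood of observing y_obs if f(x) = a * x.
    return math.exp(-((y_obs - a * x_obs) ** 2) / (2 * sigma ** 2))

# Bayes' Theorem: posterior is proportional to likelihood * prior, then normalize.
unnorm = {a: likelihood(a) * prior[a] for a in candidate_slopes}
evidence = sum(unnorm.values())  # p(D)
posterior = {a: u / evidence for a, u in unnorm.items()}
```

After one observation the posterior concentrates on the slope most consistent with the data; GPs below perform the analogous update over an infinite family of functions in closed form.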
Multi-Variate Normal (MVN) Distribution
Before we formally define GPs, let's take a detour into some results about MVN distributions (see Murphy 2012, Chapter 4). If $x = (x_1, x_2)$ are jointly Gaussian with parameters

$$\mu = \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix}, \quad \Sigma = \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix},$$

then the conditional distribution of $x_1$ given $x_2$ is

$$p(x_1 \mid x_2) = \mathcal{N}\!\left(x_1 \mid \mu_{1|2}, \Sigma_{1|2}\right),$$
$$\mu_{1|2} = \mu_1 + \Sigma_{12}\Sigma_{22}^{-1}(x_2 - \mu_2),$$
$$\Sigma_{1|2} = \Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}.$$
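The Gaussian conditioning result (conditional mean $\mu_1 + \Sigma_{12}\Sigma_{22}^{-1}(x_2 - \mu_2)$ and covariance $\Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}$) can be checked numerically; a small sketch with NumPy (assumed available) on a hand-picked 2-D Gaussian:

```python
import numpy as np

def mvn_condition(mu1, mu2, S11, S12, S22, x2):
    """Conditional of x1 given x2, for jointly Gaussian (x1, x2)."""
    S22_inv = np.linalg.inv(S22)
    mu_cond = mu1 + S12 @ S22_inv @ (x2 - mu2)
    S_cond = S11 - S12 @ S22_inv @ S12.T
    return mu_cond, S_cond

# 2-D example: zero means, unit variances, correlation 0.8.
mu1, mu2 = np.array([0.0]), np.array([0.0])
S11, S12, S22 = np.array([[1.0]]), np.array([[0.8]]), np.array([[1.0]])
mu_c, S_c = mvn_condition(mu1, mu2, S11, S12, S22, x2=np.array([1.0]))
# Conditional mean 0.8 * 1.0 = 0.8; conditional variance 1 - 0.8^2 = 0.36.
```

Observing a correlated coordinate shifts the mean toward it and shrinks the variance — exactly the mechanism a GP uses to predict at new inputs.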
Introduction to Gaussian Process (GP)
If an MVN represents scalars and vectors, then intuitively, a GP represents a distribution which models functions, with a Gaussian prior over those functions. Say that we are given $n$ training samples $X$ with function values $\mathbf{f} = (f(x_1), \ldots, f(x_n))$, and test samples $X_*$ of size $n_*$; then the GP says that

$$\begin{pmatrix} \mathbf{f} \\ \mathbf{f}_* \end{pmatrix} \sim \mathcal{N}\!\left(\mathbf{0}, \begin{pmatrix} K & K_* \\ K_*^{\top} & K_{**} \end{pmatrix}\right)$$
where $K = K(X, X)$ is $n \times n$, $K_* = K(X, X_*)$ is $n \times n_*$, and $K_{**} = K(X_*, X_*)$ is $n_* \times n_*$. $K$ is also called the kernel function and, for example, can be given by the squared-exponential kernel

$$k(x, x') = \sigma_f^2 \exp\!\left(-\frac{(x - x')^2}{2\ell^2}\right)$$
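Assuming the kernel in question is the common squared-exponential kernel, a NumPy sketch of building these matrices for 1-D inputs (parameter names `sigma_f` and `ell` are this sketch's choices):

```python
import numpy as np

def sq_exp_kernel(X1, X2, sigma_f=1.0, ell=1.0):
    """Squared-exponential kernel matrix K(X1, X2) for 1-D input arrays."""
    # Pairwise squared distances between every point of X1 and X2.
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return sigma_f ** 2 * np.exp(-d2 / (2 * ell ** 2))

K = sq_exp_kernel(np.array([0.0, 1.0]), np.array([0.0, 1.0]))
# Diagonal entries equal sigma_f^2, since k(x, x) = sigma_f^2;
# off-diagonal entries decay with distance, here exp(-0.5) for |x - x'| = 1.
```

The kernel encodes the prior assumption that nearby inputs have strongly correlated function values, which is what lets the GP interpolate between samples.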
where $\sigma_f$ and $\ell$ are the parameters of the kernel function. Then, using the conditioning results from the section on MVNs, we get:

$$\mathbf{f}_* \mid X_*, X, \mathbf{f} \sim \mathcal{N}\!\left(K_*^{\top} K^{-1} \mathbf{f},\; K_{**} - K_*^{\top} K^{-1} K_*\right)$$
Hence, we see that new function values $\mathbf{f}_*$ can be estimated given the old data $(X, \mathbf{f})$ and new samples $X_*$. This process of modeling data as MVNs and deriving a distribution over function values at new data is collectively called GP modeling. Also, see the lectures on Gaussian Processes by Nando de Freitas.
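Putting the pieces together, here is a minimal GP-regression sketch in NumPy (noise-free observations, a squared-exponential kernel, and a small jitter term added for numerical stability; all names are this sketch's choices):

```python
import numpy as np

def kernel(X1, X2, sigma_f=1.0, ell=1.0):
    # Squared-exponential kernel for 1-D inputs.
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return sigma_f ** 2 * np.exp(-d2 / (2 * ell ** 2))

def gp_posterior(X_train, y_train, X_test, jitter=1e-8):
    """Posterior mean and covariance of f* at X_test, conditioned on (X_train, y_train)."""
    K = kernel(X_train, X_train) + jitter * np.eye(len(X_train))
    K_s = kernel(X_train, X_test)
    K_ss = kernel(X_test, X_test)
    K_inv = np.linalg.inv(K)
    mu = K_s.T @ K_inv @ y_train        # K*^T K^{-1} f
    cov = K_ss - K_s.T @ K_inv @ K_s    # K** - K*^T K^{-1} K*
    return mu, cov

X_train = np.array([-1.0, 0.0, 1.0])
y_train = np.sin(X_train)
mu, cov = gp_posterior(X_train, y_train, np.array([0.0]))
# At a training input, the noise-free posterior mean matches the observation
# and the posterior variance collapses to (nearly) zero.
```

The posterior variance in `cov` is what an acquisition function in Part II will consume: large variance marks regions worth exploring, while the mean `mu` marks regions worth exploiting.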