Meta-Learning for Randomized Optimization Heuristics

Abstract: Randomized optimization heuristics, such as Simulated Annealing or evolutionary algorithms, are applied very successfully to a wide range of complex optimization problems. Key to the success of each heuristic is the distribution from which it draws new candidate solutions. Modern heuristics use feedback from previously evaluated solutions to adjust this distribution dynamically during the optimization process. Such update strategies range from rather simple success-based rules to complex strategies combining static and dynamic information. Similar approaches can be found in Reinforcement Learning, although there they are applied to dynamic optimal control problems. While several types of evolutionary algorithms can be used to solve such problems, the Reinforcement Learning field has also developed its own approaches to learning optimal behavior.
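
As an illustration of the simple success-based rules mentioned above, the following sketch (not part of the workshop materials; the objective, the algorithm variant, and all parameter values are illustrative assumptions) shows a (1+1) Evolution Strategy whose Gaussian sampling distribution is adapted with the classic one-fifth success rule on a toy sphere objective.

import numpy as np

def sphere(x):
    # Toy objective: minimize the sum of squares.
    return float(np.sum(x ** 2))

def one_plus_one_es(f, dim=10, sigma=1.0, budget=2000, seed=0):
    """(1+1)-ES with the one-fifth success rule: the step size sigma of the
    Gaussian sampling distribution is adapted from success feedback."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(dim)
    fx = f(x)
    for _ in range(budget):
        # Draw a candidate solution from the current search distribution.
        y = x + sigma * rng.standard_normal(dim)
        fy = f(y)
        if fy <= fx:
            x, fx = y, fy
            sigma *= np.exp(1.0 / 3.0)    # success: widen the distribution
        else:
            sigma *= np.exp(-1.0 / 12.0)  # failure: narrow it more gently
    return x, fx, sigma

if __name__ == "__main__":
    best_x, best_f, final_sigma = one_plus_one_es(sphere)
    print(f"best f(x) = {best_f:.3e}, final sigma = {final_sigma:.3e}")

The increase and decrease factors are chosen so that the step size stays constant when roughly one in five candidates is an improvement, which is the classic target success rate behind the rule.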

Talks

Speaker: Carsten Witt
Slides: View online
Abstract: Evolutionary Algorithms (EAs) are general-purpose optimization heuristics applied in various settings, e.g., when problem-specific algorithms are not available, in black-box (derivative-free) optimization where the objective function is not given explicitly, and in settings under uncertainty such as noisy and dynamic optimization. I will give an introduction to the working principles of EAs and present their main components, such as search spaces, populations, mutation, and crossover. EAs usually come with various parameters, such as the mutation rate and population size, that have to be set properly to maximize performance. Therefore, I will also illustrate some challenges in parameter control that may benefit from mechanisms that automatically learn promising parameter settings.
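
To make the components mentioned above concrete, here is a minimal sketch (not taken from the talk) of a (1+1) EA on the OneMax toy problem with a simple success-based mutation-rate control; the doubling/halving rule and all constants are illustrative assumptions, not the specific parameter-control mechanism discussed by the speaker.

import random

def onemax(bits):
    # Toy pseudo-Boolean objective: count the number of ones.
    return sum(bits)

def one_plus_one_ea(f, n=100, budget=20000, seed=0):
    """(1+1) EA with an illustrative success-based mutation-rate control:
    the rate is doubled after an improving step and halved otherwise,
    kept within [1/n, 1/2]."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = f(x)
    rate = 1.0 / n
    for _ in range(budget):
        # Standard bit mutation: flip each bit independently with probability `rate`.
        y = [b ^ 1 if rng.random() < rate else b for b in x]
        fy = f(y)
        improved = fy > fx
        if fy >= fx:                        # (1+1) EA: accept if not worse
            x, fx = y, fy
        if improved:
            rate = min(2 * rate, 0.5)       # success: try a larger mutation rate
        else:
            rate = max(rate / 2, 1.0 / n)   # failure: fall back toward 1/n
        if fx == n:
            break
    return x, fx, rate

if __name__ == "__main__":
    _, best, final_rate = one_plus_one_ea(onemax)
    print(f"best fitness = {best}, final mutation rate = {final_rate:.4f}")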

Speaker: Johannes Lengler
Slides: View online
Abstract: Evolutionary algorithms are a powerful technique in the machine learning toolbox, and it is important to understand when and how this technique can be useful. I will discuss the situations in which evolutionary algorithms should be considered, and I will point out some examples related to deep learning and reinforcement learning.