Algorithm Configuration through the Theoretician’s Lens

– Performance Guarantees for Randomized Search Heuristics with Multiple Parameters

Nowadays, computers assist researchers across many fields in finding good solutions to their complex problems. Many of these problems are immensely challenging for researchers and computers alike. Adding the right amount of randomness to the programs involved can drastically improve their output.

Typically, many factors determine which degree of randomness is beneficial. And while best practices exist, their theoretical justification is rather limited: despite the rapid progress in mathematical research over the last ten years, analyses have often considered only a single problem dimension instead of multiple ones. This is unsatisfying, as theoretical guarantees help determine the factors relevant to certain choices and thus also help improve the state of the art.

The aim of this research project was to better understand how the interplay of the properties of a problem and of the computer program applied to solve it affects the choice of the appropriate degree of randomness. The focus of this endeavor was to derive results with mathematical rigor, proving certain performance guarantees. Afterward, these results were communicated to practitioners in order for the theory to actually have an impact on practice.


The project was supported by the Paris Region Fellowship Program, and it ran from April 1, 2021, until August 31, 2022. During this time, I contributed to two conference papers (each winning a best-paper award) and one journal paper on the core topic of this project. Furthermore, the financial support allowed me to contribute to three more papers in other research areas that I am interested in. (See also the publications below.)


Early on, when talking to non-theoreticians about existing results for the computer programs central to this project (called randomized search heuristics, or RSHs for short), I realized that the uptake of existing theoretical results is rather low. Thus, I adjusted the aim of the project slightly and focused primarily on deriving theoretical guarantees that can be communicated clearly to a large audience.

One of the results suggests using an established benchmark from theoretical analyses also for empirical analyses. The reasoning behind this approach is that this benchmark comes with exact guarantees and thus provides ground truths for certain scenarios. In collaboration with empirical researchers from the domain of RSHs, we generalized the known theoretical guarantees to a framework of RSHs with multiple parameters. We then analyzed a learning approach that aims at configuring these parameters automatically, comparing it to the theoretically optimal configurations. We observed that the automatic learning worked well for lower-dimensional problems but worsened for higher dimensions. Our efforts were acknowledged with the best-paper award in the track for general evolutionary computation and hybrids at the top international conference for evolutionary algorithms (GECCO).
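To make this kind of setting concrete: the summary above does not name the benchmark, but a standard example from theoretical analyses is the OneMax function optimized by a (1+1) evolutionary algorithm, whose mutation rate is exactly the kind of parameter such a configuration approach tunes. The following is my own minimal sketch, not the setup of the paper:

```python
import random

def onemax(x):
    """Number of 1-bits; the global optimum is the all-ones string."""
    return sum(x)

def one_plus_one_ea(n, mutation_rate, max_evals=100_000, seed=0):
    """(1+1) EA: flip each bit independently with the given rate,
    keep the offspring if it is at least as good as the parent.
    Returns the number of evaluations until the optimum is found."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = onemax(x)
    for evals in range(1, max_evals + 1):
        y = [b ^ (rng.random() < mutation_rate) for b in x]
        fy = onemax(y)
        if fy >= fx:
            x, fx = y, fy
        if fx == n:
            return evals
    return max_evals

# Theory recommends a mutation rate of 1/n for OneMax.
print(one_plus_one_ea(50, 1 / 50))
```

An automatic configurator would, in this picture, search over `mutation_rate`, and the known exact guarantees for OneMax provide the ground truth it can be compared against.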

Another result considered one of the most basic RSHs: local search. This approach is easy to implement and, in some sense, lies at the core of many more involved RSHs. Arguably, its biggest downside is that it gets trapped in local optima. Different approaches exist to circumvent this. We analyzed various modifications of local search and provided performance guarantees for different types of local optima. Our results show that there is no single best strategy, but certain modifications have a potentially huge impact, so being informed about the pros and cons of the different operators is useful. We presented our results as a poster at Europe’s top conference for evolutionary computation (PPSN), communicating our findings to a large audience of people who utilize RSHs. Our efforts were acknowledged with both the best-paper award and the best-poster award of the conference.
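As an illustration of this setting (my own sketch, not the modifications analyzed in the paper): single-bit-flip local search stops at the first local optimum it reaches, and a simple modification such as random restarts can escape it. The TwoMax-style function below is a hypothetical example with one local and one global optimum:

```python
import random

def local_search(f, n, rng):
    """Single-bit-flip hill climbing: move to a strictly better neighbor
    until no single bit flip improves the solution (a local optimum)."""
    x = [rng.randint(0, 1) for _ in range(n)]
    improved = True
    while improved:
        improved = False
        for i in rng.sample(range(n), n):  # scan neighbors in random order
            y = x.copy()
            y[i] ^= 1
            if f(y) > f(x):
                x, improved = y, True
                break
    return x

def local_search_with_restarts(f, n, restarts, seed=0):
    """Restart modification: rerun local search from fresh random
    starting points and keep the best local optimum found."""
    rng = random.Random(seed)
    return max((local_search(f, n, rng) for _ in range(restarts)), key=f)

# TwoMax-style function: all-zeros is a local optimum (fitness n),
# all-ones is the global optimum (fitness 2n).
def twomax(x):
    return max(sum(x), len(x) - sum(x)) + (len(x) if all(x) else 0)

best = local_search_with_restarts(twomax, 30, restarts=10)
print(twomax(best))
```

A single run ends at whichever of the two optima its starting point leads to; with ten restarts, the chance that every run climbs to the local optimum is roughly one in a thousand, which is exactly the kind of trade-off performance guarantees for such modifications quantify.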

The last core result of this project considers a modular problem class characterized by different features, each of which comes with an adjustable intensity. This intensity changes the complexity of the problems in the class. We considered one feature, called neutrality, and analyzed it over its entire parameter range. Neutrality models the phenomenon that small changes to a solution result in no change in solution quality. The search process thus performs a random walk if it only considers close-by solutions. Our results show how the performance of a specific RSH changes across different parameter regimes. Further, we state our results in a general way so that they can be easily extended to larger classes of algorithms if certain core criteria are met. We published our results in one of the most prestigious journals for evolutionary computation: IEEE TEVC.
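Neutrality can be made concrete with a small construction (a hedged sketch in the spirit of such modular benchmark classes, not necessarily the exact formulation of the paper): collapsing each block of k bits to a single value by majority vote makes many single-bit flips fitness-neutral, and k plays the role of the intensity parameter.

```python
def neutral_onemax(x, k):
    """Collapse each block of k bits to one value by majority vote,
    then count the resulting ones (a neutrality layer over OneMax).
    Flipping a bit that does not change its block's majority leaves
    the fitness unchanged, so the search walks across a plateau."""
    assert len(x) % k == 0
    blocks = [x[i:i + k] for i in range(0, len(x), k)]
    return sum(1 for b in blocks if sum(b) > k // 2)

x = [1, 1, 0,  0, 0, 1,  1, 0, 1]  # three blocks of k = 3
print(neutral_onemax(x, 3))        # majorities 1, 0, 1, so fitness 2
```

Flipping, say, the third bit turns the first block into `[1, 1, 1]` without changing its majority, so the fitness stays the same; the larger k is, the more such neutral moves exist and the longer the random-walk phases become.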

In addition to the three papers above, I also contributed to papers in different domains that are within the scope of my research interests. These papers are listed below but not explained in more detail here, as they relate more to other goals of this project than to the scientific goal mentioned at the beginning.


I contributed to the following papers during the project: