Apples and Apples: Experimentation with and Benchmarking of Hyperparameter Tuning
Date: Wednesday, Sep 13, 2023, 11:00 - 12:30
- Matthias Seeger, Amazon
- Jacek Golebiowski, Amazon
- Matthias Poloczek, Amazon
- David Salinas, Amazon
Empirical studies in current HPO publications are often apples-and-oranges comparisons, plagued by many confounding factors, which can lower trust in the field and hinder adoption among practitioners, especially in industry. In this tutorial, we show how many confounding factors can be eliminated without restricting the freedom of researchers to innovate on methodology. In Syne Tune, HPO methods are implemented against a simple API, abstracting away details of trial execution and signalling. Experiments can be run locally, distributed in the cloud, or in simulation. Benchmarking on tabulated or surrogate benchmarks is simplified and standardized by a method-agnostic simulation backend and a blackbox repository, which provide fully realistic results over many benchmarks, methods, and random repetitions, often orders of magnitude faster than real time. An exploration can be switched from simulation to distributed execution in the cloud with a few lines of code. Syne Tune features clean implementations of many state-of-the-art (SotA) HPO methods, including multi-fidelity, constrained, and multi-objective modalities, so that studies can start from a wide range of baselines. It also comes with result aggregation and visualization tools, and incorporates best practices for comparing methods across several benchmarks.
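As a rough sketch of the workflow the abstract describes, a Syne Tune experiment couples a trial backend with a scheduler; the snippet below follows Syne Tune's public API, though exact signatures may vary across versions, and `train.py` is an assumed user-supplied training script that reports a `val_loss` metric per epoch:

```python
# Sketch of a Syne Tune experiment (assumes `syne_tune` is installed and
# a user-written train.py that reports val_loss per epoch via Syne Tune's
# reporting mechanism; names and signatures may differ between versions).
from syne_tune import Tuner, StoppingCriterion
from syne_tune.backend import LocalBackend
from syne_tune.config_space import loguniform, randint
from syne_tune.optimizer.baselines import ASHA

config_space = {
    "lr": loguniform(1e-4, 1e-1),   # learning rate, log-uniform prior
    "batch_size": randint(16, 256),
    "epochs": 27,                   # maximum resource, passed to the script
}

tuner = Tuner(
    # Runs trials as local processes; a simulator or cloud backend can be
    # substituted here without changing the rest of the setup.
    trial_backend=LocalBackend(entry_point="train.py"),
    # ASHA: an asynchronous multi-fidelity scheduler over epochs.
    scheduler=ASHA(
        config_space,
        metric="val_loss",
        mode="min",
        resource_attr="epoch",
        max_resource_attr="epochs",
    ),
    stop_criterion=StoppingCriterion(max_wallclock_time=600),
    n_workers=4,
)
tuner.run()
```

Switching from local execution to simulation or to distributed cloud execution amounts to replacing the backend object, which is the "few lines of code" the abstract refers to.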
Matthias Seeger is a principal applied scientist at Amazon Web Services, working on automation of large-scale training and transfer learning, AutoML, streamlining experimentation, and explainable AI. He received a PhD from the University of Edinburgh and held postdoctoral and assistant professor positions at Berkeley, MPI Tübingen, the University of Saarbrücken, and EPF Lausanne. He taught machine learning courses at EPFL, and gave tutorials at ICML 2008, DAGM 2010, MLSS Tübingen 2013, and ECCV 2020.
Jacek Golebiowski is a senior applied scientist at Amazon Web Services, working on AutoML, multi-objective HPO, and hardware recommendation for deep learning inference. He received a PhD from Imperial College London, at the intersection of machine learning and applied physics. At Imperial College, he taught courses on ML and computational methods for engineering students.
David Salinas is a senior applied scientist at Amazon Web Services, working on AutoML, multi-objective HPO, deep transfer learning, forecasting, and hardware recommendation for deep learning inference. He received a PhD from the University of Grenoble on computational geometry and was a postdoctoral researcher at Inria before joining Amazon. He has given a number of tutorials on forecasting and AutoML.
Matthias Poloczek is a principal applied scientist and science manager at Amazon Robotics. He designed science strategies for key ML initiatives and established mechanisms to enhance science and ML excellence that have been adopted in several VP organizations. Before joining Amazon, he evangelized Bayesian optimization and bandits at Uber AI Labs. He received a PhD from the University of Frankfurt, and held postdoctoral and assistant professor positions at Cornell University and the University of Arizona. Matthias taught courses on ML and decision-making under uncertainty at Cornell University and the University of Arizona.