Last Edited : Mar 11, 2023

It’s awesome when AutoML ‘just works’. However, as with any machine learning technique, methods presented at top conferences sometimes don’t work quite as well in practice. At the AutoML conference, we’re aiming to resolve this problem by launching a new track that takes a much more holistic approach to studying AutoML and machine learning techniques in general.

Whereas the main track will focus on algorithm design, this new track will welcome groundbreaking work on the ecosystem that enables the development of these algorithms: Applications, Benchmarks, Competitions, and Datasets. We call this the ABCD track for short, to signal that this work is crucial in providing solid foundations for the development of new machine learning methods. As such, we also hope that other conferences may want to follow suit.

Why is this important?

There can be many subtle reasons why results in AutoML papers don’t always translate effortlessly to the real world. First, real-world data may simply be different from the data that researchers had access to when developing their methods. Second, the algorithm implementations might have unknown limitations that only come to light when used by many people in practice. And third, the real world might challenge us with problems that require an entirely different approach. The ABCD track will encourage research that directly addresses these issues by empowering the community with better tools, by providing unique insights, and by bridging the gap between theory and practice.

How will we do this?

The ABCD track will serve as a novel venue for publications and as a forum for discussions on the science and practice of AutoML itself. To incentivize this work, all accepted papers will be treated equivalently to those in the main track in terms of publication and presentation at the conference. Submissions to this track will go through a separate review process, just as stringent as the main track’s, but with particular emphasis on the potential impact of the individual applications, benchmarks, challenges, and datasets, evaluated according to specific criteria. More details can be found in the call for papers.

What work can we find in this track?

First, Applications covers research and development into novel, open-source, well-engineered, and demonstrably high-impact AutoML systems and tools that enable AutoML to be applied to many real-world problems, or that accelerate AutoML research itself. It also covers high-quality analyses of real-world applications that teach us valuable lessons about how to apply AutoML in practice that could not be learned otherwise, especially in high-profile applications of importance to humankind.

Second, Benchmarks covers work that proposes new benchmarks, demonstrates pitfalls in benchmarking AutoML systems, or offers solutions for more efficient and correct AutoML benchmarking.

Third, Challenges covers work on creating challenges that successfully encourage researchers to explore new ground. It also includes challenge analyses (by organizers or participants) that yield important new insights, improve benchmarking, and push the community to new heights.

Fourth, Datasets covers important new datasets, collections of datasets, or meta-datasets that open up new avenues of research into better machine learning systems or focus research more closely on tasks with measurable real-world impact. Examples include new datasets in underexplored areas of AutoML (e.g. imperfect, multi-modal, or multi-objective data), meta-datasets that contain hyperparameter configurations and their performance on many tasks, large datasets to pretrain AutoML methods, and many more.

Who are the ABCD reviewers?

Like other reviewers, these will be volunteers from the community. We expect ABCD reviewers to have significant experience in one or more of the areas discussed above.

How do I sign up as an ABCD reviewer?

Glad you asked! We would love you to be a part of this important new initiative. To volunteer as a reviewer, please sign up here. To allow for sufficiently in-depth reviews, we plan to assign at most three papers to each reviewer.

When can I submit my work to the ABCD track?

Right now! Head over to the Call for Papers to find all instructions and tips. We’re very much looking forward to learning all about your work, and to working together with you to make sure that AutoML ‘just works’ :). The paper submission deadline is April 13th. Notification is on May 29th, with ample time before the NeurIPS Datasets & Benchmarks track deadline. We’re looking forward to your submission!

Joaquin & Frank