Last edited: Mar 11, 2023


AutoML Applications (A), Benchmarks (B), Challenges (C), and Datasets (D) are indispensable for AutoML research and applications. AutoML Applications provide robust tools that work well in practice, Benchmarks help us measure progress, Challenges help us make rapid progress in specific areas, and new Datasets help us address underexplored areas and build better AutoML tools. As the ‘ABCD’ name implies, they form the foundations of our field; hence, this track welcomes all impactful work in these directions.

This call for papers extends the main call for papers with a special track. Please read the main CfP first; here, we only highlight how this special track differs from the main track. Please note that papers in this track should use a slightly different template (with a different footer).

Please use the standard AutoML stylesheet with the [abcdtrack] option, i.e., \usepackage[abcdtrack]{automl}.
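For illustration, a minimal preamble could look like the following sketch. Only the \usepackage line is prescribed by this call; the document class, title, and author commands are standard LaTeX placeholders, so please consult the official template for the exact setup.

    % Minimal sketch of an ABCD-track submission (placeholders, not the official template)
    \documentclass{article}
    \usepackage[abcdtrack]{automl}  % AutoML stylesheet with the ABCD-track footer

    \title{Title of Your ABCD Track Submission}
    \author{Author Names}  % single-blind: authors listed on the front page

    \begin{document}
    \maketitle
    % Paper content goes here.
    \end{document}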

Single-Blind Review

Since the authors and organizers of systems, benchmarks, challenges, and datasets are often easily identifiable, and since reviewers often need to look up the actual framework, submissions to this special track will undergo a single-blind review process, i.e., the authors of the submission should be listed on the front page. If there is a good reason to submit double-blind, and this does not hinder the review process, authors may also choose to do so.

Relationship to the main track

Submissions in this track will go through a separate review process, but all accepted papers will appear as part of the main track. The evaluation is as stringent as in the main track, and the paper formatting guidelines and publication venue are identical, except for a slightly different footer in the template. For authors unsure whether to submit to the main track or the ABCD track, the main criterion is that reviewing in this track focuses on evaluating the potential impact of individual applications, benchmarks, challenges, and datasets, rather than on algorithmic advances. Other practical considerations are the single-blind review and easy discoverability for people looking for AutoML applications, benchmarks, challenges, and datasets.

A

What makes a good AutoML application submission to this track?

We welcome both open-source AutoML software and applications in this category that help us bridge the gap between theory and practice. For software, we follow an approach similar to JMLR MLOSS, looking for papers about novel, well-engineered, well-established, and well-documented systems. To this end, a submission has to show that:

  1. It is a novel system that offers features or supports application domains that were not available before.
  2. It already has an established user base (shown by, e.g., GitHub stars, an active commit history by several developers, an active issue tracker, etc.).
  3. It is an open-source software package, released under a license that allows users to easily use and contribute to it.
  4. It achieves excellent performance on the addressed application domains.

For applications, we are specifically interested in real-world applications that have taught us valuable lessons in how to apply AutoML in practice. Submissions have to show that:

  1. It is an actual real-world application of AutoML highlighting aspects of AutoML that are often overlooked in the literature.
  2. New and/or surprising insights were obtained that are important and generally useful for AutoML practitioners.
  3. The real-world problem, as well as the developed solutions, are well-described to an extent that allows others to verify the findings.

We particularly welcome high-profile applications of importance to humankind.

B

What makes a good benchmark submission to this track?

The progress in the field of AutoML is often driven by empirical results. Although the community has made tremendous progress in defining best practices and benchmarks in recent years, we invite submissions to further enhance the quality of benchmarking in AutoML. This could include (but is not strictly limited to):

  1. Demonstrating pitfalls and proposing solutions in benchmarking AutoML systems
  2. Proposing new benchmarks (e.g., similar to HPOBench for hyperparameter optimization or the NASBench series for neural architecture search) or substantial extensions of existing benchmarks
  3. Approaches for more efficient benchmarking

For all submissions, it is important that all benchmarking data and tools are easily accessible, and that all benchmarking results are easily reproducible. All necessary datasets, code, and evaluation procedures must be accessible and well-documented.

C

What makes a good challenge submission to this track?

In recent years, there have been many challenges on AutoML, AutoDL, HPO, and NAS, pushing the community to new heights. Since neither running a meaningful competition nor gaining thorough insights from it is trivial, we invite submissions on the following topics:

  1. Design and visions for future challenges on AutoML
  2. Post-challenge analysis, highlighting gained insights and future open tasks
  3. Methodology and best practices for organizing AutoML challenges

We note that post-challenge analyses and insights, in particular, can also be contributed by challenge participants, not only by organizers.

D

What makes a good dataset submission to this track?

We welcome all types of new datasets, collections of datasets, or meta-datasets that open up new avenues of research into better AutoML systems. These include new datasets in underexplored areas of AutoML (e.g., imperfect, multi-modal, or multi-objective data), meta-datasets that contain hyperparameter configurations and their performance on many tasks, large datasets for pretraining AutoML methods, and many more. Submissions have to show that they are:

  1. Impactful. It must be clear how the dataset benefits AutoML research and/or its (future) applicability in the real world.
  2. Well-documented. The content of the data, how it was collected, and how it is intended to be used must be well described. We recommend using data documentation frameworks, such as datasheets for datasets.
  3. Easily accessible and easy to use. There must be a clear reference or URL to the website/platform where the dataset can be viewed and downloaded.
  4. Well-maintained. There should be sustainable hosting, licensing and a maintenance plan to ensure that the dataset will be accessible for the foreseeable future.

Important dates

There is no separate abstract submission deadline.

Paper submission deadline: April 13th, 2023

Review deadline: May 10th, 2023

Rebuttal period: May 15th - May 19th, 2023

Author notification: May 29th, 2023

ABCD Track Chair

Joaquin Vanschoren, j.vanschoren@tue.nl