Aaron Klein


I lead a research group at the ELLIS Institute Tübingen as part of OpenEuroLLM, a European initiative developing a family of foundation models for European languages. Our team focuses on advancing automated methods for large-scale pre-training of large language models.

I also co-host the virtual AutoML Seminar, organized by the ELLIS units in Berlin and Freiburg.

My research interests include:

  • AutoML, which aims to progressively automate various stages of the machine learning pipeline. Within AutoML, I primarily work on hyperparameter optimization and neural architecture search.

  • Black-box optimization (also known as gradient-free optimization), with a focus on Bayesian optimization methods.

  • Model compression of large language models to reduce resource consumption during inference.

Previously, I headed the AutoML research group at ScaDS.AI (Center for Scalable Data Analytics and Artificial Intelligence) in Leipzig.

Until 2024, I worked as an applied scientist at AWS, where I was part of the long-term science team behind SageMaker, AWS’s machine learning cloud platform, as well as the science team for Amazon Q, AWS’s generative AI assistant.

I earned my PhD at the University of Freiburg in 2019 in the machine learning lab under the supervision of Frank Hutter. In 2022, my co-authors and I received the Best Paper Award at the AutoML Conference. Earlier, in 2015, my collaborators from the University of Freiburg and I won the ChaLearn AutoML Challenge.

I have co-organized the Neural Architecture Search workshop at ICLR 2020 and ICLR 2021, and served as the local chair for the AutoML Conference 2023. I regularly review for top-tier venues including ICML, ICLR, NeurIPS, TMLR, and JMLR.

news

Jun 01, 2025 I started at the ELLIS Institute Tübingen to work on OpenEuroLLM.
May 31, 2025 One ICML paper and one AutoML Conf paper accepted.
Apr 06, 2025 Together with Pascal Kerschke, I am looking for a PhD student to work on multi-objective neural architecture search.
Apr 01, 2025 I will give a talk at the AutoML Summer School 2025.
Sep 26, 2024 Our paper HW-GPT-Bench: Hardware-Aware Architecture Benchmark for Language Models was accepted at the NeurIPS 2024 Datasets and Benchmarks Track.