Organizers
Nick Erickson is a Research Scientist at Prior Labs. He is the lead developer and maintainer of the AutoML framework AutoGluon and of TabArena, a living benchmark for tabular ML. He has co-authored research on foundation models for structured data and automated data science agents, including Mitra, XTab, Chronos-2, and MLZero, and has published at and served as a reviewer for ICML and NeurIPS, with recognition as a top reviewer at ICML 2025. Nick has delivered 15+ invited talks and tutorials at venues including ICML, NeurIPS, KDD, PyData, GTC, and re:Invent. He has also helped lead community efforts as a co-organizer of the Kaggle Grand Prix 2024, PC Chair of the AutoML Conference 2025, and co-organizer of the inaugural “Foundation Models for Structured Data” workshop at ICML 2025.
Xiyuan Zhang is an Applied Scientist at Amazon Web Services working on structured data analysis, including tabular and time series data. She is the lead author of Mitra, which ranks #1 and #4 in HuggingFace downloads for tabular regression and classification, respectively, with 7M+ total downloads in 6 months. She is also a co-author of Chronos and Chronos-2, the most downloaded time series foundation models on HuggingFace. Xiyuan earned her PhD in Computer Science from the University of California, San Diego. Her research has resulted in over 15 publications at top-tier machine learning conferences such as NeurIPS, ICML, ICLR, and KDD. Xiyuan is a recipient of the Qualcomm Innovation Fellowship and has been recognized as a Cyber-Physical Systems (CPS) Rising Star. She co-organized the “Foundation Models for Structured Data” workshop at ICML 2025 and is co-organizing the “Time Series in the Age of Large Models” workshop at ICLR 2026. Her broad research interests lie in machine learning for sequential data (time series, spatiotemporal, tabular, NLP), with practical impact in areas including healthcare, IoT, and climate science.
Mononito Goswami is an Applied Scientist in the Agentic AI organization at Amazon Web Services, focusing on building pragmatic agents that can model structured data. He recently earned his Ph.D. from Carnegie Mellon University, where his research focused on developing foundation models for structured data. He has led the development of widely used foundation models, including MOMENT, a foundation model for time series that has garnered over 2 million downloads on HuggingFace and attracted more than $2 million in research funding. In 2021, he was awarded the Center for Machine Learning and Health (CMLH) fellowship. His research has been published at premier machine learning conferences including NeurIPS, ICLR, and ICML. He serves as a regular reviewer for these venues and has been recognized as a top reviewer. Mononito co-organized the AAAI 2024 Spring Symposium on Clinical Foundation Models, the “Foundation Models for Structured Data” workshop at ICML 2025, and the “Time Series in the Age of Large Models” workshop at ICLR 2026. His research interests lie at the intersection of LLM agents, foundation models, and structured data, with a particular focus on developing practical machine learning solutions for healthcare and education applications.
Lennart Purucker is an AI Scientist at Prior Labs and a PhD student at the University of Freiburg, focusing on benchmarking and foundation models for tabular data. He is the research lead of the TabArena benchmarking framework, a core contributor to the predictive machine learning system AutoGluon, and a core contributor to the benchmarking platform OpenML. He co-authored TabPFNv2, RealTabPFN, and TabPFN-2.5, a series of foundation models for predictive tasks on tabular data. He also collaborates with interdisciplinary researchers and industry partners at the University of Freiburg through the Small Data Initiative. Lennart is actively involved in the research community as a reviewer for various conferences, as Reproducibility Chair for the AutoML Conference (2023–2025), and as a co-organizer of the AutoML Seminar. He was a co-organizer of the inaugural “Foundation Models for Structured Data” workshop at ICML 2025 and of the EurIPS’25 Workshop on “AI for Tabular Data”.
Boran Han is a Senior Applied Scientist at Amazon Web Services working on multimodal time series foundation models and LLM agents for AutoML systems. She is one of the lead authors of AutoGluon Assistant, the only automated agent to score points in Kaggle’s AutoML Grand Prix 2024, placing 10th overall. She has also co-authored research on foundation models for structured data, including Mitra and Chronos-2. She earned her Ph.D. from Harvard University. She has published at and served as a reviewer for venues such as NeurIPS, ICML, and ICLR, and has also published in esteemed journals including Science, Cell, Nature Communications, and Proceedings of the National Academy of Sciences. Her broad research interests lie in multimodal foundation models and their applications in natural science domains such as healthcare and climate.
Maximilian Schambach is an Expert AI Scientist at SAP working on foundation models for structured data. Within SAP, he is the workstream lead for the development of Tabular Foundation Models, which has resulted in a NeurIPS Spotlight paper and an open-source model (ConTextTab, currently ~500k HuggingFace downloads), as well as a model productized at SAP (RPT-1). His research focuses on developing robust, scalable tabular models that adapt to the broad characteristics and modalities of real-world tabular data and utilize additional semantic or structured context. He holds a PhD from Karlsruhe Institute of Technology, where he worked on deep learning for the reconstruction of coded measurements in computer vision.
Mila-Quebec AI Institute / University of Montreal
Arjun Ashok is a third-year PhD student at the Mila-Quebec AI Institute and Université de Montréal, and a Student Researcher at Google Cloud AI Research in the San Francisco Bay Area. His research interests are in time series forecasting, specifically in building flexible models. His notable contributions include making transformer architectures faster and better for forecasting tasks (TACTiS-2), building some of the early foundation models in this space (Lag-Llama), developing methods and benchmarks for textual-context-aided forecasting (CiK, Beyond DP), and building post-training data pipelines to adapt LLMs for multimodal forecasting (CAF-7M). Prior to Google, Arjun spent three years at ServiceNow Research in Montreal as a Visiting Researcher, where he worked on forecasting research and published at top-tier venues. He co-organized the “Time Series in the Age of Large Models” workshop at NeurIPS 2024, with a second edition to be held at ICLR 2026.
Google Research
Rajat Sen is a Research Scientist at Google Research working on foundation models for time series and tabular data, as well as combinations of these modalities with others. His main focus is on scaling datasets (both real and synthetic) and models, and on establishing scaling laws for these modalities. He is also interested in unlocking new capabilities, such as enabling time-series foundation models to respond to textual event data. He is a co-creator of TimesFM.