Reproducible science of transferable learning with open foundation models
Date: Tuesday, Sep 12, 2023, 10:00 - 10:30
Jenia Jitsev is a computer scientist and neuroscientist, and co-founder and scientific lead of LAION e.V., the German non-profit research organization committed to open science around large-scale foundation models (openCLIP, openFlamingo) and datasets (LAION-400M/5B, DataComp). He also leads the Scalable Learning & Multi-Purpose AI (SLAMPAI) lab at the Juelich Supercomputing Center of the Helmholtz Association, Germany. His research lies at the intersection of machine learning and neuroscience, investigating learning as a generic process of incrementally building up a useful model of the surrounding world from available sensory observations and executed actions.

He did his PhD at the Frankfurt Institute for Advanced Studies (FIAS) and the Max Planck Institute for Brain Research on unsupervised learning in hierarchically organized recurrent networks of the visual cortex, and continued as a postdoc at the Max Planck Institute for Neurological Research in Cologne and the Institute of Neuroscience and Medicine at Research Center Juelich, working on models of unsupervised and reinforcement learning in cortico-basal ganglia loops.

At LAION and in his lab at the Juelich Supercomputing Center, Dr. Jitsev's current focus is on driving and democratizing research on scalable systems for generalist, transferable multi-modal learning, leading to foundation AI models capable of strong transfer with predictable behavior derived from corresponding scaling laws, and therefore easily adaptable to a broad range of desired tasks and hardware resource settings. For his work, Dr. Jitsev received the Best Paper Award at IJCNN 2012 and the Outstanding Paper Award at NeurIPS 2022.