Systems & Computational Biology

Machine Learning

Machine learning (ML) is a rapidly developing branch of artificial intelligence (AI), with a long history rooted in statistics, computer science, and physics. Its recent successes in AI applications, from self-driving cars to content generation, have featured prominently in the scientific and popular news. Excitement for ML is also growing in the biomedical community as it becomes clear that ML could assist and improve practical applications ranging from medical image analysis to discovering patterns in patient databases, and could help address basic science questions such as predicting protein structure, interpreting genetic networks, and explaining the function of brain circuits.

In August 2016, faculty in systems and computational biology started the Reading Group on Recent Advances in Machine Learning, an informal monthly meeting in which we discuss the newest publications and techniques in ML. The meeting offers the opportunity to discover new applications of ML, learn the techniques that make such advances possible, and discuss higher-level conceptual issues.

Meetings typically last 1 to 1.5 hours, with a slide presentation followed by questions and discussion. Everyone is welcome to attend and join the interactive discussion. Please see below for the full calendar and meeting locations (which can vary from month to month), and contact Dr. Ruben Coen-Cagli for more information. Also, be sure to bookmark this page for easy reference and updates.

Calendar for 2023-24

Past Meetings

  • May 15, 2023. Zachary Flamholz (Einstein, Kelly lab) – "Large Language Models for Biologic Discovery"
  • April 24, 2023. Maryam Shanechi (University of Southern California)
  • March 27, 2023. Bo Wang (NIH/NCI-CCR) – "Machine Learning and the Mutational Effects Problem"
  • January 23, 2023. Kohitij Kar (MIT and York University) – "Probing the neural mechanisms of primate visual cognition"
  • December 27, 2022. Ulisse Ferrari (Sorbonne University, Paris) – "How do natural neuronal networks deal with noise?"
  • May 2022. Ilker Yildirim (Yale) – "Reverse-engineering the neural code in the language of objects and generative models"
  • April 2022. Olivier Henaff (Google DeepMind) – "Towards general self-supervised learning"
  • March 2022. Carsen Stringer (HHMI Janelia) – "Making sense of large-scale neural and behavioral data"
  • February 2022. Yinghao Wu's lab (Einstein) – "A structure-based machine learning method to classify binding affinities between TCR and peptide-MHC complexes"
  • November 2021. Thomas Serre (Brown) – "Feedforward and feedback processes in visual recognition"
  • November 2021. Ben Cowley (Princeton) – "Finding compact models of visual cortical neurons in macaque V4"
  • May 2021. Stephane Deny (Facebook AI) – "Self-Supervised Learning Inspired by the Visual System"
  • April 2021. Luigi Acerbi (University of Helsinki) – "Practical sample-efficient Bayesian inference for models with and without likelihoods"
  • March 2021. Saad Khan (Einstein, Kelly lab) – "Reinforcement Learning"
  • February 2021. Odelia Schwartz (University of Miami) – "Normalization in neuroscience and deep neural networks"
  • January 2021. Judy Wawira Gichoya (Emory) – "Machine Learning for Health in Real Life"
  • December 2020. Ingmar Kanitscheider (OpenAI) – "Emergent tool use from multi-agent autocurricula"
  • November 2020. Theofanis Karaletsos (Uber AI Labs) – "Structured priors for neural networks"
  • May 2020. Nikolaus Kriegeskorte (Columbia University, Director of Cognitive Imaging) – "Testing deep neural network models of human vision with brain and behavioral data"
  • April 2020. Special session on AI/ML initiatives for COVID-19
  • February 2020. Aude Genevay (MIT, Geometric Data Processing Group) – "Optimal transport and applications"
  • January 2020. YouTube video lecture by Surya Ganguli (Stanford) – "Deep Learning Theory: From Generalization to the Brain"
  • December 2019. Sacha Sokoloski (Einstein, Coen-Cagli lab) – "State of the Art of Artificial General Intelligence"
  • November 2019. Saad Khan (Einstein, Kelly lab) – "Interpretation methods for deep learning models: saliency mappings"
  • October 2019. Ruben Coen-Cagli (Einstein) – "Neural population control using 'mind-blowing' synthetic images"
  • May 2019. Rajesh Ranganath (NYU Courant Institute)
  • March 2019. Memming Park (Stony Brook) – "Multiscale interpretable models of neural dynamics"
  • February 2019. Ruben Coen-Cagli (Einstein) – "Probabilistic segmentation with U-Net"
  • January 2019. Adrian Jacobo (Rockefeller) – "Enhancing fluorescence microscopy with deep learning"
  • November 2018. Saad Khan (Einstein) – "Geometric deep learning"
  • October 2018. Ruben Coen-Cagli (Einstein) – "Adversarial networks"
  • May 2018. Bo Wang – "Multiscale Methods for Networks"
  • April 2018. Jonathan Vacher – "The scattering transform"
  • March 2018. Michoel Snow – "Topological data analysis"
  • February 2018. Sacha Sokoloski – "Recurrent Neural Networks for Sequence Learning"
  • January 2018. Saad Khan – "Opening the black box of Deep Neural Networks via Information"
  • December 2017. Dylan Festa – "Probabilistic programming with Stan"
  • November 2017. Daniel Pique – "Visualizing Data using t-SNE"
  • October 2017. Sacha Sokoloski – "Deep Convolutional Neural Networks"
  • September 2017. Shuonan Chen – "Bayesian sparse priors and shrinkage"
  • August 2017. Ruben Coen-Cagli – "The Variational Autoencoder"