Brains have a seemingly infinite ability to learn, remember and forget. As a proxy for learning in the brain, I study the learning rules that unlock such flexible and robust learning in neural network models. I am particularly interested in the relative contributions and interplay of the various processes thought to be involved in biological learning, for instance synaptic plasticity, structural plasticity and innate connectivity motifs. To understand how these processes may be implemented in the brain, I use tools from physics and machine learning, combining top-down approaches (e.g. analytical derivations in linear recurrent networks) and bottom-up approaches (e.g. numerical simulations and optimization of rate or spiking recurrent networks).
I am interested in how intelligent agents can make rich and structured inferences from impoverished data. I demonstrate how such abilities can be implemented via learning algorithms in neural networks, with the aim of grounding higher-level cognitive phenomena in a candidate neural implementation. This research requires me to use cross-disciplinary insights from psychology, neuroscience, and machine learning, in addition to a combination of behavioral experiments, computational simulations, and analytical techniques.
How does data structure affect learning dynamics? Can we design learning paradigms that take advantage of such structure? Can prior knowledge help in learning? How? My research lies at the intersection of theoretical machine learning and neuroscience, and gravitates around these questions. In short, I like to construct simple models that capture emergent phenomena in learning by identifying a few key parameters. Using methods from statistical physics, I analyze such models and obtain exact equations that yield quantitative insight into the underlying process.
How does the brain rewire itself in response to experience? We think that learning in the brain proceeds by changing the connection strengths between neurons, but what are the rules that govern this process? What kinds of representations do they produce? And how do these representations support adaptive (and maladaptive) generalization of previously learned relationships? My research aims to address these questions by studying the behavior of rodents and their patterns of neural activity as they learn to perform complex tasks with richly structured associations.
I earned my PhD from Imperial College London under the supervision of Claudia Clopath. During my doctoral studies and subsequent postdoctoral position, I focused on developing mechanistic models of synaptic plasticity. Seeking closer collaboration with experimental researchers, I joined Athena Akrami’s lab at the SWC, where we developed a cross-species research project investigating decision-making in rodents, humans, and computational models. Recently, I joined the Saxe lab to delve deeper into the intersection of experimental and theoretical neuroscience, with a particular interest in the mechanisms underlying learning and memory formation.
I’m a student on the 1+3 doctoral programme in neuroscience, co-supervised by Andrew Saxe and Adam Packer. My project involves devising and testing experimental predictions, under different theories, about network dynamics across the visual cortical hierarchy: in response to activity perturbations, during sensory experience, and throughout the course of perceptual learning. Previously, I studied neuroscience at the University of Bristol and spent time in industry at Roche, where I studied the development of excitatory-inhibitory balance in human stem-cell-derived neurons.
I am interested in machine learning paradigms involving multiple tasks, such as continual learning (tasks in sequence) and meta-learning (task distributions). I want to develop a better understanding of the phenomena deep neural networks exhibit in these problem settings. I am jointly supervised by Claudia Clopath at Imperial College London. Before starting my PhD, I completed my Master’s in computer science with Andrew Saxe in Oxford.
Before joining the lab as a DPhil student, I studied Cognitive Science at the University of Osnabrück and Computational Neuroscience at the Bernstein Center for Computational Neuroscience in Berlin. My research aims to develop mathematical tools and use simulation studies to understand the learning dynamics of gradient-based algorithms and how they apply to learning in biological neural networks.
I’m interested in how visual working memory representations change during learning. I am supervised by Masud Husain and Andrew Saxe and funded by the ESRC and New College.
Can learning in the brain approximate end-to-end learning like gradient descent? How do brain-wide changes accumulate to improve task performance? I am tackling these and other questions by investigating the role of midbrain dopamine neurons (and their projections to the striatum) in mice as they learn to perform a visual decision task, from day 1 to expert performance.
My aim is to build a theoretical framework linking the observed behaviour to the measured dopamine release and neural recordings, and then to explore extensions of the model that can account for continual learning (i.e. learning a second task after training on the first) and the cognitive control of learning (i.e. deciding whether, and how much of our learning abilities, to invest in learning a new task).
I am funded by the Department of Physiology, Anatomy and Genetics and co-supervised by Andrew Saxe and Armin Lak.
I am a second-year student at Gatsby co-supervised by Andrew Saxe and Felix Hill. I’m interested in how humans and AIs can form abstract concepts and transfer them across tasks (including across different state spaces). Currently, I study this through the lens of Sudoku-esque Nikoli puzzles, which offer a great test bed where concept learning is easily quantifiable and high-level concepts are known to apply across puzzles. I’m also generally interested in large language models and their emergent abilities.
In my free time, I love to play strategy board games, listen to sci-fi/fantasy audiobooks, and vibe to (mostly Ed Sheeran) music.
I am particularly interested in the computational theories underlying learning and memory consolidation in neuronal networks. I want to work toward answering questions such as: How does the brain create, store, generalize and update memories without interfering with previously stored memories? What is the function of episodic memory? I look forward to making advances on these questions, working at the intersection of theoretical neuroscience and machine learning.
Whether a novel task is worth learning, how effort may impact learning, and how much effort to allocate towards learning are important questions agents face. To answer them, I’m currently working on models of cognitive control for learning systems, particularly on how control signals might shape the learning dynamics in linear networks. I’m also collaborating with Clementine Domine (elsewhere on this page) to build software that lets people test their models of the hippocampus and entorhinal cortex in simulated environments resembling experimental settings, for direct comparison with the neural data recorded in each experiment. I have a broad set of interests, including neuroscience, machine learning, biology, and philosophy of science. Feel free to reach out if you want to chat!
I am a student on the GUDTP 1+3 in Experimental Psychology at the University of Oxford, co-advised by Chris Summerfield and Andrew Saxe.
My interests range from cognitive neuroscience and psychology to computational neuroscience and machine learning. My MSc work investigates semantic learning. Specifically, I will examine whether the behavioural and representational changes that occur during the learning of semantic knowledge are analogous to those observed in deep linear networks. To this end, we employ behavioural experiments, neuroimaging, and modelling.
I’m interested in how task and data structure can promote the learning of representations that allow reuse/composition/transfer in different neural network architectures.
Compositional learning and inference is a core capability of intelligent agents. It allows them to learn and perform complex tasks flexibly by combining lower-level tasks and knowledge they have already acquired.
My aim is to formulate compositional cognitive processes in biological agents and emulate them in artificial neural networks, in order to establish an analytical understanding of the learning process.
To do this, I plan to combine computational and theoretical approaches for describing compositional learning with experimental approaches for figuring out how biological agents represent and solve tasks with underlying compositional structure.