Research Fellow - Deep Learning Theory & AI Safety
Closing date: 26 March 2026

We invite applications for a postdoctoral training fellowship under the guidance of Professor Andrew Saxe. The Saxe Lab works across the Gatsby Computational Neuroscience Unit and the Sainsbury Wellcome Centre and is focused on understanding learning in biological and artificial systems.

This role lies at the interface of deep learning theory and AI safety. You will conduct research on artificial deep networks using techniques from applied mathematics and statistical physics. A particular focus is understanding how depth affects learned representations and network behaviour, with applications to AI safety including emergent misalignment, unlearning and the efficacy of safety fine-tuning. The project is a collaboration with the Sarao Mannelli group at Chalmers University of Technology, where additional Research Fellows will be based. Funding for collaborative visits is included.

This post is funded for two years by a grant from OpenAI's Alignment Team, awarded by the UK AI Security Institute through the Alignment Project. The appointment will be on the UCL Grade 7 salary scale (£45,103 - £52,586). Grade 7 Research Fellows are also eligible for a departmental allowance of up to £5,000 p.a.

You should have a PhD in Computer Science, Physics or a closely related discipline (Engineering, Theoretical Neuroscience, Mathematics), or have submitted your final thesis by the agreed start date of the position. A proven track record of publishing lead-author work on the theory of deep learning or AI safety is essential, as is a demonstrable interest in the mathematical analysis of artificial neural network models. A demonstrable interest in AI safety and a track record of running controlled empirical experiments on large deep network models, on HPC or via frontier model APIs, are desirable.
For detailed information on the role and how to apply, please visit “UCL Job: B02-10169” (https://www.ucl.ac.uk/work-at-ucl/search-ucl-jobs/details?jobId=42098&jobTit...).