Instructor(s): Balázs B Ujfalussy, Mihály Bányai
Weeks: 1-14
Contact hours: 2x2 hours/week
Credit: 4 credits

The course is organised around six topics, each focusing on a computational task and discussing its algorithmic solution and its connections to neural data. We cover the necessary mathematics and the primary literature, and then work through the details of each approach via exercises in programming and data analysis.

Aim of the Course:

The aim of the course is to familiarise students with some of the landscape of algorithmic modelling approaches to the way humans and animals learn and act, as well as with the underlying neurophysiology. By placing these approaches on a principled mathematical and natural-scientific foundation, we aim to give students the ability to critically assess published results in both neuroscience and artificial intelligence, and to provide hands-on experience in developing models, working with neuronal data and applying efficient algorithms to solve computational problems. The algorithms we have chosen are central to Bayesian statistics, machine learning and reinforcement learning, and thus potentially lay the foundations for the development of future AI systems.

Prerequisites:

  • Basics of linear algebra (operations with matrices and vectors)
  • Basics of multivariate calculus (derivatives and integrals)
  • Basics of probability theory (random variables, normal distributions)
  • Basics of Python programming (variables, lists, arrays, loops, functions)

Syllabus:
Each of the 6 topic blocks follows this structure of 4 lectures:

  1. Foundations: a lecture demonstrating a fundamental computational problem and its mathematical and algorithmic solution, with connections to neural data. Papers for session 3 are distributed.
  2. Technical details: a tutorial session with a deeper, more practical presentation of the mathematical background and an illustration of the algorithm in Python notebooks. The programming exercises for session 4 are selected.
  3. Paper dissection (student-led): each group dissects a paper chosen from a list offered by the lecturers, who also provide guidance in the form of specific questions for each paper.
  4. Presentation and discussion of the programming exercises (student-led): each group presents an exercise chosen from a list offered by the lecturers. The TA and the lecturers are available for guidance throughout the week.

In each case, the coding exercise starts with a demo notebook, provided by the instructors, that should be modified or extended by the students to address a specific neuroscience problem. The students have to change the code, evaluate the results and interpret them in the context of the relevant neuroscience experiments.
Example coding exercises:

  • Perform inference in a simple generative model (Gaussian mixture clustering, linear Gaussian models or similar); a minimal sketch follows this list.
  • Analyse the variability of neuronal responses in a dataset from the visual cortex.
  • Implement a solution to the speed-accuracy tradeoff in a decision making model.
  • Detect signatures of the speed-accuracy tradeoff in a neural dataset.
  • Adapt the Q-learning algorithm for a simple problem chosen by the lecturers.
  • Investigate the sensitivity of a replay detection algorithm to temporal binning using a real hippocampal dataset.
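
To give a flavour of the exercises, here is a minimal, hypothetical sketch in the spirit of the first one: computing posterior cluster responsibilities in a two-component Gaussian mixture. All parameter values and data below are invented for illustration; the demo notebooks provided in class define their own models and datasets.

    # Minimal sketch: posterior responsibilities in a 1-D, two-component Gaussian mixture.
    # All numbers are illustrative, not course material.
    import numpy as np
    from scipy.stats import norm

    weights = np.array([0.7, 0.3])        # prior probability of each component
    means   = np.array([0.0, 3.0])        # component means
    sds     = np.array([1.0, 0.5])        # component standard deviations

    x = np.array([-0.5, 1.2, 2.8, 3.1])   # toy observations

    # Likelihood of each observation under each component: shape (n_obs, n_components)
    lik = norm.pdf(x[:, None], loc=means[None, :], scale=sds[None, :])

    # Bayes' rule: responsibility = prior x likelihood, normalised over components
    post = weights[None, :] * lik
    post /= post.sum(axis=1, keepdims=True)
    print(post)                           # each row sums to 1: posterior over cluster assignments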

The 28 lectures are divided into topics as follows:

1-3. Introduction. Algorithms for understanding brain function. Overview of biological structures and measurement techniques involved in the study of learning. Overview of Bayesian probability theory. 

4-7. Block 1: Bayesian inference - Perception as an inference problem. Generative models and inference algorithms. Inference in the visual system. Representing uncertainty in the brain.
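
As a toy illustration of perception as inference (not part of the course material), the sketch below combines a Gaussian prior with a noisy Gaussian measurement; the specific numbers are made up.

    # Illustrative sketch: Bayesian cue combination with Gaussians.
    # The posterior precision is the sum of the prior and likelihood precisions,
    # and the posterior mean is their precision-weighted average.
    import numpy as np

    mu_prior, sd_prior = 0.0, 2.0   # prior belief about a stimulus feature (hypothetical)
    obs, sd_obs = 1.5, 1.0          # noisy sensory measurement and its noise level

    prec_prior, prec_obs = 1 / sd_prior**2, 1 / sd_obs**2
    prec_post = prec_prior + prec_obs
    mu_post = (prec_prior * mu_prior + prec_obs * obs) / prec_post

    print(mu_post, np.sqrt(1 / prec_post))   # the posterior mean is pulled towards the more reliable cue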

8-11. Block 2: Decision making - Actions, loss functions and value functions, sequential value estimation, representing uncertainty during decision making, evidence integration. The drift-diffusion model of evidence integration, neural representation of decision variables.
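
A minimal simulation of the drift-diffusion model, with arbitrary parameter values chosen purely for demonstration, might look like the following sketch; raising the bound trades speed for accuracy.

    # Illustrative sketch: simulating the drift-diffusion model of evidence integration.
    # Evidence accumulates with drift v plus Gaussian noise until it hits +a (choice 1) or -a (choice 0).
    import numpy as np

    def ddm_trial(v=0.2, a=1.0, sigma=1.0, dt=0.001, rng=None):
        rng = rng if rng is not None else np.random.default_rng()
        x, t = 0.0, 0.0
        while abs(x) < a:
            x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        return (x > 0), t                     # (choice, reaction time)

    rng = np.random.default_rng(0)
    trials = [ddm_trial(rng=rng) for _ in range(1000)]
    choices, rts = zip(*trials)
    print(np.mean(choices), np.mean(rts))     # accuracy and mean reaction time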

12-15. Block 3: Navigation - Sequential decision making problems: POMDPs. Different forms of uncertainty: model-, state-, and value-uncertainty. Route planning and graph search problems. Elements of navigation algorithms in the brain: place cells, the cognitive map, grid cells, path integration. Offline and online computations during navigation.
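
To make the graph-search flavour of route planning concrete, here is a small, hypothetical example: breadth-first search for the shortest route through a toy grid maze (the maze and the start and goal positions are invented).

    # Illustrative sketch: route planning as graph search (breadth-first search on a toy grid maze).
    from collections import deque

    maze = ["....#",
            ".##.#",
            "....#",
            ".#...",
            "...#."]
    start, goal = (0, 0), (4, 4)

    def bfs(maze, start, goal):
        rows, cols = len(maze), len(maze[0])
        frontier, came_from = deque([start]), {start: None}
        while frontier:
            r, c = frontier.popleft()
            if (r, c) == goal:
                break
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and maze[nr][nc] == "." and (nr, nc) not in came_from:
                    came_from[(nr, nc)] = (r, c)
                    frontier.append((nr, nc))
        # Reconstruct the shortest route by walking back from the goal
        path, node = [], goal
        while node is not None:
            path.append(node)
            node = came_from[node]
        return path[::-1]

    print(bfs(maze, start, goal))   # list of grid cells from start to goal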

16-19. Block 4: Reinforcement learning (RL) - algorithms to solve the sequential decision making problem. Value-based learning and using reward prediction errors to model neural activity.
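
The sketch below shows value-based learning driven by reward prediction errors on a tiny, invented chain task; it is only meant to illustrate the TD(0) update, not any particular course assignment.

    # Illustrative sketch: temporal-difference learning of state values on a 5-state chain
    # with a reward at the final state. The prediction error delta plays the role often
    # ascribed to dopamine signals.
    import numpy as np

    n_states, alpha, gamma = 5, 0.1, 0.9
    V = np.zeros(n_states)                       # value estimates, initialised to zero

    for episode in range(200):
        s = 0
        while s < n_states - 1:
            s_next = s + 1                       # the agent simply moves right along the chain
            r = 1.0 if s_next == n_states - 1 else 0.0
            v_next = 0.0 if s_next == n_states - 1 else V[s_next]
            delta = r + gamma * v_next - V[s]    # reward prediction error
            V[s] += alpha * delta                # TD(0) update
            s = s_next

    print(V)   # values fall off by a factor of gamma with distance from the reward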

20-23. Block 5: Model-based RL - building a world model for sequential decision making. Behavioural markers of model-based and model-free learning. Replaying episodic memories or samples from learned models. Using replay algorithms to predict neural data.
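
A minimal sketch of offline replay in the spirit of Dyna-Q is given below; the environment, the stored transitions and all parameters are invented for the example.

    # Illustrative sketch: the agent stores experienced transitions in a simple model and
    # replays them "offline" to propagate value without acting in the world.
    import random
    from collections import defaultdict

    alpha, gamma, n_replay = 0.1, 0.95, 50
    Q = defaultdict(float)   # Q[(state, action)] value table
    model = {}               # learned model: (state, action) -> (reward, next_state)

    def td_update(s, a, r, s_next, actions=(0, 1)):
        best_next = max(Q[(s_next, b)] for b in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

    # Two real experiences: start --(action 1)--> corridor (no reward), corridor --(action 0)--> goal (+1)
    for s, a, r, s_next in [("start", 1, 0.0, "corridor"), ("corridor", 0, 1.0, "goal")]:
        model[(s, a)] = (r, s_next)
        td_update(s, a, r, s_next)

    # Offline replay: resampling remembered transitions propagates the goal value back to "start"
    for _ in range(n_replay):
        (s, a), (r, s_next) = random.choice(list(model.items()))
        td_update(s, a, r, s_next)

    print(Q[("start", 1)], Q[("corridor", 0)])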

24-27. Block 6: Representation learning - the problem of deciding what information to keep and what to discard. Limitations of the Bayesian inference and RL accounts, and how to go beyond each. Predicting and measuring behaviour and neural activity using representation learning algorithms.
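
As a very simple example of keeping some dimensions of the data and discarding others, the sketch below applies PCA to a simulated population-activity matrix with planted low-dimensional structure; the simulation is entirely made up.

    # Illustrative sketch: PCA on simulated neural population activity.
    import numpy as np

    rng = np.random.default_rng(0)
    latents = rng.standard_normal((500, 2))            # 2 hidden factors over 500 time points
    mixing = rng.standard_normal((2, 40))              # how 40 "neurons" read out the factors
    activity = latents @ mixing + 0.1 * rng.standard_normal((500, 40))

    # PCA via the singular value decomposition of the centred data
    X = activity - activity.mean(axis=0)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    var_explained = S**2 / np.sum(S**2)

    print(var_explained[:5])                           # most variance lives in the first two components
    low_dim = X @ Vt[:2].T                             # the retained 2-D representation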

28. Student presentations. The most active frontiers in computational neuroscience today.

Requirements and grading:

  1. Group projects. In each block, students are randomly sorted into groups of approximately 3. Each group receives a common score for presenting a paper according to the guidelines (25% of the total score) and for a coding problem (25% of the total score). Each student's group-project scores are the averages of their 6 paper scores and 6 coding scores, respectively.
  2. Weekly questions. Each week, every student has to send in a question (10% of the total score). The questions can then be answered by the students online for answer points, which depend on the announced difficulty of the question (20% of the total score).
  3. Final presentation. To increase their score, students can choose to give a short presentation in the last week on a selected topic (20%).

Recommended literature / textbooks:

  • Griffiths, T. L., Chater, N., & Tenenbaum, J. B. (Eds.). (2024). Bayesian models of cognition: Reverse engineering the mind. MIT Press.
  • Carter, M., & Shieh, J. C. (2015). Guide to research techniques in neuroscience. Academic Press.

Instructors' bio:

Balázs B Ujfalussy is a group leader at the Institute of Experimental Medicine (KOKI) in Budapest. Previously, he studied biology for his MSc and did a neurobiology PhD at the Eötvös Loránd University, Budapest, in the computational neuroscience group of Péter Érdi. He then moved to the UK, where he was a postdoc with Máté Lengyel at the CBL, Dept. of Engineering, University of Cambridge, and then with Tiago Branco at the MRC LMB. After returning to Hungary he started working in the Laboratory of Neuronal Signalling, KOKI, with Judit Makara as an independent postdoc. He is interested in the biophysical basis of information processing in single neurons, in particular dendritic nonlinearities, and in the role of the hippocampus in navigation and planning.

Mihály Bányai is a staff scientist at the Central European University, interested in the theory of representation learning, in particular in how humans change their representations as they learn a task, and in machine learning algorithms that do the same efficiently. Previously he worked on meta-cognitive reinforcement learning at the Max Planck Institute for Biological Cybernetics in Tübingen, Germany, and on hierarchical Bayesian models of the visual cortex at the Wigner Institute in Budapest. He is also very interested in developing better ways to handle software development in an academic environment, and has been working on various related efforts.