Date: 4 to 8 April 2022
Location: Groningen, the Netherlands
Fee: € 300 (late fee after 7 March: € 350)
www.cognitive-modeling.com/springschool

After an enforced two-year COVID break, we are excited to announce the fifth Groningen Spring School on Cognitive Modeling (4 to 8 April 2022), with a great lineup of speakers. The Spring School will cover four different modeling paradigms: ACT-R, Nengo, PRIMs, and discriminative learning. It thereby offers a unique opportunity to learn the relative strengths and weaknesses of these approaches. In addition, this year we are offering a lecture series on dynamical systems, which should interest anyone looking to model cognitive dynamics at some level of abstraction. For those interested in neuromorphic computing, we recommend this lecture series as an excellent combination with Nengo.

The first day will provide an introduction to all five topics. From day two onwards, spring school students will be asked to commit to one topic, for which they will attend lectures as well as hands-on tutorials to gain practical experience with the paradigm. In addition, students can sign up for a second topic, for which they will attend lectures only. All students are invited to join a series of plenary research talks on the different paradigms.

Please feel free to forward this information to anyone who might be interested in the Spring School.

The Spring School team

PS: Please note that, due to its interactive character, the Spring School will be held as an in-person event only; we will not offer a hybrid or online teaching option. Should the Spring School nevertheless need to be cancelled at the last minute, participants will receive a full refund.

______________

ACT-R
Teachers: Jelmer Borst, Stephen Jones, & Katja Mehlhorn (University of Groningen)
Website: http://act-r.psy.cmu.edu
ACT-R is a high-level cognitive theory and simulation system for developing cognitive models of tasks ranging from simple reaction-time experiments to driving a car, learning algebra, and air traffic control. ACT-R can be used to develop process models of a task at a symbolic level. Participants will follow a compressed five-day version of the traditional summer school curriculum. We will also cover the connection between ACT-R and fMRI.

Nengo
Teachers: Terry Stewart & Andreas Stöckel (University of Waterloo)
Website: http://www.nengo.ca

Nengo is a toolkit for converting high-level cognitive theories into low-level spiking-neuron implementations. In this way, aspects of model performance such as response accuracy and reaction times emerge as a consequence of neural parameters such as neurotransmitter time constants. It has been used to model adaptive motor control, visual attention, serial-list memory, reinforcement learning, the Tower of Hanoi, and fluid intelligence. Participants will learn to construct these kinds of models, starting with generic tasks such as representing values and positions, and ending with full production-like systems. Special emphasis will be placed on extracting various forms of data from a model, so that they can be compared with experimental data.

PRIMs
Teacher: Niels Taatgen (University of Groningen)
Website: https://www.ai.rug.nl/~niels/prims/index.html

How do people handle and prioritize multiple tasks? How can we learn something in the context of one task and partially benefit from it in another? The goal of PRIMs is to cross the artificial boundary that most cognitive architectures have imposed on themselves by studying single tasks. It has mechanisms to model the transfer of cognitive skills and the competition between multiple goals. In the tutorial we will look at how PRIMs can model phenomena of cognitive transfer and cognitive training, and how multiple goals compete for priority in models of distraction.
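The core idea behind Nengo described above, that populations of tuned neurons represent high-level quantities, and that decoders recover those quantities from neural activity, can be illustrated with a toy sketch. Note that this is not Nengo's actual API: rate neurons stand in for spiking ones, and every parameter (population size, gains, biases) below is an invented illustration, assuming only numpy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy population of rectified-linear "neurons" encoding a scalar x.
# Each neuron has a preferred direction (+1 or -1), a gain, and a bias.
n_neurons = 50
encoders = rng.choice([-1.0, 1.0], size=n_neurons)
gains = rng.uniform(0.5, 2.0, size=n_neurons)
biases = rng.uniform(-1.0, 1.0, size=n_neurons)

def rates(x):
    """Firing rates of the whole population for scalar input x."""
    return np.maximum(0.0, gains * (encoders * x) + biases)

# Sample the representable range and solve for linear decoders d that
# minimise ||A d - x||^2: the value is "read out" from neural activity.
xs = np.linspace(-1.0, 1.0, 101)
A = np.array([rates(x) for x in xs])        # activity matrix (101, 50)
d = np.linalg.lstsq(A, xs, rcond=None)[0]   # least-squares decoders

x_hat = A @ d                               # decoded estimate of x
print(np.max(np.abs(x_hat - xs)))           # decoding error (small)
```

In Nengo itself this decoding step is handled automatically when connecting ensembles; the sketch only shows why response properties of the model emerge from neural parameters such as gains and tuning.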
Discriminative learning and the lexicon: NDL and LDL
Teachers: Harald Baayen, Yu-Ying Chuang, & Maria Heitmeier (University of Tuebingen)

NDL and LDL are simple computational algorithms for lexical learning and lexical processing. Both assume that learning is discriminative, driven by prediction error, and that it is this error which calibrates the association strengths between input and output representations. Words’ forms and meanings are represented by numeric vectors, and mappings between forms and meanings are set up: for comprehension, form vectors predict meaning vectors; for production, meaning vectors map onto form vectors. These mappings can be learned incrementally, approximating how children learn the words of their language. Alternatively, optimal mappings representing the end state of learning can be estimated. The NDL and LDL algorithms are incorporated in a computational theory of the mental lexicon, the ‘discriminative lexicon’. The model shows good performance both with respect to production and comprehension accuracy and for predicting aspects of lexical processing, including morphological processing, across a wide range of experiments. Since, mathematically, NDL and LDL implement multivariate multiple regression, the ‘discriminative lexicon’ provides a cognitively motivated statistical modeling approach to lexical processing.

In this course, we will show how the comprehension and production of morphologically complex words can be modeled successfully with the ‘discriminative lexicon’ model for a range of languages (Hebrew, Maltese, English, German, Dutch, Mandarin Chinese, Korean, Kinyarwanda, Estonian, and Finnish). We will discuss the kinds of form and meaning representations that can be set up, including form features derived from the speech signal for auditory comprehension and semantic features grounded in distributional semantics.
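The two learning regimes just described, incremental error-driven updating versus estimating the end state directly, can be made concrete with a minimal numpy sketch. This is not the course software (JudiLing is a Julia package), and the cue vectors, semantic vectors, dimensions, and learning rate below are all toy assumptions: a Widrow-Hoff-style delta rule learns a form-to-meaning mapping from prediction errors, while the end state is the multivariate multiple-regression (least-squares) solution.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy lexicon: binary form-cue vectors (rows of C) map onto
# real-valued semantic vectors (rows of S).
n_words, n_cues, n_dims = 20, 30, 8
C = rng.integers(0, 2, size=(n_words, n_cues)).astype(float)
S = rng.normal(size=(n_words, n_dims))

# Incremental learning: on each learning event, one word is
# encountered and the mapping F is nudged by the prediction error
# (Widrow-Hoff / delta rule).
F = np.zeros((n_cues, n_dims))
eta = 0.01                              # toy learning rate
for _ in range(2000):
    i = rng.integers(n_words)           # a random learning event
    error = S[i] - C[i] @ F             # prediction error for word i
    F += eta * np.outer(C[i], error)    # error-driven update

# End state of learning: the optimal mapping is the least-squares
# (multivariate multiple regression) solution of C F = S.
F_opt = np.linalg.lstsq(C, S, rcond=None)[0]

mse_inc = np.mean((C @ F - S) ** 2)     # after incremental learning
mse_opt = np.mean((C @ F_opt - S) ** 2) # at the regression end state
print(mse_inc, mse_opt)
```

With enough learning events the incremental mapping approaches the regression end state, which is the sense in which these models provide a statistical (regression-based) account of lexical learning.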
Furthermore, we will provide a survey of the measures that can be derived from the model’s mappings to predict empirical response variables such as reaction times in primed and unprimed lexical decision, spoken-word duration, and tongue movements during speaking. Finally, participants will receive some training in using the JudiLing package for Julia, which provides optimized code for implementing and evaluating the components of a ‘discriminative lexicon’ for a given language.

Dynamical Systems: a Navigation Guide
Teacher: Herbert Jaeger (University of Groningen)

This lecture series gives a broad overview of the zillions of formal models and methods invented by mathematicians and physicists for describing “dynamical systems”. Covered topics include: finite-state automata (with and without input; deterministic, non-deterministic, and probabilistic), hidden Markov models and partially observable Markov decision processes, cellular automata, dynamical Bayesian networks, iterated function systems, ordinary differential equations, stochastic differential equations, delay differential equations, partial differential equations, (neural) field equations, Takens’ theorem, the engineering view on “signals”, describing sequential data by grammars, the Chomsky hierarchy, exponential and power-law long-range interactions, attractors, structural stability, bifurcations, phase transitions, topological dynamics, and nonautonomous attractor concepts. In the lectures I work out the connecting lines between these different models and methods, aiming to draw the “big picture”.

—
Thomas Tiotto, PhD Candidate
CogniGron - Groningen Cognitive Systems and Materials
University of Groningen
Nijenborgh 9, 9747 AG Groningen, The Netherlands
Email: t.f.tiotto@rug.nl
Profile: https://www.rug.nl/staff/t.f.tiotto/
Office: 5161 0318