Our workshop aims to explore the emergence of similar representations across diverse neural models, both artificial and biological, when they are exposed to similar stimuli. We will investigate why, and under which conditions, this phenomenon occurs, how it can be measured, and the potential for unifying these representations into a shared framework. Additionally, we will delve into exciting applications such as model merging, stitching, and reusing independently trained modules.
Our primary objective is to foster collaboration and idea exchange among Machine Learning, Neuroscience, and Cognitive Science researchers. We aim to create a platform for interdisciplinary discussions and collaborations by bringing together experts from diverse backgrounds. Check our Call For Papers below.
For any additional information check our website, Twitter profile, Slack workspace, or contact us at unireps.organizers@gmail.com.
Call For Papers
Neural models, whether biological or artificial, tend to learn similar representations when exposed to similar stimuli. This phenomenon has been observed in various scenarios, e.g., across different individuals exposed to the same stimulus, or across different initializations of the same neural architecture. Similar representations also emerge when data is acquired from multiple modalities (e.g., text and image representations of the same entity) or when observations in a single modality are acquired under different conditions (e.g., multiview learning). The emergence of these similar representations has sparked interest in Neuroscience, Artificial Intelligence, and Cognitive Science. This workshop aims to build a unified view of this topic and to facilitate the exchange of ideas and insights across these fields, focusing on three key points:
When: Understanding the patterns by which these similarities emerge in different neural models and developing methods to measure them (a minimal measurement sketch follows this list).
Why: Investigating the underlying causes of these similarities in neural representations, considering both artificial and biological models.
What for: Exploring and showcasing applications in modular deep learning, including model merging, reuse, stitching, efficient strategies for fine-tuning, and knowledge transfer between models and across modalities.
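To make the measurement question above concrete, one widely used similarity measure is linear Centered Kernel Alignment (CKA; Kornblith et al., 2019). The following is a minimal NumPy sketch, not part of the workshop materials: the matrices, shapes, and example models are illustrative assumptions.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between representations X (n, d1) and Y (n, d2):
    rows index the same n stimuli, columns index features/neurons."""
    # Center each feature across stimuli.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # ||Y^T X||_F^2 normalized by ||X^T X||_F * ||Y^T Y||_F, in [0, 1].
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))                    # activations of a hypothetical model A
Q, _ = np.linalg.qr(rng.normal(size=(64, 64)))     # a random orthogonal transform
print(linear_cka(X, X @ Q))                        # ~1.0: CKA ignores rotations of the space
print(linear_cka(X, rng.normal(size=(1000, 64))))  # small: unrelated representations
```

Kernel variants and related tools such as representational similarity analysis follow the same recipe: compare the stimulus-by-stimulus similarity structure of two systems rather than their raw coordinates.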
A non-exhaustive list of the preferred topics includes:
Model merging, stitching, and reuse (a toy stitching sketch follows this list)
Representational alignment
Identifiability in neural models
Symmetry and equivariance in NNs
Learning dynamics
Disentangled representations
Multiview representation learning
Representation similarity analysis
Linear mode connectivity
Similarity-based learning
Multimodal learning
Similarity measures in NNs
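As a toy illustration of the model stitching topic above, the sketch below fits a linear stitching layer that maps the representation space of one frozen model into that of another. The two "encoders", all shapes, and the closed-form least-squares fit are illustrative assumptions, not a prescribed method.

```python
import numpy as np

rng = np.random.default_rng(1)
W_a = rng.normal(size=(32, 16))  # weights of a hypothetical frozen model A
W_b = rng.normal(size=(32, 16))  # weights of a hypothetical frozen model B

def encode_a(x):
    """Frozen representation from model A."""
    return np.tanh(x @ W_a)

def encode_b(x):
    """Frozen representation from model B."""
    return np.tanh(x @ W_b)

# Fit a linear stitching layer S so that encode_a(x) @ S approximates
# encode_b(x), using least squares on a batch of probe inputs.
X = rng.normal(size=(512, 32))
S, *_ = np.linalg.lstsq(encode_a(X), encode_b(X), rcond=None)

# Evaluate on held-out inputs: B's downstream head could now consume
# A's stitched features in place of its own.
X_test = rng.normal(size=(64, 32))
err = np.linalg.norm(encode_a(X_test) @ S - encode_b(X_test))
err /= np.linalg.norm(encode_b(X_test))
print(f"relative stitching error: {err:.3f}")
```

In practice, stitching layers are typically trained with gradient descent on a downstream loss while both networks stay frozen; a low stitching error is often taken as evidence that two representations carry similar information.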
Paper submission deadline: Oct 04, 2023 – Submit on OpenReview
Final decisions to authors: Oct 27, 2023
Submissions to the workshop are organized in two tracks, both requiring novel and unpublished results: an Extended Abstract track, for early-stage results, insightful negative findings, and opinion pieces; and a Proceedings track, for complete papers to be published in a dedicated workshop proceedings volume. Both tracks will be included in the workshop poster session, giving authors an opportunity to present their work, and a subset of the submissions will be selected for a spotlight talk session during the workshop.