2024 Brain-Score Benchmarking Competition -- Deadline coming up
The Brain-Score Benchmarking Competition<https://www.brain-score.org/competition/> aims to find behavioral and neural benchmarks on which the predictions of our current top models break down. We are entering the final week of the 2024 competition - please submit your benchmarks to highlight the flaws in current state-of-the-art models of vision (and win prizes)!

In 2022, the first Brain-Score Competition led to new and improved models of primate vision that predicted existing benchmarks reasonably well. In this year's Brain-Score Competition 2024, we are closing the loop on testing model predictions by rewarding the benchmarks that show where models are least aligned with the primate visual ventral stream. The competition is open to the scientific community, and we provide infrastructure to evaluate a variety of models on new behavioral and neural benchmarks in a standardized and unified manner. In addition, we incentivize benchmark submissions by providing visibility to participants and a $10,000 prize pool<https://www.brain-score.org/competition/#tracks> for the winning benchmarks.

Submissions<https://www.brain-score.org/profile/vision/> are open until June 30, 2024. For regular updates related to the competition, please follow Brain-Score on Twitter<https://twitter.com/brain_score> and join our community<https://www.brain-score.org/community>. Good luck!

Competition Tracks

* Behavioral: This track rewards benchmarks that showcase model shortcomings in predicting behavior (human or non-human primate). The winning submissions will be the behavioral benchmarks with the lowest model scores, mean-averaged over all models, i.e. as close as possible to 0. Benchmarks can target any behavioral task that the models engage in (labeling, class probabilities, odd-one-out; see the model interface<https://brain-score.readthedocs.io/en/latest/modules/model_interface.html#brainscore_vision.model_interface.BrainModel.Task> and the sketch after this list). 1st prize: $3,000; 2nd prize: $2,000.
* Neural: This track rewards benchmarks that showcase model shortcomings in predicting neural activity across the primate visual ventral stream. The winning submissions will be the neural benchmarks with the lowest model scores, mean-averaged over all models, i.e. as close as possible to 0. Benchmarks can target any region(s) in the visual ventral stream: V1, V2, V4, IT (see the model interface<https://brain-score.readthedocs.io/en/latest/modules/model_interface.html#brainscore_vision.model_interface.BrainModel.RecordingTarget>). Brain recordings can be from human or non-human primates. 1st prize: $3,000; 2nd prize: $2,000.
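For prospective benchmark authors, both tracks score submitted models through the BrainModel interface linked above: a behavioral benchmark puts the model into a task mode (e.g. labeling) and collects its responses to the benchmark stimuli, while a neural benchmark records model activity from a target region. The sketch below illustrates that flow. It is a minimal, hypothetical example, not the competition's scoring code: the stimuli, ground-truth labels, fitting stimuli, recording time window, and the plain-accuracy metric are placeholder assumptions, whereas official benchmarks are packaged as Brain-Score plugins and return ceiling-normalized Score objects.

import numpy as np
# Module path taken from the model-interface docs linked above.
from brainscore_vision.model_interface import BrainModel


def score_labeling_behavior(candidate: BrainModel, stimuli, fitting_stimuli, true_labels):
    """Hypothetical behavioral-track sketch: ask the candidate model to label
    stimuli and compare against ground truth. All arguments are placeholders
    for a packaged StimulusSet and behavioral assembly."""
    # Put the model into labeling mode; the fitting stimuli let it map its
    # features onto the benchmark's label space.
    candidate.start_task(BrainModel.Task.label, fitting_stimuli)
    # Present the benchmark stimuli and collect the model's predicted labels.
    predictions = candidate.look_at(stimuli)
    # Placeholder metric: raw accuracy. Real Brain-Score benchmarks compare
    # assemblies with dedicated metrics and normalize by a ceiling.
    return float(np.mean(np.asarray(predictions) == np.asarray(true_labels)))


def record_it_responses(candidate: BrainModel, stimuli):
    """Hypothetical neural-track sketch: record IT-like model responses for
    later comparison against a neural assembly (e.g. via regression-based
    neural predictivity). The 70-170 ms time bin is an assumed example."""
    candidate.start_recording(BrainModel.RecordingTarget.IT, time_bins=[(70, 170)])
    return candidate.look_at(stimuli)

In a packaged benchmark, this logic would live inside the benchmark's call on the candidate model, and the returned responses would be compared to the benchmark's primate data with a similarity metric, so that lower scores indicate the model-brain misalignment the competition is looking for.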