Call for papers: Biological Cybernetics Special Issue: What can Computer Vision learn from Visual Neuroscience?
Dear Colleagues,

Please consider submitting your work to this special issue of Biological Cybernetics.

Submission deadline: September 30, 2022
https://www.springer.com/journal/422/updates/20421374

This special issue invites original research and review articles on topics in biological vision that can potentially benefit computer vision systems. The following is a non-exhaustive list of such topics:

• Active vision's role in visual search, scene understanding, social interactions, etc.
• Learning in the visual system. Learning in biology is continual, few-shot, and adversarially robust.
• The roles of recurrent and top-down connections in the visual cortex.
• Spike-based spatiotemporal processing and its implications for neuromorphic vision.
• Motion perception in dynamic environments.
• Neural coding schemes in the visual system (e.g., sparse coding, predictive coding, and temporal coding).
• The roles of attention mechanisms in biological vision.

How to submit: Please submit your manuscript through the journal's online submission system, following the Biological Cybernetics submission guidelines.

Guest Editors:
Kexin Chen - Department of Cognitive Sciences, University of California, Irvine
Hirak J. Kashyap - Department of Computer Science, University of California, Irvine
Jeffrey L. Krichmar - Department of Cognitive Sciences, Department of Computer Science, University of California, Irvine
Xiumin Li - College of Automation, Chongqing University, Chongqing, China

If you have any questions, please reach out to one of the guest editors.

Best regards,

Jeff Krichmar
Department of Cognitive Sciences
2328 Social & Behavioral Sciences Gateway
University of California, Irvine
Irvine, CA 92697-5100
jkrichma@uci.edu
http://www.socsci.uci.edu/~jkrichma