======================================================================
*Call for Papers - First Workshop on Affordances: Affordances in Vision for Cognitive Robotics*
(in conjunction with RSS 2014), July 13, 2014, Berkeley, USA
http://affordances.info/workshops/RSS.html
======================================================================

Based on the Gibsonian principle of defining objects by their function, "affordances" have been studied extensively by psychologists and visual perception researchers, resulting in numerous cognitive models. In recent years, these models have been increasingly revisited and adapted by computer vision and robotics researchers to build cognitive models of visual perception and behavioral algorithms. This workshop explores this nascent yet rapidly emerging field of affordance-based cognitive robotics while integrating the efforts and language of affordance communities not just in computer vision and robotics, but also in psychophysics and neurobiology, by creating an open affordance research forum, feature framework and ontology called AfNet (theaffordances.net). In particular, the workshop will focus on emerging trends in affordances and other human-centered function/action features that can be used to build computer vision and robotic applications. The workshop also features contributions from researchers working on traditional theories of affordances, especially from the point of view of psychophysics and neurobiology. Avenues for aiding research in these fields using techniques from computer vision and cognitive robotics will also be explored.
Primary topics addressed by the workshop include the following, among others:
- Affordances in visual perception models
- Affordances as visual primitives, common coding features and symbolic cognitive systems
- Affordances for object recognition, search, attention modulation, functional scene understanding/classification
- Object functionality analysis
- Affordances from appearance- and touch-based cues
- Haptic adjectives
- Functional-visual categories for transfer learning
- Actions and functions in object perception
- Human-object interactions and modeling
- Motion-capture data analysis for object categorization
- Affordances in human and robot grasping
- Robot behavior for affordance learning
- Execution of affordances on robots
- Affordances to address cognitive and domestic robot applications
- Affordance ontologies
- Psychophysics of affordances
- Neurobiological and cognitive models for affordances

The workshop also seeks to address key challenges in robotics with regard to functional form descriptions. While affordances describe the function that each object or entity affords, these in turn define the manipulation schema and interaction modes that robots need to use to work with objects. These functional features, ascertained through vision, haptics and other sensory information, also help in categorizing objects, task planning, grasp planning, scene understanding and a number of other robotic tasks. Understanding the various challenges in the field and building a common language and framework for communication across these varied communities are the key goals of the proposed workshop. Through the course of the workshop, we also envisage the establishment of a working group for AfNet. An initial version is available online at www.theaffordances.net. We hope the workshop will serve to foster greater collaboration between the affordance communities in the various fields.
*Paper Submissions*

Paper contributions to the workshop are solicited in four different formats:
- *Conceptual papers* (1 page): Authors are invited to submit original ideas on approaches to address specific problems in the targeted areas of the workshop. While a clear presentation of the proposed approach and the expected results is essential, specifics of implementation and evaluation are outside the scope of this format. This format is intended for the exchange and evaluation of ideas prior to implementation/experimental work, as well as for opening up avenues of collaboration.
- *Design papers* (3 pages): Authors submitting design papers are required to address key issues regarding the problem considered, with detailed algorithms and preliminary or proof-of-concept results. Detailed evaluations and analyses are outside the scope of this format. This format is intended for late-breaking and work-in-progress results, as well as for fostering collaboration between research and engineering groups.
- *Experimental papers* (3 pages): Experimental papers are required to present results of experiments and evaluations of previously published algorithms or design frameworks. Details of implementation and exhaustive test case analyses are key to this format. These papers are geared toward benchmarking and standardizing previously known approaches.
- *Full papers* (5 pages): Full papers must be self-contained contributions with a detailed treatment of the problem statement, related work, design methodology, algorithm, test bed, evaluation, comparative analysis, results and future scope of work.

Submission of original and unpublished work is highly encouraged. Since the goal of this workshop is to bring together the various affordance communities, extended versions or summary reports of recent research published elsewhere, adapted to the goals of the workshop, will also be accepted. These papers are required to clearly state their relevance to the workshop and the necessary adaptation.
The program will be composed of oral as well as Pecha-Kucha style presentations. Each contribution will be reviewed by three reviewers through a single-blind review process. Paper formatting should follow the RSS formatting guidelines (Templates: Word <http://www.roboticsconference.org/paper-template-word.zip> and LaTeX <http://www.roboticsconference.org/paper-template-latex.tar.gz>). All contributions are to be submitted in PDF format via the Microsoft Conference Management Toolkit <https://cmt.research.microsoft.com/A2014/Default.aspx>. Appendices and supplementary text can be submitted as a second PDF; other materials such as videos (preferably as links on Vimeo or YouTube) can be submitted as a zipped file. All papers are expected to be self-contained, and supplementary materials are not guaranteed to be reviewed.

Please adhere to the following strict deadlines. In addition to direct acceptance, early submissions may be conditionally accepted, in which case a revised version of the paper, addressing reviewer comments, must be submitted prior to the late submission deadline. The final decision on such conditionally accepted papers will be announced along with the decisions for the late submissions.

*Important Dates*
- Initial submissions (Early): 23:59:59 PDT May 5, 2014
- Notification of acceptance (Early submissions): May 15, 2014
- Initial submissions (Late): 23:59:59 PDT May 27, 2014
- Notification of acceptance (Late submissions): June 5, 2014
- Submission of publication-ready version: June 10, 2014
- Workshop date: July 13, 2014

*Organizers*
Karthik Mahesh Varadarajan <http://www.karthikmahesh.com/> (varadarajan(at)acin.tuwien.ac.at), TU Wien
Markus Vincze <http://www.acin.tuwien.ac.at/index.php?id=231&L=1> (vincze(at)tuwien.ac.at), TU Wien
Trevor Darrell <http://www.eecs.berkeley.edu/~trevor/> (trevor(at)eecs.berkeley.edu), UC Berkeley
Juergen Gall <http://www.vision.ee.ethz.ch/~gallju/> (gall(at)iai.uni-bonn.de), Univ. Bonn

*Speakers and Participants* (to be updated)
Abhinav Gupta (Affordances in Computer Vision), Carnegie Mellon University
Ashutosh Saxena (Affordances in Cognitive Robotics), Cornell University
Lisa Oakes (Psychophysics of Affordances)*, UC Davis
TBA (Neurobiology of Affordances)*

*Program Committee*
Irving Biederman (USC)
Aude Oliva (MIT)
Fei-Fei Li (Stanford University)
Martha Teghtsoonian (Smith College)
Derek Hoiem (UIUC)
Barbara Caputo (Univ. of Rome, IDIAP)
Song-Chun Zhu (UCLA)
Antonis Argyros (FORTH)
Tamim Asfour (KIT)
Michael Beetz (TUM)
Norbert Krueger (Univ. of Southern Denmark)
Sven Dickinson (Univ. of Toronto)
Diane Pecher (Erasmus Univ. Rotterdam)
Aaron Bobick (Georgia Tech)
Jason Corso (Univ. at Buffalo)
Juan Carlos Niebles (Universidad del Norte)
Tamara Berg (UNC Chapel Hill)
Moritz Tenorth (Univ. Bremen)
Dejan Pangercic (Robert Bosch)
Roozbeh Mottaghi (Stanford)
Alireza Fathi (Stanford)
Xiaofeng Ren (Amazon)
David Fouhey (CMU)
Tucker Hermans (Georgia Tech)
Tian Lan (Stanford)
Amir Roshan Zamir (UCF)
Hamed Pirsiavash (MIT)
Walterio Mayol-Cuevas (Univ. of Bristol)