About

The workshop on COGNITIVE VISION is organised as part of IJCAI-ECAI 2018, the 27th International Joint Conference on Artificial Intelligence and the 23rd European Conference on Artificial Intelligence (http://www.ijcai-18.org), to be held in Stockholm, Sweden, on Saturday, July 14, 2018 (Morning Workshop / B3).


WORKSHOP CHAIRS

Mehul Bhatt
Örebro University, Sweden
University of Bremen, Germany

Alessandra Russo
Imperial College London, United Kingdom

Parisa Kordjamshidi
Tulane University, United States

Call for Papers

The workshop on COGNITIVE VISION solicits contributions addressing computational vision and perception at the interface of language, logic, cognition, and artificial intelligence. The workshop brings together a unique combination of academics and research methodologies encompassing AI, Cognition, and Interaction.

The workshop will feature invited and contributed research advancing the practice of Cognitive Vision, particularly from the viewpoint of theories and methods developed within the fields of:

  • Artificial Intelligence
  • Computer Vision
  • Spatial Cognition and Computation
  • Cognitive Science and Psychology
  • Visual Perception
  • Cognitive Linguistics

Application domains being addressed include, but are not limited to:

  • autonomous driving
  • cognitive robotics
  • vision for UAVs
  • visual art, fashion, cultural heritage
  • vision in biology (e.g., animal, plant)
  • vision and VR
  • vision for social science, humanities
  • vision for psychology, human behaviour studies
  • social signal processing, social media
  • remote sensing, GIS
  • medical imaging

Technical Focus. The principal emphasis of the workshop is on the integration of vision and artificial intelligence from the viewpoints of embodied perception, interaction, and autonomous control. In addition to basic research questions, the workshop addresses diverse application areas where, for instance, the processing and semantic interpretation of (potentially large volumes of) highly dynamic visuo-spatial imagery is central: autonomous systems, cognitive robotics, medical & biological computing, social media, cultural heritage & art, and psychology and behavioural research domains where data-centred analytical methods are gaining momentum. Particular themes of high interest solicited by the workshop include:

  • methodological integrations between Vision and AI
  • declarative representation and reasoning about spatio-temporal dynamics
  • deep semantics and explainable visual computing (e.g., about space and motion)
  • vision and computational models of narrative
  • cognitive vision and multimodality (e.g., multimodal semantic interpretation)
  • visual perception (e.g., high-level event perception, eye-tracking)
  • applications of visual sensemaking for social science, humanities, and human behaviour studies

The workshop emphasises application areas where explainability and semantic interpretation of dynamic visuo-spatial imagery are central, e.g., commonsense scene understanding; vision for robotics and HRI; narrative interpretation from the viewpoints of visuo-auditory perception & digital media; and sensemaking from (possibly multimodal) human-behaviour data in which the principal component is visual imagery.

We welcome contributions, position statements, and perspectives addressing the workshop themes from formal, cognitive, computational, engineering, empirical, psychological, and philosophical viewpoints. Indicative topics include:

  • deep visuo-spatial semantics
  • commonsense scene understanding
  • semantic question-answering with image, video, point-clouds
  • concept learning and inference from visual stimuli
  • explainable visual interpretation
  • learning relational knowledge from dynamic visuo-spatial stimuli
  • knowledge-based vision systems
  • ontological modelling for scene semantics
  • visual analysis of sketches
  • motion representation (e.g., for embodied control)
  • action, anticipation, and visual stimuli
  • vision, AI, and eye-tracking
  • high-level visual perception and eye-tracking
  • egocentric vision, perception
  • declarative reasoning about space and motion
  • computational models of narratives
  • narrative models for storytelling (from stimuli)
  • vision and linguistic summarization
    (e.g., of social interaction, human behavior)
  • visual perception and embodiment research
    (e.g., involving eye-tracking)
  • biological and artificial vision
  • biological motion
  • visuo-auditory perception
  • multimodal media annotation tools

Submission Requirements. Submitted papers must be formatted according to IJCAI guidelines (details here: IJCAI-ECAI 18 guidelines); in summary, all contributions should be no longer than 7 single-spaced pages (6 pages max for content, 1 page max for references). Contributions may be submitted as: (1) technical papers; (2) position statements; or (3) work in progress. Submissions should be made electronically, as PDF documents, via the paper submission site (EasyChair).

Important Dates in 2018

  • Abstract and title registration (optional): anytime before the submission deadline
  • Submissions: May 20
  • Notification: June 4
  • Camera Ready: June 20
  • Workshop Date: July 14 (Morning Workshop / B3)
  • Conference: July 13 - 19


WORKSHOP COMMITTEE

Workshop Chairs:

  • Mehul Bhatt, Örebro University, Sweden / University of Bremen, Germany
  • Alessandra Russo, Imperial College London, United Kingdom
  • Parisa Kordjamshidi, Tulane University, United States

Program Committee:

  • To be announced