MULTIMODAL SENSEMAKING
The training school on ``Representation Mediated Multimodality'' provides a consolidated perspective on the theoretical, methodological, and applied understanding of representation-mediated multimodal sensemaking at the interface of language, knowledge representation and reasoning, and visuo-auditory computing. The intended purposes addressed in the school encompass diverse operative needs such as explainable multimodal commonsense understanding, multimodal generation and synthesis for communication, multimodal summarisation, decision support guided by multimodal interpretation, adaptation and autonomy, and analytical visualisation.
The principal training school programme, consisting of keynotes, invited talks, and tutorials, will address topics such as:
- Explainability and transparency in multimodal models
- Neurosymbolism: the interaction between symbolic and sub-symbolic representations in models
- Commonsense knowledge representation and reasoning, and its role in multimodal models
- Complementarity / redundancy among data sources or modalities
- Situated reasoning, e.g., for language generation and decision-making
The training school will focus on human-machine interaction considerations in selected application areas of emerging societal significance, such as autonomous systems, cognitive robotics, and multilingual language technologies.