(2018) Answering Visual What-If Questions: From Actions to Predicted Scene Descriptions.
Abstract
In-depth scene descriptions and question answering tasks have greatly increased the scope of today's definition of scene understanding. While such tasks are in principle open-ended, current formulations primarily focus on describing only the current state of the scenes under consideration. In this paper, we instead focus on future scene states that are additionally conditioned on actions. We pose this as a question answering task: given observations of the current scene and a question that includes a hypothetical action, an answer has to be given about the resulting future scene state. Our solution is a hybrid model that integrates a physics engine into a question answering architecture in order to anticipate future scene states resulting from the object-object interactions caused by an action. We present first results on this challenging new problem and compare against baselines, outperforming fully data-driven end-to-end learning approaches.
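To make the hybrid idea in the abstract concrete, the following is a minimal sketch, not the authors' implementation: the hypothetical action from a what-if question is applied to the observed scene, a forward simulation (here a toy 1-D stand-in for the physics engine) predicts the future state, and the answer is read off that predicted state. All names, the dynamics, the thresholds, and the question format are illustrative assumptions.

```python
# Illustrative sketch of the abstract's hybrid pipeline (assumed, simplified):
# observed scene + hypothetical action -> forward simulation -> answer about
# the predicted future scene state.

from dataclasses import dataclass


@dataclass
class Obj:
    name: str
    x: float          # position along the table surface (assumed 1-D for brevity)
    on_table: bool = True


TABLE_EDGE = 1.0      # assumed table half-length; beyond this an object falls off


def apply_action(scene, action):
    """Apply the hypothetical action named in the what-if question."""
    target, push = action              # e.g. ("cup", 0.3): push the cup to the right
    for obj in scene:
        if obj.name == target:
            obj.x += push              # instantaneous displacement stands in for a force
    return scene


def simulate(scene, steps=5):
    """Toy stand-in for the physics engine: propagate simple contact effects."""
    for _ in range(steps):
        for a in scene:
            for b in scene:
                # crude contact model: a nearby object gets shoved along
                if a is not b and a.on_table and b.on_table and abs(a.x - b.x) < 0.15:
                    b.x += 0.1 if b.x >= a.x else -0.1
        for obj in scene:
            if abs(obj.x) > TABLE_EDGE:
                obj.on_table = False   # predicted future state: fell off the table
    return scene


def answer(scene, queried_object):
    """Produce the answer about the predicted future scene state."""
    obj = next(o for o in scene if o.name == queried_object)
    return (f"the {obj.name} stays on the table" if obj.on_table
            else f"the {obj.name} falls off the table")


if __name__ == "__main__":
    # "What happens to the ball if the cup is pushed to the right?"
    scene = [Obj("cup", 0.6), Obj("ball", 0.95)]
    scene = simulate(apply_action(scene, ("cup", 0.3)))
    print(answer(scene, "ball"))       # -> "the ball falls off the table"
```

In the actual system a learned question-answering architecture would parse the question and produce the answer, and a real physics engine would perform the simulation; the sketch only shows how an action-conditioned rollout can supply the future state that the answer is read from.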
| Item Type: | Conference or Workshop Item (Paper) |
| --- | --- |
| Divisions: | Mario Fritz (MF) |
| Conference: | ECCV European Conference on Computer Vision |
| Depositing User: | Mario Fritz |
| Date Deposited: | 01 Feb 2019 16:54 |
| Last Modified: | 17 May 2021 09:41 |
| Primary Research Area: | NRA1: Trustworthy Information Processing |
| URI: | https://publications.cispa.saarland/id/eprint/2796 |