Prof. Jinwoo Choi's Lab (Vision and Learning Lab), ECCV 2024 Oral Presentation
A paper by Prof. Jinwoo Choi's research team in the Vision and Learning Lab, including Kyungho Bae (M.S. graduate, Department of Computer Science and Engineering), Geo Ahn (M.S. student), and Youngrae Kim (M.S. graduate, KAIST), has been accepted as an Oral Presentation at the European Conference on Computer Vision (ECCV) 2024, one of the most prestigious conferences in the field of computer vision. This is the first Oral paper at a top computer vision conference (CVPR/ICCV/ECCV) from the Department of Computer Science and Engineering at Kyung Hee University. The paper will be presented in Milan, Italy, in fall 2024.
[Paper Information]
Title: DEVIAS: Learning Disentangled Video Representations of Action and Scene
Authors: Kyungho Bae†, Geo Ahn†, Youngrae Kim†, Jinwoo Choi* (†: co-first authors, *: corresponding author)
Venue: European Conference on Computer Vision (ECCV) 2024
TL;DR
We propose a disentangled action and scene representation learning method that can accurately recognize both action and scene in both in-context and out-of-context scenarios.
Abstract
Video recognition models often learn scene-biased action representations due to the spurious correlation between actions and scenes in the training data. Such models show poor performance when the test data consists of videos with unseen action-scene combinations. Although scene-debiased action recognition models might address this issue, they often overlook valuable scene information in the data. To address this challenge, we propose to learn Disentangled VIdeo representations of Action and Scene (DEVIAS) for more holistic video understanding. We propose an encoder-decoder architecture to learn disentangled action and scene representations with a single model. The architecture consists of a disentangling encoder (DE), an action mask decoder (AMD), and a prediction head. The key to achieving the disentanglement is employing both DE and AMD during training. The DE uses the slot attention mechanism to learn disentangled action and scene representations. For further disentanglement, the AMD learns to predict action masks given an action slot. With the resulting disentangled representations, we achieve robust performance across diverse scenarios, including both seen and unseen action-scene combinations. We rigorously validate the proposed method on the UCF-101, Kinetics-400, and HVU datasets for the seen, and on the SCUBA, HAT, and HVU datasets for the unseen action-scene combination scenarios. Furthermore, DEVIAS provides flexibility to adjust the emphasis on action or scene information depending on dataset characteristics for downstream tasks. DEVIAS shows favorable performance in various downstream tasks: Diving48, Something-Something-V2, UCF-101, and ActivityNet.
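To illustrate the slot attention step that the disentangling encoder (DE) relies on, here is a minimal sketch, not the authors' implementation: it assumes two learnable slots (one intended for action, one for scene) that iteratively compete for video patch tokens. Dimensions, iteration count, and module names are illustrative assumptions, not values from the paper.

```python
# Minimal slot-attention sketch (assumed setup, not the DEVIAS release code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SlotAttention(nn.Module):
    def __init__(self, num_slots: int = 2, dim: int = 768, iters: int = 3):
        super().__init__()
        self.iters, self.scale = iters, dim ** -0.5
        # Learnable initial slots: slot 0 ~ action, slot 1 ~ scene (assumption).
        self.slots_init = nn.Parameter(torch.randn(num_slots, dim) * 0.02)
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.gru = nn.GRUCell(dim, dim)
        self.norm_in = nn.LayerNorm(dim)
        self.norm_slots = nn.LayerNorm(dim)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, D) patch tokens from a video backbone.
        B, N, D = tokens.shape
        tokens = self.norm_in(tokens)
        k, v = self.to_k(tokens), self.to_v(tokens)
        slots = self.slots_init.unsqueeze(0).expand(B, -1, -1)
        for _ in range(self.iters):
            q = self.to_q(self.norm_slots(slots))
            # Softmax over the slot dimension: slots compete for each token.
            attn = F.softmax(torch.einsum('bsd,bnd->bsn', q, k) * self.scale, dim=1)
            attn = attn / attn.sum(dim=-1, keepdim=True).clamp(min=1e-8)
            updates = torch.einsum('bsn,bnd->bsd', attn, v)
            slots = self.gru(updates.reshape(-1, D),
                             slots.reshape(-1, D)).view(B, -1, D)
        return slots  # (B, 2, D): disentangled action and scene slots

if __name__ == "__main__":
    tokens = torch.randn(4, 196, 768)                     # dummy patch tokens
    action_slot, scene_slot = SlotAttention()(tokens).unbind(dim=1)
    print(action_slot.shape, scene_slot.shape)            # (4, 768) each
```

In the paper, the resulting action slot would additionally be fed to the action mask decoder (AMD) during training to predict action masks, which further encourages the two slots to separate; that part is omitted from this sketch.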
arXiv pre-print: https://arxiv.org/abs/2312.00826
2024.08.13