Prof. Jinwoo Choi's lab (Vision and Learning Lab): Video LLM paper selected for a Highlight Presentation at CVPR 2025
A paper by Kyungho Bae, a master's graduate of the Department of Computer Science and Engineering from Prof. Jinwoo Choi's lab (Vision and Learning Lab), written during his internship at LG AI Research, has been selected as a Highlight at the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2025, the world's most prestigious conference in the field of computer vision. The paper will be presented in Nashville, USA, in the summer of 2025.
[Paper Information]
Title: MASH-VLM: Mitigating Action-Scene Hallucination in Video-LLMs through Disentangled Spatial-Temporal Representations
Authors: Kyungho Bae, Jinhyung Kim, Sihaeng Lee, Soonyoung Lee, Gunhee Lee*, Jinwoo Choi* (*: corresponding authors)
Venue: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2025
TL;DR
To mitigate action-scene hallucination in Video-LLMs, we propose a disentangled spatio-temporal representation learning method.
Abstract
In this work, we tackle action-scene hallucination in Video Large Language Models (Video-LLMs), where models incorrectly predict actions based on the scene context or scenes based on observed actions. We observe that existing Video-LLMs often suffer from action-scene hallucination due to two main factors. First, existing Video-LLMs intermingle spatial and temporal features by applying an attention operation across all tokens. Second, they use the standard Rotary Position Embedding (RoPE), which causes the text tokens to overemphasize certain types of tokens depending on their sequential orders. To address these issues, we introduce MASH-VLM, Mitigating Action-Scene Hallucination in Video-LLMs through disentangled spatial-temporal representations. Our approach includes two key innovations: (1) DST-attention, a novel attention mechanism that disentangles the spatial and temporal tokens within the LLM by using masked attention to restrict direct interactions between the spatial and temporal tokens; (2) Harmonic-RoPE, which extends the dimensionality of the positional IDs, allowing the spatial and temporal tokens to maintain balanced positions relative to the text tokens. To evaluate the action-scene hallucination in Video-LLMs, we introduce the UNSCENE benchmark with 1,320 videos and 4,078 QA pairs. Extensive experiments demonstrate that MASH-VLM achieves state-of-the-art results on the UNSCENE benchmark, as well as on existing video understanding benchmarks.
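The core of DST-attention, as described in the abstract, is a masked attention that blocks direct interaction between spatial and temporal tokens while text tokens still attend to both. The sketch below is not the authors' implementation; the token layout, the helper name build_dst_mask, and the toy sizes are assumptions made purely to illustrate that masking idea.

```python
import torch
import torch.nn.functional as F

def build_dst_mask(num_spatial: int, num_temporal: int, num_text: int) -> torch.Tensor:
    """Boolean attention mask (True = attention allowed).

    Illustrative sketch only: spatial and temporal tokens may not attend to
    each other directly, while text tokens may attend to every token.
    Causal masking for the text tokens is omitted for brevity.
    """
    n = num_spatial + num_temporal + num_text
    mask = torch.ones(n, n, dtype=torch.bool)      # start fully connected
    s = slice(0, num_spatial)                      # spatial token range
    t = slice(num_spatial, num_spatial + num_temporal)  # temporal token range
    mask[s, t] = False  # spatial tokens cannot attend to temporal tokens
    mask[t, s] = False  # temporal tokens cannot attend to spatial tokens
    return mask

# Toy usage: 4 spatial, 3 temporal, 2 text tokens, embedding dim 8.
num_s, num_t, num_x, dim = 4, 3, 2, 8
tokens = torch.randn(1, num_s + num_t + num_x, dim)  # (batch, seq, dim)
mask = build_dst_mask(num_s, num_t, num_x)

# Single-head attention with the disentangling mask applied
# (True entries participate in attention, False entries are masked out).
out = F.scaled_dot_product_attention(tokens, tokens, tokens, attn_mask=mask)
print(out.shape)  # torch.Size([1, 9, 8])
```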
arXiv pre-print: https://arxiv.org/abs/2503.15871
2025.04.07