Distraction-aware Shadow Detection
City University of Hong Kong
Abstract
Shadow detection is an important and challenging task for scene understanding. Despite the promising results of recent deep learning based methods, existing works still struggle with ambiguous cases where the visual appearances of shadow and non-shadow regions are similar (referred to as distraction in our context). In this paper, we propose a Distraction-aware Shadow Detection Network (DSDNet) that explicitly learns and integrates the semantics of visual distraction regions in an end-to-end framework. At the core of our framework is a novel standalone, differentiable distraction-aware module, which allows us to learn distraction-aware, discriminative features for robust shadow detection. The proposed module learns to extract distraction-indicative features and fuse them into the visual features of the input image by explicitly predicting false positives and false negatives. We conduct extensive experiments on three public shadow detection datasets, SBU, UCF and ISTD, to evaluate our method. Experimental results demonstrate that our model boosts shadow detection performance by effectively suppressing false positives and false negatives, achieving state-of-the-art results.
Architecture
Distraction-aware Shadow (DS) Module
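To make the idea concrete, below is a minimal PyTorch sketch of what a distraction-aware module of this kind could look like. It is not the authors' implementation: the layer sizes, the two single-channel distraction heads, and the subtract-FP / add-FN fusion rule are illustrative assumptions. It only reflects the mechanism described in the abstract, namely predicting false-positive and false-negative distraction maps and fusing distraction-indicative features back into the visual features.

import torch
import torch.nn as nn

class DSModule(nn.Module):
    """Illustrative sketch of a distraction-aware module (not the paper's code)."""

    def __init__(self, channels: int):
        super().__init__()
        # Two small heads predict per-pixel distraction maps:
        # one for likely false positives (FP), one for false negatives (FN).
        self.fp_head = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 1),
        )
        self.fn_head = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 1),
        )
        self.fuse = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, feat: torch.Tensor):
        fp_logit = self.fp_head(feat)   # B x 1 x H x W
        fn_logit = self.fn_head(feat)   # B x 1 x H x W
        fp_attn = torch.sigmoid(fp_logit)
        fn_attn = torch.sigmoid(fn_logit)
        # Assumed fusion rule: suppress features in likely-FP regions and
        # emphasize features in likely-FN regions before re-encoding.
        fused = self.fuse(feat - fp_attn * feat + fn_attn * feat)
        # The distraction logits are also returned so that, during training,
        # they can be supervised with FP/FN maps (e.g., derived from the
        # errors of a baseline detector against the ground truth).
        return fused, fp_logit, fn_logit

In this sketch, explicit supervision of fp_logit and fn_logit is what makes the module distraction-aware: the fused features are steered away from regions the detector would otherwise mislabel.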
Citation
@InProceedings{Zheng_2019_CVPR,
  author    = {Zheng, Quanlong and Qiao, Xiaotian and Cao, Ying and Lau, Rynson W.H.},
  title     = {Distraction-Aware Shadow Detection},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2019}
}