Dataset for "NPF-200: A Multi-Modal Eye Fixation Dataset and Method for Non-Photorealistic Videos"
The full code repository is available on GitHub: https://github.com/Yangziyu/NPF200
Non-photorealistic videos are in growing demand with the wave of the metaverse, but they lack sufficient research attention. This work takes a step toward understanding how humans perceive non-photorealistic videos through eye fixations (i.e., saliency detection), which is critical for enhancing media production, artistic design, and game user experience. To fill the gap left by the absence of a suitable dataset for this line of research, we present NPF-200, the first large-scale multi-modal dataset of purely non-photorealistic videos with eye fixations. Our dataset has three characteristics: 1) it contains soundtracks, which are essential according to vision and psychological studies; 2) it includes diverse semantic content, and the videos are of high quality; 3) it features rich motion both across and within videos. We conduct a series of analyses to gain deeper insight into this task and compare several state-of-the-art methods to explore the gap between natural images and non-photorealistic data. Additionally, since the human attention system tends to extract visual and audio features at different frequencies, we propose a universal frequency-aware multi-modal non-photorealistic saliency detection model called NPSNet, which achieves state-of-the-art performance on our task. The results uncover the strengths and weaknesses of multi-modal network design and multi-domain training, opening up promising directions for future work. Our dataset and code can be found at https://github.com/Yangziyu/NPF200.
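To make the idea of frequency-aware multi-modal fusion more concrete, below is a minimal PyTorch sketch of one plausible reading of it: splitting visual features into low- and high-frequency bands and fusing each band with audio features before predicting a saliency map. This is purely illustrative and is not the authors' NPSNet; all module names, feature shapes, and the low-/high-pass split are assumptions.

```python
# Illustrative sketch only -- NOT the authors' NPSNet implementation.
# Hypothetical modules showing frequency-aware audio-visual fusion:
# the visual feature map is split into a low-pass band and a high-pass
# residual, each band is fused with projected audio features, and a
# 1x1 convolution predicts a single-channel saliency map.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FrequencyAwareFusion(nn.Module):
    """Fuse audio features with low-/high-frequency visual bands (hypothetical)."""

    def __init__(self, vis_ch: int = 64, aud_dim: int = 128):
        super().__init__()
        self.aud_proj = nn.Linear(aud_dim, vis_ch)     # project audio to visual channels
        self.fuse_low = nn.Conv2d(2 * vis_ch, vis_ch, kernel_size=1)
        self.fuse_high = nn.Conv2d(2 * vis_ch, vis_ch, kernel_size=1)
        self.head = nn.Conv2d(2 * vis_ch, 1, kernel_size=1)  # saliency logits

    def forward(self, vis: torch.Tensor, aud: torch.Tensor) -> torch.Tensor:
        # vis: (B, C, H, W) visual features; aud: (B, aud_dim) audio features
        low = F.avg_pool2d(vis, kernel_size=3, stride=1, padding=1)  # low-pass (blur)
        high = vis - low                                             # high-pass residual
        # Broadcast audio features over the spatial grid of each band.
        a = self.aud_proj(aud)[:, :, None, None].expand_as(vis)
        low = self.fuse_low(torch.cat([low, a], dim=1))
        high = self.fuse_high(torch.cat([high, a], dim=1))
        return self.head(torch.cat([low, high], dim=1))  # (B, 1, H, W) saliency logits


if __name__ == "__main__":
    model = FrequencyAwareFusion()
    sal = model(torch.randn(2, 64, 56, 56), torch.randn(2, 128))
    print(sal.shape)  # torch.Size([2, 1, 56, 56])
```

For the actual architecture and training details, refer to the code in the GitHub repository linked above.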
Confidential or personally identifiable information
- I confirm that the uploaded data has no confidential or personally identifiable information.