Learning a Perceptual Manifold with Deep Features for Animation Video Resequencing
Charles C. Morace
Thi-Ngoc-Hanh Le
Sheng-Yi Yao
Shang-Wei Zhang
Tong-Yee Lee
Department of Computer Science and Information Engineering
National Cheng Kung University
[Paper]
[Main Video]
[Additional Results]
[Appendix]
This paper is accepted for publication in the journal Multimedia Tools and Applications (2022/01/14)
Abstract
We propose a novel deep learning framework for animation video resequencing.
Our system produces new video sequences by minimizing a perceptual distance of images from an existing animation video clip.
To measure perceptual distance, we utilize the activations of convolutional neural networks and learn a perceptual metric by training a small network on these features,
using data comprised of human perceptual judgments.
We show that with this perceptual metric and graph-based manifold learning techniques,
our framework can produce smooth and visually appealing animation video results for a variety of animation video styles.
In contrast to previous work on animation video resequencing,
the proposed framework applies to a wide range of image styles and does not require hand-crafted feature extraction,
background subtraction, or feature correspondence.
In addition, we show that our framework can be applied to appealingly arrange unordered collections of images.
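To make the graph-based resequencing idea concrete, here is a minimal sketch (ours, not the authors' implementation): treat each frame as a graph node, connect frames whose learned perceptual distance is small, and extract a low-cost path through the graph to obtain a smooth new ordering. The 1-D toy features and Euclidean distance below are stand-ins for the learned perceptual metric; `knn_graph` and `shortest_path` are hypothetical helper names.

```python
import heapq
import numpy as np

def knn_graph(dist, k):
    """Adjacency lists: each frame connects to its k perceptually nearest frames."""
    n = dist.shape[0]
    adj = {i: [] for i in range(n)}
    for i in range(n):
        for j in np.argsort(dist[i])[1:k + 1]:  # index 0 is the frame itself
            j = int(j)
            adj[i].append((j, float(dist[i, j])))
            adj[j].append((i, float(dist[i, j])))  # keep the graph symmetric
    return adj

def shortest_path(adj, src, dst):
    """Dijkstra over perceptual-distance edges; the path is a smooth resequence."""
    pq = [(0.0, src, [src])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in adj[node]:
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return None  # graph is disconnected

# Toy "frames": points along a 1-D manifold in feature space, shuffled so the
# input order carries no information (stand-in for CNN-derived features).
rng = np.random.default_rng(0)
order = rng.permutation(30)
feats = np.linspace(0.0, 1.0, 30)[order][:, None]
dist = np.linalg.norm(feats[:, None] - feats[None, :], axis=-1)

adj = knn_graph(dist, k=2)
src = int(np.argmin(feats))  # frame at one end of the manifold
dst = int(np.argmax(feats))  # frame at the other end
seq = shortest_path(adj, src, dst)  # frame indices ordered smoothly along the manifold
```

Because the toy features lie on a line, the recovered path visits frames in monotonically increasing feature order, i.e., the shuffled collection is rearranged into a smooth sequence; the paper's system plays this role with learned perceptual distances between real animation frames.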