Learning a Perceptual Manifold with Deep Features for Animation Video Resequencing
[Paper] [Main Video] [Additional Results] [Appendix]
2022, 81:23687–23707
Abstract
We propose a novel deep learning framework for animation video resequencing. Our system produces new video sequences by minimizing a perceptual distance between images from an existing animation video clip. To measure perceptual distance, we utilize the activations of convolutional neural networks and learn a perceptual metric by training a small network on these features with data comprised of human perceptual judgments. We show that with this perceptual metric and graph-based manifold learning techniques, our framework can produce smooth and visually appealing animation video results for a variety of animation video styles. In contrast to previous work on animation video resequencing, the proposed framework applies to a wide range of image styles and does not require hand-crafted feature extraction, background subtraction, or feature correspondence. In addition, we show that our framework can also be used to appealingly arrange unordered collections of images.
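
To make the pipeline concrete, the sketch below illustrates the two ingredients the abstract describes: a perceptual distance computed from CNN features trained on human judgments (stood in for here by the off-the-shelf LPIPS metric, not the paper's own learned network), and a nearest-neighbor graph over those distances on which resequencing reduces to finding short paths between frames. The frame directory, the choice of `k`, and the shortest-path formulation are illustrative assumptions, not the authors' exact method.

```python
# Minimal sketch: perceptual-distance graph for frame resequencing.
# Assumptions (not from the paper): LPIPS as the perceptual metric,
# a k-NN graph + shortest path as the resequencing step.
import glob

import lpips                 # pip install lpips
import networkx as nx        # pip install networkx
import torch
from PIL import Image
from torchvision import transforms

to_tensor = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),                              # scale to [0, 1]
    transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),  # [-1, 1], as LPIPS expects
])

def load_frames(pattern):
    """Load animation frames matching a glob pattern as an (N, 3, H, W) tensor."""
    paths = sorted(glob.glob(pattern))
    frames = torch.stack([to_tensor(Image.open(p).convert("RGB")) for p in paths])
    return frames, paths

@torch.no_grad()
def pairwise_perceptual_distances(frames, metric):
    """Dense N x N matrix of perceptual distances between all frame pairs."""
    n = frames.shape[0]
    dist = torch.zeros(n, n)
    for i in range(n):
        # Compare frame i against every frame in one batched call.
        ref = frames[i:i + 1].expand(n, -1, -1, -1)
        dist[i] = metric(ref, frames).flatten()
    return dist

def resequence(dist, start, end, k=5):
    """Connect each frame to its k perceptually nearest neighbors and
    return the shortest path of frame indices from `start` to `end`."""
    n = dist.shape[0]
    graph = nx.Graph()
    for i in range(n):
        nearest = torch.topk(dist[i], k + 1, largest=False).indices.tolist()
        for j in nearest:
            if j != i:
                graph.add_edge(i, j, weight=float(dist[i, j]))
    return nx.shortest_path(graph, start, end, weight="weight")

if __name__ == "__main__":
    frames, paths = load_frames("frames/*.png")   # hypothetical frame directory
    metric = lpips.LPIPS(net="alex")              # CNN-feature perceptual metric
    dist = pairwise_perceptual_distances(frames, metric)
    order = resequence(dist, start=0, end=len(paths) - 1)
    print([paths[i] for i in order])
```

In this toy setup, a smooth resequencing corresponds to a low-total-cost walk through the graph; the paper's manifold-learning formulation is richer, but the perceptual k-NN graph captures the core idea of ordering frames by perceptual proximity rather than by their original timeline.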