Deep Face Swapping via Cross-Identity Adversarial Training

Abstract

Generative Adversarial Networks (GANs) have shown promising improvements in face synthesis and image manipulation. However, it remains difficult to swap the faces in videos with a specific target. The most well-known face swapping method, Deepfakes, focuses on reconstructing the face image with an auto-encoder while paying less attention to the identity gap between the source and target faces, which causes the swapped face to resemble both the source face and the target face. In this work, we propose to incorporate a cross-identity adversarial training mechanism for highly photo-realistic face swapping. Specifically, we introduce a corresponding discriminator that learns to distinguish swapped faces, reconstructed faces, and real faces during training. In addition, an attention mechanism is applied to make our network robust to variations in illumination. Comprehensive experiments demonstrate the superiority of our method over baseline models, both quantitatively and qualitatively.
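The discriminator described above separates three categories of faces rather than the usual real/fake pair. A minimal way to model that is a 3-class classifier trained with softmax cross-entropy, with the generator adversarially pushing its swapped output toward the "real" class. The class labels, helper names, and loss forms below are illustrative assumptions for a sketch, not the authors' implementation.

```python
import math

# Illustrative class indices for the three face categories the
# discriminator must separate (an assumption, not the paper's code).
REAL, RECONSTRUCTED, SWAPPED = 0, 1, 2

def softmax(logits):
    """Numerically stable softmax over a list of raw scores."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def discriminator_loss(logits, label):
    """Cross-entropy the discriminator minimises: -log p(true class)."""
    return -math.log(softmax(logits)[label])

def generator_adversarial_loss(logits):
    """Adversarial term for the generator: it wants the discriminator
    to classify its swapped face as REAL, so it minimises -log p(real)."""
    return -math.log(softmax(logits)[REAL])
```

In a full training loop this adversarial term would be combined with the auto-encoder's reconstruction loss; alternating updates of the two networks then drive the swapped faces toward the real-face distribution.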

Publication
MultiMedia Modeling
Shuhui Yang
Master Student
Han Xue
PhD Student
Jun Ling
PhD Student

I’m now a PhD student at SJTU MediaLab, supervised by Prof. Li Song. Prior to joining Song’s MediaLab, I received my bachelor’s degree from the University of Science and Technology of China in 2018 and my master’s degree from Shanghai Jiao Tong University in 2021. My research interests focus on image and video generation, deep learning, and computer vision.

Li Song
Professor, IEEE Senior Member