Toward Fine-grained Facial Expression Manipulation

Abstract

Facial expression manipulation aims at editing facial expressions according to a given condition. Previous methods edit an input image under the guidance of a discrete emotion label or an absolute condition (e.g., facial action units) to produce the desired expression. However, these methods either suffer from changing condition-irrelevant regions or are inefficient for fine-grained editing. In this study, we take both objectives into consideration and propose a novel method. First, we replace the continuous absolute condition with a relative condition, specifically, relative action units. With relative action units, the generator learns to transform only the regions of interest, which are specified by non-zero-valued relative AUs. Second, our generator is built on U-Net and strengthened by a Multi-Scale Feature Fusion (MSF) mechanism for high-quality expression editing. Extensive quantitative and qualitative experiments demonstrate the improvements of our proposed approach over state-of-the-art expression editing methods. Code is available at https://github.com/junleen/Expression-manipulator.
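The relative-condition idea in the abstract can be sketched in a few lines: the editing signal is the difference between the target and source AU intensity vectors, so zero-valued entries mark facial regions the generator should leave untouched. The snippet below is a minimal illustration of that computation (it is not the authors' code; the function name and example AU values are hypothetical).

```python
import numpy as np

def relative_aus(source_aus, target_aus):
    """Relative AU condition: target minus source intensities.

    Zero entries indicate AUs (and hence regions) that should stay
    unchanged; non-zero entries specify the regions to transform.
    """
    return np.asarray(target_aus, dtype=float) - np.asarray(source_aus, dtype=float)

# Hypothetical 3-AU example: only the second AU intensity changes.
src = np.array([0.0, 0.8, 0.2])   # AU intensities estimated from the input face
tgt = np.array([0.0, 0.3, 0.2])   # desired AU intensities
rel = relative_aus(src, tgt)      # → only one non-zero entry
edit_mask = rel != 0              # marks which AUs drive the edit
```

In the paper's setting this relative vector, rather than the absolute target AUs, is fed to the generator as the editing condition.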

Publication
arXiv:2004.03132 [cs]
Jun Ling
PhD Student

I am a PhD student at SJTU MediaLab, supervised by Prof. Li Song. Prior to joining Song's MediaLab, I received my bachelor's degree from the University of Science and Technology of China in 2018 and my master's degree from Shanghai Jiao Tong University in 2021. My research interests focus on image and video generation, deep learning, and computer vision.

Han Xue
PhD Student
Li Song
Professor, IEEE Senior Member
Shuhui Yang
Master Student