Abstract

Facial micro-expression (FME) refers to a brief, spontaneous facial movement that can disclose a person's genuine emotion. Investigations of FMEs are hampered by a lack of data. Fortunately, generative deep neural network models can help synthesize new images with desired FMEs. However, FMEs are so subtle that they are difficult to capture and generate. To address these challenges, we developed an edge-aware motion-based FME generation (EAM-FMEG) method. First, we introduced an auxiliary edge prediction (AEP) task that estimates facial edges to aid the extraction of subtle features. Second, we proposed an edge-intensified multi-head self-attention (EIMHSA) module that focuses on important facial regions to enhance generation in response to subtle changes. The method was tested on three FME databases and achieved satisfactory results. The ablation study demonstrated that the method is capable of producing objects with clear edges and is robust to texture disturbance, shape distortion, and background defects. Furthermore, the method demonstrated strong cross-database generalization, even from RGB to grayscale images or vice versa, enabling general applications.
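To make the EIMHSA idea more concrete, the following is a minimal illustrative sketch, not the authors' implementation: it assumes a PyTorch-style setup in which a predicted edge map gates the spatial features before a standard multi-head self-attention layer. The class name `EdgeIntensifiedMHSA`, the gating scheme, and all tensor shapes are hypothetical choices made for illustration only.

```python
import torch
import torch.nn as nn

class EdgeIntensifiedMHSA(nn.Module):
    """Illustrative sketch of an edge-intensified multi-head self-attention block.

    Assumption (not stated in the abstract): the predicted edge map is turned
    into a per-location gate that amplifies features along facial edges before
    ordinary multi-head self-attention is applied.
    """

    def __init__(self, dim=64, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=num_heads,
                                          batch_first=True)
        self.edge_proj = nn.Linear(1, dim)  # lift 1-channel edge map to feature dim
        self.norm = nn.LayerNorm(dim)

    def forward(self, feats, edge_map):
        # feats:    (B, N, dim)  flattened spatial features
        # edge_map: (B, N, 1)    edge strength per spatial location in [0, 1]
        gate = torch.sigmoid(self.edge_proj(edge_map))   # emphasize edge regions
        x = self.norm(feats * (1.0 + gate))              # intensify edge features
        out, _ = self.attn(x, x, x)                      # standard self-attention
        return feats + out                               # residual connection


# Toy usage: an 8x8 feature map flattened to 64 tokens with hypothetical shapes.
feats = torch.randn(2, 64, 64)
edges = torch.rand(2, 64, 1)
block = EdgeIntensifiedMHSA(dim=64, num_heads=4)
print(block(feats, edges).shape)  # torch.Size([2, 64, 64])
```

The auxiliary edge prediction (AEP) task would supply the `edge_map` input in such a design; how the edge signal is actually injected into the attention computation is specified in the paper itself, not in this sketch.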