AVI-Talking: Learning Audio-Visual Instructions for Expressive 3D Talking Face Generation
While considerable progress has been made in achieving accurate lip synchronization for 3D speech-driven talking face generation, the task of incorporating expressive facial detail synthesis aligned with the speaker’s speaking status remains challenging. Existing efforts either focus on learning a dynamic talking head pose synchronized with the speech rhythm or aim for stylized facial movements guided by an external reference such as emotion labels or reference video clips. The former often yields coarse alignment that neglects the emotional nuances present in the audio, while the latter leads to unnatural workflows that require users to manually select a style source. Our goal is to directly leverage the style information inherent in human speech to generate an expressive talking face that aligns with the speaking status. In this paper, we propose AVI-Talking, an Audio-Visual Instruction system for expressive Talking face generation. The system harnesses the robust contextual reasoning and hallucination capability of Large Language Models (LLMs) to instruct the realistic synthesis of 3D talking faces. Instead of learning facial movements directly from human speech, our two-stage strategy has the LLM first comprehend the audio and generate instructions describing expressive facial details that correspond to the speech; a diffusion-based generative network then executes these instructions. This two-stage process, coupled with the incorporation of LLMs, enhances model interpretability and gives users the flexibility to inspect the instructions and specify desired operations or modifications. (A minimal sketch of the first-stage speech-instruction alignment follows this record block.)
| Published in: | IEEE Access |
|---|---|
| Main Authors: | Yasheng Sun, Wenqing Chu, Hang Zhou, Kaisiyuan Wang, Hideki Koike |
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2024-01-01 |
| Subjects: | Large language models; audio-visual instruction; diffusion model; expressive talking face generation; contrastive learning |
| Online Access: | https://ieeexplore.ieee.org/document/10504116/ |
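The abstract describes a Q-Former that contrastively aligns speech features with visual-instruction text and projects the result into the LLM's input embedding space as a prompt. Below is a minimal, hypothetical PyTorch sketch of that first stage; the module names, dimensions, and the symmetric InfoNCE objective are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch of the first stage: a Q-Former-style module pools
# speech features into query tokens, which are (a) contrastively aligned
# with instruction-text embeddings and (b) projected into the LLM's input
# embedding space as soft prompt tokens. Shapes and loss form are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpeechQFormer(nn.Module):
    """Learnable queries cross-attend to speech features (Q-Former style)."""
    def __init__(self, speech_dim=768, hidden=256, llm_dim=4096, n_queries=32):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_queries, hidden))
        self.speech_proj = nn.Linear(speech_dim, hidden)
        self.cross_attn = nn.MultiheadAttention(hidden, num_heads=4,
                                                batch_first=True)
        self.to_llm = nn.Linear(hidden, llm_dim)    # soft-prompt projection

    def forward(self, speech_feats):                # (B, T, speech_dim)
        kv = self.speech_proj(speech_feats)         # (B, T, hidden)
        q = self.queries.unsqueeze(0).expand(speech_feats.size(0), -1, -1)
        tokens, _ = self.cross_attn(q, kv, kv)      # (B, n_queries, hidden)
        pooled = tokens.mean(dim=1)                 # for the contrastive loss
        prompt = self.to_llm(tokens)                # prepended to LLM input
        return pooled, prompt

def info_nce(speech_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE: matched speech/instruction pairs are positives;
    the rest of the batch serves as negatives, in both directions."""
    s = F.normalize(speech_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = s @ t.T / temperature                  # (B, B) similarities
    labels = torch.arange(s.size(0), device=s.device)
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.T, labels))

# Toy usage: speech_feats from any audio encoder, text embeddings from the
# instruction-text encoder; `prompt` would be concatenated with the LLM's
# token embeddings to elicit the visual instruction.
qformer = SpeechQFormer()
pooled, prompt = qformer(torch.randn(8, 100, 768))
loss = info_nce(pooled, torch.randn(8, 256))
```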
| _version_ | 1850043915108876288 |
|---|---|
| author | Yasheng Sun; Wenqing Chu; Hang Zhou; Kaisiyuan Wang; Hideki Koike |
| author_facet | Yasheng Sun; Wenqing Chu; Hang Zhou; Kaisiyuan Wang; Hideki Koike |
| author_sort | Yasheng Sun |
| collection | DOAJ |
| container_title | IEEE Access |
| description | While considerable progress has been made in achieving accurate lip synchronization for 3D speech-driven talking face generation, the task of incorporating expressive facial detail synthesis aligned with the speaker’s speaking status remains challenging. Existing efforts either focus on learning a dynamic talking head pose synchronized with the speech rhythm or aim for stylized facial movements guided by an external reference such as emotion labels or reference video clips. The former often yields coarse alignment that neglects the emotional nuances present in the audio, while the latter leads to unnatural workflows that require users to manually select a style source. Our goal is to directly leverage the style information inherent in human speech to generate an expressive talking face that aligns with the speaking status. In this paper, we propose AVI-Talking, an Audio-Visual Instruction system for expressive Talking face generation. The system harnesses the robust contextual reasoning and hallucination capability of Large Language Models (LLMs) to instruct the realistic synthesis of 3D talking faces. Instead of learning facial movements directly from human speech, our two-stage strategy has the LLM first comprehend the audio and generate instructions describing expressive facial details that correspond to the speech; a diffusion-based generative network then executes these instructions. This two-stage process, coupled with the incorporation of LLMs, enhances model interpretability and gives users the flexibility to inspect the instructions and specify desired operations or modifications. Specifically, given a speech clip, we first employ a Q-Former to contrastively align the speech features with visual instructions; the aligned features are then projected into the input text embedding space of the LLM. This functions as a prompting strategy, steering the LLM toward plausible visual instructions that encompass diverse facial details. To execute the predicted instructions, we derive a language-guided talking face generation system with a disentangled latent space, in which speech-content-related lip movements and emotion-correlated facial expressions are represented separately in a speech content space and a content-irrelevant space. Additionally, we introduce a contrastive instruction-style alignment and diffusion technique within the content-irrelevant space to fully exploit the talking prior network for diverse instruction-following synthesis. Extensive experiments showcase the effectiveness of our approach in producing vivid talking faces with expressive facial movements and a consistent emotional status. (A sketch of instruction-conditioned diffusion in the content-irrelevant space follows at the end of the record.) |
| format | Article |
| id | doaj-art-2de39d95a4964ebbbb868fca10a6dfda |
| institution | Directory of Open Access Journals |
| issn | 2169-3536 |
| language | English |
| publishDate | 2024-01-01 |
| publisher | IEEE |
| record_format | Article |
| spelling | doaj-art-2de39d95a4964ebbbb868fca10a6dfda (indexed 2025-08-20T00:30:14Z); eng; IEEE; IEEE Access, ISSN 2169-3536; 2024-01-01; vol. 12, pp. 57288–57301; DOI 10.1109/ACCESS.2024.3390182; IEEE document 10504116. AVI-Talking: Learning Audio-Visual Instructions for Expressive 3D Talking Face Generation. Yasheng Sun (https://orcid.org/0000-0002-0589-4424), Tokyo Institute of Technology, Tokyo, Japan; Wenqing Chu (https://orcid.org/0000-0003-0816-7975), Baidu Inc., Beijing, China; Hang Zhou, Baidu Inc., Beijing, China; Kaisiyuan Wang, School of Electrical and Computer Engineering, The University of Sydney, Darlington, NSW, Australia; Hideki Koike (https://orcid.org/0000-0002-8989-6434), Tokyo Institute of Technology, Tokyo, Japan. https://ieeexplore.ieee.org/document/10504116/ |
| spellingShingle | Yasheng Sun; Wenqing Chu; Hang Zhou; Kaisiyuan Wang; Hideki Koike; AVI-Talking: Learning Audio-Visual Instructions for Expressive 3D Talking Face Generation; Large language models; audio-visual instruction; diffusion model; expressive talking face generation; contrastive learning |
| title | AVI-Talking: Learning Audio-Visual Instructions for Expressive 3D Talking Face Generation |
| title_full | AVI-Talking: Learning Audio-Visual Instructions for Expressive 3D Talking Face Generation |
| title_fullStr | AVI-Talking: Learning Audio-Visual Instructions for Expressive 3D Talking Face Generation |
| title_full_unstemmed | AVI-Talking: Learning Audio-Visual Instructions for Expressive 3D Talking Face Generation |
| title_short | AVI-Talking: Learning Audio-Visual Instructions for Expressive 3D Talking Face Generation |
| title_sort | avi talking learning audio visual instructions for expressive 3d talking face generation |
| topic | Large language models; audio-visual instruction; diffusion model; expressive talking face generation; contrastive learning |
| url | https://ieeexplore.ieee.org/document/10504116/ |
| work_keys_str_mv | AT yashengsun avitalkinglearningaudiovisualinstructionsforexpressive3dtalkingfacegeneration AT wenqingchu avitalkinglearningaudiovisualinstructionsforexpressive3dtalkingfacegeneration AT hangzhou avitalkinglearningaudiovisualinstructionsforexpressive3dtalkingfacegeneration AT kaisiyuanwang avitalkinglearningaudiovisualinstructionsforexpressive3dtalkingfacegeneration AT hidekikoike avitalkinglearningaudiovisualinstructionsforexpressive3dtalkingfacegeneration |
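The description also mentions a diffusion technique applied within the content-irrelevant (style) space so that the talking prior network can follow diverse instructions. Below is a minimal, hypothetical sketch of one epsilon-prediction training step for such an instruction-conditioned diffusion model; the denoiser architecture, DDPM noise schedule, and all shapes are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch: a small denoiser predicts the noise added to a
# style latent from the content-irrelevant space, conditioned on the
# instruction embedding and the diffusion timestep. All names/shapes assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StyleDenoiser(nn.Module):
    def __init__(self, style_dim=128, cond_dim=256, hidden=512, steps=1000):
        super().__init__()
        self.t_embed = nn.Embedding(steps, hidden)
        self.net = nn.Sequential(
            nn.Linear(style_dim + cond_dim + hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, style_dim))

    def forward(self, noisy_style, t, instruction_emb):
        h = torch.cat([noisy_style, instruction_emb, self.t_embed(t)], dim=-1)
        return self.net(h)                         # predicted noise epsilon

steps = 1000
betas = torch.linspace(1e-4, 0.02, steps)          # standard DDPM schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

def train_step(model, style_latent, instruction_emb):
    """One epsilon-prediction step: x_t = sqrt(ab)*x_0 + sqrt(1-ab)*eps."""
    b = style_latent.size(0)
    t = torch.randint(0, steps, (b,))
    eps = torch.randn_like(style_latent)
    ab = alpha_bar[t].unsqueeze(-1)
    noisy = ab.sqrt() * style_latent + (1 - ab).sqrt() * eps
    return F.mse_loss(model(noisy, t, instruction_emb), eps)

# Toy usage: style_latent would come from the content-irrelevant branch of
# the disentangled encoder, instruction_emb from the instruction encoder.
model = StyleDenoiser()
loss = train_step(model, torch.randn(8, 128), torch.randn(8, 256))
```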
