Time: 2:20–4:20 p.m.
Venue: International Conference Hall, 3rd Floor, General Building, NTNU Gongguan Campus (臺師大公館校區)
講題:Virtual Musician: An Automated System for Generating Expressive Virtual Violin Performances from Music
Abstract:
MOCAP-free (motion-capture-free) music-to-performance generation using deep generative models has emerged as a promising approach for next-generation animation technologies, enabling animated musical performances to be created without relying on motion capture. Building such systems, however, presents substantial challenges, particularly in integrating multiple independent models that control different aspects of the avatar, such as facial expression generation for emotive dynamics and fingering generation for instrumental articulation. Moreover, most existing approaches focus on human-only performance generation, overlooking the critical role of human-instrument interaction in achieving expressive and realistic musical performances. To address these limitations, this work proposes a comprehensive system for generating expressive virtual violin performances. The system integrates five key modules (expressive music synthesis, facial expression generation, fingering generation, body movement generation, and video shot generation) into a unified framework. By eliminating the need for MOCAP and explicitly modeling human-instrument interactions, this work advances the field of MOCAP-free content-to-performance generation.
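The abstract does not describe the speaker's actual interfaces, but the unified five-module framework can be pictured as a simple pipeline in which each module's output conditions the next. The sketch below is purely illustrative: every function name, signature, and data flow is an assumption for exposition, not the system's API.

```python
from dataclasses import dataclass

# Hypothetical sketch of the five-module pipeline named in the abstract.
# All names and interfaces are illustrative assumptions, not the actual system.

@dataclass
class PerformanceAssets:
    """Collects the output of each generation module for one piece of music."""
    audio: str = ""
    face: str = ""
    fingering: str = ""
    body: str = ""
    shots: str = ""

def synthesize_music(score: str) -> str:
    # Placeholder: expressive music synthesis from a symbolic score.
    return f"audio({score})"

def generate_face(audio: str) -> str:
    # Placeholder: facial expression generation for emotive dynamics,
    # conditioned on the synthesized audio.
    return f"face({audio})"

def generate_fingering(score: str) -> str:
    # Placeholder: fingering generation for instrumental articulation.
    return f"fingering({score})"

def generate_body(audio: str, fingering: str) -> str:
    # Placeholder: body movement generation; conditioning on fingering is one
    # way to model the human-instrument interaction the abstract emphasizes.
    return f"body({audio},{fingering})"

def generate_shots(assets: PerformanceAssets) -> str:
    # Placeholder: video shot generation over the animated performance.
    return f"shots({assets.face},{assets.body})"

def render_performance(score: str) -> PerformanceAssets:
    """Run the five modules in sequence within one unified framework."""
    assets = PerformanceAssets()
    assets.audio = synthesize_music(score)
    assets.face = generate_face(assets.audio)
    assets.fingering = generate_fingering(score)
    assets.body = generate_body(assets.audio, assets.fingering)
    assets.shots = generate_shots(assets)
    return assets

result = render_performance("caprice24")
print(result.shots)
```

The key design point this sketch illustrates is that later modules (body movement, shot generation) consume the outputs of earlier ones, so the framework couples the human and the instrument rather than generating each stream independently.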
Speaker: Dr. 林鼎崴
Affiliation/Title: Assistant Professor, Department of Computer Science and Engineering, National Chung Hsing University
Host: Prof. 王鈞右