New AI model generates 45-minute lip-synced video from one photo and runs in real time
Matthias Bastian, The Decoder
AI Summary
LPM 1.0 is a new AI model that generates 45-minute lip-synced videos in real time from a single photograph, complete with realistic facial expressions and emotional reactions. Still a research project, the model marks a significant advance in video synthesis and facial animation.
This article was originally published on The Decoder. Read the full story at the source.
Related Articles

Just ten minutes of using AI as an answer machine can measurably erode problem-solving skills, new study finds
The Decoder

Building Transformer-Based NQS for Frustrated Spin Systems with NetKet
MarkTechPost
Nvidia wants to scale robot simulation training with Lyra 2.0
The Decoder

UCSD and Together AI Research Introduces Parcae: A Stable Architecture for Looped Language Models That Achieves the Quality of a Transformer Twice the Size
MarkTechPost