Luma AI has announced the launch of Ray3 Modify, a new AI model designed to bring greater control, continuity, and predictability to AI-generated video workflows involving human performances. Now available on the company’s Dream Machine platform, Ray3 Modify targets professional use cases across film production, advertising, visual effects, and post-production.
The announcement was made on December 19, 2025, with Luma AI highlighting Ray3 Modify as a response to a longstanding limitation in generative video systems: the difficulty of preserving authentic human acting while transforming scenes using AI. Early AI video models often struggled to maintain timing, motion consistency, and emotional intent when modifying footage. Ray3 Modify aims to address these challenges by anchoring AI generation directly to real-world, human-led performances.
At the core of Ray3 Modify is a hybrid AI workflow that conditions AI output on recorded footage of actors, camera movement, and physical staging. Rather than replacing human performance, the model uses it as the primary source of direction. This approach allows creative teams to alter environments, visual styles, and cinematic elements while preserving the original actor's motion, eye line, and emotional delivery.
According to Luma AI, this capability enables brands to work with actors to create personalized or localized content without reshooting, while filmmakers can place actors into different worlds, settings, or visual styles after the initial shoot. By reducing unpredictability in AI-generated edits, Ray3 Modify is positioned for production environments where continuity and repeatability are critical.
Amit Jain, CEO and co-founder of Luma AI, said the new model is intended to balance creative expression with control. He noted that while generative video models are highly expressive, they have traditionally been difficult to direct with precision. Ray3 Modify, he said, allows creative teams to capture a performance once and then rework locations, costumes, or even entire scenes using AI, without recreating the physical production.
Ray3 Modify introduces several new capabilities that extend the Ray3 model’s role in AI-driven production pipelines. One of the most significant additions is Keyframe Control, which allows users to define start and end frames in video-to-video workflows. This enables smoother transitions, improved spatial continuity, and better control over character behavior during complex camera movements or scene reveals.
Another feature, Character Reference, allows creators to apply a consistent character identity—covering likeness, costume, and visual continuity—across an entire shot. This is particularly relevant for actor-led projects where maintaining identity consistency is essential.
The model also emphasizes performance preservation, ensuring that the original physical and emotional elements of an actor’s performance remain intact while visual elements are transformed. Supporting this is an enhanced Modify Video pipeline, built on a new high-signal model architecture designed to more accurately follow real-world motion, framing, and composition.
With its integration into Dream Machine, Luma AI is targeting professional workflows that demand both cinematic quality and precise creative control, signaling a continued push toward AI systems that augment, rather than override, human creativity.