
How AI can turn your home video into a Hollywood blockbuster

Runway debuts Act-One, an AI tool that turns footage of people into animated characters.

Want to star in an animated film as an anthropomorphic animal version of yourself? Runway’s AI video creation platform has a new tool to do just that. The new Act-One feature may make motion-capture suits and manual computer animation unnecessary for matching animated characters to live-action performances.

Act-One streamlines what is usually a lengthy facial animation process. All you need is a video camera pointed at an actor, capturing their face as they perform.

The AI fueling Act-One reworks the facial movements and expressions from the input video to fit an animated character. Runway claims even the most nuanced emotions come through via micro-expressions, eye-lines, and other facets of the performance. Act-One can even produce multi-character dialogue scenes, which Runway suggests are difficult for most generative AI video models.

To produce one, a single actor performs multiple roles, and the AI maps the different performances onto different characters in a single scene so they appear to be talking to each other.

That’s a far cry from the laborious demands of traditional animation, and it makes the craft far more accessible to creators with limited budgets or technical experience. It won’t always match the skill of talented animation teams backed by big movie budgets, but the relatively low barrier to entry could let amateurs and those with limited resources experiment with character designs that still portray emotion convincingly, all without breaking the bank or missing deadlines. You can see some demonstrations below.

Animated Runway

Act-One is, in some ways, an enhancement for Runway’s video-to-video feature within its Gen-3 Alpha model. But while that tool uses a video and a text prompt to adjust the setting, performers, or other elements, Act-One skips straight to mapping human expressions onto animated characters. It also fits with how Runway has been pushing out more features and options for its platform, such as the Gen-3 Alpha Turbo version of its model, which sacrifices some functionality for speed.

As with its other AI video tools, Runway has placed some restrictions on Act-One to prevent people from misusing it or breaking its terms and conditions. You can’t make content featuring public figures, for instance, and the company employs techniques to verify that anyone whose voice is used in the final video has given their permission. The model is continuously monitored to spot any attempts to break those or other rules.

“We’re excited to see what forms of creative storytelling Act-One brings to animation and character performance. Act-One is another step forward in our goal of bringing previously sophisticated techniques to a broader range of creators and artists,” Runway wrote in its announcement. “We look forward to seeing how artists and storytellers will use Act-One to bring their visions to life in new and exciting ways.”

Act-One stands out among AI video generators, though Adobe Firefly and Meta’s MovieGen have similar efforts in their portfolios. Runway’s Act-One seems much easier to use than Firefly’s equivalent and more widely available than the restricted MovieGen model.

Still, there’s ever more AI video competition as OpenAI’s Sora model starts to spread, and Stability AI, Pika, Luma Labs’ Dream Machine, and others push out a steady stream of features for AI video production. If you want to try Act-One, Runway’s paid plans start at $12 a month.

You might also like…

This AI can turn your mundane video into a special effects spectacular
Forget Sora, Runway is the AI video maker coming to blow your mind
Turn your selfie into an action star with this new AI image-to-video feature
