
Where AI in Filmmaking Is Right Now - April 2025

  • Writer: Mariessa
  • Apr 20
  • 3 min read

Updated: Apr 20







Let's explore the current landscape of AI in filmmaking as of April 2025.

Please note: this information changes rapidly, as new models and updates are released frequently. Still, here's an overview of what's currently possible, and I'll highlight the tools we use in Yellow.



Current AI Video Models & Platforms (Paid and Open Source)


Paid Platforms: User-Friendly and Ready to Use


Runway Gen-4

The latest iteration of Runway's text-to-video model emphasizes consistent character identity, dynamic camera movement, and multishot continuity - significant advancements for narrative storytellers. Runway also offers tools like background removal, inpainting, and motion brush.


Kling 2.0

Developed by Kuaishou, Kling 2.0 is a multimodal video and image generation and editing platform that enables users to create longer, coherent AI-generated clips. It's particularly effective at visual storytelling and at maintaining stylistic consistency over time.


Pika Labs 1.5

Pika Labs focuses on short-form, stylized content. The 1.5 update introduces creative effects such as inflating, melting, and exploding objects, along with enhanced prompt capabilities and automatic sound integration, making it suitable for quick idea visualization and experimentation.


Google Veo 2

Veo 2 generates eight-second, 720p MP4 video clips directly from text prompts. It features improved understanding of real-world physics and human motion, resulting in more fluid character movements and lifelike visuals. Veo 2 is available to Gemini Advanced and Google One AI Premium subscribers.


Open Source / Local Models: Greater Control and Customization


ComfyUI

A node-based, open-source interface for building highly customizable workflows for AI video and image generation. ComfyUI is ideal for creators seeking control over styles and motion, and it pairs well with models like Stable Video Diffusion or WAN 2.1.
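
Because ComfyUI runs as a local server, workflows can also be queued from a script instead of the UI. Here's a minimal sketch using only the Python standard library; it assumes a default install listening on 127.0.0.1:8188 and a workflow exported via "Save (API Format)". The file name and the node ID "6" are placeholders - check your own exported JSON for the real ones.

    # Queue a render against a local ComfyUI server's HTTP API.
    import json
    import urllib.request

    # Workflow graph exported from the ComfyUI interface (API format).
    with open("workflow_api.json") as f:
        workflow = json.load(f)

    # Programmatically tweak a node input, e.g. the positive prompt text.
    # "6" is a hypothetical node ID; look up the real one in your JSON.
    workflow["6"]["inputs"]["text"] = "slow dolly shot down a neon-lit alley"

    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",
        data=json.dumps({"prompt": workflow}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read()))  # includes a prompt_id you can poll

This is handy for batch work: loop over a list of prompts or seeds and queue them all against the same graph.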


WAN 2.1

WAN 2.1 is a comprehensive open-source suite of video foundation models designed to advance video generation capabilities. It includes models capable of generating videos from text and images, supporting resolutions up to 720p. The 1.3B parameter model can run on consumer-grade GPUs with as little as 8.19 GB VRAM, making it accessible for a wide range of users.
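
To make that low-VRAM claim concrete, here's a minimal sketch of running the 1.3B text-to-video model locally. It assumes the Hugging Face diffusers integration and the Wan-AI/Wan2.1-T2V-1.3B-Diffusers checkpoint; exact model IDs, resolutions, and memory behavior may differ on your setup.

    # Text-to-video with the small WAN 2.1 model via diffusers (a sketch).
    import torch
    from diffusers import WanPipeline
    from diffusers.utils import export_to_video

    pipe = WanPipeline.from_pretrained(
        "Wan-AI/Wan2.1-T2V-1.3B-Diffusers", torch_dtype=torch.bfloat16
    )
    pipe.enable_model_cpu_offload()  # trades speed for VRAM on consumer GPUs

    frames = pipe(
        prompt="a paper boat drifting down a rain-soaked gutter, cinematic",
        height=480,
        width=832,
        num_frames=81,  # roughly five seconds at 16 fps
    ).frames[0]

    export_to_video(frames, "wan_clip.mp4", fps=16)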


HunyuanVideo

Developed by Tencent, HunyuanVideo is a large-scale open-source video generation model featuring a unified image and video architecture with 13 billion parameters. It incorporates a Multimodal Large Language Model (MLLM) text encoder for enhanced semantic understanding and a 3D Variational Autoencoder (VAE) for efficient compression, and it supports dynamic shot transitions while maintaining character consistency.
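
Because the model is so large, running it locally usually means leaning on memory-saving options. Below is a hedged sketch assuming diffusers' HunyuanVideoPipeline and the hunyuanvideo-community/HunyuanVideo mirror on Hugging Face; the model ID and settings are assumptions, not official guidance.

    # HunyuanVideo via diffusers with VRAM-saving options (a sketch).
    import torch
    from diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel
    from diffusers.utils import export_to_video

    model_id = "hunyuanvideo-community/HunyuanVideo"
    transformer = HunyuanVideoTransformer3DModel.from_pretrained(
        model_id, subfolder="transformer", torch_dtype=torch.bfloat16
    )
    pipe = HunyuanVideoPipeline.from_pretrained(
        model_id, transformer=transformer, torch_dtype=torch.float16
    )
    pipe.vae.enable_tiling()         # decode the video in tiles to cut VRAM
    pipe.enable_model_cpu_offload()  # keep only active submodules on the GPU

    frames = pipe(
        prompt="a lighthouse keeper climbing a spiral staircase at dusk",
        num_frames=61,
        num_inference_steps=30,
    ).frames[0]

    export_to_video(frames, "hunyuan_clip.mp4", fps=15)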


FramePack

FramePack is an innovative open-source video diffusion architecture that enables local AI-generated video creation on GPUs with as little as 6 GB of VRAM. It employs multi-stage optimization to condense input frames into a fixed-length temporal context, significantly reducing GPU memory requirements. This allows users with mid-range GPUs to generate high-quality, 60-second video clips.
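
The trick is easier to see in a toy model: instead of attending to every past frame at full cost, older frames are squeezed into progressively fewer tokens, so the total context (and therefore GPU memory) stays roughly constant however long the clip runs. The geometric halving schedule below is purely illustrative - it is not FramePack's actual code.

    # Toy illustration of a fixed-length temporal context (not FramePack itself).
    def context_budget(num_past_frames: int, base_tokens: int = 1536) -> list[int]:
        """Tokens allotted to each past frame, newest first."""
        # Halve the budget for every frame of age, with a floor of one token.
        return [max(base_tokens // (2 ** age), 1) for age in range(num_past_frames)]

    for n in (4, 16, 64):
        budgets = context_budget(n)
        print(f"{n:3d} past frames -> {sum(budgets)} context tokens total")

    # The total converges toward ~2x base_tokens regardless of video length,
    # which is why a modest GPU can keep predicting next frames indefinitely.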


Potat One

Potat One is an open-source AI video generator aimed at developers and researchers exploring high-resolution, coherent video generation. Its accessibility makes it a practical entry point for experimenting with the field.


AI Integration in Video Editing Software

AI is increasingly integrated into video editing platforms, enhancing efficiency and creative possibilities.


DaVinci Resolve 20

  • Magic Mask 2: AI-powered tracking and masking

  • AI Voice Convert: Seamlessly swap voices while maintaining tone and pacing

  • Smart Cut & AI-Enhanced Keyframes: Accelerate editing processes

  • AI Dialogue Matcher: Automatically matches the tone, level, and room environment of dialogue

  • AI Music Extender: Adjusts music track length to fit video content

  • AI Animated Subtitles: Generates animated subtitles synchronized with spoken words

  • AI Multicam SmartSwitch: Automatically switches multi-cam angles based on the active speaker

  • AI IntelliCut: Automates clip-based audio processing tasks

  • AI Audio Assistant: Creates a professional audio mix by organizing tracks and balancing levels

  • AI Detect Music Beats: Analyzes music to place beat markers for editing


Adobe Premiere Pro (with Adobe Sensei)

  • Auto Reframe: Adapts content to different aspect ratios automatically

  • Scene Edit Detection: Automatically breaks long clips into individual shots

  • Lumetri AI: Suggests color correction based on shot analysis


Final Cut Pro

  • Smart Conform: Reframes content for various platforms like TikTok and Instagram

  • Voice Isolation: Separates speech from background noise

  • Machine Learning-Driven Suggestions: Automates repetitive tasks


The Current State of AI in Filmmaking


Are we producing full-length AI-generated feature films? Not yet.

Currently, AI excels at concept development and short-form content. It assists in generating ideas, creating stylized inserts or visual metaphors, aiding rough cuts, and visualizing potential scenes. While there are impressive short films and experimental projects made with AI, the technology cannot yet handle the emotional depth, narrative coherence, or directorial nuance a full-length film requires on its own.

However, AI is no longer just a novelty. It is a practical tool that is already transforming the creative process from pre-visualization to post-production.


