Details
- Runway has launched early access to Gen-4.5, an upgraded version of its Gen-4 text-to-video model.
- Demo videos show objects moving with realistic weight and momentum, handling challenging zero-gravity effects that previously produced glitches.
- According to Runway, a redesigned physics engine, higher temporal resolution, and new diffusion techniques have reduced motion artifacts by 38 percent compared to Gen-4 in their tests.
- Gen-4.5 is accessible via Runway’s web editor and REST API, with Studio and Enterprise users receiving early trial credits and a broader release expected in Q1 2026.
- The model supports video clips up to 16 seconds with new features like variable frame rates, camera interpolation, and advanced color grading controls for post-production.
- Training for Gen-4.5 utilized mixed A100/H100 GPU clusters and a proprietary film dataset curated to strengthen copyright compliance ahead of stricter AI watermark regulations in the EU.
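Since clips are capped at 16 seconds and submitted over a REST API, clients would likely validate requests before sending them. Below is a minimal sketch of such client-side validation; the payload field names, model identifier, and limits are illustrative assumptions, not Runway's documented API.

```python
import json

# Assumption for illustration: the announced 16-second clip-length cap.
MAX_CLIP_SECONDS = 16

def build_generation_payload(prompt: str, duration_s: int, fps: int = 24) -> dict:
    """Assemble a JSON-serializable request body, enforcing the clip-length cap.

    All field names below are hypothetical; consult the actual API
    reference for the real request schema.
    """
    if not 1 <= duration_s <= MAX_CLIP_SECONDS:
        raise ValueError(f"duration_s must be between 1 and {MAX_CLIP_SECONDS} seconds")
    return {
        "model": "gen-4.5",      # hypothetical model identifier
        "prompt": prompt,
        "duration": duration_s,
        "frame_rate": fps,       # variable frame rates are among the listed features
    }

payload = build_generation_payload("astronaut drifting in zero gravity", 12)
print(json.dumps(payload))
```

Validating the duration locally avoids a round trip that the server would reject anyway, which matters when each generation request may consume trial credits.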
Impact
The Gen-4.5 release intensifies competitive pressure on rivals such as OpenAI’s Sora and Pika Labs, which have also advanced in physics-aware video generation. This leap in realism could significantly lower costs for indie filmmakers and advertisers by replacing expensive pre-visualization methods. However, as synthetic video quality improves, deepfake risks will prompt faster regulatory moves on watermarking and content provenance worldwide.
