Details

  • Runway has released Gen-4.5, a cutting-edge text-to-video model previously known as “Whisper Thunder (David)”.
  • The model serves as the company’s new world-modeling engine, with improved motion accuracy, prompt adherence, and visual quality.
  • Gen-4.5 scored 1,247 Elo on the Artificial Analysis Text-to-Video benchmark, surpassing Runway’s earlier Gen-4 model.
  • Users can now incorporate advanced instructions—including camera moves, scene composition, timed cues, and shifts in atmosphere—directly within a single prompt.
  • Training and optimization were carried out entirely on NVIDIA GPUs in collaboration with NVIDIA, making both model development and inference more efficient.
  • Physical realism is notably improved: objects move with believable weight and surfaces respond realistically, while users retain the ability to override physics if desired.
  • Rollout begins today, with web app and API access coming to all subscribers soon; the launch arrives two years after Runway’s Gen-1 debut.

Impact

This launch raises the bar for rivals such as OpenAI’s Sora, Google’s VideoPoet, and Luma Labs on frame-to-frame realism in AI video. Improved prompt fidelity could speed animation and ad production for smaller studios, while Runway's collaboration with NVIDIA underscores the chipmaker's growing influence over generative AI amid tight GPU supply. The technology’s growing sophistication reflects an industry-wide shift toward more dynamic, temporally consistent video generation, just as regulators weigh new rules for synthetic media ahead of major elections.