Runway launches Aleph, a breakthrough AI model for advanced video editing
Runway has launched Aleph, a powerful in-context video model that lets users perform advanced edits on videos using text prompts. It can add or remove objects, change scenes, modify lighting, and generate new angles, pushing the boundaries of AI-based video editing.
Published Date - 30 July 2025, 05:00 PM
Hyderabad: Runway has unveiled Aleph, its latest in-context video generation model built for multi-task visual editing and manipulation. The AI-powered tool lets users transform, extend, or edit video content with simple text prompts, from adding or removing objects to generating new camera angles, styles, and lighting conditions.
Aleph lets creators modify existing videos by applying new aesthetics, generating the next shot in a sequence, altering characters’ appearances, or changing the time of day, location, or weather in footage. The model also introduces capabilities such as transferring motion from a video to a static image, isolating subjects for green screen work, and swapping environments or props within scenes, all with precise detail and natural integration.
Users can prompt Aleph with instructions like “add fireworks,” “make it dawn,” or “change the car into chariots with horses” to instantly generate hyper-realistic, contextually accurate outputs. With this release, Runway aims to democratize high-end post-production tasks that previously demanded significant time, skill, or resources.
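For readers curious what prompt-driven editing might look like programmatically, the sketch below shows a hypothetical request that pairs a source clip with a text instruction. The endpoint, field names, and response shape are illustrative placeholders only; they are not Runway’s published API.

```python
import requests

# Hypothetical endpoint and credentials, for illustration only.
# These are NOT Runway's actual API; consult Runway's developer
# documentation for the real interface.
API_URL = "https://api.example.com/v1/video-edits"
API_KEY = "YOUR_API_KEY"


def edit_video(video_url: str, prompt: str) -> str:
    """Submit a source video plus a text instruction and return the
    URL of the edited result (assumed request/response shape)."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"video_url": video_url, "prompt": prompt},
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["output_url"]


# Example instructions mirroring the prompts mentioned above.
print(edit_video("https://example.com/clip.mp4", "add fireworks"))
print(edit_video("https://example.com/clip.mp4", "make it dawn"))
```

The point of the sketch is simply that a single text instruction accompanies the footage; the model handles object insertion, relighting, or scene changes without manual masking or keyframing.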
Early access to Runway Aleph will roll out to Enterprise and Creative Partners, with wider availability coming soon.