Runway launches ground-breaking Gen-1 video generation AI system
Runway, one of the companies behind the text-to-image model Stable Diffusion, has just released Gen-1, a video generation AI system. As with Stable Diffusion, users provide text input, which the AI model uses to transform videos.
A short demonstration video, published on the company's official YouTube channel, shows how Gen-1 can turn a clip of people walking down the street into claymation puppets. A simple prompt, "claymation style", is all that is required for the transformation.
Later in the same video, Runway reveals that its video generation AI system accepts text and image input to create new video content from existing clips. Apart from direct transformations of video clips, Gen-1 supports what Runway calls Storyboard.
Storyboard turns mockups into animated renders; the video shows a stack of books transformed into a city skyline at night. Then there is Mask mode, which allows video editors to isolate objects in a video and modify them. The example this time shows how Gen-1 was used to add spots to a dog. The short clip also highlights a limitation, as the AI placed two of the spots directly on the dog's eyes.
Render mode turns untextured renders into realistic outputs, guided by text prompts or a provided image.
Finally, Customization mode allows users to customize the model for "even higher fidelity results".
You can watch the full video below:
Gen-1 by Runway
Several companies released text-to-video models in 2022: Meta unveiled Make-A-Video, while Google showed Phenaki and Muse. These systems create short video clips from a user's text input. Google's Dreamix, introduced last week, appears to be the technology most similar to Gen-1; like Runway's solution, Dreamix takes existing video content and applies new styles to it.
Judging from Runway's demonstration video, it appears that the company's Gen-1 model unlocks abilities that competing products lack. For one, Runway allows users to modify existing content and accepts both text and image inputs to do so. Runway claims that Gen-1's video output was "preferred over existing methods for image-to-image and video-to-video transitions" by more than 73% of study participants compared to Stable Diffusion 1.5 and 88% compared to Text2LIVE.
The company plans to reveal technical details on its website in the coming days. Only a few users have been invited to try Gen-1 so far; there is a waitlist, but it is unclear when the technology will be made available to more users.
Gen-1 takes existing video content and transforms it into new video content using text or image instructions. The technology unlocks new possibilities, not only in commercial environments, but also for hobby and home use. It is probably only a matter of time before similar tools are launched on popular video hosting and streaming websites.
A research paper, published on February 6, 2023, provides technical details for those interested.
Now You: What would you use Gen-1 for, if you had access?