Runway Expands AI World Models into Robotics and Self-Driving Cars for Cost-Effective Training Simulations
For over seven years, Runway has been building AI-powered visual-generation tools for the creative industry. Now the company sees a promising new application for its technology: robotics.
Based in New York City, Runway is renowned for its AI-driven world models that create a digital replica of reality, particularly in the realm of videos and photos. Notably, the firm launched Gen-4, its video-generating model, in March, followed by Runway Aleph, its video editing tool, in July.
As the realism of these world models improved, Runway began receiving inbound interest from robotics and autonomous vehicle companies looking to leverage the technology, Anastasis Germanidis, Runway’s co-founder and CTO, explained in an interview.
“Our technology’s ability to simulate the world has applications beyond entertainment, although entertainment remains a significant and expanding area for us,” said Germanidis. “This capability makes it more economical and efficient to train robotics policies that interact with the real world, whether in robotics or autonomous vehicles.”
Germanidis said that working with robotics and autonomous vehicle companies was not part of Runway’s plans when it was founded in 2018. But as companies in robotics and other industries approached it, Runway realized its models had uses beyond what it had initially envisioned.
Robotics firms are using Runway’s technology for training simulations, Germanidis said, adding that real-world training for robots and autonomous vehicles is costly, time-consuming, and difficult to scale.
While Runway acknowledges its models won’t replace real-world training entirely, Germanidis said companies can gain significant value from running simulations on them, because the models can test specific variables and scenarios while holding the rest of the scene constant.
“You can step back and then simulate the outcome of different actions,” he said. “For instance, if a car takes a certain turn or performs a specific action, what will be the result? Creating such simulations from the same context is challenging in the physical world, as it requires keeping all other aspects of the environment constant while only testing the effect of the desired action.”
Runway isn’t alone in pursuing this opportunity. For example, Nvidia recently unveiled an updated version of its Cosmos world models, along with other robot training infrastructure.
Runway doesn’t plan to release a separate line of models for its robotics and autonomous vehicle customers. Instead, the company will refine its existing models to better serve these industries and is building a dedicated robotics team. Germanidis added that although these sectors weren’t part of the original investor pitch, Runway’s backers support the expansion. The company has raised over $500 million from investors including Nvidia, Google, and General Atlantic at a $3 billion valuation.
“Our approach to the company is rooted in a principle rather than being market-driven,” Germanidis concluded. “That principle is the idea of simulation, building ever more powerful models of the world. Once we have these potent models, they can be applied across various industries and markets.”