U-M CEE to Help Lead Adoption of Autonomous Transportation Technologies Through DARPA Contest
Professor Henry Liu is the Principal Investigator of this interdisciplinary effort.
Accelerating the testing of autonomous transportation technologies is key to bringing their benefits to the wider public, and U-M CEE will engage with several U-M engineers in a new federal challenge to make that happen.
U-M engineers are competing in a Defense Advanced Research Projects Agency (DARPA) program that officially kicked off this year. The federal research agency awarded U-M $3.14 million in July to participate in the three-year effort, called the Transfer from Imprecise and Abstract Models to Autonomous Technologies (TIAMAT) program. It will involve multiple research institutions across the U.S. and be split into two 18-month competition segments.
The first segment began in September 2024, with U-M researchers focusing on coordinating and combining data from different kinds of simulations to rapidly expand the knowledge base supporting new technologies. In the second segment, researchers will see how well simulation-based training translates to the real world.
U-M’s participation brings together a diverse slate of research interests: civil and environmental engineering, mechanical engineering, naval architecture and marine engineering, and robotics.
Real-world testing of autonomous technologies is a long and costly process. Researchers rely on simulations to reduce both time and cost as they train the algorithms behind autonomous technologies.
For example, creating a real-world traffic scenario designed to challenge an autonomous vehicle algorithm with specific conditions could take months to get right. Simulations can set those conditions immediately and repeat the scenario to generate data.
Low-resolution simulation models lack the highest level of detail, but are faster and less costly to run. High-resolution simulations, with their increased detail, add time and cost to the process.
If those two sources of data can be effectively combined, the training process can be expedited, helping bring new technologies for cars, ships and drones to real-world use.
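To illustrate the idea (a minimal sketch, not the researchers' actual method), a model can be fit on both data sources at once, with the plentiful low-fidelity samples down-weighted relative to the scarce high-fidelity ones. All data below is synthetic, and the trust weights are assumptions:

# Minimal sketch: combine low- and high-fidelity simulation data
# by weighting samples in a least-squares fit.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: many cheap low-fidelity samples (noisy, biased)
# and a few expensive high-fidelity samples of the same scenario.
x_lo = rng.uniform(0, 1, size=(1000, 1))
y_lo = 3.0 * x_lo[:, 0] + 0.5 + rng.normal(0, 0.3, 1000)  # biased, noisy
x_hi = rng.uniform(0, 1, size=(50, 1))
y_hi = 3.0 * x_hi[:, 0] + rng.normal(0, 0.05, 50)         # accurate

# Stack both sources, down-weighting the low-fidelity data.
X = np.vstack([x_lo, x_hi])
y = np.concatenate([y_lo, y_hi])
w = np.concatenate([np.full(len(y_lo), 0.1),   # low trust: cheap data
                    np.full(len(y_hi), 1.0)])  # high trust: detailed data

# Weighted least squares: solve (X^T W X) beta = X^T W y.
Xb = np.hstack([X, np.ones((len(X), 1))])      # add intercept column
XtW = Xb.T * w                                 # X^T W via broadcasting
beta = np.linalg.solve(XtW @ Xb, XtW @ y)
print("fitted slope and intercept:", beta)

The weights reflect how much each data source is trusted; in practice that balance would itself have to be tuned or learned.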
“We’re looking to transfer autonomy either from one simulation model to another simulation model or from simulation to reality,” said Henry Liu, U-M professor of civil & environmental engineering and director of both Mcity and the Center for Connected and Automated Transportation. “Our goal is to develop AI models that are robust enough so that they can be used in different situations.”
For DARPA’s defense interests, the ability to change and transfer autonomous programming quickly is key. Last September, TIAMAT’s program director described a scenario in which a monitoring drone assigned to cover an urban area might be rerouted in response to a missile strike along the coast.
“We would want to quickly repurpose or transfer the autonomy of this platform from surveillance in an urban city to missile defense in a coastal environment,” said DARPA’s Alvaro Velasquez, Ph.D., the TIAMAT program director. “…The problems are that it can take months to learn the autonomy in the first place and that the transfer is actually very brutal because, of course, we don’t know how to model this perfectly.
“Our modeling and simulation environments are imprecise by definition, and so the current way of learning and transferring autonomy is antithetical to the kind of fast, robust transfer we need.”
Transferring autonomy from simulation to the real world, across what is known as the sim-to-real gap, will be the focus of the second 18-month segment of the competition, likely to begin in 2026.
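One common technique for narrowing that gap is domain randomization: varying the simulator's physical parameters during training so a policy does not overfit to any single imprecise model. The sketch below is illustrative only; the parameter ranges are assumptions, and make_env and run_episode_and_update_policy are hypothetical stand-ins rather than any team's code:

# Illustrative sketch of domain randomization for sim-to-real transfer.
import random

def sample_sim_params():
    """Draw simulator parameters from ranges assumed to bracket reality."""
    return {
        "friction":   random.uniform(0.5, 1.2),    # road/tire friction
        "mass_kg":    random.uniform(1200, 1800),  # vehicle mass
        "sensor_std": random.uniform(0.0, 0.1),    # lidar noise (meters)
        "latency_s":  random.uniform(0.0, 0.05),   # actuation delay
    }

def make_env(params):
    """Hypothetical stand-in for a simulator configured with params."""
    return params

def run_episode_and_update_policy(env):
    """Hypothetical stand-in for one rollout and policy update."""
    pass

def train(episodes=10_000):
    for _ in range(episodes):
        params = sample_sim_params()        # a new "world" each episode
        env = make_env(params)
        run_episode_and_update_policy(env)

train(episodes=100)

Because the policy never sees the same world twice, the imprecision of any one simulation model matters less; the real world becomes just another variation.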
“Humans can watch a YouTube video on anything, like how to swing a golf club, and transfer that information to our own biomechanics,” said Jason Corso, a U-M professor of robotics and electrical engineering and computer science. “It’s very difficult for AI systems to do that.”
Corso will be joined by Ram Vasudevan, a U-M associate professor of mechanical engineering and robotics.
For robots and autonomous vehicles (AVs) to see the world accurately, they must consistently associate objects such as trees and cars with the proper labels, said Maani Ghaffari Jadidi, a U-M assistant professor of naval architecture and marine engineering. Modern language models allow them to identify these objects the same way, whether they appear in a real or a simulated environment.
By building a stable understanding of the objects they detect, a robot or AV can make smarter decisions, even in new or unexpected situations.
“The resulting designed autonomy should be resilient to rapid and inevitable changes in dynamic environments and adaptable across many platforms and domains,” Jadidi said.
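A rough sketch of that idea: score each detected object against a fixed label vocabulary in a shared embedding space, so "tree" means the same thing whether the image comes from a simulator or a real camera. The random vectors below are stand-ins for what a vision-language encoder would produce, and every name here is hypothetical:

# Sketch: consistent object labeling via a shared embedding space.
import numpy as np

rng = np.random.default_rng(1)
LABELS = ["tree", "car", "pedestrian", "buoy"]

# Stand-in label embeddings; a real system would embed the label text
# with a vision-language model rather than sample random vectors.
label_embeddings = {name: rng.normal(size=64) for name in LABELS}

def embed_detection(image_patch):
    """Hypothetical image encoder; returns a random vector here."""
    return rng.normal(size=64)

def classify(image_patch):
    v = embed_detection(image_patch)
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    # Cosine similarity against every label; the highest score wins.
    scores = {name: cos(v, e) for name, e in label_embeddings.items()}
    return max(scores, key=scores.get)

print(classify("patch from simulator"))  # same pipeline for real imagery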