A decade ago, the idea of hacking a car seemed about as feasible as downloading one. After all, cars were physical objects driven by people, with all their accompanying human flaws. Today, as artificial intelligence (AI) technology is making a future full of self-driving cars increasingly likely, hacking has become a serious potential concern. Intelligent vehicles have the potential to offer enormous safety benefits. However, fully utilizing their potential will require innovative new cybersecurity protocols, which may strain pre-existing regulatory frameworks. The stakes are high, though, since without effective regulation, consumers will likely shy away from autonomous vehicle technology.

“With intelligent vehicles, there is a promise of saving tens of thousands of lives each year in the U.S. alone; around the world it is probably in the hundreds of thousands. And yet, at the same time, fear and distrust will cause people to abandon these technologies,” said Beau Woods, a cyber safety innovation fellow at the Atlantic Council, who spoke at an event sponsored by George Mason University’s Antonin Scalia Law School and the R Street Institute.

Autonomous vehicles offer increased safety to users who are willing to step away from being “drivers” and to surrender control of their vehicles. This requires a high degree of public trust in the technology itself. As a result, cybersecurity will be nearly as important as crash testing for the new age of self-driving cars.

“In my estimation it is going to be a single fatality or very small number of fatalities that get people to say, ‘Wait, if I’m not driving that car, I’m not going to buy that,’” said Woods.

Although autonomous vehicles are still in the testing phases, they have the potential to dramatically affect both national security and the economy. According to a recent report by R Street, the market for connected cars, which interface with the internet and with other cars on the road to facilitate information sharing, is predicted to see dramatic growth in the coming decade. These cars, which are not necessarily fully autonomous, are expected to grow from 5.1 million units in 2015 to 37.7 million units by 2022, a more than sevenfold increase. Since more than 90 percent of crashes are attributed to human error, these shifts are predicted to save billions of dollars each year.

Like any other computer system, though, autonomous vehicles can be hacked.

Cybersecurity professionals warn that the threats posed by hackers may be different than many people expect. The recent film The Fate of the Furious included scenes in which fleets of hacked self-driving cars raced through the streets. In theory, this could happen, but hackers are more likely to target cars with ransomware, locking the ignition until a ransom is paid, or to mine them for the personally identifiable information they contain. Manufacturers are also exploring ways for cars to serve as payment systems, which could make them still more attractive targets.

“Cars are another example of where what we consider our identity is going to change,” said Bryson Bort, the founder and CEO of SCYTHE, a cybersecurity consulting firm. “Today we are a social security number, a date of birth, and a full name. In the future, it is going to be an amalgamation of information systems, cars being one of them, that become your digital footprint and digital identity.”

Bort believes that this will result in a more comprehensive digital identity, making cybersecurity all the more crucial.

“It’s going to be more than just payment,” he says. “It’s who I am.”

As a result, it will be critical for new autonomous vehicles to have both strong security standards and a means for those standards to be updated. Unlike most personal computers, cars routinely stay in service for decades, meaning manufacturers must plan for future modifications to combat threats that have not yet been imagined.

“Given the average lifespan of a car, things we put out today will be around until 2040,” said Woods, who described how state-of-the-art software originally developed by governments for espionage purposes has already leaked into the wider world of hackers. “In that time frame, will even low-level terrorists have access to these types of code?”

Combating these threats will require an attitude of continual improvement, and some manufacturers have already adopted it. Waymo, the self-driving division of Google parent Alphabet, is working on an autonomous vehicle that connects to the network only during limited windows, minimizing the opportunities hackers have to compromise the system.

The stakes are high in the nascent world of autonomous vehicles, particularly since it is unlikely that a company could survive the public relations backlash of a hacked car.

“The risk is a civil justice issue,” said David Strickland, former Administrator of the National Highway Traffic Safety Administration (NHTSA) and now counsel for the Self-Driving Coalition for Safer Streets. “People in this space are worried about a class action suit. If you get this wrong, it could be the end of your small, or large, company.”

Autonomous vehicles will require a new form of regulation, one operating at the intersection of traditional product liability and software liability. As a result, industry and government must work together to develop a framework both to promote cybersecurity and to mediate problems as they develop.

From a regulatory standpoint, Strickland says, both sides should be realistic about the speed of the current framework. Passing rules is a lengthy process with extended periods dedicated to comments and hearings. He argues that, as a result, consumers and the industry would benefit from a shift to a compliance metric, rather than a defect one.

Under this approach, manufacturers would be required to meet cybersecurity standards up front, rather than being fined or otherwise punished after a violation. This would protect consumers by obviating the need to wait for a crash or other negative outcome.

Another opportunity could come from industry openness, said Woods. He advocated for wider recognition of the fact that eventually all systems fail. Accepting this could make manufacturers more agile: better at anticipating or avoiding failures, more willing to take help from outside researchers, and quicker to isolate and contain problems when they do occur.

Research into autonomous vehicles is continuing, and the cars are likely not as far down the road as they might seem. Before American garages fill up with cars that drive themselves, however, both regulators and security professionals have their work cut out for them.

Follow Erin on Twitter.