Most people know that flying is the safest mode of transportation, but we certainly don’t always feel this to be true. Although driving offers far more potential pitfalls than flying, many people aren’t as apprehensive when getting into their car and driving to work as they are when boarding a plane and then climbing to an altitude of 33,000 feet.
Yet, a heightened level of apprehension is a factor today where autonomous vehicles are concerned. As we engineer human error out of driving and improve sensor and artificial intelligence systems, autonomous vehicles will become safer than human-driven vehicles, but it will be some time before people trust them. Part of this is the novelty of the technology and people’s natural wariness of anything new, and part of it is the human tendency to fear unlikely but catastrophic events, such as a shark attack or a robot car deciding to kill its occupants. Much of the fear also comes from misreporting and overreporting.
On average, 3,287 people die in car crashes each day on roads worldwide. Each of those deaths is a tragedy, yet few make national news unless they involve someone famous. Any fatality involving an autonomous vehicle, however, will almost certainly make national news because the technology is still novel, so every piece of bad news related to autonomous vehicles is likely to be tremendously amplified. Does this mean it will be impossible to persuade most people to trust autonomous vehicles? No, but it will take a lot of work to build and maintain that trust, and security will be foundational to it.
The intersection of autonomous vehicles and cybersecurity is critical because safety, security, and trust are essentially inseparable. The relative insecurity of almost every connected thing is top of mind thanks to heightened awareness of the impact of cyberattacks and breaches, which are highlighted in the media and in popular culture depictions of nefarious hacking exploits. Getting security right from the outset is therefore essential if an organization wants to avoid becoming the subject of that kind of attention.
The modern vehicle is a massive mobile endpoint, and it comes with more than its share of security concerns. The open question is whether today's security solutions will translate well to autonomous vehicles, which are complex systems that require the same protections as any other network: firewalls, endpoint protection platforms (EPP), endpoint detection and response (EDR), data loss prevention (DLP), and more.
Effective cybersecurity efforts must ensure that these vehicles are protected against malware and takeover by bad actors, and that the array of components making up an autonomous vehicle works together harmoniously without introducing unanticipated vulnerabilities. That’s the challenge both the automotive and security industries face today, and nothing less than widespread market acceptance of autonomous vehicle technologies is at stake.
Autonomous Vehicle Adoption Challenges
As John Chen, Executive Chairman and Chief Executive Officer of BlackBerry, wrote in his column Mobility Explodes Opportunities for Automotive, Let’s Seize the Moment, published in the eBook The Road to Mobility, the global autonomous vehicle market is set to rise dramatically. The market reached $27.9 billion in 2017 and is expected to grow at a compound annual growth rate (CAGR) of nearly 42%, reaching $615 billion by 2026. Autonomous vehicles could account for 15% of global light vehicle sales by 2030.
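As a quick back-of-the-envelope check on those figures (not part of the original report), the implied growth rate from $27.9 billion in 2017 to $615 billion in 2026 works out to roughly 41% per year, in line with the cited "nearly 42%" CAGR:

```python
# Back-of-the-envelope check of the implied compound annual growth rate (CAGR)
# between the 2017 and 2026 market-size figures cited above.
start, end, years = 27.9, 615.0, 2026 - 2017  # USD billions over a 9-year span

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~41%, consistent with the cited figure
```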
The evolution toward autonomous mobility is certainly promising, but it isn’t guaranteed. Because autonomous vehicles are a new technology that requires a dramatic change in how people travel, any cyberattack that undermines their safety and security will amplify market fears, damage trust, and slow adoption dramatically.
Now consider the state of autonomous vehicle cybersecurity: a survey from the Ponemon Institute found that 62% of auto manufacturers believe autonomous vehicle software and related components face short-term risks from malicious attacks. Ensuring those attacks aren’t successful is critical for the industry, because even a handful of successful attacks could erode trust to a point from which it would take years to recover.
However, the news doesn’t look good. The same Ponemon Institute survey found that, when it comes to cybersecurity efforts, 84% of automakers and their suppliers aren’t confident they are keeping up. More worryingly, 30% said their organization has not established a cybersecurity program. What’s needed are cybersecurity efforts and industry regulations that get out in front of these challenges rather than lag behind them.
Parham Eftekhari, Executive Director at the Institute for Critical Infrastructure Technology (ICIT), and Drew Spaniel, Lead Researcher at ICIT, argued in the article Connected and Autonomous Vehicles: Policy, Performance and Peace of Mind (also found in The Road to Mobility) that an agreed-upon regulatory framework is needed to get out in front of the challenge. “Without meaningful regulatory oversight, autonomous vehicles risk being similarly developed without security controls sufficient to protect consumers from life-threatening risks,” they wrote.
“Most consumers lack the capacity to evaluate the security of the products that they purchase. Therefore, there is little external pressure for technology manufacturers to ensure that they develop products with layered security controls throughout the software development lifecycle,” they continued.
Effective industry or government regulation would certainly help put forward the right set of standards to help maintain the safety of autonomous vehicles. It could help establish the right software and hardware design controls, promote the best practices for software development and certification, and make certain such devices will be updated in a timely manner when patches are needed.
The Challenges of Effective Regulation
Still, developing effective regulations won’t be easy, and poorly conceived regulations can be a disincentive to innovation and even possibly incentivize the wrong activities. “One of the challenges in developing regulatory legislation and frameworks that rely on non-compliance penalties is that they often fail to encompass the scope of the risk regarding insecure software,” Eftekhari and Spaniel wrote. “This is because policymakers may lack a comprehensive understanding of cybersecurity best practices and the underlying technology being regulated.”
This is largely because of the complexity of modern software development, networking, and manufacturing processes; building autonomous vehicles involves all three. Eftekhari and Spaniel detailed just how many distinct disciplines autonomous vehicle security crosses:
- Supply chain security
- Secure coding practices
- Security-by-design throughout development
- Layered security throughout the hardware and software stack
- Threat intelligence sharing
- Consumer privacy protections
- System reliability and autonomy controls
- Manufacturer accountability
- Secure update procedures (see the sketch after this list)
- Penetration testing to reduce zero-day vulnerabilities
- Compliance with NIST and other best practice frameworks
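To make one of these items concrete, the sketch below shows the core of a secure update procedure: refusing to install firmware unless its signature verifies against a public key the vehicle already trusts. It is a minimal illustration using Python and the widely used cryptography library, with hypothetical file names and key handling, not a depiction of any specific manufacturer’s process.

```python
# Minimal sketch of a secure update check: verify a firmware image's signature
# against a trusted public key before allowing installation.
# File names and key handling are hypothetical, for illustration only.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def is_update_trusted(firmware: bytes, signature: bytes, public_key_bytes: bytes) -> bool:
    """Return True only if the firmware image was signed by the trusted key."""
    public_key = Ed25519PublicKey.from_public_bytes(public_key_bytes)
    try:
        public_key.verify(signature, firmware)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    # Hypothetical artifacts delivered with an over-the-air update.
    firmware = open("update.bin", "rb").read()
    signature = open("update.bin.sig", "rb").read()
    trusted_key = open("vendor_pubkey.raw", "rb").read()  # 32-byte Ed25519 public key

    if is_update_trusted(firmware, signature, trusted_key):
        print("Signature valid: update may be staged for installation.")
    else:
        print("Signature invalid: update rejected.")
```

In practice a production update chain layers far more on top of this (rollback protection, secure boot, key rotation), but signature verification before installation is the piece that keeps a compromised update server from becoming a fleet-wide compromise.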
Getting the regulations right will require considerable individual and collaborative work among diverse stakeholders in the private sector and government. After all, regulatory frameworks have a decidedly mixed track record of effectiveness, but we’ve learned quite a bit from previous attempts, and today we know what’s needed to succeed.
Regulatory Lessons Learned
Prior to PCI-DSS (the Payment Card Industry Data Security Standard), security at online retailers and in in-store point-of-sale systems was undeniably dreadful. PCI-DSS certainly improved retailers’ security, yet it did not stop credit card breaches. In healthcare, likewise, few would argue that HIPAA has dramatically improved the security of patient data.
What could be different when it comes to designing regulations for autonomous vehicles? How could this regulatory and security challenge be approached differently? One big difference is that we now have significant data and effective artificial intelligence and machine learning to analyze. The amount of data autonomous cars collect about themselves and the nature of the roadways is staggering. Thanks to such extensive data and machine learning, engineers will be able to understand the nature of these systems to a level that just wasn’t possible before.
As I wrote in my recent article, Security Confidence Through Artificial Intelligence and Machine Learning for Smart Mobility (also found in The Road to Mobility), the ability to dynamically route vehicles, manage rules, and ensure safe conduct can be achieved through supervised and unsupervised machine learning.
Combined with effective planning, scheduling, and optimization of autonomous vehicle operations, these techniques will help increase safety and security. With such insights, malware and anomalous behavior can be identified and corrected more readily, helping preserve the overall security of the vehicle.
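As a rough illustration of the unsupervised side of that idea, the sketch below trains an isolation forest on telemetry assumed to be normal and flags messages that deviate from it. The data is synthetic and the feature set is invented for the example; this is a generic anomaly detection pattern built with scikit-learn, not a description of any production smart mobility system.

```python
# Sketch: unsupervised anomaly detection over simplified vehicle telemetry.
# Uses synthetic data; a real system would ingest CAN bus or sensor features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Features per message: [speed_kph, brake_pressure, steering_angle_deg, msg_rate_hz]
normal_traffic = np.column_stack([
    rng.normal(60, 15, 5000),    # typical driving speeds
    rng.normal(20, 5, 5000),     # routine brake pressure
    rng.normal(0, 10, 5000),     # small steering corrections
    rng.normal(100, 10, 5000),   # expected message rate
])

# Train only on traffic assumed to be benign.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

# A suspicious burst: implausible speed plus a flooded message rate,
# the kind of pattern a spoofing or denial-of-service attempt might produce.
suspect = np.array([[220.0, 0.0, 45.0, 900.0]])
print(model.predict(suspect))             # -1 indicates an anomaly
print(model.predict(normal_traffic[:3]))  # mostly 1, i.e., normal
```

The design choice worth noting is that the model never needs labeled attack data; it learns what normal looks like and treats sufficiently unusual behavior as worth investigating, which suits threats that have not been seen before.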
The Takeaway
By vastly lowering the number of annual driving fatalities, reducing carbon emissions, and nearly eliminating street congestion, autonomous vehicles promise to dramatically improve the world. But to get there, consumers will need to trust the technology, and trust that it is resilient to malware, denial-of-service attacks, and the other kinds of attacks we have become so familiar with online. After all, it’s one thing for an attacker to take over your bank account; it’s quite another for an attacker to take control of your vehicle. Safety and security are key.