Monday, 28 September 2015



Highway to hack: why we’re just at the beginning of the auto-hacking era

A slew of recently revealed exploits shows gaps in carmakers' security fit and finish.


A display by the cybersecurity firm Grimm used in a tutorial at DEF CON 23's Car Hacking Village at Bally's Hotel in Las Vegas.
Sean Gallagher

Imagine it’s 1995, and you’re about to put your company’s office on the Internet. Your security has been solid in the past—you’ve banned people from bringing floppies to work with games, you’ve installed virus scanners, and you run file server backups every night. So, you set up the Internet router and give everyone TCP/IP addresses. It’s not like you’re NASA or the Pentagon or something, so what could go wrong?

That, in essence, is the security posture of many modern automobiles—a network of sensors and controllers that have been tuned to perform flawlessly under normal use, with little more than a firewall (or in some cases, not even that) protecting it from attack once connected to the big, bad Internet world. This month, at three separate security conferences, five sets of researchers presented proof-of-concept attacks on vehicles from multiple manufacturers, plus an add-on device that spies on drivers for insurance companies. The attacks took advantage of always-on cellular connectivity and other wireless vehicle communications to defeat security measures and gain access to vehicles. In three cases, the researchers reached the car's internal network in ways that allowed frightening degrees of remote control.

While the automakers and telematics vendors with targeted products were largely receptive to this work—in most cases, they deployed fixes immediately that patched the attack paths found—not everything is happy in auto land. Not all of the vehicles that might be vulnerable (including vehicles equipped with the Mobile Devices telematics dongle) can be patched easily. Fiat Chrysler suffered a dramatic stock price drop when video of a Jeep Cherokee exploit (and information that the bug could affect more than a million vehicles) triggered a large-scale recall of Jeep and Dodge vehicles.

And all this has played out as the auto industry as a whole struggles to understand security researchers and their approach to disclosure—some automakers feel like they’re the victim of a hit-and-run. The industry's insular culture and traditional approach to safety have kept most from collaborating with outside researchers, and their default response to disclosures of security threats has been to make it harder for researchers to work with them. In some cases, car companies have even sued researchers to shut them up.
Sticker shock

In contrast, Tesla has embraced a coordinated disclosure policy. The company recently announced a vehicle security bug bounty program that offers $10,000 for reproducible security vulnerabilities. Tesla even participated in the presentation of vulnerabilities discovered by outside researchers in the Tesla S's systems at DEF CON. The company's chief technology officer JB Straubel appeared on stage with the researchers who performed the penetration test of the Tesla S—Marc Rogers of Cloudflare and Lookout Security CTO and co-founder Kevin Mahaffey—in order to present them with Tesla "challenge coins" for their work.

But no one from Fiat Chrysler was anywhere near the stage when Charlie Miller and Chris Valasek presented their findings on Uconnect. And it might be a while before any other carmaker makes a move to embrace the security community in the wake of the Chrysler recall.

It's not like Miller and Valasek caught Fiat Chrysler by surprise. Miller told Ars that he worked with Fiat Chrysler throughout his many months of research, advising the company of what he and Valasek found. Fiat Chrysler had already issued a patch to fix the problems, but only as a voluntary update to be installed over USB. Sprint moved to block remote access to the network connection that Miller and Valasek's attack exploited just before the pair revealed their research at Black Hat.

Tesla CTO JB Straubel (center) speaks after Marc Rogers of Cloudflare (left) and Lookout Security CTO and co-founder Kevin Mahaffey (right) presented their research on the Tesla S at DEF CON.

Still, it wasn't until after Wired published video of reporter Andy Greenberg in the driver's seat on an interstate highway, reacting as the vehicle's throttle was remotely taken over, that Chrysler issued a recall on over a million affected vehicles. Miller said that the demonstration for Wired was completely safe. "It wasn't nearly as bad as the Wired video made it look," he said, explaining that what he and Valasek had done to Greenberg was no different from what any driver would experience during a typical breakdown. Greenberg still had control of the wheel and limited acceleration, according to Miller, and the reporter would have been able to maneuver to a shoulder. But even if things looked a tad overdramatic, Miller felt the highway demonstration was needed to make the problem real to the American public and to Chrysler. After all, other researchers funded by DARPA (the same program that had funded previous research by Miller and Valasek) had demonstrated the same sort of attack for 60 Minutes only a few months earlier, with reporter Leslie Stahl driving on a closed course in a parking lot. That time, however, the brand of the car was concealed. "People couldn't relate it to real life," Miller said.

Beyond awareness, the video of the researchers shouting "You're doomed!" at Greenberg as they took remote control may have other consequences. At least two automakers that had planned to announce new initiatives to cooperate with security researchers at DEF CON 23 pulled back in the wake of the Uconnect disclosure, according to Joshua Corman, one of the founders of I Am The Cavalry, a grassroots organization focused on cybersecurity and public safety that lobbies automakers to adopt better software security practices. Corman said the news coverage triggered intervention by company attorneys who saw the Wired video as a "reckless stunt."

"Right now people are cheering [Miller and Valasek] as heroes," Corman told Ars. "But what they don't see are the hidden costs that have been paid. Right now it could just set us back for a little while, or it could set us back permanently."

No one at Ford, GM, or Chrysler would talk with Ars about their strategies for uncovering potential security issues in software that could be used for "cyber-physical" attacks—hacks that could have impact in the physical world by interfering with the operation of cars. Ford would only provide the following statement:

Ford takes cyber security very seriously. We invest in security solutions that are built into the product from the outset. We are not aware of any instance in which a Ford vehicle was infiltrated or compromised in the field. Our cyber security team routinely monitors, investigates, resolves concerns, and works to mitigate threats. Our communications and entertainment systems feature a different architecture than what was hacked, but we are interested in learning more about the Chrysler Uconnect, GM Onstar, and Tesla issues and whether there are additional enhancements we can make in our vehicles. Our security team has developed hardware and software safeguards as well as specific processes to help mitigate remote access risks in all our vehicles, whether they feature embedded cellular connections or not.

For his part, Miller said he's done with car hacking for now. He achieved his goals, but there are plenty of other security researchers in line to try to help the industry. Corman believes automakers need to work with them because the number of potential security bugs is only going to grow as vehicles continue to add software-based functionality and connectivity to the Internet.

Crumple zones
The wireless attack surfaces of a typical late-model "connected" car. Your hackage may vary.
Sean Gallagher

It used to be that car hacking basically meant voiding the factory warranty. As anyone who’s watched enough Top Gear knows, car enthusiasts and “tuners” have used electronic hacks such as “launch control” devices to bypass environmentally friendly settings in engine control systems for some time. Making these kinds of changes can, if you don’t know what you’re doing, “brick” your engine (or worse). But that’s not the sort of thing the general driving public has had to worry about.

The “attack surfaces” of cars that get the most attention are the ones designed to keep people from driving away with cars they don’t own: electronic keyless entry systems and locks, for example, and vehicle immobilizers that use low-power radio to detect the presence of a valid car key before allowing a car to start. Both types of systems, which use cryptographic keys transmitted by radio from a key or key fob, have been targeted by researchers. Engine immobilizers for a number of luxury brands were successfully attacked as part of a study by researchers at Radboud University (one that Volkswagen’s lawyers suppressed for two years). Remote keyless entry systems have also been targeted in a number of ways, including signal amplification attacks and brute-force crypto breaking (as detailed in research by Qualys’ Silvio Cesare).

There are still areas of potential radio hacking that haven’t been fully explored. For example, tire pressure monitoring systems use radio communications to warn of low tire pressure. Some commercial vehicles use remote automatic tire inflation systems, activated by pressure sensors, that communicate wirelessly. An attacker who successfully fooled these systems could potentially trick a driver into pulling off the road, or even blow out the tires on a trailer. (Though because of the design of some of these systems, a blow-out seems unlikely.)

Three of the exploits discussed at conferences this month focused simply on gaining access to vehicles. As Ars reported last week, Dutch researchers were finally able to present the (almost) full findings of their research on defeating the engine immobilizer systems used in cars from Volkswagen, its luxury brands, and other automakers at USENIX Security in Washington. At DEF CON, Samy Kamkar unveiled two potential attacks on auto security. One, called "RollJam," targeted remote keyless entry systems by performing a type of man-in-the-middle attack against the rolling codes those systems use. By jamming the vehicle receiver so it never hears the fob's signal, the RollJam device can record the fob's authentication attempts and then rebroadcast the first of them to unlock the car, keeping a second, still-unused code in reserve.
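Conceptually, jam-and-replay works because rolling-code receivers accept any code within a look-ahead window. The toy simulation below illustrates the idea; the HMAC-derived codes and 16-press window are stand-ins for whatever scheme real fobs use, not Kamkar's actual implementation:

```python
import hmac, hashlib

def rolling_code(secret: bytes, counter: int) -> str:
    # Each button press derives a fresh one-time code from a shared secret
    return hmac.new(secret, counter.to_bytes(4, "big"), hashlib.sha256).hexdigest()[:8]

class Receiver:
    """Car-side receiver: accepts any code within a look-ahead window."""
    def __init__(self, secret: bytes, window: int = 16):
        self.secret, self.counter, self.window = secret, 0, window

    def try_unlock(self, code: str) -> bool:
        for step in range(1, self.window + 1):
            if code == rolling_code(self.secret, self.counter + step):
                self.counter += step   # this and all older codes are now spent
                return True
        return False

# Jam-and-replay: the attacker jams the car so it never hears press #1 or
# press #2, records both, then replays press #1. The car accepts it (still
# in the window), and the attacker keeps press #2 for later.
secret = b"fob-secret"
press1 = rolling_code(secret, 1)
press2 = rolling_code(secret, 2)

car = Receiver(secret)
assert car.try_unlock(press1)       # replayed first press: accepted
assert car.try_unlock(press2)       # saved second press still valid later
assert not car.try_unlock(press1)   # but each code works only once
```

The attack succeeds without breaking the cryptography at all; it exploits the gap between codes the fob has sent and codes the car has heard.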

The second vulnerability Kamkar revealed was not in the vehicles themselves but in the mobile apps that connect to car makers' telematics services. Kamkar showed he could capture data from the RemoteLink iOS mobile app for General Motors' OnStar service, allowing him to track, open, and remote start some GM vehicles. This past week, Kamkar announced that he had been able to use the same sort of attack on mobile apps for BMW, Mercedes-Benz, and Fiat Chrysler cars. The cloud services behind these telematics systems remain a potentially rich source of vulnerabilities that hackers could attack, especially as they get tied to third-party services.

Most of these potential attacks require at least initial proximity to pull off, and they pose relatively little threat to cars while they're being driven. But connected car services such as GM’s OnStar, Fiat Chrysler’s Uconnect, and Ford’s Sync, plus add-on services such as those based on Mobile Devices’ C4 OBD2 “dongle,” greatly extend the range of a potential attack—especially if the attacker’s goal is to do damage by interfering with the driver’s ability to operate the vehicle. In some cases, as Miller and Valasek demonstrated with the Jeep Cherokee, that can mean gaining enough access to control even the brakes and the steering wheel.

These are just the attack approaches being tried now. Corman said he believes, as In-Q-Tel chief information security officer Dan Geer has suggested, that "bugs are dense"—meaning there are sure to be a certain number of potentially exploitable defects in every thousand lines of code. "The total number of bugs will go up as the total number of lines of code goes up," he said. "The total number of access points to that exploitable code goes up as the number of devices on the network goes up. And the total number of adversaries goes up because now we've taken car hacks from theoretical to demonstrable." Car companies, Corman said, have to be prepared for software failures, because it's not a question of if they will happen but when. The more important question becomes how car makers will respond.
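The arithmetic behind Corman's point is simple: if defect density per thousand lines is roughly constant, exposure scales linearly with code size. The densities below are illustrative assumptions, not measured figures; the ~100 million lines figure is a commonly cited estimate for a modern car:

```python
# Back-of-the-envelope defect estimate: defects scale with code size
# at an assumed (illustrative) density per thousand lines of code.
def expected_defects(lines_of_code: int, defects_per_kloc: float) -> float:
    return lines_of_code / 1000 * defects_per_kloc

car_loc = 100_000_000   # often-cited rough size of a modern car's codebase
for density in (0.1, 1.0, 5.0):   # assumed defects per KLOC
    print(f"{density:>4} defects/KLOC -> "
          f"{expected_defects(car_loc, density):,.0f} latent defects")
```

Even at the optimistic end of that range, the count is in the tens of thousands, which is the substance of "bugs are dense."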

Telematics systems like Uconnect, OnStar, and Sync continue to add features in an attempt to make them more attractive to car owners (and keep them locked into a continuing revenue stream for carmakers). Combine those cloud-driven services with the increased addition of automated systems in vehicles driven by sensors—collision avoidance systems, adaptive cruise control, anti-lock braking, anti-theft features, and increasingly sophisticated remote diagnostics to name a few—and the network effect of a vulnerable remote connection to a vehicle increases the odds that something can be hacked. Given that many car owners never even respond to recalls on things like vehicle software, such vulnerabilities could live on for as long as those cars are on the road.
Opening the CAN

Nearly every car built since the Reagan administration has at least one onboard network: the Controller Area Network architecture, first developed by engineers at Bosch in the early 1980s. It’s been nearly 30 years since the automotive industry embraced CAN as a standard, with the first models using the microcontroller network arriving in 1988 starting with BMW’s 8 Series. Since then, CAN has gone through several revisions, and it has been picked up as a standard for embedded systems in industrial automation applications from building controls to medical equipment because of its “plug and play” nature.

While automakers initially went their own way on CAN protocols, any vehicle built after 2007 uses a common configuration based on the International Organization for Standardization’s ISO 11898-1. That means that suppliers can build systems for multiple automakers, and it’s lowered the cost of CAN controller chips.

In the automotive domain, not everything in a car necessarily rides on the same CAN bus. Most of the critical components of the car’s operation—engine controller, ignition, anti-lock brakes, primary instrumentation, and even steering—are usually on a network separate from the one that handles accessories like the radio and entertainment center, power windows and doors, and so on. In the 2014 Jeep Cherokee used in the research by Miller and Valasek, the two separate CAN networks were referred to as CAN-C (as in "Classic") and CAN IHS (Interior High Speed). CAN IHS handled the radio and "comfort" features of the vehicle.
A diagram of the CAN networks and connectivity of the 2014 Jeep Cherokee, as presented by Charlie Miller and Chris Valasek.

In cars where information from the first CAN has to be communicated to the second, automakers have generally tried to make it read-only—so that nothing that gets plugged into your radio could, say, start sending CAN bus signals to your engine controller and theoretically make your engine explode.
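In the ideal case, that "read-only" gateway amounts to a one-way whitelist: a handful of broadcast IDs flow from the critical bus to the accessory bus so the radio can show speed, and nothing flows back. A minimal sketch (the message IDs here are invented for illustration):

```python
# One-way gateway sketch: frames may flow critical -> accessory only if
# whitelisted; nothing is ever forwarded accessory -> critical.
CRITICAL_TO_ACCESSORY = {0x3D9, 0x2E1}   # e.g. speed and fuel-level broadcasts

def gateway_forward(frame_id: int, direction: str) -> bool:
    """Return True if the gateway should pass this frame through."""
    if direction == "critical->accessory":
        return frame_id in CRITICAL_TO_ACCESSORY
    # accessory->critical traffic is dropped entirely
    return False

assert gateway_forward(0x3D9, "critical->accessory")       # speed readout: OK
assert not gateway_forward(0x3D9, "accessory->critical")   # injection: blocked
assert not gateway_forward(0x123, "critical->accessory")   # unlisted ID: blocked
```

As the Jeep research later showed, this protection is only as strong as the gateway chip enforcing it: Miller and Valasek bypassed it not by defeating the filter but by reflashing the gateway's own firmware.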

Before people started making “connected” cars, this was pretty much not a problem. And even early “connected” applications, like GM’s OnStar service, only offered very controlled and limited access to vehicle systems, sending sensor data back to OnStar’s data center and providing basic remote diagnostics.

But when the auto industry started introducing in-car, always-on connectivity as a feature, automakers also introduced an opportunity for some emergent behaviors—and for people outside the car to start acting directly on the vehicle. Demand for features like remote kill-switches for law enforcement to use to stop car thieves and auto-parking capabilities started to whittle away at that firewall around the primary CAN network.

And then there’s the On-board Diagnostics II (OBD II) port. Since 1996, thanks to the Environmental Protection Agency’s enforcement of the Clean Air Act Amendments, every gasoline-powered car and light truck has been required to have an OBD II port for emissions testing. To extract engine diagnostic data, the port allows devices to jack directly into the CAN bus—in other words, all you have to do is plug into a port near the driver’s side door under the dashboard, and you’ve entered the vehicle’s inner sanctum.
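The port speaks the standard OBD II diagnostic protocol (SAE J1979), in which queries and replies are ordinary CAN frames. A request for engine RPM, for instance, goes out on the functional broadcast ID 0x7DF, and the reply decodes with simple arithmetic:

```python
# Standard OBD-II engine-RPM query (SAE J1979): request ID 0x7DF with
# payload [length, mode 0x01, PID 0x0C]; the ECU answers on 0x7E8 with
# [length, 0x41, 0x0C, A, B], where RPM = (256*A + B) / 4.
def build_rpm_request() -> tuple:
    return 0x7DF, bytes([0x02, 0x01, 0x0C, 0, 0, 0, 0, 0])

def decode_rpm_response(data: bytes) -> float:
    assert data[1] == 0x41 and data[2] == 0x0C, "not an RPM response"
    a, b = data[3], data[4]
    return (256 * a + b) / 4

can_id, payload = build_rpm_request()
assert can_id == 0x7DF
# An ECU reporting bytes 0x1A 0xF8 is turning at 1726 rpm:
assert decode_rpm_response(bytes([0x04, 0x41, 0x0C, 0x1A, 0xF8])) == 1726.0
```

This is the benign, read-only use of the port; the trouble described below comes from the fact that the same physical connection also accepts writes.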

OBD II dongles have become a common way to tap into car data, since they can push that info to mobile apps over Bluetooth connections. Telematic control units (TCUs), such as those made by Mobile Devices, have a 2G or 3G cellular modem built in to send telemetry data back to a supervising party (corporate fleet managers, insurers, or Uber, for example) to track how a car is being driven. These devices can also respond to crash sensors by sending automatic alerts to 911.

But because it taps right into the CAN bus, the OBD port can be used to send CAN messages as well. Car “tuners” interested in cranking up the performance of vehicles (often in complete disregard of EPA standards) have taken advantage of that ability, using the OBD II port to make adjustments to the engine’s programming. Normally, that wouldn't be a problem—except TCU devices have that cellular network connection.

"In spite of the fact that most aftermarket TCUs are designed for monitoring only, CAN is a multi-master bus and thus any device with a CAN transceiver is able to send messages as well as receive," wrote University of California San Diego researchers Ian Foster, Andrew Prudhomme, Karl Koscher, and Stefan Savage in the paper they presented at Usenix Security recently. "This presents a key security problem since as we and others have shown, transmit access to the CAN bus is frequently sufficient to obtain arbitrary control over all key vehicular systems (including throttle and brakes)."

UCSD researchers exploiting the Mobile Devices OBD II dongle used by MetroMile Insurance and Uber.

The root of the matter

CAN's open model is fine when all the things on its network are model citizens. And much of the engineering work done by automakers focuses on making sure that the devices in the car are exactly that—that they perform precisely as designed under expected conditions. The trouble starts when something on the CAN starts doing unexpected things like masquerading as other devices on the bus and issuing control commands. The hacks of the Tesla S, the Jeep Cherokee's Uconnect system, and the Mobile Devices OBD II device all managed to send commands to the CAN bus.

In the case of the Tesla S, gaining remote access took physical access to the vehicle first. Marc Rogers of Cloudflare and Lookout Security CTO and co-founder Kevin Mahaffey had to disassemble the Tesla's head unit to reach an Ethernet port on its board, which gave them access to the Tesla's CAN. It also took cracking passwords in a (plaintext) shadow file on a memory card and stealing rolling Tesla security tokens to get root-level access. But the head unit controllers in both the Jeep and the Tesla, and the microcontroller in the Mobile Devices dongle, all gave up root access or its equivalent with surprisingly little resistance once reached.

The Miller-Valasek proof-of-concept attack on the Jeep Cherokee and UCSD's exploit of a Mobile Devices-equipped Corvette, however, required no physical access at all. The attacks could be mounted over cellular data networks without ever touching the hardware (and, in the case of the Mobile Devices dongle, even via SMS text messages). In many ways, the Mobile Devices hack was the more dangerous of the two, because it could work across multiple makes and models of automobile (just about any recent vehicle with anti-lock brakes and other computer-driven features), and it didn't require doing anything an observant vehicle owner might notice.
Hacking the CAN: three examples

Tesla S (Rogers and Mahaffey)
Type of access gained: Physical. The researchers found an Ethernet port built into the instrument cluster and central information display, and plugged in their own Ethernet switch to connect to the in-car network.
Discovery of target: Physical access/break-in.
Level of access: Root on the instrument panel, but no access to the CAN other than via Vehicle Application Programming Interface (VAPI) calls to a gateway. The researchers were able to extract a shadow password file and a Tesla security token through flash memory and firmware analysis.
Physical effects possible with exploit: Interfering with the instrument display by exploiting the X11 protocol, and remote shutdown of the vehicle using VAPI calls. At speed, remote shutdown warned the driver and shifted the car into neutral, allowing the driver to brake and steer until speed was lowered; at under 5 miles per hour, the parking brake engaged and the car shut down.

Jeep Cherokee (Miller and Valasek)
Type of access gained: Remote. A TCP/IP cellular data connection to the Uconnect "head unit" over port 6667 provided the ability to access scripts that could execute arbitrary remote code. Access to the CAN was blocked by a gateway chip running separate code.
Discovery of target: Port scan over the Sprint mobile network for port 6667 (now disabled); the VIN of a vehicle could be queried in an automated scan.
Level of access: Root-level access to the QNX operating system and Uconnect's D-Bus services daemon via the cellular data connection (Fiat Chrysler's implementation of D-Bus lacked authentication). Through a fake software update, the researchers "reflashed" the gateway to the CAN and gained the ability to send direct messages to it.
Physical effects possible with exploit: A crafted remote update that reflashed the firmware of the chip acting as gatekeeper to the primary CAN allowed jumping the "airgap" between the systems, sending CAN messages that turned on the wipers and sprayed washer fluid, turned on the air conditioning, activated or disabled the brakes, killed the throttle, and controlled steering while in reverse (exploiting the vehicle's "parking assist" features). The researchers also took control of the radio and display.

Corvette with Mobile Devices dongle (UCSD researchers)
Type of access gained: Remote. TCP/IP access was limited by the device's built-in NAT firewall, but administrative access was available via SMS messages instructing the device to download an update from an arbitrary server. The remote secure shell key was the same for all devices, and updates were delivered over secure file transfer (SCP) using those credentials, without any further authentication of packages.
Discovery of target: Network fingerprint search for devices running a specific SSH server, and "war dialer" SMS messages of administrative commands sent to cellular numbers in the 566 area code (reserved for "personal communication services").
Level of access: Root-level access through SMS-driven remote installation of new software, establishing a reverse shell connection. Direct access to the CAN network was achieved.
Physical effects possible with exploit: The reverse shell allowed execution of arbitrary CAN bus commands, similar to the Uconnect proof of concept: turning the brakes on or off and turning the wipers on and off.

The Uconnect hack by Miller and Valasek and the vulnerabilities in the Mobile Devices dongle demonstrated by Foster and the other UCSD researchers weren't caused by bugs in the software behind them, per se, but rather by poor configuration of the basic services they needed to operate. These issues would likely never be uncovered by the auto industry's traditional approach to software testing, which focuses on how code behaves when used as intended.

In the case of the Jeep hack, Miller and Valasek discovered that the software acting as the communications broker for the vehicle's accessories network (CAN-IHS) was accessible through every network interface of the vehicle, and none of those interfaces used the authentication the software supports. As a result, anyone who could connect to the car's Wi-Fi or cellular network could essentially gain root access to the Uconnect system. That software, D-Bus, is an inter-process communications "bus" that allows separate software components to communicate with each other. While its original purpose was to handle communications between applications running on a single computer, it can also be used to make remote procedure calls, allowing other systems to execute code remotely.

The current version of D-Bus supports multiple security features, including authentication systems and the fine-grained controls of AppArmor. However, the version implemented in the Harman-built Uconnect head unit had no authentication turned on. As a result, all of the programming interfaces on the system were wide open to remote attack. And while the vehicle's primary CAN bus was reachable only through a read-only gateway, Miller and Valasek were able to gain access to that gateway's firmware and craft a malicious "update" once they controlled the Uconnect system. That update gave them direct access to the CAN-C bus and to features exposed by some of the Cherokee's computer-driven systems:
Adaptive cruise control exposed programmatic access to the Jeep's throttle, allowing Miller and Valasek to essentially lock out the gas pedal.
Anti-lock braking offered access to brake control, allowing the researchers to turn brakes all-on or all-off.
Parking assist, the auto-parallel and perpendicular parking feature, exposed control of the steering wheel (but only while the vehicle was in reverse).

Foster, Prudhomme, Koscher, and Savage found that their attempts to connect to the Mobile Devices dongle over its cellular data interface were initially foiled by a network address translation (NAT) firewall in the device's software. But that obstacle fell fairly easily when they discovered that the device's administrative interface for over-the-air updates was driven not by TCP/IP communications but by SMS text messages. They were able to redirect an over-the-air update to point the device at their own rogue server. From there, they installed a reverse SSH shell from the device back to the server, which gave them a direct interface to the ARM-based system running on the dongle.
A diagram of the exploit of the Mobile Devices OBD II dongle by UCSD researchers—establishing a reverse remote shell session to attack the vehicle's CAN bus.
Ian Foster, Andrew Prudhomme, Karl Koscher, and Stefan Savage
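The underlying lesson of the dongle attack is that an update channel needs package authentication, not just a transport credential that every device shares. A minimal verify-before-flash sketch follows; real systems would use asymmetric signatures rather than the shared HMAC key shown here, and the key and images are illustrative:

```python
import hashlib, hmac

# The dongle's failure mode: updates fetched with a shared SSH key and no
# authentication of the update package itself. A minimal fix is to sign
# update images and verify before flashing.
SIGNING_KEY = b"vendor-signing-key"   # would live in protected storage

def sign_image(image: bytes) -> str:
    return hmac.new(SIGNING_KEY, image, hashlib.sha256).hexdigest()

def install_update(image: bytes, signature: str) -> bool:
    # Reject any image whose signature doesn't verify: a rogue update
    # server (as in the UCSD attack) can no longer push arbitrary code.
    if not hmac.compare_digest(sign_image(image), signature):
        return False
    return True   # ...proceed to flash

firmware = b"legitimate firmware image"
good_sig = sign_image(firmware)
assert install_update(firmware, good_sig)
assert not install_update(b"attacker payload", good_sig)
```

With verification in place, redirecting the device to an attacker's server yields only rejected images; without it, the transport is the only defense.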

The good news in both the Chrysler and Mobile Devices cases is that the problems were at least partially fixed by sending over-the-air updates to the telematics systems. The bad news is that over-the-air updates can't fix potential problems in millions of other vehicles—problems that may sit in systems that can't be remotely updated, or updated at all. And those problems are more likely to be uncovered by outsiders than by the automakers, particularly because they could come from anywhere in the industry's supply chain.

"While Tesla can upgrade its head unit, I'm not sure they can update other attack surfaces," Corman said. "So it's actually much worse than it looks. Right now, the systems in cars are very fragmented, and even a company that can over-the-air update is vulnerable—it doesn't mean they can OTA update everything that could be attacked."

For example, the software update to Fiat Chrysler's Uconnect system may have mitigated some of the issues around remote cellular attacks, but other systems built by Chrysler's suppliers are not tied directly to the Uconnect head unit. Those components, which sit on the CAN-C bus, can't be updated through the system as it's currently designed. Making a vehicle entirely updatable over the air would require a fundamental redesign of its communications systems, along with a much stronger level of trust before letting those systems touch everything onboard.

Ironically, some car companies have embraced car hackers of a different sort while shutting out security researchers. Some automakers have given developers tools to tap into the CAN bus to prototype new applications, or APIs to hook remotely into telematics. Ford has even open-sourced some of its "infotainment" center technology. But when it comes to people who want to help improve the security of their vehicles' systems, car manufacturers have been reluctant to come to the table. Rather than figuring out ways to work with security researchers to find and fix holes, Corman said, the automakers "are seeing this more as a hacker management problem." They essentially hope that by deterring security researchers from investigating their systems, they can keep potential vulnerabilities hidden.

That's not going to help, according to Corman. "We're going to have a lot of car hacking," he said. "It's just not going to be your normal script kiddies or other criminals. I want car makers to be prepared for that, and ready to respond when it happens."
Avoiding the "antibody" response

Even the most professional software companies with mature security programs have problems with their software. Microsoft's security process, which Corman points to as an example, is one of the best-managed in the software industry—and yet the company still has to patch dozens of vulnerabilities monthly. But in the auto industry, engineers often oppose having any way of patching bugs at all. "There's a school of thought in industrial controls that you should never update software—that making things updatable makes them more vulnerable," Corman said. "That's incredibly naive in a hyper-connected world. What they haven't paid attention to on the extreme end of things is that when you use something like OpenSSL or Bash that is vulnerable and you can't update it, that makes it far worse. If they choose not to be updateable, it's a permanent vulnerability, or you have to throw the thing away. Yes, having remote updates does introduce a small attack surface and more complexity, but it's well worth it because of the agility and flexibility of response. If it can only be updated manually, it will be less comprehensive and slower, and you can't reach all of the affected vehicles."

Before forming I Am the Cavalry, Corman looked at previous attempts by people in the security industry to engage automakers, "and they all had failed," he said. "I talked with (DEF CON and Black Hat founder) Jeff Moss about it, and he said, 'You have to do it in such a way that you don't deploy antibodies—if you do too much, you catalyze retaliation.'" So Corman and his colleagues have tried to engage the industry carefully in the hopes of walking them toward better security practices.

Part of that approach is the "Five Star" automotive cybersecurity program—an effort outlined in an open letter sent a year ago to automakers. The document urged them to adopt a set of metrics that would let consumers see what measures carmakers had taken to protect against cyber-attacks on their vehicles. The five "stars" are:
1. Safety by Design—Do you have a published attestation of your Secure Software Development Lifecycle, summarizing your design, development, and adversarial resilience testing programs for your products and your supply chain? This step, Corman said, is to get the auto companies "communicating their competence" in security, and to "start making the cultural shift required" for all that follows.
2. Third Party Collaboration—Do you have a published Coordinated Disclosure policy inviting the assistance of third-party researchers acting in good faith? "Star number two is about making sure my friends (security researchers) don't get sued," Corman said. "But also, if you can cast a wider net, and get things reported to you quietly, it helps create the cultural transformation required" to support growing a security culture in the company.
3. Evidence Capture—Do your vehicle systems provide tamper-evident, forensically sound logging and evidence capture to facilitate safety investigations?
4. Security Updates—Can your vehicles be securely updated in a prompt and agile manner?
5. Segmentation and Isolation—Do you have a published attestation of the physical and logical isolation measures you have implemented to separate critical systems from non-critical systems?

Tesla is the first company to get to the second star, but it didn't get there overnight. Earlier this year, the company unveiled its first bug bounty program—but it was one that targeted bugs in its corporate website and back-end systems rather than in vehicles. "It took forever for Tesla to say they'd do coordinated disclosure," Corman said. "And they couldn't do a bug bounty right away because they were afraid they would get flooded with submissions. So they started small so that they didn't get inundated." Pulling all this off, Corman noted, took "champions and safety advocates" within the company to drag it forward.

But Tesla is a different sort of company. Other automakers may have champions for cybersecurity and safety advocates within them, but they also have a different sales model than Tesla and a different engineering culture. Getting forward motion on a bug bounty program at General Motors, Ford, or Fiat Chrysler requires a good deal of political will within the company to make it happen. (They'd also need to overcome the corporate reflex to sue, like Volkswagen did when researchers uncovered the vulnerability in its engine immobilizer cryptography.)

Corman thinks that car makers will start to move his way, however. His pitch is that they adopt standards like I Am the Cavalry's five-star program before Congress gets involved, and that involvement may be looming: Senators Ed Markey and Richard Blumenthal announced that they would introduce legislation on auto cybersecurity the same day the Wired story on the Jeep hack broke.

The threat of more regulation could move carmakers toward a voluntary standard more quickly. But market forces could do the same, Corman suggested. "When people start to look at which of these five stars they have...once people start to say, 'I don't want a car that has a radio that can kill me,' then we'll see separation and isolation, even if the government never asks," he said. "If Jeep takes a hit for the next two quarters, they're going to have to do something different."

Miller isn't sure that trying to talk to the auto industry will have any long-term benefit. He said that the only way he thought car companies would change is if people got upset with them. When asked if he was concerned about people having a negative response to his work, he said, "If that's their response, then good. If they want to freak out, maybe they'll go to the car companies and ask, 'What are you going to do about this?' The more people are upset and talking about this, especially with their congressmen, the more likely that car companies are going to spend money on this."

But for now, Miller insists he'll stay out of the game. After demonstrating that cars can be remotely hacked, he said he doesn't see any other project in car hacking "that I want to invest another year of my life in."

There seem to be plenty of people in line to fill any void. The car hacking village at DEF CON brought in hundreds of attendees to learn the basic skills needed to analyze CAN buses and other automotive systems. With Miller and Valasek's paper providing a road map, it could be a lot easier for others to exploit vehicles going forward. And as the skills and tools required for car hacking become better established, flipping on the windshield wipers of someone's car could become what website defacements were in the 1990s—something all the script kiddies can do. That, and not disagreements with security researchers, is what should have the auto industry worried.
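Those basic analysis skills usually start with capturing raw CAN frames (with tools like Linux's can-utils or the python-can library) and looking for patterns in which arbitration IDs appear and how often, since periodic IDs tend to map to specific ECUs and functions. As a rough illustration of that first step—not taken from the article, and with the log format and IDs assumed for the example—here is a minimal, stdlib-only sketch that parses candump-style log lines and histograms the IDs:

```python
# Minimal sketch of first-pass CAN bus log analysis. Assumes the common
# candump log format "(timestamp) interface id#hexdata"; the arbitration
# IDs used in examples are hypothetical, not tied to any real vehicle.
import re
from collections import Counter
from typing import NamedTuple


class CanFrame(NamedTuple):
    timestamp: float
    channel: str
    arb_id: int   # arbitration ID; lower values win bus arbitration
    data: bytes   # 0-8 payload bytes on classic CAN


LOG_LINE = re.compile(
    r"\((?P<ts>[\d.]+)\)\s+(?P<ch>\S+)\s+"
    r"(?P<id>[0-9A-Fa-f]+)#(?P<data>[0-9A-Fa-f]*)"
)


def parse_line(line: str) -> CanFrame:
    """Parse one candump-style log line into a CanFrame."""
    m = LOG_LINE.match(line.strip())
    if m is None:
        raise ValueError(f"not a candump log line: {line!r}")
    return CanFrame(
        timestamp=float(m.group("ts")),
        channel=m.group("ch"),
        arb_id=int(m.group("id"), 16),
        data=bytes.fromhex(m.group("data")),
    )


def id_histogram(lines):
    """Count frames per arbitration ID--a common way to spot periodic ECU traffic."""
    return Counter(parse_line(line).arb_id for line in lines)
```

From a histogram like this, an analyst would typically watch which payload bytes change when a physical control (wipers, locks, throttle) is actuated, narrowing down which ID carries which function.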
