Daniel Gruss of Graz University of Technology works on identifying vulnerabilities in smart systems © TU Graz

As connected devices become more entwined in everyday life, hackers are finding new footholds to exploit the Internet of Things, from entering a casino database via an aquarium thermostat to wirelessly taking control of cars.

As manufacturers and device-makers bring products to market with insufficient cryptographic and cyber security defences, academics have taken it upon themselves to spot new risks.

Flavio Garcia, professor of computer security at the University of Birmingham, has spent a decade identifying vulnerabilities in smart systems ranging from contactless cards and connected vehicles to banking apps. Sometimes his discoveries lead to thank-you emails, remedial actions and even offers of work from product makers. Occasionally, they spark lawsuits.

Prof Garcia, along with a team of researchers at two Belgian academic institutions, once discovered that a pacemaker, which shares data with external units, could be hacked. “[The pacemaker] does not use any form of strong encryption, which means an attacker with £200 of equipment could send commands to an implantable device and change the configuration, with potentially lethal consequences,” he says.

The US Food and Drug Administration issued a warning after the findings, and Medtronic, the device-maker, announced updates. No attacks occurred, but pre-emptive flaw-spotting of this kind is crucial, given what the same insight could have offered malicious actors.
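The weakness Prof Garcia describes is, at heart, a missing authentication step: the device acts on whatever commands arrive over the air. The C sketch below is a hypothetical illustration of that gap, not Medtronic's actual firmware or protocol; the command format, the verify_tag routine and its fixed tag are invented for the example, with verify_tag standing in for a real cryptographic check such as an HMAC or AES-GCM tag.

```c
/* Hypothetical sketch (not any real device's protocol): why an implantable
 * device must authenticate commands before acting on them. verify_tag()
 * stands in for a real MAC/AEAD check; here it is stubbed with a
 * constant-time comparison against a fixed tag so the example runs. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define TAG_LEN 16

/* Placeholder: a real device would recompute an HMAC/AEAD tag over the
 * command bytes using a key shared with the clinician's programmer unit. */
static int verify_tag(const uint8_t *cmd, size_t len,
                      const uint8_t tag[TAG_LEN]) {
    static const uint8_t expected[TAG_LEN] = {0}; /* stand-in for key-derived tag */
    uint8_t diff = 0;
    (void)cmd; (void)len;
    for (size_t i = 0; i < TAG_LEN; i++)  /* constant-time compare */
        diff |= tag[i] ^ expected[i];
    return diff == 0;
}

static void apply_pacing_config(const uint8_t *cmd, size_t len) {
    (void)cmd;
    printf("applying new pacing configuration (%zu bytes)\n", len);
}

/* Vulnerable pattern: any radio within range can reconfigure the device. */
static void handle_command_unauthenticated(const uint8_t *cmd, size_t len) {
    apply_pacing_config(cmd, len);
}

/* Safer pattern: silently drop commands whose tag does not verify. */
static void handle_command_authenticated(const uint8_t *cmd, size_t len,
                                         const uint8_t tag[TAG_LEN]) {
    if (!verify_tag(cmd, len, tag))
        return;
    apply_pacing_config(cmd, len);
}

int main(void) {
    uint8_t forged_cmd[4] = {0xde, 0xad, 0xbe, 0xef};
    uint8_t bad_tag[TAG_LEN] = {0xff};

    handle_command_unauthenticated(forged_cmd, sizeof forged_cmd); /* accepted */
    handle_command_authenticated(forged_cmd, sizeof forged_cmd, bad_tag); /* dropped */
    return 0;
}
```

In the first, unauthenticated pattern the forged command is applied; in the second it is discarded because the attacker cannot produce a valid tag without the shared key, which is the protection the researchers found missing.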

However, some companies fear that publicising flaws will damage their public image and give criminals new ideas. When Prof Garcia and collaborators at a Dutch university exploited cryptographic weaknesses to hack into the ignition system used in millions of Volkswagen cars, the automaker took legal action to suppress the findings.

Prof Garcia says academics and companies have opposite incentives. “Companies are used to a culture of non-disclosure agreements, and we are in the business of knowledge dissemination,” he says. He adds that academics have no interest in giving their insights to criminals and that “responsible disclosure” can balance the needs of companies with those of academics keen to publish valuable hack-finding work.

Prof Garcia suggests that flexible NDAs, under which researchers agree not to disclose company secrets but are permitted to publish their findings, offer a compromise.

“Some companies thank us for letting them know, tell us when it will be fixed, and let us publish after that. Others ask us to help them fix it,” he says. When legal teams get involved, however, “there is fear. They are afraid we will publish their secrets.”

Overall, businesses are becoming more comfortable working with external, beneficent cyber explorers, he says. “Once [companies have] been through this a couple of times, they realise it’s not one crazy academic breaking their system but a whole community of researchers who are just trying to get things fixed,” he adds.

Cyber academics are adept at spotting security flaws in companies’ products because they are outsiders: without access to source code, they look at risk from a different perspective, experts say.

“The way we search for vulnerabilities is odd for people not in our area,” says Daniel Gruss, assistant professor at Graz University of Technology and one of a community of cyber academics who uncovered a hardware flaw in Intel processors. “We follow the natural sciences, starting with a hypothesis and designing an experiment to prove or disprove it,” he says.

Jo Van Bulck, a PhD student at KU Leuven in Belgium — also among those who spotted the Intel flaw — believes being an outsider encourages holistic thinking. “The fact that we are forced to look at processors as black boxes is a key reason why we made this progress as a community . . . we don’t care about the thousands of components. We see a black box and interface with it, while maintaining the big picture.”

Companies are not always well-placed to think like this, even in the tech sector. The problem, say experts, is hyper-specialisation, combined with the growing complexity of hardware and software. “People can focus on optimising the tiniest part of a processor for their whole lives and never see other parts of the processor in any depth, and they often do not care about software, or security, because if they did, they would have a less deep knowledge in their domain,” says Mr Gruss.

“This means they fail to think about how introducing a new feature might create, for instance, a side channel that can leak information to hackers,” Mr Gruss adds. Engineers might also focus on optimising performance and underestimate how a malign actor might find openings in the system as a whole. “Cyber academics, on the other hand, always focus on what could go wrong,” he says.
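The “side channel” Mr Gruss mentions is easiest to see with timing: whether a piece of data sits in the processor’s cache changes how long it takes to read, and that difference can betray what other code has been doing. The C sketch below is a minimal, x86-specific illustration of that measurement, assuming a compiler and CPU that expose the rdtscp and clflush instructions; it demonstrates the general idea only and is not the code used against any particular product.

```c
/* Minimal x86-only sketch of a cache-timing side channel: the time taken
 * to read a byte reveals whether earlier code touched it. Illustrative
 * only. Build with: gcc -O1 timing.c (x86-64, rdtscp/clflush available). */
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>

static uint8_t probe_line[4096];

/* Time one read of *p in CPU cycles using the timestamp counter. */
static uint64_t timed_read(volatile uint8_t *p) {
    unsigned aux;
    uint64_t start = __rdtscp(&aux);
    (void)*p;                         /* the access being measured */
    uint64_t end = __rdtscp(&aux);
    return end - start;
}

int main(void) {
    volatile uint8_t *line = probe_line;

    /* Case 1: flush the line from the caches, then read it (slow). */
    _mm_clflush((const void *)line);
    _mm_mfence();
    uint64_t slow = timed_read(line);

    /* Case 2: the line is now cached from the previous read (fast). */
    uint64_t fast = timed_read(line);

    printf("uncached read: %llu cycles, cached read: %llu cycles\n",
           (unsigned long long)slow, (unsigned long long)fast);
    printf("that gap is the signal a side-channel attacker measures\n");
    return 0;
}
```

On typical hardware the uncached read is tens to hundreds of cycles slower than the cached one; attacks such as the processor flaws Mr Gruss’s community uncovered turn that gap into leaked data.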

Some companies create units known as “red teams” to uncover flaws, but their scope and resources are often limited and they may not have the time or latitude to engage in the highly speculative work that academics do.

Mr Gruss says companies need to engage more with academic communities, especially in an era of connected physical devices in which vulnerabilities will proliferate. “For increasingly computerised industries like automobiles and avionics, companies have to learn how to deal with vulnerability disclosure,” he says. “As products get more complex and connected, there are more ways for an attacker to jump in.”

Copyright The Financial Times Limited 2024. All rights reserved.