Why cyber threats are a C-suite issue
If it was inconceivable two years ago that working from home would be the norm for a large part of the workforce, today it seems equally hard to countenance a full return to the office. While Omicron may fade into the alphabet soup of Covid, hybrid working is here to stay.
For business schools educating the next generation of executives, the new flexible world requires teaching of some topics that were not obviously necessary in 2019, such as working out how to ensure remote colleagues are not at a disadvantage to those in the office.
Other lessons were relevant in the “before times” but have been amplified by the pandemic. Most notable among these is cyber security: it is not only a task for IT departments but must be understood as a problem for every employee, from the chief executive down.
Fraud and scams are among the greatest threats to companies. Ransomware may make the headlines, but the most common criminal tool remains social engineering: confidence tricks designed to persuade people to hand over passwords or other sensitive information. These might take the form of a phishing email supposedly from an IT technician, or a romance scammer requesting money for a plane ticket.
An era in which employees are so often out of the office only makes these threats more dangerous.
“The cost of fraud becomes the cost to a consumer and the cost to a product,” says Dimitrie Dorgan, senior fraud risk manager at Onfido, an identity verification company specialising in facial biometrics. “There are really creative ways they can abuse things which end up causing damage to companies.”
One trend he sees is fraudsters attempting to find new weak spots. “Fraudulent activity is not a straight line,” he emphasises — fraudsters, after all, are seeking to minimise their time and energy.
“After the pandemic, we’ve seen attacks peak at the weekend, when [businesses] are under a lot more pressure to deliver the same kind of products with lower staffing,” Dorgan adds.
Among his suggestions is the need for businesses to increase the number of layers of security an attacker must penetrate, rather than merely adding new passwords. “Based on the data in our report, biometric checks can play an important role in adding friction,” he says. “There’s one extra layer of having to present your face which displaces fraud.”
Adding such systems haphazardly will be ineffective, however — they must be implemented as a core part of the business. “Building with security in mind means you can service your customers better,” says Dorgan.
While new permutations of old-fashioned fraud are the most obvious online threat, MBA programmes will also need to ensure that participants are well versed in handling the next generation of risks. Matthew Ferraro, counsel at law firm Wilmer Cutler Pickering Hale and Dorr in Washington, calls this “disinformation and deepfakes risk management”, or DDRM.
Since 2016, there has been a growth in online disinformation, a problem heightened during the Covid pandemic, when conspiracy theories about vaccines and related ideas such as QAnon went viral. “Disinformation is a problem that should not be the concern only of the IT department but also of the C-suite,” says Ferraro. “The dangers posed by viral false narratives and realistic bogus media require more than technical solutions.”
Deepfakes — synthetically generated content used for illicit purposes — have long been feared as a political tool for propagandists. But Ferraro notes that the Federal Bureau of Investigation in the US has been warning that attackers will “almost certainly” use deepfakes to attack businesses within the next year.
“We have already seen reports of malefactors using computer-enabled audio impersonation programmes to trick institutions into wiring tens of millions of dollars right into the criminals’ hands,” he says. “Preparing for and responding to growing business risks needs to be the responsibility of business leadership, not just cyber-security departments.”
Businesses have a long way to go on countering this threat, Ferraro adds. “One way to think about this issue is that disinformation and deepfakes risk is today where cyber security was 15 years ago,” he warns. “But the dangers are coming — and closing quickly.”
But he is careful to emphasise that artificial intelligence-generated media have good uses as well as bad. For businesses, the positives range from customisable AI-generated human resources avatars to computer-generated faces for advertising campaigns.
“Weighing the benefits of this kind of synthetic media with the business, reputational and even social risks of creating and propagating fake personas is exactly the kind of decision leaders, not IT departments, need to make,” he says.
Nevertheless, as with fraud, protecting reputations requires companies to be fast-moving and reactive from their leaders down, says Ferraro. “Today, online conversations drive brand identities. Given the speed, scale and power of viral disinformation, its greatest immediate risk to business is reputational harm.”