Deepfake threat looms: Companies brace for new wave of fraud

The phenomenon of disinformation and fake news is becoming an increasingly significant problem for companies. According to the latest data, up to 75% of enterprises experienced at least one incident related to deepfakes in the past year. The number of such attacks may soon surpass ransomware cases.

One of the European ministers during a conference on deepfakes
Image source: © Flickr | beekman
Robert Kędzierski

It's important to differentiate between the concepts of disinformation and misinformation, which are often confused. The American Psychological Association defines misinformation as false content spread by individuals who believe it to be true. Meanwhile, disinformation is the deliberate dissemination of false information with the intent to cause harm. It's this second category that poses a serious threat to companies.

Deepfakes, which are realistic-looking but fabricated video, audio, graphic, or text materials, have become a primary tool of disinformation. Advances in artificial intelligence, particularly generative adversarial networks (GANs), have made creating such content significantly easier. In a GAN, two neural networks compete: one generates synthetic content, while the other tries to distinguish it from authentic material, and this contest drives the output to become difficult to tell apart from the real thing.
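To make that mechanism concrete, here is a minimal GAN training-step sketch in Python with PyTorch; the tiny network sizes and the random stand-in for "real" data are illustrative assumptions, not part of any actual deepfake tool.

```python
# Minimal GAN sketch (toy data, illustrative only): a generator and a discriminator
# trained against each other, the core idea behind deepfake generation.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 32  # illustrative sizes

# Generator: maps random noise to synthetic samples.
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake).
D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

for step in range(200):
    real = torch.randn(64, data_dim)      # stand-in for genuine training data
    noise = torch.randn(64, latent_dim)
    fake = G(noise)

    # Discriminator step: learn to separate real samples from generated ones.
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: learn to produce samples the discriminator accepts as real.
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

The arms race between the two networks is what makes the end result convincing: the generator keeps improving until the discriminator can no longer reliably separate fake from real.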

The many faces of deepfake attacks

Cybercriminals use a range of techniques to attack enterprises. After obtaining samples of email correspondence and the victim's address book, they can use AI to create personalized messages that mimic a given person's communication style. This significantly increases the effectiveness of phishing attacks, as the messages appear to come from trusted senders.

Criminals also create deepfakes impersonating clients, business partners, or board members to authorize fraudulent transfers or transactions. According to Deloitte's Centre for Financial Services, losses from AI-based fraud in the United States are growing at a rate of 32% per year and could reach $40 billion (about CAD 55.5 billion) by 2027. There have already been cases where company accounting departments made transfers to criminals' accounts based on fake recordings of supposed superiors.
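As a rough sanity check on how such a growth rate compounds, here is a short Python sketch; the 2023 baseline of $12 billion is an assumption chosen purely for illustration and is not a figure from the article or the cited Deloitte report.

```python
# Illustrative compound-growth check: four years of 32% annual growth.
# The 2023 baseline below is an assumption for illustration, not a cited figure.
baseline_2023 = 12.0      # hypothetical AI-fraud losses, in billions of USD
annual_growth = 0.32      # 32% per year, the growth rate cited in the article

value_2027 = baseline_2023 * (1 + annual_growth) ** 4
print(f"Projected 2027 losses: about ${value_2027:.1f} billion")  # ~$36.4 billion
```

Even from a modest starting point, a few years of 32% annual growth lands in the same order of magnitude as the $40 billion projection.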

The 2024 Ironscales report reveals that almost two-thirds of companies expect the number of attacks using deepfakes to soon surpass the number of ransomware attacks. Particularly dangerous are voice deepfakes that can undermine the effectiveness of voice verification systems used in telephone banking.

It should be emphasized that the creators of deepfakes are not exclusively hackers. They can also be dissatisfied former or current company employees who have knowledge of the company's internal processes. The threat may also come from competitors or dishonest investors seeking to influence stock prices or strengthen their negotiating position.

Defense strategies against deepfakes

Deepfakes are primarily used to compromise the security of corporate networks and harm brand reputations. Although large, well-known enterprises are more frequently attacked, smaller companies may suffer proportionally greater reputational damage because it's more challenging for them to counter false narratives.

Repeated exposure to false information increases the likelihood that recipients will eventually perceive it as true. Social media significantly accelerates the spread of disinformation, posing an additional challenge for companies trying to protect their reputation.

To effectively defend against deepfake fraud, companies should implement a multi-layered approach to cybersecurity. This includes not only technological solutions but also changes in operational procedures and comprehensive employee training.

According to a Forrester report, only 20% of the surveyed companies have a response and communication plan that includes deepfake attacks. Jim Richberg, Vice President of Cybersecurity Strategy and Global Security Leader at Fortinet, emphasizes that "deepfakes undermine trust in the information on which consumers, investors, and employees base their decisions. In the world of cybersecurity, there is a reason why people talk about a 'human firewall'—in the fight against deepfakes, it's the human who becomes the first and often most important line of defence."

IT and security teams should continuously monitor the development of deepfake techniques and educate employees at all levels, including management. It's also worth building deepfake-related threat monitoring into brand reputation protection strategies and extending it to the dark web, where early signals of planned attacks may appear.
