The Conduct Chronicles – Deepfakes. Who are you really dealing with?

Emma Parry explores the growing risk of deepfake fraud, highlighting real-world cases that have led to the loss of millions of dollars through scams and manipulation.

22 September 2025 · 6 min read
By Emma Parry

In a world of deepfakes, scams, and fraudsters, how can we be sure we know with whom we are dealing?

High-profile cases that have hit the headlines highlight the very real dangers companies and governments face from deepfake fraud, as well as the variety of ways these scams are evolving. These are no longer just traditional cyberattacks: recent incidents demonstrate that fraudsters are successfully deploying technology-enabled social engineering, combined with sophisticated AI deepfakes, to execute their criminal schemes.

Arup

One of the most frequently cited examples of deepfake fraud involved Arup, the British multinational design and engineering firm behind the structural engineering of the iconic Sydney Opera House, as well as London’s Crossrail.

Arup confirmed in May 2024 that it had been the target of a deepfake scam that led a Hong Kong-based employee to transfer $25 million on the instructions of people purporting to be Arup senior management.

The elaborate scam began when the employee, who worked in the finance department, received a message about a “confidential transaction” from a person claiming to be the company’s UK-based chief financial officer.

The employee was then invited to join a video conference with multiple individuals, including one who used deepfake technology to impersonate Arup’s CFO. During the call, the fake senior management team persuaded the employee to make 15 bank transfers to five different Hong Kong bank accounts.

The scam was eventually discovered when the employee contacted Arup’s headquarters.

This wasn’t a traditional cyberattack. No systems were breached. This was technology-enabled social engineering, leveraging psychology and sophisticated AI deepfake capabilities to gain the employee’s trust.

Commenting on the case, an Arup spokesperson stated: “Like many other businesses around the globe, our operations are subject to regular attacks, including invoice fraud, phishing scams, WhatsApp voice spoofing, and deepfakes.”

The White House

However, it’s not just the private sector at risk. Fraudsters are aiming high in their criminal endeavors, and no government is safe! Earlier this year, a fraudster hacked the phone of White House Chief of Staff Susie Wiles. Alarmingly, her high-profile contacts were accessed – including the details of senators, governors, and business executives. But of even greater concern, AI-generated voice cloning technology was then used to impersonate Wiles on calls.

Given her role, Wiles has the highest level of security clearance. It’s currently not clear what else the fraudster may have accessed, and the matter is still under investigation.

However, in May, the FBI issued a public service announcement warning of an “ongoing malicious text and voice messaging campaign,” involving “malicious actors” impersonating senior U.S. officials to target other senior government officials and their contacts.

In an unsettling warning, the FBI advised: “If you receive a message claiming to be from a senior US official, do not assume it is authentic.” It would seem that hearing is no longer believing within the U.S. Government.

The White House incident is particularly chilling in the context of ongoing geopolitical tensions. With deepfake technologies now able to convincingly impersonate the highest levels of government, adversaries no longer need to breach physical security to carry out criminal activities. Instead, they can potentially interfere with and manipulate sensitive decisions, or even issue false commands, from a distance. It’s now a strategic vulnerability.

DIY Fake Passports

Meanwhile, it’s not just the stories from companies and governments that highlight industry-wide vulnerability to deepfake fraud. Individuals testing generative AI capabilities are publicizing their successes and revealing new and disturbing possibilities.

For example, earlier this year a Polish researcher used ChatGPT (the GPT-4o model) to create a fake passport. It took just five minutes. Worryingly, the researcher claimed that the AI-generated passport successfully bypassed automated ‘Know Your Customer’ (KYC) checks on some online financial services platforms, raising concerns over the processes some firms use to verify identity. Firms whose KYC processes do not include verification via the passport’s embedded chip are at particular risk.

Identity verification is only as strong as the weakest link, and with fake documentation potentially available in just minutes, the industry is increasingly susceptible to turbo-boosted deepfake fraud.

Of note, OpenAI responded swiftly to the experiment, and ChatGPT soon began rejecting requests to forge documents.

En Garde!

In fencing, the referee gives the command “en garde!” to alert the fencers to assume a ready position and to prepare for attack.

Akin to a fencer, firms must be ready for the oncoming assault. But unlike in fencing, firms do not face a single attacker from a single angle. They must be prepared for varied, regular, and often sustained attacks on multiple fronts, whether phishing scams, WhatsApp voice spoofing, or deepfakes!

The types of defenses firms must deploy will depend on the nature of their businesses. However, here are some areas upon which firms should focus.

In the first instance, to understand the threats specific to their businesses, firms must perform a fraud risk assessment. This will highlight the types of fraud to which the business is susceptible and identify the controls required to mitigate those risks.

Firms must have a robust ‘speak up’ culture whereby employees feel confident and comfortable raising concerns. For example, if your “CFO” instructs your finance team to move millions into multiple bank accounts, would the team feel confident raising concerns if something about the request didn’t seem right?

Firms must also implement robust validation steps to authorize large payments. Some firms now incorporate manual codes into their payment authorization processes to counter the risks posed by AI deepfakes.
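
To make the idea concrete, here is a minimal sketch, in Python, of how an out-of-band confirmation code might gate a large payment. The threshold, names, and delivery callback are hypothetical illustrations rather than a reference implementation; the point is that the code travels over a channel separate from the one on which the payment request arrived.

```python
import hmac
import secrets

# Hypothetical threshold above which out-of-band confirmation is required.
LARGE_PAYMENT_THRESHOLD = 100_000

def request_payment(amount: int, deliver_out_of_band):
    """Generate a one-time confirmation code for large payments.

    deliver_out_of_band is a callback that sends the code over a
    channel separate from the one the request arrived on, such as
    a phone call to a number already held on file.
    """
    if amount < LARGE_PAYMENT_THRESHOLD:
        return None  # smaller payments follow the normal workflow
    code = secrets.token_hex(4)  # 8-character random code
    deliver_out_of_band(code)
    return code

def authorize_payment(expected_code: str, supplied_code: str) -> bool:
    # Constant-time comparison avoids leaking the code via timing.
    return hmac.compare_digest(expected_code, supplied_code)
```

The control here is not the code itself but the second channel: a deepfaked video call cannot read back a code that was delivered to a phone number the fraudster does not control.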

Firms should also implement steps to validate their own employees during significant transactions (e.g. mergers and acquisitions), especially if those transactions are being conducted primarily online.

Additionally, firms that perform KYC must review their processes and technologies to assess how easily a fraudster could open an account using fake documentation.
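
To see why chip-based verification closes this gap, it helps to look at what a chip reader actually checks. Under the ICAO Doc 9303 ePassport standard, the chip stores the holder’s data groups alongside a Document Security Object: the hashes of those data groups, digitally signed by the issuing state. Below is a heavily simplified Python sketch of the core comparison; a real reader also verifies the Security Object’s signature and its certificate chain, which is omitted here.

```python
import hashlib

def passive_auth_core(data_group: bytes, signed_hash: bytes) -> bool:
    """Simplified core of ICAO 'passive authentication'.

    A real ePassport reader first verifies the Security Object's
    digital signature against the issuing state's Document Signer
    certificate, then checks each data group (photo, personal
    details) against the signed hashes. Hash algorithms vary by
    issuer; SHA-256 is assumed here purely for illustration.
    """
    return hashlib.sha256(data_group).digest() == signed_hash
```

An AI-generated image of a passport page may fool a camera, but it carries no chip, no signed hashes, and no signature chaining back to the issuing state, which is why chip-based checks are far harder to spoof.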

Finally, firms must maintain robust audit trails, alongside accurate and complete records, to support swift and effective investigations in the event of a fraud incident and, ultimately, to underpin successful criminal prosecutions.
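
One common way to make such records tamper-evident is to chain each audit entry to the hash of the one before it, so that any after-the-fact alteration breaks every subsequent link. The following is a minimal illustrative sketch in Python, not tied to any particular logging product:

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only log in which each entry commits to its predecessor."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, event: dict) -> None:
        entry = {
            "timestamp": time.time(),
            "event": event,
            "prev_hash": self._last_hash,
        }
        serialized = json.dumps(entry, sort_keys=True)
        self._last_hash = hashlib.sha256(serialized.encode()).hexdigest()
        self.entries.append((entry, self._last_hash))

    def verify(self) -> bool:
        """Recompute the chain; an edited entry breaks every later link."""
        prev = "0" * 64
        for entry, stored_hash in self.entries:
            if entry["prev_hash"] != prev:
                return False
            serialized = json.dumps(entry, sort_keys=True)
            if hashlib.sha256(serialized.encode()).hexdigest() != stored_hash:
                return False
            prev = stored_hash
        return True
```

If a fraudster quietly edits an earlier entry, the recomputed hashes no longer match, and investigators can see exactly where the record was disturbed.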

We must always remember that fraudsters will never play by gentlemen’s rules. We are not in a fencing salle, waiting for the referee to give the command “en garde!” Instead, we must understand the threats, remain ever vigilant for the attack, and begin thinking like a criminal!
