
Deepfakes and AI: How a HK$200 Million Scam Highlights the Importance of Cybersecurity Vigilance

The Cyber Security Division of the Hong Kong Police Force received its first case of deepfake-related fraud in January this year: a deepfake video conference scam, in which fraudsters used a digitally recreated likeness of a company's chief financial officer, resulting in an HKD 200 million (USD 26 million) loss for a multinational company. While this may be the first widely publicised AI scam in the region, it certainly won't be the last. This raises a crucial question: how can businesses shield themselves from continually evolving cybersecurity risks?

This deepfake incident demonstrates that cybersecurity is a threat to every segment of an organisation, not just IT personnel, CPOs, and CISOs. According to FTI Consulting's "2024 Global CFO Report", cybersecurity and data privacy rank among the top concerns for global and Asian C-suite executives because of the threat they pose to an organisation's financial, legal, and reputational well-being. The report reveals that 81% of the 98 APAC-based senior finance executives surveyed ranked cyber attacks as the second-largest threat to their business, second only to commercial competition.

Emerging cyber threats in the age of AI

As the world continues to open up after COVID-19, financial professionals have resumed cross-border travel while working norms continue to evolve, with remote and hybrid arrangements driving greater dependence on digital tools and cloud solutions. While many are looking to harness AI to improve productivity in the corporate space, this incident demonstrates how threat actors are also empowered to leverage AI with malicious intent. According to a legal perspectives survey in Asia jointly published by FTI Consulting and the Association of Corporate Counsel in Singapore, 95% of respondents said they are using generative AI tools to assist with legal tasks, but almost two-thirds (65%) acknowledged receiving inadequate guidance on the use of the technology, including its associated risks.

Frontline employees are assuming an increasingly crucial role in corporate cybersecurity, and these individuals, as well as the organisations they represent, must be trained and prepared to face the emerging threats posed by AI.  

Best practices to mitigate cyber risk 

It is critical for organisations to routinely review and update their cybersecurity policies and incident response protocols, constantly identifying and addressing gaps based on the new threat landscape.  

This is especially important as AI-fueled cyber attacks, like deepfakes and phishing campaigns, become increasingly sophisticated and more difficult to detect. Organisations must perform regular cybersecurity program assessments to ensure existing protections are keeping critical assets secure, and to identify and address vulnerabilities. 

Proper training for employees, in the form of table-top and simulation exercises, is essential. Such exercises should leverage communications functions to protect an organisation's underlying freedom to operate, preserve its reputation, and engage relevant stakeholders throughout an incident response scenario.

Training should also cover policies on how employees can properly use AI and avoid misuse, for example by not uploading sensitive information into AI tools, which helps mitigate cyber risk.

Additionally, employees must be aware of potential threats, best practices for staying vigilant, and the resources available to support them should they encounter a potential threat. Employees who are well-informed about emerging cybersecurity trends can respond promptly, decisively, and safely, collectively working to mitigate the adverse impacts of new forms of cyber crime and better safeguard their organisations.


