KUALA LUMPUR, Aug 4 — Imagine receiving a voice note on WhatsApp from someone who sounds exactly like your younger brother — his voice, tone, and even the way he says your name are all spot on.
He says he’s stuck at work, left his wallet behind, and needs RM1,500 urgently to sort something out before his boss finds out. There’s even a familiar laugh at the end, and you don’t think twice because you’re convinced it’s him.
But what if that voice was not real?
CyberSecurity Malaysia (CSM) chief executive officer Datuk Amirudin Abdul Wahab has warned about a rise in scams involving AI-generated voice cloning, where scammers use artificial intelligence to impersonate family members, friends or colleagues.
In many cases, the goal is to trick victims into sending money by creating a false sense of urgency and trust.
“Scammers use AI-generated voices to mimic friends, family members, or colleagues often via WhatsApp or phone calls to request urgent transfers or loans.
“Since early 2024, the police have investigated over 454 such cases, with losses totalling approximately RM2.72 million,” he said when contacted by Malay Mail.
He added that in the first three months of 2025, the country recorded 12,110 online fraud cases, involving scams such as fake e-commerce deals, bogus loans, and non-existent investment schemes, with total losses amounting to RM573.7 million.
Citing Bukit Aman’s Commercial Crime Investigation Department (CCID), he said generative AI tools including deepfake videos, cloned voices, fake digital identities, and chatbots are increasingly being used to carry out these scams.
“There has also been a rise in scams involving AI-generated cloned voices. In one case, scammers mimicked the voice of a family member to simulate an emergency situation via WhatsApp voice notes, urging the recipient to urgently transfer funds,” he said.
He noted that the voice was cloned from short public TikTok videos.
Amirudin added that deepfake scams have also involved national icons like Datuk Seri Siti Nurhaliza and Datuk Lee Chong Wei, whose altered images and voices were used in fake advertisements promoting cryptocurrency and investment platforms.
“As of March 2025, CCID Bukit Aman confirmed the discovery of at least five deepfake videos impersonating both national and international personalities. Among the names falsely used were Prime Minister Datuk Seri Anwar Ibrahim, Elon Musk, Donald Trump, Teresa Kok, and a senior Petronas executive.
“The manipulated clips were widely circulated online to promote fake investment platforms, many of which falsely promised returns of up to 100 times the original amount,” he added.
He said the scams relied heavily on the authority and familiarity of well-known figures to convince unsuspecting viewers, especially on social media platforms where verification is often overlooked.
Why it poses a serious threat

Amirudin explained that the rise of deepfake technology is alarming not just for its technical sophistication, but for the far-reaching impact it can have on society.
At the individual level, he said deepfakes are being used to exploit public emotions, especially in scams that mimic the voices of family members, government officials, or well-known personalities.
These tactics create a false sense of urgency, pushing victims into making quick decisions, often involving money, before they have a chance to think critically.
“Beyond personal safety, there is growing concern over the effect deepfakes have on public trust in the media. As manipulated content becomes increasingly indistinguishable from real footage or audio, it blurs the line between fact and fiction,” Amirudin said.
He also said that this erosion of trust can sow confusion, making it easier for false narratives, misinformation, and disinformation to spread, particularly on social media.
At a broader level, he highlighted that national security is also at stake, because content that convincingly imitates political leaders or high-ranking officials could be weaponised to stir panic, manipulate public sentiment, or create political instability.
How to verify and report suspicious AI-generated content
With deepfakes becoming more difficult to detect, CSM is urging the public to stay vigilant and take advantage of available resources to verify suspicious content.
He said the agency’s Cyber999 Incident Response Centre supports both individuals and organisations in identifying cyber threats that involve technical components such as phishing, malware, or manipulated digital content.
Members of the public can report suspicious activity through several channels:
- Online form and mobile application
- Email: cyber999[@]cybersecurity.my
- Hotline: 1-300-88-2999 (during office hours) or +60 19-266 5850 (24/7)
“Cyber999 also provides technical analysis of suspicious emails; users are encouraged to forward the full email header and content for expert review.
“In addition, the team shares regular security advisories and best practices, helping Malaysians keep up with the latest online threats and how to avoid them,” he said.
He explained that Cyber999 handles technical cyber threats like phishing and malware, while deepfake cases without clear technical elements are usually referred to law enforcement or regulators.
For small businesses, Amirudin said CSM has developed the CyberSAFE SME Guidelines, which offer a simple checklist to help organisations detect, verify, and respond to suspicious online content.
Wrapping up in our final part: It’s not just tech — it’s trust. We look at why media literacy is your best line of defence in the age of deepfakes, and how you can help protect not just yourself — but your family too.
Recommended reading:
AI scams are getting real: Here are the cases happening in Malaysia that you should know about