Spike in use of nudify deepfakes raises concern

By TheSun | September 19, 2025 – Reading time: 3 minutes

PETALING JAYA: The rapid spread of “nudify” deepfakes, AI-generated sexualised images that digitally strip victims of their clothing, has sparked alarm among experts, who warn of how easily they can be made, their devastating impact and the weak safeguards against abuse.

Universiti Malaysia Institute for AI and Big Data director Dr Muhammad Akmal Remli said the latest “nudify” tools make exploitation disturbingly simple.

“AI-based nudify applications, which produce what is known as synthetic non-consensual explicit AI-created imagery, allow anyone to generate sexualised images without needing graphic design skills.

“In the past, it required editing software such as Photoshop. Now, even someone without training can do it. Victims may not even realise their pictures are being misused.”

He said the algorithms could fabricate entirely new images with “realistic textures, skin tones, shadows and lighting”, making them almost impossible to detect with the naked eye.

Akmal explained that distribution often begins in the shadows.

“Images uploaded to the internet could be taken and processed into nudify deepfakes. At first, they might circulate on dark web forums, but once they go viral, they appear on open social media platforms, sometimes even through paid ads.

“The spread is driven by pornography, shaming of individuals and business models that profit from such services.”

He warned that current detection tools remain limited.

“Some innovations are being developed, such as digital watermarks and machine learning analysis to spot subtle patterns, but these are not foolproof. Technology companies must also strengthen safeguards, such as identity checks and keyword filters to reduce abuse.”

Universiti Teknikal Malaysia Melaka AI and cybersecurity expert Prof Dr Azah Kamilah Muda said nudify deepfakes are especially insidious because they alter only part of an image.

“They usually change only one part of the photo, such as removing clothes, and leave everything else untouched. Advanced tools blend the fake parts so smoothly that the edits look natural. Once the picture is uploaded, compression makes the remaining clues even harder to see.

“Most detection systems are designed for face-swaps, not this type of edit.”

She noted that the manipulated content could spread at lightning speed.

“These services are promoted through Telegram bots, affiliate links or ads on Instagram and Facebook. Once created, they are quickly shared on social media and messaging apps. Stopping this requires cooperation between companies, governments and the community.”

Beyond technology, the emotional fallout for victims is severe, said Universiti Kebangsaan Malaysia psychologist and Malaysian Psychological Association president Prof Dr Shazli Ezzat Ghazali.

He urged families to prepare children early by teaching them that fake images can look real, while partners and relatives should offer reassurance and open communication.

For long-term recovery, he recommended cognitive behavioural therapy, resilience training, digital detox periods and community support.

The Communications Ministry recently revealed the scale of the challenge.

Between 2022 and August this year, 42,399 misleading AI-related posts were removed by social media platforms at the request of the Malaysian Communications and Multimedia Commission.

It stressed that under the Communications and Multimedia Act 1998, offences involving false or fraudulent content are punishable by a maximum fine of RM500,000, up to two years’ jail or both.
