Website blocking alone cannot stop deepfake misuse due to open-source AI tools and VPNs, says cybersecurity expert, urging digital watermarking.

PETALING JAYA: Blocking websites that generate AI videos may slow the spread of deepfake content but the measure alone is unlikely to fully prevent misuse due to the availability of open-source tools and other ways to bypass restrictions, said Universiti Malaya cybersecurity specialist Prof Dr Ainuddin Wahid Abdul Wahab.

He said technical barriers such as site blocking may affect casual users but are unlikely to stop determined actors.

“From a technical perspective, blocking AI video-generation websites is only an initial step that addresses the problem from the outside.

“While it may make things harder for ordinary users, those with malicious intent could still bypass such restrictions by using VPNs, which act like digital ‘back alleys’.”

He said the challenge is compounded by the fact that many AI technologies are open source and can be downloaded and run directly on personal computers without visiting any website.

He also said blocking specific platforms would have limited impact because similar tools could still be accessed offline or through mirror sites, meaning internet restrictions alone cannot fully stop the production of deepfake content.

Ainuddin added that from a digital forensics perspective, investigators are able to identify traces left behind by AI systems but detection remains a complex process.

“We can detect what we call ‘digital scars’ left by AI, such as unnatural heartbeat patterns on a face or inconsistencies in light reflections in the eyes.

“However, the challenge is that as detection techniques become more advanced, deepfake generation technology also evolves to hide its weaknesses,” he explained.

“The biggest difficulty arises when the video is uploaded to social media platforms. These systems often compress the video, which removes subtle evidence or ‘digital fingerprints’ needed for forensic analysis.”

He said technical safeguards within digital platforms may be more effective than simple access restrictions.

“Mechanisms such as digital watermarking and authenticity standards such as the Coalition for Content Provenance and Authenticity (C2PA) could function like a birth certificate for digital content.

“Every AI-generated video would carry hidden information showing it was produced by a machine. If the video is altered, the mark would be damaged, allowing social media systems to automatically flag it as manipulated content.”

He said the approach prioritises verifying the origin of digital content rather than blocking access, similar to installing scanners to ensure every item entering a system carries a valid label.
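The verification approach he describes can be illustrated with a toy sketch. This is not the actual C2PA standard, which uses cryptographically signed provenance manifests embedded in media files; it is a minimal stand-in showing the core idea, that a hidden mark bound to the content breaks if the content is altered, letting a platform flag tampering automatically. The key and function names here are hypothetical.

```python
import hashlib
import hmac

# Hypothetical signing key held by the AI video generator (illustrative only).
SECRET_KEY = b"generator-signing-key"

def watermark(content: bytes) -> bytes:
    """Produce a provenance tag (an HMAC) marking content as AI-generated."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).digest()

def is_authentic(content: bytes, tag: bytes) -> bool:
    """Verify the tag; any alteration to the content invalidates it."""
    expected = hmac.new(SECRET_KEY, content, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

video = b"original AI-generated video bytes"
tag = watermark(video)

print(is_authentic(video, tag))             # untouched content verifies: True
print(is_authentic(video + b"x", tag))      # altered content is flagged: False
```

In this sketch the platform plays the role of the "scanner" in his analogy: it checks that every incoming item carries a valid label before accepting it. Note that a real scheme must also survive re-encoding and compression, which, as he points out, is exactly where fragile marks tend to be destroyed.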

The Sun Malaysia


