Featuring Alvin Toh, Co-founder of Straits Interactive
In the next five years, cybercrime is expected to evolve significantly, driven by the increasing sophistication of artificial intelligence (AI) technology. According to Straits Interactive co-founder Alvin Toh, AI is increasingly being leveraged by malicious actors to craft sophisticated attacks, presenting significant threats to both individuals and organisations.
“Ransomware-as-a-service, for instance, is on the rise and deepfake services are becoming more accessible,” he said.
It is estimated that creating a single deepfake image can cost as little as US$10, while a one-minute video might cost around US$500.
“These deepfakes can bypass verification using stolen IDs enhanced with fake AI images. Audio deepfake scams are another concern, as they are generally cheaper and easier to create, requiring less data from the subject to produce convincing results. This can be particularly devastating in cases of fake kidnapping scams,” he said.
The continued development of machine learning algorithms has enabled attackers to orchestrate their attacks with greater sophistication, precision and scale. Earlier this year, a finance worker in Hong Kong was duped into transferring US$25 million to a deep-faked chief financial officer.
“The software used for such deceptions is almost as accessible as off-the shelf apps and can be used to create deepfakes for deceptive purposes in a matter of minutes.” said Toh.
"These deepfakes can then be easily programmed to respond to anticipated questions, making the interaction appear as though it was with a real person,” he said.
Besides that, AI also enables phishing scams to adopt a more realistic tone and include content that makes the scam more difficult to detect.
It helps scammers target individuals or organisations more accurately, allowing sensitive information to be exploited more easily.
Toh said small and medium enterprises, in particular, face heightened vulnerability due to limited resources and cybersecurity awareness.
High-profile individuals and government officials are more likely to be used in spoofing attacks, given the large volume of information about them available online.
As AI becomes more ubiquitous, some employees may use it for work without clearance from their organisation.
“Placing proprietary and sensitive data within these models carries the inherent risk of unintentional leakage of personally identifiable information. Without a sound understanding of the risks of AI, including how data can be used to train the large language model (LLM), employees might inadvertently leak sensitive organisational data when using AI models, particularly free versions,” said Toh.
He said threats to data security, such as SQL injection, now extend to LLMs that pose new challenges.
“Prompt injection, prompt leakage and jailbreaking are becoming synonymous with security issues in LLMs. There is also Jailbreak-as-a-Service in the market, where adversarial prompts are used to break the guardrails of AI chatbot models to divulge information safeguarded in the system prompts.”
To ensure AI security, Toh suggests that organisations select a Generative AI solution that has undergone rigorous internal testing of adversarial prompts against its chatbots. It’s akin to a new Penetration Testing for AI.
Collaborative efforts involving government agencies, cybersecurity firms and individuals are crucial to combating AI-powered cyber threats.
Syahrul Nizam Junaini, who is research fellow at Universiti Malaysia Sarawak’s Data Science Centre, said as AI evolves, law enforcement and cybersecurity efforts must adapt.
“IT expertise within police ranks must be enhanced to handle cybercrime effectively. To keep pace with technological developments such as AI and the growing threat of cybercrime, Malaysia is taking proactive steps to enhance digital safety,” he said.
Toh said it was essential for organisations to boost their cybercrime awareness, especially in the age of AI.
“Education is key, not just for those in the infotech sector but also for all employees. Effective cybersecurity begins with a well-informed workforce equipped with the knowledge and skills to identify and mitigate risks proactively,” he said.
“Comprehensive cybersecurity training programmes that include realistic simulations and tests, tailored to the specific roles and responsibilities of employees, can promote a culture of vigilance and accountability. This is crucial for an organisation’s resilience against social engineering tactics and human error-induced breaches.”
The proliferation of connected devices and the emergence of 5G technology have further exacerbated cyber threats, rendering traditional perimeter-based defences obsolete.
Supply chain risks have also increased as more capabilities have become interconnected.
“To stay ahead of the curve, organisations must adopt a holistic approach to cybersecurity, encompassing proactive threat intelligence, robust third party due diligence, strong data protection and encryption protocols, and continuous monitoring mechanisms,” he added.
This article was first published on The New Straits Times on 26 July 2024.
Get access to news, enforcement cases, events, and actionable tips and guides
Get regular email updates and offers
Job opportunities, mentorship and career guidance
Exclusive access to Data Protection community - ask questions, network and share knowledge with peers and experts via WhatsApp and Linkedin
DPEX Network is a Community Initiative of Straits Interactive.
Copyright © Straits Interactive Pte Ltd. All Rights Reserved.
All intellectual property rights to logos and brands featured on this website remain the property of their respective owners.