Abstract

The rapid advancement of generative artificial intelligence (AI) has reshaped the nature of social engineering attacks, extending far beyond conventional romance and investment scams. Rather than relying on static deception techniques, contemporary attacks increasingly leverage hyper-personalised content, real-time adaptation, and synthetic media to exploit trusted communication environments. Encrypted messaging platforms and short-form video applications, in particular, have become fertile ground for these evolving threats. This study presents a forward-looking comparative analysis of AI-enabled social engineering attacks across WhatsApp, Telegram, and TikTok, with a specific focus on how platform architecture influences attacker capability in the post-2025 landscape. Adopting a mixed-methods approach, the study examines 1,247 threat intelligence reports published between 2023 and 2025 alongside three representative case studies: HackOnChat (WhatsApp session hijacking), DarkGram (Telegram's cybercriminal ecosystem), and AI-generated scam content on TikTok. The findings reveal marked differences in platform risk profiles. Telegram demonstrates the most resilient criminal infrastructure, with hundreds of channels sustaining large-scale malicious activity and high engagement rates. WhatsApp, while hosting fewer attack vectors, presents the greatest financial impact due to business-targeted attacks driven by voice cloning and contextual phishing. TikTok emerges as the fastest-growing vector, where algorithmic amplification enables AI-generated scam content to reach large audiences before moderation occurs. By synthesising these findings, this study proposes a platform-specific vulnerability framework and evidence-based mitigation strategies that address both the technical and behavioural dimensions of risk.
The research extends Social Cognitive Theory by incorporating the influence of synthetic media and offers practical recommendations for platform governance, organisational security training, and the AI-assisted detection mechanisms needed to navigate the evolving threat environment of 2026–2027.