https://majmuah.com/journal/index.php/bij/issue/feedBorneo International Journal eISSN 2636-98262025-12-31T11:00:54+00:00Editor-in-Chiefborneointernationaljournal@gmail.comOpen Journal Systems<p class="font_7" style="font-size: 16px; text-align: justify;"><span style="font-size: 16px;">Borneo International Journal ISSN 2636-9826 (online) is a single-blind peer-reviewed, Open Access journal that publishes original research and reviews covering a wide range of subjects in Islamic studies, Arabic language, science, technology, business, management, social science, architecture and medicine. </span><span style="font-size: 16px;">It also publishes special issues of selected conference papers.</span></p> <p class="font_7" style="font-size: 16px; text-align: justify;"> </p>https://majmuah.com/journal/index.php/bij/article/view/976Phishing Attacks and Credential Theft on Social Media Platforms: A Review of Recent Trends, Case Studies, and Mitigation Insights2025-11-30T14:49:22+00:00Muhammad Fadilah Alfarizyalfarizy_muhammad2@ahsgs.uum.edu.myMohamad Fadli Bin Zolkiplim.fadli.zolkipli@uum.edu.my<p>Social media platforms have transformed communication, work collaboration, and online identity expression, yet they have simultaneously become fertile ground for phishing attacks designed to steal user credentials and compromise privacy. This study reviews current research, industry reports, and empirical findings to examine how phishing functions within social media ecosystems. Using a qualitative literature review, the study identifies dominant attack vectors such as impersonation, direct-message phishing, and credential-harvesting links. Findings show that user behaviour such as oversharing, impulsive clicking, and trust bias plays a larger role in attack success than technical vulnerabilities. 
While protective measures like multi-factor authentication and automated detection algorithms exist, their effectiveness is constrained by inconsistent user adoption and platform governance. This study argues for integrated mitigation involving behavioural awareness, platform-level enforcement, and adaptive technological measures. The insights aim to support organisations, policymakers, and platform providers in improving user resilience and reducing phishing-driven credential theft.</p>2025-11-30T14:10:58+00:00##submission.copyrightStatement##https://majmuah.com/journal/index.php/bij/article/view/979An Assessment of User Awareness on Cybersecurity Best Practices on Social Media Platforms2025-11-30T14:49:22+00:00Muhammad Eizzat Abdul Razzakm_eizzat_abdul@ahsgs.uum.edu.myMohammad Fadli Zolkiplim.fadli.zolkipli@uum.edu.my<p>Social media platforms are increasingly vulnerable to sophisticated cyberattacks such as phishing, malware, identity theft, data scraping, and social engineering. These risks stem from technical flaws and risky user behaviors, including poor password management, over-disclosure of personal information, and habitual disregard for security measures. Additionally, psychological factors like security fatigue, privacy resignation, and habituation to security warnings compound these challenges, leading users to perceive the cost of secure behavior as outweighing the risk of a data breach. This assessment explores these vulnerabilities while advocating for a multifaceted approach to enhance cybersecurity awareness on social media. 
Such an approach includes educational initiatives, technical interventions, and the cultivation of user responsibility to promote secure practices and strengthen trust across these platforms.</p>2025-11-30T14:18:39+00:00##submission.copyrightStatement##https://majmuah.com/journal/index.php/bij/article/view/980Cybersecurity and its Impact on Users’ Digital Safety: An Analytical Study on Social Media Threats2025-11-30T14:49:23+00:00Nakibuuka Jamirahnakibuuka085@gmail.comMohamad Fadli Zolkiplim.fadli.zolkipli@uum.edu.my<p>The rapid expansion of social media has transformed human communication, interaction, and information sharing, but it has also created fertile ground for new cybersecurity threats that compromise users’ digital safety. This study investigates the impact of cybersecurity threats, particularly cyberbullying, on users’ privacy, identity, reputation, and mental health across platforms like Instagram, X, TikTok, Facebook, LinkedIn, and others. It examines the cybersecurity implications of social media threats by reviewing scholarly literature and case studies to identify mitigation measures. Findings reveal that age, language, and platform design significantly influence vulnerability to cyberbullying, while current mitigation strategies remain fragmented and reactive. The review also reveals that weak privacy settings, anonymity, and low digital literacy contribute to the escalation of social media attacks, while emerging technologies such as artificial intelligence (AI) and machine learning are being leveraged to detect and prevent cyberbullying and misinformation; however, AI-based content detection remains limited by language barriers, data bias, and inconsistent enforcement. This study proposes an integrated digital-safety framework that combines platform governance, legal reform, user awareness, and AI-driven monitoring. 
The insights contribute toward building safer social media ecosystems that promote accountability, digital well-being and cybersecurity resilience.</p>2025-11-30T14:29:21+00:00##submission.copyrightStatement##https://majmuah.com/journal/index.php/bij/article/view/973The Spread of Fake News on Social Media and its Implication for Public Safety2025-11-30T14:49:23+00:00Wooi Sin Lailaiwooisin@gmail.comMohamad Fadli Zolkiplim.fadli.zolkipli@uum.edu.my<p><strong>Background:</strong> Social media platforms have become primary channels for news and crisis communication, yet their speed, scale, and engagement-optimized ranking systems also enable the rapid spread of false and misleading content ("fake news"). In safety-critical contexts, misinformation can distort risk perceptions, erode trust, and precipitate harmful behaviors. <strong>Objective:</strong> To synthesize contemporary evidence on (i) how fake news propagates across platforms; (ii) the mechanisms linking online exposure to offline public-safety harms in health emergencies and disasters; and (iii) the effectiveness and limitations of technical, design, and governance interventions. <strong>Methods:</strong> We conduct a narrative synthesis (2017–2025) spanning communication science, public health, information systems, and computer science. We map mechanisms along the pathway <em>exposure → belief → behavior → safety outcomes</em>, and evaluate intervention classes: transformer-based NLP, graph neural networks, multimodal/video methods, warning labels and accuracy prompts, UX friction, provenance cues, and infodemic management frameworks. <strong>Results:</strong> False content benefits from novelty and affect, achieving wider and faster cascades than true content. Text-only detectors perform well in-domain but degrade under domain shift; graph-aware and hybrid approaches improve early detection and generalization, with emerging multimodal methods addressing video-centric platforms. 
Field and platform studies show light-touch accuracy prompts and well-designed labels can reduce sharing of low-quality content, though effects vary by placement, specificity, and audience. Major gaps persist in (i) measuring real-world safety outcomes, (ii) robustness across topics, languages, and modalities, and (iii) transparency and data access for independent evaluation. <strong>Conclusions:</strong> No single solution suffices. We propose a socio-technical framework integrating hybrid detection stacks, privacy-safe audit pipelines, crisis-aware platform design (friction, provenance, correction UX), and cross-sector coordination with media literacy and trusted-messenger strategies. A research and policy agenda is outlined to standardize evaluations, enable privacy-safe data sharing, and maintain crisis playbooks that align platform incentives with public-safety goals.</p>2025-11-30T14:38:21+00:00##submission.copyrightStatement##https://majmuah.com/journal/index.php/bij/article/view/971Data Privacy and Misuse of Personal Information2025-11-30T14:49:23+00:00Nur Sabrina Mohd Shafawinursabrinams06@gmail.comMohamad Fadli Zolkiplim.fadli.zolkipli@uum.edu.my<p>In the age of digital communication, social media platforms have revolutionised the methods by which individuals share, communicate, and express themselves online. Nonetheless, this ease of use has also increased the risks of data privacy infringements and the exploitation of personal information. This article investigates the growing concern of data exploitation on social media platforms, highlighting the methods of user data collection, analysis, and possible misuse for economic, political, or nefarious objectives. The research examines the ethical, legal, and technological difficulties related to personal data protection through an analysis of academic literature, case studies, and privacy breach reports. Frameworks such as GDPR and Malaysia's PDPA are utilised to evaluate privacy threats and user behaviour. 
It also examines existing preventative measures, such as privacy settings, awareness campaigns, and platform accountability, while predicting future risks that stem from AI-driven profiling and cross-platform reconnaissance. The results highlight severe inadequacies in enforcement, user comprehension, and ethical platform design. The article concludes by recommending measures to improve user knowledge, fortify governance, and advocate ethical data practices on social media platforms.</p>2025-11-30T14:44:36+00:00##submission.copyrightStatement##https://majmuah.com/journal/index.php/bij/article/view/975Security Breaches of Celebrity and Corporate Social Media Accounts: Risk Dynamics, Impact, and Preventive Frameworks2025-12-11T15:24:35+00:00Norsyazwani Binti Mohd Puadwani.mp13@gmail.comMohamad Fadli Bin Zolkiplim.fadli.zolkipli@uum.edu.my<p>Social media platforms are now essential channels for organizations and public figures to connect, build brands, and create economic value. Yet, the immense visibility of celebrity and corporate accounts makes them irresistible targets for cybercriminals. Attackers seek quick financial gain, the ability to spread misinformation, and the opportunity to cause significant reputational damage. This paper investigates the common routes to compromise, including deceptive phishing schemes, password reuse, platform weaknesses, and psychological manipulation (social engineering). By reviewing prominent real-world cases and scholarly findings, we uncover the complex blend of psychological, technical, and governance issues driving these breaches. Finally, we propose a layered socio-technical mitigation framework that integrates technology, organizational policies, individual behavior, and platform accountability. 
The findings underscore a critical need to better protect these influential digital identities from ongoing exploitation.</p>2025-12-01T00:00:00+00:00##submission.copyrightStatement##https://majmuah.com/journal/index.php/bij/article/view/983Beyond Romance and Investment: A Forward-Looking Analysis of AI-Enabled Social Engineering Attacks on WhatsApp, Telegram, and TikTok in the Post-2025 Era2025-12-18T10:27:04+00:00Marhakim Mohamad Mokhtarmarhakimm@gmail.comMohamad Fadli Zolkiplim.fadli.zolkipli@uum.edu.my<p>The rapid advancement of generative artificial intelligence (AI) has reshaped the nature of social engineering attacks, extending far beyond conventional romance and investment scams. Rather than relying on static deception techniques, contemporary attacks increasingly leverage hyper-personalised content, real-time adaptation, and synthetic media to exploit trusted communication environments. Encrypted messaging platforms and short-form video applications, in particular, have become fertile ground for these evolving threats. This study presents a forward-looking comparative analysis of AI-enabled social engineering attacks across WhatsApp, Telegram, and TikTok, with a specific focus on how platform architecture influences attacker capability in the post-2025 landscape. Using a mixed-methods approach, 1,247 threat intelligence reports published between 2023 and 2025 were examined alongside three representative case studies: HackOnChat (WhatsApp session hijacking), DarkGram (Telegram’s cybercriminal ecosystem), and AI-generated scam content on TikTok. The findings reveal marked differences in platform risk profiles. Telegram demonstrates the most resilient criminal infrastructure, with hundreds of channels sustaining large-scale malicious activity and high engagement rates. WhatsApp, while hosting fewer attack vectors, presents the greatest financial impact due to business-targeted attacks driven by voice cloning and contextual phishing. 
TikTok emerges as the fastest-growing vector, where algorithmic amplification enables AI-generated scam content to reach large audiences before moderation occurs. By synthesising these findings, this study proposes a platform-specific vulnerability framework and evidence-based mitigation strategies that address both technical and behavioural dimensions of risk. The research extends Social Cognitive Theory by incorporating the influence of synthetic media and offers practical recommendations for platform governance, organisational security training, and AI-assisted detection mechanisms necessary to navigate the evolving threat environment of 2026–2027.</p>2025-12-18T00:00:00+00:00##submission.copyrightStatement##https://majmuah.com/journal/index.php/bij/article/view/984A Comprehensive Literature Review of Deep Fake Technology and Digital Identity Fraud on Social Media2025-12-25T00:27:30+00:00Muhd Amirul Najhan Shamsudinamirul_najhan@ahsgs.uum.edu.myMohamad Fadli Zolkiplim.fadli.zolkipli@uum.edu.my<p>The rapid evolution of Generative Artificial Intelligence (AI), specifically Generative Adversarial Networks (GANs), has significantly eroded trust in digital media. These technologies allow for the creation of hyper-realistic "deep fakes", including face swaps and voice cloning, that pose a severe threat to digital identity verification. When weaponized on social media platforms, these synthetic media become potent tools for sophisticated identity fraud, social engineering, and biometric spoofing. This paper conducts a comprehensive literature review to analyze the mechanics of these threats and evaluate the efficacy of current forensic detection methodologies. The analysis reveals that existing reactive defenses suffer from critical limitations in generalizability and robustness. Standard Convolutional Neural Networks (CNNs) often fail against the compressed video formats common on social networks. 
Relying solely on technical detection is proving insufficient as the "arms race" between generation and detection accelerates. Consequently, this study concludes that safeguarding digital ecosystems requires a holistic defense strategy. This includes the development of multi-modal detection models, the enforcement of strict legislative reforms criminalizing non-consensual deep fakes, and the adoption of proactive content provenance standards.</p>2025-12-25T00:27:30+00:00##submission.copyrightStatement##https://majmuah.com/journal/index.php/bij/article/view/985The Effectiveness and Adoption Challenges of Multi-Factor Authentication (MFA) on Social Media Platforms2025-12-31T11:00:54+00:00Yuchong Cuicuiyuchong00@gmail.com<p>The contemporary digital landscape is shaped by rapid Digital Transformation (DT), in which high-stakes identity services such as financial systems, e-commerce platforms, and cloud providers collectively function as converged social media platforms. Within this ecosystem, robust authentication is the primary defense against systemic identity threats. Multi-Factor Authentication (MFA) provides statistically strong protection, preventing over 99.9% of account compromise attempts even when primary credentials are exposed. However, this technical effectiveness is undermined by low adoption and inconsistent usage, creating a critical security paradox. This paper analyzes the hierarchical vulnerabilities of MFA modalities, highlighting the elevated risks of legacy SMS-based methods prone to SIM swapping and the exploitation of human factors through sophisticated social engineering, including MFA fatigue attacks. Using an extended Unified Theory of Acceptance and Use of Technology (UTAUT) framework, the study shows that usability friction, increased cognitive load, and low user trust are dominant socio-technical barriers. 
The discussion advocates a mandatory shift toward phishing-resistant, FIDO2-based authentication and the deployment of adaptive authentication frameworks to align cryptographic strength with sustainable user behavioural compliance.</p>2025-12-31T11:00:54+00:00##submission.copyrightStatement##