Abstract

Background: Social media platforms have become primary channels for news and crisis communication, yet their speed, scale, and engagement-optimized ranking systems also enable the rapid spread of false and misleading content ("fake news"). In safety-critical contexts, misinformation can distort risk perceptions, erode trust, and precipitate harmful behaviors.

Objective: To synthesize contemporary evidence on (i) how fake news propagates across platforms; (ii) the mechanisms linking online exposure to offline public-safety harms in health emergencies and disasters; and (iii) the effectiveness and limitations of technical, design, and governance interventions.

Methods: We conduct a narrative synthesis of literature published 2017–2025 spanning communication science, public health, information systems, and computer science. We map mechanisms along the pathway exposure → belief → behavior → safety outcomes, and we evaluate the following intervention classes: transformer-based NLP, graph neural networks, multimodal/video methods, warning labels and accuracy prompts, UX friction, provenance cues, and infodemic-management frameworks.

Results: False content benefits from novelty and affect, achieving wider and faster cascades than true content. Text-only detectors perform well in-domain but degrade under domain shift; graph-aware and hybrid approaches improve early detection and generalization, and emerging multimodal methods address video-centric platforms. Field and platform studies show that light-touch accuracy prompts and well-designed warning labels can reduce the sharing of low-quality content, though effects vary by placement, specificity, and audience. Major gaps persist in (i) measuring real-world safety outcomes, (ii) robustness across topics, languages, and modalities, and (iii) transparency and data access for independent evaluation.

Conclusions: No single solution suffices. We propose a socio-technical framework integrating hybrid detection stacks, privacy-safe audit pipelines, crisis-aware platform design (friction, provenance, correction UX), and cross-sector coordination with media-literacy and trusted-messenger strategies. We outline a research and policy agenda to standardize evaluations, enable privacy-safe data sharing, and maintain crisis playbooks that align platform incentives with public-safety goals.