AI falters under pressure in crisis situations
Disaster-prone areas like Cebu and Eastern Visayas, regularly hit by earthquakes and tsunamis, are particularly vulnerable to the escalating threat of AI-generated disinformation. The next disaster will test our ability to distinguish what is real from what is AI-generated.
The public's inability to discern fact from fiction, coupled with the rapid spread of misinformation, can erode trust and endanger lives. During crises like the recent earthquake in Russia, misinformation confuses responders and the public alike.
Recent incidents, such as Elon Musk's AI tool Grok incorrectly informing users in Hawaii that the tsunami warning had been lifted when it was still active, highlight the urgency of the situation. AI tools can generate fake images, voice clones, and news articles, creating a massive misinformation machine.
To combat this, efforts are being made to watermark AI-generated images and establish fact-checking networks. Government agencies, disaster responders, and local media can strengthen real-time verification systems against AI-generated disinformation during emergencies.
Key approaches include deploying AI-powered monitoring and detection systems; combining AI with human expertise and training; leveraging generative AI for tailored, clear communication; ensuring real-time cultural and linguistic competence; and collaborating with local media for rapid verification and dissemination.
Using natural language processing (NLP) and machine learning algorithms, these AI tools can scan vast amounts of digital content in real time, identifying misinformation patterns, sentiment spikes, and emerging false narratives. They can quickly flag suspicious content and generate alerts for human fact-checkers and responders.
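The pattern-flagging step described above can be sketched in a few lines. This is a minimal, illustrative rule-based flagger, not a production detection system: the `Post` structure, the pattern list, and the function names are all assumptions for the example, and a real deployment would use trained NLP models rather than hand-written rules.

```python
# Minimal sketch of a rule-based misinformation flagger over a post stream.
# The patterns below are illustrative assumptions, not a vetted rule set.
import re
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str

# Hypothetical red-flag phrasings that often accompany false crisis claims.
SUSPICIOUS_PATTERNS = [
    r"warning (has been )?lifted",         # premature all-clear claims
    r"official(s)? confirm",               # unverifiable appeals to authority
    r"share before (it's|it is) deleted",  # urgency/scarcity framing
]

def flag_suspicious(posts: list[Post]) -> list[str]:
    """Return IDs of posts matching any suspicious pattern (case-insensitive)."""
    flagged = []
    for post in posts:
        if any(re.search(p, post.text, re.IGNORECASE) for p in SUSPICIOUS_PATTERNS):
            flagged.append(post.post_id)
    return flagged

posts = [
    Post("p1", "Tsunami warning LIFTED, return to the coast now!"),
    Post("p2", "Evacuation centers are open at the city gym."),
]
print(flag_suspicious(posts))  # ['p1']
```

In practice the flagged IDs would feed an alert queue for human fact-checkers rather than trigger automatic takedowns, keeping a person in the loop as the article recommends.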
Regular staff training on realistic simulators, such as ROSATOM’s cloud-based Simula platform, develops personnel’s adaptive skills to recognize and counter evolving strategies in real emergencies. Generative AI can rapidly summarize complex technical information, aiding consistent, accurate messaging, but human review is essential to catch AI errors or hallucinations before dissemination.
Real-time AI tools can translate messages instantly and perform sentiment analysis to detect confusion or mistrust, enabling responders to quickly adjust messaging for diverse populations and build trust. Local media can integrate AI tools to verify information swiftly and share accurate updates, counteracting false reports and reducing public panic.
By systematically combining AI’s speed and scale with human judgment, ongoing training, and community engagement, agencies and media can build resilient verification systems that minimize the impact of AI-generated disinformation during crises.
The public needs to be educated about the dangers of AI-generated disinformation during emergencies. Strengthening real-time verification systems and training people to detect fake content are key to minimizing its impact during crises. Fact-checking and debunking of false news should be part of the emergency responders' arsenal to prevent loss of life in the aftermath of a super typhoon or earthquake.
To mitigate this escalating threat, the Philippine government should collaborate with disaster responders and local media in Cebu and Eastern Visayas, regions that constantly grapple with typhoons and earthquakes. Together they can bolster real-time verification systems against AI-generated hoaxes by deploying AI-powered monitoring and detection tools, pairing them with human expertise and training, and partnering with local media for rapid verification and dissemination of accurate information.