The SKAI isn’t the limit: How WFP uses satellite imagery and machine learning in emergencies
The WFP Innovation Accelerator partnered with Google Research to set up the SKAI platform to automatically assess damage after disasters and speed up humanitarian response.
By Fiona Huang
Special thanks to Joseph Xu, Pranav Khaitan, Jihyeon Lee, Wenhan Lu, Zebo Li, Steven Chun, Devora Berlowitz, and Maolin Zuo from Google Research who have devoted themselves to developing SKAI and this blog post together with the WFP Innovation Accelerator.
Following disasters, the World Food Programme (WFP) works to assess the magnitude of damage and the needs of local communities, and to shape its humanitarian intervention plans, so that it can mobilize resources and coordinate emergency response efficiently.
WFP manually extracts information from satellite imagery and mobile phone images to monitor the impact of disasters. However, this manual process tends to be tedious and can take considerable time. As part of its frontier innovations portfolio, the WFP Innovation Accelerator has been exploring new technologies that can automate this process and speed up response times.
WFP partnered with Google Research to set up SKAI, a humanitarian response mapping project powered by artificial intelligence — an approach that combines statistical methods, data and modern computing techniques to automate specific tasks. SKAI assesses damage to buildings by applying computer vision — computer algorithms that can interpret information extracted from visual materials such as, in this case, satellite images of areas impacted by conflict, climate events, or other disasters. The key to this process is a machine learning model developed specifically for SKAI.
Machine learning is a type of artificial intelligence; it uses computer algorithms to generate insights on new data by drawing on historical data and patterns. To illustrate, SKAI’s machine learning model compares satellite image data before and after disasters to carry out damage assessments. Crucially, every input and iteration of data can improve the model’s performance as the system records and learns from the data patterns — a process known as “training” the machine learning model. Well-trained machine learning models can detect damage to buildings in entire cities in a matter of minutes, helping deliver humanitarian assistance faster and more efficiently than before.
Approach: machine learning on satellite imagery
SKAI aims to provide building damage assessment at scale within 24 hours after obtaining clear, high-quality satellite imagery following a disaster. SKAI’s machine learning model detects damaged buildings by comparing imagery of the same buildings before and after the disaster.
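The core idea of before/after comparison can be illustrated with a toy change-detection sketch. This is purely illustrative: SKAI’s actual model is a trained neural network, and the scoring function, threshold, and patch data below are invented for the example.

```python
import numpy as np

def damage_score(pre_patch: np.ndarray, post_patch: np.ndarray) -> float:
    """Toy change score: mean absolute pixel difference between
    per-patch-normalized before and after images of one building."""
    def norm(p):
        p = p.astype(float)
        return (p - p.mean()) / (p.std() + 1e-8)
    return float(np.mean(np.abs(norm(pre_patch) - norm(post_patch))))

def assess(pre_patch: np.ndarray, post_patch: np.ndarray,
           threshold: float = 0.5) -> str:
    """Label a building patch 'damaged' if the change score exceeds
    a (hypothetical) threshold, else 'intact'."""
    return "damaged" if damage_score(pre_patch, post_patch) > threshold else "intact"

# A patch identical before and after scores 0; a heavily altered one scores high.
rng = np.random.default_rng(0)
roof = rng.uniform(size=(32, 32))        # stand-in for a rooftop image patch
print(assess(roof, roof))                # intact
print(assess(roof, 1.0 - roof))          # damaged
```

A real system must also handle differences in sun angle, season, and sensor between the two acquisitions, which is why a learned model rather than a fixed pixel-difference rule is needed in practice.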
Since its inception in 2019, SKAI has been trained using satellite imagery from three past disaster events: the 2010 Haiti earthquake, the 2017 Santa Rosa wildfire, and the Syrian conflict in 2016. In each of those training cases, SKAI identified damaged buildings with greater than 80 percent accuracy.
SKAI’s unique value proposition is that it not only vastly reduces the workload of human analysts, it also compares favourably against conventional machine learning approaches. Traditional machine learning models must be trained on many thousands of example images hand-labeled by human experts (e.g. “Is the building in this image damaged?”). This process is time- and labor-intensive, and not realistic in a crisis-response scenario.
The type of machine learning used in SKAI, on the other hand, learns from a small number of labeled and a large number of unlabeled images of affected buildings. SKAI uses a semi-supervised learning technique that reduces the required number of labeled examples by an order of magnitude. As such, SKAI models typically need only a few hundred labeled examples to achieve high accuracy, significantly improving the speed at which accurate results can be obtained.
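One simple member of the semi-supervised family, self-training with pseudo-labels, can be sketched on synthetic data. Everything here is invented for illustration (the centroid classifier, the 2-D “building features”, the cluster positions); SKAI’s actual technique, described in the linked research paper, uses consistency-based deep learning rather than this toy model. The sketch shows the key idea: a model fit on a handful of labels can label a large unlabeled pool, and refitting on those pseudo-labels sharpens the decision boundary.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic 2-D "building features": intact buildings cluster near (0, 0),
# damaged ones near (3, 3). Only 10 examples carry human labels.
labeled_X = np.vstack([rng.normal(0, 0.5, (5, 2)), rng.normal(3, 0.5, (5, 2))])
labeled_y = np.array([0] * 5 + [1] * 5)
unlabeled_X = np.vstack([rng.normal(0, 0.5, (200, 2)),
                         rng.normal(3, 0.5, (200, 2))])

def centroids(X, y):
    """Fit a nearest-centroid classifier: one mean point per class."""
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(X, cents):
    """Assign each point to the class of its nearest centroid."""
    d = np.linalg.norm(X[:, None, :] - cents[None, :, :], axis=2)
    return d.argmin(axis=1)

# Self-training loop: fit on the few labels, pseudo-label the unlabeled
# pool, then refit on labeled + pseudo-labeled data.
cents = centroids(labeled_X, labeled_y)
for _ in range(3):
    pseudo = predict(unlabeled_X, cents)
    X_all = np.vstack([labeled_X, unlabeled_X])
    y_all = np.concatenate([labeled_y, pseudo])
    cents = centroids(X_all, y_all)

# Evaluate on fresh synthetic data.
test_X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])
test_y = np.array([0] * 50 + [1] * 50)
acc = (predict(test_X, cents) == test_y).mean()
print(f"accuracy: {acc:.2f}")
```

With only 10 labels and 400 unlabeled points, the refit classifier separates the two synthetic classes almost perfectly; the order-of-magnitude label savings the article describes follows the same principle, applied with far more capable models to real satellite imagery.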
Lessons learned from SKAI deployment in humanitarian operations
WFP trained SKAI during more recent disasters, including the Beirut Port blast in Lebanon in 2020 and Cyclone Yasa in Fiji in 2021. We have learned that SKAI requires context-specific setup and calibration, depending on the type of disaster and the size of the disaster area footprint.
It may take a few days for WFP to obtain the right satellite image of the disaster site from the source. Once received, SKAI generates a city-level damage assessment in one to three days for disasters like the 2021 Cyclone Yasa in Fiji. Comparatively, seven analysts working part-time spent around 20 working days manually assessing the damage after Tropical Cyclone Idai in 2019 in Beira, the fourth-largest city in Mozambique, with a population of 500,000.
In a conflict scenario, where WFP was tasked to assess damage to smaller areas, such as refugee camps, SKAI took a similar amount of time as the manual assessment process. Though SKAI’s full potential was not utilized, our Country Offices used its output to inform decision-making, as it provided an overview of changes on the ground at a time when communications were unavailable.
The 2020 explosion in the Port of Beirut revealed another critical lesson about the use of satellite images. Because of the nature and lateral force of the explosion, damage appeared mostly on the vertical surfaces of buildings, resulting in broken windows and cracked walls. This type of damage is difficult to detect from satellites, which capture images from a top-down vantage point.
The way forward
SKAI serves as an example of how leading edge technologies and partnerships with the tech sector can optimize monitoring, planning, decision making and overall effectiveness of emergency response after disasters, while safeguarding humanitarian principles.
Computer vision models like SKAI can drastically speed up the process of extracting insights from the ground, reducing humanitarian response times from weeks down to days, and getting help to disaster areas much more efficiently. Hence, there is a growing interest in exploring applications of computer vision technology in emergency response. We plan to continue to work with WFP field teams and Google Research to improve the user experience and performance of SKAI. The ambition for SKAI is to optimize its platform to function across a variety of geographic locations, disasters, and damage types, further increasing WFP’s capabilities to respond to disasters at scale and in complex environments.
- Read more about SKAI and how SKAI’s machine learning works.
- Dive deeper into the technical details by reading this research paper: Assessing Post-Disaster Damage from Satellite Imagery using Semi-Supervised Learning Techniques
- Find out about WFP’s frontier innovations portfolio — cutting-edge technologies and ideas that can offer new ways of delivering humanitarian assistance.
- Watch this video to know more about our work with artificial intelligence.
The WFP Innovation Accelerator sources, supports and scales high-potential solutions to end hunger worldwide. We provide WFP staff, entrepreneurs, start-ups, companies and non-governmental organizations with access to funding, mentorship, hands-on support and WFP operations.
Find out more about us: http://innovation.wfp.org. Subscribe to our e-newsletter. Follow us on Twitter and LinkedIn and watch our videos on YouTube.