Five ways big data can help Kaikoura's disaster recovery

When Kaikoura was hit by the big one a couple of weeks ago, it was NZ’s second major earthquake in five years. Thankfully the human toll was far lower than in Christchurch – it’s fair to say we dodged a bullet in Kaikoura.

Kaikoura bore scary similarities to another catastrophe from 2011… Japan’s Great East earthquake. Both were high-magnitude, relatively shallow events that lifted offshore seabeds – generating tsunami conditions and mass coastal evacuations. Thankfully, however, it’s there the comparisons largely end. In fact, thanks to the big data and disaster recovery lessons learned in Japan (and in other global disasters since), Kaikoura’s road to recovery is likely to be faster and better organised than in previous events.

Technology and early warning systems

A magnitude 9.1 event, the Great East Japan earthquake struck seventy kilometres offshore, lifting the seabed by four to seven metres and generating a series of tsunamis that crippled Fukushima’s nuclear power plant and killed over fifteen thousand people. The combined economic impact of the disasters was estimated at over US$250 billion.

Amazingly, it could have been worse. By utilising big data both in the immediate impact of the disaster and throughout the dramatic aftermath, the authorities were able to revolutionise traditional approaches to disaster recovery.

Seconds after the fault ruptured, millions of people received advance warning of the quake through mobile, TV, radio, or the net. Tokyo got 80 seconds’ warning – enough to stop high-speed trains and factory production lines, and to give people time to find shelter. Sendai – the city nearest the epicentre – had just a few seconds’ warning, in real terms the difference between life and death.

Generated by a network of seismic sensors, the warnings were distributed across national media. Minutes later came another warning, this time of tsunamis. Location-based data (monitoring cellphone and car navigation system movements) showed 60% of people in coastal areas heeded the warnings and immediately evacuated. Of these, only 5% were hit by the waves. In direct contrast, 50% of those who stayed put were inundated.

But to understand what followed in the days and weeks after the quake you have to go back another year, to another big quake, this time in Haiti.

The origins of big data disaster recovery

With poor infrastructure and non-existent communications in many parts of the country, Haiti was a disaster nightmare. When the quake hit in 2010, aid flooded the country immediately… and that’s where the problems started. With Port-au-Prince badly hit and streams of people leaving the capital, no-one knew where the aid was most needed. Data scientists from the Karolinska Institute contacted Haiti’s biggest mobile carrier, Digicel, and within days had an accurate map of Haiti’s dispersed population, based on anonymised location data taken from mobile towers. Later that year they revised their models slightly – this time tracking movement from cholera hotspots into areas not yet affected, providing authorities with effective early warning of outbreaks.
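To make the Haiti approach concrete, here’s a minimal sketch of how anonymised mobile records might be aggregated into before-and-after population snapshots. All subscriber IDs, days, and place names below are invented for illustration – real analyses use far richer data and models:

```python
from collections import Counter

# Hypothetical anonymised records: (subscriber_hash, day, tower_area).
# Everything here is made up; real carrier data is far larger and messier.
records = [
    ("a1", "day0", "port_au_prince"), ("a1", "day3", "leogane"),
    ("b2", "day0", "port_au_prince"), ("b2", "day3", "port_au_prince"),
    ("c3", "day0", "port_au_prince"), ("c3", "day3", "jacmel"),
]

def population_by_area(records, day):
    """Count each subscriber once, in the area where they were last seen on `day`."""
    last_seen = {}
    for sub, d, area in records:
        if d == day:
            last_seen[sub] = area  # later records for the same day overwrite earlier ones
    return Counter(last_seen.values())

before = population_by_area(records, "day0")
after = population_by_area(records, "day3")

# Net movement out of the capital between the two snapshots
displaced = before["port_au_prince"] - after["port_au_prince"]
print(displaced)  # → 2
```

Comparing the two snapshots shows where people actually went – exactly the map the aid agencies were missing.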


Big data's role in recovery

Fast forward a year to Japan, and the big data response had become even more sophisticated. In fact, the use of big data was so central to the recovery that three years after the event, Japan’s national broadcaster (NHK) created a documentary examining several of the initiatives. It’s well worth a watch if you can find the time.

Location data was key, helping track the movements of people in and out of the disaster zone as they looked for help, work, or shelter. In the city of Natori, it became clear that municipal population estimates (based on surveys) were overstated by 10-15%. Not only that, the population was highly mobile – something not captured in survey-based estimates. This had massive implications for the allocation of resources in the recovery, and helped the authorities divert some of the aid originally planned for Natori to other regions.


Prior to the disasters, Japan had launched a unified system to track businesses, covering 750,000 organisations nationwide. The data showed that 20,000 companies in the disaster zone nurtured a complex web of 220,000 business relationships across Japan. Two and a half years after the earthquake and tsunami, 22,000 of these business relationships had been lost – either through bankruptcy or basic infrastructure breakdown.

Understanding this meant the Government could more effectively target aid packages, and prioritise rebuilding infrastructure in support of those relationships. Four years after the disaster, over 1,700 businesses were gone… but without the data to drive recovery efforts, that number would likely have been much higher.
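The relationship-tracking idea above boils down to a graph question: which links break when a company fails? A very simplified sketch, with all company names invented:

```python
# Invented toy supply-chain graph: company -> set of partner companies.
relationships = {
    "fishery_a": {"cannery_b", "logistics_c"},
    "cannery_b": {"retailer_d"},
    "logistics_c": {"retailer_d"},
}

failed = {"cannery_b"}  # companies lost to the disaster (hypothetical)

# A relationship is broken if either endpoint failed
broken = [(a, b) for a, partners in relationships.items()
          for b in partners if a in failed or b in failed]
print(len(broken))  # → 2
```

Running the same query over 750,000 organisations is what let the authorities see which regions and supply chains to prop up first.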

And it wasn’t just hard data that helped the recovery. Almost 6 billion disaster-related tweets were sent in the two and a half years after the event; 290,000 of those related to Fukushima peaches. Text analysis of these helped the authorities track public sentiment around contamination as the first post-quake harvests were delivered, in turn allowing them to mitigate concerns through targeted marketing and information programs.


Theories and concepts pioneered in Japan are now relatively common. In New York City, data scientists discovered that postings on Flickr almost perfectly tracked Hurricane Sandy’s progress, offering an option for on-the-ground storm tracking. In Chennai, the relief effort after devastating floods was largely organised via Twitter, giving scientists measurable data to help understand how the efforts unfolded and to build better processes for next time. And in Nepal, Google created a tool to manage information about people’s whereabouts and health in the immediate post-quake chaos. An offshoot of the Karolinska Institute also used mobile phone data, once again, to help understand population movements.


Cementing the importance of big data in disaster recovery, last year the US and Japan formed an official group to develop a range of projects based on big data. With $2m in funding, the program was launched with a series of initiatives encouraging the sharing and processing of open data.

Lessons for New Zealand

All of which leads us to Kaikoura. The lessons of Japan, Haiti, New York and Nepal are all there, available to New Zealand. What are some of the ways big data can help us understand (and recover from) the earthquake?

  1. Review evacuation success
    Location-based data could be used to analyse the success of evacuation warnings and evacuation routes in the affected zones, improving engagement and future planning.

  2. Look at tourism impact
    With Kaikoura so dependent on tourism, location-based data could give the town accurate visibility of returning tourists, and show how itineraries are being rewritten by the earthquake.

  3. Allocate reparation budgets
    Broken transport routes and infrastructure will have direct impacts on GDP. Understanding and quantifying this will help authorities justify budget allocations for repair work and mitigate future impacts.

  4. Analyse social sentiment
    Analysis of social media could show sentiment around Kaikoura’s recovery, both locally and internationally. Identifying pain points could help in allocating recovery resources and getting key business infrastructure up and running more quickly.

  5. Prioritise transport repair work
    Finally, transport-related data sources such as Google traffic data and telematics could help identify points of congestion on alternate routes, and highlight optimum times for carrying out repairs. This is a significant issue when secondary roads are forced to take additional volume.
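The congestion analysis in point 5 could be sketched, in very simplified form, as finding the hours when a detour route is flowing freely enough to close a lane. The speeds and threshold below are invented for illustration:

```python
# Hypothetical hourly average speeds (km/h) on a detour route – invented numbers.
hourly_speed = {6: 70, 7: 45, 8: 30, 9: 40, 10: 65, 11: 72, 12: 68}

FREE_FLOW = 70  # assumed free-flow speed for this road

def congestion_index(speed):
    """0 = free flowing, approaching 1 = stationary traffic."""
    return max(0.0, 1 - speed / FREE_FLOW)

# Hours quiet enough to close a lane for repairs (index below 0.2)
repair_windows = [h for h, s in sorted(hourly_speed.items())
                  if congestion_index(s) < 0.2]
print(repair_windows)  # → [6, 10, 11, 12]
```

Fed with real probe data instead of toy numbers, the same calculation flags both the worst bottlenecks and the safest windows for roadworks.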

We’ll never be able to eradicate the threat of major natural disasters, but the data generated by now-commonplace technologies such as mobile phones is helping more people survive them, and then recover from their impact faster and more efficiently. If that’s the lasting legacy of Japan and the terrible events of 2011, the world will have gone some way towards creating a real triumph from the depths of tragedy.
