Planet Earth: How to categorize its surface with the “eyes of a machine”

We are a force of nature. Humans have modified Planet Earth’s surface according to their desires. Changes that used to take hundreds, if not thousands, of years to occur can now happen in a matter of weeks.

The land cover map is one of the ways we’ve tried to keep track of this constant recasting of our world.

Researchers categorize such scenes using aerial photographs or satellite images.

What are the locations of grasslands and forests, roads and buildings, and water, snow, and ice?

These maps show us where the resources are located on Planet Earth and assist us in managing them. They help with urban planning, crop yield estimation, flood risk analysis, and biodiversity monitoring, to name a few applications. The problem is containing the onslaught of new data that threatens to render any land cover map obsolete as soon as it is published.

It’s for this reason that artificial intelligence (AI) techniques are becoming increasingly popular among researchers.

Take, for example, the map added this week to Esri’s Living Atlas. Esri is the leading provider of geographic information system (GIS) software in the United States.

From images provided by the European Union’s Sentinel-2 satellite network, Esri has created a global land cover map for 2020.

Sentinel-2 is a pair of orbiting satellites that continuously image Planet Earth’s surface at a resolution of 10 meters (the size of each pixel in an image). Terabytes of data are produced every day.

A team of researchers would struggle to fully characterize the contents of all those pixels, but a machine can, and does so quickly. “In a typical workflow, a 2020 land cover map would probably not come out until the middle or end of this year, because it takes that much processing time, that much verification and validation work,” noted Sean Breyer, who oversees Esri’s Living Atlas of the World initiative.


“However, with the help of our collaborators at Impact Observatory, we’ve devised a process that employs artificial intelligence. It took less than a week to compute the entire planet’s land cover. That adds a whole new layer to land cover mapping, allowing us to undertake land cover mapping on a monthly or even daily basis for certain locations,” he told BBC News.

How Impact Observatory built the AI land classification algorithm

Impact Observatory used a training dataset of five billion human-labeled image pixels to construct its AI land classification algorithm. The 2020 Sentinel-2 scene collection was then fed to this model for classification, and it processed over 400,000 Earth observations to produce the final map.
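To picture the general workflow of supervised pixel classification, the sketch below trains a simple classifier on labeled pixel spectra and then labels every pixel of a new scene. It is a minimal illustration only, not Impact Observatory’s actual model (whose architecture is not described in the article); the band values, class codes, and data sizes are synthetic placeholders.

```python
# Minimal sketch of supervised land cover classification on pixel spectra.
# NOT Impact Observatory's model; it only illustrates the workflow:
# train on human-labeled pixels, then classify every pixel of a new scene.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: one row per labeled pixel, one column per
# spectral band (stand-in for the 10 m Sentinel-2 bands), plus a class label.
rng = np.random.default_rng(0)
n_labeled, n_bands = 10_000, 4               # stand-in for billions of labeled pixels
X_train = rng.random((n_labeled, n_bands))   # band reflectances, scaled 0-1
y_train = rng.integers(0, 6, n_labeled)      # e.g. 0=water, 1=forest, 2=crops, ...

clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
clf.fit(X_train, y_train)

# Classify a new "scene": a (height, width, bands) array of reflectances.
h, w = 512, 512
scene = rng.random((h, w, n_bands))
land_cover = clf.predict(scene.reshape(-1, n_bands)).reshape(h, w)
print(land_cover.shape)  # (512, 512) map of per-pixel class labels
```

In practice the per-pixel labels would then be mosaicked across hundreds of thousands of scenes to assemble a global map, which is the step the article says took less than a week of compute.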

“So we had experts categorize these photographs, and then we’re feeding knowledge to the model, much like a child learns,” said Dr. Caitlin Kontgis, Impact Observatory’s head of science and machine learning.


“The model learns these patterns as iterations and information increase. As a result, if it sees ice in one place, it can find ice in another. The data set we used to train the model is really unique in that it allows us to look at both spectral and spatial variables, such as the colors in a satellite image. We were able to train this model and then run it over the world in less than a week, resulting in the highest-resolution map accessible.”
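Kontgis’s point about combining spectral and spatial variables can be thought of as building, for every pixel, a feature vector that holds both its own band values and a summary of its neighborhood. The sketch below illustrates that idea with a simple local-mean texture feature; it is an assumption-laden stand-in, since the article does not describe the actual features or architecture used.

```python
# Illustrative only: combine per-pixel spectral values with a simple
# spatial feature (local mean of each band) before classification.
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(1)
h, w, n_bands = 256, 256, 4
scene = rng.random((h, w, n_bands))   # stand-in for Sentinel-2 reflectances

# Spectral features: the raw band values at each pixel.
spectral = scene.reshape(-1, n_bands)

# Spatial features: each band averaged over a 5x5 neighborhood,
# a crude stand-in for the texture/context a real model would learn.
local_mean = np.stack(
    [uniform_filter(scene[:, :, b], size=5) for b in range(n_bands)], axis=-1
)
spatial = local_mean.reshape(-1, n_bands)

# Final per-pixel feature matrix: (n_pixels, 2 * n_bands)
features = np.concatenate([spectral, spatial], axis=1)
print(features.shape)
```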
