Selfie of the Earth

Want to use land wisely? Then analyse the ‘skin’ of the Earth

This story can also be read on Medium.

It was the seventh of December, 1972. That was when the crew of Apollo 17, acting as Planet Earth's selfie camera for the day, turned around and took the first ever photo of the entire planet fully lit at once. Taken from a great distance, the picture gave the viewer a general impression of what the Earth looked like.

It was round and blue.

Original image credit: NASA (public domain)

There you could see the striking ocean, the African continent, the ice of Antarctica, and a scattering of clouds.

That was nearly fifty years ago. Technology has improved greatly since then, and we can now take far more detailed pictures of the Earth. Many satellites orbit our blue planet, photographing its surface every day. These high-quality pictures are put together for a most important job: identifying the land cover of the Earth.


The human population is growing, and, more importantly, people are demanding more goods and resources from our planet. They want to develop new crop plantations, build new cities, and dig new holes in the ground from which they can take minerals and say “it’s mine!”

But  where exactly should these new things happen? You can’t just walk into  unoccupied land and say “okay, I’m building a city here.” Chances are,  even unoccupied land will be occupied — by creatures other than humans.

If we’re choosing places for projects, we should at least choose a good spot, as in one that does the least harm. It’s better to plant crops on bare land than in a forest. It’s better to spare species-rich habitat when expanding cities.

And if you want to choose a sensible spot, you’ll need to know about land covers.


Land covers are exactly what they sound like: the cover of the land. You could think of them as the skin of the Earth.

A  land cover is basically a description of all the elements on a  particular area of the Earth’s surface: elements such as vegetation,  water and built-up areas. Land covers have a direct effect on the  functioning of the Earth system.

A land covered by a forest doesn’t have the same temperature or water cycle as bare land. A building-covered area like a city doesn’t behave the same way, physically speaking, as an area covered by crops like corn or wheat. Water evaporates at different rates. Sunlight reflects off some better than others. Some heat up and cool down quickly, whereas others stay steady for a long time before giving in.

Land  covers are also a proxy to determine other land properties like  biodiversity or carbon content. Obviously, a primary forest stores more  carbon and contains more animal species than a crop field or a city.


Defining  land covers is about creating categories and finding ways to  differentiate them. You might wonder what the big deal is. So you want  to know if some land has a forest or a city — how hard can that be?

Seeing  the difference between a forest and a city is easy, even from space.  But if you want to differentiate primary forest, secondary forest and  tree plantations of rubber or oil-palm, that’s when things get tricky.  From space, these three land covers just look…green and bushy.

Mutually exclusive land cover categories are not easy to create. On the one hand, we are limited by our language and minds. For example, Indonesian has four words for a degraded forest, each corresponding to a different degree of degradation. In English, we lack these words and therefore these categories.

On  the other hand, there is a lot at stake when defining the word  “forest”. For example, many tree growers would like their plantation to  be qualified as a forest and thereby hide the negative environmental  impact of growing one species of tree on a large area. A rich, diverse  ecosystem is different from a repetitive plantation of acacia or  oil-palm — but if they’re both counted as “forest”, nobody could tell  the difference.


The Blue Marble from 1972 gives some very rough information about Earth’s land cover. From the picture, we can see that Earth is mainly covered by oceans, and there are some patches of continent in between.

But we know almost nothing about what’s covering the continent.

Nowadays, there are satellites orbiting closer to the Earth, which can take such high-resolution pictures that I can even see my own house in them. Landsat satellites are one example, with a resolution of 30 metres: each pixel of the picture represents a square of 30 metres on a side on the ground.

Other sensors, like the VEGETATION instrument carried on some SPOT satellites (the Satellite Pour l’Observation de la Terre), have a resolution of about one kilometre: lower-quality pictures, but you can glimpse a greater area all at once.

The higher the resolution of a satellite, the longer it takes to photograph the whole of the Earth’s surface. Landsat takes over two weeks to capture a complete set of pictures of the Earth’s land cover, while SPOT can take this same set of pictures twice in one day. That’s the trade-off between spatial resolution (the size of each pixel) and temporal resolution (how often you can take those pics to get a continuous live feed of what’s going on).
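To get a feel for the numbers behind that trade-off, here is a rough back-of-the-envelope sketch. The land-area figure is approximate, and real satellites image in overlapping swaths rather than a neat grid, so treat this purely as an illustration:

```python
# Back-of-the-envelope comparison of pixel counts at two resolutions.
# The Earth's land surface is roughly 149 million square kilometres.
LAND_AREA_KM2 = 149_000_000

def pixels_needed(pixel_size_m):
    """Number of square pixels of the given side length (in metres)
    needed to cover the land area once."""
    pixel_area_km2 = (pixel_size_m / 1000) ** 2
    return LAND_AREA_KM2 / pixel_area_km2

fine = pixels_needed(30)      # 30-metre pixels, Landsat-style
coarse = pixels_needed(1000)  # 1-kilometre pixels

print(f"30 m pixels needed: {fine:.1e}")
print(f"1 km pixels needed: {coarse:.1e}")
print(f"A 30 m sensor handles about {fine / coarse:.0f} times more pixels")
```

A thousandfold more detail in each direction squared into a millionfold more data would be unmanageable, which is why finer sensors settle for narrower swaths and slower revisits.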

Each picture of the Earth will contain many pixels. Two pictures can overlap and both include identical pixels. Since it’s not convenient to duplicate this information, Earth pictures are processed: duplicated pixels are deleted, and filters are applied. It’s like editing a selfie on Instagram, only more complicated.

A selfie of the Earth taken by a satellite is called a “tile”. All the tiles put side by side give a complete picture of the Earth’s land cover.


Since the Earth is big, and its tiles contain many 30-metre pixels, its land cover cannot be classified by hand. Even if the classification is done at a small scale, like a map of a single country’s cover, it’s cheaper and more efficient to automate everything.

Classification can either be supervised by humans or be completely automated. If it’s supervised, a human being must determine land cover classes such as forest, city, cropland and sea, and then tell the classifying algorithm “hey dude, these pixels I am showing you are forests, find all similar pixels and group them as forest; these are cities, group them as city…” and so on.

For  unsupervised classification, the algorithm directly groups similar  pixels together without any assistance. The researcher later gives a  label to each group: “these ones are forest, those are urban, that one’s  water” and so on.
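As a toy sketch of the supervised idea, here is a nearest-centroid classifier in Python. Everything in it (the band values, the class names, the distance rule) is invented for illustration; real classifiers are far more sophisticated:

```python
# Toy supervised classification: label pixels by the nearest class centroid.
# Each "pixel" is a pair of made-up band values (e.g. red, near-infrared).

# Training pixels a human has already labelled:
training = {
    "forest": [(0.05, 0.60), (0.07, 0.55)],
    "water":  [(0.02, 0.03), (0.03, 0.02)],
    "city":   [(0.30, 0.35), (0.28, 0.33)],
}

def centroid(points):
    """Average position of a list of (band1, band2) points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

centroids = {label: centroid(pts) for label, pts in training.items()}

def classify(pixel):
    """Assign the label whose class centroid is closest to this pixel."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(centroids, key=lambda label: dist2(pixel, centroids[label]))

print(classify((0.06, 0.58)))  # near the forest training pixels -> "forest"
print(classify((0.29, 0.34)))  # near the city training pixels -> "city"
```

An unsupervised algorithm would do the grouping step without the labelled training pixels, leaving the naming of each group to the researcher afterwards.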

Each classification type has its advantages and drawbacks in terms of cost, time, precision, and more. A variety of algorithms have been developed for each classification type.

But  to help the algorithms even more, it’s important to make different  land-cover pixels different from each other. Forests and plantations may  look similar, but we should try to make them look as different from  each other as we can.


Satellite sensors get information from the light reflected by the Earth’s surface. They sort this information into bands, similar to the strips of a rainbow. For example, Landsat’s sensors can sort the light into seven bands such as ‘red’, ‘infrared’, and ‘green’. Once this information is sorted, it’s possible to calculate values like a “vegetation index” or a “texture value”.

These indices and values are calculated with some fancy formulae which I won’t review here because it’s a bit boring, but the key idea to remember is that those indices and values help differentiate pixels from one another. Maybe one type of land cover has more “red” than another, even if they’re both mainly green.
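To give just one concrete taste: a widely used example is the Normalised Difference Vegetation Index (NDVI), built from the red and near-infrared bands. Healthy plants reflect far more infrared than red light, so vegetated pixels score close to 1. The reflectance values below are made up for illustration:

```python
def ndvi(red, nir):
    """Normalised Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    Ranges from -1 to 1; dense green vegetation scores high."""
    return (nir - red) / (nir + red)

# Illustrative reflectance values, not real measurements:
print(f"Forest pixel: {ndvi(red=0.05, nir=0.60):.2f}")  # high: lots of vegetation
print(f"Water pixel:  {ndvi(red=0.10, nir=0.05):.2f}")  # negative: water absorbs infrared
print(f"Bare soil:    {ndvi(red=0.25, nir=0.35):.2f}")  # somewhere in between
```

Two pixels that look equally green to the eye can still have quite different NDVI scores, and that difference is exactly what the classifying algorithm feeds on.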

Theoretically,  the more  indices and values you have, the easier it is to  differentiate pixels. The more details you know about a person, the  easier it is to spot them in a crowd.

Once  all pixels have had their bands, indices, and all the rest calculated,  the classifying algorithm can recognise differences between pixels and  easily group them. The algorithm gives the (hopefully) right label to  each pixel, and draws a colourful map with a legend of the land cover  classes. Looking at the map, you can say: “Hm, there is a lake here” or  “Oh God, farmers have expanded their cropland into the forest without  telling me!”


Once  a map is created, how do you make sure it’s an image of the world as it  is, and not the artistic creation of a childish algorithm?

One way to check whether the algorithm has classified the land cover well is to test it with land covers whose type you already know. Before running the algorithm, you could go to the field and check the land cover of a defined area, and perhaps take photos or videos to supplement the data. For large areas — or lazy scientists who do not like long hikes in the forest and the countryside — it’s also possible to manually identify land cover with satellite imagery. That’s not as reliable as ground truth, but for large areas it is more realistic and cheaper.

When  you’ve identified and labelled the land cover of known pieces of land,  you can run the algorithm and check whether it’s doing well.

Scientists usually organise the mistakes of the algorithm in a table, and check which land covers were confused with one another. “Plantations were often identified as forest by the algorithm”, they may remark, or “bare ground was misidentified as water”.
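That table is known as a confusion matrix, and the idea is simple enough to sketch in a few lines of Python. The validation points below are invented; a real accuracy assessment would use hundreds of them:

```python
from collections import Counter

# What we observed on the ground vs what the algorithm predicted:
truth     = ["forest", "forest", "plantation", "water", "bare", "forest"]
predicted = ["forest", "forest", "forest",     "water", "water", "forest"]

# Count each (true label, predicted label) pair:
confusion = Counter(zip(truth, predicted))

for (actual, guess), count in sorted(confusion.items()):
    marker = "" if actual == guess else "  <- confused!"
    print(f"{actual:>10} classified as {guess:<10} x{count}{marker}")

correct = sum(c for (a, g), c in confusion.items() if a == g)
print(f"Overall accuracy: {correct / len(truth):.0%}")
```

In this made-up run, the plantation mistaken for forest and the bare ground mistaken for water would be exactly the kinds of confusion scientists flag for improvement.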

Checking which land covers the algorithm struggles to identify gives valuable information for improving the classification later. Scientists can either improve the algorithm or sample better training and testing areas. They can also choose images from other satellites, calculate different vegetation indices, or go tweak those precise formulae again.

Once you’ve created a map and know how accurate the classifying algorithm is, you could compare the map with other available maps or land cover data. A map that agrees with independent land cover data, or with other maps, is a good sign that it can be trusted.


Watching the Earth from space is important for getting a global picture of what covers our blue planet. Of course there is water in the ocean, but the land cover of the continents is also a key component of the Earth system.

Human  beings are completely changing the appearance of the Earth — moving its  skin, reorganising patches, putting some make-up here and there. And  when we do it, satellite imagery and remote sensing are great tools to  ensure we’re not messing up its natural beauty.

Because we wouldn’t want to spoil that selfie, would we?

