A more economical way to crowdsource camera trap image classifications?

Pen-Yuan Hsing and Philip Stephens, Conservation Ecology Group, Department of Biosciences, Durham University, Durham, United Kingdom DH1 3LE (@MammalWeb)

This blog post is shared under the Creative Commons Attribution-ShareAlike 4.0 license. You can read the full research paper here.

To conserve biodiversity effectively, we need to know where and in what abundance it occurs. Breeding bird surveys, which happen in many countries every year, are a great example of how high-quality biodiversity data can underpin science and policy. In contrast to birds, however, many mammal species are elusive and surprisingly poorly documented. Motion-sensing camera traps can change this, owing to the relative ease with which they can be set up across a wide area to observe and document mammals in a non-intrusive way. As a result, camera trapping is a highly active focus of research in ecology and conservation (see, for example, recent contributions in Remote Sensing in Ecology and Evolution, such as the special issue in 2017 and a more recent article from June 2018).

A major challenge for camera trapping is dealing with the sheer volume of data that can be produced. Even modest studies can rapidly generate data sets numbering tens or hundreds of thousands of images. Someone must look at each photo and record the animals captured in it. This classification process can be a huge drain on a researcher’s time and can significantly delay the ecological insights that camera trapping can provide.

In recent years, many researchers have turned to online crowdsourcing platforms where anyone who is interested can help with data processing, which includes classifying camera trap photos. For example, the highly successful Snapshot Serengeti project attracted tens of thousands of participants to classify more than a million camera trap photos. An important trick of the trade is to ask multiple participants to classify each photo. This way, researchers can aggregate those “votes” to calculate a consensus classification. Once a consensus is achieved for a photo, it can be “retired” (i.e., no longer shown to visitors) so that users can look at other images in the dataset.
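To make the aggregation step concrete, here is a minimal sketch in Python of how votes for a single photo might be pooled and the photo retired once agreement is high enough. The function, data structures and thresholds are hypothetical illustrations, not MammalWeb's actual implementation.

```python
from collections import Counter

def consensus(votes, min_votes=5, agreement=0.8):
    """Pool user classifications ("votes") for one photo.

    votes     -- species labels submitted by different users,
                 e.g. ["badger", "badger", "fox", "badger", "badger"]
    min_votes -- do not retire a photo before this many classifications
    agreement -- fraction of votes that must agree before retirement

    Returns (consensus_label_or_None, retire_photo).
    """
    if len(votes) < min_votes:
        return None, False
    label, count = Counter(votes).most_common(1)[0]
    if count / len(votes) >= agreement:
        return label, True   # consensus reached: retire the photo
    return None, False       # keep showing the photo to other users

# Five users classified this photo; four agree that it shows a badger
print(consensus(["badger", "badger", "fox", "badger", "badger"]))  # ('badger', True)
```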

Motivated by the need to find better ways to monitor mammals in the United Kingdom, we started MammalWeb, a citizen science project for monitoring wild mammals in north-east England. The project is unusual in that MammalWeb citizen scientists can participate in one or both of two ways: as a “Trapper”, who sets up camera traps and uploads photos and associated data to our web platform; or as a “Spotter”, who logs in to help classify those photos (Fig. 1). One challenge for MammalWeb is that we have a much smaller group of Spotters (hundreds of users) than big, international projects like Snapshot Serengeti (tens of thousands of users). Therefore, we wanted to see whether those consensus classifications could be reached more economically, so that user effort can be focused on the photos requiring more scrutiny. If we can do this, crowd-sourced camera trapping projects big and small can all benefit.


Fig. 1. The MammalWeb “Spotter” interface, where users can help to classify camera trap photos. Shared under CC BY-SA license

We started by looking at a “gold standard” subset of images for which we already know the species pictured. By comparing our user-submitted classifications to this gold standard, we can gauge how accurate our Spotters are. On this basis, MammalWeb Spotters have over a 90% chance of correctly identifying the presence of an animal (if it is indeed present) for 10 out of 16 frequently seen species. Where user classifications are incorrect, the reasons seem to depend on the type of species. For example, classification accuracy for small rodents is lower because, often, they are simply missed by a Spotter. Other species are more frequently misidentified rather than missed altogether. An example of this is the brown hare (Fig. 2), which is often confused with the European rabbit.


Fig. 2. Camera trap photo of a brown hare, which is often confused with the European rabbit. Shared under CC BY-SA license
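As a rough illustration of how such accuracy figures can be derived, the sketch below compares Spotter classifications of gold-standard photos against the known species, separating photos where the animal was missed from photos where it was misidentified. The data format is invented for the example and is not the structure used in the paper.

```python
from collections import defaultdict

def spotter_accuracy(records):
    """Summarise Spotter performance on a gold-standard photo set.

    records -- iterable of (true_species, reported_label) pairs, where
               reported_label is the species a Spotter submitted,
               or None if they reported seeing no animal at all.

    Returns {species: {"correct": ..., "missed": ..., "misidentified": ...}}
    as proportions of the classifications for that species.
    """
    tallies = defaultdict(lambda: {"correct": 0, "missed": 0, "misidentified": 0})
    for true_species, reported in records:
        if reported == true_species:
            tallies[true_species]["correct"] += 1
        elif reported is None:
            tallies[true_species]["missed"] += 1          # e.g. overlooked small rodents
        else:
            tallies[true_species]["misidentified"] += 1   # e.g. hare recorded as rabbit
    return {sp: {k: v / sum(t.values()) for k, v in t.items()}
            for sp, t in tallies.items()}

# Three gold-standard brown hare photos: two correct, one confused with a rabbit
print(spotter_accuracy([("brown hare", "brown hare"),
                        ("brown hare", "European rabbit"),
                        ("brown hare", "brown hare")]))
```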

We also calculated the confidence we can have in a consensus classification, given the number and types of user classifications that underlie it. We found, for example, that very few classifications saying a badger is present are needed for us to be confident that it really is there; this is because badgers are fairly easy to identify. However, for more “ambiguous” species, such as the brown hare, we need more people to look at the photos before we can be certain whether or not it is there. Users are extremely unlikely to provide “false positive” classifications, suggesting that a species is pictured when the image sequence actually contains no wildlife. Hence, even when many classifications suggest that an image sequence is devoid of wildlife, a single dissenter who reports an animal is more likely to be correct than the majority.
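One simple way to picture this (not the actual statistical model in the paper) is sequential Bayesian updating with species-specific error rates: if users rarely miss a species, a few agreeing votes are enough; if they often miss it but almost never invent it, a single “present” vote can outweigh several “nothing here” votes. All of the rates and the prior below are made-up illustrative values.

```python
def presence_confidence(votes_present, votes_absent,
                        p_detect, p_false_positive, prior=0.5):
    """Posterior probability that the species really is in the sequence.

    votes_present    -- users who said the species is there
    votes_absent     -- users who said it is not
    p_detect         -- chance a user reports the species when it IS there
                        (high for badgers, lower for easily missed species)
    p_false_positive -- chance a user reports it when it is NOT there
                        (very low in our data)
    prior            -- assumed prior probability of presence

    Assumes votes are independent, which is a simplification.
    """
    like_present = (p_detect ** votes_present) * ((1 - p_detect) ** votes_absent)
    like_absent = (p_false_positive ** votes_present) * ((1 - p_false_positive) ** votes_absent)
    evidence = like_present * prior + like_absent * (1 - prior)
    return like_present * prior / evidence

# A species that is often missed (60% detection) but almost never invented:
# one "present" vote against four "absent" votes still gives ~94% confidence.
print(presence_confidence(1, 4, p_detect=0.6, p_false_positive=0.001))
```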

What all this means is that, when crowdsourcing the classification of camera trap photos and calculating consensus classifications, it may be helpful to factor in (1) differences in detectability between species, and (2) the relative influence of different types of incorrect classifications (where species have been missed versus where they have been misclassified). Accounting for both factors can better focus user classification effort on the photos requiring more scrutiny.

As projects like MammalWeb, Snapshot Serengeti and eMammal accumulate large bodies of classified camera trap photos, those classifications can be used as training data for machine learning algorithms that automatically classify wildlife photos. The first steps look very promising, emphasising how critical it is for researchers to share their data and results so that we can build on each other’s progress and address the need for large-scale monitoring in this time of rapid ecological change.

More generally, the MammalWeb project has also demonstrated that citizen science is not limited to scientists crowdsourcing, or “outsourcing”, their work to volunteers. MammalWeb citizen scientists have not only been instrumental in setting up camera traps to observe wild mammals, but have also taken the initiative and started their own wildlife surveys. Some use the data they collect to inform public planning and engage policy makers, while others develop and deliver camera trapping workshops to other wildlife groups. Can citizen science camera trapping be as successful as other citizen-initiated remote sensing projects such as aerial mapping?
