Challenges of machine learning model validation using correlated behaviour data: Evaluation of cross-validation strategies and accuracy measures.

Bibliographic Details
Main Authors: Bence Ferdinandy, Linda Gerencsér, Luca Corrieri, Paula Perez, Dóra Újváry, Gábor Csizmadia, Ádám Miklósi
Format: Article
Language: English
Published: Public Library of Science (PLoS) 2020-01-01
Series: PLoS ONE
Online Access: https://doi.org/10.1371/journal.pone.0236092
Description
Summary: Automated monitoring of the movements and behaviour of animals is a valuable research tool. Recently, machine learning tools have been applied to many species to classify units of behaviour. For the monitoring of wild species, collecting enough data to train models can be problematic, so we examine how machine learning models trained on one species can be applied to another closely related species with similar behavioural conformation. We contrast two ways of calculating accuracy, termed here overall and threshold accuracy, because the field has yet to define solid standards for reporting and measuring classification performance. We measure 21 dogs and 7 wolves and find that overall accuracies are between 51 and 60% for classifying 8 behaviours (lay, sit, stand, walk, trot, run, eat, drink) when training and testing data are from the same species, and between 41 and 51% when training and testing are cross-species. We show that using data from dogs to predict the behaviour of wolves is feasible. We also show that optimising the model for overall accuracy leads to similar overall and threshold accuracies, while optimising for threshold accuracy leads to threshold accuracies well above 80% but very low overall accuracies, often below chance level. Moreover, we show that the most common method of dividing the data into training and testing sets (random selection of test data) overestimates the accuracy of models when they are applied to data from new specimens. Consequently, we argue that for the most common goals of animal behaviour recognition, overall accuracy should be the preferred metric. Considering that the goal is often to collect movement data without other methods of observation, we argue that training and testing data should be divided by individual and not randomly.
ISSN: 1932-6203
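
The abstract's two methodological points, splitting training and testing data by individual rather than randomly, and reporting overall versus confidence-thresholded accuracy, can be illustrated with a short scikit-learn sketch. The synthetic accelerometer-style data, the RandomForestClassifier, the 0.8 confidence cut-off, and the threshold_accuracy helper below are illustrative assumptions, not the paper's actual pipeline; the authors' exact features, model, and definition of threshold accuracy may differ.

# Sketch: random vs. per-individual cross-validation, plus an "overall" and a
# confidence-thresholded accuracy. All data below are synthetic and the model
# choice is an assumption; only the evaluation mechanics mirror the abstract.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, KFold, cross_val_score

rng = np.random.default_rng(0)
n_individuals, windows_per_individual, n_features, n_classes = 21, 200, 12, 8
n_samples = n_individuals * windows_per_individual

# Each animal gets its own "movement style" offset, so windows from the same
# animal are correlated -- the situation the paper warns about.
groups = np.repeat(np.arange(n_individuals), windows_per_individual)
y = rng.integers(0, n_classes, size=n_samples)
class_centres = rng.normal(scale=1.0, size=(n_classes, n_features))
indiv_offsets = rng.normal(scale=1.5, size=(n_individuals, n_features))
X = class_centres[y] + indiv_offsets[groups] + rng.normal(size=(n_samples, n_features))

clf = RandomForestClassifier(n_estimators=100, random_state=0)

# Random split: windows from the same animal land in both training and test
# folds, which overestimates accuracy on animals the model has never seen.
random_acc = cross_val_score(clf, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0))

# Split by individual: every window of an animal stays in the same fold,
# mimicking deployment on a new specimen.
grouped_acc = cross_val_score(clf, X, y, groups=groups, cv=GroupKFold(n_splits=5))

print(f"random-split overall accuracy:   {random_acc.mean():.3f}")
print(f"per-individual overall accuracy: {grouped_acc.mean():.3f}")


def threshold_accuracy(model, X_test, y_test, threshold=0.8):
    """Accuracy over only those windows whose top class probability exceeds
    `threshold` -- one plausible reading of the abstract's 'threshold
    accuracy'; the paper's exact definition may differ."""
    proba = model.predict_proba(X_test)
    confident = proba.max(axis=1) >= threshold
    if not confident.any():
        return float("nan")
    preds = model.classes_[proba.argmax(axis=1)]
    return float((preds[confident] == y_test[confident]).mean())

With correlated per-animal data like this, the random split typically reports a higher score than the per-individual split, illustrating the kind of overestimation the abstract describes; threshold_accuracy discards low-confidence predictions, which is why it can be high even when overall accuracy is poor.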