Artificial Intelligence identifies bird calls
New results from researchers at CQUniversity Australia promise to improve the speed of bird identification from audio recordings made in natural settings, using powerful, computer-driven neural networks.
The research was led by PhD student Francisco Bravo Sanchez along with co-authors Professor Steven Moore, Dr Rahat Hossain and Dr Nathan English.
The use of autonomous recordings of animal sounds to detect species is a popular conservation tool, but it usually produces thousands of hours of raw audio that, in the past, had to be reviewed by a trained human ear.
Advances in hardware, software and signal processing now allow computers to do this with success (around 75 per cent accuracy), but it is still a laborious, processor-intensive and technologically complex process.
Current classification software utilises sound features extracted from the recording rather than the sound itself, with varying degrees of success.
Previously, the raw audio recordings were pre-processed, with what were thought to be the important sections selected out for further examination and identification.
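To illustrate that conventional pipeline, here is a minimal sketch of the feature-extraction step using the open-source librosa library. The file name and parameter values are illustrative assumptions, not those used in the study.

```python
import librosa

# Load a field recording; the file name is a placeholder.
waveform, sample_rate = librosa.load("field_recording.wav", sr=22050)

# Summarise the audio as Mel-frequency cepstral coefficients (MFCCs),
# features originally designed for human speech. A classifier is then
# trained on these features rather than on the sound itself.
mfccs = librosa.feature.mfcc(y=waveform, sr=sample_rate, n_mfcc=20)
print(mfccs.shape)  # (n_mfcc, n_frames)
```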
Bravo Sanchez's work leap-frogs the pre-processing step and instead feeds the raw sound into a convolutional neural network (CNN), which learns for itself which characteristics of the audio best identify the bird species calling on the recording.
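As a rough sketch of this idea (not the team's published architecture, which is available through their GitHub code), a one-dimensional CNN can consume the raw waveform directly, with no spectrogram or feature-extraction stage. The layer sizes and species count below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RawAudioCNN(nn.Module):
    def __init__(self, n_species: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            # A wide first filter lets the network learn its own
            # frequency-selective filters directly from raw samples.
            nn.Conv1d(1, 16, kernel_size=251, stride=4),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(32, n_species),
        )

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, 1, n_samples) of raw audio, no spectrogram.
        return self.classifier(self.features(waveform))

model = RawAudioCNN()
scores = model(torch.randn(2, 1, 16000))  # two one-second clips at 16 kHz
print(scores.shape)  # torch.Size([2, 10]): one score per candidate species
```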
"Wildlife and computers have always been two of my passions and this PhD topic combines both of them'" Bravo Sanchez explained.
"I have a biologist background but computing has always been an important skill in my profession. And while I'm not a computer expert the open source software community makes it feasible for people like me to undertake this type of research."
In addition to eliminating any bias the pre-processing might introduce, Bravo Sanchez said the process of letting the 'machine do the heavy lifting' was about twice as fast and yielded results (around 70 per cent accuracy) similar to traditional methods.
"It's the difference between watching someone with headphones tap out a beat and lip sync a song or listening to the headphones yourself -- you'll figure out which song is playing more quickly the more directly connected to the music you are'" Dr English further explained.
Bravo Sanchez said the research findings would offer 'a glimpse into a different way of processing animal sounds without relying on tools designed for human speech'.
"Automatically identifying species from autonomous recordings is a very useful conservation tool' but still requires a lot of expertise. Our research shows that we can facilitate the task by reducing the number of steps and the choices required to process animal sounds."
The research also uses open-source software that is accessible to anyone in the world (with the programming skills to use it), and Bravo Sanchez has uploaded his code to GitHub so that others can use his work.
"We will be trying to improve our results in the future' but we are sharing our code so that others can experiment with their datasets in the hope that we all can come up with better solutions that would help wildlife conservation or the search for rare species."
The team hope this work can be used in medical and industrial settings that also use acoustic monitoring in day-to-day applications.