AnimalCLAP: Taxonomy-aware language-audio pretraining for species recognition and trait inference
May 1, 2026
Risa Shinoda
Kaede Shiohara
Nakamasa Inoue
Hiroaki Santo
Fumio Okura
Abstract
Animal vocalizations provide crucial insights for wildlife assessment, particularly in complex environments such as forests, aiding species identification and ecological monitoring. Recent advances in deep learning have enabled automatic classification of species from their vocalizations. However, classifying species unseen during training remains challenging. To address this limitation, we introduce AnimalCLAP, a taxonomy-aware language-audio framework comprising a new dataset and model that incorporate hierarchical biological information. Specifically, our vocalization dataset consists of 4,225 hours of recordings covering 6,823 species, annotated with 22 ecological traits. The AnimalCLAP model is trained on this dataset to align audio and textual representations using taxonomic structures, improving recognition of unseen species. We demonstrate that our model effectively infers ecological and biological attributes of species directly from their vocalizations, achieving superior performance compared to CLAP.
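To give a sense of the overall approach, the sketch below shows a generic CLAP-style contrastive alignment between audio embeddings and taxonomy-aware text prompts. The prompt template, encoder choices, and loss details are illustrative assumptions, not the exact recipe used in AnimalCLAP.

```python
# Minimal sketch of CLAP-style audio-text contrastive alignment with
# taxonomy-aware captions. All names and templates here are hypothetical.
import torch
import torch.nn.functional as F

def taxonomy_prompt(species, genus, family, order):
    # Hypothetical prompt template: fold the taxonomic hierarchy into the caption
    # so the text encoder sees hierarchical biological context.
    return (f"a vocalization of {species}, genus {genus}, "
            f"family {family}, order {order}")

def clap_contrastive_loss(audio_emb, text_emb, temperature=0.07):
    # Symmetric InfoNCE over a batch of paired audio/text embeddings,
    # as in CLIP/CLAP-style pretraining.
    audio_emb = F.normalize(audio_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = audio_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_a2t = F.cross_entropy(logits, targets)
    loss_t2a = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_a2t + loss_t2a)

if __name__ == "__main__":
    # Toy example: random tensors stand in for audio- and text-encoder outputs.
    audio_emb = torch.randn(4, 512)
    text_emb = torch.randn(4, 512)
    print(taxonomy_prompt("Cyanocitta cristata", "Cyanocitta", "Corvidae", "Passeriformes"))
    print(clap_contrastive_loss(audio_emb, text_emb).item())
```

At inference time, the same prompt template could be used to build text embeddings for species unseen during training, so recognition reduces to nearest-neighbor matching between an audio embedding and candidate taxonomy prompts.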
Publication
Proc. of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2026)