Google Launches AI to Decode Dolphin Talk, Runs on Pixel Phones
Google has introduced DolphinGemma, an open-source AI model designed to analyze and decode dolphin communication by studying their clicks and whistles. Released on National Dolphin Day, the model aims to help close the communication gap between dolphins and humans by identifying structure in their vocalizations.

Developed in collaboration with Georgia Tech, DolphinGemma is trained on extensive audio and video data collected since 1985 by the Wild Dolphin Project, the longest-running underwater dolphin research effort. At roughly 400 million parameters, the model is compact enough to run on the Pixel smartphones that field researchers already carry, streamlining the analysis of dolphin sounds in the field. The project is paired with CHAT (Cetacean Hearing Augmentation Telemetry), a system that associates specific sounds with particular objects, with the goal of building a shared vocabulary between researchers and dolphins.
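To make the sound-to-object pairing idea behind CHAT concrete, here is a minimal, purely illustrative sketch: it matches a detected sound's embedding against a small stored vocabulary by cosine similarity. The embedding vectors, vocabulary entries, and the `match_sound` helper are hypothetical placeholders, not DolphinGemma's or CHAT's actual interface.

```python
# Illustrative sketch only: match an incoming sound embedding to the closest
# entry in a small sound-to-object vocabulary, loosely mirroring the idea
# behind CHAT. The embeddings and vocabulary here are synthetic placeholders,
# not DolphinGemma's or CHAT's real API.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


# Hypothetical shared vocabulary: each known sound is stored as an
# embedding vector paired with the object it has been associated with.
rng = np.random.default_rng(0)
vocabulary = {
    "scarf": rng.normal(size=64),
    "sargassum": rng.normal(size=64),
    "rope": rng.normal(size=64),
}


def match_sound(sound_embedding: np.ndarray, threshold: float = 0.2):
    """Return the best-matching object label, or None if nothing is close enough."""
    best_label, best_score = None, -1.0
    for label, reference in vocabulary.items():
        score = cosine_similarity(sound_embedding, reference)
        if score > best_score:
            best_label, best_score = label, score
    return (best_label, best_score) if best_score >= threshold else (None, best_score)


if __name__ == "__main__":
    # Simulate a detected sound that is a noisy copy of the "scarf" entry.
    detected = vocabulary["scarf"] + 0.1 * rng.normal(size=64)
    print(match_sound(detected))
```

In practice, the heavy lifting, turning raw audio into embeddings and deciding what counts as a match, is exactly the part a model like DolphinGemma would handle; the sketch only shows the lookup step.

The researchers plan to extend the model to other dolphin species, deepening insight into dolphin communication patterns. Amid a growing wave of efforts to decode animal sounds, DolphinGemma represents a significant step toward understanding interspecies communication.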
Source 🔗