Speech melody rules may help AI better understand tones, emotions
Xinhua
22 Apr 2025
JERUSALEM, April 22 (Xinhua) -- Israeli researchers have discovered that the speech melody of spoken English, known as prosody, operates much like an independent language, complete with its own "vocabulary" and rules, the Weizmann Institute of Science said. The finding may enable artificial intelligence to better understand tones and emotions in conversations.
Prosody encompasses subtle elements such as pitch, loudness, tempo, and voice quality that go beyond words to convey emotion, intent, and context. It comprises hundreds of short melodic patterns, such as a quick rise and fall in pitch, that often signal specific emotions like curiosity, surprise, or enthusiasm.
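As a rough illustration of the kind of pattern described here, the sketch below checks a short pitch contour for a quick rise followed by a fall. It is a hypothetical example, not the study's code, and the threshold used is an assumption.

```python
import numpy as np

def is_rise_fall(f0: np.ndarray, min_change_hz: float = 15.0) -> bool:
    """Return True if a pitch contour rises to a peak and then falls.

    f0: fundamental-frequency values (Hz) over a short stretch of speech,
        with unvoiced frames already removed.
    min_change_hz: assumed minimum rise and fall (Hz) needed to count as a
        real pitch movement rather than measurement noise.
    """
    peak = int(np.argmax(f0))
    rise = f0[peak] - f0[:peak + 1].min()   # how far the pitch climbs to the peak
    fall = f0[peak] - f0[peak:].min()       # how far it drops after the peak
    # A "rise-fall" needs an interior peak and sizeable movement on both sides.
    return 0 < peak < len(f0) - 1 and rise >= min_change_hz and fall >= min_change_hz

# Example: a contour that climbs from 180 Hz to 230 Hz and falls back to 190 Hz.
contour = np.array([180, 195, 215, 230, 215, 200, 190], dtype=float)
print(is_rise_fall(contour))  # True
```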
Published in the U.S. journal Proceedings of the National Academy of Sciences, the study revealed that these melodic patterns frequently appear in pairs that function much like simple sentences, with each pair introducing a new idea into the conversation.
The researchers demonstrated that these melodic patterns also follow predictable rules, which allowed the team to anticipate what kind of melody is likely to come next in a dialogue.
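To make the idea of predictable ordering rules concrete, one could imagine a simple transition model over pattern labels, counting which melody tends to follow which, much like a bigram language model. The sketch below, with made-up labels, illustrates the concept only; it is not the study's method.

```python
from collections import Counter, defaultdict

def train_transitions(label_sequences):
    """Count how often each prosodic pattern label follows another."""
    counts = defaultdict(Counter)
    for seq in label_sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    return counts

def most_likely_next(counts, label):
    """Predict the most frequent follower of a given pattern label."""
    followers = counts.get(label)
    return followers.most_common(1)[0][0] if followers else None

# Toy corpus: each list is the sequence of pattern labels in one conversation turn.
corpus = [
    ["rise_fall", "low_flat", "rise_fall", "high_rise"],
    ["rise_fall", "low_flat", "rise_fall", "low_flat", "high_rise"],
]
model = train_transitions(corpus)
print(most_likely_next(model, "rise_fall"))  # "low_flat"
```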
By analyzing vast databases of spontaneous phone and face-to-face conversations, the team was able to identify and classify these prosodic structures. Researchers believe their breakthrough could significantly enhance AI systems by teaching them to interpret and respond to human speech with a deeper understanding of tone and emotion.
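A very simplified picture of how such structures might be identified and classified automatically: normalize short pitch contours to a common length and group similar shapes with a clustering algorithm, each cluster becoming one entry in the prosodic "vocabulary." The sketch below uses scikit-learn's KMeans purely as an assumed illustration; the study's actual pipeline is not detailed in this report.

```python
import numpy as np
from sklearn.cluster import KMeans

def normalize_contour(f0, length=20):
    """Resample a pitch contour to a fixed length and zero-center it,
    so contours of different durations and voice ranges can be compared."""
    x_old = np.linspace(0, 1, len(f0))
    x_new = np.linspace(0, 1, length)
    resampled = np.interp(x_new, x_old, f0)
    return resampled - resampled.mean()

# Toy data: a handful of contours, some rising and some falling (Hz).
raw_contours = [
    [180, 200, 220, 240],            # rise
    [190, 205, 225, 245, 260],       # rise
    [250, 230, 210, 195],            # fall
    [260, 240, 215, 200, 190],       # fall
]
X = np.vstack([normalize_contour(c) for c in raw_contours])

# Group similar contour shapes; each cluster is one "word" in the vocabulary.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)  # e.g. [0 0 1 1]: rises in one cluster, falls in the other
```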
They said that future automated systems based on the findings will compile a "dictionary" of prosody and identify its syntactic rules for every human language and for different speaker populations.
They explained that teaching prosody to a computer model would add a significant layer of human expression to robotic systems.
One possible application is integrating such a prosodic dictionary into brain implants for people who are unable to speak, making synthetic speech sound more natural and bringing it closer to truly human-like communication.