Conveying Emotion in Robotic Speech: Lessons Learned
Crumpton, J., & Bethel, C. L. (2014). Conveying Emotion in Robotic Speech: Lessons Learned. 23rd IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2014), Edinburgh, UK: IEEE, pp. 274-279. doi:10.1109/ROMAN.2014.6926265.
This research explored whether robots can use modern speech synthesizers to convey emotion through their speech. We investigated the use of MARY, an open-source speech synthesizer, to convey a robot's emotional intent to novice robot users. In the first experiment, participants were able to distinguish the intended emotions of anger, calm, fear, and sadness at rates of 65.9%, 68.9%, 33.3%, and 49.2%, respectively. One issue was the recognition rate for statements intended to convey happiness, 18.2%, which fell below the 20% chance level for the five emotion categories. After the vocal prosody modifications used to express happiness were adjusted, the recognition rate for happiness improved to 30.3% in a second experiment. This is an important benchmarking step in a line of research investigating the use of emotional speech by robots to improve human-robot interaction. Recommendations and lessons learned from this research are presented.
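For illustration only, the sketch below shows how prosody modifications of the kind studied here could be requested from a running MARY TTS server. It assumes the server's standard HTTP interface on its default port 59125, the stock cmu-slt-hsmm English voice, and MaryXML prosody markup; the specific pitch and rate values are placeholders, not the parameters evaluated in the paper.

    import requests

    MARY_URL = "http://localhost:59125/process"  # default MARY TTS HTTP endpoint

    # MaryXML input with prosody markup. The +20% pitch and +10% rate
    # values are illustrative, not the settings tested in the experiments.
    maryxml = """<?xml version="1.0" encoding="UTF-8"?>
    <maryxml version="0.5" xmlns="http://mary.dfki.de/2002/MaryXML"
             xml:lang="en-US">
      <prosody pitch="+20%" rate="+10%">
        I am so glad you are here.
      </prosody>
    </maryxml>"""

    params = {
        "INPUT_TYPE": "RAWMARYXML",   # send MaryXML rather than plain text
        "OUTPUT_TYPE": "AUDIO",
        "AUDIO": "WAVE_FILE",
        "LOCALE": "en_US",
        "VOICE": "cmu-slt-hsmm",      # a stock MARY HSMM voice
        "INPUT_TEXT": maryxml,
    }

    resp = requests.post(MARY_URL, data=params)
    resp.raise_for_status()
    with open("happy.wav", "wb") as f:
        f.write(resp.content)         # synthesized WAV audio

Adjusting pitch and speaking rate in this manner is a common heuristic for emotional coloring of synthesized speech; the paper's contribution lies in empirically measuring which modifications novice listeners actually recognize as the intended emotion.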