You can thank me later if this gives you nightmares. But engineers at Kagawa University in Japan are developing a robotic version of the human mouth.
To give the robot its speaking abilities, the engineers used an air pump, artificial vocal cords, a resonance tube, a nasal cavity, and a microphone attached to a sound analyzer as substitutes for human vocal organs. The robot not only talks but also uses a learning algorithm to mimic the sounds of human speech. By feeding the voices of both hearing-impaired and non-hearing-impaired people into the microphone, researchers were able to plot the differences in sound on a map. During speech training, the robot “listens” to the subjects talk while comparing their pronunciation to that of subjects who are not hearing-impaired. The robot then generates a personalized visualization that lets subjects adjust their pronunciation toward the target points on the speech map.
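The article doesn't say how the speech map works internally, but the idea of comparing a speaker's sound to target points can be sketched in a few lines. The sketch below is purely illustrative: it assumes the map is a 2-D space of formant frequencies (F1, F2), with made-up target values that are not from the Kagawa system, and computes the nearest vowel target plus the adjustment needed to reach it.

```python
import math

# Hypothetical 2-D "speech map": each vowel target is a pair of formant
# frequencies (F1, F2) in Hz. These numbers are illustrative only and
# are NOT taken from the Kagawa University system.
TARGETS = {
    "a": (800, 1200),
    "i": (300, 2300),
    "u": (350, 800),
}

def feedback(point, targets=TARGETS):
    """Return the nearest target vowel and the (dF1, dF2) shift needed
    to move the speaker's point onto that target."""
    vowel, target = min(
        targets.items(),
        key=lambda kv: math.dist(point, kv[1]),
    )
    return vowel, (target[0] - point[0], target[1] - point[1])

# A speaker whose vowel lands near /a/ but a bit off the target:
print(feedback((780, 1100)))
```

In a real training loop, the "point" would come from acoustic analysis of the subject's voice, and the visualization would show both the point and the arrow toward the target.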
So far the result sounds more like vowel-ish noises than actual speech, but you have to start somewhere.