I heard the human ear can extract information from frequency-modulated ultrasound. If the audio signal of speech is used to modulate the frequency of an ultrasound carrier, then the information from that speech may reach the brain without conscious filtering. That is, I would know something without understanding where I got this information.
Loud sounds are harmful to human ears. But this is not true of powerful sounds at the high end of the frequencies we can perceive. These high frequencies act as a healing agent: they are used in ultrasonic scanning and ultrasonic therapy, and usually improve hearing.
For a long time I was not able to find the time to implement frequency modulation in a convenient application on my computer.
Hardware is not a problem: I can use high audio frequencies instead of real ultrasound. It's better if I can't hear the carrier even as a whistling, but that's not required. Common earphones reproduce frequencies up to 20 kHz, and I can't hear anything above 15 kHz. So this defines the band for the experiment: 15-20 kHz.
Then I finally made the application. I found that Audacity supports Nyquist plugins, and frequency modulation is one of the built-in functions of the Nyquist language.
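The modulation step itself can be sketched outside Audacity too. Here is a minimal NumPy version, assuming a 44.1 kHz sample rate and the 15-20 kHz band described above; the exact carrier and deviation values are my illustrative choices, not what the Nyquist plugin uses:

```python
import numpy as np

RATE = 44100          # sample rate, Hz
CARRIER = 17500.0     # carrier in the middle of the 15-20 kHz band
DEVIATION = 2000.0    # peak frequency swing; keeps the signal inside the band

def fm_modulate(signal, rate=RATE, carrier=CARRIER, deviation=DEVIATION):
    """Frequency-modulate `signal` (floats in [-1, 1]) onto the carrier."""
    # The instantaneous frequency follows the speech waveform.
    inst_freq = carrier + deviation * np.asarray(signal, dtype=float)
    # Integrating frequency over time gives the phase of the output sine.
    phase = 2.0 * np.pi * np.cumsum(inst_freq) / rate
    return np.sin(phase)

# Stand-in for a speech signal: one second of a 300 Hz tone.
t = np.arange(RATE) / RATE
speech = 0.8 * np.sin(2.0 * np.pi * 300.0 * t)
fm = fm_modulate(speech)
```

The output is a constant-amplitude tone whose frequency wobbles around the carrier, so almost all of its energy stays in the 15-20 kHz band where (by the assumption of this experiment) it is inaudible as speech.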
Are you willing to test it yourself? See http://www.wuala.com/seriv/Documents/autofonix
If you can discern the words of the original signal in the FM signal, lower the volume and make it softer. This is usually caused by non-linear effects that demodulate the signal.
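Why softer playback helps can be illustrated with a toy slope detector: frequency-dependent gain followed by a non-linearity (here, rectification, standing in for distortion in an overdriven speaker or ear) turns the FM signal back into the audible baseband, which is exactly the leakage to avoid. A sketch under the same assumed 44.1 kHz rate, carrier, and deviation as before:

```python
import numpy as np

RATE = 44100
CARRIER = 17500.0
DEVIATION = 2000.0

# Build a test FM signal: a 300 Hz tone modulated onto the carrier.
t = np.arange(RATE) / RATE
tone = np.sin(2.0 * np.pi * 300.0 * t)
phase = 2.0 * np.pi * np.cumsum(CARRIER + DEVIATION * tone) / RATE
fm = np.sin(phase)

# 1. Frequency-dependent gain: a discrete derivative boosts higher
#    frequencies, converting the FM into AM (a "slope detector").
am = np.diff(fm)

# 2. Non-linearity: full-wave rectification.
rectified = np.abs(am)

# 3. Smooth away the carrier ripple with a short moving average;
#    what remains tracks the original 300 Hz tone.
window = 64
recovered = np.convolve(rectified, np.ones(window) / window, mode="same")
recovered -= recovered.mean()
```

The recovered waveform closely follows the original tone, which is why even mild distortion at high volume can make the hidden speech plainly audible.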
Thanks in advance for your questions and/or comments.