Mason Bretan

Technology, Art, and Science

Sound Mapper
The Sound Locating Tool Through Polyspectral Analysis

Sound Mapper is a tool that uses polyspectral analysis and the filtering properties of a stereo microphone setup to locate sound.  The setup should include absorption material between, above, below, and behind both microphones.  Because of the filtering properties of the absorption material, there will be noticeable differences in the signals picked up by each microphone.  Much as humans naturally locate sound (through the filtering properties of the pinnae, torso, and head), these differences indicate which direction the sound source is coming from.

In order to see the differences in each signal and understand where the sound is coming from, I broke each signal down into 25 frequency bands ranging from 80-14000 Hz.  Depending on the sound you are trying to locate, it may be necessary to change the value for each band so that it better matches the frequency range the sound source encompasses.  A sound that covers a large frequency range (such as white noise) is easier to locate and more suitable for Sound Mapper than a sound with only a minuscule frequency range (such as a pure sine tone), which may be nearly impossible to locate.  Sound Mapper measures the Interaural Level Difference (ILD) for each frequency band.  The image to the left shows one signal being split into 25 bands, with the level of each measured numerically.  The image to the right is a visual display of those levels.
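The band-splitting and ILD-metering step can be sketched in Python. The tool itself is implemented as an audio patch, so the function names, log-spaced band edges, and naive DFT below are illustrative assumptions rather than the patch's actual internals:

```python
import cmath
import math

def band_levels(signal, sample_rate, n_bands=25, f_lo=80.0, f_hi=14000.0):
    """Split one frame of a signal into log-spaced frequency bands and
    return the level of each band in dB (a rough stand-in for the
    patch's per-band level meters)."""
    n = len(signal)
    # Naive DFT magnitude spectrum; fine for a short illustrative frame.
    spectrum = [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n))) for k in range(n // 2)]
    # Logarithmically spaced band edges from f_lo to f_hi.
    edges = [f_lo * (f_hi / f_lo) ** (i / n_bands) for i in range(n_bands + 1)]
    levels = []
    for lo, hi in zip(edges, edges[1:]):
        k_lo = max(1, int(lo * n / sample_rate))
        k_hi = max(k_lo + 1, int(hi * n / sample_rate))
        energy = sum(m * m for m in spectrum[k_lo:k_hi])
        levels.append(10 * math.log10(energy + 1e-12))
    return levels

def ild(left_levels, right_levels):
    """Per-band Interaural Level Difference in dB (left minus right)."""
    return [l - r for l, r in zip(left_levels, right_levels)]
```

For example, a 1 kHz tone that reaches the right microphone at half the amplitude of the left produces an ILD of roughly +6 dB in the band containing 1 kHz and near-zero ILD elsewhere.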
The image above and the image to the right demonstrate the patching that compares the separate frequency band levels for each signal.  These ratios determine the location of the sound in 3D.  To determine what the ratios mean in terms of location for a particular absorption material, impulse responses can be taken with sounds at different locations around the mic setup.  Depending on the value of the ratios, a visual representation is shown on the interface in the form of a vector segment originating from the center of the sphere (the microphone location) and pointing to a section of the sphere's perimeter (the sound source location).  The vector is updated every quarter of a second (this is user-changeable depending on CPU availability), so real-time location is feasible.
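The lookup from measured ratios to direction can be sketched as a nearest-neighbor match. Assuming a table of per-band ILD templates recorded from impulse responses at known directions around the mic rig (the `templates` table and the (azimuth, elevation) format below are hypothetical stand-ins for measured data), the closest template gives the estimated direction for the display vector:

```python
import math

def locate(ild_vector, templates):
    """Return the (azimuth_deg, elevation_deg) key of the ILD template
    closest (in Euclidean distance) to the measured per-band ILD vector.

    templates: dict mapping (azimuth_deg, elevation_deg) -> list of
    per-band ILD values in dB, one entry per measured direction."""
    best, best_dist = None, float("inf")
    for direction, template in templates.items():
        dist = math.sqrt(sum((a - b) ** 2
                             for a, b in zip(ild_vector, template)))
        if dist < best_dist:
            best, best_dist = direction, dist
    return best
```

In practice the template table would hold one ILD vector per calibration direction, and the returned direction would drive the vector segment drawn from the center of the sphere on the interface.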
Sound Mapper is still a work in progress, and its biggest weakness preventing it from becoming a real-world application is its inability to distinguish between multiple voices.  It works best in an acoustically dead room containing only one sound source.  However, I am currently looking at voice recognition and timbre-distinguishing algorithms in hopes of solving this problem.