Apple’s invention for “Passive proximity detection” is essentially a form of passive echolocation, or a loose interpretation of passive sonar. The filing, published by the US Patent and Trademark Office, describes a system that takes two sound-wave samples, a before and an after, each returning a signal the phone can detect and interpret.
The system compares the two samples to determine whether an external object’s proximity has changed. “Sampling” occurs when a transducer, like a microphone, picks up ambient sound and sends a corresponding signal to be interpreted. The invention relies on basic acoustic principles, detecting differences between sounds and interpreting what they mean.
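To make the before/after comparison concrete, here is a minimal sketch of the general idea: take two short ambient-sound samples and flag a proximity change when their energy differs beyond a threshold. The function names, the RMS-energy metric, and the threshold value are all illustrative assumptions, not details from Apple's filing.

```python
import math

def rms(samples):
    """Root-mean-square amplitude of one audio sample window."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def proximity_changed(before, after, threshold=0.25):
    """Return True when the 'after' sample's energy differs from the
    'before' sample's by more than an (assumed) relative threshold."""
    a, b = rms(before), rms(after)
    if a == 0:
        return b != 0
    return abs(b - a) / a > threshold

# Example: an object moving close to the mic damps the ambient sound.
before = [0.5, -0.5, 0.5, -0.5]   # stronger ambient signal
after  = [0.1, -0.1, 0.1, -0.1]   # damped after an object approaches
print(proximity_changed(before, after))  # → True
```

A real implementation would compare spectra rather than raw energy and calibrate the threshold per device, but the before-versus-after structure is the core of what the patent describes.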
This effect is noticeable when sound is reflected by a soft material as opposed to a hard surface: generally, sound reflected off a soft surface returns a weaker signal than sound reflected off a hard surface located at the same distance and angle from the audio transducer and sound source.
In another of the invention’s permutations, two microphones are situated at different places on a device and detect the subtle changes caused by interference when a sound wave interacts with an object. This arrangement can detect where a user’s face is while they use the phone and adjust the microphones accordingly for better sound quality.
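One standard way a two-microphone setup can infer where a sound source sits is from the arrival-time offset between the mics. The patent text only says the microphones detect interference changes, so the cross-correlation approach below is an assumed illustration of the general principle, not the filing's actual method; all names are hypothetical.

```python
def best_lag(mic_a, mic_b, max_lag):
    """Return the lag (in samples) of mic_b relative to mic_a that
    maximizes their cross-correlation; a positive lag means the
    sound reached mic_a first."""
    def corr(lag):
        total = 0.0
        for i, a in enumerate(mic_a):
            j = i + lag
            if 0 <= j < len(mic_b):
                total += a * mic_b[j]
        return total
    return max(range(-max_lag, max_lag + 1), key=corr)

# A short pulse reaches mic_a first and mic_b two samples later,
# so the estimated lag suggests the source is nearer mic_a.
pulse = [0.0, 1.0, 0.8, 0.2, 0.0, 0.0, 0.0, 0.0]
mic_a = pulse
mic_b = [0.0, 0.0] + pulse[:-2]   # same pulse, delayed by 2 samples
print(best_lag(mic_a, mic_b, max_lag=3))  # → 2
```

With a known microphone spacing and sample rate, that lag converts directly into an angle of arrival, which is how a phone could steer its mics toward a user's face.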
What is the practical upshot of this? As portable electronic devices shrink, manufacturers need space-saving components and methods of accomplishing the same tasks with less, so combining parts to serve multiple uses becomes increasingly important. The iPhone 5 is a perfect example: it manages to fit tons of features, from various radio receivers to a better battery and two cameras, into a smaller chassis than its predecessor.
It’s unknown whether Apple will put this patent to use in the future, but the iPhone already carries three microphones for noise cancellation and call quality, so there’s no reason it couldn’t.