Listening to Spatial Audio in a Wearable Hat Design


In Cornell University's bustling ECE4760 course, a team of students led by Anishka Raina, Arnav Shah, and Yoon Kang has been working on a unique project: a spatial audio system built into a hat. This is not the first project of its kind to be featured, but it stands out for its practical application and innovative use of technology.

At the heart of the build is a Raspberry Pi Pico, a compact and capable microcontroller that runs the whole show. The Pico processes scan data from a hat-mounted TF-Luna LiDAR sensor to determine the range and direction of nearby objects.
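The team's own firmware isn't reproduced here, but a minimal sketch of what reading the TF-Luna from a Pico might look like is shown below. It assumes the sensor's default UART mode, in which it streams 9-byte frames (header 0x59 0x59, then distance, signal strength, and temperature, little-endian, plus a checksum) at 115200 baud; the pin choices are hypothetical.

```c
// Minimal TF-Luna reader for the Raspberry Pi Pico (illustrative sketch,
// not the team's code). Assumes the sensor's default UART framing:
// 0x59 0x59, dist_lo, dist_hi, strength_lo, strength_hi, temp_lo, temp_hi,
// checksum (low byte of the sum of the first eight bytes).
#include <stdint.h>
#include <stdio.h>
#include "pico/stdlib.h"
#include "hardware/uart.h"

#define LIDAR_UART  uart0
#define LIDAR_TX    0      // assumed wiring: GP0 -> TF-Luna RX
#define LIDAR_RX    1      // assumed wiring: GP1 <- TF-Luna TX

// Block until one valid frame arrives; return the distance in cm.
static int read_tfluna_cm(void) {
    uint8_t buf[9];
    for (;;) {
        if (uart_getc(LIDAR_UART) != 0x59) continue;   // hunt for header
        if (uart_getc(LIDAR_UART) != 0x59) continue;
        buf[0] = buf[1] = 0x59;
        for (int i = 2; i < 9; i++) buf[i] = uart_getc(LIDAR_UART);
        uint8_t sum = 0;
        for (int i = 0; i < 8; i++) sum += buf[i];
        if (sum != buf[8]) continue;                   // bad checksum, resync
        return buf[2] | (buf[3] << 8);                 // distance in cm
    }
}

int main(void) {
    stdio_init_all();
    uart_init(LIDAR_UART, 115200);                     // TF-Luna default baud
    gpio_set_function(LIDAR_TX, GPIO_FUNC_UART);
    gpio_set_function(LIDAR_RX, GPIO_FUNC_UART);
    for (;;)
        printf("range: %d cm\n", read_tfluna_cm());
}
```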

From that scan data, the Pi Pico synthesizes a stereo audio signal that tells the wearer how close nearby objects are and in which direction they lie. This is achieved through a spatial audio technique called interaural time difference (ITD).

ITD works by introducing a slight difference in the timing of the audio delivered to the left and right ears, mimicking the natural delay that occurs because a sound reaches one ear slightly before the other depending on where it comes from. That time difference is what lets the brain localize a sound's horizontal direction (azimuth). Distance is suggested separately, by layering changes in volume and frequency content on top of the ITD cue.
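To make that concrete, one widely used textbook approximation of the direction-to-delay relationship is the Woodworth spherical-head model, sketched below. The model, the head radius, and the sample rate are illustrative assumptions; the write-up doesn't say which formula the team actually used.

```c
// Woodworth spherical-head approximation of ITD (illustrative, not
// necessarily the project's formula). For a source at azimuth theta
// (radians, 0 = straight ahead, positive = to the wearer's right), the
// extra path around the head to the far ear yields a delay of roughly
// (a / c) * (theta + sin(theta)).
#include <math.h>

#define HEAD_RADIUS_M   0.0875f   // typical adult head radius (assumption)
#define SPEED_OF_SOUND  343.0f    // m/s in air at room temperature

// Positive result: the sound reaches the right ear first.
static float itd_seconds(float azimuth_rad) {
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (azimuth_rad + sinf(azimuth_rad));
}

// Convert to a whole-sample delay for playback. At 44.1 kHz, a source at
// 90 degrees gives about 0.66 ms, i.e. roughly 29 samples.
static int itd_samples(float azimuth_rad, float sample_rate_hz) {
    return (int)lroundf(itd_seconds(azimuth_rad) * sample_rate_hz);
}
```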

The spatial audio system calculates the source's direction relative to the wearer and adjusts audio playback timing between the two ears to reflect those spatial characteristics. For example, a sound source directly to the left will produce an audio signal that arrives at the left ear earlier than the right ear. The amount of delay encodes the angular position around the head, enhancing spatial awareness. Varying delay and related auditory cues can also give a sense of how close or far the source is.
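In code, that timing adjustment boils down to delaying one channel relative to the other. The sketch below is an illustration rather than the project's implementation: it writes a mono signal to both channels, lags the ear farther from the source by the delay from the previous sketch, and fades amplitude with distance using a simple 1/(1+d) law, which is an assumption.

```c
// Illustrative ITD renderer: copy a mono frame to left/right outputs,
// delaying the far ear and attenuating with distance. A real streaming
// version would carry delayed samples across frame boundaries; this
// sketch zero-pads the start of each frame for simplicity.
#include <stdint.h>
#include <stdlib.h>

#define FRAME_LEN 256

static void render_itd_frame(const int16_t *mono, int16_t *left,
                             int16_t *right, int delay_samples,
                             float distance_m) {
    float gain = 1.0f / (1.0f + distance_m);       // crude distance cue
    for (int i = 0; i < FRAME_LEN; i++) {
        int16_t near = (int16_t)(mono[i] * gain);  // undelayed (near ear)
        int j = i - abs(delay_samples);
        int16_t far = (j >= 0) ? (int16_t)(mono[j] * gain) : 0;
        if (delay_samples >= 0) {                  // source on the right:
            right[i] = near;                       //   right ear leads,
            left[i]  = far;                        //   left ear lags
        } else {                                   // source on the left
            left[i]  = near;
            right[i] = far;
        }
    }
}
```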

This ITD-based binaural audio rendering mimics natural hearing cues that humans use to build a 3D soundscape, helping a wearer of the hat intuitively identify where sounds originate and their relative distance, thus providing situational awareness through audio alone.

The project delivers physical sensory augmentation through the human auditory system, a promising direction for future wearable computing with integrated spatial audio. One caveat: head tracking is not implemented in the current version, so the wearer turns a potentiometer to tell the system which direction they are facing as they scan.
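Reading that heading input is nearly a one-liner on the Pico's ADC. The sketch below assumes hypothetical wiring (pot wiper on GP26/ADC0) and maps the 12-bit reading linearly onto a full circle.

```c
// Illustrative heading input: a potentiometer on the Pico's ADC stands in
// for head tracking. Wiring and the 0..2*pi mapping are assumptions.
#include "pico/stdlib.h"
#include "hardware/adc.h"

#define POT_GPIO  26            // assumed wiring: pot wiper on GP26 / ADC0
#define TWO_PI    6.2831853f

static float read_heading_rad(void) {
    adc_select_input(0);        // ADC input 0 corresponds to GP26
    uint16_t raw = adc_read();  // 12-bit result, 0..4095
    return ((float)raw / 4095.0f) * TWO_PI;
}

int main(void) {
    adc_init();
    adc_gpio_init(POT_GPIO);
    for (;;) {
        float heading = read_heading_rad();
        // Combine `heading` with the LiDAR scan angle to place detected
        // objects in world coordinates around the wearer.
        (void)heading;
        sleep_ms(50);
    }
}
```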

This project showcases the potential of wearable computing with integrated spatial audio to create immersive and informative audio experiences, leveraging well-established psychoacoustic principles like ITD.


At its core, the build is simple: a Raspberry Pi Pico processing data from a single TF-Luna LiDAR sensor, woven into an ordinary hat. Wearables like this one hint at exciting possibilities for future computing that augments human senses through the auditory system.
