Last week, Apple was granted a major spatial audio patent covering head-related transfer function (HRTF) maps.

Apple quietly added spatial audio to AirPods Pro in the fourth quarter of 2020 and brought spatial audio to AirPods Max in December 2020. In May 2021, Apple introduced spatial audio with Dolby Atmos for the Apple Music catalog. Last week, Apple was granted a spatial audio patent entitled “Audio system and method of generating an HRTF map.”

Apple points out that headphones can play a spatial audio signal sent by a device to simulate a soundscape around the user. Effective spatial sound reproduction can render sounds so that the user perceives them as coming from a location within the soundscape external to the user’s head, in the same way the user would experience the sound if it occurred in the real world.

When a sound travels to a listener from the surrounding real-world environment, it propagates along a direct path, for example through the air to the entrance of the listener’s ear canal, and along one or more indirect paths, for example by reflecting and diffracting around the listener’s head or shoulders. As the sound travels the indirect paths, artifacts are introduced into the acoustic signal that reaches the ear canal entrance. These user-specific artifacts can be incorporated into binaural audio using signal processing algorithms that apply spatial audio filters.

For example, a head-related transfer function (HRTF) is a filter that contains all the acoustic information needed to describe how sound is reflected and diffracted around a listener’s head, torso, and outer ear before it enters the listener’s auditory system.

To implement accurate binaural reproduction, a distribution of HRTFs at different angles relative to a listener can be determined. For example, HRTFs can be measured for the listener in a laboratory environment using an HRTF measurement system.

A typical HRTF measurement system includes a statically positioned speaker near the listener. The speaker emits sounds toward the listener’s head.

The listener wears in-ear microphones, for example microphones inserted at the entrances of the listener’s ear canals, to receive the emitted sounds. Meanwhile, the listener rotates in a controlled manner, for example continuously or incrementally, about a vertical axis that extends orthogonally to the direction of the emitted sounds.

For example, the listener may sit or stand on a turntable that rotates about the vertical axis while the speaker emits sounds toward the listener’s head. As the listener rotates, the relative angle between the direction the listener faces and the direction of the emitted sounds changes. The sounds emitted by the speaker and the sounds received by the microphones (after being reflected and diffracted by the listener’s anatomy) are used to determine HRTFs corresponding to the different relative angles. Consequently, an angle-dependent HRTF dataset can be generated for the listener.
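To make that last step concrete, here is a minimal sketch, assuming a single emitted test signal and the corresponding in-ear recording are available for one relative angle, of how an HRTF could be estimated by regularized frequency-domain deconvolution. The function name and regularization constant are illustrative, not taken from the patent.

import numpy as np

def estimate_hrtf(emitted, recorded, eps=1e-8):
    """Estimate the complex HRTF spectrum for one ear at one relative angle."""
    n = len(emitted) + len(recorded) - 1            # FFT length for linear deconvolution
    E = np.fft.rfft(emitted, n)                     # spectrum of the emitted sound
    R = np.fft.rfft(recorded, n)                    # spectrum of the in-ear recording
    return R * np.conj(E) / (np.abs(E) ** 2 + eps)  # regularized spectral division

# Repeating this for every rotation angle yields an angle-dependent HRTF dataset.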

A selected HRTF from the generated set of angle-dependent HRTFs can be applied to an audio input signal to shape the signal so that, when reproduced, it realistically simulates a sound traveling to the user from the relative angle for which the HRTF was selected. Consequently, by applying an HRTF to the audio input signal, ordinary stereo headphones can create the illusion of a sound source located somewhere in the listening environment.
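As a rough illustration of how a selected HRTF shapes a signal for headphone playback, the sketch below convolves a mono signal with left- and right-ear head-related impulse responses (HRIRs, the time-domain form of an HRTF). The toy HRIRs here are placeholders, not measured data.

import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono signal with left/right HRIRs to place it at one angle."""
    out = np.stack([np.convolve(mono, hrir_left),
                    np.convolve(mono, hrir_right)], axis=1)
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out          # normalize to avoid clipping

fs = 48_000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)                  # one second of a 440 Hz test tone
hrir_l = np.zeros(256); hrir_l[0] = 1.0             # placeholder impulse responses,
hrir_r = np.zeros(256); hrir_r[8] = 0.7             # crude interaural delay/level cue
stereo = render_binaural(tone, hrir_l, hrir_r)      # two-channel output for headphones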

Existing methods for generating angle-dependent head-related transfer function (HRTF) datasets are time-consuming or impractical to perform outside of a laboratory environment.

For example, HRTF measurement currently requires an HRTF measurement system operated in a controlled laboratory. Consequently, accurate HRTF measurements require access to a specialized laboratory, which can be costly, as well as the time to visit the laboratory to complete the measurements.

Apple’s patent/invention relates to an audio system and a method of using the audio system to generate an HRTF map for a user.

The HRTF map contains a set of angle-dependent HRTF data at respective HRTF locations along an azimuth that extends around the user’s head. By applying an HRTF from the HRTF map to an audio input signal, a spatial audio signal corresponding to the respective HRTF location can be generated and played back for the user. When played back, the spatial audio signal can accurately render a spatial sound to the user.

The method of using the audio system to generate the HRTF map may include generating sounds at known locations along an azimuthal path that extends over a portion of the azimuth. For example, a mobile device can move, continuously for instance, along the azimuthal path while a speaker of the device emits sounds within segments of the path. The locations from which the sounds are emitted may be known locations.

For example, the mobile device may have a structured-light scanner that captures images to determine the distance and relative orientation of the mobile device with respect to the headset the user is wearing. A headset microphone can detect input signals corresponding to the sounds. For example, the input signals can represent directly received sounds and indirectly received sounds that propagate to the user from the mobile device as it moves along the azimuth.

One or more processors of the audio system can determine an HRTF for each path segment based on the input signals, and each HRTF can be assigned to a respective HRTF location along the path segments based on the known location from which the corresponding sound was emitted. Consequently, the one or more processors can generate the HRTF map, which includes the measured HRTFs assigned to their respective HRTF locations along the azimuth.
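One plausible way to organize such a map, offered only as a sketch and not as Apple’s implementation, is a structure keyed by azimuth angle that returns the nearest measured HRTF. The entry fields and nearest-angle lookup rule are assumptions.

from dataclasses import dataclass
import numpy as np

@dataclass
class HRTFEntry:
    azimuth_deg: float        # known emission location along the azimuth
    hrir_left: np.ndarray     # measured left-ear impulse response
    hrir_right: np.ndarray    # measured right-ear impulse response

class HRTFMap:
    def __init__(self, entries):
        self.entries = sorted(entries, key=lambda e: e.azimuth_deg)

    def lookup(self, azimuth_deg):
        """Return the entry whose measured angle is closest on the circle."""
        return min(self.entries,
                   key=lambda e: abs((e.azimuth_deg - azimuth_deg + 180) % 360 - 180))

An audio renderer could then fetch the entry nearest the desired angle and convolve its impulse responses with the input signal, as in the rendering sketch above.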

Apple’s FIG. 1 below shows a user operating an audio system. An audio system (#100) can include a device, for example a mobile device such as AirPods Pro, AirPods Max, an iPhone, a MacBook, and so on. FIG. 2 shows a block diagram of an audio system. Audio sources (#206) may include phone and/or music playback functions controlled by telephony or audio application programs running on top of the operating system. In one aspect, an audio application program can generate preset audio signals, for example sweep test signals, which are played by the device speaker (#108). Likewise, audio sources may include an augmented reality (AR) or virtual reality (VR) application program running on top of the operating system.
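For the sweep test signals mentioned above, a common choice is a logarithmic sine sweep. The sketch below generates one; the frequency range, duration, and sample rate are illustrative assumptions rather than values from the patent.

import numpy as np

def log_sine_sweep(f_start=20.0, f_end=20_000.0, duration=2.0, fs=48_000):
    """Generate a logarithmic sine sweep from f_start to f_end (hertz)."""
    t = np.arange(int(duration * fs)) / fs
    k = np.log(f_end / f_start)
    phase = 2 * np.pi * f_start * duration / k * (np.exp(t / duration * k) - 1)
    return np.sin(phase)

sweep = log_sine_sweep()    # stimulus the device speaker could play during measurement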

Apple’s FIG. 4 below is a pictorial view of the operations for determining HRTFs and the corresponding HRTF locations of an HRTF map.

(Apple patent figures)

Apple’s FIG. 5 shows a pictorial view of the operations for detecting the input signals corresponding to the generated sounds; FIG. 6 shows a pictorial view of the operations for determining an HRTF and an HRTF location along the azimuth.

For more information, see Apple’s granted patent 11,175,773.

Inventors listed by Apple

Marty Johnson: Distinguished Engineer, Audio Technology Development. Johnson came to Apple from Virginia Tech, where he was an associate professor.

Darius Satongar: Interaction architecture

Jonathan Sheaffer: Leader / Manager, Acoustic Technology

Victor Jupin: Listed in Copenhagen, Denmark. No profile found.

Some of the spatial audio patent reports in our archives:

01: Apple’s spatial audio file format is revealed in new patent filings ahead of the debut of Spatial Audio on AirPods Pro

02: Apple presents a new ‘Spatial Audio’ patent for its future HMD that will likely apply to next-generation AirPods Pro and Apple TV

03: Apple patent unveils its work on a 3D spatial audio engine that will take virtual reality games to the next level

04: Apple invents over-ear headphones with spatial audio-based virtual controls that mimic physical controls



