TY - GEN
T1 - Microphone Aligned Continuous Wearable Device-Related Transfer Function
T2 - 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops, ICASSPW 2024
AU - Manamperi, Wageesha N.
AU - Abhayapala, Thushara D.
AU - Holmberg, Paul
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - With the recent emergence of virtual reality, augmented reality, and mixed-reality technologies, considerable attention has been devoted to integrating spatial audio recording capabilities into these devices. Extracting higher-order spherical harmonics from the recordings is not always possible due to complex scattering effects from the arbitrarily shaped devices, which motivates modeling the acoustic properties beyond the free-field convention. In this paper, we propose modeling the wearable device-related transfer function (WDRTF), which incorporates the scattering and diffraction of an incident soundfield by the device and the wearer's body. We adopt the spherical wave model to translate the measurements into microphone alignment by exploiting both magnitude and phase corrections. We use a discrete set of measurements from multiple circular arrays to efficiently estimate the WDRTF coefficients. We illustrate the WDRTF modeling procedure with a head-worn microphone array and obtain accurate reconstruction performance over frequency and 3D directions.
AB - With the recent emergence of virtual reality, augmented reality, and mixed-reality technologies, considerable attention has been devoted to integrating spatial audio recording capabilities into these devices. Extracting higher-order spherical harmonics from the recordings is not always possible due to complex scattering effects from the arbitrarily shaped devices, which motivates modeling the acoustic properties beyond the free-field convention. In this paper, we propose modeling the wearable device-related transfer function (WDRTF), which incorporates the scattering and diffraction of an incident soundfield by the device and the wearer's body. We adopt the spherical wave model to translate the measurements into microphone alignment by exploiting both magnitude and phase corrections. We use a discrete set of measurements from multiple circular arrays to efficiently estimate the WDRTF coefficients. We illustrate the WDRTF modeling procedure with a head-worn microphone array and obtain accurate reconstruction performance over frequency and 3D directions.
KW - Device-related transfer function
KW - head-worn microphone array
KW - microphone alignment
KW - multiple circular arrays
KW - spherical harmonics
UR - http://www.scopus.com/inward/record.url?scp=85202439987&partnerID=8YFLogxK
U2 - 10.1109/ICASSPW62465.2024.10625970
DO - 10.1109/ICASSPW62465.2024.10625970
M3 - Conference contribution
AN - SCOPUS:85202439987
T3 - 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops, ICASSPW 2024 - Proceedings
SP - 199
EP - 203
BT - 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops, ICASSPW 2024 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 14 April 2024 through 19 April 2024
ER -