Sensory Systems/Simulation Auditory System

Computer Simulations of the Auditory System

Working with Sound

Audio signals can be stored in a variety of formats. They can be uncompressed or compressed, and the encoding can be open or proprietary. On Windows systems, the most common format is the WAV format. A WAV file contains a header with information about the number of channels, sample rate, bits per sample, etc. This header is followed by the data themselves. The usual bitstream encoding is linear pulse-code modulation (LPCM).

Many programming languages provide commands for reading and writing WAV files (a minimal Python sketch follows below). When working with data in other formats, you have two options:

  • You can convert them into the WAV format and continue from there. A very comprehensive, free, cross-platform solution to record, convert and stream audio and video is ffmpeg (http://www.ffmpeg.org/).
  • Or you can obtain special program modules for reading/writing the desired format.
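
In Python, for example, the SciPy package provides such commands. A minimal sketch for reading and writing a WAV file (the file name sound.wav is just a placeholder):

from scipy.io import wavfile

# Read a WAV file: wavfile.read returns the sample rate [Hz] and the data
# (for 16-bit LPCM the samples come back as int16)
rate, data = wavfile.read('sound.wav')     # 'sound.wav' is a placeholder name

# For a stereo file, data has the shape (nSamples, 2); keep one channel
if data.ndim > 1:
    data = data[:, 0]

# Write the (possibly processed) data to a new WAV file
wavfile.write('sound_out.wav', rate, data)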

Reminder of Fourier Transformations

To transform a continuous function, one uses the Fourier Integral:

F(k)=\int_{-\infty}^{\infty} {f(t)} \cdot e^{-2 \pi ikt} dt

where k represents frequency. Note that F(k) is a complex value: its absolute value gives us the amplitude of the function, and its phase defines the phase-shift between cosine and sine components.

The inverse transform is given by

f(t)=\int_{-\infty}^{\infty} F(k) \cdot e^{2 \pi ikt} dk
Fourier Transformation: a sum of sine waves can make up any repetitive waveform.

If the data are sampled with a constant sampling frequency and there are N data points,

f(\tau)= \frac{1}{N} \sum_{n=0}^{N-1} F_n \, e^{2 \pi in \tau /N}

The coefficients F_n can be obtained by

 F_n = \sum_{\tau = 0}^{N-1} f(\tau) \cdot e^{-2 \pi in \tau/N}

Since there is only a discrete, limited number of data points, and a discrete, limited number of waves, this transform is referred to as the Discrete Fourier Transform (DFT). The Fast Fourier Transform (FFT) is an efficient algorithm for computing the DFT; it works most efficiently when the number of points is a power of 2: N = 2^n.

Note that each F_n is a complex number: its magnitude defines the amplitude of the corresponding frequency component in the signal, and its phase defines the corresponding phase shift (see illustration). If the time-domain signal f(t) is real-valued, as is the case for most measured data, this puts a constraint on the corresponding frequency components: in that case we have

 F_n = F_{N-n}^*

A frequent source of confusion is the question: “Which frequency corresponds to F_n?” If there are N data points and the sampling period is T_s, the n-th frequency is given by

 f_n = \frac{n}{N \cdot T_s}, \quad 0 \le n \le N-1 \quad (in \; Hz)

In other words, the lowest non-zero frequency is \frac{1}{N \cdot T_s} [in Hz], while the highest independent frequency is \frac{1}{2T_s}, due to the Nyquist-Shannon sampling theorem. Note that in MATLAB, the first return value corresponds to the offset (DC component) of the function, and the second value to n=1!
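
This mapping can be verified with a short NumPy sketch (the sampling period, the number of points, and the test frequency are chosen arbitrarily):

import numpy as np

Ts = 0.001                        # sampling period: 1 ms, i.e. fs = 1 kHz
N  = 1000                         # number of data points
t  = np.arange(N) * Ts
x  = np.sin(2 * np.pi * 50 * t)   # 50 Hz test signal

F = np.fft.fft(x)                 # complex coefficients F_n
freqs = np.arange(N) / (N * Ts)   # f_n = n/(N*Ts); freqs[0] is the offset (DC)

n_max = np.argmax(np.abs(F[:N // 2]))
print(n_max, freqs[n_max])        # -> 50, 50.0 Hz, as expected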

Spectral Analysis of Biological Signals

Power Spectrum of Stationary Signals

Most FFT functions and algorithms return the complex Fourier coefficients F_n. If we are only interested in the magnitude of the contribution at the corresponding frequency, we can obtain this information by

 P_n = F_n \cdot F_n^* = |F_n|^2

This is the power spectrum of our signal, which tells us how large the contribution of the different frequencies is.
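
With NumPy, this calculation is a one-liner:

import numpy as np

F = np.fft.fft(np.random.randn(512))   # Fourier coefficients of some signal
P = np.real(F * F.conj())              # power spectrum, identical to np.abs(F)**2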

Power Spectrum of Non-stationary Signals

Often one has to deal with signals that change their characteristics over time. In that case, one wants to know how the power spectrum changes with time. The simplest approach is to take only a short segment of data at a time and calculate the corresponding power spectrum. This is called the Short Time Fourier Transform (STFT). However, in that case edge effects can significantly distort the spectrum, since the Fourier transform assumes that the signal is periodic.

"Hanning window"

To reduce such edge artifacts, the signals can be "windowed". An example of such a window is shown in the figure above. While some windows provide better frequency resolution (e.g. the rectangular window), others exhibit fewer artifacts such as spectral leakage (e.g. the Hanning window). For a selected section of the signal, the windowed data are obtained by multiplying the signal with the window (left figure):

Effects of windowing a signal. STFT example.

An example of how cutting out a section of a signal and applying a window to it affects the spectral power distribution is shown in the right figure above. (The corresponding Python code can be found at [1].) Note that decreasing the width of the sample window increases the width of the corresponding power spectrum!
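
A minimal sketch of such a short-time analysis, with half-overlapping, Hanning-windowed segments (window length and overlap are arbitrary choices):

import numpy as np

def stft_power(x, nWin, rate):
    '''Power spectra of half-overlapping, Hanning-windowed segments.'''
    win  = np.hanning(nWin)
    step = nWin // 2
    spectra = []
    for start in range(0, len(x) - nWin + 1, step):
        segment = x[start:start + nWin] * win       # windowed segment
        F = np.fft.fft(segment)
        spectra.append(np.abs(F[:nWin//2])**2)      # keep positive frequencies
    freqs = np.arange(nWin//2) * float(rate) / nWin
    return np.array(spectra), freqs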

Stimulation strength for one time window

To obtain the power spectrum for one selected time window, the first step is to calculate the Fast Fourier Transform (FFT) of the time signal. The result is the sound intensity in the frequency domain, together with the corresponding frequencies. The second step is to concentrate those intensities on a few distinct frequencies ("binning"). The result is a sound signal consisting of a few distinct frequencies, corresponding to the locations of the electrodes in the simulated cochlea. Back-conversion into the time domain gives the simulated sound signal for that time window.

The following Python functions perform this processing for a given signal.

import numpy as np

def pSpect(data, rate):
    '''Calculate the power spectrum and corresponding frequencies, using a Hamming window.'''
    nData = len(data)
    window = np.hamming(nData)
    fftData = np.fft.fft(data * window)
    PowerSpect = fftData * fftData.conj() / nData
    freq = np.arange(nData) * float(rate) / nData
    return (np.real(PowerSpect), freq)

def calc_stimstrength(sound, rate=1000, sample_freqs=(100, 200, 400)):
    '''Calculate the stimulation strength for a given sound.'''

    # Calculate the power spectrum
    Pxx, freq = pSpect(sound, rate)

    # Generate a matrix to sum over the requested frequency bins
    num_electrodes = len(sample_freqs)
    sample_freqs = np.hstack((0, sample_freqs))
    average_freqs = np.zeros([len(freq), num_electrodes])
    for jj in range(num_electrodes):
        average_freqs[(freq > sample_freqs[jj]) & (freq < sample_freqs[jj+1]), jj] = 1

    # Calculate the stimulation strength (the square root has to be taken, to get the amplitude)
    StimStrength = np.sqrt(Pxx).dot(average_freqs)

    return StimStrength
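
As a usage example, consider a test signal containing sine components at 150 Hz and 300 Hz (values chosen arbitrarily); the two electrodes whose frequency bins contain these components receive the strongest stimulation:

rate = 1000
t = np.arange(0, 1, 1. / rate)
sound = np.sin(2 * np.pi * 150 * t) + 0.5 * np.sin(2 * np.pi * 300 * t)

# Three electrode bins: (0-100), (100-200), and (200-400) Hz;
# the 150 Hz and 300 Hz components land in the second and third bin
print(calc_stimstrength(sound, rate))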

Sound Transduction by Pinna and Outer Ear

The outer ear is divided into two parts: the visible part on the side of the head (the pinna), and the external auditory meatus (outer ear canal) leading to the eardrum, as shown in the figure below. With this structure, the outer ear contributes the ‘spectral cues’ for sound localization, enabling a listener not only to detect and identify a sound, but also to localize its source. [2]

The Anatomy of the Human Ear

Pinna Function

The pinna’s cone shape enables it to gather sound waves and funnel them into the outer ear canal. In addition, its various folds make the pinna a resonant cavity that amplifies certain frequencies. Furthermore, the interference effects resulting from sound reflections off the pinna are directionally dependent and attenuate other frequencies. The pinna can therefore be simulated as a filter function applied to the incoming sound, modulating its amplitude and phase spectra.


Frequency Responses for Sounds from Two Different Directions by the Pinna [3]

The resonance of the pinna cavity can be approximated well by 6 normal modes [4]. Among these normal modes, the first mode, which mainly depends on the concha depth (i.e. the depth of the bowl-shaped part of the pinna nearest the ear canal), is the dominant one.



The cancellation of certain frequencies caused by the pinna reflection is called the “pinna notch”. [4] As shown in the right figure [3], sound transmitted by the pinna travels along two paths, a direct path and a longer reflected path. The two paths have different lengths and thereby produce a phase difference. When the path difference equals half the wavelength of the incoming sound, the interference between the direct and reflected paths is destructive; this produces the “pinna notch”. Depending on the pinna shape, the notch frequency typically lies between 6 kHz and 16 kHz. Since the frequency response of the pinna is also directionally dependent, the pinna contributes spatial cues for sound localization.
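
For example, assuming a purely illustrative path difference of \Delta d = 2.5 cm, the half-wavelength condition gives a notch at

 f_{notch} = \frac{c}{2 \cdot \Delta d} = \frac{343 \; m/s}{2 \cdot 0.025 \; m} \approx 6.9 \; kHz

which lies within the reported 6-16 kHz range.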


Ear Canal Function

The outer ear canal is approximately 25 mm long and 8 mm in diameter, following a tortuous path from the entrance of the canal to the eardrum. It can be modeled as a cylinder closed at one end, which leads to a resonant frequency of around 3 kHz. In this way, the outer ear canal amplifies sounds in a frequency range that is important for human speech. [5]
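
This value follows from the quarter-wavelength resonance condition for a tube closed at one end; with the speed of sound c = 343 m/s and the canal length L = 25 mm,

 f_{res} = \frac{c}{4L} = \frac{343 \; m/s}{4 \cdot 0.025 \; m} \approx 3.4 \; kHz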

Simulation of Outer Ear

Based on the main functions of the outer ear, the sound transduction by the pinna and outer ear canal can be simulated with a filter, or a filter bank, once the filter characteristics are known.

Many researchers work on simulating the human auditory system, which includes simulating the outer ear. Below, a Pinna-Related Transfer Function model is introduced first, followed by two MATLAB toolboxes developed by Finnish and British research groups, respectively.

Model of Pinna-Related Transfer Function by Spagnol

This part is based entirely on the paper published by S. Spagnol, M. Geronazzo, and F. Avanzini. [6] In order to model the functions of the pinna, Spagnol developed a reconstruction model of the Pinna-Related Transfer Function (PRTF), the frequency response characterizing how sound is transduced by the pinna. The model is composed of two distinct filter blocks, accounting for the resonance and reflection functions of the pinna, respectively, as shown in the figure below.

General Model for the Reconstruction of PRTFs[6]

There are two main resonances in the frequency range of interest of the pinna [6], which can be represented by two second-order peak filters with a fixed bandwidth f_B = 5 kHz [7]:

H_{res} (z)=  \frac{V_0 (1-h)(1-z^{-2})}{1+2dhz^{-1}+(2h-1)z^{-2}}

where

h=  \frac{1}{1+\tan(\pi\frac{f_B}{f_s})}
 d= -\cos(2\pi \frac{f_C}{f_s} )
V_0=10^{\frac{G}{20}}

where f_s is the sampling frequency, f_C the central frequency, and G the resonance gain (in dB).

For the reflection part, three second-order notch filters of the form given in [8] are designed, with the parameters center frequency f_C, notch depth G, and bandwidth f_B:

H_{refl}(z)=  \frac{1+(1+k)\frac{H_0}{2}+d(1-k)z^{-1}+(-k-(1+k)\frac{H_0}{2})z^{-2}} {1+d(1-k) z^{-1}-kz^{-2}}

where d is the same as previously defined for the resonance function, and

V_0=10^{\frac{-G}{20}}
H_0= V_0-1
k= \frac{\tan(\pi\frac{f_B}{f_s})-V_0}{\tan(\pi\frac{f_B}{f_s})+V_0}

Each of the three filters accounts for a different spectral notch.

Cascading the three notch filters in series after the two parallel peak filters yields an eighth-order filter that models the PRTF (a Python sketch of these filter blocks follows below).
By comparing the synthetic PRTF with the original one, as shown in the figures below, Spagnol concluded that the synthesis model for the PRTF is overall effective. The model may miss notches due to the limitation of the cutoff frequency, and approximation errors may arise from the possible presence of non-modeled interfering resonances.

Original vs Synthetic PRTF Plots[6]
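
The coefficient formulas above translate directly into code. The following Python sketch implements both filter blocks with scipy.signal.lfilter; note that the numerical parameter values (center frequencies, gains, bandwidths) are arbitrary placeholders for illustration, not the values fitted in [6]:

import numpy as np
from scipy.signal import lfilter

def peak_coeffs(fC, fB, G, fs):
    '''Coefficients of the second-order peak filter H_res defined above.'''
    h  = 1. / (1. + np.tan(np.pi * fB / fs))
    d  = -np.cos(2. * np.pi * fC / fs)
    V0 = 10. ** (G / 20.)
    b  = V0 * (1. - h) * np.array([1., 0., -1.])    # numerator
    a  = np.array([1., 2. * d * h, 2. * h - 1.])    # denominator
    return b, a

def notch_coeffs(fC, fB, G, fs):
    '''Coefficients of the second-order notch filter H_refl defined above.'''
    d  = -np.cos(2. * np.pi * fC / fs)
    V0 = 10. ** (-G / 20.)
    H0 = V0 - 1.
    k  = (np.tan(np.pi * fB / fs) - V0) / (np.tan(np.pi * fB / fs) + V0)
    b  = np.array([1. + (1. + k) * H0 / 2., d * (1. - k), -k - (1. + k) * H0 / 2.])
    a  = np.array([1., d * (1. - k), -k])
    return b, a

# Two parallel peak filters, followed by three notch filters in series
fs = 44100.
x  = np.random.randn(4096)            # white-noise probe signal
b1, a1 = peak_coeffs(4000., 5000., 12., fs)
b2, a2 = peak_coeffs(12000., 5000., 6., fs)
y = lfilter(b1, a1, x) + lfilter(b2, a2, x)
for fC, fB, G in [(7000., 1000., 10.), (9500., 1200., 8.), (12500., 1500., 6.)]:
    b, a = notch_coeffs(fC, fB, G, fs)
    y = lfilter(b, a, y)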

HUTear MATLAB Toolbox

Block Diagram of Generic Auditory Model of HUTear

HUTear is a MATLAB toolbox for auditory modeling, developed by the Laboratory of Acoustics and Audio Signal Processing at Helsinki University of Technology [9]. This open-source toolbox can be downloaded from the lab's website [9]. The structure of the toolbox is shown in the right figure.

In this model, there is a block for the “Outer and Middle Ear” (OME) simulation. This OME model is based on Glasberg and Moore [10]. The OME filter is usually a linear filter. The auditory filter is generated taking into account the "Equal Loudness Curves at 60 dB" (ELC), "Minimum Audible Field" (MAF), or "Minimum Audible Pressure at the ear canal" (MAP) corrections. This accounts for the outer-ear simulation. By specifying different parameters with the "OEMtool", you can compare the MAP IIR approximation with the MAP data, as shown in the figure below.

UI of OEMtool from HUTear Toolbox

MATLAB Model of the Auditory Periphery (MAP)

MAP was developed by researchers in the Hearing Research Lab at the University of Essex, England [11]. It is a computer model of the physiological basis of human hearing, distributed as an open-source code package for testing and further developing the model; it can be downloaded from the lab's website [11]. The model structure is shown in the right figure.

MAP Model Structure

Within the MAP model, there is an “Outer Middle Ear (OME)” sub-model, which allows the user to test and create an OME model. Here, the function of the outer ear is modeled as a resonance function, composed of two parallel bandpass filters that represent the concha resonance and the outer-ear-canal resonance, respectively. These two filters are specified by their pass-frequency range, gain, and order. The output of the outer-ear model is obtained by adding the output of the resonance filters to the original sound-pressure wave.

To test the OME model, run the function “testOME.m”. A figure plotting the external ear resonances and the stapes peak displacement will be displayed (as shown in the figure below).

External Ear Resonances and Stapes Peak Displacement from OME Model of MAP

Summary

The outer ear, comprising the pinna and the outer ear canal, can be simulated as a linear filter or a filter bank, reflecting its resonance and reflection effects on incoming sound. It is worth noting that, since the pinna shape varies from person to person, the model parameters, such as the resonant frequencies, depend on the subject.

One aspect not included in the models described above is the Head-Related Transfer Function (HRTF). The HRTF describes how an ear receives a sound from a point source in space. It is not introduced here because it goes beyond the effect of the outer ear (pinna and outer ear canal): it is also influenced by the effects of the head and torso. Ample literature on HRTFs is available for the interested reader.

Simulation of the Inner Ear

The shape and organisation of the basilar membrane mean that different frequencies resonate particularly strongly at different points along the membrane. This leads to a tonotopic organisation of the sensitivity to frequency ranges along the membrane, which can be modeled as an array of overlapping band-pass filters known as "auditory filters".[12] The auditory filters are associated with points along the basilar membrane and determine the frequency selectivity of the cochlea, and therefore the listener’s discrimination between different sounds.[13] They are non-linear and level-dependent, and their bandwidth decreases from the base to the apex of the cochlea as the tuning on the basilar membrane changes from high to low frequencies.[13][14] The bandwidth of the auditory filter is called the critical bandwidth, as first suggested by Fletcher (1940). If a signal and a masker are presented simultaneously, only the masker frequencies falling within the critical bandwidth contribute to the masking of the signal. The larger the critical bandwidth, the lower the signal-to-noise ratio (SNR) and the more the signal is masked.

ERB related to centre frequency. The diagram shows the ERB versus centre frequency according to the formula of Glasberg and Moore.[13]

Another concept associated with the auditory filter is the "equivalent rectangular bandwidth" (ERB). The ERB describes the relationship between the auditory filter, frequency, and the critical bandwidth. An ERB passes the same amount of energy as the auditory filter it corresponds to, and shows how this bandwidth changes with input frequency.[13] At low sound levels, the ERB is approximated by the following equation, according to Glasberg and Moore:[13]


ERB = 24.7 \cdot (4.37 \cdot F + 1)

where the ERB is in Hz and F is the centre frequency in kHz.

It is thought that each ERB corresponds to a stretch of around 0.9 mm on the basilar membrane.[13][14]
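
A minimal Python helper implementing this approximation:

def erb(F):
    '''Equivalent rectangular bandwidth [Hz] for a centre frequency F [kHz].'''
    return 24.7 * (4.37 * F + 1)

print(erb(1.0))    # ERB at a 1 kHz centre frequency: about 133 Hz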

Gammatone Filters

Sample gammatone impulse response.

One filter type used to model the auditory filters is the "gammatone filter". It provides a simple linear filter describing the movement of one location of the basilar membrane for a given sound input, and is therefore easy to implement. Linear filters are popular for modeling different aspects of the auditory system. In general, they are IIR filters (infinite impulse response), which incorporate both feedforward and feedback and are defined by

 \sum_{j = 0}^{m} a_{j+1} \, y(k-j) = \sum_{i = 0}^{n} b_{i+1} \, x(k-i)

where a_1 = 1. In other words, the coefficients a_j and b_i uniquely determine this type of filter. The feedback character of these filters becomes more obvious when the equation is rearranged:

 y(k) = b_1 x(k) + b_2 x(k-1) + \ldots + b_{n+1} x(k-n) - \left( a_2 y(k-1) + \ldots + a_{m+1} y(k-m) \right)

(In contrast, FIR filters, or finite impulse response filters, involve only the feedforward terms: for them, a_i = 0 for i > 1.) A Python transcription of the IIR difference equation is sketched below.

General description of an "Infinite Impulse Response" filter.
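
The following sketch is a direct, if inefficient, transcription of the difference equation above into Python; it produces the same output as scipy.signal.lfilter(b, a, x):

import numpy as np

def iir_filter(b, a, x):
    '''Direct transcription of the IIR difference equation; requires a[0] == 1.'''
    y = np.zeros(len(x))
    for k in range(len(x)):
        for i, bi in enumerate(b):                  # feedforward terms
            if k - i >= 0:
                y[k] += bi * x[k - i]
        for j, aj in enumerate(a[1:], start=1):     # feedback terms
            if k - j >= 0:
                y[k] -= aj * y[k - j]
    return y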

Linear filters cannot account for the nonlinear aspects of the auditory system. They are nevertheless used in a variety of models of the auditory system. The gammatone impulse response is given by


g(t) = at^{n-1} e^{-2\pi bt} \cos(2\pi ft + \phi), \,

where f is the frequency, \phi is the phase of the carrier, a is the amplitude, n is the filter's order, b is the filter's bandwidth, and t is time.

This is a sinusoid with an amplitude envelope which is a scaled gamma distribution function.
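
The impulse response can be generated directly from this formula. A minimal NumPy sketch, with all parameter values chosen purely for illustration (in the literature, the bandwidth parameter b is often tied to the ERB at the centre frequency, but here it is simply set by hand):

import numpy as np

fs  = 44100.                        # sampling rate [Hz]
t   = np.arange(0., 0.025, 1./fs)   # 25 ms time axis
f   = 1000.                         # carrier frequency [Hz]
b   = 125.                          # bandwidth parameter [Hz] (illustrative)
n   = 4                             # filter order
a   = 1.                            # amplitude
phi = 0.                            # carrier phase

# Gammatone impulse response: gamma-distribution envelope times a sinusoid
g = a * t**(n-1) * np.exp(-2*np.pi*b*t) * np.cos(2*np.pi*f*t + phi)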

Variations and improvements of the gammatone model of auditory filtering include the gammachirp filter, the all-pole and one-zero gammatone filters, the two-sided gammatone filter, and filter-cascade models, as well as various level-dependent and dynamically nonlinear versions of these.[15]

For computer simulations, efficient implementations of gammatone models are available for MATLAB and for Python [16].

When working with gammatone filters, we can elegantly exploit Parseval's Theorem to determine the energy in a given frequency band:

 \int_{-\infty}^{\infty} |f(t)|^2 \, dt = \int_{-\infty}^{\infty} |F(\omega)|^2 \, d\omega
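
For sampled data, the discrete counterpart of this theorem can be checked numerically; note the factor 1/N that stems from the unnormalized convention of np.fft.fft:

import numpy as np

x = np.random.randn(1024)
F = np.fft.fft(x)
energy_time = np.sum(np.abs(x)**2)
energy_freq = np.sum(np.abs(F)**2) / len(x)    # factor 1/N from the FFT convention
print(np.allclose(energy_time, energy_freq))   # -> True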

References

  1. T. Haslwanter (2012). "Short Time Fourier Transform [Python]". private communications. http://work.thaslwanter.at/CSS/Code/stft.py. 
  2. Semple, M.N. (1998), "Auditory perception: Sounds in a virtual world", Nature (Nature Publishing Group) 396 (6713): 721-724, doi:10.1038/25447 
  3. a b http://tav.net/audio/binaural_sound.htm
  4. a b Shaw, E.A.G. (1997), "Acoustical features of the human ear", Binaural and spatial hearing in real and virtual environments (Mahwah, NJ: Lawrence Erlbaum) 25: 47 
  5. Federico Avanzini (2007-2008), Algorithms for sound and music computing, Course Material of Informatica Musicale (http://www.dei.unipd.it/~musica/IM06/Dispense06/4_soundinspace.pdf), pp. 432 
  6. a b c d Spagnol, S., Geronazzo, M., and Avanzini, F. (2010), "Structural modeling of pinna-related transfer functions", In Proc. Int. Conf. on Sound and Music Computing (SMC 2010) (Barcelona): 422-428 
  7. S. J. Orfanidis, ed., Introduction To Signal Processing. Prentice Hall, 1996.
  8. U. Zölzer, ed., Digital Audio Effects. New York, NY, USA: J.Wiley & Sons, 2002.
  9. http://www.acoustics.hut.fi/software/HUTear/
  10. Glasberg, B.R. and Moore, B.C.J. (1990), "Derivation of auditory filter shapes from notched-noise data", Hearing research (Elsevier) 47 (1-2): 103-138 
  11. http://www.essex.ac.uk/psychology/department/research/hearing_models.html
  12. Munkong, R. (2008), IEEE Signal Processing Magazine 25 (3): 98--117, doi:10.1109/MSP.2008.918418, Bibcode2008ISPM...25...98M 
  13. a b c d e f Moore, B. C. J. (1998). Cochlear hearing loss. London: Whurr Publishers Ltd.. ISBN 0585122563. 
  14. a b Moore, B. C. J. (1986), "Parallels between frequency selectivity measured psychophysically and in cochlear mechanics", Scand. Audio Suppl. (25): 129–52 
  15. R. F. Lyon, A. G. Katsiamis, E. M. Drakakis (2010). "History and Future of Auditory Filter Models". Proc. ISCAS. IEEE. http://research.google.com/pubs/archive/36895.pdf. 
  16. T. Haslwanter (2011). "Gammatone Toolbox [Python]". private communications. http://work.thaslwanter.at/CSS/Code/GammaTones.py. 

