Headphone simulation of free-field listening. I: Stimulus synthesis
Frederic L. Wightman
Department of Psychology and Waisman Center, University of Wisconsin--Madison, Madison, Wisconsin 53705

Doris J. Kistler
Waisman Center, University of Wisconsin--Madison, Madison, Wisconsin 53705

(Received 14 May 1988; accepted for publication 12 October 1988)
This article describes techniques used to synthesize headphone-presented stimuli that simulate the ear-canal waveforms produced by free-field sources. The stimulus synthesis techniques involve measurement of each subject's free-field-to-eardrum transfer functions for sources at a large number of locations in free field, and measurement of headphone-to-eardrum transfer functions with the subject wearing headphones. Digital filters are then constructed from the transfer function measurements, and stimuli are passed through these digital filters. Transfer function data from ten subjects and 144 source positions are described in this article, along with estimates of the various sources of error in the measurements. The free-field-to-eardrum transfer function data are consistent with comparable data reported elsewhere in the literature. A comparison of ear-canal waveforms produced by free-field sources with ear-canal waveforms produced by headphone-presented simulations shows that the simulations duplicate free-field waveforms within a few dB of magnitude and a few degrees of phase at frequencies up to 14 kHz.
PACS numbers: 43.66.Qp, 43.66.Pn, 43.66.Yw, 43.88.Si [WAY]
INTRODUCTION
Research on the acoustical and psychological bases of human sound localization suggests that the primary acoustical cues used to determine sound source position are the interaural differences in time of arrival (ΔT) of a sound wave at a listener's two ears, the interaural differences in overall intensity (ΔI) of the sound, and the position-dependent filtering caused by the interaction of an incoming sound wave with the folds of the pinnae (for an excellent review of this literature, see Butler, 1975). Lord Rayleigh's classic "duplex theory" (Strutt, 1907), which has motivated most of the modern research on sound localization, ignored pinna filtering and held that apparent sound source position was determined entirely by ΔT cues at low frequencies and ΔI cues at high frequencies. In the past quarter century, many studies have shown that cues provided by pinna filtering are more important than previously believed, especially for localizing sounds on the median plane (where ΔT and ΔI cues are minimized), and for establishing the "externalized" (out-of-head) character of sounds in the natural environment (Blauert, 1969; Butler and Belendiuk, 1977; Butler and Planert, 1976; Gardner and Gardner, 1973; Hebrank and Wright, 1974; Plenge, 1974). A recent model of sound localization also gives recognition to the importance of pinna cues (Searle et al., 1976).

Even though the recent research recognizes the importance of pinna cues, there have been relatively few experiments in which these cues have been manipulated systematically. Most of the available data have come from experiments in which localization performance was measured before and after an experimental manipulation designed to reduce or remove pinna cues. These manipulations have included filling the pinna folds with putty (Gardner and Gardner, 1973; Oldfield and Parker, 1984), covering the pinnae with blocks (Gardner and Gardner, 1973), and inserting tubes into the ear canals (Jongkees and Groen, 1946; Fisher and Freedman, 1968). All of the studies reported decrements in localization acuity following pinna deformation. However, since the techniques used did not allow precise, systematic control over the stimulus, the experiments provided only limited information about the role of these cues.

Presenting stimuli over headphones allows complete specification of the stimuli at a listener's ears and thus solves the stimulus control problem. Most of what we have learned about processing of ΔT and ΔI cues has come from experiments in which the stimuli were delivered by headphones. However, the extent to which the results of these experiments could be generalized to free-field listening conditions has been questioned, mostly because of the unnatural quality of sounds heard over headphones (e.g., they are typically heard as originating inside the listener's head). The fact that these experiments have been called "lateralization" rather than "localization" experiments represents explicit recognition of this lack of generalizability. Nevertheless, there are undeniable advantages to headphone stimulus delivery. Several investigators have attempted to bring these advantages to bear on various questions about human sound localization other than those relating to processing of ΔT and ΔI cues. For example, Bloom (1977) and Watkins (1978) attempted to simulate source elevation changes by altering the spectrum of headphone-delivered stimuli in a manner analogous to pinna filtering. Other researchers have studied the apparent locations of headphone-presented sounds which had been "binaurally" recorded, with microphones placed in the ears of a dummy head (e.g., Plenge, 1974) or at the ear-canal entrance of a human listener (e.g., Butler and Belendiuk, 1977).
These pioneering experiments have produced important, suggestive data. However, the results are not readily generalized to free-field localization conditions, since the experiments did not directly assess the degree to which the headphone stimuli reproduced either the acoustical or the psychological features of a free-field stimulus.

We have attempted to produce a veridical simulation of the free-field listening experience by using digital techniques to synthesize headphone-presented stimuli. The synthesis techniques and objective tests of their acoustical adequacy are described here; psychophysical tests of the perceptual adequacy of the simulation are described in the companion article (Wightman and Kistler, 1989). The basic assumption that guides our approach is that, if the acoustical waveforms at a listener's eardrums are the same under headphones as in free field, then the listener's experience should also be the same. This assumption is an obvious oversimplification, in that it denies the relevance of head movements, visual cues, and other localization cues. However, the success we have had in the psychophysical validation experiments (Wightman and Kistler, 1989) indicates that within a limited range of stimulus conditions, the assumption may be warranted.

I. METHOD

Our approach is based on well-understood linear filtering principles. Let x₁(t) represent an electrical signal that drives a loudspeaker in free field, and let y₁(t) represent the resultant electrical signal from a probe microphone positioned at a listener's eardrum. Similarly, let x₂(t) represent an electrical signal that drives a headphone, with y₂(t) the resultant microphone response. Given x₁(t), our goal is to produce x₂(t) such that y₂(t) equals y₁(t). We do this by designing a linear filter that transforms x₁(t) into the desired x₂(t).

The design of the appropriate filter is best described in the frequency domain. Thus X₁(jω), or simply X₁, is the Fourier transform of x₁(t), Y₁ is the transform of y₁(t), and so forth. The probe microphone's response to x₁(t) can be written:

    Y₁ = X₁LFM,    (1)

where L is the loudspeaker transfer function, F the free-field-to-eardrum transfer function (sometimes called the head-related transfer function, or HRTF), and M the microphone transfer function. The probe microphone's response to x₂(t) can be written

    Y₂ = X₂HM,    (2)

where H represents the headphone-to-eardrum transfer function. Setting Y₁ = Y₂ and solving for X₂ yields

    X₂ = X₁LF/H.    (3)

This equation shows that the desired filter transfer function T is given by

    T = LF/H.    (4)

Thus, if the signal x₁(t) is passed through this filter and the resultant x₂(t) is transduced by the headphone, the signal recorded by the probe microphone at the eardrum will be y₂(t), the same signal produced by the loudspeaker in free field. This is represented in the frequency domain by substituting the right side of Eq. (3) for X₂ in Eq. (2).

The filter described in (4) applies only to a single free-field loudspeaker position and one ear. To synthesize each stimulus, then, we must design a pair of filters (one for each ear) for each desired free-field source position.

The first phase of our synthesis procedure involves measurement of the free-field-to-eardrum transfer function (HRTF) for each ear of a subject, for a large number of sound source positions. In practice, what we actually measure is a quantity like Y₁ in Eq. (1), which includes not only the free-field-to-eardrum characteristics (F), but also the characteristics of the test signal (X₁), loudspeaker (L), and microphone (M). A headphone-to-eardrum transfer function [like Y₂ in Eq. (2)] is also measured for each ear of the same subject. In the second phase of the synthesis, each desired experimental stimulus is digitally filtered. The transfer functions of the filters (one for the left ear stimulus and one for the right) are defined in Eq. (4). Ideally, when the filtered stimuli are presented to the subject over the headphones, the waveforms reaching the eardrums should be identical to those produced by a free-field stimulus. The error in the procedure is quantified by recording the stimuli at the eardrums in the free-field and headphone conditions and computing the difference.

A. Transfer function measurement

Both free-field and headphone transfer function measurements were made using a technique loosely based on the procedure described by Mehrgardt and Mellert (1977). A wideband, noiselike signal was presented (either by loudspeaker or headphone) repetitively, and the response at the listener's eardrum was obtained by averaging the output of a probe microphone. The Fourier transform of this response was divided by the Fourier transform of the signal to produce an estimate of the transfer function in question. The signal was 20.48 ms in duration and was computed via an inverse DFT so that both the amplitude and phase components of its spectrum could be tailored to maximize the signal-to-noise ratio in the response recordings.¹ Specifically, the amplitude spectrum of the signal was flat from 200 to 4000 Hz, where it increased abruptly by 20 dB. Thereafter, it was flat to 14 kHz. The signal contained no energy below 200 Hz or above 14 kHz. The phase spectrum was computed to minimize the peak factor of the signal (Schroeder, 1970). The signal was output continuously (hence, with a repetition frequency of about 50 Hz), via a 16-bit D/A converter (controlled by an IBM-PC) at a rate of 50 kHz. No antialiasing filters were used. For the free-field measurements, the signal was transduced by a miniature loudspeaker (Realistic Minimus-7). For the headphone measurements, the signal was transduced by a pair of Sennheiser HD 340 headphones, driven in phase. Signals were presented at approximately 70 dB SPL, a level chosen to reduce the contaminating effects of the acoustic reflex.
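For concreteness, the sketch below shows one way such a test signal could be generated. It is not the authors' implementation; NumPy is assumed, and the quadratic Schroeder-style phase rule is one standard realization of the minimum-peak-factor criterion cited above. The duration, sampling rate, band edges, and 20-dB amplitude step are the values given in the text.

```python
import numpy as np

fs = 50_000                      # sampling rate (Hz), as in the text
N = 1024                         # 20.48 ms at 50 kHz -> 1024 samples, 48.8-Hz bin spacing
freqs = np.fft.rfftfreq(N, d=1.0 / fs)

# Stepped amplitude spectrum: flat 200-4000 Hz, 20 dB higher from 4-14 kHz, zero elsewhere.
amp = np.zeros_like(freqs)
amp[(freqs >= 200) & (freqs < 4000)] = 1.0
amp[(freqs >= 4000) & (freqs <= 14_000)] = 10.0   # +20 dB

# Schroeder (1970)-style phases keep the peak factor low; the quadratic rule below
# is one classic choice (an assumption -- the article states only that phases were
# chosen to minimize the peak factor).
k = np.arange(len(freqs))
phase = -np.pi * k * (k + 1) / len(freqs)

spectrum = amp * np.exp(1j * phase)
signal = np.fft.irfft(spectrum, n=N)     # one 20.48-ms period, repeated at ~50 Hz during playback
signal /= np.max(np.abs(signal))         # normalize for 16-bit D/A output
```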
The acoustical response at the eardrum was measured with a miniature electret microphone (Etymotic) coupled to a silicone rubber probe tube with an outer diameter of less than 1 mm. This probe microphone system, with its matching preamplifier and compensation network, had a sensitivity of about 50 mV/Pa, and a frequency response which was relatively flat (±2.5 dB) from 200 Hz to 14 kHz. Two matched microphones were used, one for each ear, and the responses from both were measured simultaneously. The amplified microphone outputs were digitized (simultaneously) using 16-bit A/D converters (controlled by the IBM-PC) at a 50-kHz sampling rate. The responses to 1000 periods of the signal were averaged with floating-point precision, a spectral resolution of 48.8 Hz, and a worst-case signal-to-noise ratio of well over 20 dB in the range 200 Hz-14 kHz.

The acoustical measurements were made with the tips of the probe tubes positioned roughly in the middle of the subject's ear canal, about 1-2 mm from the eardrum. This position was chosen in order to be certain the measurements would capture all direction-dependent effects (which may not be the case for measurements at the ear-canal entrance) and to avoid standing-wave nulls at high frequencies. At 14 kHz, the highest frequency of interest in our work, the first standing-wave null would occur at about 6 mm from the eardrum (assuming the ear canal is a uniform tube closed at one end).
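The 6-mm figure follows from simple quarter-wave geometry. With the canal modeled as a uniform tube closed at the eardrum, the pressure maximum sits at the closed end and the first pressure null a quarter wavelength away; taking a nominal speed of sound of c ≈ 343 m/s (an assumed value, not given in the article),

    d = λ/4 = c/(4f) = 343/(4 × 14 000) m ≈ 6.1 mm,

so a probe tip placed 1-2 mm from the eardrum remains well inside the first null even at 14 kHz.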
were held in place with custom (i.e., differentfor each subject) Lucite earmold shells, trimmed so that they did not
extend into the concha when inserted, and bored out to a
thickness
of less than 0.5 min. With
the earmold
shell in
place,the probetubewasinsertedinto a thin, semirigidguide
tube that was cemented to the wall of the earmold
shell. The
length of each guide tube was calibrated, at the time the
earmoldassemblywasmade,sothat with the probeinserted
asfar asits collar-stopwould allow, the probetip wasabout 1
mm from the eardrum. This calibration was accomplished
by insertinga human hair into the guidetube until the subject indicated that the hair had touched the eardrum. The
hair wasthenmarkedandwithdrawnsothat theappropriate
length for the guide tube could then be determined. The
body of the microphonewas left hangingat the side of the
subject'sear. Figure l(a) showsthe earmold shell with microphoneattached,and Fig. 1(b) showsthe whole assembly
in placein a subject'sear.
For free-field measurements, the periodic wideband signal was transduced by one of eight loudspeakers, each positioned 1.38 m from the subject in an anechoic chamber. The loudspeakers were mounted on a semicircular arc (2.76-m diameter), the ends of which were attached directly above and directly below the subject. The loudspeakers were aimed at the position of the subject's head in order to minimize the influence of loudspeaker directionality (which we found to be virtually nonexistent within 10 deg of the speaker axis). The entire arc assembly could be rotated (by hand crank) around the vertical axis, and positioned with a precision of about 0.5 deg. The subject was seated on an adjustable stool (with back) so that his/her head was at the center of the arc. The speakers were mounted at -36-, -18-, 0-, +18-, +36-, +54-, +72-, and +90-deg elevation relative to the horizontal plane passing through the subject's ears. The measurements were made at all elevations except +72 and +90 deg, and at all azimuths around the circle in 15-deg steps. Thus transfer functions were measured from both ears at 144 source positions.

FIG. 1. Photographs of the custom earmold shell microphone holder used for acoustical transfer function measurements. (a) The shell with microphone probe tube inserted. The major divisions on the scale are 1 cm and the minor divisions are 1 mm. (b) The assembly in place in a subject's ear.

A typical measurement session lasted about an hour. After the microphones were fitted in the subject's ear canals, the subject was seated in the anechoic chamber, and instructed on how to set the azimuth of the loudspeaker arc using the hand crank to turn the arc. Then, with the subject alone in the chamber, the arc was moved to the first azimuth setting (usually directly behind the subject). Depending on the condition under study, the subject either looked directly forward and held his/her head still, or bit down on a bitebar, which could be attached rigidly to the subject's seat. After the subject signaled the experimenter that all was ready, measurements were made in rapid succession at all six elevations, in both ears simultaneously. About 2 min were required to make the six pairs of measurements at each azimuth. The subject then moved the arc to the next location and the sequence was repeated.
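The sampling grid just described can be written down directly; the short, purely illustrative sketch below enumerates it and confirms the count of 144 positions per ear stated above.

```python
# Source-position grid implied by the text: azimuths every 15 deg around the
# circle and the six speaker elevations actually measured (the +72- and
# +90-deg speakers were excluded), giving 24 x 6 = 144 positions per ear.
azimuths = range(0, 360, 15)                 # 24 azimuths, in degrees
elevations = (-36, -18, 0, 18, 36, 54)       # 6 elevations, in degrees
positions = [(az, el) for az in azimuths for el in elevations]
assert len(positions) == 144
```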
Finally, after measurements had been made at all 24 azimuths, the subject put on the headphones, taking care not to disturb the position of the microphones, and a pair of transfer function measurements was taken with the headphones being used to transduce the wideband test signal. Ten young adults (six females, four males), with no history of hearing problems, participated as subjects.

B. Digital filter construction

Each raw data record consisted of the time-domain representation of a signal recorded from a probe microphone in a subject's ear canal. This signal included not only the direction-specific characteristics of the subject's outer ear (and head, shoulders, etc.), but also the characteristics of the original test signal, the loudspeaker (or headphones), and the measuring microphone. To obtain an uncontaminated free-field-to-eardrum transfer function (HRTF) or an uncontaminated headphone-to-eardrum transfer function, the effects of the signal, loudspeaker (or headphone), and microphone must be removed. This could be done by transforming the raw data record into the frequency domain (via an FFT) and dividing by the frequency domain representation of the characteristics of the signal, the microphone, and the loudspeaker or headphone. In our case, to produce the digital filters required for stimulus synthesis, we divided the frequency domain representations of the signals recorded in free field by the frequency domain representations of the same signals recorded under headphones.² Since the stimulus and microphone characteristics appear in both the numerator and denominator terms, they cancel. The loudspeaker characteristics were not removed from the digital filters used to synthesize stimuli. All digital signal processing, including test stimulus generation, FFT computations, digital filter design and implementation, and waveform analysis, was accomplished on a DEC VAX-11/750 computer using the ILS (Signal Technology, Inc.) software package.
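A minimal sketch of this construction is given below, assuming NumPy arrays that hold the period-averaged ear-canal recordings; the function name, argument names, and the small floor on the denominator are illustrative assumptions, not the authors' ILS-based implementation. Dividing the free-field recording spectrum by the headphone recording spectrum cancels the common test-signal and microphone terms, leaving the LF/H of Eq. (4) (the loudspeaker term L is deliberately left in, as noted above).

```python
import numpy as np

def synthesis_filter(freefield_avg, headphone_avg, n_taps=None, eps=1e-6):
    """Frequency response T = LF/H from period-averaged ear-canal recordings.

    freefield_avg : averaged response to the test signal from one free-field source
    headphone_avg : averaged response to the same signal presented over the headphone
    Both are one signal period long (1024 samples at 50 kHz in the paper).
    """
    Y_free = np.fft.rfft(freefield_avg)
    Y_phone = np.fft.rfft(headphone_avg)
    # Signal and microphone terms appear in numerator and denominator and cancel.
    # A small floor guards against deep notches in the headphone response (the
    # footnote reports 20- to 30-dB notches that, in practice, caused no trouble).
    denom = np.where(np.abs(Y_phone) < eps, eps, Y_phone)
    T = Y_free / denom
    # Impulse response of the synthesis filter, optionally truncated to an FIR length.
    h = np.fft.irfft(T)
    if n_taps is not None:
        h = h[:n_taps]
    return T, h
```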
II. RESULTS AND DISCUSSION

We have made transfer function measurements from both ears of ten subjects with sources at 144 positions. For most of the subjects, we have several sets of measurements from a subset of the source positions, allowing quantitative assessments of the reliability of the measurements. While a complete analysis of these data would be inconsistent with the purpose of this article, we feel it is important to assess the stability and the validity of the measurements and the degree to which our synthesis procedures can actually duplicate a free-field stimulus with headphone presentation.

The measurements will be evaluated in three ways. First, we will present data on the stability of the measurements from a single subject and on the variability of the measurements from subject to subject. The data will show that for frequencies under 14 kHz, the measurements are stable, provided the signal-to-noise ratio is high. These stability data are difficult to evaluate, however, because it is not known how much instability can be tolerated for a successful simulation. The data will also show that the between-subjects variability in the measurements is quite large, and that the pattern of between-subject differences across frequency is independent of source position. Second, we will present the results of an acoustical verification of our simulation technique, in which ear-canal recordings of real free-field stimuli and synthesized headphone-presented stimuli are directly compared. Third, we will show that our measurements are roughly consistent with comparable measurements reported elsewhere in the literature.

A. Error and variability in the HRTF measurements

For the purposes of the measurements reported here, we make a distinction between "error" and "variability." Error refers to the influence of those factors that affect the repeatability of the measurements obtained on a single subject. Variability refers to the differences in the measurements from subject to subject.

Those components of error that are probably most relevant to the success of our simulations are the variations from measurement to measurement caused by head movements, the variations caused by the uncertainty of microphone placement, and the variations caused by the uncertainty of headphone placement. Figure 2 illustrates the magnitude of these errors for a typical subject. For purposes of illustration, we have represented the HRTF magnitudes in this figure both as raw, unprocessed DFT output and as 1/3-oct band levels. The 1/3-oct analysis, which smooths out high-frequency detail, was done not only for computational convenience but also because we feel the 1/3-oct representation reflects cochlear output more accurately than the raw DFT representation. Most of the analyses of HRTFs presented later in this article will concern only the 1/3-oct results. Figure 2(a) and (b) shows the raw HRTFs and the mean (and standard deviation) of the 1/3-oct band levels from ten separate measurements of the HRTF for both ears from a source on the horizontal plane directly facing the left ear. The subject's head was fixed by means of a bitebar during each measurement and the earmold assembly was removed after each measurement. Thus these panels illustrate the influence of variability in microphone placement. Figure 2(c) and (d) shows the results from two comparable sets of ten measurements of the left and right HRTFs from the same source position and subject. For these measurements, the subject's head was not restrained during the measurements, and the earmold assembly was left in place until all ten measurements had been completed. Thus these panels illustrate the variability resulting from head movement during measurement. Note that, at least for the ear facing the source (where signal-to-noise ratio is best), the ±2 s.d. interval is less than 5 dB wide, even at high frequencies. Figure 2(e) and (f) provides an indication of the variability in the headphone transfer function measurements that results from uncertainty of headphone placement. The results from ten different headphone placements are shown. For these measurements, the earmold assembly was left in place and the headphones were repositioned after each of the ten measurements.

FIG. 2. Illustrations of typical variability in the acoustical transfer function measurements. The dB scale is based on an arbitrary reference; the spectrum level of the measurement system noise is at approximately -40 dB on this scale. Each panel shows raw DFT magnitude functions from ten measurements. In addition, each panel shows the means (solid line) and the ±2 s.d. interval (dashed lines) of 1/3-oct band magnitude levels from the ten measurements (displaced vertically by 10 dB for visibility). (a) and (b) Measurements from a subject's left and right ear, respectively, of the free-field-to-eardrum transfer function for a source directly opposite the left ear, at ear level. The subject's head was held in place with a bitebar during the measurements and the microphone assembly was removed after each measurement. (c) and (d) Comparable data except that, in this case, the microphone assembly was left in place for all ten measurements, and the subject's head was not held in place. (e) and (f) The variability resulting from headphone placement. For these measurements, the microphone assembly was left in place, and the headphones were removed and replaced after each measurement.
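The article does not give the exact band definitions used for the 1/3-oct analysis, so the sketch below assumes conventional 1/3-octave spacing and simple power summation of DFT bins; it is meant only to illustrate the kind of smoothing described above.

```python
import numpy as np

def third_octave_levels(spectrum, fs, f_lo=200.0, f_hi=14_000.0):
    """Collapse a one-sided DFT spectrum into 1/3-oct band levels (dB).

    spectrum : complex (or magnitude) rfft output of one averaged response period
    fs       : sampling rate in Hz
    """
    n_bins = len(spectrum)
    freqs = np.linspace(0.0, fs / 2.0, n_bins)   # bin frequencies of the rfft grid
    power = np.abs(spectrum) ** 2

    centers, levels = [], []
    fc = f_lo
    while fc <= f_hi:
        lo, hi = fc / 2 ** (1 / 6), fc * 2 ** (1 / 6)   # band edges, 1/3 octave wide
        in_band = (freqs >= lo) & (freqs < hi)
        if np.any(in_band):
            centers.append(fc)
            levels.append(10.0 * np.log10(power[in_band].sum()))
        fc *= 2 ** (1 / 3)                               # next 1/3-oct center
    return np.array(centers), np.array(levels)
```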
Substantial intersubject variability in the HRTF for a single source position is to be expected, given differences in head size and pinna shape. This HRTF variability has been reported before (e.g., Shaw, 1965) and is prominent in our data. In order to evaluate the intersubject variability, we transformed each subject's left-ear HRTF measurements from 48 source positions (all six elevations at each of eight azimuths, from 0 to 315 deg at 45-deg intervals) into 1/3-oct band levels. We then analyzed these band levels (using a principal components analysis) to determine if the pattern of intersubject variability across frequency changed with source position. This analysis revealed no effect of source position (a single component accounted for 94% of the total variance). Figure 3 shows the intersubject standard deviation of the HRTF magnitude (in dB) as a function of band center frequency, averaged across the 48 source positions. The dashed lines represent ±2 s.d. of the intersubject standard deviation estimates across the 48 source positions. As Fig. 3 shows, the variability in HRTF from subject to subject grows with frequency until it reaches a peak of almost 8 dB between 7 and 10 kHz. The intersubject standard deviation is significantly lower on each side of the peak. Note that an intersubject standard deviation of 7 dB implies that individual HRTFs would be expected to differ by as much as 28 dB (±2 s.d.).

FIG. 3. Intersubject variability in the HRTF measurements (1/3-oct band levels). The solid line represents the mean standard deviation of 1/3-oct band levels from ten subjects. The standard deviation in each 1/3-oct band is computed from the 1/3-oct band levels of the HRTF magnitude measurements from the ten subjects at a single source position; the mean standard deviation is computed by averaging the standard deviations in each 1/3-oct band across the 48 different source positions. The dashed lines represent the ±2 s.d. interval of the standard deviation estimates and reflect the extent to which the pattern of intersubject variability changes across the 48 positions.
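One plausible arrangement of that principal-components check is sketched below, assuming a matrix with one row per source position and one column per 1/3-oct band, each entry holding the across-subject standard deviation in that band at that position; the data layout is an assumption, since the article does not specify it. A first component that captures nearly all the variance (the 94% reported above) indicates that the frequency pattern of intersubject variability is essentially the same at every position.

```python
import numpy as np

def first_component_vaf(X):
    """Fraction of total variance captured by the first principal component.

    X : (n_positions, n_bands) array; here, each row could hold the across-subject
        standard deviation of 1/3-oct HRTF levels at one source position
        (one plausible reading of the analysis described in the text).
    """
    Xc = X - X.mean(axis=0, keepdims=True)       # center each band
    s = np.linalg.svd(Xc, compute_uv=False)      # singular values of the centered data
    var = s ** 2                                 # variance along each principal direction
    return var[0] / var.sum()

# A value near 0.94 would correspond to the single component accounting for 94% of
# the total variance, i.e., essentially no effect of source position.
```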
B. Acoustical verification of the simulation procedure

The working assumption of our simulation efforts was that if, using headphones, we could produce ear-canal waveforms identical to those produced by a free-field source, we would duplicate the free-field experience. The only conclusive test of this assumption would come from psychophysical experiments in which free-field and simulated free-field listening were directly compared (see Wightman and Kistler, 1989). To complement the psychophysical experiments, we evaluated the extent to which our simulation techniques could produce "acoustically correct" waveforms in subjects' ear canals. In other words, we compared ear-canal waveforms produced by free-field sources with ear-canal waveforms produced by simulated free-field sources.

The first step of the acoustical verification procedure was to measure the HRTF of a subject for a given source position. Then, the headphone transfer function was measured. Finally, the frequency domain representation of an FIR digital filter was computed by dividing the HRTF by the headphone transfer function. Normally, this digital filter would be used to process a stimulus for simulated free-field listening. In theory, if such a processed stimulus is presented over the headphones, the headphone response cancels, leaving the free-field HRTF characteristics imposed on the stimulus. To evaluate the extent to which the theory would hold in our situation, we used the impulse response of the digital filter as a stimulus, presented that stimulus to listeners over headphones, and recorded the result.³ With no error, the Fourier transform of the recorded stimulus should equal the originally measured HRTF. Figure 4 shows the differences in 1/3-oct bands (magnitude in dB and phase in degrees) between the original HRTFs and the transforms of the test stimulus recordings for six of the ten subjects at four source positions.⁴ In general, it can be seen that the error is less than 1-2 dB in magnitude and 10 deg in phase. Most important is the fact that the error is independent of source position. A principal components analysis of the patterns of errors across frequency reveals that the variance accounted for by the first extracted component (a measure of the overall similarity of the patterns) is greater than 92% of the total in all cases. In other words, the errors in our procedure are comparable at all source positions, and thus are similar to those that would be introduced by a slight change in headphone characteristics. We feel it is highly unlikely that this error is perceptually significant.

FIG. 4. Results from the acoustical verification experiments. Data are shown from six subjects and four source positions each. The different symbols represent different source positions. (a) The 1/3-oct amplitude difference in dB, measured at the listener's eardrum, between the spectrum of a free-field stimulus and the spectrum of a synthesized stimulus presented over headphones. (b) The phase difference in degrees. The percent variance accounted for (VAF) in each panel reflects the extent to which the pattern of differences as a function of frequency is constant across the four positions.

One possible source of the error revealed by our acoustical verification experiment is the acoustic reflex. The stimuli in the experiment, which produced the data shown in Fig. 4, were delivered at about 70 dB SPL in the headphone condition, and about 70 dB SPL in free field (exact sound-pressure levels in the two conditions were not recorded). When the headphone stimuli were delivered at 90 dB SPL, the differences between free-field and headphone recordings were much greater, as shown for two subjects in Fig. 5. Since 90 dB SPL is well within the linear range for all our equipment, we feel the differences between the 90- and 70-dB SPL conditions are probably caused by changes in ear-canal acoustics brought about by the acoustic reflex. If so, it is possible that the usual practice of making measurements of HRTFs at high stimulus levels in order to maximize signal-to-noise ratio leads to contamination of the data with acoustic reflex effects.

FIG. 5. Same as Fig. 4, except only magnitude data from two of the subjects are shown. For these cases, the headphone stimulus was 20 dB more intense than for the conditions represented in Fig. 4.
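As a sketch of the band-wise comparison underlying Fig. 4, assume the originally measured HRTF and the transform of the recorded headphone playback are available on a common frequency grid; the variable names and the banding scheme are illustrative assumptions, not a description of the authors' analysis.

```python
import numpy as np

def verification_error(hrtf_measured, playback_spectrum, freqs, band_edges):
    """Magnitude (dB) and phase (deg) differences per band, as plotted in Fig. 4.

    hrtf_measured     : complex HRTF originally measured at the eardrum
    playback_spectrum : complex transform of the ear-canal recording of the
                        synthesized stimulus played over headphones
    freqs             : frequency (Hz) of each bin
    band_edges        : list of (f_lo, f_hi) tuples, e.g., 1/3-oct bands 200 Hz-14 kHz
    """
    ratio = playback_spectrum / hrtf_measured          # ideally 1.0 at every bin
    mag_db, phase_deg = [], []
    for lo, hi in band_edges:
        sel = (freqs >= lo) & (freqs < hi)
        mag_db.append(20.0 * np.log10(np.abs(ratio[sel])).mean())
        phase_deg.append(np.degrees(np.angle(ratio[sel])).mean())
    return np.array(mag_db), np.array(phase_deg)
```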
C. Comparison with other HRTF data in the literature

More than 40 years ago, the first acoustical measurements of the transfer function of the external ear were reported by Wiener and Ross (1946). Since that time, more than a dozen additional studies of external ear acoustics have been published (Blauert, 1983, provides a comprehensive summary of this work). Nearly all of the published studies include measurements only of the magnitude of the external ear transfer function, and most of the measurements apply to source positions on the horizontal plane.

1. Magnitude of the HRTF

The most comprehensive and readily accessible data on the magnitude of the HRTF are those reported by Shaw (1974) and by Mehrgardt and Mellert (1977). Shaw's data, which apply to source locations on the horizontal plane only, represent a compilation of measurements obtained from several laboratories using a variety of techniques. Mehrgardt and Mellert (after whose techniques our own are modeled) report measurements from sources both on the horizontal plane and on the vertical median plane. Figure 6 shows average HRTF data from Shaw (1974), Mehrgardt and Mellert (1977), and our own study for a source directly in front of the subject on the horizontal plane. The figure shows that for frequencies below 2.5 kHz, our data are in good agreement with those from the earlier studies. Above 2.5 kHz, however, there are sizable (5-10 dB) discrepancies, which are greatest near 10 kHz. A similar pattern emerges when we compare our data to those from the earlier studies from other source positions on the horizontal and vertical median plane. Our HRTF estimates are about 5 dB lower than others between 2.5 and 7 kHz, and show a deep spectral notch at around 7-10 kHz, which is not shown in the other data.

FIG. 6. Average HRTF for a source at 0-deg azimuth, 0-deg elevation, from the ten subjects in the current study (dark solid line) compared to data from the studies of Shaw (1974, light solid line) and Mehrgardt and Mellert (1977, dashed line).

The discrepancies between our HRTF data and others in the literature are neither surprising nor worrisome. The differences could have arisen from any one or more of several sources, among which are the following.

(1) The data from Shaw (1974) and Mehrgardt and Mellert (1977) were not obtained by arithmetically averaging the HRTF measurements across subjects. Rather, a scheme of averaging was used which would tend to preserve better the details of the HRTF functions. In the case of Mehrgardt and Mellert, for example, the individual HRTF functions were shifted along the log-frequency axis until the best subject-to-subject match was found, and then the functions were averaged. Our data were obtained by arithmetic averaging.

(2) Neither Shaw's nor Mehrgardt and Mellert's HRTF data were obtained by measuring at the eardrum. Most of the data compiled by Shaw (1974) represent measurements taken at the entrance to the ear canal and later corrected by an independent measurement of the canal entrance to eardrum transfer function. The data reported by Mehrgardt and Mellert (1977) were similarly corrected. Our measurements were made at the eardrum, and are thus not subject to the errors that must be assumed to accompany any kind of post hoc correction.

(3) The acoustics of our measuring conditions were different from those in previous studies. For example, our microphone probe tube was probably much smaller than those used in previous studies (e.g., our probe tube had an outer diameter of about 0.8 mm, while the probe used by Mehrgardt and Mellert had a 1.7-mm outer diameter), and our measurements were made with a bored-out earmold in place, while in other studies the ear canal was perhaps less occluded (except by the probe tube itself). These acoustical differences, which would affect primarily the high-frequency HRTF estimates, may be significant.

We feel that since our primary interest is in the dependence of the HRTF on the location of the sound source, a more meaningful way to compare our data with those of Shaw (1974) and Mehrgardt and Mellert (1977) is to examine the changes in the HRTFs brought about by changes in source position. For this purpose we use azimuthal dependency and elevational dependency functions. The azimuthal dependency functions show, as a function of frequency, the dB increase or decrease in the magnitude of the HRTF caused by a change in azimuth from directly ahead to the azimuth in question (for sources on the horizontal plane). The elevational dependency functions are computed for sources on the vertical median plane and have meaning comparable to the azimuthal dependency functions.
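A minimal sketch of the azimuthal dependency computation follows, assuming the 1/3-oct band levels (in dB, averaged across subjects) are held in a dictionary keyed by azimuth; the data structure is an assumption made only for illustration.

```python
def azimuthal_dependency(band_levels_db, reference_azimuth=0):
    """Change in HRTF magnitude (dB) re: the straight-ahead source.

    band_levels_db : dict mapping azimuth (deg) -> array of 1/3-oct band levels (dB)
                     for horizontal-plane sources, e.g., averaged over the ten subjects
    Returns a dict of the same shape giving level re: the 0-deg-azimuth HRTF.
    """
    ref = band_levels_db[reference_azimuth]
    return {az: levels - ref for az, levels in band_levels_db.items()}

# Elevational dependency functions are computed the same way for sources on the
# vertical median plane, referenced to the 0-deg-elevation HRTF.
```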
Figure 7 shows a subset of the azimuthal dependency functions presented by Shaw (1974). Also in the figure are our data, averaged over ten subjects, transformed into azimuthal dependency functions. The similarity between our data and Shaw's is gratifying, especially given the dramatic differences of procedure across the various studies. Figure 8 shows some of the vertical median plane HRTF data from the Mehrgardt and Mellert (1977) study, transformed into elevational dependency functions. The figure also shows comparable data from our study, averaged across ten subjects. While direct comparison is not possible, since the source positions on the median plane were different in the two studies, the general agreement in form between our data and that from the Mehrgardt and Mellert (1977) study is obvious.

FIG. 7. The solid line in each panel represents an azimuthal dependency function (change in HRTF from 0 azimuth) averaged over ten subjects for a single source azimuth (0-deg elevation). The dotted line represents comparable data from Shaw (1974). Each panel shows the data from a different source azimuth.

FIG. 8. Elevational dependency functions (change in HRTF from 0-deg elevation to the elevation noted at the left of each function) for a source at 0-deg azimuth. The solid lines represent average 1/3-oct band levels from the 10 subjects in the current study; the dashed lines represent data from the study of Mehrgardt and Mellert (1977).

2. Interaural asymmetry in the magnitude of the HRTF

It is typical to assume (e.g., Shaw, 1974) that HRTF measurements are symmetric about the vertical median plane. Under this assumption, the left HRTF and right HRTF would be identical for sources on the median plane, and symmetric elsewhere. For example, the left HRTF for a source on the horizontal plane directly opposite the left ear should equal the right HRTF for a horizontal plane source directly opposite the right ear. However, actual measurements show that the HRTFs are not symmetric (e.g., Searle et al., 1975), and there have been suggestions that the asymmetries are perceptually important, especially for localizing sources on the vertical median plane. Figure 9 shows the interaural asymmetry we obtained in our transfer function measurements. The figure shows the mean and the range of the unsigned dB differences (in 1/3-oct bands) between the left and right HRTFs for sources at 180- and 0-deg azimuth and 0-deg elevation. These interaural asymmetries are comparable to those that have been reported previously (Searle et al., 1975).

FIG. 9. Mean 1/3-oct band interaural asymmetry in the HRTF measurements. The two panels show the absolute value of the dB difference between the left and right HRTFs for a source at 0-deg azimuth and 0-deg elevation (upper panel), and for a source at 180-deg azimuth and 0-deg elevation (lower panel). The solid lines indicate the mean interaural asymmetry and the dashed lines indicate the upper and lower limits of the range of asymmetries.
3. Phase of the HRTF

There are very few data in the literature on the phase of the HRTF. There are several possible reasons for this. First, many acoustical transfer function measurement techniques (e.g., those using 1/3-oct band noise) provide only magnitude information. Second, the relevance of the monaural phase characteristic to sound localization, primarily a binaural process, is questionable. Third, interpretation of the phase function is complicated by the influence of acoustical delays, which are difficult to quantify and which add an uninteresting linear component to the phase functions. Fourth, phase measurement techniques typically resolve phase at each frequency only within a ±2π range, so a complete appreciation of a phase function requires that the raw phase data be "unwrapped," a complex process that is susceptible to many sources of error (Tribolet, 1977). Mehrgardt and Mellert (1977) present data on the phase of the HRTF, which they obtained by addressing both the acoustical delay and the unwrapping issues. However, it is difficult to compare our phase data with theirs since the shape of the function as plotted depends critically on how the acoustical delay is quantified, and Mehrgardt and Mellert present no detail on this point. Nevertheless, we should mention that a decomposition of our HRTF data into all-pass and minimum-phase components supports Mehrgardt and Mellert's conclusion that the HRTF can be modeled by a minimum-phase system up to 10 kHz.
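The article does not describe how the all-pass/minimum-phase decomposition was carried out; the sketch below uses the standard homomorphic (real-cepstrum) recipe as one plausible way to obtain the minimum-phase part, from which the all-pass (excess-phase) remainder follows by division.

```python
import numpy as np

def minimum_phase_spectrum(H, eps=1e-12):
    """Minimum-phase spectrum with the same magnitude as H.

    H : complex spectrum on the full (two-sided) FFT grid of a real impulse response,
        as returned by np.fft.fft.
    """
    N = len(H)
    log_mag = np.log(np.maximum(np.abs(H), eps))
    cep = np.fft.ifft(log_mag).real            # real cepstrum of the log magnitude
    # Fold the cepstrum: keep c[0] (and c[N/2] for even N), double positive
    # quefrencies, zero the negative ones -- the standard homomorphic recipe.
    fold = np.zeros(N)
    fold[0] = cep[0]
    fold[1:N // 2] = 2.0 * cep[1:N // 2]
    if N % 2 == 0:
        fold[N // 2] = cep[N // 2]
    return np.exp(np.fft.fft(fold))

# The residual H / minimum_phase_spectrum(H) is the all-pass (excess-phase) part;
# if its phase is essentially a pure delay below 10 kHz, the HRTF is well modeled
# as a minimum-phase system plus a frequency-independent delay, as argued above.
```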
The phase difference between the transfer functions measured from the two ears is of considerable interest, since it is the basis of the important interaural time-difference cue. Kuhn (1977) has presented a rigid-sphere model for predicting interaural time differences in the azimuthal plane. He also presented empirical measurements of interaural time difference from a manikin's ears. The important feature of Kuhn's model is the prediction that the ratio of the low-frequency interaural time difference to the high-frequency interaural time difference is 3:2. His data suggest that for angles of incidence around 45 deg, the prediction is confirmed, but for angles near 90 deg, there is less variation of interaural time difference with frequency than the model predicts. Figure 10 shows interaural time differences extracted from our HRTF measurements from one subject and three angles of incidence on the azimuthal plane. As can be seen, for a 45-deg incidence, the low-frequency asymptote is about 511 μs and the high-frequency asymptote is about 310 μs, close to the prediction of the Kuhn model. For a 90-deg incidence, the low-frequency asymptote is about 780 μs and the high-frequency asymptote is about 700 μs. The latter is greater than Kuhn's model would predict (2/3 × 780 = 520), but is generally consistent with Kuhn's empirical data.

FIG. 10. Interaural time differences obtained from the HRTF measurements from one subject for sources at 0-deg elevation and three different azimuths (0, 45, and 90 deg). The horizontal lines indicate estimates of the low- and high-frequency asymptotic values of interaural time difference at each of the three azimuths. At 45-deg azimuth the low- and high-frequency asymptotes are 511 and 310 μs, respectively; at 90-deg azimuth, they are 780 and 701 μs, respectively.
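A sketch of how frequency-dependent interaural time differences like those in Fig. 10 can be extracted from a pair of measured HRTFs is given below, assuming complex left- and right-ear HRTFs on a common frequency grid; the unwrapping step is exactly the error-prone operation flagged earlier (Tribolet, 1977), so results near deep spectral notches deserve scrutiny.

```python
import numpy as np

def interaural_time_difference(hrtf_left, hrtf_right, freqs):
    """Frequency-dependent ITD (s) from the interaural phase difference.

    hrtf_left, hrtf_right : complex HRTFs for one source position
    freqs                 : frequency (Hz) of each bin (exclude the 0-Hz bin)
    """
    # Interaural phase difference, unwrapped across frequency.
    ipd = np.unwrap(np.angle(hrtf_left * np.conj(hrtf_right)))
    return ipd / (2.0 * np.pi * freqs)

# Low- and high-frequency asymptotes of this function at 45-deg azimuth were about
# 511 and 310 microseconds for the subject in Fig. 10, close to the 3:2 ratio
# predicted by Kuhn's (1977) rigid-sphere model, as discussed above.
```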
III. CONCLUSIONS

This article has described procedures we developed for synthesizing acoustical stimuli that, when presented over headphones, simulate free-field stimuli. The procedures require measurements of each subject's free-field-to-eardrum transfer functions, or HRTFs, deep in the subject's unoccluded ear canals. The method we have developed is fast and produces precise and repeatable measurements. The measurements obtained from ten subjects were shown to be generally consistent with previous data in the literature, in spite of substantial differences in procedure. An objective verification procedure, in which ear-canal waveforms from free-field sources and from headphone-presented simulations are directly compared, shows that the simulation procedure can duplicate free-field waveforms within a few dB of magnitude and a few degrees of phase at frequencies up to 14 kHz.

ACKNOWLEDGMENTS

This work was supported by grants from the National Institutes of Health (NINCDS) and the Air Force Office of Scientific Research, and by a Joint Research Interchange with NASA. The authors gratefully acknowledge the assistance of Marianne Arruda, Douglas Swiggum, and Linda Pettersson for their contributions to the technical phases of the work, and the helpful comments of Dr. Robert Lutfi on several earlier drafts of this article.

¹In theory, any signal could be used to measure the HRTF as long as its spectrum was nonzero in the frequency range of interest. In pilot work with two of our ten subjects, we evaluated two signals in addition to the signal described in the article: a 20-μs impulse and a flat-spectrum random-phase noise burst. We found that while the three signals produced indistinguishable HRTF estimates in spectral regions where the signal-to-noise ratio was high, the stepped-spectrum, minimum-peak-factor signal produced more stable estimates in regions of low signal-to-noise ratio.

²Dividing one spectrum by another in order to "deconvolve" the denominator signal from the numerator signal is a potentially unstable operation. If at any frequency the magnitude of the spectrum in the denominator is very small compared to the magnitude of the spectrum in the numerator, numerical overflow problems can occur. In our application, this could happen if the headphone transfer function had a very deep spectral notch at some frequency. Fortunately, while 20- to 30-dB spectral notches appeared in many of our headphone transfer function measurements (the denominator in our case), they did not cause computational problems.

³The filter impulse response was used as a test stimulus for reasons of convenience. In fact, because this stimulus often had prominent peaks and valleys in its spectrum, it proved to be a severe test of the simulation. Pilot work with two of the ten subjects showed that stimuli with flatter spectra usually produced less error than what was obtained with the impulse-response stimulus.

⁴The acoustical verification experiments were logistically complicated and time consuming. For this reason, a pilot study was used to select the four positions at which the greatest differences were likely. Given the pattern of data obtained from the first six subjects tested, we elected not to test the remaining four subjects and not to test at additional source positions.
Blauert, J. (1969). "Sound localization in the median plane," Acustica 22, 205-213.

Blauert, J. (1983). Spatial Hearing: The Psychophysics of Human Sound Localization (MIT, Cambridge, MA).

Bloom, P. J. (1977). "Creating source elevation illusions by spectral manipulation," J. Audio Eng. Soc. 25, 560-565.

Butler, R. A. (1975). "The influence of the external and middle ear on auditory discriminations," in Handbook of Sensory Physiology, edited by W. D. Keidel and W. D. Neff (Springer, Berlin).

Butler, R. A., and Belendiuk, K. (1977). "Spectral cues utilized in the localization of sound in the median sagittal plane," J. Acoust. Soc. Am. 61, 1264-1269.

Butler, R. A., and Planert, N. (1976). "The influence of stimulus bandwidth on localization of sound in space," Percept. Psychophys. 19, 103-108.

Fisher, H., and Freedman, S. (1968). "The role of the pinna in auditory localization," J. Aud. Res. 8, 15-26.

Gardner, M. B., and Gardner, R. S. (1973). "Problem of localization in the median plane: Effect of pinnae cavity occlusion," J. Acoust. Soc. Am. 53, 400-408.

Hebrank, J., and Wright, D. (1974). "Spectral cues used in localization of sound sources on the median plane," J. Acoust. Soc. Am. 56, 1829-1834.

Jongkees, L., and Groen, J. (1946). "On directional hearing," J. Laryngol. Otol. 61, 494-504.

Kuhn, G. (1977). "Model for the interaural time differences in the azimuthal plane," J. Acoust. Soc. Am. 62, 157-167.

Mehrgardt, S., and Mellert, V. (1977). "Transformation characteristics of the external human ear," J. Acoust. Soc. Am. 61, 1567-1576.

Oldfield, S., and Parker, S. (1984). "Acuity of sound localisation: a topography of auditory space. II. Pinna cues absent," Perception 13, 601-617.

Plenge, G. (1974). "On the difference between localization and lateralization," J. Acoust. Soc. Am. 56, 944-951.

Schroeder, M. R. (1970). "Synthesis of low-peak-factor signals and binary sequences with low autocorrelation," IEEE Trans. Inform. Theory IT-16, 85-89.

Searle, C. L., Braida, L. D., Cuddy, D. R., and Davis, M. F. (1975). "Binaural pinna disparity: Another auditory localization cue," J. Acoust. Soc. Am. 57, 448-455.

Searle, C. L., Braida, L. D., Davis, M. F., and Colburn, S. (1976). "A model for auditory localization," J. Acoust. Soc. Am. 60, 1164-1175.

Shaw, E. A. G. (1965). "Ear canal pressure generated by a free sound field," J. Acoust. Soc. Am. 39, 465-470.

Shaw, E. A. G. (1974). "Transformation of sound pressure level from the free field to the eardrum in the horizontal plane," J. Acoust. Soc. Am. 56, 1848-1861.

Strutt, J. W. (Lord Rayleigh). (1907). "On our perception of sound direction," Philos. Mag. 13, 214-232.

Tribolet, J. M. (1977). "A new phase unwrapping algorithm," IEEE Trans. Acoust. Speech Signal Process. ASSP-25, 170-177.

Watkins, A. J. (1978). "Psychoacoustical aspects of synthesized vertical locale cues," J. Acoust. Soc. Am. 63, 1152-1165.

Wiener, F. M., and Ross, D. A. (1946). "The pressure distribution in the auditory canal in a progressive sound field," J. Acoust. Soc. Am. 18, 401-408.

Wightman, F. L., and Kistler, D. J. (1989). "Headphone simulation of free-field listening. II: Psychophysical validation," J. Acoust. Soc. Am. 85, 868-878.