Eric Ungar’s Acoustics from A to Z

The membrane of the human EAR
Responds to sound so we can hear.
Its motion vibrates tiny bones,
Wiggling small hair cells that sense tones.
Their nerve cells connect to the brain
From which we information gain.

The first book of Caesar opens with Gallia omnes in partes tres divisa est – all of Gaul is divided into three parts – as I remember (surprisingly) from my high school Latin. Anatomists similarly consider the human ear in terms of three parts: the outer, middle and inner ear. I suspect that both of these somewhat artificial divisions into three parts were made for the same reason: to divide a complex entity into smaller parts that can be discussed more easily.

The outer ear consists of the fleshy appendage attached to the head, called the pinna. This Latin word means ‘sail’ and undoubtedly was chosen by someone with large protruding ears who lived near a windy beach. The pinna and the ear canal with which it communicates channel sound waves to the canal’s termination, the eardrum or “tympanic membrane.”

The middle ear works somewhat like Thomas Edison’s phonograph: a membrane, set into motion by sound waves, is connected to a mechanical amplification system that communicates the amplified motions to the next stage. In the ear the mechanical amplification is achieved by a set of tiny bones or ‘ossicles,’ which connect to the so-called oval window. The ossicles make this window move in and out much like a piston, and these motions are transmitted via the inner ear’s essentially incompressible fluid to the basilar membrane and the organ of Corti. This organ is not a musical instrument; rather, it is essentially a membrane that supports a forest of about 20,000 hair cells of different types and lengths, which respond differently to sounds at different frequencies. These hair cells are connected to nerve cells that communicate with the brain, which does most of the difficult data processing.

Hearing loss may result from damage to any of the conductive mechanisms or from damage to the neurological elements. My wife’s hearing loss was reduced by surgical freeing of the ossicles that had become locked together and could not transmit the sound-induced vibrations well. Noise-induced hearing loss most often results from damage to the hair cell structures, which deteriorate, break off, and are not regenerated. ‘Presbycusis’ – the hearing loss we experience as we get older – begins with loss of the hair cells responsible for hearing the higher frequencies. In more ways than one, the hairless hear less.

The FREQUENCY of oscillations
Tells us how many fluctuations
Up and down from mean are reckoned
Per unit time (minute or second)
The unit ‘Hertz’ is now preferred;
Cycles-per-second’s been interred.

Although I have the utmost respect for the German physicist Heinrich Hertz (1857-1894), after whom the unit of frequency formerly called “cycles per second” is named, I wish the world had stayed with the old designation. I have never had to explain what “cycles per second” means or how many cycles per minute correspond to a given number of cycles per second, but the uninitiated often need to be told what ‘Hertz’ (Hz) means.

According to such eminent references as Cyril Harris’ Handbook of Acoustical Measurements and Noise Control, the frequency of a periodic phenomenon is defined as (a) the number of times the phenomenon repeats itself in one second or (b) the reciprocal of the period, where the period is the time it takes for the phenomenon to repeat itself. These definitions, however, are not entirely precise. Visualize a simple sinusoidal trace that goes through one cycle each second. It clearly repeats itself once per second, but it also repeats itself once every two seconds, once every three seconds and so on to infinity. So, its frequency would be not only 1 Hz, but also 1/2 Hz, 1/3 Hz, etc. Thus, at least for simple stationary signals it may be more precise to define the frequency as equal to the reciprocal of the shortest time it takes for any portion of the signal to repeat itself.
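To put a number on that distinction, here is a minimal sketch in Python (assuming NumPy is available; the 1 Hz sinusoid and the sampling rate are merely illustrative choices, not anything from a measurement): it checks that a 1 Hz sinusoid repeats itself after 1, 2 and 3 seconds, so that only the shortest repeat time, one second, corresponds to the frequency we actually mean.

```python
import numpy as np

# A 1 Hz sinusoid sampled for 10 s at 1000 samples per second (illustrative values).
fs = 1000                     # sampling rate, samples/s
t = np.arange(0, 10, 1 / fs)  # time axis, s
x = np.sin(2 * np.pi * 1.0 * t)

# The signal "repeats itself" after 1 s, 2 s, 3 s, ...;
# only the shortest of these times is the period, and 1/period is the frequency.
for T in (1.0, 2.0, 3.0):
    shift = int(T * fs)
    repeats = np.allclose(x[:-shift], x[shift:], atol=1e-6)
    print(f"x(t) == x(t + {T} s)? {repeats}  ->  would imply f = {1/T:.3f} Hz")
```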

And what about a signal that never repeats itself – as is the case for most signals in the real world? The usual spectrum analysis is done by sampling a signal over a selected time interval and assuming that the sample repeats itself forever. So, if we apply the foregoing definition to this repeated sample signal, we find that its frequency corresponds to the arbitrary length of the sample we took – implying that the signal’s frequency is arbitrary. If the signal indeed is random, so that it never repeats itself, then its period would be infinite and its frequency would be zero.
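The arbitrariness can be made concrete with a small sketch (again Python with NumPy assumed; the random signal and the two sample durations are invented purely for illustration): the frequencies an analyzer reports are spaced by the reciprocal of the chosen sample length, so choosing a different length yields a different set of frequencies for the very same signal.

```python
import numpy as np

fs = 1000.0  # sampling rate, samples/s

# Analyze the same kind of random signal with two different (arbitrary) sample lengths.
rng = np.random.default_rng(0)
for duration in (1.0, 2.5):                  # seconds of data taken
    n = int(duration * fs)
    sample = rng.standard_normal(n)          # a signal that never really repeats
    freqs = np.fft.rfftfreq(n, d=1 / fs)     # frequencies the analysis reports
    print(f"{duration:>4} s sample  ->  line spacing {freqs[1]:.3f} Hz "
          f"({len(freqs)} spectral lines up to {freqs[-1]:.0f} Hz)")
```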

Spectrum analyzers fortunately are not bothered by these definitional dilemmas. They typically process data sampled over specified intervals on the basis of the assumption that the samples are repeated indefinitely, fit the sum of a series of (infinitely extended) sinusoids to the data, and report the magnitudes of these sinusoids as a function of their frequencies. (The frequency of the sinusoids is defined as suggested at the end of the first paragraph above.) This so-called Fourier transform process allows one to represent a time-varying signal sample in terms of a series of frequency-dependent values.
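A rough sketch of that process, once more in Python with NumPy (the 50 Hz and 120 Hz components and the 2-second sample are invented for illustration, not taken from any particular instrument): a signal is sampled over a fixed interval, the FFT fits sinusoids to the sample as though it repeated forever, and the result is a set of magnitudes as a function of frequency.

```python
import numpy as np

fs = 1000            # sampling rate, samples/s
T = 2.0              # length of the analyzed sample, s
t = np.arange(0, T, 1 / fs)

# An illustrative signal: two sinusoids plus a little noise.
x = 1.0 * np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
x += 0.05 * np.random.default_rng(1).standard_normal(t.size)

# The FFT treats the sample as if it repeated forever and fits sinusoids to it.
spectrum = np.fft.rfft(x)
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
magnitudes = 2 * np.abs(spectrum) / t.size   # scale to the sinusoid amplitudes

# The two largest spectral lines should land at (about) 50 Hz and 120 Hz.
top = np.argsort(magnitudes)[-2:]
for i in sorted(top):
    print(f"{freqs[i]:6.1f} Hz : amplitude ~ {magnitudes[i]:.2f}")
```

The two largest reported magnitudes fall at the frequencies of the two sinusoids that make up the sample, which is all a spectrum analyzer claims to tell us.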

The story is told that the French mathematician Fourier, who invented the transform, took quite a long time to work out the necessary integrals, while his younger brother took about half as long. Consequently, the older brother came to be known as Slow Fourier and the younger sibling came to be known as Fast Fourier. In recent years the latter achieved fame posthumously by lending his name to the Fast Fourier Transform (FFT) algorithm that is implemented in modern digital spectrum analyzers.