Glossary of Terms for Audio Essentials

Sound waves: Sounds are vibrations that travel through the air to our ears. Whatever medium the sound is travelling through vibrates in a wave pattern. A graph can be plotted to show what particular sound vibration patterns look like. Audio editors and other applications display this waveform with time along the horizontal axis of the graph and the sound's amplitude along the vertical axis.

Volume: How loud a sound is, measured in decibels (dB). For every 6 dB increase, the volume of a sound doubles. For electricians, this differs from the use of dB in electrical circuits, where 3 dB represents an increase by a factor of 2, because there it is power rather than amplitude that is being measured.

Frequency: The number of times per second a sound wave completes one full cycle, measured in Hertz (Hz). Sounds audible to the human ear have frequencies ranging between 20 and 20,000 Hz.

Frequency response: The range of frequencies to which an ear, or a device such as a microphone or speaker, will respond, i.e. start vibrating, so that sound can be captured or reproduced.

Flat frequency response: If a microphone has a flat frequency response, it is equally sensitive to all frequencies within the range of its frequency response and so will not change the characteristics of the sounds fed to it. The frequency response of the human ear is decidedly not flat.

Pitch: How low or high a sound is; pitch is our perception of frequency, i.e. the higher the frequency at which a sound wave oscillates, the higher its pitch. A doubling of frequency in Hz is equivalent to raising the pitch of a sound by one octave.

Fundamental: The fundamental frequency of a sound is its predominant frequency. It is the frequency that our ears perceive as the sound's pitch.

Harmonics: Frequencies that combine with the fundamental frequency of a sound to give it its unique character, or timbre.
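The 6 dB and 3 dB figures above follow from the two decibel formulas: 20·log10 of an amplitude (sound-pressure or voltage) ratio, and 10·log10 of a power ratio as used for electrical circuits. A minimal sketch of both conversions (the function names are illustrative):

```python
import math

def amplitude_db(ratio):
    """dB change for an amplitude (sound pressure / voltage) ratio."""
    return 20 * math.log10(ratio)

def power_db(ratio):
    """dB change for a power ratio, as used in electrical circuits."""
    return 10 * math.log10(ratio)

print(round(amplitude_db(2), 2))  # doubling the amplitude: 6.02 dB
print(round(power_db(2), 2))      # doubling the power: 3.01 dB
```

Running this confirms that a doubling works out to roughly 6 dB in amplitude terms but roughly 3 dB in power terms, which is why the two conventions differ.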
When tuning up, orchestras always play an A. The main reason we can tell the difference between all the instruments playing, even those emitting the same fundamental frequency, is that each instrument emits its own series of harmonics. A natural harmonic is any frequency that is an integer multiple of the fundamental frequency. An overtone is any resonant frequency in a series of harmonic frequencies, and overtones may or may not be integer multiples of the fundamental.

Zero-crossing: The point in a sound wave's cycle where it passes through 0 on the amplitude axis. The higher the sound's frequency, the more zero-crossings occur per second. Zero-crossings are particularly useful when editing audio, because cutting at a zero-crossing avoids the audible click that a jump in amplitude would cause.

Phase: The relationship in time between two or more identical waveforms. If two sound waves are superimposed on one another such that their peaks and troughs occur at the same time, the combined sound is twice as loud as either wave alone; waves combining in this way are said to be in phase. If sound waves are superimposed so that as one reaches its peak the other reaches its trough, the waves cancel each other out and the net effect is silence: the vibrations push against one another with equal and opposite force, negating one another. When this happens, the two waves are said to be half a cycle, or 180 degrees, out of phase. This is also referred to as phase cancellation. If sound waves are superimposed so that, as one reaches its peak, the other is halfway between its peak and its trough, the waves are a quarter of a cycle, or 90 degrees, out of phase; the resulting sound is louder than either wave alone (about 1.41 times the amplitude of one wave) but 3 dB quieter than the fully in-phase combination. Sometimes certain frequency components of a complex sound can encounter phase problems, if, for example, the sound was recorded with a pair of microphones and the results are combined.
This can result in changes to the frequency spectrum of a sound that make it seem as if it is coming through a tube, for example. Phase relationships can be very useful, or they can cause significant problems.

Dynamic range: The difference between the loudest and quietest parts of a sound, in dB.

0 dBFS: The loudest a sound can be in a digital system, i.e. when its binary values are 1s all the way. Sound volumes in audio programs are expressed as negative values relative to that number: -6 dB, when displayed on a meter in a digital system, represents a volume 6 dB below 0 dBFS, which means that the sound is half as loud as the loudest sound that system can reproduce.

Clipping: The harsh distortion introduced when a digital signal tries to exceed 0 dBFS. The result is that the top of the waveform cannot form properly.

Sample rate: The number of times per second that a digital system captures sound. Think of each sample as a snapshot taken of the oscillating sound wave.

Nyquist's theorem: This states that, in order to get an accurate picture of a sound wave, the sample rate must be at least twice the frequency of the sound being captured.

Aliasing: The spurious tones that distort digital audio, caused by using too low a sample rate to capture the highest frequencies of a sound.

Bit depth: The number of bits per sample. The greater the number of bits, the greater the bit depth and therefore the greater the resolvable dynamic range.

Dither: The process of adding low-level hiss to digital audio when converting from a higher bit depth to a lower one, to preserve overall quality.

Noise shaping: Making sure that the dither hiss occupies parts of the frequency spectrum to which the human ear is least sensitive.

Quantization distortion: The distortion created when the quietest parts of a sound fall below the lowest point of the dynamic range allowed by a given bit depth. It gives audio a crunchy, gritty sound.
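The link between bit depth and dynamic range, and the rounding that causes quantization distortion, can be sketched numerically. The rule of thumb is roughly 6 dB of dynamic range per bit; the function names and the 8-bit example below are illustrative assumptions, and real converters also apply dither before rounding:

```python
import math

def dynamic_range_db(bit_depth):
    """Approximate dynamic range of linear PCM audio: about 6.02 dB per bit."""
    return 20 * math.log10(2 ** bit_depth)

def quantize(sample, bit_depth):
    """Snap a sample in [-1.0, 1.0] to the nearest level the bit depth allows.

    The gap between the true sample and the snapped value is the quantization
    error which, without dither, is heard as crunchy, gritty distortion.
    """
    levels = 2 ** (bit_depth - 1)
    return round(sample * levels) / levels

print(round(dynamic_range_db(16), 1))  # 16-bit (CD) audio: 96.3 dB
print(round(dynamic_range_db(24), 1))  # 24-bit audio: 144.5 dB
print(quantize(0.30103, 8))            # snapped to a coarse 8-bit grid: 0.3046875
```

Each extra bit doubles the number of available levels, which is why it adds about 6 dB, the same doubling-to-6 dB relationship described under Volume above.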