In this essay I will introduce the concept of sound and the different forms it takes. With accompanying examples, I will explain what indoor, outdoor and simulated acoustics are and how they work.
Acoustics is the science of sound, and of how the ear receives it depending on the environment. The quality of a sound changes with the environment, because its features affect the sound waves. For example, your voice sounds sharper in a bathroom than in a living room, because different surfaces, such as tiling and carpet, reflect and absorb sound differently.
Sound waves are variations of pressure in the air. Different objects vibrate in different ways, and these vibrations travel through the air to the eardrum, which the brain interprets as sound. The rate of these vibrations is known as frequency. For example, a violin vibrates at different frequencies to a guitar.
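As a rough sketch of the idea (the frequencies and sample rate here are illustrative, not taken from the essay), a pure tone at a given frequency can be generated in Python:

```python
import math

def sine_wave(freq_hz, duration_s=0.01, sample_rate=8000):
    """Generate samples of a pure tone: a pressure variation that
    repeats freq_hz times per second."""
    n = int(duration_s * sample_rate)
    return [math.sin(2 * math.pi * freq_hz * t / sample_rate) for t in range(n)]

# Illustrative values: a violin's open A string vibrates at 440 Hz,
# a guitar's low E string at roughly 82 Hz.
violin_a = sine_wave(440)
guitar_e = sine_wave(82)
```

The higher-frequency wave completes more cycles in the same stretch of time, which the ear hears as a higher pitch.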
Studio acoustics – these acoustics are created in a building designed specifically to produce the highest quality sound. This can range from small recording studios to orchestral halls. A small recording studio can be built at home with the right sound theory and treatment, and suits audio aimed at small numbers of people, while an orchestral hall suits audio on a larger scale, giving the sound waves the room they need. They are two different ends of the studio spectrum. The materials, and the positioning of those materials, are chosen specifically to accommodate the different vibrations.
Live rooms and dead rooms/surface types and properties – live rooms are designed to let sound waves reflect off surfaces with sharper, clearer tones, and the use of different materials within them can have different effects. For example, to create a live room you would include materials such as glass, stone and metal, as they reflect sound waves clearly. Dead rooms are designed with materials that absorb extra, unwanted sound, so there is less reflection of the sound waves. A room can be deadened with panels and foam that absorb energy, bass traps that absorb unwanted bass, and drum booths that allow quieter instruments to be heard. The shape of the room also affects whether it is dead or live – in an oddly shaped room the walls are not parallel, resulting in a different sound to a room with a basic rectangular structure.
In situ recording – this is where audio is recorded at the original location in real time. The source of the audio does not change location and can be anywhere, regardless of audio quality. For example, ambush news reporters may catch a celebrity on a busy street where the audio quality is far from ideal, but that can often be the only chance to get what is needed. The audio can be edited in software afterwards if it needs improving.
Reverberation – this is the persistence of a sound after the initial sound has been produced, as its reflections continue to arrive at the ear. Reverberation depends on the frequency of the vibrations and on the environment. In a smaller room there is less reverberation, as the sound quickly hits the walls and is either absorbed or reflected depending on the materials. In a larger room the reverberation is stronger and lasts longer, because the reflections travel further before dying away.
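One common way to put a number on this is Sabine's formula, RT60 = 0.161 × V / A, which estimates how long a sound takes to decay by 60 dB from a room's volume V and total absorption A. The rooms and absorption coefficients below are made-up illustrations, not figures from the essay:

```python
def rt60_sabine(volume_m3, surfaces):
    """Sabine's reverberation formula: RT60 = 0.161 * V / A,
    where A is the sum of each surface's area times its absorption coefficient."""
    a_total = sum(area * coeff for area, coeff in surfaces)
    return 0.161 * volume_m3 / a_total

# Hypothetical small treated room: 40 m^3, 60 m^2 of absorptive surfaces.
small_room = rt60_sabine(40, [(60, 0.3)])
# Hypothetical large hard-surfaced hall: 10000 m^3, 3000 m^2 of stone and glass.
large_hall = rt60_sabine(10000, [(3000, 0.05)])
```

The large, reflective hall comes out with a far longer reverberation time than the small absorbent room, matching the description above.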
Soundproofing/screening – this is when different materials are used to block out unwanted noise. These noises can enter through walls, glass, doors and from outside, so to block them out you may invest in materials that stop them. Some examples of these materials are acoustic panels, noise isolating foam, sound screens and vinyl. They absorb the sound waves so that little or no extra sound can be heard.
Actuality/sound bites – actuality is a term used in news and broadcasting. Actualities are audio clips spanning around 10–20 seconds that are often unedited, original material, such as interview recordings. They are often produced with equipment such as a shotgun microphone and boom. When these clips are used outside of radio, they are called sound bites.
Background atmosphere – this is also known as ambience. It refers to sound that is already present in an environment without alteration. It can be natural, industrial or human, and comes in many forms, such as birdsong, trees in the wind, machine noise, distant speech, etc.
Unwanted noise/ambience – this refers to a static hum heard during the recording of audio. It is noticeable in the quieter moments of filming and can be described as a hissing sound. It can be caused by equipment, ambience or an instrument. Unwanted ambience refers to other sources of noise, such as bars or motorways. Unwanted noise can be removed with software such as Audacity.
Wind noise – wind noise occurs when wind brushes past a microphone, causing the membrane of the microphone to fluctuate and vibrate. It is unpleasant, especially to hearing aid users, and it results in a bass-like, whooshing sound which interrupts the clarity of the rest of the audio. There are many ways to reduce it, such as software filtering, dead cats and windshields.
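Because wind rumble sits at the very low end of the spectrum, the software route usually means a high-pass filter. As a minimal sketch (a first-order filter with an illustrative coefficient, not a production tool), steady low-frequency pressure decays towards silence while faster changes pass through:

```python
def high_pass(samples, alpha=0.95):
    """First-order high-pass filter: attenuates slow, low-frequency changes
    (like wind rumble) while letting rapid changes through."""
    out = [samples[0]]
    for i in range(1, len(samples)):
        out.append(alpha * (out[-1] + samples[i] - samples[i - 1]))
    return out

rumble = [1.0] * 50            # steady low-frequency pressure, like a gust
filtered = high_pass(rumble)   # the filter output decays towards silence
```

Real tools such as Audacity offer far more sophisticated filters, but the principle of removing the low band is the same.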
Processor – a processor is a system that represents and manipulates audio signals electronically. There are two ways in which a processor can represent signals – analogue or digital – and each uses a different method to process the sound. An analogue processor works on the continuous electrical signal itself, using circuitry that must be adjusted manually if a change is needed. A digital processor first converts the signal into binary numbers and then alters those numbers with programmed algorithms, which can be changed automatically when necessary.
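The digital side of this can be sketched in two steps: sampling the continuous signal at regular intervals, then quantising each sample to a whole number a computer can store. The 440 Hz tone, sample rate and bit depth below are illustrative assumptions:

```python
import math

def digitise(signal_fn, duration_s, sample_rate, bits):
    """Sample a continuous signal at regular intervals and quantise each
    sample to an integer, as a digital processor does before computing."""
    levels = 2 ** (bits - 1) - 1            # e.g. 32767 for 16-bit audio
    n = int(duration_s * sample_rate)
    return [round(signal_fn(t / sample_rate) * levels) for t in range(n)]

# One millisecond of a 440 Hz tone at CD-style settings (44100 Hz, 16-bit).
samples = digitise(lambda t: math.sin(2 * math.pi * 440 * t), 0.001, 44100, 16)
```

Once the audio exists as these integers, any alteration is just arithmetic, which is why digital processing can be reprogrammed so easily.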
Effects units – an effects unit is a processing device that can be used to alter and process audio to improve its sound quality. They are mainly used by musicians during live or studio performances, and different forms of effects units will alter different instruments for certain effects. They are also known as pedals. For example, a dynamics pedal is used to make instruments such as an electric guitar sound louder, and a filter pedal can be used to alter the frequencies that are output. An effects unit can be either digital or analogue.
Compression and limiting – compression is a form of audio signal processing. The function of compressing audio is to alter the dynamic volume produced. It can be used either to increase the volume if it is too quiet or to reduce it if it is too loud. A threshold is set which determines whether a volume is too loud or too quiet, and the volume is automatically compressed once it passes this threshold. There are two forms of compression – downward compression, where volume is decreased, and upward compression, where volume is increased. A limiter is a downward compressor that can be used as a safety device to cap the volume if it becomes loud enough to cause damage. These systems may be used, for example, in studios, broadcasts and instrument amplifiers.
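As a minimal sketch of the downward case (the threshold, ratio and sample values are illustrative), a compressor lets only a fraction of the level above the threshold through, and a limiter simply refuses to pass anything above its ceiling:

```python
def compress(samples, threshold, ratio):
    """Downward compression: any level above the threshold is reduced
    so that only 1/ratio of the excess gets through."""
    out = []
    for s in samples:
        level = abs(s)
        if level > threshold:
            level = threshold + (level - threshold) / ratio
        out.append(level if s >= 0 else -level)
    return out

def limit(samples, ceiling):
    """A limiter acts like a compressor with a very high ratio: a hard cap."""
    return [max(-ceiling, min(ceiling, s)) for s in samples]

peaks = compress([0.2, 0.9, -1.2, 0.5], threshold=0.5, ratio=4)
capped = limit([2.0, -3.0, 0.1], ceiling=1.0)
```

Quiet samples pass unchanged in both cases; only the loud peaks are pulled down, which is exactly the "safety device" role described above.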
Computer based software – computer based software is digital material specifically designed to be operated via a computer or other similar digital devices. It is designed with a certain function to benefit the user of the device, such as security systems, creative software such as Adobe Photoshop, games, etc. It is created through programming and can be installed onto devices through means such as discs and downloads.
Surround sound – surround sound is a multi channel technique used to improve the quality of an audio experience. It uses more than one audio source, such as multiple speakers placed strategically around an area, to give a full, complete audio experience. The most common place surround sound is used is the theatre/cinema, where speakers are placed to create a more intense effect by making the sound appear closer. Surround sound is highly effective with horror films, because the terror seems to surround the audience rather than only coming from one screen further away.
Mono and stereo – mono is sound reproduction through one channel. The sound uses a single channel and is conveyed as coming from one single direction, with no variation in where the sound is coming from. When recording, the microphone is placed strategically so that all sound waves reach the recording equipment at the same time, so that when played back the sound is even to both ears. Stereo is a multi channel format that uses two channels to make audio sound as though it is coming from more than one location – left, middle or right. Recording equipment is placed at certain distances from the sound sources so that the audio reaches each ear separately at a calculated time. An example of a clear mono and stereo difference is Wouldn't It Be Nice by The Beach Boys: they produced both versions of the song, and when you listen to them you can hear the difference.
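The relationship between the two formats can be sketched with a simple downmix (the sample values are illustrative): folding stereo down to mono averages the channels, which is why any left/right placement disappears in a mono mix:

```python
def stereo_to_mono(left, right):
    """Fold a two-channel stereo signal down to a single mono channel
    by averaging the left and right samples."""
    return [(l + r) / 2 for l, r in zip(left, right)]

# A sound panned hard left: present in the left channel, silent in the right.
left = [0.8, 0.6, 0.4]
right = [0.0, 0.0, 0.0]
mono = stereo_to_mono(left, right)  # the sound survives, but its position is lost
```

In the mono result the sound is still audible, just quieter and with no sense of direction, which is the difference you can hear between the two versions of the Beach Boys track.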
Phase – phase is a word used to describe the positioning of one sound wave relative to another coming from a different channel. When the crests and troughs of two sound waves are aligned and in sync with each other, the audio is in phase and the waves reinforce each other, producing the fullest, highest quality sound. When the crest of one wave meets the trough of another, they cancel each other out, so the sound heard is thin and quiet – this is known as being out of phase.
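This cancellation is easy to demonstrate numerically. In the sketch below (tone frequency and sample rate are illustrative), summing a wave with itself doubles the amplitude, while summing it with an inverted copy – 180 degrees out of phase – produces silence:

```python
import math

def mix(wave_a, wave_b):
    """Sum two signals sample by sample, as happens when two channels combine."""
    return [a + b for a, b in zip(wave_a, wave_b)]

n, freq, sr = 100, 440, 8000
tone = [math.sin(2 * math.pi * freq * t / sr) for t in range(n)]
inverted = [-s for s in tone]     # same wave shifted 180 degrees out of phase

in_phase = mix(tone, tone)        # crests align with crests: twice the amplitude
out_of_phase = mix(tone, inverted)  # crests meet troughs: complete cancellation
```

Real out-of-phase audio rarely cancels perfectly, but partial cancellation is exactly why an out-of-phase mix sounds thin and hollow.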
Pitch – pitch is a term used to describe the frequency of audio. When sound waves are of a higher frequency, we call the sound produced high pitched, as it reaches our ears as a higher, sharper tone. When the sound waves are of a lower frequency, we refer to it as lower pitched, as the sound is duller and less piercing to the ear.
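Musical pitch makes the frequency relationship concrete: in standard equal temperament tuning, each semitone multiplies the frequency by the twelfth root of two, and each octave doubles it. A small sketch, taking concert A at 440 Hz as the reference:

```python
def note_frequency(semitones_from_a4):
    """Equal-temperament pitch: frequency of a note a given number of
    semitones above (positive) or below (negative) concert A (440 Hz)."""
    return 440.0 * 2 ** (semitones_from_a4 / 12)

a4 = note_frequency(0)     # 440 Hz, concert A
a5 = note_frequency(12)    # one octave up: double the frequency, higher pitch
a3 = note_frequency(-12)   # one octave down: half the frequency, lower pitch
```

Doubling the vibration rate is heard as the "same" note an octave higher, which is why frequency and pitch are so closely tied together.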
Time delay – time delay is an audio effect that plays back the recorded audio after a chosen time. It can be used creatively for different effects, such as playing audio back repeatedly or creating an echo. Examples of systems that create this are tape loops, pedals, audio software plugins and analogue effects units.
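A digital delay can be sketched as a feedback delay line (the delay length and feedback amount below are illustrative): each output sample adds a quieter copy of the signal from a fixed number of samples earlier, so a single impulse produces a decaying train of echoes:

```python
def echo(samples, delay_samples, feedback=0.5):
    """Feedback delay line: each sample adds a decayed copy of the output
    from `delay_samples` earlier, producing repeating, fading echoes."""
    out = list(samples)
    for i in range(delay_samples, len(out)):
        out[i] += feedback * out[i - delay_samples]
    return out

dry = [1.0] + [0.0] * 9     # a single click
wet = echo(dry, delay_samples=3)  # the click repeats, each echo half as loud
```

Tape loops and analogue pedals achieve the same repetition physically, by feeding the output of a delayed playback head back into the input.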
Indirect recording – indirect recording is when audio is recorded specifically to pick up indirect sound. In an area with specific acoustics, such as a live studio, sound waves travel around and bounce off surfaces before reaching the ear or recording equipment. This is known as indirect because the sound takes longer to arrive than if the sound waves travelled straight to the ear/recording equipment.
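The extra travel time is simple to estimate from the speed of sound in air (roughly 343 m/s; the path lengths below are made-up illustrations): a reflection always follows a longer path than the direct line to the microphone, so it always arrives later:

```python
SPEED_OF_SOUND = 343.0  # metres per second in air, at room temperature

def arrival_time_ms(path_length_m):
    """Milliseconds for sound to travel a given path through air."""
    return path_length_m / SPEED_OF_SOUND * 1000

direct = arrival_time_ms(3.0)          # straight line from source to microphone
indirect = arrival_time_ms(3.0 + 4.0)  # bounced off a wall: a longer path
```

That few-millisecond gap between the direct sound and its reflections is what gives an indirect recording its sense of the room.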