You can add thickness to a studio production by creating a doubling effect that mimics the sound of a stacked instrument or vocal part. The techniques below can give you a similar feel to having the performer record the part multiple times.
Short delays: a delay can give the impression of multiple takes. Start with a delay of about 25ms and no feedback. The human ear has trouble discerning a delay as a separate event when it is shorter than roughly 25ms; a longer delay starts to sound like a distinct echo. Experiment with the delay time until you like the result. If your delay has a modulation control, try a slow rate setting to add a subtle pitch variance to the delayed copy. That slight pitch change helps make the effect sound authentic. This is an old trick for faking a doubled guitar track.
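The short-delay trick above can be sketched in code. This is a minimal illustration, assuming a mono float signal in NumPy; the 25ms delay, LFO rate, and depth are the illustrative starting points from the text, not fixed rules.

```python
import numpy as np

def slapback_double(dry, sr=44100, delay_ms=25.0, mod_rate_hz=0.3, mod_depth_ms=1.0):
    """Thicken a mono track by mixing in one delayed copy (no feedback).

    The delay time is gently wobbled by a slow sine LFO, which adds the
    subtle pitch drift that makes the fake double sound more human.
    Parameter values here are illustrative starting points.
    """
    n = len(dry)
    t = np.arange(n)
    # Delay in samples, modulated by a slow LFO.
    delay = (delay_ms + mod_depth_ms * np.sin(2 * np.pi * mod_rate_hz * t / sr)) * sr / 1000.0
    read_pos = t - delay                       # fractional read positions
    i0 = np.clip(np.floor(read_pos).astype(int), 0, n - 1)
    i1 = np.clip(i0 + 1, 0, n - 1)
    frac = read_pos - np.floor(read_pos)
    # Linear interpolation between neighboring samples; silence before the
    # delayed copy "starts".
    wet = np.where(read_pos < 0, 0.0, (1 - frac) * dry[i0] + frac * dry[i1])
    return 0.5 * (dry + wet)                   # equal blend of dry and "double"
```

In a real session you would do this with a delay plug-in; the code just shows why the modulated delay reads as a second performance rather than an echo.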
Editing: an audio engineering student may find this one fiddly and time-consuming, but on vocals it is usually preferable to a short delay. Make two copies of your vocal track. Go through the first copy and chop the vocal at random points every few words. Then chop the second copy, making sure you don’t cut in the same places as the first. Go back through the chopped tracks and nudge each region randomly by about 20ms, some early and some late; the randomness is what sells the effect. Send these tracks to a slow-moving chorus effect to add modulation. Now play around with the balances to find the sound you are looking for.
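The chop-and-nudge edit above can be mimicked programmatically. This is a rough sketch under simplifying assumptions: a mono NumPy buffer, region lengths of roughly half a second to a second standing in for "every few words", and a ±20ms nudge range as described.

```python
import numpy as np

def fake_take(vocal, sr=44100, max_nudge_ms=20.0, seed=0):
    """Build one fake 'take': chop the vocal at random points and nudge
    each region earlier or later by up to max_nudge_ms.

    Mirrors the manual edit described above. Region spacing and nudge
    range are illustrative, not rules; adjacent nudged regions may
    overwrite a few samples at the seams, much like overlapping edits.
    """
    rng = np.random.default_rng(seed)
    out = np.zeros_like(vocal)
    max_nudge = int(max_nudge_ms * sr / 1000)
    start = 0
    while start < len(vocal):
        length = int(rng.integers(sr // 2, sr))      # region of ~0.5-1.0 s
        end = min(start + length, len(vocal))
        shift = int(rng.integers(-max_nudge, max_nudge + 1))
        dst = int(np.clip(start + shift, 0, len(vocal) - (end - start)))
        out[dst:dst + (end - start)] = vocal[start:end]
        start = end
    return out

# Two "takes" with different chop points and nudges, blended equally:
# doubled = 0.5 * (fake_take(vocal, seed=1) + fake_take(vocal, seed=2))
```

Using two different seeds guarantees the two copies are chopped and nudged in different places, which is exactly the point of the manual version.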
Plug-ins: every year, new plug-ins come out that sound better. Some of them let you get the doubling effect that would otherwise take a lot of pitch shifting and delay work. Sometimes you can get a cool effect by combining the editing and delay tricks with a doubler plug-in.
Hopefully these tips will help you to add thickness to your studio project.
Are you planning to have your music played on the radio? Mastering is what prepares it for the air. Unfortunately, a lot of mastering sessions are delivered as files in the wrong format. You can’t expect a mastering engineer to work miracles: give them garbage and you get garbage in return. If you handle the mix process improperly, your music will likely never be broadcast or sound decent on any system.
A few common mistakes make it impossible for the mastering engineer to do the job right. If you can avoid these pitfalls, your sound will be far superior.
1. Mix levels.
If the bus meters are in the red as soon as the mastering engineer presses play, that is a deal breaker, because there is no room left to add any gain. You need to monitor your output level using the master fader. Some systems require you to create a master fader in your session manually; it is a valuable tool for the mix engineer. Watch the final output level on the master fader to ensure the peaks never rise above -5 dB. As long as the original recording is in high definition, no resolution will be lost by leaving that headroom.
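A quick way to sanity-check headroom before sending a mix out is to measure the peak level in dBFS (decibels relative to full scale, where 0 dBFS is the loudest a digital file can represent). This is a small sketch assuming float samples normalized to ±1.0; the -5 dB ceiling is the figure from the text.

```python
import numpy as np

def peak_dbfs(samples):
    """Peak level of a float audio buffer in dBFS (0 dBFS = full scale)."""
    peak = np.max(np.abs(samples))
    return -np.inf if peak == 0 else 20 * np.log10(peak)

def has_mastering_headroom(samples, ceiling_dbfs=-5.0):
    """True if no peak rises above the ceiling (default -5 dBFS)."""
    return peak_dbfs(samples) <= ceiling_dbfs
```

If this check fails, pull the master fader down before bouncing, rather than letting the bus clip.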
2. Bus compression.
Most people love the sound of a good stereo bus compressor, but there can be too much of a good thing. An expander can help, but that leaves you working backwards, attempting to undo steps that were taken in the mix, and the original sound will never be fully recovered. Never overdo the limiting or compression. A heavily compressed or limited mix is fine as a reference for your own listening, but the mastering studio should not receive that version.
3. Method of delivery.
You should never send your files on an audio CD. An audio CD supports only one format, whereas a data disc can hold files in a number of formats. You can also deliver your files on a USB flash drive or hard drive, or upload them to an FTP server or a file-sharing service such as Dropbox. Never zip the files before you upload them; zipping compresses the data and can cause problems for the mastering engineer.
By avoiding these mistakes, you will keep your mastering engineer happy.
When you go to school to learn about sound design, you learn a lot about MIDI programming, electronic music, and synthesizers. Understanding each of these technologies is important to sound design and sound synthesis training. In the 1960s, subtractive synthesis became much more popular, and by the ’70s this form of synthesis was available to the masses. Only a handful of components are necessary for a subtractive synth to produce a wide range of sounds, which made it ideal for early synth designers who needed accessible and affordable instruments.
In the early 1900s, additive synthesis first appeared with the creation of tonewheel-based instruments. The underlying concept, though, dates to 1822 and Joseph Fourier’s research on harmonic analysis.
Both forms of synthesis are achieved using the same basic elements: oscillators, filters, and amplifiers. The primary difference between the two is their approach, not their parts. Here is how the two approaches differ.
A subtractive synthesizer provides oscillator wave shapes such as square, sawtooth, triangle, and sine. Since a sine wave generates only one frequency, it is known as a pure tone. The other wave shapes add even and odd harmonics above the fundamental frequency they are tuned to. The concept behind subtractive synthesis is to generate more harmonics than you want, then subtract certain frequencies with a filter. Therefore, every subtractive synthesizer has at least one filter.
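The subtractive idea can be shown in a few lines: start with a harmonically rich sawtooth, then remove the upper harmonics with a filter. This is a minimal sketch using a simple one-pole low-pass filter; the note and cutoff frequencies are arbitrary example values.

```python
import numpy as np

def sawtooth(freq, sr=44100, dur=0.5):
    """Harmonically rich sawtooth oscillator in the range [-1, 1)."""
    t = np.arange(int(sr * dur)) / sr
    return 2 * (t * freq - np.floor(t * freq + 0.5))

def one_pole_lowpass(x, cutoff_hz, sr=44100):
    """Subtract the upper harmonics with a basic one-pole low-pass filter."""
    a = np.exp(-2 * np.pi * cutoff_hz / sr)
    y = np.empty_like(x)
    acc = 0.0
    for i, s in enumerate(x):
        acc = (1 - a) * s + a * acc   # smooth the signal, dulling highs
        y[i] = acc
    return y

# A subtractive "patch": start bright, then filter harmonics away.
note = one_pole_lowpass(sawtooth(110.0), cutoff_hz=800.0)
```

Hardware synths use more sophisticated resonant filters, but the principle is the same: generate too much, then take away.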
Additive synthesis uses oscillators too, but they have only one wave shape: the sine wave. With this approach, you build the sound one harmonic at a time, with each sine oscillator set to a specific frequency corresponding to a single harmonic. It is a lot like building with Legos, one block at a time: the more blocks you use, the more realistic your sculpture can be. Additive synths therefore typically have many more oscillators, and they do not need a filter the way subtractive synths do.
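The one-harmonic-at-a-time approach can be sketched the same way. This is an illustrative example, not a real additive synth engine; the rough square-wave recipe (odd harmonics at amplitudes 1/n) is a standard Fourier-series example.

```python
import numpy as np

def additive_tone(f0, harmonic_amps, sr=44100, dur=0.5):
    """Build a tone one sine 'block' at a time: one oscillator per harmonic.

    harmonic_amps[k] is the amplitude of harmonic k+1, a sine at
    frequency (k+1) * f0.
    """
    t = np.arange(int(sr * dur)) / sr
    tone = np.zeros_like(t)
    for k, amp in enumerate(harmonic_amps):
        tone += amp * np.sin(2 * np.pi * (k + 1) * f0 * t)
    return tone

# Rough square-wave recipe: odd harmonics only, amplitudes 1/n.
square_ish = additive_tone(220.0, [1, 0, 1/3, 0, 1/5, 0, 1/7])
```

Each extra harmonic is another Lego block: seven harmonics give a crude square wave, and adding more sharpens its edges with no filter involved.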