## .: SOUND SYNTHESIS TUTORIAL :.

### :: Theory on sound synthesis - Phase

It is time to concentrate on the concept of phase and the role it plays in wave shaping. Phase is determined by time, and it always describes the relationship between two or more waves. In the simplest example, as shown in the graphic below, combining two identical waves with no delay between them produces a wave with double the amplitude; in other words, the same sound, but louder. But what would happen if one of these identical waves were delayed half a cycle with respect to the other? They would cancel each other out, leaving only silence. As seen in the first chapter of this tutorial, a single sine wave can be described by specifying just its frequency and amplitude, but when two or more waves are combined, their relative offset (delay) must also be considered. This offset is what is usually called phase, and it can be measured in time or in degrees. As can be seen, phase has serious implications for sound wave theory, and therefore for sound synthesis.
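
The two cases above can be checked numerically. This is a minimal sketch using NumPy; the sample rate and the 100 Hz frequency are arbitrary choices for illustration:

```python
import numpy as np

# One second of a 100 Hz sine wave at a 44.1 kHz sample rate
# (illustrative values, not taken from the tutorial text).
sr = 44100
f = 100.0
t = np.arange(sr) / sr

a = np.sin(2 * np.pi * f * t)

# In phase: summing two identical waves doubles the amplitude.
in_phase = a + a

# Half a cycle of delay equals a 180-degree (pi radian) phase shift:
# the waves cancel, leaving silence.
half_cycle = np.sin(2 * np.pi * f * t + np.pi)
out_of_phase = a + half_cycle

print(np.max(np.abs(in_phase)))      # ~2.0
print(np.max(np.abs(out_of_phase)))  # ~0.0
```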

The concept of harmonics carries much weight in the following explanation. Harmonics are essential in any field related to sound generation and manipulation; in fact, harmonics are what sound is made of. It is key to understand harmonics and their role, so if you need to review the concept, you can find information about it in the first chapter of this tutorial.

Combining complex out-of-phase signals does not necessarily lead to complete cancellation. Take a saw wave as an example; a saw wave contains every harmonic. If the first harmonic (the fundamental frequency) lies at 100 Hz, the second harmonic lies at 200 Hz, the third at 300 Hz, and so on. Combining two of these saw waves with their fundamental frequencies offset (delayed) by half a cycle would cancel the fundamentals. The second harmonics, however, lying at 200 Hz, would add: half a cycle of the fundamental is a full cycle at 200 Hz, so they stay in phase. The third harmonics, at 300 Hz, would cancel, the fourth harmonics, at 400 Hz, would add, and so on. So the odd harmonics cancel while the even harmonics reinforce each other, and the practical result is that the saw wave doubles its frequency (what was previously the second harmonic is now the first) while keeping the same amplitude (half of the harmonics have been eliminated, but the remaining ones have doubled in amplitude).
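
A quick numerical check of this, sketched with NumPy: a band-limited saw wave is built from its first harmonics (amplitude 1/n for harmonic n), a half-cycle-delayed copy is added, and the spectrum of the sum is inspected. The sample rate, fundamental, and harmonic count are arbitrary assumptions:

```python
import numpy as np

sr, f0, N = 44100, 100.0, 20  # sample rate, fundamental, harmonic count
t = np.arange(sr) / sr        # exactly one second of audio

def saw(t):
    """Band-limited saw: harmonics n = 1..N with amplitude 1/n."""
    return sum(np.sin(2 * np.pi * n * f0 * t) / n for n in range(1, N + 1))

# Add a copy delayed by half a cycle of the fundamental.
combined = saw(t) + saw(t - 1 / (2 * f0))

# Normalized harmonic magnitudes: since the signal is one second long,
# bin k of the FFT corresponds to k Hz.
spec = np.abs(np.fft.rfft(combined)) / (sr / 2)
for n in range(1, 7):
    # Odd harmonics come out ~0; even harmonics double (1.0, 0.5, 0.333...).
    print(n, round(spec[int(n * f0)], 3))
```

The surviving magnitudes (1, 1/2, 1/3... at 200, 400, 600 Hz) are exactly the harmonic series of a saw wave at 200 Hz, confirming the frequency doubling.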

Fourier analysis states that any complex signal can be described as a (possibly infinite) sum of sine waves representing all the frequencies present in the signal. It follows that, for any given time offset between two identical signals, each frequency is phase-shifted by a different amount. But what does this mean, above all for sound synthesis?
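
To make the idea concrete: a fixed time delay of tau seconds shifts a frequency f by 2·pi·f·tau radians, so higher frequencies are shifted further. A small sketch (the 1 ms delay and the frequencies are arbitrary example values):

```python
import math

tau = 0.001  # a fixed 1 ms delay (illustrative value)

# The same delay produces a different phase shift at each frequency:
# 36 degrees at 100 Hz, 72 at 200 Hz, 108 at 300 Hz.
for f in (100.0, 200.0, 300.0):
    shift_deg = math.degrees(2 * math.pi * f * tau)
    print(f, shift_deg)
```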

The final conclusion is that filtering leads to changes in phase: following Fourier analysis, the very fact that filters alter frequencies means that they also alter phases. It is interesting to note that phase modulation is the basis of a whole type of sound synthesis. The Yamaha DX7 synthesizer, which is usually referred to as an FM (frequency modulation) synthesizer, is actually a PM (phase modulation) synthesizer. The two modulation types sound very similar, but phase modulation is usually easier to implement in the field of sound synthesis.
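
As a rough sketch of the principle: in phase modulation, the modulator's output is added directly to the carrier's phase. The frequencies and the modulation index below are illustrative assumptions, not actual DX7 parameters:

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr

carrier_f = 440.0  # carrier frequency in Hz (assumed value)
mod_f = 880.0      # modulator frequency in Hz (assumed value)
index = 2.0        # modulation index: depth of the phase deviation

# The modulator signal is added inside the carrier's phase argument,
# which is what distinguishes PM from modulating the frequency itself.
modulator = np.sin(2 * np.pi * mod_f * t)
pm = np.sin(2 * np.pi * carrier_f * t + index * modulator)
```

Because the output stays a sine of a modulated phase, its amplitude never exceeds 1, while its spectrum gains the sidebands that give FM/PM its characteristic timbres.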

### :: The keyboard as modulator

Apart from its standard control of the notes (pitch), the keyboard can be assigned to different parameters of a synthesizer (if the synthesizer allows for it) to act as a modulator, opening up a range of additional effects. This technique is known as keyboard tracking or keyboard scaling. For example, the keyboard can be set to modulate the filter cut-off frequency; then, as the octaves are played upwards, the filter opens more and more and the sound gets brighter, while playing down the octaves has the opposite effect. This particular case is known as filter tracking. Another common form of keyboard tracking consists in setting the keyboard to modulate the amplitude of the sound, so that playing up the octaves makes the sound louder, while playing down the octaves has the opposite effect. Synthesizers with a more extensive modulation matrix will yield a larger variety of results.
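
Filter tracking can be sketched as a simple mapping from key position to cut-off. The base cut-off, reference note, and tracking amount below are hypothetical values, not taken from any particular synthesizer:

```python
def tracked_cutoff(note, base_cutoff=1000.0, ref_note=60, amount=1.0):
    """Filter cut-off in Hz from a MIDI note number.

    At amount=1.0 the cut-off doubles for every octave played above
    ref_note (middle C) and halves for every octave below it.
    """
    return base_cutoff * 2 ** (amount * (note - ref_note) / 12)

print(tracked_cutoff(60))  # 1000.0 Hz at the reference key
print(tracked_cutoff(72))  # 2000.0 Hz one octave up: filter opens, brighter
print(tracked_cutoff(48))  # 500.0 Hz one octave down: darker
```

Setting `amount` to 0 disables tracking, and intermediate values give a cut-off that rises more slowly than the pitch.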

Another aspect of keyboards is velocity, which is simply how hard a key is struck. The oldest synthesizers often lacked any way of measuring velocity, but in modern synthesizers velocity control has become a serious manipulation tool. Velocity can be assigned to a number of parameters, depending on the possibilities offered by the synthesizer (its modulation matrix). Usually velocity is assigned to amplitude, so that the keyboard imitates the natural behaviour of a non-electronic keyboard: the higher the velocity, the louder the sound, and vice versa. For example, piano samples are often patched so that velocity controls both amplitude and filter cut-off; the harder a note is struck, the louder and also brighter it gets, giving a more natural sound. Velocity is used for expression, but also as an additional modulation tool in a synthesizer.
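
The piano-style routing described above can be sketched as two mappings from a MIDI velocity (0 to 127). The linear curves and the cut-off range are assumptions for illustration; real patches often use shaped curves:

```python
def velocity_to_amp(vel):
    """Linear velocity-to-amplitude mapping, from 0.0 to 1.0."""
    return vel / 127

def velocity_to_cutoff(vel, min_hz=500.0, max_hz=8000.0):
    """Harder strikes open the filter further, giving a brighter tone."""
    return min_hz + (max_hz - min_hz) * (vel / 127)

print(velocity_to_amp(127))     # 1.0: full volume
print(velocity_to_cutoff(127))  # 8000.0 Hz: brightest
print(velocity_to_cutoff(0))    # 500.0 Hz: darkest
```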

That covers some basic knowledge and ideas on how to make use of a synthesizer without always relying on the patches that come with it. Programming a sound patch can be as interesting and creative as composing a piece of music. If you are new to synthesizer programming, I would recommend starting your learning with a basic, simple synthesizer such as the classic analogue Wahnsyn Type I.

From now on, the road is open...

~ Sakhal ~