Digital Music Composition/Introduction

What Is Digital Music?

In the context of this book, digital music refers to music made using digital hardware or software, typically with a computer workstation.

If you have heard any music produced during your lifetime, you have almost certainly heard music that was produced digitally. Music in the digital domain is often associated with electronic music; however, this is not always the case. Electronic music is founded on work done in the analog domain, from analog synthesizers to tape-loop manipulation. These techniques are still popular today, even though computer-based sequencing and sound generation and modification are now the industry standard. Computers are, however, regularly used to produce digital equivalents of analog instruments and effects, making the distinction increasingly unimportant.

Audio Versus Control

It is important to understand the distinction between two kinds of computer music data: audio samples and control data. Audio samples are direct digital representations of sound that you can actually hear, while control data consists of a sequence of commands, typically at the note-by-note level, that can be obeyed by a suitably designed musical instrument or other sound-production device. Working directly with audio gives you, in principle, the greatest freedom to perform complicated transformations on the samples and produce sounds that no one has heard before.

But certain kinds of processing are easier with control data. For example, feeding the same sequence of note commands to a different instrument, or to the same instrument with different settings, can produce an entirely different-sounding rendition of the same tune, with no heavy processing needed. It is easy to speed up or slow down the tempo of a piece simply by adjusting the timing of the notes; this is independent of changing its key, which is just a matter of transposing all the notes up or down by a constant interval. With audio data, it is harder to apply these two effects separately: speeding up or slowing down the tempo also tends to raise or lower the pitch, unless you apply a lot of complicated processing to the sound.
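To make this concrete, here is a minimal sketch in Python. The (start time, duration, pitch, velocity) tuple layout is invented for illustration and is not a real sequencer format, but the operations themselves are exactly the ones just described:

 # Each note: (start time in seconds, duration, MIDI pitch, velocity).
 notes = [
     (0.0, 0.5, 60, 100),  # middle C
     (0.5, 0.5, 64, 100),  # E above it
     (1.0, 1.0, 67, 100),  # G
 ]
 
 def transpose(notes, semitones):
     """Change key: shift every pitch by a constant interval."""
     return [(t, d, p + semitones, v) for (t, d, p, v) in notes]
 
 def change_tempo(notes, factor):
     """Change tempo: scale all timings; pitches are untouched."""
     return [(t / factor, d / factor, p, v) for (t, d, p, v) in notes]
 
 up_a_fifth = transpose(notes, 7)        # new key, same tempo
 twice_as_fast = change_tempo(notes, 2)  # new tempo, same key

Note how the two transformations touch completely different fields of each note, which is why they can be applied independently.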

By far the most common kind of control data system you will come across is called MIDI, which is short for Musical Instrument Digital Interface. Its usage is so firmly established that even music software applications running on a computer with no actual MIDI interfaces will still communicate with each other as though they were connected via MIDI. Standardization is a Good Thing.

Audio Files Versus MIDI Files

Corresponding to the two different kinds of music-related computer data, there are two families of file formats for holding the data. Files for holding audio samples tend to be separate from those for holding MIDI data; the only kinds of files that can hold both tend to be specific to particular music applications, and not common formats.

There are a variety of audio formats, such as WAV, FLAC, MP3 and Ogg Vorbis, not to mention the whole range of movie formats that include both video and audio tracks. For our purposes, the important question is whether the format is compressed or uncompressed, and in particular, if it is compressed, whether the compression is lossy. Lossy compression throws away information that the listener cannot hear, based on well-established psychoacoustic models of how human hearing works. While this may be fine for a final delivery format, it is not good for an initial recording format, because it severely limits the kinds of processing you can perform. For example, you cannot use filtering to emphasize less-audible sounds if the encoding has already discarded them! Thus, for raw audio capture, we want uncompressed formats, or failing that, losslessly compressed ones.
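As a quick illustration, Python's standard wave module can read the header of an uncompressed WAV capture; the filename here is just a placeholder:

 import wave
 
 # Inspect an uncompressed recording before doing any processing.
 with wave.open("recording.wav", "rb") as w:
     print("channels:    ", w.getnchannels())
     print("sample width:", w.getsampwidth(), "bytes per sample")
     print("sample rate: ", w.getframerate(), "Hz")
     print("length:      ", w.getnframes(), "frames")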

For MIDI, on the other hand, there is just one Standard MIDI File format. Thank goodness.

Brief History

Synthesized instruments began to appear relatively soon after the advent of recorded music in the early 20th century. The exact technology varied, but the common principle is that they used an oscillator to generate a tone at varying frequencies. Amplitude, when mapped out as a function of time, produces a waveform that tells a speaker what position its cone should be in at any given instant. An oscillator produces an indefinitely repeating pattern that can be sent to the speaker. Through the combination of multiple oscillators, distortions of the waveform, and other techniques, unique sounds can be produced.
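Although those early oscillators were analog circuits, the principle is easy to sketch in Python; the pitch and sample rate below are arbitrary illustrative values:

 import math
 
 SAMPLE_RATE = 44100  # samples per second
 FREQUENCY = 440.0    # concert A, in Hz
 
 def sine_oscillator(num_samples):
     """Yield amplitudes in the range -1.0 .. 1.0, repeating the
     sine pattern FREQUENCY times per second."""
     for n in range(num_samples):
         yield math.sin(2 * math.pi * FREQUENCY * n / SAMPLE_RATE)
 
 one_second = list(sine_oscillator(SAMPLE_RATE))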

Digital technology works with bits, which have only two states, on and off. So, to represent a waveform in computer memory, it is divided into discrete samples, where each sample represents the amplitude of the waveform at a given moment in time. Sound streams use many bits per sample to achieve a very fine precision in the waveform's representation. Digitization means that audible artifacts can appear if care is not taken to keep the sound at the highest available precision, and a poor conversion to analog (which often occurs on computers with low-cost integrated sound hardware) will always make what comes through the speakers sound weak and "tinny".
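Continuing the oscillator sketch above, the amplitudes can be quantized to 16-bit integers and stored uncompressed using Python's standard wave and struct modules; the output filename is arbitrary:

 import struct
 import wave
 
 def write_wav(filename, samples, sample_rate=44100):
     """Quantize samples in -1.0 .. 1.0 to 16 bits and write a WAV file."""
     with wave.open(filename, "wb") as w:
         w.setnchannels(1)  # mono
         w.setsampwidth(2)  # 16 bits = 2 bytes per sample
         w.setframerate(sample_rate)
         # 32767 is the largest value a signed 16-bit sample can hold;
         # quantizing to fewer bits would add audible noise.
         frames = b"".join(struct.pack("<h", int(s * 32767)) for s in samples)
         w.writeframes(frames)
 
 write_wav("tone.wav", one_second)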

Early synthesizers had no standardized control mechanism, which led to problems and extra hassle in the studio. After the advent of the MIDI standard in the 1980s, control was greatly simplified. MIDI is a digital protocol that allows one machine (such as a piano keyboard, or a computer) to control another over a cable, triggering events on the other machine. At first, that meant a hardware synthesizer, but much of the hardware that existed when MIDI was invented has since been reduced to software versions. MIDI files are simply sequences of MIDI events using the "General MIDI" (GM) standard instruments, whose actual sounds are left to the interpretation of the playback device; the derision they attract from many listeners is the result of GM's limitations. MIDI is showing its age today, but it is still a commonplace and useful tool.

Samplers also appeared in the 1980s. These machines store recordings and play them back; the difference between them and tape recorders is that samplers allow great variation in the manner of playback. Digital technology makes effects such as reversing, looping, instantaneous restarting, and variable playback speed easy to accomplish. A modern computer sound card works like a sampler that the computer can control and feed data to on the fly. The first sampling computer was the Commodore Amiga, released in 1985. From this computer, tracking was born, introducing digital music composition to the home consumer and starting a new generation of musicians and genres.
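The basic sampler manipulations are equally easy to sketch on a list of amplitude samples, such as the tone generated earlier:

 def reverse(samples):
     return samples[::-1]
 
 def loop(samples, times):
     return samples * times
 
 def change_speed(samples, speed):
     """Crude variable-speed playback by skipping or repeating samples.
     Playing the same waveform faster also raises its pitch, which is
     the audio-domain coupling of tempo and pitch noted earlier."""
     return [samples[int(i * speed)] for i in range(int(len(samples) / speed))]
 
 backwards = reverse(one_second)
 octave_up = change_speed(one_second, 2.0)  # twice as fast, an octave higher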

MIDI Basics

As should be apparent from the above, MIDI is a rather important part of digital music production, so it helps to have a basic idea of what it is.

As mentioned above, MIDI is a sequence of commands to make sound, rather than a representation of the sound itself. The core of the protocol is the note-on and note-off commands: for a keyboard instrument, these represent keys being pressed and released; for a plucked string instrument, the plucking and muting of strings; for a wind instrument, the starting and stopping of a blown note; and so on. Associated with each note command is a channel, numbered from 1 to 16, and a velocity, which might represent how hard the key is hit or the string is plucked, and usually affects the loudness of the note (though this can depend on the instrument configuration). Different instruments receiving the same stream of MIDI commands can be configured to listen only to specific channel numbers, or a single instrument can be set up to produce different sounds on different channels at once; either way, you can play a whole set of instrument parts from a single MIDI command stream.
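On the wire, these commands are short byte sequences. The status byte values below (0x90 for note on, 0x80 for note off, with the channel in the low four bits) come from the MIDI specification; the helper functions themselves are just for illustration:

 def note_on(channel, note, velocity):
     """Channels are numbered 1-16 for the user, 0-15 on the wire."""
     return bytes([0x90 | (channel - 1), note, velocity])
 
 def note_off(channel, note, velocity=0):
     return bytes([0x80 | (channel - 1), note, velocity])
 
 msg = note_on(1, 60, 100)  # middle C on channel 1, velocity 100
 # msg is three bytes: 90 3C 64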

Of course, 16 channels is not a lot: perhaps enough to emulate a small band, but not an entire orchestra. This small number reflects the limitations of digital technology in the early 1980s, when MIDI was first formulated. However, there are ways around it, for example by running multiple concurrent MIDI streams, each driving a different set of instruments; this is not hard to do on a modern computer. In such ways, the venerable MIDI standard continues to remain useful, to the point where no proposed successor has managed to gain much support.

Besides note commands, there are also control-change commands, which set values for controllers on each channel (e.g. vibrato depth and speed), and program-change commands, which choose the instrument for each channel. Among the most important of the remaining messages is "all notes off": very useful if, for some reason (a software bug, congestion on the MIDI connection, or whatever), you have managed to send a bunch of note-on commands without the corresponding note-offs. Sequencer programs often have a "Panic" button which, when clicked, sends this command on every channel and silences the cacophony!
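These messages follow the same byte pattern as the note commands; the status bytes (0xB0 for control change, 0xC0 for program change) and controller number 123 for "all notes off" are defined by the MIDI specification:

 def control_change(channel, controller, value):
     return bytes([0xB0 | (channel - 1), controller, value])
 
 def program_change(channel, program):
     return bytes([0xC0 | (channel - 1), program])
 
 def all_notes_off(channel):
     """Controller 123 with value 0 means 'all notes off'."""
     return control_change(channel, 123, 0)
 
 # What a "Panic" button typically sends: all notes off, on every channel.
 panic = [all_notes_off(ch) for ch in range(1, 17)]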

There is also a set of “system-exclusive” commands (commonly abbreviated “SysEx”), the meanings of which are up to the makers of the instruments. These could be used to set up the full range of synthesizer parameters, or even load actual sound data into the synthesizer.
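SysEx messages are framed by the bytes 0xF0 and 0xF7, with a manufacturer ID as the first data byte; the ID and payload in this sketch are placeholder values (0x7D is the ID reserved for non-commercial use):

 def sysex(manufacturer_id, payload):
     """Wrap a manufacturer-specific payload in SysEx framing bytes."""
     return bytes([0xF0, manufacturer_id]) + bytes(payload) + bytes([0xF7])
 
 msg = sysex(0x7D, [0x01, 0x02, 0x03])
 # msg is six bytes: F0 7D 01 02 03 F7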