The following is a paper Tim wrote for
a computer science class as an undergrad at Wartburg
College in Waverly, Iowa. Many of the details
are out of date (it was written in early 1992, and
many of the sources are from the 1980's), but the
basics still hold true, and it contains some
historical information that many of you might find
interesting.
Computers and Music
By Timothy S. Fischer
(1992)
There is no escaping electronic music. Whether
it be television commercials, popular music,
movies, or kids' cartoons, it's safe to say that
most contemporary music has some sort of
electronic instrumentation. Pick out a popular
song at random, and there's a pretty good chance
that a computer was involved in its creation (Yelton
11).
Synthesizers, the instruments which create
electronic music, have been around for some time.
The first attempt at harnessing the computer as a
tool for synthesizing sounds came in the
mid-1950s. Bell Laboratories of New Jersey became
interested in the possibilities of transmitting
telephone conversations in digital form. This
process involved converting the conversations from
analog to digital form by a process called
sampling. Then at the other end, the digital
information was converted back into analog form.
This system needed only to transmit a very small
portion of the audio spectrum, but the team
realized that it might be possible to use similar
technology to create music electronically. In
1957-58, this research resulted in MUSIC I and
MUSIC II, two very simple computerized music
generation programs (Manning 217).
The next significant advance in electronic music
was the work of Robert Moog and others in the
mid-1960s. Moog fashioned musical instruments from
electronic lab equipment, and some of his
instruments appeared on popular records by Walter
Carlos, the Beatles, the Monkees, and others. Rock
music quickly incorporated synthesizers, and the
analog synth sound of the 1970s laid the
foundation for many hit records. In the last few
years, new techniques in digital synthesis have
made many new sounds available to anyone who wants
to experiment with electronic music (Yelton
13-14).
Analog synthesizers operate through electronic
oscillators and filters that generate simple
combinations of pure frequencies with standard
waveforms, such as sine or square. These sounds
may be interesting in their own right, but only
superficially resemble acoustic instruments.
Digital instruments, however, use a sampling
process built on digital recordings of real
instruments. When a musician presses a key, the
machine calls the digitized note from memory and
sends it through a loudspeaker. In theory this
process can mimic any acoustic instrument, but
sampling can consume a great deal of memory (Brody 27).
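A quick sketch in Python can make the distinction
concrete (this is my own illustration, not drawn
from the sources cited): an analog-style
oscillator computes a simple waveform from a
formula, while a sampling instrument simply
replays a stored recording of a real note.

    import math

    SAMPLE_RATE = 44100  # samples per second (an assumed, CD-style rate)

    def square_wave(freq_hz, seconds):
        """Analog-style synthesis: compute a pure, mathematically simple
        waveform that only superficially resembles an acoustic instrument."""
        samples = []
        for n in range(int(SAMPLE_RATE * seconds)):
            t = n / SAMPLE_RATE
            samples.append(1.0 if math.sin(2 * math.pi * freq_hz * t) >= 0 else -1.0)
        return samples

    def play_sample(recorded_note):
        """Sampling: recall a digitized recording from memory and pass it,
        unchanged, on toward the loudspeaker."""
        return list(recorded_note)
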
For example, to accurately mimic the sound of a
piano would require 25 billion bits, filling
100,000 memory chips. To avoid this cost, sampling
instruments often cut corners. Many keyboards
store only the initial "attack" portion
of the note, and a short portion of its final
decay. Another shortcut is to record only a
fraction of an instrument's notes, say 20 or 30 of
the 88 notes on a piano, and then electronically
raise or lower their pitch to produce the
remaining notes. Unfortunately, both of these
shortcuts distort the sound, making it differ
from the original acoustic instrument. One
recent method of condensing sampled sound was used
in the Kurzweil 250 keyboard. This keyboard used a
set of rules that describe how the instrument
sounds without actually "spelling it
out". This method resulted in the most
realistic sounding keyboard yet (Brody 28-29).
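The pitch-shifting shortcut can also be sketched
in a few lines of Python. This is only a rough
illustration of the idea, not the method used by
any particular keyboard: resampling a stored note
to change its pitch also changes its length and
tone color, which is exactly the kind of
distortion described above.

    SAMPLE_RATE = 44100        # samples per second (assumed)
    SEMITONE = 2 ** (1 / 12)   # frequency ratio between adjacent keys

    def shift_pitch(samples, semitones):
        """Resample a recorded note to raise or lower its pitch. Raising
        the pitch reads through the recording faster, so the note also
        gets shorter and its timbre changes slightly."""
        ratio = SEMITONE ** semitones
        out, pos = [], 0.0
        while pos < len(samples) - 1:
            i = int(pos)
            frac = pos - i
            # linear interpolation between neighboring samples
            out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
            pos += ratio
        return out

    def note_from_bank(bank, wanted_key):
        """Play the stored note closest to the requested key, shifted the
        remaining distance (bank maps key number -> list of samples)."""
        nearest = min(bank, key=lambda k: abs(k - wanted_key))
        return shift_pitch(bank[nearest], wanted_key - nearest)
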
Although the most common electronic instrument
is the keyboard synthesizer, some new and unusual
instruments have appeared in the last few years.
These include electronic wind instruments, such as
the Yamaha WX-11 Wind Controller used in the last
Wartburg Artist Series, and guitars such as the
SynthAxe, which can control traditional
synthesizers. The SynthAxe has two sets of
strings: a short set of strings to trigger notes
in a synthesizer, and a long set to vary the
notes' pitches. This instrument allows for some
unique musical effects, such as "strumming a
brass section" ("Electronics" 38).
There are also some other unusual instruments.
Philippe Guerre designed an instrument he calls
the Laserharp. It consists of a laser and a
rotating mirror, along with photoelectric sensors
that detect whether the laser beam has been
interrupted and where along the beam's path the
interruption has taken place. The beams of light
can be "plucked" just like a harp, and the pitch
changes as the musician's hand moves up and down
the beams. Guerre also has a
contract with the French space organization
Aerospatial to launch the Laserharp software into
space in a satellite. The satellite would then use
the light from stars to play melodies (Connor 37).
One of the most significant developments in the
history of electronic music came in 1983, when the
world's major electronic-instrument
manufacturers announced a communication standard
called Musical Instrument Digital Interface,
commonly referred to as MIDI. Before MIDI, it was
not uncommon to see a keyboardist playing several
instruments at once. Since various keyboards have
their own strengths and weaknesses, players weren't
able to choose just one over another. Keyboards
couldn't be interfaced, because there were no
standards. However, MIDI changed all this. MIDI
avoided compatibility problems by largely ignoring
how an instrument produces sound. Physically, MIDI
consists of three wires with a five-pin DIN
connector at each end, which plugs into the
instruments. A MIDI instrument does not directly
control another. Rather, the first sends
information to the second describing when to make
a sound, what pitch and instrument the sound
should be, how long the sound should persist, and
other important information. The controlled
instrument simply does what it is told, following
the lead of the first. For example, when a key is
pressed on the controlling synthesizer, three
bytes of information are generated. The first
indicates a key has been depressed, the second
indicates which key was depressed, and the third
indicates the velocity of the depressed key, or a
dummy value if the keyboard doesn't support
velocity sensing. Dissimilar instruments can be
interfaced, since the simpler instrument can
ignore any MIDI information it cannot use. For
example, many higher-priced instruments use the
velocity of the key pressed to indicate how loud
to produce the note. If the controlling keyboard
produces this information, and the controlled
keyboard cannot use it, it is simply ignored (Jordahl
79; Hall 27; "Electronics" 38).
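Those three bytes are easy to picture. The short
Python sketch below builds a standard MIDI
"note on" message; the channel number and the
stand-in velocity of 64 are assumptions of mine
for the example, not details taken from the
sources cited.

    NOTE_ON = 0x90  # status value meaning "a key has been depressed"

    def note_on_message(key, velocity=64, channel=0):
        """Build the three bytes a controlling keyboard sends.
        key      -- which key was depressed (0-127, 60 = middle C)
        velocity -- how hard the key was struck; 64 serves as a dummy
                    value for keyboards without velocity sensing
        channel  -- MIDI channel 0-15 (shown to musicians as 1-16)"""
        return bytes([NOTE_ON | channel, key & 0x7F, velocity & 0x7F])

    # Middle C struck at a moderate velocity on channel 1:
    print(note_on_message(60, 100).hex())  # -> "903c64"
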
Obviously, MIDI was designed for the performing
musician, but the same interface allows keyboards
to be connected with personal computers, leading
to many different applications. Since MIDI was
designed with professional musicians in mind, it's not
surprising that there have been some major
benefits to them from this marriage of computers
and musical instruments. One of the largest
benefits of combining computers with keyboards
comes with music writing. With music notation
software, such as Finale on the Macintosh, or
Score on the IBM PC, music can be played on the
keyboard or other MIDI instrument, and immediately
be displayed on the computer screen. Musical
pitches can also be entered with a mouse or the
computer keyboard. Mistakes can be corrected, and
other musical symbols, such as dynamic markings,
tempo changes, and even lyrics, can be entered to
make a complete score. When finished, the score
can be printed out. If a laser or other
high-quality printer is used, the music approaches
published quality. The advantages of using music
notation software parallel the advantages of using
a word processor rather than a typewriter (Mahin
"Choosing..." 17-23). My brother,
Michael Fischer, is a professional musician who is
just getting started in music composing. After
writing several compositions by hand, he has
recently switched to a program called Music Prose.
Fischer said that initially you don't save much
time using the computer compared to conventional
hand notation, but this time is made up if the
piece needs to be edited. For example, when he
sent a piece in to be published, the publisher
recommended some changes. Since he had written
this piece out by hand, he had to rewrite the
entire score to make those changes. Had he written
the piece with Music Prose, he would simply have
made the needed changes and reprinted it.
Other applications for professional musicians
include software known as librarians. These
programs organize and store synthesizer sounds,
known as patches. Synthesizers have a small amount
of memory, which can hold a finite number of
patches, usually in a bank of 32, 64, 100, or 128
sounds. However, if you have a patch librarian for
the computer, you can store thousands of synth
patches on a single floppy disk. This information
can be downloaded, via MIDI, from the synthesizer
to the computer, and the patches can be reloaded
to the synthesizer when needed (Yelton 149-150).
Some librarians also include voice-editing
software to help create new sounds. Since editing
a sound's settings on the synthesizer itself can
be cumbersome, voice-editing programs, such as the
Opcode Editor/Librarian, allow these settings to
be changed conveniently on screen with a mouse (Mahin
"Software..." 49).
In addition to the benefits to professional
musicians, there are also many benefits for those
who teach music and their students. One of the
most popular MIDI applications for the classroom
is known as sequencing. Using sequencer software,
the computer acts like a tape recorder. However,
instead of recording the sounds, the sequencer
records the MIDI information which tells which
notes were used, how they were used, etc. This
data can be stored to disk or edited in a variety
of ways. For example, tempo can be changed without
changing the pitch. Also, passages can instantly
be transposed to another key. Sequencers can also
record and play back different parts of a
composition in a multi-track manner. Therefore
students can record a basic chord pattern, then
listen to it while adding a melody line. Then
these two parts can be played while a bass part is
added, and so on. When the process is completed,
the computer can then print out a complete score
(Jordahl 79-80; Blackman 29). Sequencers are also
helpful for professional musicians who are
composers. If the computer is connected to a
synthesizer which can make the sounds of the
instruments the composer is calling for in the
composition, the composer can hear what the piece
actually sounds like before it is fully completed
(Ehle 45).
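Such edits are painless because a sequencer stores
events rather than sound. The sketch below uses a
simplified event format of my own (time in beats,
key number, velocity): transposing just adds a
constant to every key number, and changing the
tempo just rescales the time stamps without
touching the pitches.

    def transpose(events, semitones):
        """Move every recorded note up or down by the same interval."""
        return [(t, key + semitones, vel) for (t, key, vel) in events]

    def change_tempo(events, factor):
        """factor < 1.0 plays the passage faster; pitches are unchanged."""
        return [(t * factor, key, vel) for (t, key, vel) in events]

    # A short C-major arpeggio recorded one note per beat...
    arpeggio = [(0.0, 60, 90), (1.0, 64, 90), (2.0, 67, 90)]
    # ...instantly moved up a whole step and played twice as fast:
    print(change_tempo(transpose(arpeggio, 2), 0.5))
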
Another use for computers and keyboards for
students is in individualized music instruction.
For example, students can use the synthesizer to
play a musical passage shown on the computer
screen. The computer "reads" the input
data, and watches for errors in notes, rhythm, or
tempo. Incorrect input is shown in musical
notation next to the original passage, while both
are played back for the student to compare.
Similar programs are becoming available for other
instruments besides keyboards. At least one
software series currently on the market is
designed especially for singing instruction.
Students sing passages into a microphone, and the
computer then indicates if the notes sung are
flat, sharp, or on pitch, while a synthesizer
plays notes for comparison (Jordahl 80; Hall 28).
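The error checking itself can be imagined as a
simple comparison, as in the toy sketch below (my
own illustration, not how any particular program
works): the notes the student played are lined up
against the passage on the screen and any
differences are reported.

    def check_performance(expected_keys, played_keys):
        """Return a list of (position, expected, played) for each
        wrong note in the student's performance."""
        errors = []
        for i, (want, got) in enumerate(zip(expected_keys, played_keys)):
            if want != got:
                errors.append((i, want, got))
        return errors

    # An expected scale fragment vs. a performance with one wrong note:
    print(check_performance([60, 62, 64, 65], [60, 62, 63, 65]))  # [(2, 64, 63)]
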
Computers are also helping people who are
physically disabled write and play music. With a
MIDI keyboard and sequencer, people who are
moderately disabled (that is, those who can
coordinate one hand or even just a finger) can compose and
play music. At the present time, the severely
handicapped cannot take advantage of MIDI on their
own, but need someone to help. Research in this
area is important, because if disabled children
cannot communicate, then no one can understand
what they think. Before computers, severely
disabled children could only communicate by
"eye-pointing", in which the
"talked" by focusing their eyes on
different symbols to make simple sentences. Today,
word processing programs for the disabled allow
more sophisticated communication. A program might
scan a cursor over a grid of letters, words,
numbers, etc. A switch that is triggered by the
disabled person stops the cursor and sends
whatever is beside it to the top of the screen to
build up a sentence. Drake and Grant would like to
see a similar system developed for music (Drake
and Grant 37-40).
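The scanning idea can be sketched in a few lines
(again, a simplification of my own rather than any
actual product): the cursor steps through the
choices on its own, and the user's single switch
picks whatever the cursor is resting on.

    import itertools

    def scan(choices, switch_pressed):
        """Cycle the cursor over the choices until the switch fires,
        then return whatever the cursor is resting on."""
        for choice in itertools.cycle(choices):
            # A real program would pause here long enough for the
            # user to react before the cursor moves on.
            if switch_pressed(choice):
                return choice

    # Demo "switch" that fires when the cursor reaches the wanted word:
    print(scan(["yes", "no", "music", "rest"], lambda c: c == "music"))
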
Another system, known as Biomuse, could also
eventually help disabled persons. The Biomuse
system consists of dime-sized electrodes which are
hidden underneath headbands and armbands placed
on the musician. These electrodes pick up signals
from the musician's brain and from the eye, arm, and hand
muscles. A small box of newly designed circuitry
would gather, filter, and process the bioelectric
signals so they could control a music synthesizer
through the MIDI interface. What this all means is
that a musician could walk on stage, and begin
playing: with every gesture of her hand, every
flick of a finger, notes would emanate from the
banks of speakers, and when she closed her eyes,
an unseen piano would accompany the melody (Amato
202-3). In addition, this system is ideally suited
for someone who has very little motor control.
Since the electrodes actually work on brain waves,
totally disabled persons might be able to make
music without moving a muscle. The Biomuse
technology may also help disabled people get jobs
by enabling them to control computers without
needing fine motor control. The inventors of
Biomuse are looking into the possibility of using
the signals created by the eye muscles to move the
cursor around the screen like an
"eye-controlled mouse".
As you can see, the potential of combining
music and the computer is only beginning to be realized. The MIDI
interface was only established in 1983, but as can
be seen by its mention in almost every aspect
described in this paper, it is indispensable, and
new applications are being found constantly.
Although I personally don't believe traditional
acoustic instruments will ever become obsolete, I
feel we will see electronic and computer music and
its applications become more and more prevalent.
Works Cited
Amato, Ivan "Muscle Melodies and Brain
Refrains: Turning Bioelectric Signals into
Music" Science News V135 (April 1, 1989):
202-203.
Blackman, Jaimie M. "The MIDI
Potential" Music Educator's Journal V73.4
(1986): 29.
Brody, Herb. "Kurzweils Keyboard"
High Technology V5.2 (1985): 27-32.
Campbell, Phillip. "The music of digital
computers" Nature 324 (1986): 523-28.
Connor, Steve. "The light way to play with
the stars" New Scientist 112.1530 (1986): 37.
Drake, Adle, and Grant, Jim. "Music gives
disability a byte" New Scientist V133.1544
(1987): pp. 37-39.
Ehle, Robert C. "Musicians and
Computers" The American Music Teacher V35.5
(1986): 30-31, 45.
"Electronics Develops Charms to
Sooth" New Scientist V113.1544(1987).
Hall, Vann. "Conquering the MIDI
Muddle" Music Educator's Journal V73.4
(1986): 27-9.
Interactive Music Company, The Book of MIDI
Computer Software (Hypercard Stack). Distributed
by Opcode Systems, Inc., for the Apple Macintosh.
Jordahl, Gregory. "Teaching Music in the
Age of MIDI" Classroom Computer Learning V9.2
(1988): 79-83.
Mahin, Bruce P. "Choosing Music Notation
Software" Clavier V28.6 (1989): 17-23.
Mahin, Bruce P. "Software Review: Opcode
Editor/Librarians" Instrumentalist V45.9
(Apr. 1991): 49.
Manning, Peter. Electronic and Computer Music
Oxford: Oxford University Press, 1985.
Yelton, Geary. Music and the Macintosh Atlanta:
MIDI America, Inc. 1989.