In 1948, Canadian Hugh LeCaine invented the Sackbut.
That might sound like a style of jeans to you, but it was actually the world’s first music synthesizer. LeCaine named it after the mediaeval word for the trombone, which derives from a French term for “push and pull” – fittingly, the trombone was one of the instruments his new device could mimic.
After the Sackbut, LeCaine continued inventing machines to shape
and imitate sounds and music, devices like the “special purpose tape
machine” and the “touch sensitive organ”. To see pictures of them with
contemporary eyes, you’d think LeCaine was one mad sound scientist. The
truth is, most were groundbreaking audio devices, and a lot of his work
is reflected in the technology used to create music today.
In fact, after listening to some audio clips of LeCaine’s inventions
on the web, I was amazed by how familiar they sounded. I could really
hear a relationship with contemporary electronic music. Artists like
the Chemical Brothers, Fischerspooner, and even Kraftwerk all enjoy a
broad tonal palette thanks in no small part to this Canadian from the
last century.
Ironically, music synthesis was originally supposed to sound real,
not artificial like much of the work electronic musicians produce
today. That was LeCaine’s goal. Synthesis was intended to act as a sort
of audio trickery, and it comes in all forms.
Like lip-synching. You thought we left that nasty practice behind
with the ’80s air band, but anyone who saw Ashlee Simpson on Saturday
Night Live last year knows better.
During that supposedly live performance her vocal track kicked in
way before she was ready for it. She stood for a moment like a deer in
some crazy trucker’s headlights, and then fled the stage, leaving her
band behind to weather the technical glitch.
Lots of major artists fall back on lip-synching during live
performances. Reportedly, Gwen Stefani uses it when she finds herself
on tour with the flu. I even read web forum posts reporting that she
would casually pause during a show to cough without any breakdown in
her vocal performance.
Another musical trick rooted in audio synthesis is called Auto-Tune.
This software is commonly used to fix recorded singing errors. If a
vocalist misses a note, the sound engineer uses Auto-Tune to correct
the flaw without needing to re-record the track.
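The idea behind that correction is simple enough to sketch. Here’s a toy Python illustration – my own simplification, not Auto-Tune’s actual algorithm – of snapping an off-pitch frequency to the nearest note of the equal-tempered scale:

```python
import math

A4 = 440.0  # reference pitch in Hz

def snap_to_semitone(freq_hz):
    """Snap a detected frequency to the nearest equal-tempered semitone."""
    # Distance from A4 in semitones (fractional when the note is off-pitch)
    semitones = 12 * math.log2(freq_hz / A4)
    # Round to the nearest whole semitone and convert back to a frequency
    return A4 * 2 ** (round(semitones) / 12)

# A singer aims for A4 (440 Hz) but lands slightly flat at 430 Hz:
print(snap_to_semitone(430.0))  # → 440.0
```

Real pitch correctors do much more – they have to detect the pitch in the first place, and glide between notes smoothly enough that the fix is inaudible – but the core move is this kind of rounding.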
Auto-Tune can also do intentionally funky things with vocals. Cher
used it in her song “Believe,” and I’ve noticed the Beastie Boys
fooling around with it from time to time.
Auto-Tune is often used in live performances. Diva Shania Twain
depends on it to save her performances when she sings a bad note,
which happens more often than you might think. Auto-Tune kicks in
automatically and fixes her mistake on the fly with the audience none
the wiser. Lots of big-name artists use Auto-Tune in this fashion.
You might see all this technical wizardry as an artistic crutch, but
can you blame performers for falling back on technology from time to
time? When on tour most of them have grueling schedules, and the prices
people pay for big-ticket acts demand a perfect-sounding performance.
Even Madonna is only human, after all. You can’t expect her to be in
top form every night, and who can actually sing with perfect pitch
whilst executing cartwheels and backflips?
Truth is, synthesis is an integral part of modern music. There’s so
much synthetic music around these days that it’s become the norm. And
it’s not just contemporary music that uses it. Consider Jimi Hendrix
and his “Cry Baby” pedal. That was pure audio synthesis.
When you really think about it, given society’s fascination with
technology, synthesized music is the natural artistic progression.
LeCaine would be proud.
Andrew Robulack is an IT Business Strategist and Architect based in Whitehorse.
Originally published in the Yukon News on February 18, 2005.
Copyright Andrew Robulack 2005