One of the basic tenets of my approach to machine-generated musical
accompaniment is that, to best follow a soloist, one must hear
several performances of the piece and player in question.
I use this information to train a mathematical model of
the musical interpretation in question, helping my system
better anticipate the future musical evolution.
However, there always needs to be a first performance, and the
accompanist should not completely fall apart, even when the
conditions are unfamiliar. The example of the exposition of
Mozart's Clarinet Quintet (K. 581) demonstrates my system's
ability to "sightread" --- perform without training data.
If I had a favorite piece of music, this would probably be it.
My friend, Ted Lane, graciously agreed to record the piece with
my accompaniment system on a recent visit to Amherst for an
(all-human) concert of wind music. I neglected to tell him to
bring his A clarinet, so he agreed to play it on the B-flat clarinet
instead. In making this recording, I coached Ted not simply to
play the piece as he thought it should be played, but to test the
limits of the system's ability to follow. If some of the rubato
seems a little out of place, think of it as a test of the machine's
ability to sightread.
Mozart's Quintet for Clarinet and Strings, K. 581, 1st mvmt exposition