What Is Music Informatics?
Music Informatics is an expanding area at Indiana University that combines skills from many diverse disciplines. Some examples of music informatics research include:
Music Information Retrieval
Trying to locate music in a database using audio ("query by humming", recordings of performances, etc.), text, MIDI, and other queries. A number of goals are possible: for example, finding instances of the identical music (perhaps other performances or printed editions), of similar music (covers, remixes, variations), or of music in the same style or performed by the same artist.
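One simple, classical approach to "query by humming" reduces both the query and the database entries to melodic contour (the Parsons code: U = up, D = down, R = repeat) and compares the resulting strings by edit distance. The sketch below is illustrative only; the tune names and pitch lists are invented:

```python
def contour(pitches):
    """Reduce a pitch sequence (e.g. MIDI note numbers) to its Parsons-code contour."""
    out = []
    for prev, cur in zip(pitches, pitches[1:]):
        out.append("U" if cur > prev else "D" if cur < prev else "R")
    return "".join(out)

def edit_distance(a, b):
    """Standard Levenshtein distance between two contour strings."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (ca != cb))
    return dp[-1]

def best_match(query_pitches, database):
    """database maps tune names to precomputed contour strings."""
    q = contour(query_pitches)
    return min(database, key=lambda name: edit_distance(q, database[name]))
```

Contour matching deliberately throws away exact intervals and rhythm, which is what makes it robust to out-of-tune humming; real systems refine the candidates it returns with finer-grained comparisons.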
Music Recommender Systems
Given some information about a user's musical interests, the idea is to suggest other music they are likely to find interesting. Of course, this problem has much in common with Music Information Retrieval; in fact, nearly all existing music recommender systems look for music similar to the user-provided starting point. But some people want to find music very different from what they're familiar with! Neither version of the problem is remotely near being solved.
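A minimal sketch of the "similar music" flavor of recommendation, assuming each track has already been reduced to a feature vector (the catalog and features below are invented for illustration). Reversing the ranking gives a crude take on the "very different" variant:

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(seed, catalog, k=2, similar=True):
    """Rank all other tracks by similarity to the seed track's vector.
    similar=False ranks by dissimilarity instead (the 'very different' case)."""
    ranked = sorted((t for t in catalog if t != seed),
                    key=lambda t: cosine(catalog[seed], catalog[t]),
                    reverse=similar)
    return ranked[:k]
```

The hard part, of course, is everything this sketch assumes away: where the feature vectors come from, and whether vector distance matches human judgments of musical similarity.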
Studying and Synthesizing Music Expression
The holy grail here would be to synthesize a convincing expressive performance of a previously unknown piece, given an encoding of it in fully symbolic form, i.e., something equivalent to music notation. The problem is currently being addressed for piano music in the RENCON community; however, the "continuously controlled" instruments are, perhaps, both more challenging and more interesting.
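To make the problem concrete, here is a toy "phrase arch" rendering rule: notes get slightly faster and louder toward mid-phrase and relax at the ends. This is a deliberately simplistic assumption for illustration, not what any RENCON system actually does; the note format and parameters are invented:

```python
import math

def render(notes, base_velocity=64, max_stretch=0.15):
    """notes: list of (pitch, nominal_duration_sec) from a flat symbolic score.
    Returns (pitch, performed_duration_sec, velocity) triples."""
    n = len(notes)
    out = []
    for i, (pitch, dur) in enumerate(notes):
        arch = math.sin(math.pi * i / max(n - 1, 1))  # 0 at phrase ends, 1 mid-phrase
        stretch = 1.0 - max_stretch * arch            # mid-phrase notes shorten (speed up)
        velocity = int(base_velocity * (1.0 + 0.4 * arch))  # mid-phrase notes get louder
        out.append((pitch, dur * stretch, velocity))
    return out
```

Real expressive models must also decide where the phrases are, which is itself an open analysis problem.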
Audio Signal-to-Score (singing, polyphonic, piano, etc.)
This is trying to do for music what speech recognition does for text. Audio transcription is already challenging with monophonic (single instrument) data, since the realization of pitch and rhythm can differ dramatically from their idealizations. We are currently pursuing polyphonic piano transcription.
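A first ingredient of even the monophonic case is frame-level pitch estimation. A minimal autocorrelation sketch follows (the frame format and parameter values are assumptions; real transcription must also segment notes and quantize rhythm):

```python
def estimate_f0(frame, sample_rate, fmin=80.0, fmax=1000.0):
    """Estimate the fundamental frequency (Hz) of one audio frame by
    picking the lag with the strongest autocorrelation."""
    lo = int(sample_rate / fmax)          # smallest lag (highest pitch) considered
    hi = int(sample_rate / fmin)          # largest lag (lowest pitch) considered
    best_lag, best_corr = lo, float("-inf")
    for lag in range(lo, min(hi, len(frame) - 1) + 1):
        corr = sum(frame[i] * frame[i + lag] for i in range(len(frame) - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag
```

Polyphonic transcription is far harder precisely because overlapping notes share harmonics, so no single lag dominates the autocorrelation.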
Algorithmic Music Analysis
Several types of musical analysis, such as harmonic analysis, phrase structure, and note spelling, lend themselves naturally to algorithmic study.
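As one concrete example, key finding in the Krumhansl-Schmuckler style correlates a pitch-class duration histogram against each rotation of the published Krumhansl-Kessler key profiles; the best-correlating rotation names the key:

```python
import statistics

# Krumhansl-Kessler probe-tone profiles (tonic first).
MAJOR = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88]
MINOR = [6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17]
NAMES = ["C", "C#", "D", "Eb", "E", "F", "F#", "G", "Ab", "A", "Bb", "B"]

def correlation(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def find_key(histogram):
    """histogram: 12 total note durations, indexed by pitch class (0 = C)."""
    best = None
    for tonic in range(12):
        rotated = histogram[tonic:] + histogram[:tonic]
        for profile, mode in ((MAJOR, "major"), (MINOR, "minor")):
            r = correlation(rotated, profile)
            if best is None or r > best[0]:
                best = (r, f"{NAMES[tonic]} {mode}")
    return best[1]
```

The same correlate-against-a-template pattern recurs in many of these analysis problems, which is part of what makes them algorithmically tractable.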
Musical Accompaniment Systems
Chris' personal favorite. The goal here is a program that plays the role of an orchestra or other ensemble providing accompaniment for a live soloist. The main ingredients of such a program are the ability to hear (score following), to predict future evolution (modeling expressivity), and to synthesize sound.
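The "predict future evolution" ingredient can be caricatured as tempo extrapolation: fit the soloist's recent note onset times against their score positions and extrapolate to the next event. This straight-line sketch is an illustrative assumption only; real accompaniment systems use much richer probabilistic models:

```python
def predict_next_onset(beat_positions, onset_times, next_beat):
    """Least-squares fit of observed onset time against score beat,
    extrapolated to predict when the event at next_beat should sound."""
    n = len(beat_positions)
    mb = sum(beat_positions) / n
    mt = sum(onset_times) / n
    slope = (sum((b - mb) * (t - mt) for b, t in zip(beat_positions, onset_times))
             / sum((b - mb) ** 2 for b in beat_positions))  # seconds per beat
    return mt + slope * (next_beat - mb)
```

The interesting musical cases are exactly where this fails: ritardandi, fermatas, and rubato, where the soloist's tempo is anything but a straight line.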
Score-Audio Alignment
Used for accompaniment systems, but quite interesting in its own right. This technique develops a correspondence between a symbolic music representation and an actual audio performance. The many possible applications include developing large "note-indexed" audio databases, random access to music content, interweaving visual and aural music representations, automatic page turning, synchronizing animation with music, real-time performance modification, and audio editing.
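One standard way to compute such a correspondence offline is dynamic time warping (DTW). The sketch below uses plain numbers as features for brevity; real systems typically align chroma or other spectral features, but the recursion is the same:

```python
def dtw_path(score_feats, audio_feats):
    """Return the optimal alignment as a list of (score_index, audio_index) pairs."""
    n, m = len(score_feats), len(audio_feats)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(score_feats[i - 1] - audio_feats[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # audio frame repeats
                                 cost[i][j - 1],      # score event sustains
                                 cost[i - 1][j - 1])  # both advance
    # Backtrack from the end of both sequences.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        moves = {(i - 1, j - 1): cost[i - 1][j - 1],
                 (i - 1, j): cost[i - 1][j],
                 (i, j - 1): cost[i][j - 1]}
        i, j = min(moves, key=moves.get)
    return path[::-1]
```

For accompaniment the alignment must instead be computed online, as the audio arrives, which is what makes score following harder than offline alignment.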
Optical Music Recognition (OMR)
The world is in need of symbolic representations of virtually every kind of music. While audio-to-score is the most broadly applicable tool for this purpose, OMR -- the music equivalent of OCR -- is much easier. Besides, starting with audio yields a representation of a particular performance; in many situations, especially for classical music, what is needed is simply a computer-manipulable encoding of an existing symbolic representation (i.e., music notation). Commercial OMR programs have been available for years, but they still have many limitations, and the problem remains open.
Music Source Separation
Source separation is generally regarded as a very difficult (nearly impossible) problem. In many musical contexts, however, we may have very detailed information about the audio content, such as a matched musical score. How can we use this information to separate the parts? The obvious application of this task is karaoke.
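A toy sketch of score-informed separation by time-frequency masking: if a matched score tells us which pitch a part plays in each frame, we can keep only the spectrogram bins near that part's harmonics and zero the rest. The bin/pitch bookkeeping is simplified to integers here for illustration:

```python
def harmonic_bins(f0_bin, n_bins, n_harmonics=4, width=1):
    """Spectrogram bins within `width` of the first few harmonics of a fundamental bin."""
    keep = set()
    for h in range(1, n_harmonics + 1):
        center = f0_bin * h
        for b in range(center - width, center + width + 1):
            if 0 <= b < n_bins:
                keep.add(b)
    return keep

def separate(spectrogram, part_f0_bins):
    """spectrogram: list of frames, each a list of bin magnitudes.
    part_f0_bins: per-frame fundamental bin of the part to isolate.
    Returns the masked spectrogram containing only that part's harmonics."""
    out = []
    for frame, f0 in zip(spectrogram, part_f0_bins):
        keep = harmonic_bins(f0, len(frame))
        out.append([mag if b in keep else 0.0 for b, mag in enumerate(frame)])
    return out
```

The hard residual problem is that parts share bins: when two instruments sound the same harmonic, a binary mask must give it entirely to one of them.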
Music for Computer Games
How can one develop algorithmically composed music that adapts to the evolving state of a game and enhances the player's experience?
MIDI to Symbolic Score
For years, the MIDI file format has been the most common one for symbolic score representation, not because it is the best, but because it has been the only one usable with a wide range of programs. As better-standardized and higher-level representations (for example, MusicXML) evolve, we need to convert MIDI to these forms algorithmically. Issues include automatic voicing, pitch spelling, understanding of rhythm, etc.
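Pitch spelling illustrates why the conversion is nontrivial: MIDI note 61 should be written C-sharp in a sharp key but D-flat in a flat key, and MIDI itself cannot express the difference. The key-signature rule below is a deliberately crude assumption for illustration, not a production spelling algorithm:

```python
SHARP_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
FLAT_NAMES  = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

def spell(midi_note, sharps_in_key):
    """Spell a MIDI note number as a pitch name plus octave.
    sharps_in_key: key-signature count, positive = sharps, negative = flats."""
    names = SHARP_NAMES if sharps_in_key >= 0 else FLAT_NAMES
    octave = midi_note // 12 - 1          # MIDI convention: note 60 = C4
    return f"{names[midi_note % 12]}{octave}"
```

Serious spelling algorithms look at local harmonic context rather than just the key signature, since chromatic passages routinely contradict the key.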