Research in artificial intelligence (AI) has had a well-known impact on medical diagnosis, stock trading, robot control, and several other fields. Perhaps less well known is AI's contribution to the field of music. Nevertheless, artificial intelligence and music (AIM) has long been a common subject at several conferences and workshops, including the International Joint Conference on Artificial Intelligence.
In fact, the first International Computer Music Conference (ICMC) was held in 1974 at Michigan State University in East Lansing, USA. Current research includes the application of AI in music composition, performance, theory, and digital sound processing. Several music software applications have been developed that use AI to produce music. A few examples are included below; many more are still in development.
In 1960, the Russian researcher R. Kh. Zaripov published the world's first paper on algorithmic music composition, using the “Ural-1” computer.
In 1965, inventor Ray Kurzweil premiered a piano piece created by a computer that was capable of pattern recognition in various compositions. The computer was then able to analyze and use these patterns to create novel melodies. The computer debuted on Steve Allen’s I’ve Got a Secret program and stumped the hosts until film star Henry Morgan guessed Ray’s secret.
A program developed by David Cope that composes classical music.
This program was designed to give small-budget productions the instrumentation of a full orchestra. When a small ensemble is playing, the program can supply the parts for the missing instruments. High school and community theaters wanting to produce a musical can thus benefit from the virtual orchestra and realize a full Broadway score. The software is able to follow fluctuations in tempo and musical expression. Musicians enjoy the thrill of playing with a full orchestra, while the audience enjoys the rich sound that comes from combining the virtual orchestra with the live musicians.
Computer Accompaniment (Carnegie Mellon University)
The Computer Music Project at CMU develops computer music and interactive performance technology to enhance human musical experience and creativity. This interdisciplinary effort draws on Music Theory, Cognitive Science, Artificial Intelligence and Machine Learning, Human Computer Interaction, Real-Time Systems, Computer Graphics and Animation, Multimedia, Programming Languages, and Signal Processing. One of their projects is similar to SmartMusic: it provides accompaniment for a chosen piece and follows the soloist (the user) despite tempo changes and mistakes.
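The core idea of such an accompaniment system can be illustrated with a toy sketch: align the soloist's incoming notes against the expected score, skip over wrong notes rather than getting lost, and re-estimate the tempo from the matched onsets. This is a minimal illustration under assumed data shapes, not CMU's actual algorithm; all names here are hypothetical.

```python
def follow_soloist(score, performance, window=3):
    """Align performed (time_sec, pitch) events to a score (a list of MIDI
    pitches), tolerating mistakes, and estimate seconds-per-note tempo."""
    pos = 0            # next expected score index
    matches = []       # (score_index, time) pairs for recognised notes
    for t, pitch in performance:
        # Look a few notes ahead so a wrong or skipped note doesn't derail us.
        for k in range(pos, min(pos + window, len(score))):
            if score[k] == pitch:
                matches.append((k, t))
                pos = k + 1
                break  # notes that match nothing are treated as mistakes
    # Tempo estimate: average time per score note over the matched events.
    if len(matches) >= 2:
        (i0, t0), (i1, t1) = matches[0], matches[-1]
        tempo = (t1 - t0) / (i1 - i0)
    else:
        tempo = None
    return matches, tempo
```

A real score follower must also handle polyphony, octave errors, and continuous audio input, but the follow-the-soloist behaviour described above reduces to this kind of robust alignment plus tempo tracking.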
SmartMusic is an interactive, computer-based practice tool for musicians. Challenging exercises, instant feedback tools, and more than 30,000 accompaniments make SmartMusic a great practice partner. Designed to help teacher and student alike, the program offers five categories of accompaniments: solo, skill development, method books, jazz, and ensemble. Teachers can give students predefined assignments via email. They can also scan in sheet music that is not yet in the library and save it as a SmartMusic file. Students can choose the difficulty level they want to play at, slow down or speed up the tempo, or even change the key in which to play the piece. Computer-aided music instruction isn’t new; programs like Band in a Box and Music Minus One also provide accompaniment. But SmartMusic compares students’ playing with a digital template, which lets it detect mistakes and mark them on a score. It also simulates the rapport between musicians by sensing and reacting to tempo changes.
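Comparing a student's playing against a digital template is, at its simplest, a sequence-alignment problem: line up the played notes with the expected ones and classify each as correct, wrong, missed, or extra. The sketch below illustrates the idea with the standard library's `difflib` aligner; it is a hedged toy, not SmartMusic's actual engine, and the function name is an assumption.

```python
from difflib import SequenceMatcher

def grade_performance(expected, played):
    """Classify each expected note as correct/wrong/missed and flag extra
    notes, using a standard sequence alignment."""
    sm = SequenceMatcher(a=expected, b=played, autojunk=False)
    report = []
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op == "equal":
            report += [("correct", n) for n in expected[i1:i2]]
        elif op == "replace":  # a different note was played here
            report += [("wrong", n) for n in expected[i1:i2]]
        elif op == "delete":   # an expected note was never played
            report += [("missed", n) for n in expected[i1:i2]]
        elif op == "insert":   # the student played a note not in the score
            report += [("extra", n) for n in played[j1:j2]]
    return report
```

The resulting report maps directly onto the marked-up score the text describes: each "wrong" or "missed" entry becomes a highlighted note.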
StarPlay is another music-education program: it lets the user practice by performing with professional musicians, bands, and orchestras. Users can choose their spot in the ensemble, watch the video from that vantage point, and hear the other musicians playing. Again, the program listens to the user’s performance and helps them improve by providing constructive feedback as they rehearse. StarPlay was developed by StarPlayIt (formerly In The Chair), a music technology company that has won many awards for its platforms for online musical performance and participation.
Developed at Princeton University by Ge Wang and Perry Cook, ChucK is a text-based, cross-platform language that allows real-time synthesis, composition, performance, and analysis of music. It is used by SLOrk (the Stanford Laptop Orchestra) and PLOrk (the Princeton Laptop Orchestra).
The Impromptu media programming environment was developed by Andrew Sorensen for exploring ‘intelligent’ interactive music and visual systems. Impromptu is used for live coding performances and research including generative orchestral music and computational models of music perception.
MIDI-to-tablature conversion for string instruments (guitar, violin, dombra, etc.) is a nontrivial task, since the same note can be played on different strings. Creating a good fingering is sometimes a challenge even for a real musician, especially when a two-handed piano composition must be replayed on a string instrument. TabEditor (a tiny plugin for the Reaper DAW) therefore uses an AI that solves this puzzle much as a musician would: it tries to keep all notes close to each other (so the passage remains playable) while fitting all the piano notes into the range of notes the instrument can play simultaneously. When that is impossible (the piano part has more notes than a guitar can play), the AI looks for the least destructive solution, removing as few notes from the composition as possible. This AI was written in the Prolog programming language.
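The "keep notes close together" heuristic can be sketched very simply: enumerate every (string, fret) position that produces a given pitch, then pick, for each note, the position closest to the previous fret, dropping notes the instrument cannot play at all. This greedy sketch only hints at the idea; TabEditor's Prolog search is far more thorough, and all names below are hypothetical.

```python
# Standard guitar tuning: open-string MIDI pitches, low E to high E.
TUNING = [40, 45, 50, 55, 59, 64]
MAX_FRET = 19

def positions(pitch):
    """All (string, fret) pairs that produce the given MIDI pitch."""
    return [(s, pitch - open_) for s, open_ in enumerate(TUNING)
            if 0 <= pitch - open_ <= MAX_FRET]

def greedy_tab(melody):
    """Assign each playable note a (pitch, string, fret), keeping consecutive
    frets close; out-of-range notes are dropped, mirroring the
    'remove as few notes as possible' fallback described above."""
    tab, last_fret = [], 0
    for pitch in melody:
        opts = positions(pitch)
        if not opts:
            continue  # unplayable on this instrument: drop the note
        s, f = min(opts, key=lambda sf: abs(sf[1] - last_fret))
        tab.append((pitch, s, f))
        last_fret = f
    return tab
```

A production-quality solver would optimise whole phrases at once (e.g. by dynamic programming or constraint search, as Prolog makes natural) rather than committing note by note, and would also check that simultaneous notes land on distinct strings.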
Ludwig is automated composition software based on tree-search algorithms. It generates melodies according to principles of classical music theory and arranges them with pop-accompaniment patterns or in four-part choral writing. Ludwig can react in real time to an eight-bar theme played on a keyboard: the theme is analysed for key, harmonic content, and rhythm while a human performs it, and the program then immediately repeats the theme arranged, e.g., for orchestra. It subsequently varies the melody to create a little piece as an interactive answer to the human input.
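Melody generation by tree search can be illustrated with a pruned (beam) search: grow candidate melodies note by note, score each continuation with simple "theory" rules, and keep only the best partial melodies at each step. This is a toy stand-in for the idea, not Ludwig's actual rule set; the scale, scoring weights, and names are assumptions.

```python
SCALE = {0, 2, 4, 5, 7, 9, 11}   # C-major pitch classes

def score_step(prev, nxt):
    """Toy 'theory' score: reward scale tones, penalise leaps and repeats."""
    s = 1.0 if nxt % 12 in SCALE else 0.0
    s -= 0.2 * abs(nxt - prev)   # prefer stepwise motion
    if nxt == prev:
        s -= 0.5                 # discourage droning on one note
    return s

def beam_search_melody(start, length, beam=4):
    """Grow melodies one note at a time, keeping only the `beam` best
    partial melodies at each step (a pruned tree search)."""
    frontier = [([start], 0.0)]
    for _ in range(length - 1):
        candidates = []
        for melody, total in frontier:
            for step in range(-4, 5):          # candidate melodic intervals
                nxt = melody[-1] + step
                candidates.append(
                    (melody + [nxt], total + score_step(melody[-1], nxt)))
        frontier = sorted(candidates, key=lambda c: -c[1])[:beam]
    return frontier[0][0]
```

Replacing the toy scoring function with real voice-leading and harmony rules, and searching the tree exhaustively instead of with a fixed beam, is where systems like Ludwig invest their effort.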
OMax is a software environment that learns, in real time, typical features of a musician’s style and plays along interactively, giving the flavor of a machine co-improvisation. OMax uses OpenMusic and Max. It is based on research on stylistic modeling carried out by Gerard Assayag and Shlomo Dubnov, and on research on improvisation with the computer by G. Assayag, M. Chemillier, and G. Bloch (a.k.a. the OMax Brothers) in the IRCAM Music Representations group.
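The flavour of style learning and co-improvisation can be conveyed with a deliberately simple stand-in: learn which note tends to follow which in the live stream, then walk those transitions to generate a stylistically similar line. OMax's actual stylistic model is considerably more sophisticated than this first-order sketch, and all names here are hypothetical.

```python
import random
from collections import defaultdict

def learn_transitions(stream):
    """Count which note tends to follow which in the musician's playing."""
    model = defaultdict(list)
    for a, b in zip(stream, stream[1:]):
        model[a].append(b)
    return model

def improvise(model, start, length, seed=0):
    """Walk the learned transitions to produce a line in a similar style."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        nxts = model.get(out[-1])
        if not nxts:
            break  # dead end: nothing was ever observed after this note
        out.append(rng.choice(nxts))
    return out
```

Because the transition lists keep duplicates, frequently played continuations are sampled more often, so the output statistically echoes the input style; a real co-improviser additionally models longer contexts and timing.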
Melomics is a proprietary computational system for the automatic (without human intervention) composition of music, based on bioinspired methods and produced by Melomics Media. It composes in a wide variety of genres, and all music composed by Melomics algorithms is available in MP3, MIDI, MusicXML, and PDF (sheet music) formats after purchase. Music composed by this algorithm was organized into an album named Iamus, which New Scientist hailed as “The first complete album to be composed solely by a computer and recorded by human musicians.”