Midimutant
Made in collaboration with Aphex Twin, the midimutant learns how to program your DX7 synth so you don't have to. Equipped only with a microphone input and a MIDI output, the midimutant runs on a Raspberry Pi and uses artificial evolution to grow new sounds on hardware synthesisers, mimicking an example sound you provide.
We had an article in the January 2018 issue of The MagPi magazine, and Richard also talked more about this project in his interview with synth designer Tatsuya Takahashi.
There is some technical info and notes on the hardware and Pi setup here - we get a lot of requests, but at the moment we're unable to release the source code and instructions on how to build your own, for complicated reasons - sorry! In the meantime, Samplebrain is fully available and open.
How it works: every patch in a population of initially random patches is sent to the synth via sysex MIDI messages, auditioned, sampled through the microphone input, and checked for similarity to the target sound using MFCC analysis. The best patches are chosen to form the next generation, with their sysex patch data serving as genetic material, so the population converges (most of the time) on similar sounds. Unlike a supervised machine learning algorithm, the artificial evolution does not need to model the underlying parameter space - i.e. how the synth internally functions to create sound. Midimutant can therefore be used on any synthesiser with a documented sysex dump format.
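To make the loop concrete, here is a minimal Python sketch of the idea - not the midimutant source, which isn't released. The choice of libraries (mido, sounddevice, librosa), the population sizes and mutation rate, and the simplified DX7 voice-dump framing are all illustrative assumptions, not the project's actual values.

```python
# A rough sketch of the audition/select/breed loop, assuming the mido,
# sounddevice, librosa and numpy Python packages. The DX7 sysex framing
# and all parameters here are illustrative, not midimutant's real ones.
import random

import librosa
import mido
import numpy as np
import sounddevice as sd

SR = 44100          # audio sample rate
NOTE_LEN = 1.0      # seconds to record each audition
GENOME_LEN = 155    # a DX7 single-voice dump carries 155 parameter bytes
POP_SIZE = 32       # patches per generation (illustrative)
SURVIVORS = 8       # best patches kept as parents (illustrative)
MUTATION_RATE = 0.03

def mfcc_features(audio):
    """Summarise a sound as its mean MFCC vector for comparison."""
    mfcc = librosa.feature.mfcc(y=audio, sr=SR, n_mfcc=13)
    return mfcc.mean(axis=1)

def fitness(audio, target_features):
    """Smaller MFCC distance to the target means higher fitness."""
    return -np.linalg.norm(mfcc_features(audio) - target_features)

def dx7_voice_sysex(genome):
    """Wrap raw parameter bytes in a (simplified) DX7 voice-dump frame."""
    checksum = (128 - sum(genome) % 128) % 128
    return mido.Message('sysex',
                        data=[0x43, 0x00, 0x00, 0x01, 0x1B] + genome + [checksum])

def audition(port, genome):
    """Send a patch, play a note, and record the result from the mic input."""
    port.send(dx7_voice_sysex(genome))
    port.send(mido.Message('note_on', note=60, velocity=100))
    audio = sd.rec(int(SR * NOTE_LEN), samplerate=SR, channels=1)
    sd.wait()
    port.send(mido.Message('note_off', note=60))
    return audio.flatten()

def breed(parents):
    """Single-point crossover plus random byte mutation on the sysex genome."""
    a, b = random.sample(parents, 2)
    cut = random.randrange(GENOME_LEN)
    child = a[:cut] + b[cut:]
    return [random.randrange(128) if random.random() < MUTATION_RATE else g
            for g in child]

def evolve(port, target_audio, generations=50):
    target = mfcc_features(target_audio)
    population = [[random.randrange(128) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(generations):
        scored = sorted(population,
                        key=lambda g: fitness(audition(port, g), target),
                        reverse=True)
        parents = scored[:SURVIVORS]
        population = parents + [breed(parents)
                                for _ in range(POP_SIZE - SURVIVORS)]
    return population[0]  # best patch from the final generation
```

Note that the fitness function only ever sees audio in and sysex bytes out - nothing in the loop knows what the bytes mean to the synth, which is why the same approach transfers to any synth with a documented sysex dump format.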
Some conceptual background can be found in this paper (although the midimutant is a more naive and freeform approach to the same problem):
Andrew Horner, James Beauchamp, and Lippold Haken, "Machine Tongues XVI: Genetic Algorithms and Their Application to FM Matching Synthesis," Computer Music Journal, vol. 17, no. 4, pp. 17-29, Winter 1993.