Determining the scale of a melodic piece (audio or MIDI) isn't easy. And if you have to manually name thousands of items, as often happens in virtual instrument creation (where you may have recorded several sets of musical phrases), or if you are doing data analysis, it can quickly become overwhelming.
REAPER does have a scale finder helper window, but it gives no information about the relevance of each result, you don't know what the underlying algorithm is, and above all, you can't perform any action from it (no quick renaming, no ReaScript access to trigger other actions based on the results).
The purpose of the scripts shown here is to help you with your key/scale finding process in REAPER.
Their development was sponsored by Soundiron for virtual instrument creation.
- Determine scales and notes stats for selected MIDI items (the main script)
- Color selected active takes or tracks according to the scale in their name or notes (a simple companion script)
The analysis script works on MIDI items. If you want to analyze audio files, convert them to MIDI first, using ReaTune or a more specialized VST (like JamOrigin MIDI Guitar). You can even use more complex external applications like Melodyne, which can do chord recognition.
Then you can put the MIDI items on the track above or below your audio items. The script will use the MIDI items to determine the scale of your original items (above or below). Processing MIDI has a lot of advantages: you can manually adjust the pitch recognition by deleting or correcting badly recognized notes, and it is much faster in terms of performance.
The Scale Detection Algorithm
This script is basically a Lua integration of the famous Krumhansl-Schmuckler algorithm for REAPER MIDI items. This algorithm has been the subject of a lot of advanced scientific research and theses in sound and psychoacoustics for almost 30 years, especially by David Temperley, who has written extensively about it. It is implemented in many audio-related software packages, both commercial and open source, like music21 or Essentia.
There are various scale detection algorithms out there, but where this one excels is that it doesn't need data quantized on a grid: it doesn't care about tempo or beats. It only needs two pieces of information: the notes and their total duration in the song.
It then performs a calculation on this data, applying a chosen profile (a set of weights describing the importance of each tone in a key) to determine a correlation coefficient. The higher the coefficient, the more probable it is that your melody belongs to that key. It tests this for each key, minor and major, so that's 24 keys to test.
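To make the idea concrete, here is a minimal sketch of that correlation step in Python. It is illustrative only, not the script's actual Lua code: the weights below are the classic Krumhansl-Kessler ratings (as found in music21), and the function names are my own.

```python
import math

# Classic Krumhansl-Kessler major/minor profile weights, index 0 = tonic.
MAJOR = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88]
MINOR = [6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17]
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def key_correlations(durations):
    """durations: total duration of each pitch class (12 values, index 0 = C).
    Returns all 24 (coefficient, key name) pairs, best-scoring first."""
    results = []
    for tonic in range(12):
        # Rotate the duration vector so index 0 lines up with the candidate tonic.
        rotated = durations[tonic:] + durations[:tonic]
        results.append((pearson(rotated, MAJOR), NOTES[tonic] + " major"))
        results.append((pearson(rotated, MINOR), NOTES[tonic] + " minor"))
    return sorted(results, reverse=True)
```

For example, feeding in a duration histogram dominated by C, E and G ranks "C major" at the top of the list, and the next few entries are the runner-up keys the script also reports.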
Various profiles have emerged over the years. Some are based on musician preferences, some on the analysis of a corpus of songs in a certain genre… You have to choose the one which best fits your workflow. Finding the different profiles was a mix of reading articles and checking open source software source code.
Here are the profiles I integrated in the script, with letter identifiers so the user can quickly choose one.
The first set of weights is taken from the music21 Python library:
- KS = KrumhanslSchmuckler
- KK = KrumhanslKessler = “Strong tendency to identify the dominant key as the tonic.”
- AE = AardenEssen = “Weak tendency to identify the subdominant key as the tonic. (N.B. — we are not sure exactly where the minor weightings come from, and recommend only using these weights for major).”
- SW = SimpleWeights = “Performs most consistently with large regions of music, becomes noisier with smaller regions of music.”
- BB = BellmanBudge = “No particular tendencies for confusions with neighboring keys.”
- TKP = TemperleyKostkaPayne = “Strong tendency to identify the relative major as the tonic in minor keys. Well-balanced for major keys.”
Other profiles have been taken from Essentia's Key algorithm (see Key — Essentia 2.1-dev documentation). Check the Essentia documentation for more info about these profiles.
- DT = Diatonic
- DTR = David Temperley Revised
- WC = Wei Chai
- TM = Temperley Mirex 2005
- SH = Shaath
- GZ = Gomez
- NL = Noland
- EDMM = Electronic Dance Music Manual
- EDMA = Electronic Dance Music Automatic
Soundiron's team ran tests with three trained musicians to see which profile would best fit their workflow (virtual instrument creation, analysis of lots of small musical phrases). We compared the results made by humans against the results given by the algorithm with the various profiles. It turned out that TKP almost always agreed with the musicians' findings, which is why it is the default in the script. When it was off, the key with the second or third highest correlation was usually the one estimated by the musicians. That's why the script outputs 3 scales by default.
If you want to read more about this algorithm, here are various sources you can check:
- Key-finding algorithm, a short article which explains the concept behind the algorithm with graphics and the mathematical formula
- What’s Key for Key? The Krumhansl-Schmuckler Key-Finding Algorithm Reconsidered | Music Perception
- A Bayesian Approach to Key-Finding, by David Temperley, 2002
- KEY-FINDING WITH INTERVAL PROFILES, Søren Tjagvad Madsen (Austrian Research Institute for Artificial Intelligence (OFAI), Vienna), Gerhard Widmer (Department of Computational Perception, Johannes Kepler University, Linz), 2007
- Notes on David Temperley’s “What’s Key for Key? The Krumhansl-Schmuckler Key-Finding Algorithm Reconsidered” by Carley Tanoue
- Are Pitch-Class Profiles Really “Key for Key”?, by Ian Quinn
Other features beyond the key/scale analysis have been added to the script so it can perform REAPER-related actions:
- You can commit the X most probable keys to the item names or notes
- You can commit the keys as a prefix, a suffix, or by entirely replacing the destination field
- You can output the detailed list of correlation coefficients from the analysis, so you can see how closely the algorithm matches your note data to each possible scale. Handy for manually adjusting the results if an item falls below your confidence level.
- You can filter out MIDI notes below a minimum length so they are excluded from the key detection
- If your MIDI was extracted from audio, you can specify a "paired track", which contains the same number of items. It can be just above, below, none, or even the current track (in case you want to run extra actions on your currently selected items).
- Extra action 1: naming paired items as well (with "current", it does nothing)
- Extra action 2: moving paired items to tracks whose name matches the scale (with "current", it moves the MIDI items themselves)
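The minimum-length filter mentioned above can be pictured as a simple pre-analysis pass. This is a hypothetical sketch, not the script's internals: short notes (often glitches from audio-to-MIDI conversion) are dropped, and the rest are summed into the 12-bin pitch-class histogram that feeds the correlation step.

```python
def pitch_class_durations(notes, min_len=0.0):
    """notes: list of (pitch, duration_seconds) pairs, pitch as a MIDI note number.
    Notes shorter than min_len are ignored; the rest are accumulated per pitch class."""
    durations = [0.0] * 12
    for pitch, duration in notes:
        if duration >= min_len:  # skip notes below the minimum length
            durations[pitch % 12] += duration
    return durations
```

With `min_len = 0.1`, a 50 ms blip on G would be excluded, while normal-length notes on C and E still count toward the histogram.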
Just rename selected MIDI items and their paired audio items on the track below.
Here is how to set up your projects if you want to use paired items and move them to tracks named after scales.
You can mod this script by duplicating and renaming it.
Then edit the user config area at the top of the file. It contains easily customizable values which let you set the popup defaults:
```lua
-- USER AREA -----------
prompt = true            -- true/false: display a prompt window at script run
target = "name"          -- name/notes
action = "r"             -- prefix/suffix/match/replace (use only first letter)
paired_items_dest = "a"  -- append scale to items on track above/below/current/none (use only first letter)
name_paired_item = true
move = true              -- true/false
min_len = 0              -- Number
unit = "s"               -- s/ms/grid/samples/frames
profile = "TKP"          -- KS,KK,AE,SWS,BB,TKP
results_num = 3          -- Number (1-24): number of scales to output
log = true               -- true/false
flat = "Sharp"           -- Sharp/Flat
console = true           -- true/false: debug
-- END OF USER AREA ----
```
Missing a function? Let me know!
By purchasing this product, you are supporting my free scripting. Thanks!