The Making of Traktomizer
Learning the Habits of a Playlist Algorithm
Artists know when the final stages of creating a piece of music have arrived, because that's the moment we think our work is good enough to compare with some other piece of music we admire, love and respect as perfection; the thought goes something like: ‘if only I can get my sound to sound like that!’
Curiously, the best technological capabilities we have today come nowhere near the insight an artist can so effortlessly gain just by putting on a favourite benchmark record.
Working with tons of audio metadata from the LabROSA Million Song Dataset, Spotify, Last.fm, MusicBrainz and a dozen other sources, the question I sought to answer was: “If they can’t listen, how do the algorithms choose what to suggest?” A musician and programmer my whole life, I knew that cracking why they like what they like could let me tweak my music in their favour, even if only by an edge. I was hooked!
Over a period of some eight months I ran literally hundreds of tests across millions of data points, sorted, ranked and aligned by genre, by country and so on, using hundreds of mini Python scripts; during that time the art of benchmarking showed increasing confidence and accuracy in determining whether a song has ‘the goods’.
Quite simply, if a song keeps getting recommended, or keeps appearing in the majority of search results, one can’t argue with the fact that the music recommendation bot has taken a shine to it. This confirms that a powerful method of comparing music, even for robots, is to compare one song to another against a range of acceptable values, defined by the mean and standard deviation of each Audio Property being measured; taken as a collective analysis, this is descriptive enough even for a programmable bot to make crucial judgement calls. But this type of behaviour is no different from classical music training, during which all the Greats must be studied and perfectly covered too.
Of course, the way an algorithm listens is by formula, and we can express it quite easily: compare a song against the available metadata for a large enough group of songs already being suggested; tally up the means and standard deviations that describe the minimum and maximum ranges within which a song is liked by a group (who like genre x or y); then allow an acceptable margin for novelty, but not so much that the song is no longer recognisable within a very similar group.
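The formula above can be sketched in a few lines of Python. This is a minimal illustration of the idea, not Traktomizer's actual method: the feature names, the toy reference values and the novelty margin of 1.5 standard deviations are all assumptions chosen for the example.

```python
# Sketch: does a song fall within an acceptable range of a reference group?
# A song "fits" if every measured audio property lies within `margin`
# standard deviations of the group's mean for that property.
from statistics import mean, stdev

def fits_group(song, group, margin=1.5):
    """Return True if each feature of `song` is within `margin`
    standard deviations of the group mean for that feature."""
    for feature in song:
        values = [s[feature] for s in group]
        mu, sigma = mean(values), stdev(values)
        if abs(song[feature] - mu) > margin * sigma:
            return False
    return True

# Toy reference group of already-recommended songs
# (tempo in BPM, loudness in dB, danceability on a 0-1 scale).
group = [
    {"tempo": 120, "loudness": -7.0, "danceability": 0.70},
    {"tempo": 124, "loudness": -6.5, "danceability": 0.74},
    {"tempo": 118, "loudness": -8.0, "danceability": 0.66},
    {"tempo": 126, "loudness": -6.0, "danceability": 0.78},
]

candidate = {"tempo": 122, "loudness": -6.8, "danceability": 0.72}
outlier = {"tempo": 80, "loudness": -20.0, "danceability": 0.20}

print(fits_group(candidate, group))  # True: close to the group's means
print(fits_group(outlier, group))    # False: far outside every range
```

In practice the margin is the interesting dial: too tight and nothing novel passes, too loose and the song no longer resembles the group at all.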
While data alone is never a guarantee, whether in stock trading or, for that matter, betting on horses, it is useful more often than not (hence our unquenchable dependence upon it); and in the case of robots, which are entirely dependent upon data for making their decisions, it is clear that understanding that data might steer one to create a better version of a song – at least by contemporary standards.
Traktomizer is a prelude to the kinds of production tools we're going to be using more of alongside our Parametric EQs, Compressors and FX suites. Because it can pinpoint robo-sonic production inadequacies and propose a ranked list of the audio tweaks most likely to have a positive effect on a song, Traktomizer might well be the first communications bridge able to explain the habits of playlist-automation robots to artists.
Whether that's a good thing may already be settled, at least for Traktomizer's very first client, who used the program to analyse a song during the final stages of production and noted: “We were the first record label to buy Traktomizer and we put it to use on a premaster mix. Then within days of releasing, our song hit the 2016 Official European Independent Music Top 20, charting at #18. Now it feels like we can’t do without it. Well done!” – Susanna Lepianka, owner, DyNaMiK Records, Ireland.
Steve Freedom, Creator/Artist