This is not about AI; the author is mostly just pointing out that Spotify was not designed for classical music.
This is a product issue. Spotify DJ is essentially “shuffle with some voice interludes”. There’s probably some non-AI code in there to explicitly prevent it from playing an album end to end.
Besides, AI is not one thing. It’s weird to generalise “This beta spotify feature doesn’t serve me, hence AI is useless”. For example, when the author says “if it can’t do this, how could it compose music?”, that’s a category error.
Honestly the whole post and tone are just baffling. It’s mixing up all sorts of opinions and trying to put them under one umbrella, and about 50% of the text is just name dropping specific classical pieces.
I happen to agree that the Spotify DJ feature is terrible, but I think this is a very ineffective way of presenting the argument.
That's rather underselling him. Charles Petzold wrote the canonical reference works for programming Win32 and MFC.
It's like calling Donald Knuth a lecturer.
It’s a tour de force in technical communication. A fascinating book for both the Computer Science novice and expert.
> Microsoft provides two frameworks for developing Windows applications: MFC (Microsoft Foundation Classes) and Win32. MFC (Microsoft Foundation Classes) is a Microsoft framework for developing Windows applications in the C++ programming language. Win32 is a collection of functions and data structures provided by Microsoft for the development of Windows applications. [0]
[0] https://www.tutorialspoint.com/article/difference-between-mf...
Pascal, C, or C++, it depends what mood they were in.
Given the author's background I believe it's intentional ragebait. It's as ridiculous as saying an LLM can't count the number of Rs in a word, so it cannot generate grammatically correct sentences. No way he really thinks the logic is sound.
The product organization at Spotify is a master class in dysfunctional product organizations. Compare the feature parity of the desktop and mobile applications and you'll find random features available in one but not the other. Try to do basically anything in CarPlay other than select a different recently-played playlist and you'll be able to do it 10x faster by picking up your phone and doing it there.
That isn't really a category error. It's more begging the question. It makes the assumption that the ability to DJ music is the same ability as being able to compose music, and uses that assumption to suggest the conclusion that a failure to DJ classical movement would necessarily result in the failure to compose same. A category error would be assigning a property to AI that it cannot have. It would look more like, "if AI can't DJ music, we have no way to know what color it is."
Honestly a human DJ might well do what the Spotify DJ does — play a popular piece that matches the outlandish request and then transition to other music.
If I told the DJ at my wedding to play an album front to back, and they transitioned to Aerosmith, I'd be tapping a friend to run the music the rest of the night.
I'd expect any provider like Spotify to just export the reports Nielsen requires, not design their core systems around them.
And yet an awful lot of musicians are also DJs. It's almost like spending a lot of time playing music and watching how people react to it gives you a good sense of how the underlying processes of creating it can work.
Which is to say, he's doing a very good job of reminding you/us nerds that "there should be no excuse for this, technical or otherwise." The technology exists to make this work very well and has for some time; I GET why it's not working now, but that doesn't make it any less garbage.
If you had the slightest knowledge of classical music, you would know it should not be mixed like tracks in a DJ set, and you would not optimize your DJ algorithm for it.
But if you want to preserve the original composition of classical music, you have to play the track start to finish, preferably with a small pause between tracks as well.
Sure, that might not be what a DJ algorithm is optimized for, but a more generalized AI like an LLM should be able to figure that out.
Yet the computer program happily tried to do it anyway. It would be much better to fail with a clear error message than to try to proceed and emit garbage.
I've played the violin since I was a kid (only for fun now). I can find something I love about almost any musical genre and I'm sure I'm not the only one.
BT is a classically trained trance DJ: https://en.wikipedia.org/wiki/BT_(musician) and Armin van Buuren has classically trained parents
https://m.youtube.com/watch?v=6yFanGv_ReU
https://m.youtube.com/watch?v=S1YwlPH_o50
https://m.youtube.com/watch?v=j2fNloJAge0 (same chord progression as la folia https://m.youtube.com/watch?v=7v8zxoEoA_Q)
La folia itself has been "remixed" many times by both classical and modern composers https://en.wikipedia.org/wiki/Folia
https://en.wikipedia.org/wiki/Switched-On_Bach
And there was this fun disco version as well: https://en.wikipedia.org/wiki/A_Fifth_of_Beethoven
No one is required to like it. But the word 'hate' is a bit extreme, even in your example. Also, the group comprising "the classical music fans" is certain to include many who disagree with you.
Music.app is already better than Spotify at handling the relevant metadata. But the dedicated Apple Music Classical app is roughly the same as IDAGIO.
(They bought IDAGIO's former competitor Primephonic to do it)
For each composer, it shows all their well known works, and then you can tap on each to see all the recordings of that particular piece.
Smart move on Apple’s part, if you ask me.
And the play history integrates with the main Music app
This is a problem for classical (and jazz) for two reasons: a) these genres are not particularly popular on the platform, so there are few unique users, and b) the songs are LONG, so listening sessions contain fewer songs.
Track co-occurrence based recs work well for popular genres, but these other genres need a different approach, and that's actually where AI could do really well: digging into the unstructured data associated with the tracks (sonic analysis of the song, biographical information about the composer, details about featured soloists, etc.) rather than relying on piles of user behavior.
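A minimal sketch of what that content-based approach might look like: rank tracks by similarity of a per-track feature vector instead of by who else listened to them. The tracks, feature dimensions, and numbers here are all invented for illustration.

```python
import math

# Hypothetical feature vectors per track (e.g. tempo, sonic energy,
# "era" score). All names and values are invented.
tracks = {
    "Beethoven 7, II. Allegretto": [0.3, 0.4, 0.9],
    "Brahms 3, III. Poco allegretto": [0.35, 0.35, 0.85],
    "Some pop single": [0.9, 0.95, 0.1],
}

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def recommend(seed, k=1):
    """Rank the other tracks by feature similarity to the seed track."""
    seed_vec = tracks[seed]
    scored = [(cosine(seed_vec, v), name)
              for name, v in tracks.items() if name != seed]
    return [name for _, name in sorted(scored, reverse=True)[:k]]

print(recommend("Beethoven 7, II. Allegretto"))
```

The point is that this needs no listening history at all, so it degrades gracefully for niche genres where co-occurrence data is sparse.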
It’s just a different kind of DJ. Like how HipHop DJ is different from a trance DJ. And a wedding DJ is different too.
I missed that autocorrection earlier. Sorry
Atari Cambridge Research- part 5: David Levitt shows Music Box on his Lisp Machine
https://www.youtube.com/watch?v=ocwsVkqEKys
Atari Cambridge Research- part 6: Music box with Tom Trobaugh and drum machine with Jim Davis.
https://www.youtube.com/watch?v=DhA0FGsin_s
Cynthia Solomon has shared a treasure trove of rare classic videos of Seymour Papert, Marvin and Margaret Minsky, kids programming Logo and playing with turtles, and many other amazing things at the MIT AI Lab, MIT Media Lab, and Atari Cambridge Research:
Google’s Nest speakers have or had similar issues: they’d start any requested piece of (at least multi-movement) classical music somewhere in the middle and simply defy any instructions to start at the beginning, bizarre behaviour for a smart speaker.
Maybe Spotify works more off lyrics, and classical music usually doesn't have lyrics.
I'd love to have AI that could hear music.
>This is a product issue. Spotify DJ is essentially “shuffle with some voice interludes”. There’s probably some non-AI code in there to explicitly prevent it from playing an album end to end.
And I would argue that is one of the stupidest ways.
https://medium.com/luminasticity/the-complete-playlist-e8eb3...
The Complete Playlist argues for shuffling and serendipity as a way to achieve accidental surprise, delight, and clever juxtapositions, something that an actually competent DJ could guide rather than leave to chance.
A competent DJ makes musical arguments in relation to an aural environment in the same way a competent Philosopher may make intellectual arguments in relation to an environment of ideas.
Same thing I saw in AI-assisted coding. People complaining about how AI enabled some XYZ security risk: it's bad, it's crap. That could be true, but why ignore the fact that you can create a full-blown native Mac app with a single sentence? That should be good for at least a few things, right?
I haven't seen a single "AI evangelist" address any concerns and limitations, other than by "throw more AI at it" or "it will get better in 5 years, just in time for cold fusion".
> you create a full blown native Mac app, with a single sentence
Like they created a full blown C compiler that "could compile linux" but in reality didn't pass its own tests?
If you constantly cry wolf, no one's going to believe you when the wolf actually comes.
You see what you choose to focus on. I come across many people who are excited about the possibilities of AI-assisted coding, who are frustrated by its limitations, who share strategies for overcoming or avoiding those limitations, and so on. For a concrete and famous example, I would put Andrej Karpathy in this category. Where are you looking that you're not finding any of these people? LinkedIn?
For other things like when asking questions I won't just blindly copy what the LLM is suggesting. I'll often rewrite it in a style that best fits the style of the codebase I'm working on, or to better fit it into what I'm trying to achieve. Also, if I've asked it for how to do a specific one-line query and it has rewritten a whole chunk of code, I'll only make use of that one line, or specific fix/change. -- This also helps me to understand the response from the LLM.
I'll then do testing to make sure that the code is working correctly, with unit tests where relevant.
Can't help but click sometimes, always see the same arguments, so why not post the same thing as well?
By the way, I never check user names, I just reply to the post content.
You always have people at both sides of the aisle though - people who say it can do much more than it can, and people who say it can do much less.
It's the same with all technologies - robotics, crypto, drug discovery, the internet, digital cameras, quantum computing, 3D Television, self-driving cars - it was probably the same with the steam engine. All of these will have had people who said that the technology would be useless and die (e.g. Napoleon and the steam engine), and others that would have said it was totally transformative.
Pointing to people who hold extreme opinions 'for' a particular technology that are overly-bullish, and then dismissing the technology based on that, isn't a particularly good strategy in my opinion.
Who's "they"?
> If you constantly cry wolf, no one's going to believe you when the wolf actually comes.
Who's "you"?
You seem to believe all AI advocates are of the same hivemind and they somehow think and behave collectively. Have you considered that they might be different people with individual opinions and motivations?
AI is very good at some things and very bad at others. Early on, many thought chess would be one of the last things mastered by computers, but they were wrong. It makes no sense to take the statement "AI is extremely bad at this task compared to humans" and conclude that AI must be useless or a waste of time.
In this case, the AI DJ is bad at picking out classical music. Okay, sure, whatever. But that doesn't automatically mean the AI DJ is bad at everything.
> Like they created a full blown C compiler that "could compile linux" but in reality didn't pass its own tests?
You are strawmanning hard here. Who is "they"? You are putting all "AI evangelists" into the same blob, and instead of answering the questions at hand you ignore them and respond in an ad-hominem style by attacking a project that someone else made, completely unrelated to this entire thread. That is not good faith discourse! (Anthropic, IIRC?)
However, every post here that says the slightest thing positive about AI’s abilities is always met with “yeah well it can’t do my dishes for me so it’s total garbage!” BS.
You yourself are bringing up “making a compiler” out of nowhere. Nobody but you brought that up here. Yet you’re using it as the be-all end-all yard stick, simultaneously completely ignoring and completely proving the argument that you’re replying to.
It’s amazing how big a % of the developer community has started acting like intentionally unintelligent petulant children the moment they’re faced with an iota of the sort of job security risk they’ve been inflicting on others for decades. Some of you need to grow up.
I would guess it's for the same reasons that you're ignoring all the fixes necessary to get to an actual "full blown native Mac app". It's rarely a single sentence unless your app does something trivial like printing Hello World.
The example you described, no.
It is not good because its quality and adherence to the spec (the single sentence) is and will always be probabilistic...
Isn’t the same true for a lot of individual programmers and even teams?
Especially so if they were provided just a short one-sentence vision instead of proper documentation.
Sure, outsourcing is similar, but the difference is that one uses a process that is inherently probabilistic and will show up in every result, while the other just depends on the probability of you getting a good team.
In this context I suspect a SotA LLM could sometimes beat some cost-comparable UpWork professionals in both quality and spec adherence. In other words, if you need an app and can't do it yourself and have a tight budget, LLMs are quickly becoming a viable option for more and more complex apps (still only simple ones before it produces junk, but progress is appallingly fast).
I am not sure I want to keep paying for something that needs some amount of luck on my side to be useful. Writing elaborate plans for LLMs also feels a bit pointless when there is no hard and fast rule about how much of the plan will be followed.
Some people appear to be doing it, but I am afraid it is not something that will have universal appeal.
Isn't that a bit overblown? I just fired up Copilot in VSCode and typed in "make me a DAW plugin that will inject MIDI control changes into the track output" and it didn't even know where to start.
I listen to a lot of DJ mixes on YouTube (Hör Berlin is great, for example) and part of the appeal is what this particular DJ picks: what kind of music are they listening to in the country they’re from, how are they interpreting it, what are they mixing it with, etc. For some DJs there’s also kind of a personal visual brand, like musicians themselves.
The idea of an anonymous AI picking an optimized list of music kind of defeats the purpose.
While there has always been room for middle-of-the-road "content", there have also always been those that seek a higher value. I expect that segment to only grow.
To use the food analogy again: sure, if you just eat random things on the menu, you might find new foods that you enjoy. But it’ll be a much better experience if the chef / restaurant is introducing you to new foods in an intelligent way, not randomly or “We see you like chicken, so try this other chicken dish.”
> fitting the track into the set as a whole. It’s not a random music discovery process
There have been plenty of attempts to analyze music and to automate track matching, like the Music Genome Project (going back to '99), and while human DJs definitely have their place (I actually listen to lots of those), it's not inconceivable that a lot of modern music could also be mixed and matched automatically with at least half-decent (to a human) results.
P.S. found the article itself pretty funny - like a nerdy, methodical complaint, just funny to read
ISMIR: The International Society for Music Information Retrieval
Finding a path through the Jukebox: The Playlist Tutorial:
https://musicmachinery.com/2010/08/06/finding-a-path-through...
>Tutorial 4: Finding A Path Through The Jukebox -- The Playlist Tutorial. The simple playlist, in its many forms -- from the radio show, to the album, to the mixtape has long been a part of how people discover, listen to and share music. As the world of online music grows, the playlist is once again becoming a central tool to help listeners successfully experience music. Further, the playlist is increasingly a vehicle for recommendation and discovery of new or unknown music. More and more, commercial music services such as Pandora, Last.fm, iTunes and Spotify rely on the playlist to improve the listening experience. In this tutorial we look at the state of the art in playlisting. We present a brief history of the playlist, provide an overview of the different types of playlists and take an in-depth look at the state-of-the-art in automatic playlist generation including commercial and academic systems. We explore methods of evaluating playlists and ways that MIR techniques can be used to improve playlists. Our tutorial concludes with a discussion of what the future may hold for playlists and playlist generation/construction.
The Echo Nest:
https://en.wikipedia.org/wiki/The_Echo_Nest
Paul's blog:
And github repo:
If an AI would make interesting DJ mixes that aren’t merely collections of similar music, I think they’d need to be constructed in a totally different way.
I usually listen to dublab (Los Angeles, Cologne, and Barcelona) and NTS 1 (usually London) and NTS 2 (location rotates). They have 1 or 2 hour DJ sessions (live or recorded), and you hear some music that you normally wouldn't be exposed to; sometimes you hate it, but usually not.
The significant problem that AI faces in automatically curating something is that the input data is usually pretty terrible. It's based either on similarity of the thing being curated, which doesn't work because people don't want things to be too similar or too dissimilar; or on randomness, which doesn't work because it's too discordant; or on patterns in the data (people who listened to X listened to Y, so recommend Y to people who listen to X), which works, but only if the listener's taste aligns with the majority. If you introduce multiple sources of patterns in the data you quickly lose any variation and things stop being interesting.
This is a hard problem. No one has ever really solved it, despite Spotify, Netflix, YouTube, etc investing hundreds of millions into the space. Humans are probably just too fickle to accept that an algorithm can choose for us. It lacks the social proof that a tastemaker like a DJ brings.
I've had luck just using an LLM to chat about my tastes, what I like, what kinds of songs I want to discover... it does a good job and is also able to give me background.
So we are back to humans curating content, just with AI then doing the final search
I would love to try it; however, they would have to solve "global song availability" and "sponsored-songs-only stations".
But if they did try there is the chance of some niche communities forming.
It wouldn't even need to be live to begin with. A narrated playlist with a DJ voice-over and basic control functionality such as fading between songs.
Not trivial but doable and I wonder why they never tried that.
Right now NTS Guide to: Italian Library, Soundtracks and OSTS
I avoid ghetto music, but as long as it has classical, OSTs, calm jazz, trance, etc. :)
A bit turned off by a certain section but I can ignore it.
aha okay, I can search for "NTS Guide to" on Spotify and people have just made playlists! Great! thanks
Otherwise a few European radios, even if with ads, as a second goal is to keep my foreign language skills up to date.
Also a few lucky algorithm gems on YouTube, or the KEXP, Tiny Desk, ARTE Concerts, Colours channels.
Never got into Spotify.
The streaming app algorithms are bland as hell, built for people who just want noise in the background.
You have to do your own search and play, but some of the stuff by unknowns and famous artists giving back is profound. They KNOW when they hit it: all live, mostly acoustic, and all using musicians, no tape, no sequencers. Listen to one such performance, and maybe you don't need anything else for a week.
The term DJ is synonymous with modern, electronic music, anyway.
He didn't even say "classical", he was circumspect with "that moste illustriouse of musical traditionnes".
We get it, you like classical music and Spotify is a poor fit. That's... the article?
I follow several thousand composers and musicians. I then get daily playlist creation by crabhands.com of any new releases by those I follow. I then export the crabhand playlist into my own local database via exportify.net. I then create Spotify playlists of music I haven't heard that I may like as well as the released works I like best. Then I score the works I've listened to and feed that back into the system. So I get a deluge of new releases but play it in an organized fashion.
I learned Windows programming from his book, I'm sort of shocked he doesn't seem to have a base-level understanding of how transformer models work..
That, and instrumental music. He seems to believe the set of all music = pop songs + the Western classical tradition.
¯\_(ツ)_/¯
Australian/New Zealander detected lol
Sure, leans a bit classical and not the least bit "wanky" (at least IMHO)
> but the term was around long before it.
Wanky? DJ? Classical? Term Of His Natural Life? .. Regardless of the specific etymological chronology you're thinking of, I feel there are non-wanky examples in the broad tent of "classical".
It appears that Spotify's engines use a mix of these licenses to reduce costs. Since AI playlists aren't explicitly user-made selections, it's quite possible that the AI playlist generator is limited to a radio license model for playback, simply to save money (considering the additional cost of providing AI).
I really had to push to keep reading past this part.
But this piece doesn’t really say anything surprising anyway. Spotify isn’t for classical music. There are other services that are.
> Am I naïve in expecting Artificial Intelligence to be smart? Is my interpretation of the word “intelligence” too literal? And when an AI behaves stupidly, who’s to blame? The programmers or the AI entity itself? Is it even proper to make a distinction between the two? Or does the AI work in so mysterious a way that the programmers need no longer take responsibility?
IMO this is a programming/prompting failure - not a failure in the general capability of 'AI'.
We can prove that an AI can understand this with a basic prompt:
https://chatgpt.com/share/69b67906-0e18-8012-9123-718fc6422c...
This is a minimal base prompt, with no fine-tuning, with the same user prompt, which shows that an AI will respond correctly by default. Presumably either the AI they are using is a weak model, or their prompt is encouraging the model against this (e.g. maybe the prompt says 'return one song based on the suggestion, and then songs from similar artists after')
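To make the "base prompt plus user prompt" point concrete, here is a sketch of how such a product prompt might be assembled. Every string here is invented; we have no visibility into Spotify's actual prompt, and the point is only that the hidden system half can steer the model away from a perfectly clear user request.

```python
# Hypothetical illustration only: none of this text is Spotify's.
# A hidden base (system) prompt like this would explain "one movement,
# then similar artists" behaviour even with a flawless user request.
base_prompt = (
    "You are an upbeat radio DJ. Play at most ONE track per artist, "
    "then move on to tracks from similar artists."
)
user_prompt = "Play Beethoven's Symphony No. 7, all four movements, in order."

# The model sees the concatenation; the user never sees, and cannot
# countermand, the system half.
messages = [
    {"role": "system", "content": base_prompt},
    {"role": "user", "content": user_prompt},
]
assert messages[0]["role"] == "system"
```

If the system instruction and the user instruction conflict, most chat models are trained to weight the system side, which is consistent with the behaviour the article describes.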
> I’ve heard people claim that an AI can compose music. But how can that be when it can’t even grasp basic concepts in music?
Trying to infer the underlying capability of AI to generate music from a badly-prompted Spotify DJ feature is always going to have its limits. The proof of 'can AI compose music' will be in the eating of the pudding. AI models have already been able to compose classical music to some extent, and can grasp music theory, so after this point it's just going to be a matter of quality/taste.
What are the chances there is or will be a prompt to direct listeners from artists with higher royalties to those with basic fees?
If I were an MBA, this would absolutely be a direction I would take.
Also, might I recommend looking at the way the world is, not the way the world might be. This is one of the ugly tendrils this disgusting AI industry is putting into everything, bringing ruin to the world. This is the actual reality of it, making the world a dumber, less interesting place.
I'm shifting 'blame' to Spotify, rather than the user or the AI model - although blame is probably a pretty strong word anyway for what is probably just supposed to be a fun DJ feature.
> All the prompts were crystal clear.
We don't know what the prompt is, because the FULL prompt will be a combination of the base prompt plus the user prompt. It's trivial to show that a modern model with a minimal base prompt will return correctly (as per my original post), so IMO there is probably something in the base prompt which is encouraging the model to return differently.
I wanted to clarify the first two points, but I'll not respond to the rest of your comment as it's a bit overly emotive (calling what I say disgusting, rambling about the downfall of society as a whole, etc.).
Spotify are currently making a big deal about not writing any code - I attended a webinar this week where one of the slides proudly trumpeted this fact:
“0 lines of code
Spotify's best engineers have not written a line of code since December.”
Bunch of clowns coasting on their moat instead of building an actually good product.
Users are often to blame in many varied cases, and there should be no taboo around discussing this. I think some people hear that you should never blame rape victims for rape and then run wild trying to apply that as a general principle of never blaming anybody who is in any way a victim of anything, even when the "victimhood" is simply some piece of trivial software not working well. But we're not talking about rape, so your intense rejection ("disgusting") is completely off the mark.
Every time someone calls an LLM "AI", their brain faults a little more.
This is the profession of marketing's greatest success: inflicting so much damage on the rest of the world.
Pandora was worthless, though, because of their skip limit (even in the paid version). Even with its effectiveness, it would still feed me junk.
This guy is a classical music guy, though, and all the pickers suck, for that. Classical has been treated badly, forever. I am extremely disappointed that Apple segregated classical into its own app, because I have always enjoyed mixing it in with my regular music.
One thing about classical music, is that every performance is a “cover.” Who performs the piece is just as important as who wrote it. None of the selection services seem to understand that.
MP3 tags are pretty much worthless. They are incredibly limited, and I don’t know why they have never been improved.
You have to classify every title as one type.
How would we classify Zappa, or Secret Chiefs 3? Are they jazz, alternative (a worthless category), rock, pop, heavy metal, comedy? Depending on what you listen to, it could be any one of them. Also, each song could be in multiple categories. Boz Skaggs was known for disco-style pop, but he was an outstanding blues performer, and many of his songs reflect that mix.
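The single-genre limitation comes straight from the layout of the classic ID3v1 tag: a fixed 128-byte trailer with exactly one genre byte indexing a fixed numbered list (8 really is "Jazz" in that list). A sketch of packing one, with invented track metadata:

```python
import struct

# ID3v1 layout: "TAG" + title(30) + artist(30) + album(30) + year(4)
# + comment(30) + ONE genre byte = 128 bytes. One numbered genre per
# file, which is why mixed-genre artists fit so badly.
def make_id3v1(title, artist, album, year, comment, genre_id):
    def pad(s, n):
        # Truncate/pad each text field to its fixed width.
        return s.encode("latin-1")[:n].ljust(n, b"\x00")
    return (b"TAG" + pad(title, 30) + pad(artist, 30) + pad(album, 30)
            + pad(year, 4) + pad(comment, 30) + struct.pack("B", genre_id))

# Forced to pick a single genre for Zappa: 8 = Jazz. Says nothing
# about the rock, comedy, or orchestral sides of the same catalog.
tag = make_id3v1("Peaches en Regalia", "Frank Zappa", "Hot Rats",
                 "1969", "", 8)
assert len(tag) == 128
```

ID3v2 later relaxed this somewhat (its genre frame is free text and can hold more than one value), but the one-genre-per-title mindset stuck in most software and industry pipelines.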
This is really a music industry problem, and software just reflects that. The bug is really in the Requirements phase.
In my interactions with distributors, it seems streaming services tend to support up to two genre classifications; though they're pretty outdated and general (even more general and dated than the Winamp genre list). I don't think they use the metadata presented much in the classification; in fact Spotify does its own estimation of 'energy' and other subjective emotions using various classifier algorithms.
It's not wrong, it's just not what I like
I don't really use Spotify so I can't compare but Pandora was awesome. I've found Youtube playlists to be the best replacement so far.
(1930s) https://www.youtube.com/watch?v=cHLbaOLWjpc
(2025) https://www.youtube.com/watch?v=O8hPNPInXh0
(2024) https://www.youtube.com/watch?v=Ef-c-9G52jU
(2015) https://www.youtube.com/watch?v=7C1EDbkl2CU
But this is the problem. Before, people were naturally exposed to classical and jazz and OSTs etc.
This is the problem with current society.
You do realize Pandora is still there, right?
(Obviously you could VPN in, but it's a meaningful hassle.)
The coverage I could find suggested that they were available in three total countries: the USA, Australia, and New Zealand.
So there was no point at which they locked out "international audiences". They had a branch in Australia, and they closed it.
In 2007 the copyright lawyers caught up with them and they locked it to US IP addresses: https://mashable.com/archive/pandora-international
Australia-specific operations were 2012-2017.
It's very hit-or-miss; you need to be willing to thumb down 95% of what Pandora thinks you'll like. But with enough care, it's a good discovery channel.
But somehow, probably from a combination of rights owners gaming it and Spotify gaming it, DW is a pale shadow of its former self.
I have a few other experimental features in the pipeline that will expand the music selection, but they are not there yet.
Personally I dropped playlists long ago for YouTube dj sets which are a million times better than Spotify’s AI dj. Some of this is not a tech failing but the DJs have access to unreleased tracks, their own private edits, and are more willing to do more bold things. The AI DJ will never drop a surprise change that makes the crowd scream.
I wish more people would ask themselves those questions.
Sadly Charles himself didn't appear to conclude that yes, it's naïve to expect AI to be "smart" (whatever that means) and yes, he and many other people get hung up on the word "intelligence" in AI, a field that's been called that since the 1950s.
Classical is a harder (or at least different) problem and it's why specialist apps like Apple Music Classical exist.
Classical isn’t harder. It’s just so niche that leadership at Spotify never bothered. It has a whole different taxonomy; it’s composer-based, not performer-based, etc.
Spotify isn’t against new taxonomies outside of western pop music. The India launch, where ragas were super important, shows it. But the Indian market is vastly larger than the small (albeit loud) number of classical music enthusiasts.
All that is to say, it’s a business decision, not a tech or AI problem.
(I really like classical music, too, btw., so please don’t read this as me not respecting that user base.)
Like a full Michael Nyman concert? -
* https://www.youtube.com/watch?v=t3KgZlxTz8g
* https://www.youtube.com/watch?v=FHVU3UlLHRg
Philip Glass's four-hour-long Music in Twelve Parts (1971–1974)?
Disclaimer: I'm a massive fan of the six-hour extended cut of Cage's 4'33"
For someone that explicitly states:
> I don’t listen to pop songs. I prefer music of the 500-year tradition (...)
And who apparently wants to stream music, it is wild that he's not subscribed to Apple Music Classical, which circumvents exactly the complaints in this article...
I’ve been wondering if AI could be used to compose a set that rivals real DJs, but it seems like a difficult problem. First it needs to select tracks that fit well together, and stitch them together to ramp up and ramp down energy over time. Then it needs to layer the tracks, which requires an intuition for what sounds good and I’m not sure can be done algorithmically. It also needs to do engaging transitions which are appropriate for the moment - also difficult.
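Of those steps, the first (selecting and ordering tracks to shape energy over time) is the most tractable to automate. A greedy sketch, with invented track names and energy scores, that fills each slot of a target "build up, then cool down" curve:

```python
# Invented tracks with 0..1 "energy" scores; a real system would also
# need key and tempo compatibility, which this deliberately ignores.
tracks = {"Opener": 0.2, "Builder": 0.5, "Peak Anthem": 0.9,
          "Second Wind": 0.7, "Comedown": 0.3}

# Desired energy per slot: ramp up to a peak, then come down.
target_curve = [0.2, 0.5, 0.9, 0.7, 0.3]

def plan_set(tracks, curve):
    """Greedily assign the unused track closest in energy to each slot."""
    remaining = dict(tracks)
    order = []
    for target in curve:
        name = min(remaining, key=lambda t: abs(remaining[t] - target))
        order.append(name)
        del remaining[name]
    return order

print(plan_set(tracks, target_curve))
```

The layering and transition work the comment mentions is the genuinely hard part; this only addresses the arc of the set, not how one track blends into the next.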
We're told it's better than people at selecting songs (e.g. has the combined wisdom of all music and music experts), and basic requests like "play the first movement of Beethoven's 7th" don't sound hard for an average person with limited / no musical expertise. If I said "please play the entire 7th symphony", and the tool responded with "sure, I'll play the whole thing", then proceeded to play the Beatles, I'd say that's a fair thing to point out as a shortcoming.
Those limits are only obvious to tech people who understand that the technology only works well in areas with abundant high-quality data and labels, and can't be expected to reason like a person in many cases; to them, the limits seem as obvious as the difference between a hammer and a screwdriver. And given how Spotify developed these models, they probably didn't really intend or test classical, so it fails despite sounding confident.
But maybe we should stop advertising screwdrivers as universal intelligence? There's a lot of motte and bailey going on. When AI makes mistakes it's "just tools, stop expecting intelligence." However, when people question the AI hype it's "humans make mistakes too, LLMs are truly reasoning and better than most humans already," and "the entire labor economy will be replaced, human DJs will cease to exist."
Which I really should have anticipated since I generally dislike music radio "DJ"s too and Spotify's AI DJ is trying to be like one.
In particular it would do things like start playing tracks with no bearing on anything I'd ever listened to, like local South African music which is very far from universally preferred here. I also got the feeling it was pushing "promoted" tracks with little regard to what I would likely like, just like real life radio stations.
I also don't care to have some voice interrupting the music all the time.
I was hoping it would be like their other "radio"s, but more explorative, finding more "similar" tracks to what I have listened to without getting stuck in a repeating playlist.
I suppose it's a cool gimmick for people who prefer the broadcast radio experience.
Even between people with the same cultural background, music tastes can vary wildly.
But for classical music: Apple Music Classical is where it’s at, it understands the relationship between composer, work and recording.
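As an illustration of what "understands the relationship" could mean structurally, here is a hypothetical sketch (not Apple's actual schema) of the composer/work/movement/recording hierarchy that flat artist-and-song metadata can't express:

```python
# Illustrative only: classical catalog entries need a work/movement/recording
# hierarchy. The schema and example recording details are for demonstration.
from dataclasses import dataclass, field

@dataclass
class Movement:
    number: int
    title: str              # e.g. "II. Allegretto"

@dataclass
class Work:
    composer: str
    title: str              # e.g. "Symphony No. 7 in A major, Op. 92"
    movements: list = field(default_factory=list)

@dataclass
class Recording:
    work: Work
    performers: list        # orchestra, conductor, soloists
    year: int

beethoven_7 = Work(
    composer="Ludwig van Beethoven",
    title="Symphony No. 7 in A major, Op. 92",
    movements=[
        Movement(1, "I. Poco sostenuto – Vivace"),
        Movement(2, "II. Allegretto"),
        Movement(3, "III. Presto"),
        Movement(4, "IV. Allegro con brio"),
    ],
)
kleiber = Recording(beethoven_7, ["Vienna Philharmonic", "Carlos Kleiber"], 1976)

# "Play the entire 7th" then has a precise meaning: every movement of one
# chosen recording, in order.
queue = [(kleiber, m) for m in kleiber.work.movements]
```

With only flat "artist / song" rows, "the first movement of Beethoven's 7th" is just a fuzzy string match, which is exactly where the DJ falls over.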
Mine will randomly go "Here's some driving tunes" while I'm sitting still, or introduce me to "The Kings of Leon" when I already have some of their songs liked and in my playlists. It's pretty clear there's not a lot of my data in the input context, or it hasn't been improved over time.
It shows up in all Spotify-generated playlists, so I refuse to listen to them. I assume their shitty AI recommendations are similarly filled with cancer.
Some things just aren’t meant for shuffle, and genres that haven’t been properly digitized are definitely among them.
Presumably a pop DJ would also mess this up. It's like going to an Indian restaurant and asking what Dim Sum they recommend.
The only reason a human would be able to do this task is that they might be trained in how to find classical music, and they have spent some time learning what is what in that world.
But a Spotify AI is of course going to be trained on the prevailing classification system only.
> The use of the word “song” for instrumental music — that is, music that is not sung and hence is not a song — is borderline illiterate.
This guy comes across as incredibly obnoxious. It's shit like this that gives classical music a bad rap as stuffy and unapproachable.
But yes, Spotify and the like are terrible for classical music. Apple Music has a separate app for this, which does a pretty good job and addresses most of these complaints.
There are apps specifically dedicated to classical music and there are many youtube channels for classical music, with sheet music[1], with visualizations[2], with videos of concerts.
Spotify and its drop-in competitors were never good for classical music. This article is just another rant on the issue, by someone to whom classical music is so important, a pillar of western civilization, yet not important enough to look for other ways to listen to it.
Why do people who hate AI think that every use of the term AI is referring to the exact same software program?
Instead they're just thin veils around paid-promotion.
Perhaps file a ticket for the devs and go back to listening to the albums without AI
The way things are going I'm not sure how much longer we will have that option.
> [list of 20 classical artists] I’m aware that many people are unfamiliar with this musical tradition, but it forms one of the sturdiest pillars of what we casually refer to as “western civilization.”
> Unfortunately, this tradition is not much respected
> The use of the word “song” for instrumental music — that is, music that is not sung and hence is not a song — is borderline illiterate.
This writeup is insufferably pretentious. It almost reads like a caricature of someone that listens to classical.
Prompted playlists are a beta feature designed to cater to most users. It's likely a heavily quantized model, fine-tuned on common use cases.
Is it really surprising that it doesn't cater to the fringes of Spotify's user base from the get-go?
Clearly the author believes that their taste in music is the superior one, and so Spotify not designing their product experience around their tastes is "appalling."
Then you get absurd rants like this:
> I’ve heard people claim that an AI can compose music. But how can that be when it can’t even grasp basic concepts in music?
Almost like these are two completely separate models, in two completely separate products.
That being said, Spotify is probably not the best product if you listen to classical. If classical were all I listened to, I would probably still have an offline collection in a Media Monkey library as my main source of listening.
Real DJs don't follow playlists. They work within constraints — energy, tempo, crowd — and let the set emerge. Better boundaries, not more rules.
Call me when Spotify and YT collaborate with Deezer on labelling AI music as such. Yes, it's a nuanced concept, but the soup YT was serving me was extremely obvious, which was easily confirmed by checking the throughput of the "artists".
Snobbery sniping aside, I empathize with their sentiment, and their work was worth reading. Spotify’s whole UI is far too complicated and I wish they would go the Facebook route of breaking out the separate products into separate apps. Jumbling podcasts, pop music, and covers — sorry, classical music — is a bit weird.
Isn't it the job of a DJ to pick a good recording? Petzold's test seems reasonable to me. As a classical listener, if I want a specific recording I'll just play that recording. The main function of the DJ is music discovery. Perhaps they know of good recordings I haven't already heard.
My grandfather was a typesetter and print designer. My other grandfather was part of Gill’s circle and his bookplate was inscribed by him. My first and only kickstarter in which I participated was Linotype: The Movie. I am currently reading Jury’s Type Designers of the Twentieth Century. I also have Peace’s catalogue of Gill’s inscriptions on my desk. Justin Knopp from Typoretum set my personal card from his digitized collection of rare founts. I’m interested in type and page design and I do like em dashes.
But I also just really like iOS’s automatic replacement of 2x hyphens with a dash.
Songza was able to do this properly years ago, customizing playlists based on your mood, but Spotify just doesn't get it.
I wouldn't be surprised if creating a truly great AI DJ were also hindered by these kinds of legal shackles.
Doesn’t that sound ridiculous?
Since I switched from Spotify to Apple Music, Apple’s recommendations have been lousy and I miss discovering new artists and songs. Several of my cult favorites were Spotify suggestions I never would have found otherwise.
Are there any good recommendation engines, or people mostly just use Spotify for that?
I’d be sad if I had to switch back to Spotify but it is what it is.
The Echo Nest was one of the most interesting music-tech companies ever built: a music intelligence platform spun out of MIT that analyzed audio, metadata, web text, artist similarity, genre structure, and playlist construction. Spotify bought them in 2014 specifically to strengthen music discovery and recommendation. At the time, Spotify said the deal would let it use The Echo Nest's "in depth musical understanding and tools for curation", and even said the Echo Nest API would remain "free and open" for developers.
https://en.wikipedia.org/wiki/The_Echo_Nest
https://news.cision.com/spotify/r/spotify-acquires-the-echo-...
If you ever used the old Echo Nest APIs, Remix SDK, demos, Music Hack Day projects, or Paul Lamere's experiments, that was a golden era. Echo Nest had open APIs for artist similarity, track analysis, playlisting, "taste profiles", ID mapping across services, and beat/segment-level music analysis. Paul Lamere's whole ecosystem of demos came out of that world: Boil the Frog, Sort Your Music, Organize Your Music, playlistminer, and later Smarter Playlists. His GitHub still points to a lot of that lineage, and his blog is still active. In fact, he posted just this month about rebuilding Smarter Playlists after ten years of use.
The sad part is that the open developer platform mostly did not survive the acquisition. By 2016, developers were being told that the Echo Nest API would stop issuing new keys and then stop serving requests, with migration to Spotify’s API instead. Community discussions at the time also noted that some Echo Nest capabilities, especially things like Rosetta-style cross-service mapping, were not really carried over.
https://github.com/beetbox/beets/issues/1920
That's also why Spotify's current AI DJ is so frustrating. The problem is that "AI DJ" is not the same thing as a system that deeply understands musical structure, discography semantics, performance history, or classical work/movement hierarchy. It's a recommendation + narration layer, not a true MIR-native musical intelligence system.
If you're interested in the research side of this field, the conference is ISMIR: the International Society for Music Information Retrieval, which is literally dedicated to computational tools for processing, searching, organizing, and accessing music-related data. That community is still very active. The ISMIR site describes MIR exactly in those terms, and the 2010 Utrecht conference included Paul Lamere's tutorial, "Finding A Path Through The Jukebox -- The Playlist Tutorial."
https://news.ycombinator.com/item?id=36482468
>gffrd on June 26, 2023, on: Show HN: Mofi – Content-aware fill for audio to ch...
>Yes! It was "Infinite Jukebox," created by Paul Lamere ... it was awesome because it would analyse a track, then visualize its "components" and you could watch as the new "infinite" track looped back on itself and jumped from point-to-point in the original track to create an everlasting one. He created some excellent products from the Rdio API, and later Spotify ... and I believe his analysis engine ended up being the foundation upon which Spotify's _play more tracks like these_ capability is based.
>Looks like he's moved over to publish on Substack -- there's a recent(ish) post reflecting on 10 years of Infinite Jukebox:
https://musicmachinery.substack.com/p/the-infinite-jukebox-1...
>rahimnathwani on June 26, 2023
>However, that wasn't the end of the Infinite Jukebox. An enterprising developer: Izzy Dahanela made her own hack on top of mine. To make it work without using uploaded content, she matches up the Echo Nest / Spotify music analysis with the corresponding song on YouTube. She hosts this at eternalbox.dev. It runs just as well as it ever did, 10 years later.
>DonHopkins on June 28, 2023, on: Show HN: Mofi – Content-aware fill for audio to ch...
>I was working on some music retrieval stuff in 2010, so I joined the EchoNest developer program and played around with their web apis that let you upload music and download an analysis that you could use in all kinds of cool ways. They had an SDK with some great demos and example code. I discussed it with Eric Swenson and Paul Lamere, and had the chance to hang out with Paul Lamere and Ben Fields at ISMIR 2010 (the International Society for Music Information Retrieval conference) in Utrecht, where they gave a tutorial about playlisting:
https://ismir2010.ismir.net/program/tutorials/index.html#tut...
Finding a path through the Jukebox: The Playlist Tutorial:
https://musicmachinery.com/2010/08/06/finding-a-path-through...
>Tutorial 4: Finding A Path Through The Jukebox -- The Playlist Tutorial. The simple playlist, in its many forms -- from the radio show, to the album, to the mixtape has long been a part of how people discover, listen to and share music. As the world of online music grows, the playlist is once again becoming a central tool to help listeners successfully experience music. Further, the playlist is increasingly a vehicle for recommendation and discovery of new or unknown music. More and more, commercial music services such as Pandora, Last.fm, iTunes and Spotify rely on the playlist to improve the listening experience. In this tutorial we look at the state of the art in playlisting. We present a brief history of the playlist, provide an overview of the different types of playlists and take an in-depth look at the state-of-the-art in automatic playlist generation including commercial and academic systems. We explore methods of evaluating playlists and ways that MIR techniques can be used to improve playlists. Our tutorial concludes with a discussion of what the future may hold for playlists and playlist generation/construction.
>[...]
Some of the most interesting Echo Nest descendants are still around in one form or another. Paul Lamere's current/public projects include Smarter Playlists, and his GitHub still highlights SortYourMusic, OrganizeYourMusic, playlistminer, and BoilTheFrog. Glenn McDonald’s Every Noise at Once is another major descendant of that tradition: an enormous map of music genre space. Glenn's own site still describes it as an "inexorably expanding universe of music-processing experiments", and the public genre pages now explicitly say they're a long-running snapshot based on Spotify data through 2023-11-19. After Spotify's layoffs in 2023, TechCrunch reported that Glenn lost access to the internal data needed to keep Every Noise fully updated, which is why it now feels more archival than alive.
Back in 1998 when I was working on The Sims 1, I proposed in my review of the design document something I called "Moody Music": essentially a soundtrack plus a synchronized semantic/emotional control track that could affect gameplay over time. The idea was that music wouldn't just decorate the simulation; it would change it: influencing mood, motives, relationships, skills, timing, and even triggering events at specific musical moments. I wrote that up in my review of the 1998-08-07 Sims design document, along with the broader idea of letting the game recognize a player's own CDs and fetch associated "moody tracks" from the network.
Don’s review of The Sims Design Document, Draft 3 – 8/7/98:
https://donhopkins.com/home/TheSims/TheSimsDesignDocumentDra...
>I have some ideas about how the music could effect the game, that I will write up more completely later. In a nutshell, the people in the house could have a cd or record collection to choose from, each record an object that has the sound (audio wave and/or midi) and a “moody” track synchronized with the music. Playing the music also plays the moods into the environment that the people pick up on. Music can subtly effect how people react to the environment, objects, and each other. It can effect their motives and even their skills temporarily. For example, you might be able to clean the house better and faster if you put on some up tempo bouncy music. The player should be able to assume the role of disc jockey on the radio, and play from another larger library of music and commercials, that effect the peoples moods and buying habits. The TV of course is another source of mood altering temporal media, with commercials and shows that should effect different people differently. But the most important part of this idea is instead of the game effecting the music that’s played, the music effects how the game plays! The ultimate way for the user to effect the game via music, is to insert one of their own CD’s into their real computer’s CDROM drive, and the game would recognize it, and start playing it (maybe with a simple cd player interface to select the song). There could be a database associating the unique ID number of the CD with a table of contents and “moody” tracks that tell how the song effects the peoples emotions over time, with "percussion" events at dramatic moments of the music that can trigger arbitrary events in the game (like provoking a fight that was brewing, or triggering an orgasm at just the right place in the song). We hire monkeys to listen to well known CD’s, and enter time synchronized tracks with semantic meanings in Max (like note tracks, and user defined numeric tracks) or some other timeline editing tool). 
>Put the database up on the web for instant retrieval, so when somebody sticks in a new CD, it downloads our “moody” tracks that go with it, and it starts playing and effecting their game! Streaming emotions over the net! Eventually there should be an end-user tool so people can record their own responses to music as moody tracks they can use in our games. This mechanism could be used in all kinds of games, to varying degrees of effect. I’m not saying that music should be the only way to control the game – it’s more like a subtle background effect, but there certainly could be a scenario where you try to accomplish some task (like taming a wild beast) by using only your musical taste and timing. The real bottom line benefit is that you get to listen to your OWN cd collection of music you want to hear, instead of being driven crazy by the repetitive music bundled with the game.
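The quoted "moody track" idea can be sketched as a timeline of control events synchronized with playback, polled by the game loop; all names and numbers below are invented for illustration and are not from any actual Sims code:

```python
# Hypothetical sketch of a "moody track": (timestamp_seconds, effect) events
# synchronized with a song. Each frame, the game asks which events fired
# since the last poll and applies them to the simulation.

moody_track = [
    (0.0,  {"mood_delta": +0.1}),                   # gentle intro
    (45.0, {"mood_delta": +0.3, "tempo": "up"}),    # chorus lifts energy
    (90.0, {"percussion_event": "provoke_fight"}),  # dramatic hit triggers a game event
]

def effects_between(track, t_prev, t_now):
    """Return control events whose timestamps fall in (t_prev, t_now]."""
    return [effect for ts, effect in track if t_prev < ts <= t_now]

# The game loop polls with the previous and current playback positions:
fired = effects_between(moody_track, 44.0, 91.0)
```

The key property is that the music drives the simulation rather than the other way around: the same polling interface works whether the track was hand-authored or downloaded from a shared database keyed on the CD's ID.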
In hindsight it was quite adjacent to MIR, affective computing, adaptive soundtrack systems, and some of the ambitions that Echo Nest represented. That's why I was so excited about The Echo Nest in 2010 when I was working with Will Wright at the Stupid Fun Club on a music spatial organization and navigation system called MediaGraph.
MediaGraph Music Navigation with Pie Menus Prototype developed for Will Wright's Stupid Fun Club
https://www.youtube.com/watch?v=2KfeHNIXYUc
>This is a demo of a user interface research prototype that I developed for Will Wright at the Stupid Fun Club. It includes pie menus, an editable map of music interconnected with roads, and cellular automata.
>It uses one kind of nested hierarchical pie menu to build and edit another kind of geographic networked pie menu.
2. This author is truly insufferable and arrogant.
3. Apple Music Classical exists.