I'm at the stage where sometimes I make something that sounds good (to me) but I know it requires work (in the "not fun" sense) to finish it and even then, it will likely never be appreciated by anyone but myself.
Which isn't a problem if the process itself is joyful, but I have to admit I've always struggled to enjoy anything that doesn't involve other people in some way (shared goal or approval of some form).
None of these problems are "new", but I feel like AI is making this question of "why do it" or "what is worth doing" even more urgent. Kind of wondering how others are affected by all this, if at all.
During the pandemic, a friend and I decided to make a record together. We labored over it for almost two years and finally “released it” on bandcamp to very little fanfare.
A few friends and family had nice things to say, and one random stranger reached out with positive feedback.
I get a monthly stream report from bandcamp, and it almost always says zero.
I am so pleased with this project and have such great memories of making the album that I had two lathe cut vinyl copies made (one for me, and one for my friend).
I put a big part of myself into the project and was able to convey ideas and feelings that I couldn’t express effectively via other methods.
I listen to the recording about once a year. It’s a part of me now, and I couldn’t be happier with my journey in making it.
To me, this is the purpose of the creative journey. Knowing yourself better, and enjoying all of the steps involved in arriving at what is always a surprising destination.
If someone else feels something as a result of your work, that’s a nice bonus, but not something I focus on at all.
> Why do I want to make music?
I picked up a basic DJ controller and a MIDI controller bundled with Ableton. I'm a novice, but I love listening to music and dissecting what makes a good performance. I crave that feeling of getting chills when I find something new that moves me. This set was a pretty recent example:
https://www.youtube.com/watch?v=gfF8jzBVWvM
That being said, the world is increasingly crowded with "good enough" music.
I resolved early on that I was never going to make any money doing this, which simplified things greatly. There's a primal part of our brain that craves adoration. I do wish for others to adore my music, even if it's a handful of people. I do wish to perform publicly one day, even if it's at a park for passersby.
Mostly I just want something to move my brain in different ways. I want to create something beautiful.
It's fun.
That's its own reason. Even before AI, you statistically would never ever ever make money.
Not only that, but legions of scam artists want to rip you off in some manner: 'Cool music! For $400 I can get you listeners.'
If your goal is being heard and appreciated, well, you better reconsider.
If you're doing it for your own pleasure and pure love of art, absolutely do go on, without any expectations. It may or may not take off, but the samurai must not care.
In the past, learning a skill and doing something with it was mostly for pleasure, and something that would stay within your inner circle of friends. Maybe one of your friends would tell his other group of friends, but that would be it.
Now the internet has given us the opportunity to reach the whole world, and that changed the expectations.
Artists wanted to "be heard and appreciated" since they started banging on goat skin drums and painting on cave walls...
I think that in the past it was just a lot more difficult to *not* be heard or appreciated at all.
The OP never said this. You and a few other commenters seem to think being heard == being an influencer that’s in it for the money.
> and something that would stay in your inner circle of friends. Maybe one of your friends would tell his other group of friends but that would be it.
That’s what being heard and appreciated is.
It has been a tremendously rewarding journey to create new music and see myself improve. 10/10 would do again.
Definitely recommend to OP to explore the modern warrior philosophy drawing from bushido.
For me it is beyond trying to make money or become famous, it is simply to enjoy the journey and the creativity that comes with creating music.
To clarify, when I speak of "approval", I'm not imagining a successful career or financial success. It's much more basic, i.e. having a few people tell me they genuinely like something I created would do that.
> it is simply to enjoy the journey and the creativity that comes with creating music.
It's unfortunately not simple for me (again, context of long term burnout / depression etc). If I only go by enjoyment, I will watch TV and maybe read and go on bike rides until the end of my days. But that is not fulfilling in the long term. I have a creative drive, but it's rather intermittent and not enough to consistently want to do the work involved. I'm trying to nurture it.
You can scratch the same music-exploration itch with a much lower time commitment and get the same thrill of accomplishment as you improve. There is also a built-in crowd of other players at any skill level that you can share your achievements with.
It's not the same glassy-eyed state as you'd get with normal video games, TV, or doom scrolling at all. You will need to focus and clear the mind.
But actually creating music or playing an instrument is much more rewarding. The time commitment is part of it, the journey is the destination and all that.
I bet a lot of accountants in the old days were really good at basic math and proud of being fast and accurate. Now there are calculators, and the number of people who work on mental math just for the love of the game is probably tiny compared to when it was a core skill of many people's jobs.
So do not become discouraged by the machine generated sounds. They are only sounds, not a message.
> Artists? Pencil laborers, more like.. I am in favor of using AI in visuals. It will eliminate a lot of mere decorators, and won’t even slightly affect the artists. I hope AI as a technology has the same effect on the world of ART as the invention of photography had: it got rid of a lot of empty landscape copiers. Impressionism was born shortly after that. See, I believe many cursed photography, but Monet never saw it as a problem.
What it will do is spoil it for the 99% of small-time artists who still managed to make a living out of it. It will drown the audience in slop and make them care even less about new releases.
That "artists" idea is that the Monets will survive, but art is not a rat race where only the Monets need apply. A healthy art scene needs all kinds of creators, at different levels, and needs to be able to sustain a decent above-average-quality chunk of them. Not just the Monets.
Not to mention it seems like the pretentious artist in the discussion sees themselves as some outlier Monet type that will be fine.
I like this analogy. Maybe this time around is different, but I like to hope it's not.
Even before generative AI, there was a long-running debate in audio circles around simulated guitar amplifiers. The truth is, the simulations have gotten so insanely good that you could simply purchase an all-in-one pedalboard and have basically all of guitar history at your toes.
My rule-of-thumb is this: "does this tool I'm using in particular take away from the authenticity of my performance or songwriting?" Example: I am very keen on performing vocals and guitar at the same time, I don't have an expensive studio setup, and my office has background noise. I use these tools, and yes even some open source AI ones, to 1) remove background noise from the individual tracks and 2) do a final master against a recording I want to target (using something like Matchering or similar [0]). It still sounds like me, my voice isn't perfect, my beat isn't consistent, but it sounds like I rented some studio space. So for me it was a cost-saving measure.
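For the curious, the denoise step in that chain can be sketched in a few lines of NumPy. This is a toy spectral gate, not any specific open source tool: it estimates a per-bin noise floor from a noise-only clip (a few seconds of room tone) and mutes STFT bins below it. All names and parameters here are illustrative.

```python
import numpy as np

def spectral_gate(signal, noise_clip, frame=256, factor=1.5):
    # Mute frequency bins whose magnitude falls below a threshold
    # derived from a noise-only recording (e.g. room tone).
    def stft(x):
        n = len(x) // frame
        return np.fft.rfft(x[: n * frame].reshape(n, frame), axis=1)

    noise_floor = np.abs(stft(noise_clip)).mean(axis=0)  # per-bin noise estimate
    frames = stft(signal)
    mask = np.abs(frames) > factor * noise_floor         # keep only strong bins
    return np.fft.irfft(frames * mask, n=frame, axis=1).reshape(-1)

# Demo: a 440 Hz "vocal" buried in white noise stands in for a real take
rng = np.random.default_rng(0)
t = np.arange(65536) / 44100
tone = np.sin(2 * np.pi * 440 * t)
noisy = tone + 0.3 * rng.standard_normal(t.size)
room_tone = 0.3 * rng.standard_normal(t.size)            # noise-only reference
cleaned = spectral_gate(noisy, room_tone)
```

Real denoisers add overlapping windows and smoothing to avoid artifacts, and the reference-mastering step is a separate tool entirely (Matchering exposes a target-vs-reference workflow for that), but the basic idea is this simple.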
And this is actually a problem. Great art usually comes from constraints, real or artificial. These things are a lot of fun to tinker with (a really fun hobby), but one amp, one guitar, and a small number of effects pedals will probably lead to you actually making more and better stuff.
I get what you're saying, but in this specific case I think the all-in-ones win for most people.
Ultimately I spent so much of my time worrying about "what crazy expensive equipment should I buy" when I was younger and more into this stuff, when I should have just played my shitty instruments and recorded on my shitty equipment. That's on me, but I also find it empowering as an artist that I can clean up my recordings in a way that replaces my need for expensive equipment while maintaining (in my humble opinion) the authenticity of my performance. I agree there may be too many knobs, but finding the knobs I want has never been easier, and I would rather live in the now than in the past.
Either way, I strongly encourage you to keep using a DAW if that brings you joy. Using AI to create art is a different skill set, just like using acoustic instruments is a different skill set from using either. Each option appeals a different amount to different people, and you should just do what brings you the most joy.
Maybe get a second-hand Novation Circuit to start with, or some similar "groovebox" that lets you make songs on one device, and see if you actually still do enjoy making music, yet haven't found the right process for you yet.
I don't think you're wasting your time, as long as you're having fun, regardless of what happens in the rest of the world. Sure, AI could probably make "better" (by some definition of "better") music than me, but AI couldn't make my friends smile at me as I play them the music I've made. That's quite literally priceless.
Can I ask how you share music with friends? I guess this is part of my problem, I don't really have anyone I could share with or collaborate with. The few people in my life don't listen to the type of music I like.
The best way to meet like-minded people is to go to music events where those people are; there are always a ton of music makers around, usually also by themselves, sometimes in the back or off to the side by the speakers. Most people at such events are OK with being approached by strangers :)
My oldest/life-long friends have very different musical tastes...and some can't even ride a bike :)
1. All that AI really does is a (partially) randomized exploration of the space that has been spanned by existing music. AI creativity, as far as it can be said to exist, is limited by this. You, on the other hand, are human and not bound by any of these limitations. You are free to explore wild things that no AI can do. Just as a completely random example, you could go out, record noises your environment (even if it's just with the smartphone), grab interesting parts, chop them up, process them and turn them into unique new instruments. Bang on random stuff that has a nice ring to it. Record background hums, apply filters and envelopes to them etc. And there are so many other ways to produce unique creations.
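As a concrete illustration of that chop-and-repitch idea, here is a minimal found-sound sampler sketch. The "recording" is faked with a sine tone so the snippet is self-contained; in practice you would load your phone recording instead. All names here are made up.

```python
import numpy as np

def repitch(sample, semitones):
    # Classic sampler trick: resampling a slice plays it faster,
    # which raises its pitch (and shortens it) by the given interval.
    ratio = 2 ** (semitones / 12)
    idx = np.arange(0, len(sample) - 1, ratio)
    return np.interp(idx, np.arange(len(sample)), sample)

sr = 44100
# Stand-in for a field recording of your environment: any mono array works
recording = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)

hit = recording[: sr // 4]    # chop out an interesting slice
# Replay the slice at several pitches to build a phrase from one found sound
melody = np.concatenate([repitch(hit, s) for s in (0, 3, 7, 12)])
```

From there you would apply envelopes and filters to each note, exactly as described above; the point is only that one captured sound can become a whole playable instrument.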
2. Most importantly, music is a form of human expression. It is able to capture the human condition in a unique way. As a human, you can express these things genuinely through your own emotions, experiences, memories etc. AI systems can only produce hollow facsimiles of this. Regardless of whether you are conscious about it, every piece of music that you create is a reflection of you: your thoughts, your emotions, your process. And that imparts the true value on your creations.
I'm not sure if you already knew this, but this is actually a thing already - it's been called "Botanica" and there are a bunch of cool tracks floating around.
Sample track: https://www.youtube.com/watch?v=j0QCBPnJz5w
Obligatory Ben Levin video: https://www.youtube.com/watch?v=N-mK82gLkWE
For example, Battle of the Bits [0] is a community all about chiptune music. I'm sure you _could_ use AI to help you learn and produce some things, but the community is mostly about sharing ideas about what works at the electronic level, so even if AI became super capable, it wouldn't help you engage with the community in any meaningful way. There are several such communities across different domains and I imagine they aren't going anywhere anytime soon, regardless of how much improvement happens w.r.t. AI, since the focus is on "what you learned" and not so much "what you did".
Similarly, I have seen communities focused entirely on Silicon Graphics workstations, or pc-98 internals. Human passion-based communities aren't going anywhere, Google just makes it incredibly hard to find them outside of word-of-mouth.
This is the reason why a lot of us make music. Writing orchestral pieces is my own meditation. I don't share most of them, and replacing them with AI would defeat the purpose.
Please keep learning it! The world needs more musicians, even if we never hear them.
2. If it turns out it’s not therapeutic for you, try something different. Play piano. Learn chess. Learn MMA. Go for a run. Heck, vibecode something silly. Music production is not the only way; if you gave it a good try and it just frustrates you, try something else.
That's true of 99% of very polished, finished work too. Amazing bands and artists on Spotify with sub-1000 streams/month.
>None of these problems are "new", but I feel like AI is making this question of "why do it" or "what is worth doing" even more urgent. Kind of wondering how others are affected by all this, if at all.
Absolutely. One big concern is that even if you do it and you're proud of it, many will think it's AI anyway.
Plus the over-inflation of AI generated shit. It could all die in a fire.
I have a very low bar for what I consider to be a successful creation: it just needs to be enjoyable for myself in the future. Anyone else who happens to enjoy the content I make is a bonus. I have several songs on SoundCloud that I have produced in the past and I still enjoy listening to them.
AI music may eventually satisfy the masses and you can't stop that speeding train, but the process of creating something yourself will always have value if it's something you're interested in creating.
You can make a decent demo in a DAW and run it through AI for a nice production. The art of writing songs is still equally hard IMO. And a good song is still good, no matter what costume it wears.
if you're creating because you feel a drive to create, you are making art and that has intrinsic value to yourself and others. if however you are performing the act of musical creation as a means to an end, what you are doing may be better considered work and not art. the work of others can also be appreciated but it is different.
keep at it though. you are asking good questions and unlike many you are also personally engaging with them.
Music is about the “feel” first and foremost. Playing music on a physical instrument or singing is a feel thing.
DAWs are tools for polishing what was created with feeling into something “produced”. If that’s what you want to end up with, that’s ok. Just be clear with yourself on which you’re trying to do.
I noticed the problem when I realized I couldn't make music in a specific mood or genre. Sometimes I'd finish my song and think "oh wow, a happy rock song" or "a sad edm song" or whatever but it was always just random chance where I ended up. With music theory knowledge I could always add more instruments or notes that could exist in that place but with 0 direction, whatever I made was always listenable but never more than that.
Now people want actual food and they want stuff made with human hands and they want to know what's in it. People want TV shows with a proper story. People are beyond done with cookie cutter superhero movies.
The slop wave is going to pass. AI can make stuff that sounds super polished and perfect, but people will want the rough and crude touch of something hand made. They'll want to see videos of musicians showing behind the scenes of how they made something. They'll want to go and see a musician perform. Interest in 100% AI generated music will fade into the background and it'll be relegated to soulless Muzak used for ambiance in soulless chain restaurants too cheap to pay for actual music and too afraid to play any songs that might offend or annoy someone.
The age of music production is almost over; the age of the music industry already is.
I wouldn't want to be in the DAW/VST business today though, because a lot of potential customers are thinking exactly as you do...
Step 2: find a local jam group or community band/orchestra
Step 3: have fun playing music with friends
For one, acquiring an instrument is expensive - even secondhand, most instruments cost a significant amount of money. Learning it properly is even more demanding and expensive - fixing something in a DAW is easy; unlearning muscle memory is much harder.
Keeping up said muscle memory also isn't easy. Sure, if you got a free-standing house, no one will care much about a drum set, trumpet or whatever. But most people don't have that luxury in urban settings any more, and typical residential building quality makes even some electronic instruments (e.g. kicks still cause some amount of noise passing through floors) a challenge. Building noise ordinances / HOA rules are a bitch on top of that - most allow only a limited time window in the afternoon, useless for working-class people.
Local community groups... if your community has one, and they have some studio space where noise doesn't matter, great! Most, unfortunately, don't - space in urban environments is already rare and at a hefty premium, space that accepts noise and has adequate resources (in practice: a usable toilet is the most important) is even rarer.
Depends on the instrument. You can get a completely new Harley Benton electric guitar for sub-$200.
> Sure, if you got a free-standing house
Sure, trumpets and classical instruments are a challenge, but all the guitars and all the keys can be practiced on headphones with near-zero noise. It's not an excuse.
I definitely agree it is much harder to learn an instrument than it is to learn a DAW. Believe me, I've done both.
There are lots of reasons to forego picking up an instrument, but living in an apartment or a modest budget are certainly not good reasons.
This isn't true at all. You can get a brand new Squier Strat for < $200 and a second hand one for less than half that. You can pick up used acoustic guitars for next to nothing if you look hard enough. You can get a used digital piano for < $200 too.
If you do it for yourself - do it.
If you learn that to make money - forget about it.
Well... play in a band or orchestra? You get to meet people, interact with them, build with them, etc.
I've been making music solo using various machines and computers all my life and I love it, but it's probably not for everyone. Yes, you're alone. Yes, (almost) nobody cares, so if you can't enjoy the process there is no point really.
(From time to time someone will show some interest, but let's face it: there's just too much good music released every day, competing with other distractions for people's attention.)
For people like me, AI doesn't change much, it's another tool. We've been abusing technology in music for decades.
Create for yourself, and for those that seek the human effort and passion. There's an increasing number of us.
I'm the biggest doomer on this site, yet I'm certain human art will become even more valuable, and appreciated, than it has ever been before in history. Just don't expect to make billions out of it, or to reach out to the masses that are quite content with industrial-scale mediocrity.
AI is forcing art to return to having no meaning or purpose beyond itself, and that's a good thing. It's how things used to be.
If you enjoy the process and its outcomes, then it's not a waste of time. If you are forcing yourself to do it or have another motivation for it that is not rooted in genuine interest, then yes, you are wasting your time.
> I feel like AI is making this question of "why do it" or "what is worth doing" even more urgent
This is a spiritual question, so you will have as many answers as there are askers. I found my answer and am happy to share it with you. Why do it? Because I want to. What is worth doing? What I want to do, or what gets me the things I want.

Wanting is a very important process that is often damaged by conditioning. We are told that some things we want are bad and that some things we don't want are good. Or that ego is evil. There are so many ways this process can go wrong. I think fixing this in oneself is part of becoming an actual adult. Once you know what you want and what you don't want, you are no longer dependent on others telling you what to do or forcing you to do things you shouldn't be doing. Ego is not evil; it's there for a reason. Some people have an overgrown one while others have an underdeveloped one. What is needed is balance.

I don't think the pattern recognition machine has anything to do with it. I suspect that a lot of people who use music as a band-aid for personal problems, i.e. people who build their identity around being special due to music making, are the ones who are afraid of AI. But if you just enjoy making music, then what does it matter if music itself is patterned and if a machine can exploit that? It doesn't take anything away from the joy of making music, if you experience it in the first place.
In practice, it's not binary. I'm interested because I want to make music similar to that which I like listening to.
Sometimes I get enjoyment out of it, but sometimes I lose interest maybe because I'm facing a frustration.
My question of wasting time is connected to "can I even create something worth listening to". If nothing I could make is worth listening to, then I guess I would feel the process of creation is pointless.
I've heard others write about how what they produce is worth listening to, to them. I think that is enough, but I also think I lack confidence in my own judgement. Almost like I need someone else to confirm my validity. I have recognized that as a result of emotional neglect, but I haven't figured out how to fix it.
If it's nothing but an end product, that needs to fit a specific aesthetic, with a specific sound, then I probably agree. AI is making that "pointless" in a way.
Almost everyone I know who's been an artist for years though, has come to a similar realization: What you set out to create, and what it turns into through the process of creating it are different things. The meaning, truly is found along the way.
You can always be better; there's always more to learn. Nothing is ever truly perfect, or "complete".
If you write harmony, there's always a different way it could be written, that might fit better, or be more interesting. If you do sound design, whether that's with getting different guitar tones, synth programming, unique recording techniques, there's always more to learn, or a different way to approach it.
If the only point is an end result, then AI can deliver a simulacrum of that.
For everyone I know that loves music, or working with DAWs, the end result is an ever shifting target as you learn more, and understand music in a different way.
Ultimately, there are no shortcuts to making something new, because the practice of trying to make things is what results in what your art becomes. Tools and technology can shape what that thing ends up being, but they (traditionally) don't replace the process of creating it, and the feedback loop between who you are and the decisions you make along the way.
Stripping all of that out, and jumping to a "finished" product, is, well very product focused, but to me completely devoid of art or musicianship.
Some people seem to compare this to sampling, but anyone who's ever actually worked with sampling in a creative way will realize how hollow that comparison is. Almost all good sampling still requires a good deal of active feedback, between the person working with it and the way THEY hear what's going on.
Remove the person from that loop, replace the decisions with a general vague notion, and you end up with something that sounds "like" music, but that feedback loop is broken.
I see the same thing with all the AI UI design that's coming out. It's all generally quite competent, and exactly the same. Great for a business tool, where maybe the velocity and an acceptable MVP is the only point, but terrible for actual design and novel thought.
TL;DR: Why do it? Because you want to, and because with enough time engaging with something you'll change, just as it does, and the result isn't something you could ever have predicted when you started. It changes you, and that's the point. Just like learning an instrument, or learning to code. It's not purely about the produced result, and that very result is fundamentally changed by you actively engaging with whatever the medium is.
I have experienced the process you're talking about, although to some degree I feel it's symptomatic of a lack of skill. I start out with some kind of inspiration in mind, but end up with a compromise between what I can do and what sounds good when I fiddle around with things. Part of me feels dissatisfaction that I don't know which knobs to turn to get what I want, but I suppose that's just the normal learning process (albeit less structured than those I have gone through in the past, which is its own obstacle sometimes).
That said, I wonder if doing it with other people who suck would help. I started playing ice hockey as an adult, and the thing that got me over the initial hump of being completely useless was doing lessons with other newbies in my exact shoes (or skates) rather than trying to go right to full speed games.
This hits very close to the philosophical core of the AI debacle.
All hardcore fans of AI just want things done. The process is of no interest to them.
This is truly an eschatological problem of desire. Consider:
Some people want to grab their result, attain satiation, have orgasm, and die, right now.
Others would much rather enjoy the process, the meal itself, indulge in a gentle act of love in tune with their partner, and just keep on living their lives, continuously.
These days roughly 20% of the songs coming through our platform for promotion are AI-generated. Roughly 75% of them are honest and declare their AI usage - but another 25% try to hide it. Some of them are actually writing scripts to "clean" their audio so that it can bypass detection.
'Detecting AI' is not a problem that has real solutions; the only avenue is something supply-side, like SynthID. But that harms users too, by introducing further barriers for indie users.
This isn't like text classification: the signal is many orders of magnitude higher bitrate, and so many more corners need to be cut. It's likely going to be nearly impossible, or at least not remotely worth it, to generate an audio signal that is truly undetectable in the foreseeable future.
Today. Trying to detect AI is like extracting water from puddles in a lake that is quickly drying up. What is the point in the short term if it's impractical in the long term? It will catch some low-hanging fruit in the best case, and will find false positives in the worst.
You are right, the output of a model that generates music directly is, for now, easy to categorize as AI.
But this big flux of AI-generated music online isn't really that. It's a tiny bit of autogenerated stuff and a whole lot of automatically remixed stuff. The reason it cannot be easily classified as AI is that quite a bit of human-produced music is also that, and you'd just shut out real users.
Honestly I hope that the AI filter would be much better in terms of false positive than the aforementioned one, if only because it should be easier via statistical methods.
Nobody open sourced their detection algorithm as that would just trigger a cat-and-mouse game between Suno/Udio and a detection platform (and Suno/Udio have way more VC money than you do), but plenty are being sold as a service and work very reliably.
This is the nut. This isn't actual AI generated music. It isn't intended to be real music that people listen and enjoy. It's just filler to populate tracks that pay out to scammers, so that scammers can direct bots and hijacked accounts to play their tracks and steal a share of the platform revenue.
It's not even muzak at this point; at least muzak is honest about what it is and why it exists. It's the music version of the automated AI videos on YouTube, which take a Reddit post, have an AI do a voiceover, and then run Subway Surfers in the background (though I haven't seen one of those in months).
Right now the way that revenue split works is you pool together all the cash from humans and hand it to whoever has the most bots.
AI simplifies creation; that doesn't mean the result is good or will be listened to. And if it is, then what's the problem?
You can talk about ethics, IP, etc. but we're not even there yet.
Now that AI has cargo-culted these traits I'm getting a lot of recommendations of videos that will initially seem "ok", and then I realize after about a minute that the narration will have some weirdness, and the script will have a lot of the typical ChatGPT "tells", and of course the video comes off as pretty low effort after that.
My YouTube recommendations have become increasingly useless, which honestly might be a good thing because it's made it so that I have less desire to use YouTube.
The first is AI-generated content. This can start with nothing more than an idea. Some of it is uniquely-presented stuff that's actually kind of interesting: I got sucked into a nice Ken Burns-style narrated documentary about the rise and fall of Baldwin Piano a few weeks ago. It was a little wordy, but it worked. It took a while before a very glaring error in diction made me rewind for a double-take, note that no human would ever make that mistake while narrating, and then burn the channel from my feed.
The second problem is very different: Cloning individual people and channels. When a person (or nearly as likely, a bot) elects to use a bot to clone someone else's style, persona, and everything else then that's... that's very unsettling.
---
The first problem? It's whatever. I don't like it, but there may come a time when I accept it. At this point it's mostly harmless and really guilty of nothing more than wasting some of my time now and then.
The second problem? It can be reprehensible.
And it's particularly bad with a channel like Ryan Hall. I don't have any idea of how he is as a person (never meet your heroes), but I like to presume that he's generally a swell guy. And moreover: He's important.
When the weather turns iffy, I put his stream on and it's mostly just background noise. I usually give it very little attention.
But when he mentions the name of the small city I live in then that means that shit is just about to get very real here -- very soon. That's astoundingly useful to me, and the safety of the people I care about.
I also find a lot of value in obvious parody. It can be fun, and it can make people think. The music of Weird Al or There I Ruined It, the crazy stories in The Onion, the memes. That's all good. But this Ryan Hall business? It's bad.
So, there's definitely a line.
And I don't know where the line should be drawn. But using bots to deceive and thereby dilute the value of the content of Ryan Hall's channel is definitely on the wrong side of that line.
Honestly couldn’t tell in the moment but now that I know it’s generated it somehow feels “cheaper” and I dislike listening to them.
The time spent listening to AI music _could_ instead have been spent listening to something created by a human.
That is what pisses me off the most!
From this attitude you might as well get your entertainment from spam or ads.
People who create AI music are largely not sharing it with others for any reason other than to create a revenue stream. They are also not consuming new AI music to be able to develop influences and synthesize new ideas. The system builds brick walls where there was once osmosis.
How can art evolve under these conditions?
Who decides that? We do, collectively. Why do we have that power? Because we define art. Why do only humans have this power? Because art is an innately human thing, so we get to decide.
If not, they are most definitely listening to other music that influences them. If you have proof that such a producer listens to zero music, feel free to share it.
Also, a lot of us value the fact that music is made by a person. Digital tools have been around for a long time and people have bickered about that, but ultimately they still require a person with some knowledge to sit down and actually produce the music, to do the thing. Writing prompts until you get something interesting can be fine, but what people are doing is carpet bombing us with whatever nonsense comes out because they have a financial incentive to do so.
I have plenty of experiments back when I did more digital music where I would mess with frequency modulators and such until I just found something interesting. I don’t see the harm in activities like that. But that’s not really what’s happening here. It’s deliberately lazy, corner cutting work to spam music platforms for profit. Yes there is a gray area between these two scenarios but that gray area isn’t the problem.
Digital music has always been fine to me, as long as the song being produced feels like it took a human some amount of effort.
In my mind the better mindset is that the problems we tackle are not fixed in size; instead, these tools can allow for bigger and cooler projects, and/or projects that wouldn't be possible (or at least would be infeasible) without some kind of technological assistance.
AI tools can be used to create slop that is either "bad" or extremely bland at an effectively-infinite speed. It could also be used to make some really cool and interesting stuff if a person is really willing to spend time and effort to make it cool. Usually this requires more than just "prompting" though.
It's certainly different than those low-effort channels that mass upload hundreds of videos a day because they're able to automate the entire video-making process; those are completely soulless, again almost by definition. Those exist to just try and effectively skim revenue from adsense (or subscriber revenue in the case of Deezer), and making something that people will actually "enjoy" isn't the purpose.
Of course, this isn't a new problem; I remember a few years ago (before generative AI became viable for this stuff), there were "tutorials" on the best way to upload hours and hours of noise or silent music to Spotify to extract revenue, and of course let's not forget the infamous "Elsagate" stuff that plagued YouTube. AI has maybe accelerated the problem but it certainly wasn't the first thing to create "slop".
I'm hardly the first person to make this point, but AI is a tool. Tools can be good or bad; if AI is a tool that you can use to actively help you be more creative then I don't think that's bad. If you're just generating something to pad a resume or extract ad revenue, that's slop.
Who cares if people are mass uploading AI content? I care what the listen rates are.
> The consumption of AI-generated music on the platform is still very low, at 1-3% of total streams, and 85% of these streams are detected as fraudulent and demonetized by the company.
If they were more commonly consumed by real revenue generating users then these companies likely wouldn't care as much. As it stands, it saves them a lot of money at very little real user downside to try to catch the fraud at both ends (the fake listeners + the fake uploads) rather than just one end.
Before AI, 99% of anything was trash, and now with AI, perhaps 99.9% is. But the thing that matters is whether the remaining 1% or 0.1% is good or meaningful for us or not. I guess soon enough even AI music will be meaningful to us, but I don't think this precludes the existence of human musicians.
It’s close to how young people have never experienced pre-Fortnite/Roblox times, so they are fine with shelling out money for microtransactions.
??
You say you deleted the tests because you "should test it"? That logic seems inconsistent.
Sanity checking LLM-generated code with LLM-generated automated tests is low-cost and high-yield because LLMs are really good at writing tests.
I shipped a really embarrassing off-by-one error recently because some polygon representations close a ring by repeating the first vertex at the end as a sentinel (WKT and KML do this). When I checked the "tests", there was a generated test that asserted that a square has 5 vertices.
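To make the failure concrete, here's a hypothetical sketch (not my actual code, names are illustrative) of the invariant the generated test should have encoded: in WKT/KML, a ring is closed by repeating the first coordinate at the end, so a "square" carries five coordinate pairs but only four vertices.

```python
# Illustrative sketch: a WKT/KML polygon ring repeats its FIRST
# coordinate at the end to mark closure, so the closing duplicate
# must not be counted as a vertex.

def ring_vertices(coords):
    """Return the distinct vertices of a ring given as (x, y) pairs.

    If the ring is closed (last pair equals the first), the closing
    duplicate is dropped; otherwise the list is returned unchanged.
    """
    if len(coords) >= 2 and coords[0] == coords[-1]:
        return coords[:-1]  # drop the closing sentinel
    return coords

# A unit square as a closed ring: 5 coordinate pairs, 4 real vertices.
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]

assert len(square) == 5                  # raw representation
assert len(ring_vertices(square)) == 4   # the test that should exist
```

The generated test asserted on the raw length (5) instead of the vertex count (4), which is exactly the kind of "consistent but not understanding" failure I mean.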
I'm closely supervising the LLM, giving it fine-grained instructions — I generally understand the full interface design and most of the time the whole implementation (though sometimes I skim). When I have the LLM write unit tests for me, it writes essentially what I would have written a couple years ago, except that it tends to be more thorough and add a few more tests I wouldn't have had the patience to write. That saves me quite a bit of time, and the LLM-generated unit tests are probably somewhat better than what I would have written myself.
I won't say that I never see brain-dead mistakes of the "5-vertex square" variety (haha) — by their nature, LLMs tend towards consistency rather than understanding, after all. But I've been using Claude Opus exclusively for a while, and it doesn't make those mistakes nearly as often as the lower-powered LLMs I used to see them from.
No, they're absolutely shit at writing tests. Writing tests is mostly about risk and threat analysis, which LLMs can't do.
(This is why LLMs write "tests" that check if inputs are equal to outputs or flip `==` to `!=`, etc.)
even more consolidation and lock in
Pisses me off on YouTube - it's really hard to find something genuine in the sea of AI-written, AI-subbed, AI-generated, and AI-published content. It's a scourge not because it's there, but because the channels are lying about it AND because 99.99999% of what I've encountered isn't worth the waste heat of processing a "publish 100 catchy videos about current affairs" prompt.
Hard to believe these models won’t get better and better at producing music that humans want to listen to.
AI music I've heard universally sounds bland and robotic.
I assume this “AI-generated” music is created the same way an LLM generates text: use samples from a corpus strung together into a new [derivative] output.
But it seems plausible that algorithmic generation can be used at any stage of the process. How much disclosure do we (listeners) require? At what point is it unacceptable “AI-generated” music?
The answers are going to be subjective. And human. And dealing with this, I think, is going to take a direction like the “typewriters in college” headline from a few days ago - human involvement, low automation … things that don’t scale.
That’s kind of how the music industry produces music these days. There are a few song writers that write for most artists, music producers who sample other music to string together songs for most artists etc. That’s why most music sounds the same and why AI generated music can be indistinguishable from mainstream music.
https://web.archive.org/web/20230314190913/https://www.riffu...
https://huggingface.co/riffusion/riffusion-model-v1
But, I'd expect everything in the past 3 years to diffuse the audio waveform directly.
Touring, merch, etc will also serve as good "proof of give-a-shit".
Meanwhile, AI is ingesting their publicly available data to improve itself, with the implicit (if not explicit) goal of making those projects irrelevant (why read the docs, participate in a forum or chat, or submit a PR when you can ask your AI thing to just write the code you want instead?).
Furthermore, if a software developer is of the opinion that AI is "bad" in some way and they want to resist it, I think it would make the most sense to keep their code private. Open source is feeding AI.
Deezer will tag it and refuse to promote it once it's tagged as such. You're not gonna stumble upon it by leaving the autoplay on and it will not appear on any of its editorial playlists. Quite frankly this problem would be completely gone if every streaming service implemented this same policy.
Deezer also does some other things right: they boost the artist payout if the listener intentionally searches for an artist/song/album instead of stumbling upon it via autoplay/playlists, they introduced lossless audio a decade before Spotify, and you don't even need an API key to interact with its metadata (of course you need to abide by their rate limits).
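To sketch that last point (hedged: the endpoint shape and the `data` field reflect my reading of the public docs and may drift), querying Deezer metadata really is just an unauthenticated HTTP GET:

```python
# Minimal sketch of Deezer's public metadata API: no API key, just an
# HTTP GET against api.deezer.com (respect their rate limits).
import json
import urllib.parse
import urllib.request

API_ROOT = "https://api.deezer.com"

def search_url(query, limit=5):
    """Build an unauthenticated track-search URL."""
    params = urllib.parse.urlencode({"q": query, "limit": limit})
    return f"{API_ROOT}/search?{params}"

def search_tracks(query, limit=5):
    """Fetch matching tracks (requires network access)."""
    with urllib.request.urlopen(search_url(query, limit)) as resp:
        return json.load(resp)["data"]  # assumes results live under "data"

# Example (needs network):
#   for track in search_tracks("daft punk"):
#       print(track["title"])
print(search_url("daft punk"))
# → https://api.deezer.com/search?q=daft+punk&limit=5
```

Compare that with Spotify, where even read-only metadata requires registering an app and doing an OAuth client-credentials dance first.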
Some criticism so that this doesn't look like a pure promotion: their apps are absolute crap in comparison to Spotify and Apple Music, and even in comparison with TIDAL, which itself isn't really a pinnacle of user experience. It's definitely the most frustrating one out of the bunch that I have direct experience with.
> Today’s announcement comes as Deezer conducted a survey last November that found that 97% of participants couldn’t tell the difference between fully AI-generated music and human-made music.
Unable to tell it wasn't made by a human, but they can tell it's not very good apparently.
For the non-fraudulent listens, I'm very curious how many of these are part of auto-generated playlists. Are people just being served this music as part of a feed, or are they actually seeking it out? I'd be very surprised if it was the latter.
I use LLMs for code every day, but if I could flip a switch to turn it all off and prevent this shit from happening to the arts, I probably would.
Don't understand how one can experience anything but infinite dread when confronted with the effects of these models on the arts.
Maybe I am getting old. But I don't think so...
I would absolutely push that button a thousand times as well.
I do suspect we are in for a lot of verified-human platforms where your fee goes to supporting establishing an artist or author's humanity beyond a reasonable doubt.
I suspect we are going to see that model quickly go out of favour though.
I don’t see how verifying that the author is a human helps in any way.
I also don’t think it’s a big problem but that’s another discussion
e.g. Game speedrunners film the whole process to prove they did it themselves.
Presumably you had some ideas when you envisioned "human-verified platforms".
would you as a label sign an artist you'd never seen perform? maybe there is value in a platform working under similar constraints.
I guess there could exist a Spotify that is limited to music performed live for people who like that. Or simpler: a checkbox you can click to filter it to music known to have been performed live.
But that doesn't sound like something I'd want imposed on all music on a platform. Scrolling through my SoundCloud favorites right now, less than half of them perform live at all, and a lot of it is remixes that are never performed live. And most of them are pseudonymous. I'd lose more than half of my music if the platform required music to have been performed live. A lot of music isn't even performable live.
that's fine. there's room for multiple platforms. personally I would pay for the thing I describe, sounds like you wouldn't. but the question is not whether you or i would, but whether enough people would to make it a viable business - whether it's the platform, or the method, or a label that licenses its music in a certain way, or what.
> The consumption of AI-generated music on the platform is still very low, at 1-3% of total streams, and 85% of these streams are detected as fraudulent and demonetized by the company.
Even pre-AI, music has always been a winners-take-most business. Per an article from 2022, the vast majority of artists have fewer than 50 monthly listeners[0], which I suspect is far lower now due to the flood of AI.
Not sure about Deezer, but for Spotify there is some kind of minimum to get you into any algorithmic rotation. People try to game this with bots, i.e. botted streams, but the problem with bots is that the accounts are bots, so the recommendations just become music for other bots, hence the part where 85% of the streams are botted. So it doesn't actually work, and you have to rely on old-fashioned promotion to get into any algorithmic playlists.
So 44% of uploads being AI-generated sounds bad, but it's extremely unlikely anyone will ever encounter them naturally, the same way that people don't naturally discover random, non-AI artists with 10 monthly listeners and tracks with less than 1000 plays. This isn't a defense of AI music slop, by the way; it's more pointing out that the "making a song" part only takes you about 20% of the way to becoming an artist people want to listen to. A harsh lesson our friends in /r/SunoAI are learning.
[0] https://www.musicbusinessworldwide.com/over-75-of-artists-on...
"Extremely unlikely", you say? https://www.theguardian.com/technology/2025/nov/13/ai-music-...
Sounds like a free backup service to me.
https://support.spotify.com/lc/artists/article/ai-credits/
The problem is that it's difficult for streaming platforms to make subjective judgements about where the AI line should be drawn in music production.
If you human-write a song but use AI to produce a synth stem or bass stem and then mix it down and use AI mastering is that better or worse than if you use AI to help you write something but record with human musicians and a bit of AI assist?
And what if you use AI entirely to write and compose but use human performers to record?
And what if the AI is trained only on licensed content?
It's the same thing with writing. No one cares that you asked a chatbot to help you reword a paragraph in your essay. The problem is zero-effort slop delivered by the truckload to your social media feed.
Someone will end up in the middle and then you’ll be responsible for accepting or rejecting it.
The bulk is obvious but the debate isn’t for the obvious.
If your "work" is mostly AI, and if you don't disclose it, it goes to /dev/null. And yeah, you can get into a debate that it's unfair to reject 51% but allow 49%, but that's how the real world works - otherwise, nothing would ever get done. You also get a DUI for BAC of 0.08% but not 0.07%. That's not an argument for putting DUI laws on hold until we can figure out a more perfect approach.
Spotify, for example, already said that any track that gets under 1000 streams will not get any money. What if it says “any track that uses more than a proportion of AI will not make any money” - but refuses to say how it makes those decisions so that people can’t game the system.
But this can be easily fixed by turning off autoplay, the slop's best friend.
Me personally, I sniff out AI on Spotify by empty "about" sections. Which is sad, as I always held dear that it's the music that must speak for the author, not vice versa.
Lots of people don’t care about whether the music they listen to is human created or not - just as lots of people don’t care about lots of other AI slop so long as they are entertained by it.
On the other hand, this does seem to be rekindling, at least somewhat, an interest in people going to see small shows of real people making music. Which was historically what music was about for the vast majority of our human history. Mass market pop as a viable business was a particularly 20th century anomaly.
And oddly, in people buying real vinyl by real people.
Remember: AI use is mandatory and non-negotiable. Hopefully the Trump administration will be rolling out AI-use metrics for the whole population, so we can track progress against our goals.
I'm not sure I'd care if AI generated music was competing against my own organic music, but having the stream-reward diluted down by bots is actually hurting artists.
My feeling is that if the AI is this good, the audience will just prompt the AI themselves and cut out the middleman.
https://www.reuters.com/legal/government/us-supreme-court-de...
Arguably, this makes sending such an album to a distributor a contractual violation as well, since you must assert that you own the rights to license it to them and are empowering them to collect royalties on your behalf.
I call this the instant imitator trap. If anything AI generates stands out from the slop, the slop generators will just imitate it, thus quickly making whatever standout quality from the "original" work also slop.
I wrote about it here: https://tombedor.dev/creativity/