> Recently I was listening to music and doing some late night vibe coding when I had an idea. I love art and music, but unfortunately have no artistic talent whatsoever. So I wondered, maybe Claude Code does?
Do I need to read further? Seriously, everyone has talent. If you're not ready to create things, just don't do it at all. Claude will not help you here. Be prepared to spend >400 hrs just fiddling around, and be prepared to fail a lot. There is no shortcut.
Yeah, it's just weird to expect people to find AI-generated art interesting when the person generating it has no unique take or talent. This is the worst case where there is absolutely 0 creativity in the process and the created "art" reflects that imo.
I don't get what the "AI experiment" angle is here. The fact that AI can write python code that makes sounds? And if the end product isn't interesting or artistically worthwhile, what is the point?
What's the point if human-made art isn't interesting or artistically worthwhile?
(Most of it isn't.)
Art is on a sliding scale from "Fun study and experiment for the sake of it" to "Expresses something personal" to "Expresses something collective" to "A cultural landmark that invents a completely new expressive language, emotionally and technically."
All of those options are creatively worthwhile. Or maybe none of them are.
Take your pick.
> What's the point if human-made art isn't interesting or artistically worthwhile?
Because it is a human making it, expressing something is always worthwhile to the individual on a personal level. Even if it's not "artistically worthwhile", the process is rewarding to the participant at the very least. Which is why a lot of people find enjoyment in creating art even if it's not commercially successful.
But in this case, the criterion changes for the final product (the music being produced). It is not artistically worthwhile to anyone, not even the creator.
So no, a person with no talent (self-claimed) using an LLM to create art is much less worthwhile than a human being with no/any talent creating art on their own.
Because people don’t want to listen to robots. A radio station here in Norway was caught playing AI music to save on royalties, and it was not good for them.
Perhaps it's your sphere; I know many musicians (mostly Jazz and people in punk bands) and they aren't thrilled to say the least. Like most things, it's contextual.
> Oddly none of the anti-‘s were musicians themselves.
It is plain to anyone who is a musician or hangs out with a lot of musicians that the independent music world is livid about this stuff. Everyone I’ve talked to, from acoustic songwriters to metal singers to circuit-bending pedalheads, is united in absolute hatred of this technology.
(Yes, follow-up commenter, I’ve seen the Timbaland interview)
Music is about the human experience, emotions, mistakes, accidents, discoveries.
I could listen to music by real people being vulnerable and expressing themselves, or I could listen to a computer soullessly regurgitating a stock "blues" melody with inane lyrics about a trash can. Why would I ever pick the latter?
I wouldn't be surprised if it has, or is currently in the process of, doing so. The results are good enough at this point that I think you could probably drop a few songs into a popular Spotify playlist and someone who didn't listen too closely would be fooled. I assume someone is already doing this.
it's not art (for humans) if it's not made by a human with a human story.
AI can be used as the tool with which art is made, but not as the maker itself.
now, on the other hand, maybe AI can make its own form of art for other AIs to consume. However, for the human, the creation of art will always need the human taste and story involved
We make art because humans are compelled to express themselves. That's it. That's the whole thing. It's not stack ranked. Humans make art because, in the words of Pile, "I want answers to some questions that I can’t speak."
The idea that you'd stop trying to express yourself because you're comparing your own artistic voice to the output of an LLM and somehow seeing it as less valid, or less worthwhile, is just sad.
I don't mean that as an insult, I mean it's genuinely sad for you and for all of us as a species.
If the reason you were making music wasn't that you enjoyed making music, perhaps stopping is the right choice for you. If that was the reason, then AI is irrelevant.
I do enjoy making music, and I don't do it "by hand". I use lots of tools (instruments, electronics, a computer for recording and mixing, the internet for distribution). As long as I'm the one directing the tools, it's still art and it's still my music.
While I'm not against AI music, don't you think there's a difference between laying down some beats in Ableton with your own bass + guitar writing and playing, vs prompting an LLM?
There were always musicians who were better than you. If that didn't stop you, why did AI? Were you only making music to be the best? Surely you knew that was extraordinarily unlikely. If you like making music, then make music and like it.
No, I definitely see why people hate on AI music. I appreciate that you had fun, but these songs suuuuuck.
Claude is excellent at a few things, decent at quite a few more. Art and music are not one of these things.
Are they good as tools to aid in the creative process if you know how to use them and have some restraint? Oh absolutely. As replacements for actual art? Oh absolutely not.
While the author explicitly wanted Claude to be in the creative lead here, I recently also thought about how LLMs could mirror their coding abilities in music production workflows, leaving the human as the composer and the LLM as the tool-caller.
Especially with Ableton and something like ableton-mcp-extended[1] this can go quite far. After adapting it a bit to use fewer tokens for tool call outputs, I could get decent performance on a local model telling me what the current device settings on a given track were. Imagine this on a more powerful machine, where prompts like "make the lead less harsh" or "make the bass bounce" set off a chain of automatically added devices with new and interesting parameter combinations to adjust to your taste.
In a way this becomes a bit like the inspiration-inducing setting of listening to a song playing in another room behind closed doors: by being muffled, certain aspects of the track get highlighted which normally wouldn’t be perceived as prominently.
[1]: https://github.com/uisato/ableton-mcp-extended
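To make the "LLM as tool-caller" idea concrete, here is a deliberately toy sketch. The intent table, device/parameter names, and the `set_device_param` stub are all invented for illustration — this is not the ableton-mcp-extended API; a real setup would route these through MCP tool calls into Live's API.

```python
# Hypothetical mapping from a natural-language intent to concrete
# device-parameter edits. Everything here is illustrative: the intent
# phrases, device names, and parameters are made up, and
# set_device_param() just scales entries in a dict instead of calling
# a real DAW.

INTENTS = {
    "less harsh":  [("EQ Eight",   "HighCut Freq", 0.7)],   # scale down
    "more bounce": [("Compressor", "Attack",       1.5),    # scale up
                    ("Compressor", "Release",      0.6)],   # scale down
}

def set_device_param(track, device, param, scale, state):
    # stand-in for a real MCP tool call; multiplies the stored value
    key = (track, device, param)
    state[key] = state.get(key, 1.0) * scale
    return state[key]

def apply_intent(track, phrase, state):
    # resolve one phrase into zero or more parameter edits
    for device, param, scale in INTENTS.get(phrase, []):
        set_device_param(track, device, param, scale, state)

state = {}
apply_intent("Lead", "less harsh", state)
```

The design point is the indirection: the model only ever emits phrases from a vocabulary the tool layer understands, and the tool layer owns the actual parameter math.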
Related: ChatGPT Canvas apps can send/receive MIDI in desktop Chrome. A little easter egg. You can use it to quickly whip up an app that controls GarageBand or Ableton or your op-1 or whatever.
It can also just make sounds with tone.js directly.
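For the curious, the "send/receive MIDI" part is less exotic than it sounds: a Standard MIDI File is just a few packed bytes. A stdlib-only sketch (nothing to do with the Canvas easter egg itself — the file name and note choices are mine) that writes a playable three-note .mid file:

```python
import struct

def note_on(delta, note, vel=96):
    # delta-time (kept under 128 ticks so a single byte suffices),
    # then a note-on status byte for channel 0
    return bytes([delta, 0x90, note, vel])

def note_off(delta, note):
    # note-off status byte for channel 0, velocity 0
    return bytes([delta, 0x80, note, 0])

TICKS = 96  # ticks per quarter note, declared in the header below

# C major arpeggio: C4, E4, G4, each held for one quarter note
events = b""
for n in (60, 64, 67):
    events += note_on(0, n) + note_off(TICKS, n)
events += bytes([0, 0xFF, 0x2F, 0])  # end-of-track meta event

track = b"MTrk" + struct.pack(">I", len(events)) + events
# header chunk: 6 data bytes -> format 0, one track, TICKS division
header = b"MThd" + struct.pack(">IHHH", 6, 0, 1, TICKS)

with open("arpeggio.mid", "wb") as f:
    f.write(header + track)
```

Any DAW or software synth should accept the resulting file; the browser version in the comment above would be emitting the same note-on/note-off bytes live over Web MIDI instead of writing them to disk.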
Curious to see how this worked, I tried this on DeepSeek using Claude Code Router, following the author’s guide, with two small changes: make it an emo song that uses acoustic guitar (or, obviously, an equivalent), and let it install one text-to-speech tool using Python.
It double-tracked the vocals like freaking Elliott Smith, which cracked me up.
My journey started a few years ago after my wife found a ukulele on the side of the road near where I lived and took it home. Then, often when I had a short break, I started just tugging at the strings, trying to fully internalize the sound of each note and how they relate... After a few months, I learned about Suno and started uploading short tunes and making full songs out of them. I basically produced a couple of new songs each week, my ukulele playing got a lot better, and I can now do custom chords. I'm all self-taught, so I literally don't know any of the formal rules of music. I shun all the theory about chords and composition like chorus, bridge, outro... I just give the AI the full text, and so long as the main tune is repeated enough times with appropriate variations, I'm fine with it.
TBH, as a software engineer, I was a bit surprised at how rigid music is. Isn't it supposed to be creative? Rules stand in the way of that. I try to focus purely on what sounds good. For me, even the lyrics are just about the sound of the voice; I don't really care what they say, so long as it makes a vague general statement (with multiple interpretations) and isn't cheesy in any way.
Very interesting experiment! I tried something related half a year ago (LLMs writing midi files, musical notation or guitar tabs), but directly creating audio with Python and sine waves is a pretty original approach.
It layers a pentatonic guitar melody with filter sweep, a saw/triangle bass, warm e-piano chords, TR-808 drums, and a sparse music box that drifts across the stereo field.
I'm blown away.
I do acknowledge the possibility that it might be heavily plagiarized from an original composition in the training set - I wouldn't know.
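For readers wondering what "audio with Python and sine waves" looks like in practice, here is a minimal stdlib-only sketch — not the author's actual code, and the note choices and file name are mine: five notes of an A-minor pentatonic scale rendered straight into a 16-bit mono WAV.

```python
import math
import struct
import wave

RATE = 44100  # samples per second

def sine(freq, dur, amp=0.4):
    # one note as a raw sine wave with a linear fade-out to avoid clicks
    n = int(RATE * dur)
    return [amp * math.sin(2 * math.pi * freq * i / RATE) * (1 - i / n)
            for i in range(n)]

# A-minor pentatonic, one octave (frequencies in Hz, rounded)
PENTATONIC = [220.0, 261.63, 293.66, 329.63, 392.0]

samples = []
for f in PENTATONIC:
    samples += sine(f, 0.3)  # 0.3 s per note

with wave.open("melody.wav", "wb") as w:
    w.setnchannels(1)    # mono
    w.setsampwidth(2)    # 16-bit signed samples
    w.setframerate(RATE)
    frames = b"".join(struct.pack("<h", int(s * 32767)) for s in samples)
    w.writeframes(frames)
```

Layering the richer textures described above (bass, chords, drums, stereo drift) is just more of the same: generate each part as a list of floats, sum them per sample, and clamp before packing.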
>I love art and music, but unfortunately have no artistic talent whatsoever.
Then go pay someone to teach you to play <instrument>, and you'll get a life skill that will be satisfying to watch grow, instead of whatever this soulless crap is.
edit: Oh god after listening to those samples, send Claude to the same music teacher you choose...
This song was generated from my 2-sentence prompt about a botched trash pickup: https://suno.com/s/Bdo9jzngQ4rvQko9
https://youtube.com/watch?v=atcqMWqB3hw
From the author:
> The instrumental and vocals were both generated using Suno with a lot fiddling around with the prompts. The video was edited by a human in kdenlive :-)
> For complex AI generated music, tools like Suno and Udio are obviously in a different league as they're trained specifically on audio and can produce genuinely impressive results. But that's not what this experiment was about.
It's not just good at producing complete songs, though; AI has made it trivial to take garbage and make it sound good.
I largely stopped making music because imo unless you're in the top 5% of musicians AI is probably able to write better music than you.
I guess it's the same with visual artists. Unless you're really, really good it's hard to understand why anyone would produce art by hand these days.
It won't be long before this becomes:
> I largely stopped making _____ because imo unless you're in the top 5% of making _____ AI is probably able to make _____ better than you.
Especially where _____ is anything that can be created digitally.
Same goes for the entire genre of tools.