Wrongplanet Musicians' Showcase
AngelRho
Veteran
Joined: 4 Jan 2008
Age: 46
Gender: Male
Posts: 9,366
Location: The Landmass between N.O. and Mobile
it is very difficult to program a music generator.
here is my latest attempt.
it is cacophonous to almost every ear.
i must restrain it somehow with yet another layer of filtering for human mind compatibility. i don't think i will ever achieve that, though.
it's a long way in the future, even for bright minds much more capacious than mine.
https://clyp.it/idebhbwr
AngelRho
Veteran
Joined: 4 Jan 2008
Age: 46
Gender: Male
Posts: 9,366
Location: The Landmass between N.O. and Mobile
It's got potential. Sounds like a fun jazz improv tune hard-quantized to straight 16ths. Back when I first got into music with computers, the quantize button was my friend. Still is! I'd say just gradually ease up on how much you quantize stuff and you'll get that "human filter" you're going for. If it were up to me, I'd take your piano sketch there and see about scoring it for a larger group of instruments. Could be fun.
And I agree that making a "music generator" is tough. I'm not a, um...computer savvy or math savvy guy. So my process is create an object, stare at the screen for 10 minutes, create another object, stare some more, connect the two, push a button, and get frustrated when they don't do anything! I think I've had PD on every computer I've owned for the last decade and NEVER was able to do anything with it. I've gotten a sine wave out of SuperCollider exactly ONCE. I bought Logic back in version 7 when it still required the dongle and cost $1k, and I was so upset the first time I tried to use it and couldn't figure out how to load a plugin. I spent all that money and went, like, a week before I ever got just the basics figured out. So getting this far with PureData is a big accomplishment for me!
I'm still at work on it. I've added time-point sequencers, each one handling different row forms of the series, and they start synchronized. It doesn't handle transpositions and other transformations yet. Maybe I'll get to that over the weekend. I've also experimented with diatonic note collections so the dissonance isn't quite as dense. I'm also plugging in sample libraries along with my synth. I'd also like to write a sub-patch that will quantize chromatic scale input to traditional scales and modes. As it is, there are still some dense tone clusters, just not as harsh as the video I posted. Also working on improved sound quality for recordings. The version I have now is just nice enough that my best friend falls asleep when I'm playing with it.
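Just to sketch the scale-quantizing idea: snap each incoming pitch class to the nearest member of a scale. Here's a rough Python mock-up of the logic; the real thing would be PD objects, and C major is just an example, not what I've settled on:

```python
# Snap chromatic MIDI notes to the nearest pitch in a chosen scale.
# In PD this would be a lookup driven by [mod 12]; the logic is the same.

C_MAJOR = [0, 2, 4, 5, 7, 9, 11]  # pitch classes of the target scale

def circle_dist(a, b):
    # distance around the pitch-class circle, so 11 is "close" to 0
    d = abs(a - b)
    return min(d, 12 - d)

def quantize_to_scale(midi_note, scale=C_MAJOR):
    pc = midi_note % 12
    octave_base = midi_note - pc
    nearest = min(scale, key=lambda s: circle_dist(s, pc))
    # wrap-around ties can land an octave low; fine for a sketch
    return octave_base + nearest

print(quantize_to_scale(61))  # C#4 snaps to 60 (C4; ties go to the earlier degree)
```

Feed it a dense chromatic cluster and out comes something diatonic, which is exactly that extra "restraint" layer you were talking about.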
I'll put up a new video as soon as I've made sufficient improvements. So far the only significant changes are that the clocks start synced, they trigger events based on a time-point scale, and there are numerous cosmetic improvements--each patch window and sub-patch window is neater and better organized, with cable clutter kept to a bare minimum.
but the really fundamental aspect of the creation of music is the human imagination.
some songs are like universal discoveries by emotional and musical geniuses: eternally correct, capturing the human imagination and transporting it to another place of experience.
brilliant recent composers to me are like billy joel (pre 1980) on albums like "streetlife serenade" etc.
their songs are such well realized and conducted renditions of an experience.
but ai stuff, no matter how much work you put into the composition programs, will never produce anything "catchy".
and further to that, one can work for long periods of one's life achieving the right "sounds" synthetically, but then fail to compose anything of note that capitalizes on it.
i don't know really.
you are astute in perceiving that it is quantized to straight 16ths.
but it is really a mess if i do not impose that artificial order on it.
sometimes i have much more fun in playing backing notes to actual songs, because they excite me.
the songs i write somehow miss that mark.
AngelRho
Veteran
Joined: 4 Jan 2008
Age: 46
Gender: Male
Posts: 9,366
Location: The Landmass between N.O. and Mobile
Oh, I absolutely agree.
When I studied composition, we were taught to approach it from a number of different ways. One method that I really like is to take a blank sheet of paper and draw graphically what you want sonically and then write music that does that. You can use symbols that represent specific types of events, have a timeline, etc. I’ve seen students use flowcharts. There’s no right/wrong way to do it.
Some music, to me anyway, is about the process of creation, beginning with some kind of seed, or some kind of “DNA” if you will, and slowly emerging over time. Ambient and some space music has that kind of feel to it. Serial music does, too. So the role of the composer is not to perform at the keyboard or deliberately plot out each note for himself or someone else. For me, it’s more: Here’s where I start, here’s where I’m going. And in between I want to handle each note THIS WAY. I’m imposing these rules on the composition, and it’s going to have THIS shape, and it will express itself like THIS.
Once you break down all your musical decisions like that, you CAN plot out each note, each dynamic, each tempo and timbre variation. However, after years of doing this and not really achieving much with it, I can tell you it’s laborious and time-consuming, not very satisfying, and doesn’t really pay off in terms of your audience. On top of that, humans are prone to errors and inconsistencies. And yes, I know those things are part of what makes human music appealing. But from the composer’s POV it’s distracting.
But if you know you want to follow a specific generative process or method, you really do save a lot of time and energy if you can automate the time and labor intensive bits—“the notes”—and focus more creative energy on the overall shape and nuance along the way.
You are correct that it will never be catchy. But that’s never the point with me. I could probably write a probabilistic algorithm based on existing melodies, but in that case it’s easier just to do that stream-of-consciousness style. No, my goal is for someone not to still be humming the tune, but more for her to remember how she felt while she was there. There’s a reason why generative music algorithms end up as part of art installations in museums and galleries. For the music consumer, this kind of thing is better suited to relaxation or just filling in otherwise dead sonic space.
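To be concrete about what that probabilistic approach would look like: train a transition table on an existing melody, then walk it. A toy Python sketch, not anything I've actually built, and the melody here is made up:

```python
import random
from collections import defaultdict

def train(melody):
    # count note-to-note transitions in an existing melody (MIDI note numbers)
    table = defaultdict(list)
    for a, b in zip(melody, melody[1:]):
        table[a].append(b)
    return table

def generate(table, start, length=16, seed=None):
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = table.get(out[-1]) or [start]  # dead end: jump back to the start note
        out.append(rng.choice(choices))
    return out

tune = [60, 62, 64, 62, 60, 67, 65, 64, 62, 60]
print(generate(train(tune), start=60, seed=1))
```

It will noodle plausibly around the source tune, but "plausible" and "catchy" are very different things, which is my point.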
I don’t believe that diminishes the human creative role at all. It just pushes creativity beyond “the notes” and emphasizes musical creativity beyond the collection of pitches one might choose. In other words, the computer is picking the notes according to rules I would have imposed on myself anyway. Why spend hours on making sure all the correct notes are picked and played at exactly the right time when a scheduler could have done that and spared me all that time better spent elsewhere? All the computer is doing is realizing a composition based on ideas that I, a human, came up with. So the role of human creativity is never lost, even when the performance is computer generated.
there are classes about how to compose, but the historically important songs come from the imagination of those who are gifted with the creation of melodies and sentiments.
here is an unquantized hammond organ accompaniment i played to a split enz song called "i hope i never".
you may not have heard this song in america.
anyway, one important thing to do is to stay back in the background with your levels, so that you do not dominate the stage where much more gifted people are performing.
one cannot quantize an accompaniment to a real song because it will go out of sync every time, and rather rapidly.
so i have fluffed certain chords and tempos, but not many.
https://clyp.it/plygkz52
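to put rough numbers on the drift i mentioned, here is a little python calculation. the tempos are invented for illustration:

```python
# how fast a fixed quantize grid drifts against a live recording
grid_bpm, live_bpm = 120.0, 118.5        # assumed tempos, not measured from anything
grid_beat = 60.0 / grid_bpm              # 0.500 s per beat
live_beat = 60.0 / live_bpm              # ~0.506 s per beat
drift = live_beat - grid_beat            # ~6.3 ms of drift per beat
beats_to_16th = (grid_beat / 4) / drift  # beats until a full 16th note off
print(f"{drift * 1000:.1f} ms per beat; a 16th out after ~{beats_to_16th:.0f} beats")
```

so with even a 1.5 bpm mismatch you are a full 16th out after about 20 beats, which is only ten seconds or so. hence playing it in by hand.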
Wow, AngelRho, that is a great description of the creativity behind generated music. I've not listened to very much of it beyond some of Kaitlyn Aurelia Smith's work, but would like to find more, especially the kind that fills in the sonic space in a way that shapes or fits the mood, like a score. I haven't made any of my own, as my particular interests as a creator are not satisfied by that process, but as a listener, I can definitely be fascinated by the computer-generated patterns created by human-defined rules.
I also noticed on Smith's latest release, that some of the generated sequences seem to have been transcribed and then played on regular instruments (woodwinds and things). Kind of a neat idea.
Interestingly, as I think about generated music, the Solaris (2002) score plays in my head. I'm fairly certain the pieces are not generated, but the score definitely has a similar looping, evolving, hypnotic quality, and it is one of my favorite film scores.
AngelRho
Veteran
Joined: 4 Jan 2008
Age: 46
Gender: Male
Posts: 9,366
Location: The Landmass between N.O. and Mobile
I’m a big Solaris fan, too! Yes, great soundtrack that fit the film like a glove.
The new Blade Runner movie, same thing. The film overall was underrated, IMO. Didn’t remotely compete with the first one, but I enjoyed it and the score.
I never thought of Solaris as generative, but it does have that kind of quality.
The way I’m working with PD right now, you could easily integrate it into a DAW, export MIDI data to Finale or Sibelius, and have a score that a live orchestra could perform. It is certainly within the realm of possibility. And with a live orchestra, you have the added creative layer of the conductor interacting with the other musicians for even more sound sculpting.
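If anyone wants to try that handoff, the MIDI-file step is simple. Here's a minimal Python sketch using the mido library; the note list and filename are placeholders, not actual patch output:

```python
import mido
from mido import MidiFile, MidiTrack, Message

notes = [60, 64, 67, 72]  # placeholder pitches standing in for generated output

mid = MidiFile(ticks_per_beat=480)
track = MidiTrack()
mid.tracks.append(track)

for n in notes:
    track.append(Message('note_on', note=n, velocity=80, time=0))
    track.append(Message('note_off', note=n, velocity=0, time=480))  # hold one beat

mid.save('sketch.mid')  # import into Finale/Sibelius or any DAW
```

From there a notation program will quantize and engrave it, and you're looking at parts an orchestra could read.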
The sample libraries available out there are truly impressive. I use EWQL Symphonic Orchestra. I like it because it’s easy to set up “out of the box” and instantly get good results, although the best results will always be whatever tweaks you make to it. VSL is more detailed and versatile, but it’s also crazy expensive, so EWQL is where I’m putting most of my energy. I also have Garritan’s GPO, and Lumina from ProjectSAM. ProjectSAM is another one of those “out of the box” libraries that, to me, works better for working out ideas and getting inspiration: very beginner-friendly, but not as professional. You can do an entire score by pressing one note on the keyboard. So that makes for quick scoring on a deadline, but after a while everything you compose starts to sound alike. I use it to sketch out ideas, and right now it works wonderfully with my PD experiments. But I think it’s more expensive than it’s really worth.
It would be like going to your favorite high-class restaurant only to find out everything they serve was frozen.
I like that metaphor.
I'm getting back into composing, as I was in a depression for a while that made it difficult to think that hard for that long, haha. The latest thing I put on bandcamp has a couple orchestral pieces, the first and last tracks. The first was done with GPO, Chris Hein Horns, and a couple other libraries I don't recall. It was a lot of work to make them sound like one thing. The other was done after I subscribed to the EWQL composer cloud, so it's mostly those libraries.
I tried out Symphobia, but it wasn't what I was looking for. I'm happy EWQL has the subscription model now, cuz that really streamlines the process when I'm in the mood to work with an orchestral sound.
https://themightysun.bandcamp.com/album ... trumentals
i love my cat so much, i will be destroyed when she dies.
this is her (i thought it was a "him" at that stage).
she turned up in my backyard as a stray, and i immediately adopted her. she quickly warmed to me, and i to her.
so here is a song dedicated to my cat who is called "pumpkin"
https://clyp.it/1haopi4m
i always wanted to play "twinkle twinkle little star" so i recently did it.
it is elaborated on.
https://clyp.it/s00dgdsa
yeah whatever. s**t in terms of business and arrangement, but i like it because i was thinking it all the way home from wollongong.
https://clyp.it/trmkszul