An Interview with Lancelot Blanchard and Perry Naseck
We had the delight of talking with Lancelot Blanchard and Perry Naseck about their work with Jordan Rudess on AI music technology, bridging the gap between musicians and technology, and the future of generative AI in music.

Could you give people a sense of where you're coming from? What drew you to music and AI, and what is your technical background?
Lancelot Blanchard: I'm a classically trained pianist and very interested in computer science. I studied computer science in London at Imperial College London, and during my studies, I was interested in AI because, I guess, it's our generation's thing. I saw all this research happening when deep learning got big in the 2010s, and because of my music background, I was curious about how and when that was going to be applied to music. The way I saw it, when there's a technological improvement, there's usually a music application for it.
I saw a gap in AI and music: it's either made by musicians who don't have the technical skills to work on it, or by technologists who don't understand musicians, and so they make technologies that aren't necessarily useful for musicians. I don't want to say I completely have both skill sets, because I'm still working on both at the same time, but I feel like I have a foot in both worlds. So I thought it was a nice niche: looking at how we could use generative AI to create tools that are useful for musicians.
When the opportunity with Jordan Rudess came, I thought it was perfect because we had one of the best musicians on earth to be the music part, and so Perry and I could be the technical part and try to create the new tools that will hopefully shape how generative AI is used in music in the future.
Can you describe Jordan Rudess's participation, as well as how you both recently appeared on Rick Beato's YouTube channel?
Lancelot: I think Jordan is really into this project because he sees the potential of it for general music making, touring, and live music performances. Showing it on the Rick Beato channel was a way to get feedback. If you look at Jordan's Instagram account, he posts a lot of AI material, and it always gets reactions, and I think he likes that. He enjoys seeing where people stand, and he loves trying new technologies and seeing how people react to them. Making this AI jump is kind of pioneering, and showing it in front of Rick Beato's whole audience is probably a way for him to gauge interest. It also really aligns with our research perspective, which is to understand at some point how audiences are going to react to it.

Perry, you described what you do as bridging the gap between the audience and AI. If Jordan is far away on a stage and interacting with this model, I imagine it's not necessarily clear what's going on. So, how do you create that bridge?
Perry Naseck: The first thing we realised is that the AI is so good that it's almost not believable sometimes. It's very plausible that it was just a backing track Jordan had rehearsed, because when you go to see Pop Idol or something like that, the majority of performances are backing-track performances. So we need to figure out not only how to explain that to the audience, but also how to turn it into an interaction that is fulfilling in the same way as watching a performer, where you have these very human reactions to another human on stage.
So there are a few things we figured out with that. One is that, in the same way you see someone play every note, you need to see the AI play every note. It doesn't need to be the AI physically playing an instrument, but you need some sort of visual cue that is reacting in real time, visually, in the same way that it is musically. My background is in kinetic sculpture, which I think is an ideal fit for this. When we walk into a performance venue, we're used to seeing lights and maybe some moving set pieces and choreography and all sorts of things, but we don't have any expectation of what a kinetic sculpture is going to do.
So it's similar to a new performer walking on stage and having to develop a relationship with the audience: a new kinetic sculpture also has to develop that relationship, because the audience doesn't have an expectation of what's going to happen. We expect the lights to sync with the beat and scale with the energy of the piece, but we don't have an expectation of what the kinetic sculpture is going to do. That allows it to distinguish itself as actually being the AI, not just another piece of the production.
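To make the idea of "seeing the AI play every note" a little more concrete, here is a minimal, hypothetical sketch in Python. It is not the team's production system; it simply assumes that note events arrive as (pitch, velocity) pairs and shows how each generated note could drive a visual cue in real time.

```python
# Hypothetical sketch (not the team's production code): give the audience a
# visual cue for every note the AI plays, reacting the same way the music does.
# Assumes note events arrive as simple MIDI-style (pitch, velocity) pairs.

def note_to_visual(pitch: int, velocity: int) -> dict:
    """Map one note event (0-127 ranges) to simple visual parameters."""
    return {
        "position": pitch / 127.0,       # spread pitch across the sculpture or stage
        "brightness": velocity / 127.0,  # louder notes flash harder
    }

if __name__ == "__main__":
    # A short phrase the model might emit, rendered as visual cues note by note.
    for pitch, velocity in [(60, 90), (64, 70), (67, 110)]:  # C, E, G
        print(note_to_visual(pitch, velocity))
```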
What do you think of the sentiment that AI is auto-complete on steroids? Is that accurate?
Lancelot: I think it's interesting. I think humans work on predictions. So I think it's also a good model of the way humans think. If I see cars moving or you moving or I hear music, our brains always try and predict what happens next, and we get a reward signal if we are good at predicting. So I think the world works in autocomplete. When you talk about creative fields, I think breaking those expectations is important.
Moving away from that autocompletion, or finding new, creative ways of completing sequences, is the key. So there is some kind of autocomplete, but there are definitely tasks that are less creative, where we just want accurate autocomplete, right? We want the most probable outcome. In creative work, we want something that is both creative and accurate. Creativity is still an open field of research in generative AI; some people believe in it, some people don't.
We are starting to get models where Jordan is able to play and be like, "That's cool. That's fun." I think these are the wow moments where it's, "Okay, perfect." It is autocompleting: it's completing something that Jordan would do. It can diverge, but it's completing in a way that I think is novel enough to keep you on your toes. I think that's the interesting part.
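To illustrate the distinction between "the most probable outcome" and a more creative completion, here is a minimal sketch, assuming a generic next-note model that outputs unnormalised scores (logits); it is not the project's actual code. Sampling greedily gives plain autocomplete, while raising the sampling temperature yields more surprising completions.

```python
# Minimal illustrative sketch (an assumption, not the project's model):
# sampling the next note from a model's unnormalised scores. Temperature near 0
# gives the "most probable outcome" (plain autocomplete); higher temperatures
# make the completion more surprising.
import numpy as np

def sample_next_note(logits: np.ndarray, temperature: float = 1.0, rng=None) -> int:
    """Return the index of the next note chosen from the given logits."""
    rng = rng or np.random.default_rng()
    if temperature <= 1e-6:
        return int(np.argmax(logits))         # greedy: most probable note
    scaled = logits / temperature             # <1 sharpens, >1 flattens the distribution
    probs = np.exp(scaled - scaled.max())     # softmax, shifted for numerical stability
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

if __name__ == "__main__":
    logits = np.array([2.0, 1.5, 0.2, -1.0])  # scores for four candidate notes
    print("greedy:", sample_next_note(logits, temperature=0.0))
    print("creative:", sample_next_note(logits, temperature=1.2))
```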

Jordan mentioned a positive and humanistic direction for AI music. By working closely with one musician, you're taking a different approach from scraping all the data you can find on the internet.
Perry: As we're developing these tools, we start by looking at what the end goal is and what the purpose is. What are we trying to do? We try to focus on augmenting creativity and musicianship. Jordan came to us with a very specific idea in mind: he wanted to be able to play along with a version of himself. So right out of the gate, we had a very clear vision of what would be useful to Jordan. He would train the model, play along, and say, "I want to go back and change this bit or change this training. Why did it respond that way? It should have responded with this." Asking these questions creates a creativity cycle. He's now playing new things he's never played before because of this tool that we are all developing. It's pushing him to explore new bounds that he hasn't necessarily explored before, and that's a very beneficial outcome for Jordan.
I think that kind of customised process is very different from what we often see in the headlines, where other companies are trying to make an AGI-type thing: they've compiled all this music, you type in a text prompt, and you get something. But it's not immediately clear how useful that is to musicians. Maybe you're learning guitar and you need a beat to play on top of, but we already have tools that do that. Or maybe it's being used by people making films who need a score, but then where is the musician in that equation? So we're looking at specific instances of where things are useful and using that to guide how these tools develop.
As a result, the musicians we work with are more appreciative of these tools and come to be fond of them because they're so specific. It mirrors the way that people learn and play music, where you have an individual say in the way that you play and the way that you learn. I think that gets clouded a little when we try to generalise how AI music is going to work into a tool that can do everything.
The model we have for Jordan uses the data that Jordan provided and an open music database. There's no question of rights or of who owns anything: it's Jordan's model. We made the tool, but it's his model, and there's no question of other artists having rights to the mix or anything. It's immediately taken care of because it was purpose-built.
Lancelot: I love this answer, and something I wanted to add, which I think we usually say, is that we really focus on live music performance as well, instead of just building the instrument. I think it's a very good test bed because you have to make an instrument that is fun to play: you want the artist to have fun on stage, because if the artist doesn't have fun while playing your system, it'll be a boring show and the audience will get bored. So I think it's interesting: we have this goal in mind where we cannot cheat. We have to make something that's human-centred and fun, otherwise it's not going to work. I think that is the interesting bit.
