Building a new AI music format, with Bronze CEO Lex Dromgoole

This interview is an excerpt from What’s Good: Creativity in the Age of Artificial Intelligence, a zine co-created with Refraction that focuses on insights, learnings, and creative applications of AI technology.

We originally conducted this interview as part of Water & Music’s Season 3 research sprint on creative AI, and included Bronze in our in-depth roundup of music AI tools.


Bronze is a new technology format that allows music creators to use AI and machine learning as creative tools for dynamic composition and arrangement.

You may recognize Bronze for their generative music experiences with artists like Jai Paul, Arca, and Richie Hawtin, as well as with films like Everything Everywhere All at Once. The UK-based company is currently in the process of scaling their technology into their own no-code platform, where artists can interact directly with Bronze as a tool.

At its core, Bronze’s underlying AI model moves beyond producing a static piece of music to something more like a score — providing a set of instructions that acts as a guide from which many different possible performances can manifest.

Read on for an interview with Bronze co-founder/CEO Lex Dromgoole, where we dive into the underlying technology and philosophy behind Bronze and the role AI could play in musicians’ careers and workflows in the future. The interview has been condensed and edited for clarity.


Introduction to Bronze

Cherie Hu (founder, Water & Music): What is the best overview or quick summary of Bronze as a company and the products that you’re building?

Lex Dromgoole (CEO, Bronze): It’s never easy to distill this down. We’re not a company that’s using AI in its entirety to make music. I guess that’s one of the first distinctions. In terms of Bronze as a business, we are primarily a music company that uses AI as part of our process to create something which extends the capabilities of recorded music past where they currently are. And what I mean by that is we want to allow people to create and, very importantly, be able to release music that has many of the qualities of performance and all of the qualities of recorded music.

And it turns out one of the things AI is very good at is helping us express, as a model, the nature of a performance. Currently, when we create a piece of music, we structure it and we arrange it to be static, to be inert. And we refine it and we distill it down to one specific thing. The aim of Bronze is to allow us to create an arrangement of music for variation, music that always exists within variation, and then release that rather than the static piece of music. AI helps us in some regard because rather than creating a kind of static arrangement of things, we can build models of a performance, or a model and a resynthesis of a type of sound.

And when all of those things are combined cumulatively within a sort of new music production paradigm, you can end up creating something that’s hopefully far more expressive than the current paradigm of recorded music. That’s our goal. That’s what we’re aiming for. We’ve built the foundations of it.

It’s quite ambitious to take on the idea of actually defining a format for music. Especially when you’re building — and maybe it sounds a bit grand to say this — the only format for recorded music that does anything other than change the quality of the music. There’s never been a format that’s changed the nature of the music itself. It’s always been a different type of sound quality, or a different type of spatialization. So it is the first format that can actually do that. And if you’re gonna undertake that, I think you have to stay on top of all of the possible ways and directions in which this could go, and make sure that what you’re building will be able to express that in the future in some way. Because you don’t wanna build something that in three or four years’ time no one’s using, because it can’t service all the domains in which we experience music.

CH: I do think prompt engineering will be gone in like a year. That’s an example.

LD: Explain what you mean by that…

CH: Ok, so I just saw a YouTube video interview with someone who’s a professional prompt engineer. I’m like, what does this mean? So, I think right now in the current limitations of these interfaces, it’s how do you best interact with something like ChatGPT or just the standard OpenAI playground to get the output that you want. It’s assuming the prompt is the default way that we’re going to interact with these models in the foreseeable future.

LD: I was also thinking as well that I think anyone whose point of distinction for their business is that they use AI… if it hasn’t already hit them, it’s gonna mean almost nothing. It’s like saying “we use computers to make music.” Everyone uses computers to make music. It’s gonna be assumed that you’re using it as part of your workflow. And if the only thing that you have is that you are using AI to do something, then you have nothing. I don’t know whether you agree with that?

CH: Oh my gosh. Such an interesting parallel. I think Web3 plays a similar role, or a lot of projects were marketing themselves solely on the basis that they were NFTs and then there’s just a ceiling on that. We don’t care about that. Or, what can you do with it? What’s the actual use case?

LD: Yes, exactly. Is it better? Is it more fun? Is it more interesting?


Contextless music

CH: Content oversaturation is a huge concern for a lot of people and artists; also career prospects, industry employment prospects. Some people have talked about wealth concentration as a big concern, especially thinking about who will power the biggest models that are the most popular… Those are just some examples.

I’m curious if there are any of those, or any other larger-scale effects of this technology, that you think are either overrated or underrated. Effects that people are thinking about or over-indexing for that are probably not gonna have that much of an impact. Or, trends that we’re not thinking about enough that we should be considering more?

LD: Okay. So, let me split it up into some strands.

For different types of people, clearly music serves different purposes. Some people love creating music, and some people just want music. They just want it to happen. And those to me are two almost completely distinct user groups that you would design a product for.

So, someone’s just made a cute video and they want a piece of music to accompany it immediately. They don’t want to make that piece of music. There’s loads of people doing that. We have automatic content creation for that. There’s a discussion to be had about whether automatic content creation via AI is a better solution than the existing kind of library music that is made by humans. I don’t have an opinion on that because I actually think some of the better ones are approaching the same quality, because that music is… I would use the term “contextless.”

CH: Interesting, yeah.

LD: Almost by definition, it’s contextless. I actually was talking to someone about sample libraries recently and saying that’s the same thing … We were talking about sample libraries versus sampling someone else’s music. And my perspective on this is that when you sample another piece of music, that music has a kind of cultural significance to it. It has a cultural context to it because in some sense it has already permeated into culture in some way, and it was created to exist in a certain context by the original artist. Often one of the most interesting things about sampling, particularly in rap and hip-hop, is that it took things and reframed them in a new context. And that was really interesting.

When you contrast that with professional sample libraries and that idea of sampling, which is you construct a piece of music from samples, I don’t find that interesting because I think that a lot of those libraries are actually by design made to be contextless. They’re made to have almost this kind of inert quality to them so that they appeal to the broadest possible number of people. So, in a sense they’re contextless. And actually the only interesting thing about sampling is the reframing of the context.

I’m sorry, that was a bit of a digression.

CH: No, it’s great.

LD: Yeah. So, we have automatic content creation for what I would say is contextless or broad appeal music to accompany something else. And it’s music that we want made. We don’t want to make it.

The vast majority of people who use music software, who are in some sense musically creative, don’t want music made for them. I’ve spent 20 years of my life in recording studios with artists and people who want to spend more time in the studio, not less. Generally, there’s a reason why we spend 18 hours a day in a room together crafting recordings. It’s because we love the process of it. It’s fundamentally enriching, the process of making music, probably more enriching than the completion of a piece of music, and probably more enriching than the release of a piece of music, given that we just put something out on a DSP and it comes up on a Friday on a page with loads of other music.

So yeah. They don’t want to shortcut a lot of the things that maybe some of these products are designed to shortcut, but those products serve the other use case perfectly, right?

CH: Yes. Yes.


Why we need a new format

LD: I think what we’re gonna find is there’s gonna be a split. There’s going to be a whole heap of products that use AI in ways that hopefully remove the kind of paper cuts from the process of creating music — the things that no one wants to do, like naming files or organizing certain things or doing repetitive processes that really don’t contribute to the creative process, but that we have to do as part of the sort of fabric of making music. And I think it can contribute a lot to that in the ways that I described, using a combination of interactive machine learning and probably some of the other things that we’ve talked about as well.

That’s a really interesting part of it. And then secondly, creatively — and this is what underpins the heart of Bronze from my perspective — once people start using AI within their creative process as a way of prompting something unique into being, what they will immediately discover is that it is a model, and they probably don’t want to commit to just one prediction from it. They’ll want to express the model as the piece of music.

Actually, I don’t believe that any of the current formats we have for releasing music work in any domain other than streaming. They don’t work well for games. They don’t work well for any kind of experiential things. If I go to MoMA and experience the Philippe Parreno piece that we did, and it’s something that has a musical arc that happens over three years, that’s far more interesting to me than something that loops every 30 minutes because it’s made using a different system.

So actually, I think the formats that we currently have are only suitable for one domain in which we experience music, which is basically static streaming or recorded music. And in the future, it’s gonna become more and more apparent when we start using AI tools as part of our creative process to discover new sounds or to generate new timbres for parts of our musical arrangements, that we don’t want to commit to one specific iteration of that — we want to express the model itself. That’s where I see it becoming really interesting from a creative perspective.

I think we need to move beyond recordings. We have to express the model itself, not a recording of the model, not one static iteration of the model. 🤖


What our members are talking about

Didn’t have time to drop into our Discord server this week? No worries. Stay up to date right here in your inbox with the best creative AI links and resources that our researchers and community members are sharing each week.

Thanks to @NatalieCrue, @Kat, @brodieconley, @yung spielburg, @aflores, and @cheriehu for curating this week’s links.

You can join the community discussion anytime in the #creative-ai channel. (If you’re not already in our Discord server, click here to get access.)

Music and entertainment case studies

AI tools, models, and datasets

Legal developments