Making music with AI? Start with these ethical guidelines
This article was originally published on Resident Advisor as part of Water & Music founder Cherie Hu’s guest-edited month of June, featuring specially curated content exploring new possibilities in music and tech.
Nearly every day, we hear about the acceleration of artificial intelligence technology and its unprecedented potential for humankind, both good and bad. The future is either magic — or we’re all doomed. Imagine the freedom of automating the most tedious tasks, or the nightmare of mass unemployment. Somewhere in between is a rational balance of excitement and anxiety for the possibilities ahead. These mixed reactions are largely driven by our lived experiences with capitalism and the material conditions of the 21st century, where technological development outpaces social welfare.
AI has advanced especially dramatically in the world of music — from tools that can automatically organize your sample library, to music generators that non-musicians can use to create songs with the click of a few buttons, to simulators that can mimic other artists (and everything in between).
The builders and investors behind these tools include small independent start-ups, big tech corporations and venture capitalists. Major labels are simultaneously fighting AI and trying to find their place in it. Some tools are more transparent than others about the sources of the data that trained their tech, or about how they attribute and compensate the artists whose work was included. Legally, it’s unclear how intellectual property will play out, both for the training data sets used to create these tools and for the new music generated from them.
As AI-powered creative platforms become not only more common but increasingly sophisticated, musicians face questions about how (or why) to integrate these technologies into their own music, as well as about the future, or very existence, of their livelihoods. It’s only natural to worry that the rise of AI may lead to the devaluation of human artistry, and it’s reasonable to wonder about the moral pitfalls of using algorithms to generate or manipulate music, too.
Amid these valid concerns lies a largely unexplored realm of artistic possibility. Is there a chance to creatively and ethically leverage the power of this technology to further our work? We can try, but we need an ethical framework to show the way forward.
As a field, ethics is an intellectual and moral discipline with multiple schools of thought, not just the stuff of ancient philosophers and modern day scholars. Ethics are guiding principles that help us determine what’s right and wrong, fair and unfair, beneficial and harmful, and so on. Nuanced and highly dependent on context, ethics are always at play — whether we realize it or not.
In research for Water & Music’s third season, on music and creative AI, a team focused on AI ethics ran a sentiment analysis survey for the music industry. Respondents reported a mix of familiarity, enthusiasm and apprehension about the present and future of creative AI tools. Common concerns included the oversaturation of music and its effect on their own careers. Notably less common — until prompted — were ethical concerns about bias in training data, or about how musicians’ own use of AI might affect the careers of others in fields like writing, marketing and design. This suggests a broad disconnect on ethical approaches to creative AI, one that more thought and intention can hopefully fix.
Consider how the concept of ethics on the internet has evolved over time. A couple of decades ago, the idea would conjure standard decency and lawfulness in basic websites, emails and chat rooms. But now? The internet is social media, television, doorbell cameras, telehealth, banking, shopping, working, the newspaper, video games, everything. The ethics of today’s internet is vast and complex. Our daily life consists of algorithms, content and activities that affect everything from our moods and behaviour to politics and public safety. Best practices, policies and norms probably should’ve been worked out earlier. But based on the regulatory response to the rise of social media, government doesn’t seem set up for the pace of technology. The role of AI in our daily life is going to evolve in a similar way, only the stakes are even higher. We don’t have to be complicit this time around.
Most of us want a better world but don’t know how to act on it. Holly Herndon, a prominent artist in the music AI conversation, has frequently discussed the need for large-scale societal change alongside the advancement of AI, referencing how Gregory Coleman — the drummer behind the Amen break, one of the most sampled sounds of all time — never received royalties and died homeless. We may not all agree on intellectual property and derivative work, but we can all agree that this situation is awful (and easily preventable).
History has proven that we, the people, must get our shit together. Disruptive tech innovation calls for disruptive socioeconomic and sociopolitical betterment. We can’t throw technology at everything, and our humanity has to fill the gap.
The current landscape of ethics in AI
Unlike so much else in new technology, there’s no algorithm for ethics. And while there might not be a one-size-fits-all template, there is a global consensus on the need for ethical practices in developing AI. Numerous resources have been created by research institutions, government bodies and tech companies to guide the ethical development and governance of AI across various sectors.
One caveat of these widely available AI ethics frameworks is that they focus primarily on the research, development and business practices of AI. This is understandable, since the main responsibility lies with AI developers and policy-makers. But addressing the ethics of AI usage by end-users is just as essential.
Most frameworks target non-musical applications like healthcare, facial recognition, surveillance or self-driving vehicles, though there are some useful examples of where AI ethically meets music. Some existing AI music companies have made efforts to develop their tools with proprietary training data sets built from music they obtained with permission (or compensated accordingly). SoundCloud’s Musiio app made excellent use of Rolls-Royce’s publicly available Aletheia Framework. The work of Holly Herndon and Mat Dryhurst offers valuable case studies in ethical AI-generated music. Their Spawning API adds a “consent layer” for building AI projects with data sets that exclude work from artists who haven’t consented, and their Have I Been Trained? tool enables artists to opt out of having their work used in training data. Herndon’s project Holly+ is an impressive exercise in open experimentation for making music using her voice. The duo also speak publicly on the implications of technology and the future for creatives.
Despite the lack of music-specific and end-user-specific ethical frameworks, the Markkula Center for Applied Ethics offers a wealth of valuable resources for developing practical ethics in action. Their Ethical Decision Making resource covers various industries, professional paths and personal use cases for applied ethics, including an evergreen framework for anyone and everyone, an Ethics in Technology Practice guide for tech developers and a section dedicated to generative AI. Adapting their guidance can help bridge the gaps in existing ethical frameworks toward responsible AI use in music.
On developing a custom ethical framework for music and AI
An ideal ethical blueprint for music and AI must be flexible enough to accommodate diverse use cases, but also the dynamic tech, legal, economic and social landscapes. Ethical decisions must adapt to the competitive AI sector, legal grey areas, pending legislation, a music business that is sensitive to change and a working class under pressure. Other factors, such as tech updates, lawsuits, business acquisitions, stockholders, trolls and social media controversies, can dramatically alter the circumstances, requiring a flexible guide for ethical decision-making.
It helps to discern what an ethical framework should not be, just as much as what it should. Tech ethics experts caution that an ethical framework must not be treated as a quota to meet or merely a checklist to complete, but rather as an ongoing process. It’s vital to approach an ethical framework as a verb, not a noun.
If a framework is too broad, it can be hard to implement. It’s also important not to sanitize concepts — reducing real-world harms to quick bullet points without the serious reflection they deserve. This can lead to moral blindness — the inability to see the ethical aspect of the decision you’re making — and moral disengagement, which is convincing yourself that ethical standards don’t apply to you in a given context. Examples of this sanitization include trivializing heavy subjects with euphemistic framing like “loss of work” or “legal trouble.” While glossing over harm is convenient, we must examine the severity and scope of consequences, even when it’s disappointing.
The journey to this framework
The resulting framework is the product of my extensive research, soul searching, iteration, overthinking, and then some more research for good measure. What follows is a standardized set of thought processes inspired by the actionability of Datasheets for Datasets and the Aletheia Framework (two useful and approachable guides on ethical best practices in AI), while modelled largely on the work of the Markkula Center for Applied Ethics, to lead your ethical decision-making in using AI with music.
An ethical approach for music and AI
All AI ethics frameworks start from a basic foundation: that AI should deliver good and never do harm. The main priorities for an ethical approach to AI with music should be to use it for creativity, with integrity and in solidarity.
Deliver Good: AI must produce some kind of clear benefit, such as enhancing creativity, increasing productivity, saving time, connecting with others, expanding your reach, furthering your career or giving back to society.
Do No Harm: Wherever there’s potential for harm, we must (both individually and collectively) try to mitigate that harm with ethical decisions and solutions. Don’t use AI to exploit, deceive, hurt or steal from others, or in ways that devalue creativity, erode talents or skills, cause legal harm, spread financial or economic distress, increase disparity, violate privacy or safety, or negatively impact future generations.
For Creativity: Use AI with a creative, purposeful mindset (not just for the sake of it) to further your work, finish projects, as an art tool, to push creative boundaries, facilitate collaboration, make cool stuff and be the artist you want to be.
With Integrity: Use AI responsibly. Be honest and transparent about your use of AI. Thoroughly vet the set of tools and data you use, as well as the commercial motivations of the third-parties they come from. Do your homework on legal matters, attribution and fair compensation. Always aim for consensual use of others’ work. Stay true to yourself. Resist apathy and complicity about the negative potentials of AI. Hold yourself accountable to the outcomes of your actions, including taking ownership of decisions and not blaming the algorithm.
In Solidarity: Use AI only with respect for your fellow artists and other creative fields, gratitude for music fans and compassion for others. Meaningfully advocate for the best policies, norms and conditions for all of society, and rebel against exploitation. Prioritize the empowerment of the working class over the profits of the powerful few.
This is a step-by-step thought exercise for navigating ethical considerations, making ethical decisions and maintaining ethical outcomes for using AI with music, adapted from the Framework for Ethical Decision Making by Markkula Center for Applied Ethics.
01. Identify ethical issues
- What are the potential outcomes?
- What are the potential conflicting pros and cons?
- What’s the weight of the benefits and weight of the disadvantages?
- Be realistic, even if it’s inconvenient or a downer.
02. Identify facts
- Who are the stakeholders that could be affected? Think about your collaborators, other artists, fans, your music scene, the broader music industry and so forth.
- What’s behind your tech tools? Who are the owners, where is the funding coming from, what is the business model, the terms of service, the training data sources and the quality of that data?
- Think about circumstances like laws, jurisdictions, rightsholders, social factors, public interest and financial risks.
03. Think it through six ethical lenses
Rights lens: Respecting the rights of people at stake — the rights of other artists, the right to meet one’s basic needs, the right to privacy and safety, the right to be given credit and compensation for labour and the right to autonomy.
Justice lens: Fairness and justice, like the equitable distribution of benefits and burdens in society, the equitable balances of things in both music and business, bias in data and discrimination in algorithms, the repair of past injustices especially for marginalized and vulnerable populations and the exploitation of the working class.
Utilitarian lens: Maximize benefits with minimal harms in both the short term and long term: weigh the potential good of increased efficiency, novel creative possibilities, wider audience reach, increased income and tech innovation, versus the potential bad of job displacement, devaluation of human artistry, homogenization of music tastes, pressures of the attention economy and greater power or wealth disparities.
Common good lens: Impacting community as a whole, including the broader and long-term impacts on your music scene, fandom, the music industry and overall music appreciation and pursuit of creative skills within society at large.
Virtue lens: Being your best self and the artist you want to be. Actions that maintain your creative ambition, honesty, loyalty, generosity, self-control and so forth, over vices that lead to dishonesty, laziness, greed or apathy.
Care lens: Taking into account relationships and connections, including the personal and societal duty to take care of your collaborators, colleagues, fans, friends, family and giving back to your community more generally.
04. Evaluate choices, test them
- Which options are the best or worst?
- How can you best proceed with all your concerns in mind?
- How would others perceive this choice?
05. Go forward, then look back
- Reflect on the decision you made.
- Follow-up accordingly.
- Learn, course correct, optimize gains, stay accountable and fix harms.
A healthy coexistence with AI starts with artists leading by example. By embracing an ethical approach with music, artists stand a chance to successfully and responsibly navigate this technology. The principles of delivering good, doing no harm and only using AI for creativity — with integrity and in solidarity — provide a compass for a paradigm shift that’s more ethical for all.
Through the lenses of rights, justice, utilitarianism, common good, virtue and care, artists can carefully evaluate the potential impacts of their AI-driven creative processes. And by actively engaging in this ethical process, artists can better shape the future of AI in music, ensuring that it serves them as a powerful tool for creative expression while still upholding what is essential for a thriving society. Ultimately, artists have the collective capacity to pave the way towards an AI and human partnership that is both transformational and ethically grounded.
Further reading
Noise: The Political Economy of Music by Jacques Attali
The role of the arts and humanities in thinking about artificial intelligence (AI) by John Tasioulas, Ada Lovelace Institute
The global landscape of AI ethics guidelines by Anna Jobin, Marcello Ienca, Effy Vayena
The Role and Limits of Principles in AI Ethics: Towards a Focus on Tensions by Jess Whittlestone, Rune Nyrup, Anna Alexandrova, Stephen Cave
Datasheets for Datasets by Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, Kate Crawford