Introducing our revamped music AI database: 130+ tools, models, and datasets
Welcome to our newly updated Music AI Tools database! Whether you are a startup founder looking for market research, an artist looking for tools to augment your creative process, or an artist team member who needs to understand what tools may actually be a value add for your client — our database can help.
We first published this database on May 20, 2022, as part of an editorial series mapping how AI could transform the music industry. In the 20 months since, the number of tools in our database has more than quadrupled, amidst an unprecedented wave of development in music and audio AI at large.
Nearly every big-tech company, music streaming service, and major rights holder now has a stake in the game. The last year alone saw over a dozen new music generation models enter the market — most notably, Google’s Lyria and Meta’s AudioCraft — as well as a deluge of partnerships (and lawsuits) between music AI startups and rights behemoths. Founders are hoping to gain an advantage on training data through direct licensing deals, while labels, publishers, and PROs are scrambling to get ahead of disruption and set new precedents around AI licensing and compensation.
New database features: Measuring market impact
As the quality of music generation models continues to improve at a rapid pace — a phenomenon we've called “music's Midjourney moment” — 2024 will set the tone for which tools are not only technically impressive, but also driving wider consumer and B2B adoption.
Hence, we’ve redesigned our Music AI Tools database with a new focus on tracking not only tech specs, but also market impact.
In our original Apps tab, we’ve added brand-new columns for:
- Monetization models — e.g. is the app free? Charging subscriptions? Focused on B2B licensing?
- Target users — e.g. professional versus hobbyist musicians; solo content creators versus B2B content teams
- Monthly active users — where data is either publicly available or directly reported to our research team
- Social media profiles — so you can more quickly get a sense of a given AI tool’s audience reach
We’ve also added three brand-new tabs for Models, Datasets, and Industry News.
In the Models tab, we’re tracking the rapid development of new models for audio synthesis, style transfer, and other use cases — primarily from big-tech stakeholders including Google, Meta, Spotify, Adobe, TikTok, and Sony — with links to original papers where applicable.
In the Datasets tab, our goal is to provide an easy way (with links where possible) to explore the open datasets used in major music AI models: what types of audio data they contain, roughly how much audio is currently openly accessible, and their associated licensing terms.
In the Industry News tab, we are aggregating all relevant music AI news for industry professionals, spanning product launches and updates, brand partnerships and endorsements, investment and M&A deals, litigation, and lobbying.
How to contribute to our research
This database is the culmination of nearly two years of work from Water & Music’s AI research team (co-led by Cherie Hu, Yung Spielburg, and Alexander Flores), plus contributions from our wider community of founders and music-industry professionals.
It is meant to be a living resource that matches the speed of the AI market. We hope more contributors and companies will participate to ensure that our representation of the music AI market is as accurate and comprehensive as possible.
If you notice a key music AI company is missing, or would like to issue a correction, please fill out this form and we will review your suggestion ASAP. Please note that to maintain the integrity of our research process, we will no longer be accepting anonymous database submissions.
A massive thank-you to all the companies that provided user data for this relaunch, including Endel, Moises, Beatoven.ai, Trinity/CreateSafe, Pixelynx, AIMI, Suno, Musicfy, WavTool, Lemonaide, Splice, Tuney, CassetteAI, Soundverse, Revocalize, and Mayk.it.
How to use this database
Below is a non-exhaustive list of ways to use our database to identify interesting trends in the AI market at large:
1. Identify top music AI use cases and needs
We are currently tracking over 130 tools across 20 use cases.
Our top use cases are:
- Music composition (melody and underlying music) (37.5% of dataset)
- Audio synthesis (20%)
- Timbre transfer (especially vocal transfer) (15%)
- Source separation (i.e. splitting a master into stems) (12.5%)
- Voice synthesis (text-to-sing/rap) (10.5%)
- Lyric generation (10.5%)
Below are a few app highlights from the leading categories:
Music Composition
- BandLab — A highly developed ecosystem of creative tools with 60 million registered users, including the AI-powered SongStarter tool.
- WavTool — A browser-based, AI-integrated DAW for more advanced producers. The product is still in its early stages.
- Suno — Originally launched as a Discord bot; recently announced an integration with Microsoft Copilot that allows users to generate music and vocals via text prompts.
Audio Synthesis
- Splash — High-quality audio synthesis trained on the company’s own samples, with the ability to generate up to 3 minutes of audio on a premium plan.
- Emergent Drums — DAW plugin for generating drum samples.
Timbre Transfer
- Voice-Swap — Voice transfer tool that just crossed 100,000 users, featuring partnerships with the likes of Chicago house staple Robert Owens and Farley "Jackmaster" Funk. To our knowledge, this is the only voice model platform that is artist-run, with artist-friendly watermarking technology and licensing deals.
- Controlla.xyz — High-quality voice model creation and conversion, with a “blend voices” feature.
Source Separation
- Audioshake — Industry leader in terms of B2B partnerships and endorsements; TIME named it one of the best inventions of 2023. It's also noteworthy that Audioshake has a DRM system preventing users from processing copyrighted material, a feature that distinguishes it from some of its competitors.
- Lalal.ai — High-quality, easy-to-use desktop and mobile apps. Notable because they don’t have any DRM monitoring; users can process any master at their own risk.
Aggregators & Tool Suites
- Sounds.Studio — This DAW brings together all of the use cases mentioned above under one roof to imagine the next generation of AI-enabled music creation. They have integrated several major music AI generation models into their interface, including GrimesAI (courtesy of Grimes and CreateSafe).
- Moises — A high-quality suite of AI tools covering the above use cases and a few more, such as chord and key detection.
In contrast to the above use cases, there is still a major market gap in AI tools for mixing (we count only three in our entire database). We suspect mixing has historically been a difficult task to automate, due to the nuanced complexities of the traditional process (e.g. adding and removing plugins, tweaking knobs, and moving back and forth through a session’s version history).
While mastering tools have successfully integrated AI into their features, tools with broader mixing capabilities, like The Strip, are only just starting to emerge. This gap is completely at odds with our late 2022 AI survey, which showed mixing as one of the most desired areas for AI tooling among music creators.
2. Understand target user personas and adoption
The top 5 most targeted user bases among the apps in our database are:
- Professional musicians (68%)
- Hobbyist/casual musicians (41%)
- Solo content creators (17%)
- Professional content teams (16%)
- Software developers (7.5%)
However, with a range of use cases that spans functional features like audio analysis, tagging, search, and transcription, in addition to more creative features such as melody generation, music AI companies are increasingly leaning into other user segments, including game developers, leanback consumers, and even DSPs themselves. Many music AI companies are also launching their own APIs, opening up the possibility of large-scale integrations with B2B software customers, which we classify under software developers. You can view the full list of target user segments in the database itself.
In light of MIDiA Research's recent report predicting 100 million paying users of creator tools by 2030 — with learning and skills-sharing being the most popular verticals — it should be no surprise that “hobbyist” musicians are one of the most frequently targeted user groups in our database, and are concretely driving some of the highest adoption (e.g. over 1 million monthly users for Mayk.it, and 60 million registered users for BandLab). To clarify: we classify tools as targeting “hobbyist” musicians if they make it easier to create without traditional music knowledge or the ability to play instruments, even if those hobbyists might graduate into professionals further down the line.
A note on user data
We last did direct outreach for user data in Q4 2023, and received responses from 18 out of 133 companies (13.5% of our dataset). This sample is too small to support broad industry conclusions about music AI adoption, but it is helpful for benchmarking the influential companies that did participate. We hope more companies will contribute user data as our database grows.
3. Track industry partnerships and endorsements
The “Industry News” table is linked to our “Apps” table — which means you can easily gauge relative levels of press coverage and industry endorsements as you scan through our list of music AI apps.
Let’s walk through a handful of examples:
- Audioshake: Find links to see that Audioshake has partnerships with publishers like Reservoir and artists like Boi-1da, and is also one of TIME's Best Inventions of 2023, as mentioned previously.
- Boomy: Recent news — including a fraud detection partnership with Beatdapp and a distribution partnership with Warner-owned ADA — suggests that Boomy is looking to secure its reputation as a trusted rights holder and rights manager, not just as a creative AI tool.
- Endel: Skim the user data column to see Endel is one of the most proven apps on our list in terms of market adoption, with 1M+ monthly active users for its consumer-facing wellness app (iOS and Android). Then head to the “Industry News” column to see partnerships with Universal Music Group, Spinnin' Records, and Amazon Music, and news about Endel’s $20M+ in venture-capital funding to date from the likes of True Ventures, Avex Group, and the Amazon Alexa Fund.
4. Understand key commercial and legal trends on music’s backend
The “Models” and “Datasets” tabs can give you insight into how some of the world’s biggest tech companies are building their music models, especially when it comes to training data.
We currently have 26 AI models for music generation and understanding represented in our database. Many of the highest-performing models are coming out of streaming services and social media behemoths including Spotify (LLark), Google (MusicFX/Dream Track), Meta (AudioCraft), and ByteDance (Make-An-Audio). All of these companies are intent on integrating AI-generated music into their digital content, which will likely create tensions with rights holders who depend on these platforms for licensing revenue.
The 23 music training datasets in our database cover data types including music/text, speech/text, and music/stem pairings, for use cases including Midjourney-style text-to-audio synthesis, audio analysis, and upscaling. (Importantly, many of the music AI models in our database rely on proprietary datasets that are not listed in the “Datasets” tab.)
A recurrent challenge identified across these datasets is the scarcity of high-quality, labeled data. Consequently, there's substantial value placed on collecting text descriptions associated with well-known audio, often sourced from public datasets (e.g. the UMG-sponsored Song Describer Dataset). This approach is especially intriguing in a music-industry context due to the differing licenses required for the audio itself versus the text labeling around it.
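To make that audio-versus-annotation licensing distinction concrete, below is a minimal, hypothetical sketch (in Python) of what a single audio–caption pairing record might look like when assembling this kind of training data. The field names and license values are illustrative assumptions on our part, not the actual schema of the Song Describer Dataset or any other dataset in our database.

```python
# Illustrative sketch only: a captioned audio clip where the recording and the
# text annotation are tracked under separate licenses. Field names and license
# values are hypothetical, not taken from any specific dataset.
from dataclasses import dataclass

@dataclass
class CaptionedClip:
    audio_path: str      # pointer to the audio file (license governs the recording)
    caption: str         # human-written text description (may carry a different license)
    audio_license: str   # license covering the recording itself
    text_license: str    # license covering the caption/annotation layer

examples = [
    CaptionedClip(
        audio_path="clips/track_0001.mp3",
        caption="Laid-back lo-fi beat with warm electric piano and vinyl crackle.",
        audio_license="CC BY-NC 4.0",   # hypothetical value
        text_license="CC BY-SA 4.0",    # hypothetical value
    ),
]

# Text-to-audio training pipelines typically iterate over (audio, caption) pairs;
# tracking the two licenses separately reflects that the recording and its
# annotation can be cleared under different terms.
for ex in examples:
    print(ex.caption, "->", ex.audio_path)
```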
Additionally, certain datasets are specifically designed for evaluation, not training. These are often released alongside major generative model papers, providing a standard set of audio for final inference and enabling comparative analysis with previous models. We’ve labeled evaluation datasets explicitly in our Datasets tab under the “type” column.
Whether you're an industry veteran, an emerging artist, or a tech enthusiast, we hope our database offers a wealth of dynamic insights that enhance your understanding of and engagement with music AI as it continues to evolve in 2024 and beyond.
We encourage you to delve into our data and reach out at members@waterandmusic.com with any questions or feedback, and/or submit new data or corrections via this form.