WATCH — The latest music AI tools in action from Never Before Heard Sounds, Harmonai, and more

Over the last few months, four teams working on the bleeding edge of music AI technology led private workshops and demos for the Water & Music community, as part of our Season 3 research on creative AI for the music industry.

During these sessions, the founders gave hands-on tutorials on their products — guiding our community members through how best to use them to produce creative work, as well as answering technical and business-related questions.

In case you missed them, below are short recaps and private YouTube links so you can get caught up.


AI-powered creation with Never Before Heard Sounds

Yotam Mann and Chris Deaner from Never Before Heard Sounds led an exclusive workshop on their unreleased AI-powered creator tool. The workspace, which is fitted with AI-powered capabilities such as stem splitting, is particularly adept at making mashups. Yotam led us through the creative possibilities, and is also giving Water & Music members special access to the tool to create their own AI music concoctions!

View the workshop by clicking the thumbnail above, or navigating directly here. If you’re interested in beta access to the tool, please email yung@waterandmusic.com or find the beta access form in this channel in our Discord server.


Fine-tuning with Dance Diffusion

The team from Harmonai — the open-source music AI arm of Stability.ai — led a private workshop for Water & Music members on fine-tuning our own models using their flagship model, Dance Diffusion. We also got a look at their latest creation enabling text-to-sample audio, a producer's dream!
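The step-by-step training process is covered in the Colab notebooks linked below, but as a rough illustration of the kind of dataset prep involved, here is a minimal Python sketch that resamples a folder of songs and chops them into fixed-length clips. The 48 kHz sample rate and five-second clip length are assumptions for illustration only; check the notebooks for what your chosen Dance Diffusion checkpoint actually expects.

```python
# Illustrative dataset prep for fine-tuning a raw-audio diffusion model.
# The sample rate and clip length below are assumptions, not Dance Diffusion
# requirements -- consult the Harmonai Colab notebooks for the real settings.
from pathlib import Path

import torchaudio

TARGET_SR = 48_000              # assumed target sample rate
CLIP_SECONDS = 5                # assumed clip length
CLIP_SAMPLES = TARGET_SR * CLIP_SECONDS

src_dir = Path("my_songs")      # folder of source .wav files
out_dir = Path("finetune_clips")
out_dir.mkdir(exist_ok=True)

for song in sorted(src_dir.glob("*.wav")):
    waveform, sr = torchaudio.load(song)     # shape: [channels, samples]
    if sr != TARGET_SR:
        waveform = torchaudio.transforms.Resample(sr, TARGET_SR)(waveform)
    # Split into non-overlapping clips and write each one out.
    n_samples = waveform.shape[1]
    for i, start in enumerate(range(0, n_samples - CLIP_SAMPLES + 1, CLIP_SAMPLES)):
        clip = waveform[:, start:start + CLIP_SAMPLES]
        torchaudio.save(str(out_dir / f"{song.stem}_{i:03d}.wav"), clip, TARGET_SR)
```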

View the workshop by clicking the thumbnail above, or navigating directly here. Here are links to relevant Colab notebooks to follow along:


AI music video generation with Pollinations

Pollinations founder Thomas Haferlach led a workshop for members looking specifically at tools to help musicians create video content for their music. Pollinations focuses on improving user interfaces for creative AI models, in an effort to increase accessibility for those of us with little to no coding experience. Thomas showed us how to string together a series of models to create video for music using AI.
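To make the model-chaining idea concrete, here is a heavily simplified sketch (not Pollinations' actual pipeline): it generates a few frames from text prompts with Stable Diffusion via the diffusers library, then stitches them over a track with moviepy 1.x. The prompts, frame rate, and file names are placeholders, and a GPU is assumed.

```python
# Illustrative only -- not Pollinations' pipeline. Chains a text-to-image model
# (Stable Diffusion via diffusers) with a simple video/audio muxer (moviepy 1.x).
# Prompts, frame rate, and file names are placeholders; a CUDA GPU is assumed.
import numpy as np
import torch
from diffusers import StableDiffusionPipeline
from moviepy.editor import AudioFileClip, ImageSequenceClip

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# One prompt per scene; a real pipeline would interpolate between prompts
# and sync scene changes to the music.
prompts = [
    "neon jellyfish drifting through a synthwave ocean",
    "the same jellyfish dissolving into a field of stars",
]
frames = [np.array(pipe(p, num_inference_steps=25).images[0]) for p in prompts]

video = ImageSequenceClip(frames, fps=0.5)          # one new frame every 2 seconds
audio = AudioFileClip("my_track.mp3").subclip(0, video.duration)
video = video.set_audio(audio)
video.write_videofile("music_video.mp4", fps=24)
```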

View the workshop by clicking the thumbnail above, or navigating directly here. Follow along with the video using the relevant links here:


Interactive generative music with AIMI

AIMI is an AI-powered creator tool that puts new possibilities in the hands of artists and lowers the barriers to creativity for all. Founder Edward Balassanian ran an exclusive demo and early preview of their beta tool for our member community — walking through AIMI's business model, creator studio, app, and the unique model features that let AIMI bypass the need for the kind of coveted, copyrighted data we associate with the music training data bottleneck. Enjoy some of the music while you're at it!