9 design principles for a musical metaverse

You can view a truncated version of this report as a slide deck by clicking here.

tl;dr — Building off of Matthew Ball’s Metaverse Primer and our interviews with dozens of artists and industry professionals, we fleshed out a set of nine artist- and fan-centric design principles for building next-gen musical metaverse experiences, ranging from individual and social modes of expression and collaboration on the frontend, to interoperable and decentralized data infrastructure on the backend.

For each of these principles, we look at both the current state of said feature’s application across virtual world platforms (the “metaverse state”), and the many possible future states and applications of that principle on the horizon (the “hyperverse state”). The intended audience for this framework is anyone — especially artists, their team members, and their partners — who is excited about the creativity that the idea of the metaverse can catalyze for the music industry, but who needs a conceptual north star on where to start.

This report is part of our Season 2 research on music in the metaverse, which will be rolling out from July 7 to 15. You can follow along with the report rollout at stream.waterandmusic.com.


Click directly to jump to a section of your choice.


9 design principles for a musical metaverse

1. Individual sense of presence
2. User-generated content (UGC)
3. Massively scaled (# of users)
4. Scaled synchronicity (concurrent users)
5. Real-time rendered 3D virtual worlds
6. Persistence of worlds over time
7. Continuity of data across worlds
8. Interoperable network
9. Decentralization


The metaverse might be one of the buzziest, most polarizing, and most confusing concepts in modern technology.

First coined in 1992 by sci-fi writer Neal Stephenson, the term “metaverse” has since taken on a life of its own, sparking several sprawling definitions and reports among tech thinkers, researchers, investors, consultants, and bloggers alike. Instead of merely being a figment of fictional imagination, the metaverse now has a reputation as one of the highest-leverage points for driving groundbreaking innovations in gaming, 3D design/rendering, digital identity, virtual/augmented/mixed reality, creative AI, and much more. (Show us a technology, and we’ll show you someone who has branded it “metaverse.”) And thanks to developments like Facebook’s name change to Meta, brands all over the world are begrudgingly embracing the metaverse as a necessary long-term orientation for their businesses; depending on whom you ask, the long-term market opportunity sits at anywhere from $8 trillion to $13 trillion by 2030.

Forever influential arbiters of culture, artists and music companies are now jumping on the metaverse bandwagon at an unprecedented clip, envisioning entirely new paradigms for artistic creation and fan engagement. Even a few years into the COVID pandemic, indie artists and major celebrities across genres are still investing in metaverse-centric partnerships with brands in VR, gaming, and more in an attempt to cut through the noise and reach new audiences. (Companies like CAA, Warner Music Group, and Billboard are also hiring for metaverse-specific roles to help artists accomplish just that.) According to our own research, music metaverse startups — catering to virtual/augmented/mixed reality, gaming, livestreaming, creative AI, and other applications — have raised $2.4 billion in venture capital funding in the last 18 months. Major M&A moves may also be pulling unsuspecting artists further into the metaverse discourse by force: As of March 2022, one of the world’s biggest gaming platforms, Epic Games, owns one of the world’s largest independent music marketplaces, Bandcamp.

This concentration of activity and investment compelled us to make music’s role in the metaverse the focus of our Season 2 research, as we continue to explore and analyze opportunities for music-industry innovation.

That said, we immediately ran into a major research challenge: nailing down just what the metaverse is (and is not). As we wrote in our announcement post, the metaverse suffers from several conflicting narratives competing for attention. Mark Zuckerberg is building a highly centralized vision for the metaverse via Meta at the same time that underground, digital-native music communities are experimenting with shows in Web3-native virtual worlds like Decentraland and Voxels. Leading tech thinkers like Matthew Ball are painting an idyllic view of a metaverse characterized by 100% interoperable, persistent worlds that we can pass through freely; meanwhile, many artists and celebrities are launching their own isolated virtual worlds and calling them “metaverses,” even though these worlds are the complete opposite of interoperable: they don’t talk to each other on an infrastructural level, and they often exist online for only a few hours, limiting the opportunities to explore and build long-term community.

The lack of clarity around what exactly all these businesses are throwing these billions of dollars into — and, more importantly, why — can be alarming to outside observers. At the outset of Season 2, we sought to combat this confusion by developing a usable definition of “the metaverse” that would be legible and actionable for a music-industry audience. After all, a shared understanding within our fast-growing community of researchers as to what exactly the metaverse constitutes is crucial for developing a coherent, reliable thesis about where it might go, in music and beyond.

That said, given the chaos of the current landscape, we ultimately concluded that the first step in providing a believable vision of the metaverse is accepting that a perfect, static definition does not exist. Public attitudes around the term are constantly shifting, and innovators are constantly iterating from the ground up, moving the goalposts for what’s technologically possible — such that trying to apply a bounded, idyllic definition top-down might be a lost cause.

Hence in our research, while we did spend several weeks studying more theoretical and technical definitions of the metaverse, we also spent much of our energy in the latter half of this sprint working backwards from the point of usage and practice. Arguably the most effective way to capture the state of music in the metaverse today is to meet artists, rights holders, developers, and other music-industry stakeholders where they’re at — capturing how they are using the technologies currently at their disposal, how they understand the metaverse today, and what they are hoping to achieve in it.

By comparing this grounded analysis with an examination of more theoretical frameworks, we can cut through the media hype and paint a clearer, more realistic, and more actionable roadmap for what truly groundbreaking, digital-native music metaverse experiences can look like.

What follows is not a hard definition, but rather a set of artist- and fan-centric design principles for building musical metaverse experiences. The intended audience for this framework is anyone — especially artists, their team members, and their partners — who is excited about the creativity that the idea of the metaverse can catalyze for the music industry, but who needs a conceptual north star on where to start, and where we might be going. We wanted a distinctly W&M framework for the metaverse to reflect not only pre-existing definitions already put forward by technologists, but also the ways that the metaverse is perceived by those actively working in the music industry today, guided by these professionals’ creative visions for how music might look and feel different in an interoperable, interconnected, and decentralized world.

[return to table of contents]


Our open, community-based approach to laying out design principles for the music metaverse takes into account both an ecosystem-level point of view and individual-level perspectives.

Ecosystem level: Weekly reading groups

From an ecosystems perspective, we used investor and writer Matthew Ball’s nine-part Metaverse Primer — first published just over a year ago in late June 2021 — as a starting foundational framework for our understanding of the metaverse. While many other definitions for the metaverse exist, the frequent citation of Ball’s work on the metaverse in other arenas gave us confidence in its strength as a starting point for our own work. (It’s worth noting that Ball’s own book on the metaverse comes out on July 19, 2022, shortly after our Season 2 research rolls out fully!)

With the help of W&M’s tech lead Alexander Flores, we read and annotated each part of Ball’s primer using Hypothesis, then discussed each section via weekly group calls in our members-only Discord server, unpacking how each concept featured (virtual platforms, payment services, content/assets, etc.) could apply to the music industry. The weekly reading groups allowed us to create a space for group learning around otherwise esoteric or inaccessible technical concepts, and to benefit from the knowledge and backgrounds of our community members.

Individual level: Industry interviews and brainstorms

In parallel, we also wanted to take a more grounded, individual-level approach to defining the metaverse, with a particular emphasis on how those working in and around music today view the concept.

As with previous season sprints, we took an inherently social, community-driven approach to Season 2 from the beginning. We kicked off our process with a live FigJam brainstorming session, in which we asked our members to define the metaverse in their eyes, and to recount their best (and worst) musical metaverse experiences to date. This immediately identified a handful of themes to focus on in our theoretical and practical research in terms of what excited our community the most about the topic — namely new value flows, new creative tools, shared digital experiences, and more interoperable, interconnected worlds.

Then throughout the season, over 15 of our community members undertook qualitative interviews with nearly 30 individuals working on metaverse-related projects, including musicians and artist managers, software developers, virtual-world builders, metaverse music promoters and marketers, and more. (You can see a full list of contributors and interviewees at the bottom of this article.) To get a sense of how the emerging metaverse is constructed in the minds of those working in both music and tech, we asked each of our interviewees to 1) define the metaverse in their own terms, 2) explain what inspired them to explore the concept of the metaverse, and 3) articulate their grand vision of what happens when their product or project achieves its goal. Many of these interviews were also conducted openly in our member Discord server, in the spirit of cultivating a community-driven learning and research culture around a topic that might otherwise feel sprawling or confusing.

Inclusive and emergent > exclusive and bounded

The final stages of our process involved repeated group walkthroughs and discussions of how various features listed in Matthew Ball’s metaverse framework were or were not present in the music-specific metaverse projects being used and built today. This process was complex, but produced what we consider to be a current, emergent, and evolving understanding of the metaverse that was more usable for a music-industry audience.

Ball’s particular definition of the metaverse reads:

“The metaverse is a massively scaled and interoperable network of real-time rendered 3D virtual worlds which can be experienced synchronously and persistently by an effectively unlimited number of users with an individual sense of presence, and with continuity of data, such as identity, history, entitlements, objects, communications, and payments.”

There’s a reason Ball’s framework has been cited so much: It’s an all-encompassing, far-reaching definition, combining aspects of both frontend user experience and backend data/network infrastructure. And yet, in gathering data on music-industry usage around tools and programs, we had to accept that the current state of the metaverse, defined in a sociological way via interviews, is far from meeting all the criteria that Ball’s bounded, ecosystem-level definition requires. The ultimate rub is that in exploring how music fits into the metaverse, it would not make sense to leave out certain worlds and tools with high engagement and potential simply because they did not fit in an idyllic, predetermined definition from the outside. Such an exclusion-based approach would be limiting for our research, which is focused on developing a broad, systems-level perspective of the ways that music intersects with metaverse platforms.

For instance, as part of our interviews, we spoke with the artist Jagwar Twin about their current project — a self-described metaverse that includes music releases paired with various elements of user-generated storytelling and worldbuilding augmented by technology (namely, web design, conversational AI, and the use of NFTs), but does not include a persistently rendered real-time 3D virtual world as part of its outputs. Hewing to a strict, bounded definition might see their project excluded from our research. However, we feel that their project offers useful insight into the ways that artists are employing technology and the concept of the metaverse to create new ways (and worlds!) through which to collaborate with their audiences and mediate those relationships; excluding their project from our research scope would rob our work of important context and depth.

Disclaimer: Contributors’ affiliations with music/metaverse companies

Some of the contributors to this research project are also active founders, leaders, employees, or contributors in the music and metaverse companies/communities that we cover. As in our previous research, Water & Music takes the position that embedded affiliation within a community should not prohibit one from writing about it. On the contrary, we believe that being embedded in a community allows individuals to develop a more nuanced understanding of how that community thinks and behaves, which arguably leads to better writing and critique.

That said, as a matter of W&M policy, we require that all community contributors proactively disclose any affiliations that they may have with organizations we study. We confirmed that none of our community interviewers were directly employed or affiliated with the specific platforms they were assigned to cover. Our proactive disclosure policy aims to ensure that our process is fair, that we are as transparent as possible with our audience, and that our researchers maintain an appropriate degree of separation from the subjects in our research.

[return to table of contents]

9 design principles for a musical metaverse

Drawing from the combination of our group reading of Matthew Ball’s metaverse primer, our qualitative interviews, and our ongoing community input on what features and experiences are most important to a musical metaverse, we arrived at the following set of features and design principles.

These elements, which we go through in detail in the following sections, are ordered roughly on a spectrum from frontend user experience (and, in turn, the levers that artists can most immediately pull today) to backend infrastructure (i.e. those features that will take much longer to implement, and whose responsibility and opportunity for impact may fall outside of the music industry). For each, we explain why we feel a given feature is an important (or less important) part of the ultimate concept of a metaverse, with some context around music-specific implementations.

Given the persistent gap between people’s wildest imaginations of what’s possible and the reality of what can be accomplished technically today, we needed a way to avoid confusion in our community when discussing the present state of the metaverse (lacking many features) versus its potential future state (containing all features).

Hence in this article, we distinguish between the current state of each principle’s application across virtual world platforms (the METAVERSE state) and the many possible future states and applications of that principle (the HYPERVERSE state). For both states, what’s ultimately most important for our framework is to provide an entry point for artists and artist teams to start experimenting in these ecosystems today.

It’s important to note that while we address these features separately in this report, they are not completely discrete; rather, they are interdependent, and must work together to make a large-scale vision of the metaverse come to life. We often discussed multiple principles simultaneously during reading group sessions, illustrating how they rely on each other in nonlinear ways as technology progresses.

[return to table of contents]


An individual sense of presence in the metaverse is driven by a combination of identity, or the ability to exist and be recognized as an independent entity in the metaverse (i.e. to “feel seen”), and expression, or the ability to voice one’s feelings and enact agency over one’s own possible paths in the world.

Many current metaverse experiences begin by guiding users through building and customizing their avatar. As Jon Vlassopulos, metaverse investor and former Head of Music at Roblox, tells us: “The metaverse is a place to have an embodied presence as an avatar, a space where you can play and work.” Design features that encourage self-expression kickstart a crucial narrative of the coexistence of diverse identities in the metaverse, regardless of one’s IRL appearance. It’s not just “come as you are,” as Nirvana pointedly stated; it’s “come as you want to be.”

It might be tempting to think that vividness — defined by rich, complex visual environments — is essential to establishing an individual sense of presence in the metaverse. It’s undeniable that metaverse worlds rendered in high definition enhance the immersiveness of metaverse experiences as a whole. However, we ultimately found that vividness is not a necessary requirement for maximizing one’s individual sense of presence specifically. Rather, presence can emerge in the most rudimentary ways, through environments that prioritize compelling character development and lore over richly designed worlds alone.

In our interviews with young adults, we found that it was not the most graphics-laden games that produced long-term value for them. Stickiness instead comes from compelling experiences, often based in detailed, fast-paced, community-driven storylines that give users more visibility and agency over their future paths in these worlds, as opposed to bombarding them with high-fidelity graphics that can interrupt game flow with slow rendering.


Based on our interviews and community discussions, we believe an individual sense of presence is one of the primary factors that music fans and communities consider when deciding whether they want to invest more time in a given metaverse experience in the long term.

Nestled snugly in the middle of Maslow’s Hierarchy of Needs, right above physiological and safety needs, is a sense of love and belonging. This core tenet of the human condition doesn’t go away in the metaverse; in fact, in many cases, the need to be part of a thriving community is amplified as interactions become increasingly remote and digital-first.

A key insight from our research is that individual presence is a necessary ingredient for social immersion on any scale — whether we’re talking about dozens of people in a Voxels party, or millions of gamers in a Fortnite show. Supporting an individual sense of presence within a community can be a major differentiator for users, especially for those who don’t otherwise feel comfortable or seen in real-world environments. As Jordan Cuddy, Chief Client Officer at technology design and experience agency Jam3, posited: “If we can create worlds where people can feel less alone and where they can feel their most self, that’s a mission worth undertaking.” Second Life expert Wagner James Au told us similarly in an interview: “A lot of the users are very shy, uncomfortable in real life, but through the anonymity (of these realms) are more willing to collaborate.”

The importance of musical experiences as a catalyst for the type of social immersion discussed above came up repeatedly across our interviews, especially from those artists building grassroots musical experiences in the metaverse. Vandal, the prolific artist, metaverse venue builder, and event promoter, told us that for the grassroots musical communities forming around decentralized platforms like Voxels and Decentraland, it’s “…more about meeting people, coming together in the chat, sharing, and doing it live.” This sentiment was echoed by artist and metaverse community builder EZ, who spoke with us at length about the multiple music-focused digital communities they are involved in — MetaJAX, Cipher, and more — and the ways that these communities enable meaningful connections that also support artist growth: “The people who will like you are out there. Just keep pushing and find the people who will connect with you and your community.” It would seem that, just as IRL, music experienced in the metaverse provides a substrate through which individuals can find both wider community and personal meaning.

Audio quality also plays a part in establishing a feeling of presence within the metaverse, especially in music-specific contexts. Jacqueline Bošnjak, founder and CEO of Mach1, a company that develops spatial audio technology, explained the importance that “high-quality lush, buttery mixes that envelop you” play in creating a sense of presence in the metaverse, observing that “you listen with your whole body!” Good audio creates immersiveness, and also supports users’ ability to both develop their own stories and become a part of others’ stories within the metaverse.

For music specifically, it shouldn’t come as a surprise that the most engaging virtual concert experiences have boasted high-quality sound as a prerequisite, owing to the production capabilities of game developers. That said, we also know from the history of the MP3 and streaming services that most users don’t care about audio quality as much as some of us like to think. Much more important is the ability to connect with others and drive social experiences with minimal friction. In our interview with music metaverse platform Ristband, for example, we learned that their users mostly just wanted to hear the music and weren’t that bothered if that sound matched the natural reverb of a live concert hall.


Given that the current state of virtual world platforms varies so widely, assessing how each may or may not enable an individual sense of presence is really a matter of assessing the individual characteristics that contribute to that feeling of presence.

Avatars play a key role in enabling user communication and interaction, as well as providing a canvas through which users may express their identity. In several of the platforms we visited, users were able to outwardly express their emotional state through their avatars. In Nowhere, this was done simply through users selecting emojis to appear above their avatar (which were rendered as floating orbs). In both Second Life and Decentraland, fans enjoying a virtual concert could have their avatars dance along with the performance, with many even engaging the flying functionality to float or boogie in space above the dance floor (see screenshot below). While it may sound odd to those who have never experienced a virtual performance, Shelley VanWitzenburg from the TRU Band Room told us that communal avatar dancing produces meaningful emotional connections between and among performers and their audience members.

A screenshot of audience members throwing down on the dance floor during a virtual performance within the TRU Band Room in Decentraland.

There are, of course, still some limitations to the sense of presence enabled by user avatars in the metaverse. Across our community metaverse visits, avatar movement was often restricted to just walking around and conveying limited emotion. There were also differences in the emotional impact created by platforms that had users view their avatars from a third-person perspective (looking at the avatar), rather than a first-person POV (looking through the avatar’s eyes). In some cases, community testers even expressed the feeling that playing as an avatar, no matter how well rendered, felt unsettling in and of itself — conjuring the feeling of the “uncanny valley.”

The means for self-expression through avatars are also often wrapped up in a platform’s own financial incentives. In-game economies are designed to maximize profits, and so decking out an avatar in wearables as a means of defining one’s identity is often financially inaccessible to many (though we did speak with some virtual concert promoters who emphasized the free distribution of wearables within their communities). This financialization of identity reproduces many of the same issues that we face IRL, with social status and perceived success becoming a function of one’s class or financial position.

In terms of social interactions, a basic standard for most platforms is to allow for some form of in-world text or voice chat among users. Through our community explorations of specific metaverse platforms, we realized that not all voice chats are created equal. In some worlds, such as Nowhere and Second Life, audio chat was proximity-based, with users only able to interact with those whose avatars were nearby (i.e. recreating the bounds of interaction IRL). Though there was also a “God-mode” in Nowhere where session hosts could speak to everyone, the main proximity-based audio function was both confusing and limiting for our community testers. One tester noted that it was “very easy to feel alone [in Nowhere],” alluding to the relational aspect of sense of presence elaborated above.

Finally, we would be remiss if we didn’t mention concerns around safety and diversity in the metaverse — which present major roadblocks to realizing one’s sense of presence, insofar as users must feel comfortable enough to express their true identity and feelings. Returning to Maslow’s hierarchy of needs, safety is an essential precursor that must be met before an individual can come anywhere near self-actualization. Unfortunately, as the digital world closely mimics the real one, a glaring, ever-present issue is protecting users from sexual assault and harassment in the metaverse. As the metaverse continues to develop, our hope is that builders, including artists and fans, can continue to learn and create mechanisms that protect all users, by building communities with enforceable policies (and technical solutions) covering all forms of harassment.


As stated above, we consider the hyperverse to be the potential future state (containing all features) of the metaverse and more. For this to be fully realized, we see the need for vivid and interactive experiences that engender a sense of presence to become the norm, not the exception. We anticipate that only with improvements in the quality, interactivity, and scale of virtual communication and visual identity (i.e. avatars) will users be able to experience worlds where they can be who they want to be (highest sense of self-expression), do what they want to do (more options), and create communities they want to be in — all without friction.

There will also likely be new developments that further allow for human sensations, across all five senses and a myriad of emotions, to be engaged in the hyperverse. We know that producing a sense of touch in virtual worlds is quickly becoming possible thanks to breakthroughs in haptics. Other possibilities include enhanced levels of smell (sprays) and feel (wind/fan, cold/heat, mist/rain) when engaging in the metaverse, and even the potential for chewing. We also heard, when speaking with interviewees from Gen Z and Gen Alpha, that some anticipate the hyperverse will move beyond the VR headset, with a neural link à la The Matrix being a possibility. Similar to what Elon Musk’s Neuralink proposes, this would completely redefine what it means to be human, in a move that goes far beyond presence in and between worlds and towards a different state of being.

[return to table of contents]


User-generated content (UGC) refers to any asset made on a platform that was not created by the platform itself. As metaverse platforms scale, there may simply be too much space and too little time for the platform itself to be the only source of content, such that UGC infrastructure becomes an increasingly important strategic investment. UGC also provides a source of creativity in content development, with users ultimately able to build on top of the initial ideas seeded by a given UGC-enabled metaverse platform, producing results that could not be achieved internally by the platform alone.

Gabe Newell, co-founder of Valve (the studio behind the legendary Half-Life series), discusses the creative advantages of enabling UGC in building virtual experiences, noting that users will “…start innovating. They’ll do a bunch of obvious stuff, they’ll do a bunch of terrible stuff, and they’ll do a bunch of amazing stuff that’s hugely valuable and illustrates why the open community-oriented approach is the right one for a wide variety of post-Internet services and products.” Put simply, the level of creativity from a closed group is unlikely to rival that of the open internet.

We think of UGC capabilities in two ways — single-player and multiplayer — each of which has different social and technical implications. Single-player UGC, which is common in many single- and multiplayer games, has a limit on the number of people in the chain of creation. An example of this is making one’s own avatar, or designing skins or other personal items that are restricted to the creator’s use only. Platforms that let users leverage the assets made by other users facilitate multiplayer UGC environments, in which case the possibilities, in terms of the scale of users and derivative content, extend ad infinitum.


UGC is very closely tied to individual sense of presence in the metaverse. Larger gaming platforms like Roblox and Fortnite point to how impactful it can be to put the power of creation in users’ hands and give them the ability to influence their environment, regardless of the quality of the environment’s graphics. Despite incredibly simplistic and pedestrian visuals, Roblox has a staggering 43.2 million daily active users and 9.5 million developers building on the platform. Likewise, Tim Sweeney, CEO of Epic Games, recently told Fast Company that “about half of Fortnite play time by users is now in content created by others, and half is in Epic content.” For both Roblox and Fortnite, giving users the ability to influence and create their environment adds depth and interactivity to the experience by allowing them to activate their own creativity and sense of personal self-expression.

Multiplayer UGC in particular lends itself well to musicians and artists engaging directly in creation with their fans in their own metaverse experiences. When artists open up their IP for fan use (the ability to use their name and likeness as an avatar, on verch, or to play their music), this form of UGC can also unlock viral potential in and across worlds. We can look to the rise of decentralized metaverses such as Voxels and Decentraland as hubs for multiplayer UGC experimentation — where there are countless examples of artists building custom, publicly accessible venues in which they perform livestreamed concerts, hold art shows, and engage with their fans directly, growing their communities in the process.

Lexicon Devils, a group of artists who self-describe as metaverse architects and experience designers, have helped a number of groups create event venues within Voxels, including their own headquarters, where they frequently host livestreamed concerts from artists working in punk rock and related genres. Spinkick.eth from the Lexicon Devils team described to us the parallels between creating musical experiences for the metaverse and musical creation itself, noting how working within their team “kind of feels like being in a band” and produces “a real sense of community.”

Beyond empowering artists to build out and create their own musical experiences for the metaverse, the presence of UGC also provides the foundation for broader music-industry infrastructure to develop around metaverse platforms; after all, artists, not just fans, are also metaverse users. Lexicon Devils are a good example of this. While they are a group of artists on the surface, they formed explicitly to build metaverse venues and to promote and host events for others, including artists who lack either the technical skills or the time to do it themselves.

This mimics the role of IRL venue owners, who often also act as bookers and concert promoters. Metaverse consultancy Bittrees similarly sprang up to meet the needs of artists and musicians wanting to build in the metaverse, providing a full suite of services including building and managing metaverse venues, as well as promoting concerts and events. Their co-founder, Ian Prebo, told us that they see their role as more than just venue builders: they are also “community builders” who help to “facilitate onboarding to metaverse worlds.” For example, they provide organizational support to community-driven musical events like the Metaverse Music Fest, a concert series that takes place in venues across several metaverse platforms and spotlights emerging artists.

As artists move into the user-generated metaverse, we will see increasing amounts of music-industry infrastructure and jobs built out to support artists as they directly create new experiences alongside their fans. We expect that many of these jobs will mimic traditional industry roles, such as booking agents, record labels, and sound engineers, while others will be entirely new, like metaverse concert UX designers.


UGC already permeates so many platforms and experiences, both in the metaverse and on social media at large. But when creating in a centralized platform like Roblox, developers take on a huge amount of ownership and censorship risk without the ability to migrate their games or audience. As a music-specific example, developers have free rein to create their own music-related games and experiences in Roblox, such as RoBeats, which was created by a third-party team and has registered over 220 million visits to date. If Roblox decided to make even small tweaks to the ways it handles music that negatively impacted gameplay, the developers of games like RoBeats would have little recourse other than to complain to Roblox Corporation.

Much of the music-related UGC being leveraged at the moment in metaverse worlds revolves around user- and artist-built venues, which artists secure either by building their own or by partnering for performances. On decentralized metaverse platforms such as Decentraland and Voxels, for example, UGC is the name of the game, with many music venues being designed by enterprising users and developers. The Band Room in Decentraland, originally designed as a supplementary attraction for The Rocking Uniquehorns NFT project, has since become an ongoing venue in its own right, hosting over 200 shows in the last 180 days. Soundsplash, an event series from the Web3-focused record label DAORecords hosted at the label’s Voxels HQ, promises 12 weeks of artist performances on a purpose-built, water-themed event stage, where users can hang out in the pool while taking in a set from their favorite artist.

A screenshot from the Soundsplash event stage, located at the DAORecords headquarters within Voxels. During events, the central stage area is filled by a livestreamed artist performance.

One performance-related consideration around UGC is that assets created in single-player UGC environments are often in more of a final form and can be optimized for that user’s computing resources, and therefore have more predictable graphics performance within an interactive experience. In multiplayer UGC environments, there are more unknowns around what other users may try to do with an asset, or what types of computing resources they have, making performance and experience quality less predictable.

In the cross-platform world VRChat, users can create their avatars from scratch using the free and open-source 3D creation suite Blender. This gives users incredible freedom to express themselves to the limits of their own imaginations and design abilities — with the caveat that some avatars are so detailed that other players’ hardware cannot handle them. (It is possible to be too expensive to look at!) To address the resulting lag and reduction in quality, VRChat had to provide a setting that lets you dumb down other players’ avatars in your own gameplay as a way to free up ever-scarce CPU resources.


In the long term, we believe the hyperverse manifestation of UGC will look the most different not just in how it is made, but also in how it is distributed, attributed, monetized, and owned.

Understanding UGC in a music context naturally brings us to addressing rights and compensation: The capital flowing into games and virtual worlds today is staggering, but who actually owns the assets that are being created? As we discussed briefly above, the current state of the metaverse is such that everything created on a centralized platform like Roblox or Fortnite can only be utilized on said platform, and moreover is totally exposed to the risk of the platform closing, changing policies, or banning and censoring a user. In the future, the biggest change we expect to see with respect to UGC is that it will be truly user-owned and deployable across multiple hyperverse environments.

This is also where UGC intersects with some of the other features discussed in this primer, such as interoperability, continuity of data, and decentralization. For UGC to be fully user-owned and controlled, it needs to be fully independent of the platforms on which it exists, with content creators having full agency over how it’s deployed and used by others. While the metaverse is not inherently the same thing as Web3 (though we heard this conflation often in our interviews), one of the solutions most often offered up to the problem of true user ownership is the use of blockchain technology.

The basic requirements for portability and interoperability of UGC are:

1) Standardization of file types for content across the hyperverse — virtual wearables, for example, all need to adhere to the same file format to be interoperable and portable across worlds. Achieving this will require extensive coordination and cooperation from those building hyperverse platforms.

2) A shared database or storage medium from which UGC data can be freely accessed by all hyperverse platforms — this is where the blockchain comes into play. While storage of the actual assets may still occur on more hybrid, centralized infrastructures, employing on-chain data mediums such as NFTs for UGC metadata makes that metadata easily available for use across hyperverse worlds. The format also provides the added benefit of establishing provenance, with credit for UGC directly attributed to its original creator, and ownership based on who holds the NFT.

Both requirements are necessary if we are to have a hyperverse where, say, a virtual bucket hat is wearable no matter where you’re hanging out in the virtual realm. This is especially important for those working to build new metaverse worlds: because they are using the same file formats and reading the same shared data as every other builder, they can instantly tap into the full pool of available assets. This allows for powerful composability across platforms, producing the conditions for prolific and rapid innovation in how the metaverse develops.
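To make the two requirements concrete, here is a minimal Python sketch of what a platform-agnostic UGC metadata record might look like. Everything here is an illustrative assumption rather than an existing standard: the field names, the helper function, and the accepted-format list (glTF-family and VRM formats are plausible candidates, but no such shared whitelist has been agreed upon).

```python
# Hypothetical sketch of a platform-agnostic UGC metadata record, loosely
# modeled on common NFT metadata patterns. Not a real standard — the schema
# and format whitelist are assumptions for illustration only.

SHARED_FORMATS = {"glb", "gltf", "vrm"}  # assumed agreed-upon 3D formats

def make_ugc_record(creator, owner, asset_uri, file_format):
    """Build a metadata record that any hyperverse world could read."""
    if file_format not in SHARED_FORMATS:
        raise ValueError(f"{file_format} is not an agreed interoperable format")
    return {
        "creator": creator,      # provenance: the original maker, never changes
        "owner": owner,          # current holder, changes when the NFT transfers
        "asset_uri": asset_uri,  # where the actual 3D asset file is stored
        "format": file_format,
    }

# A virtual bucket hat made by one user, later sold to another:
hat = make_ugc_record("0xAlice", "0xBob", "ipfs://example-cid/bucket_hat.glb", "glb")
print(hat["creator"])  # attribution survives resale: still 0xAlice
```

The key design point the sketch illustrates is the split between `creator` (permanent attribution) and `owner` (transferable ownership), which is what lets provenance persist as the asset moves between holders and worlds.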

[return to table of contents]


For experiences to be massively scaled, they don’t necessarily need to accommodate every person experiencing them simultaneously in interactive environments. Instead, being “massively scaled” is largely about access: Does the technology afford the opportunity for the largest number of people to participate, with the fewest barriers to entry?

The ability to reach many more people at once than would otherwise be possible in the physical world is a key value proposition of musical experiences in the metaverse. In theory, the “total addressable market” for the metaverse is anyone with an internet connection. Or, as our interviewees at Ristband put it: “You can reach 2.8 billion gamers on their own turf.” Travis Scott’s 2020 experience in Fortnite remains the benchmark in terms of audience size: 12.3 million Fortnite players experienced the show “live” within the game, and the official video on Travis Scott’s YouTube page has amassed 189 million views.

Of course, having a total addressable market of 2.8 billion gamers doesn’t necessarily translate into a massive number of interested users for a music-specific metaverse experience. (In fact, at Water & Music, we’ve previously called this a total un-addressable market in relation to dwindling interest in 2D concert livestreams in particular.) And even if there are millions or billions of users interested in the musical metaverse, there’s an intriguing question around whether audience scale alone has any positive effect on the user experience — or eventually starts to take away from it. To be sure, showing up to a virtual concert or event and knowing that most of your friends are also there adds significant social value, thanks to the power of network effects. But there may be a threshold where additional users in the virtual world begin to produce diminishing returns in terms of the overall emotional impact of a metaverse-based music experience, especially with regards to quality and latency.


Having the ability to massively scale a metaverse experience has major implications for both supply-side businesses (including artists) and end-users. Imagine being able to fit 1,000 Wembley Stadiums’ worth of fans into one show — or 1,000 artists being able to play their own sold-out Wembley Stadium in the same virtual world at the same time. As scale increases, so does the potential for virality and social proof — “a psychological and social phenomenon wherein people copy the actions of others in an attempt to undertake behavior in a given situation,” as defined by Robert Cialdini in his book Influence. The more people in a space, the more possible interaction and engagement.

One interesting downstream effect of increasing the number of possible users is the development of outlier user experiences. In a flight simulator game, playing as the pilots and flying the planes are likely to be experiences prioritized by most users. As the flight simulator world scales, however, we might find people opting in to be the air traffic control attendants or work at the bar in the airport. The same effect applies to contribution and engagement opportunities around musical experiences in the developing metaverse. In the future, in an industry-specific context, we might even see metaverse users acting as “roadies” or merch booth salespeople for a given virtual concert. Ultimately, with scale come more diverse opportunities and paths to follow in a given world.


Currently, for centralized providers, the number of users able to view an event or experience at the same time is basically unlimited. However, this comes with several important caveats. For massive events in the metaverse, most users are not actually experiencing the event truly together. Rather, the total number of users is split across multiple replicated worlds hosted on separate servers, in order to absorb the supply-side challenge of servers having to process and track each individual user’s actions.

Of course, there are all kinds of technical things happening behind the scenes to make each user’s individual experience still appear fluid. But the upper limit for users to be in a metaverse world together, with each person’s state within the world updated in real-time, is actually quite low. Decentraland, for instance, only recently increased its practical co-presence limit to 100 by turning players farther away into the digital equivalent of a cardboard cutout. Fortnite only allows a maximum of 100 users within a Battle Royale match, and as Matthew Ball points out, “…most players are never really together … This means that, while the server needs to track what every player is doing, each player’s device doesn’t need to render them or track/process their actions.” So while Travis Scott might have 12.3 million users synchronously view his concert in Fortnite, these viewers are not all actually viewing the concert from the same server, keeping up with each other’s in-game actions in real-time.
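The replication approach described above can be sketched in a few lines of Python. The instance cap and function names are assumptions for illustration (in line with the ~100-user figures cited above), not any platform's real architecture:

```python
# Illustrative sketch — not any platform's actual architecture — of splitting
# a massive audience across replicated world instances, each capped at the
# small co-presence limit a single server can realistically simulate.
import math

INSTANCE_CAP = 100  # assumed per-instance limit, per the figures cited above

def shard_audience(user_ids, cap=INSTANCE_CAP):
    """Partition users into replicated instances of at most `cap` users each."""
    return [user_ids[i:i + cap] for i in range(0, len(user_ids), cap)]

# Even a modest 12,050-person audience needs 121 parallel instances:
instances = shard_audience(list(range(12_050)))
print(len(instances))  # 121

# At Travis Scott scale (12.3M viewers), that's 123,000 copies of the show:
print(math.ceil(12_300_000 / INSTANCE_CAP))  # 123000
```

Each instance renders its own copy of the event, which is why two friends can both "attend" the same concert yet never see each other unless they are routed to the same instance.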

(In the rare instance a game world is architected to truly support synchronous presence for all players, something has to give. During the most recent Guinness World Record for “most costly video game battle,” over 5,000 players were connected and present on the same server. To cope, EVE Online employs a “time dilation” mechanic that slows the game down to 10% of its normal speed; the resulting battle took over 14 hours to play out, and only stopped because the game servers had to undergo scheduled maintenance.)

For the purposes of virtual concerts, which are the main form of musical experience taking place in the metaverse today, the inability to track every single audience member’s actions is not a major issue. At an IRL stadium concert, you can’t actually interact with all 25,000 other people individually, even if they are technically in the same audience; the same goes for a metaverse concert. As long as most users can easily find their friends, hang out with them, and enjoy the show, then a virtual concert could be considered a success.

That said, with increased scale for music events in the metaverse also come new onboarding challenges. Just like IRL, artists and their teams must consider how to navigate language barriers and provide adequate support to attendees before, during, and after events at different audience sizes. There is a difference between being in a room with 50 versus 1,500 people, and the same goes for a room with 50 versus 1,500 avatars.


In the future, the ability for hyperverse experiences to synchronously reach a massive number of users will likely not change much from the current state. It’s already fully possible today to have millions of users experience the same metaverse concert at the same time. Just ask Travis Scott, Lil Nas X, or Marshmello.

What will change in the future is the number of users that can concurrently interact with each other while viewing a hyperverse experience. As technology improves — especially in the areas of computing power, improved bandwidth, and decreasing latency — it will be possible to have massive numbers of users together and able to interact with each other concurrently across the hyperverse. This change will bring about many new possibilities for interactions and musical experiences, which we discuss in full in the next section.

[return to table of contents]


Whereas massively scaled audiences in the metaverse simply refer to how many people can collectively access a given environment with the lowest barrier to entry, scaled synchronicity refers to the ability to truly experience that environment simultaneously on an infrastructural level (i.e. in the same server), and to affect what happens to one’s self, to others, and to the overall environment. The key shift in framing is from mere collective observation to proactive collective influence.

Let’s revisit Fortnite as an example of this difference. Of course, there is value in scale, and millions of players can come together across multiple different devices (PC/Mac, PlayStation, Xbox, Nintendo Switch, mobile, etc.) to be in the same game together, influencing each other’s experience. However, the current computing power available to metaverse builders means not only that users are often placed together in small servers with a capacity of only around 50 to 100 people each, but also that the resulting viewing experience is relatively limited in terms of true interactivity. With the Travis Scott experience, players could not impact the flow of what happened at all; they were effectively passive concert viewers.

The primary reason that scaling beyond a small number of concurrent viewers on a server is so difficult is that it requires immense computational power to offer each player their own simulation of the game, world, or concert they are active in, on both the client and server side. Moving forward, however, we can imagine a single simulation or world where all players impact events for every other player, opening up new, more powerful possibilities for narrative development and audience interactions. Some companies like Unity and Genvid are beginning to dub these experiences Massive Interactive Live Events (MILEs), borrowing the acronym format from Massively Multiplayer Online Role-Playing Games (MMORPGs) like World of Warcraft.


Since social experiences drive engagement, the opportunity offered by massive groups of people being together concurrently is greater interactivity between those people. Again, influencing what’s going on is a totally different ball game than merely observing.

An older example — and one that Matthew Ball and Jacob Novak have referenced in their previous writing — is Twitch Plays Pokemon (TPP), an experiment in which viewers of a Twitch stream collectively controlled a game of Pokemon Red by typing simple commands into the chat. TPP proved that a bunch of people from all over the world could come together to experience and interact with the same gameplay, and that this could make for a compelling viewing experience (the channel is still on air). These first steps in coming together as players within a single simulation continue to develop in arenas outside of gaming, such as fan-controlled football, where the fans call the plays.

The music- and fandom-based possibilities enabled by massive numbers of users experiencing the metaverse together at the same time are almost limitless. An existing example of the basic possibilities comes from the alt-pop group Twenty One Pilots, who enabled fans to collaborate in real-time to select the flow of the setlist for their fall 2021 virtual concert in Roblox, an event that attracted viewers from over 160 countries.

In the more expansive realm, imagine a massive group of fans able to work together to build and iterate on collaborative experiences in real-time such as building entire artist-focused worlds together, or designing official metaverse games and attractions around an artist’s work and likeness. These types of activities ultimately provide benefits to both the artists themselves and their fans: Artists gain the ability to activate their motivated, loyal fandoms to produce experiences that also serve as new content and marketing attractions (almost like virtual street teams), while fans are able to activate their identities as lovers of specific artists in new, creative ways with others.


Overall, the current metaverse state allows for various numbers of concurrent users to enjoy certain experiences or worlds together in real time. But current limitations on computing power mean that social experiences are still limited to relatively small numbers of people (roughly 100 concurrent users per server in both Decentraland and Fortnite, for example) actually being able to establish interactions with each other inside a given game or metaverse world.

For users to exist and interact concurrently in a shared environment, all of the users present must simultaneously maintain the same shared state for that environment. For example, if I’m a musician who wants to jam with one of my friends in a metaverse environment, when I play a chord on my virtual instrument my friend needs to receive the audio output of that chord on their end as close to the exact moment I played it as possible.

This problem of maintaining a shared state becomes exponentially harder for a server to pull off with every additional user you add — a dilemma sometimes referred to as the “n-squared problem.” What this means is that there are upper limits on the number of users that may experience and interact within metaverse worlds simultaneously, especially for smaller worlds that don’t have the power and resources of AAA games like Fortnite or Roblox. For instance, we heard in our interviews with Vandal and Coldie — two longtime artists, venue builders, and event promoters in Voxels — that they’ve continuously pushed the limits of the platform to host concurrent users at concert and art events. Coldie described looking to his left while hosting a virtual version of the Bitcoin 2020 Conference and seeing a whole building disappear and thinking, “What the hell is happening? Are we going to make it?” Our interview with metaverse designers Lexicon Devils confirmed that many venue builders are working continuously with the Voxels team in a tight feedback loop to improve the world’s mechanics and increase the current limits on concurrent user experiences.
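The scaling math behind the n-squared problem can be sketched in a few lines of Python, under the simplifying assumption that each user broadcasts one state update per tick to every other user:

```python
# Back-of-the-envelope sketch of the "n-squared problem": if every user's
# state change must reach every other user in the shared world, the per-tick
# message load grows with n * (n - 1), i.e. roughly quadratically.

def messages_per_tick(n_users):
    """Each of n users broadcasts one state update to the other n - 1 users."""
    return n_users * (n_users - 1)

for n in (10, 100, 1_000):
    print(n, messages_per_tick(n))
# 10 users -> 90 messages; 100 users -> 9,900; 1,000 users -> 999,000.
# Ten times the audience means roughly a hundred times the traffic.
```

Real networking stacks mitigate this with tricks like interest management (only syncing nearby players) and update batching, but the underlying quadratic growth is why co-presence caps sit in the low hundreds rather than the millions.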


A future state where there is effectively an unlimited number of users who can share a game state would open up possibilities around both the depth and breadth of play — expanding the social possibilities of virtual worlds and opening them up to much more serendipity, in a similar way that “massively scaled” environments unlock “fringe” character experiences for users. The ideal scenario is to have as little tradeoff as possible between graphics, smoothness, gameplay, and the ability to have large numbers of concurrent players.

Given that many of the flagship examples of the power of live concurrent users in music environments have taken place in games, we believe that the gaming industry will continue to set the standard in terms of advancing the compute and network power needed to make even larger, truly synchronous social events happen in virtual worlds (especially in a cloud gaming context). Importantly, while music will continue to serve as an early cultural petri dish for these social gaming experiences (especially with virtual concerts that activate interactive in-game experiences), it will likely remain an ancillary activity within environments that are ultimately game-first.

[return to table of contents]


Real-time rendered 3D virtual worlds are exactly what they sound like – experiences that are accessed through software, rendered in three dimensions, and experienced in real-time, where the world updates instantly in response to users’ actions.

Importantly, metaverse worlds with this feature set can be encountered through multiple access points, not just VR headsets. A real-time rendered 3D virtual world might include something like Fortnite, where a person can enter via their web browser, mobile device, or gaming console to experience a concert performance from one of their favorite artists in real-time.


Real-time rendered 3D virtual worlds augment many other key dimensions of the metaverse. Being able to see oneself and others in a smoothly rendered 3D space creates a more immersive experience, and heavily influences the all-important “individual sense of presence” discussed earlier in this article, in line with improving the overall flow of social interactions.

Some of the most compelling musical experiences we came upon that leverage the power of smooth 3D rendered virtual worlds were dance battles in social VR apps like Neos. The most committed and engaged fans participating in these battles have motion-capture suits or other ways to animate their avatars with real-time, full-body movements, which help to break down the borders between real-life and in-world experiences. There are also clubs in VRChat where people are dancing and drinking behind the screen IRL, but ultimately socializing and interacting in 3D rendered worlds.

Applied further to musical experiences, smoothness of rendered visuals can be a make-or-break proposition for whether audiences actually enjoy music-focused events in the metaverse, such as virtual concerts or participatory events such as the dance battles described above. “Time to fun” (TTF) is a fascinating metric militantly observed and orchestrated by some of the most successful mobile games in the world. In music, few would want to watch their favorite artist perform in a 3D world plagued by the same slow and choppy loading of the pre-YouTube web video watching experience. Time spent waiting for rendered graphics to catch up is ultimately time spent not having fun. (Just ask frustrated viewers of Meta’s own VR concerts in Oculus, which have run into processing issues even when the “concert” itself was just a prerecorded video.)

Though the 3D virtual world component of the metaverse is important for enabling certain types of user experiences, we ultimately felt that for musical experiences, the 3D aspect in itself is far less important than the experience simply unfolding smoothly in real-time, with platforms able to respond to users and enable communication and collaboration. Interactive creative platforms such as Endlesss and ToneStone, both of which are intended to enable collective music creation experiences but do not rely on 3D visuals or worlds, forced us to ask the question: Does a musical metaverse really need a 3D world component? We found the answer to be no.

On Endlesss, musicians come together via a simple shared UI to “jam,” which involves users creating individual musical parts separately, and then working together in real-time to combine these parts into a musical composition. Founder Tim Exile told us that, by creating a platform that allows for real-time musical collaboration, Endlesss essentially wants to “make the process the product”: the draw for users is the real-time collaboration itself, no fancy 3D graphics required.


Many of the metaverse experiences available today meet the basic requirements of being considered real-time rendered 3D virtual worlds (the future is here!), including the majority of the metaverse platforms we explored as a community during our Season 2 research meetups. For example, virtual worlds such as Roblox, Nowhere, and a.live all boast 3D rendered virtual environments that allow for real-time, interactive experiences through features such as in-world games, easy social interactions via voice and text chat, and even musical events where performers can interact with audiences live. Notably, while these worlds all have (or will soon have) the functionality to be experienced via VR headset, they are all also easily accessible via a standard web browser, making it easy for most users to get started in the metaverse.

Real-time rendered 3D virtual worlds are not limited to VR; they can also be experienced via augmented reality (AR) platforms on any device with a camera. For example, we took in a beta demo of Popins, an AR app experienced through one’s mobile device. The app allows musicians to perform in real time, with their 3D image (captured via volumetric video technology) displayed on whatever real-world background a user chooses via their device camera. Think Pokémon Go, but instead of a cute animated creature, you get Sir Mix-A-Lot performing a freestyle rap directly on your patio door (or wherever you point your mobile camera). The real-time nature of Popins also allows for interactions between performers and the audience via instantaneous voice chat, creating an interactive experience for fans tuning in to a live concert.

The limitations in the current state of the metaverse when it comes to real-time rendered 3D virtual worlds lie in the quality of the experiences in question, even at the highest levels of production. A Pitchfork review of Charli XCX’s recent virtual concert in Roblox pointed out that during the event, “glitches abounded: At some point there were two versions of the British pop star, one levitating motionlessly in a T-pose while the other prowled around awkwardly. Multiple recorded tracks would play at once, worsening fatigue from an already-limited song selection.” Another recent essay from Yannis Philippakis, front person of the band Foals, explored why they are not interested in performing in virtual worlds until the rendering and experience of these worlds can produce affective experiences similar to those of real-life performances: “That’s my concern with the metaverse: are we going to walk away from a virtual performance and feel like we cheapened something that was powerful, spiritual and joyous by putting it into something cold, remote and sanitized?”

Across those platforms we tested as a community for this research, none stood out to us as fully optimized in terms of both real-time performance and rendered visuals. Some metaverse worlds such as Decentraland, for example, required high levels of CPU usage and produced choppy visuals that broke the feeling of real-time experience, especially for those users without highly optimized computers. The experience in Second Life felt similarly uneven to our group of community testers, despite the platform having been around and actively developing since 2003. On entering and exploring Spotify Island, a branded game world in the popular metaverse platform Roblox, one of our community researchers commented that the “graphics feel the same as when many of us were super young,” noting in their case that meant a full 15 years ago.

Of the platforms we visited, our community of researchers felt that the demo for Dreamwave, a company that builds VR experiences for others, felt the most realistically rendered, with in-platform avatars mimicking human movements accurately, accompanied by hyperrealistic touches in the graphics rendering such as blades of grass blowing in the wind. This high performance in terms of 3D rendering, however, may be precisely because Dreamwave was a product demo with a limited scope, rather than a fully fledged metaverse world that needs to support a massive user base with extensive functionality and user-generated content, all of which require high amounts of compute resources.

Another useful metric for gauging the viability of a given metaverse experience or virtual world is “immersiveness,” which researchers Jay D. Bolter and Richard Grusin describe in reference to a medium “whose purpose is to disappear […] As computer scientists themselves put it […] the viewer should forget that she is in fact wearing a computer interface and accept the graphic image that it offers as her own visual world.” We found across our research that existing platforms generally fail to eliminate awareness of the boundaries between the device being used for access and the experience itself, and thus do not produce a fully immersive experience. This isn’t to say that a user can’t feel a sense of presence within these worlds — simply that we’ve yet to encounter a metaverse experience where the entire outside seems to melt away, placing our focus and presence entirely within the virtual realm.


As alluded to above, the major future difference in how hyperverse platforms perform as real-time rendered 3D virtual worlds will be the quality of the experiences they enable. With forthcoming upgrades in computing power and networking technologies (which will increase bandwidth and decrease latency), we expect that hyperverse experiences will be able to match the detail and feel of the physical world, presenting accurately rendered versions of the same experiences available to us IRL.

Imagine being able to watch an artist perform in concert in the hyperverse and have it be rendered in such a way that you feel like you are actually there. Advancements like Unreal Engine’s MetaHuman and Meta Research’s Codec Avatars are bringing us closer to this reality, in terms of real-time, hyper-realistic 3D rendering of characters and worlds within games. The achievement of realistic immersion in the hyperverse is especially important for musical experiences, where the ability to establish emotional connections between artists and fans is crucial.

Beyond producing experiences that accurately mimic the feeling of the physical world, improvements in technology over the coming years will also open up new ways for artists to engage and emotionally connect with their audiences that wouldn’t otherwise be physically possible. Where Foals’ Philippakis hopes for a hyperverse that will fully recreate today’s physical concert experiences, there also lies massive potential in the hyperverse to produce entirely new types of emotional experiences for audiences. Rather than standing in a virtual crowd and feeling like you’re at Glastonbury, imagine having your avatar fly around an artist performing in real time on a virtual planet, with the realistic sensation of flight augmenting the experience of watching and hearing the performance. That said, the believability of these fantastical experiences is still off in the future, waiting for technology to catch up to imagination.

[return to table of contents]


Persistence of metaverse worlds refers to the ability for metaverse platforms to always remain online and continue “to exist and develop internally (at least to some degree) even when there are no people interacting with it,” in the words of game researcher Richard Bartle. Put differently, a metaverse world can be considered persistent if it exists while no users are experiencing it. It may even evolve and change without any user interaction.
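The logic of persistence can be made concrete with a toy simulation loop in which world state advances on every server tick, whether or not any users are connected. This is purely an illustrative sketch of the concept (the class and field names are our own, not any platform’s actual architecture):

```python
class PersistentWorld:
    """Toy persistent world: state evolves every tick, with or without users."""

    def __init__(self):
        self.tick = 0
        self.tree_standing = True
        self.connected_users = set()  # may be empty; the world runs anyway

    def step(self):
        # Advance world state regardless of who (if anyone) is online.
        self.tick += 1
        if self.tick == 3:
            # A tree falls in the digital forest whether or not anyone hears it.
            self.tree_standing = False

world = PersistentWorld()
for _ in range(5):  # simulate five ticks with zero users connected
    world.step()

print(world.tick)           # 5
print(world.tree_standing)  # False
```

A user who logs in after these five ticks finds the tree already fallen, which is exactly the ephemerality-and-FOMO dynamic described below: the world kept its shared clock running while they were away.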

Without diving too far down the philosophical rabbit hole, this quality of the metaverse raises questions about the importance of human presence for its existence. One way to think about persistence of the metaverse is in relation to the common philosophical thought experiment which asks: “If a tree falls in a forest and no one is around to hear it, does it make a sound?” In the metaverse, if a tree falls in the digital forest and no one is on the platform to hear it, that platform is most certainly persistent. However, it could also be argued that without people and interactions, the metaverse doesn’t meaningfully exist in any way beyond a series of 1s and 0s on a server (per our earlier discussions on features like synchronicity, user-generated content, and individual presence). Following from this line of thought, the opportunity for user-generated content and social interactions at large might be considered an essential aspect of persistent metaverse worlds, even if you yourself are not present as a user.

An additional property that follows from this basic premise of persistent metaverse worlds is that in-world events occur simultaneously for all users. One way to think about this is that persistent metaverse worlds are bound to a commonly shared clock or time-space, in the same way that we all simultaneously experience the forward movement of time in real life (IRL). As a result, metaverse worlds that are persistent also allow for ephemeral occurrences to take place, where it is possible for individuals to miss events and for the world to evolve while they are away from the platform. This feature produces social dynamics such as FOMO (fear of missing out).


Persistence is a key feature of metaverse worlds because it produces an environment that can support ongoing social and infrastructure development across time, catalyzing relationships, social interactions, and narratives that may compel individuals to want to spend more time in these environments.

Under the notion of persistence, individual users can come and go, but the metaverse will continue to change and evolve while they are away. As one of our community researchers noted: “The world doesn’t stop spinning, so the metaverse shouldn’t stop spinning.” This allows for evolving social dynamics and interactions with virtual world environments to trigger human emotions like surprise, delight, and even frustration, that make being in the metaverse worthwhile. In this way, persistence is an enabling function for other key features of the metaverse that we discussed earlier, such as maintaining an individual sense of presence within the metaverse.

In relation to musical experiences, persistence allows for users and fans not only to experience live, in-world events together, but also to form ongoing relationships and communities out of these events — with the full knowledge that they can continue to interact and develop meaningful bonds within the metaverse and across time.

One prime example of this continuity that some of our community members participated in during Season 2 was Shelter VR, a persistent, virtual club experience in the social VR app VRChat. In early May 2022, Shelter hosted “MUST DIE!,” a hybrid URL/IRL experience that took place simultaneously in their VRChat club and in person at VR World NYC. In-person concertgoers could tune into and travel around a VR version of the event onsite; the organizers also set up a physical room where in-person concertgoers could meet and chat remotely with their VR counterparts, who would call in from a virtual recreation of that same room. People who met IRL at VR World could continue their interactions in the virtual mirrorworld in VRChat long after the show concluded.

(Importantly, many artist-branded virtual concerts — say, Travis Scott’s Fortnite show, Lil Nas X’s Roblox concert, or Megan Thee Stallion’s Hottieverse — are not persistent, but rather are ephemeral in nature. That said, they do occur against the backdrop of wider, persistent metaverse worlds like Fortnite and Roblox, which provide artists the potential ability to engage fans in these worlds in the future.)

From the artist’s perspective, persistence of metaverse environments enables them to build narratives that engage their fans across time, using metaverse worlds as a canvas on which to tell stories. An artist who holds a virtual concert within a persistent metaverse world can also create additional content within that world, such as an easter egg hunt for artist-related content, that continues to engage fans beyond the event itself. This creates an environment for fans to connect and continue to grow their fandom, together, and at their own pace.

The importance of long-term storytelling for creating immersive experiences in the metaverse is something we heard frequently from interviewees, including the artist Jagwar Twin — who is working with Josh Hubberman from CTHDRL, a digital design and experience studio, to create an immersive metaverse experience for fans around their upcoming album release. They told us how narrative building in the metaverse, when done well, “allows fans to get in the mind of artists and immerse themselves in the story,” with persistence of worlds an important ingredient in telling engaging stories over time.

Persistence can also enable new forms of artist-fan interaction. For example, as creative AI technology continues to improve, persistence of worlds might enable artists to produce AI versions of themselves that are able to realistically interact with fans in the metaverse, without the artist themselves being present. Jagwar Twin is working towards this through the release of an AI version of himself that fans will be able to interact with directly. Speaking to the power of this AI-twin approach, Jagwar Twin likens it to “fans making the map”; namely, the persistence of the experience allows Jagwar Twin to cede agency over this particular creative project, inviting fans to take some control over its unfolding development and creating a deep and ongoing bond between the artist and their audience.


Persistence is a fairly common feature across many existing metaverse platforms, with most achieving a high level of persistence, outside of downtime caused by server updates and outages, or network downtime in the case of metaverse platforms that run on the blockchain. For example, Second Life maintains a high level of persistence, with an entire world that is accessible to anyone at any time. This feature contributes to its overall richness as a fully formed virtual world capable of generating deep social interactions and relationships.

One note on the persistence of metaverses run by centralized organizations is that these worlds are only persistent as long as those organizations continue to support, maintain, and keep their servers online. As an example, if Second Life’s user count dropped below a certain threshold, its parent company Linden Lab would likely be unable to justify the high costs of maintaining the servers on which it is hosted. Decentralized metaverses such as Decentraland, Voxels, and The Sandbox attempt to mitigate this risk by decentralizing server infrastructure and network requirements to users via hosting on the blockchain. However, these platforms are all still only as persistent as the reliability of the blockchain on which they are hosted. If a blockchain network experiences downtime, then metaverse platforms running on that chain will also experience outages.

From the perspective of artists looking to create musical experiences for the metaverse, the current state of persistence will mainly inform decisions around where they choose to build and engage. If they decide to build on the more established centralized metaverse platforms, they are trusting that those platforms will remain online and act in ways that align with their own interests. A notable advantage to building on centralized platforms like Roblox, Minecraft, and Fortnite is that they have massive, well-established user bases, and provide opportunities for artists to reach these users directly. On the other hand, decentralized metaverse platforms like Voxels, Decentraland, and The Sandbox have smaller overall user counts currently, but offer stronger guarantees of persistence into the future; changes in platform structure will likely not happen unilaterally without user and builder input.


A persistent hyperverse in the future would expand in terms of its reliability, with no possibility of interim downtime and a guarantee that the hyperverse is secure from going offline well into the future. This is less a change in the state of persistence itself than an improvement in the infrastructure that allows the hyperverse to maintain persistence at scale over time. For example, to achieve this level of stability, the hyperverse will likely need to be hosted on a decentralized network with a high degree of reliability and a massive user base of decentralized nodes, providing security against network failure.

Additionally, in an ideal future, persistence would be maintained across the entire hyperverse, with interoperable hyperverse worlds all remaining persistent across time. This would allow building within and between multiple hyperverse platforms and worlds without worry of failure or inconsistent experiences for users. This type of interoperable persistence would be a massive unlock for artists building and storytelling within the hyperverse: they could rely on the experiences they create standing the test of time, while mixing and matching the best tools available across worlds.

[return to table of contents]


User-level portability of data across platforms refers to the ability for an individual to bring and utilize their user data from one metaverse platform or environment into another, without the need for manual bridging or transferring of data from one platform format to another. For example, if a user has created an avatar with specific features including items of digital clothing or gear that they have acquired on one metaverse platform, those items should automatically travel with them should they wish to use their avatar on an entirely separate metaverse platform, without manual transfer.

In short, an individual should be able to have their history and identity travel with them across the metaverse, creating a continuity of reputation and social status across environments. To give a practical music-industry example: A musician who received some form of digital recognition or badge for performing a sold-out VR concert in one metaverse world should be able to display that same badge in another metaverse world in an effort to promote an upcoming show. What’s more, they should be able to perform in that other world with their avatar sporting the same virtual concert wardrobe that they used to wow their fans in the first world.
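One way to picture user-level portability is as a platform-agnostic manifest that travels with the user. The sketch below is purely illustrative; the field names and structure are our own assumptions, not any platform’s actual format:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class PortableIdentity:
    """Hypothetical cross-platform user record: identity, items, reputation."""
    user_id: str
    wearables: list = field(default_factory=list)  # e.g. verch acquired anywhere
    badges: list = field(default_factory=list)     # e.g. proof of a sold-out show

musician = PortableIdentity(
    user_id="artist:example",
    wearables=["virtual-concert-wardrobe"],
    badges=["sold-out-vr-concert"],
)

# Serializing to a shared format is what would let a second platform
# read the same record without manual bridging or re-creation.
payload = json.dumps(asdict(musician))
restored = PortableIdentity(**json.loads(payload))
print(restored.badges)  # ['sold-out-vr-concert']
```

The hard part, of course, is not the serialization but getting every platform to agree on (and honor) one such schema, which is the interoperability problem discussed later in this report.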

This is the antithesis of dominant Web2 approaches around data, where individual user data is generally locked-in and only useful within the gated walls of specific platforms. For instance, Facebook Badges, which represent accomplishments among groups convened on the platform, are locked to the platform. While they may appear on group Facebook posts or individual user profiles, they cannot be taken outside of the Facebook ecosystem and used to represent an individual’s achievements or influence the way they interact with other platforms, algorithms, or discovery mechanisms.

Ideally, individual users would have complete ownership over their data in the metaverse, rather than relinquishing control of that data to third-party companies. Some analysts, such as Matthew Ball himself, have argued that user ownership of data is an essential part of creating the incentives needed for metaverse development and to ensure true continuity of data.

Nonetheless, it’s important to clarify that portability of data in the metaverse can still exist independent of user ownership. Services such as Google Sign-In and Epic Games’ Account Services provide some portability of data across Web2 and centralized virtual platforms. For example, signing in with one’s Epic Games login when playing the vehicular soccer game Rocket League allows users to play on any platform they like (PC, console, etc.), while having their in-game achievements and statistics follow them.

And, as noted by Ball, these achievements in some cases are portable to entirely different worlds. While a similar approach could arguably work for the metaverse, it would also limit some of the advantages of users maintaining ownership and control of their data.


The ability to easily bring one’s data, including assets and reputation, across digital platforms and worlds lets individuals work, play, and build across the metaverse without deep technical knowledge or wasted time and resources bridging data between platforms. A smooth user experience is essential to attracting and keeping individuals interested and engaged.

The ability to transport data, assets, and reputation around the metaverse deeply affects the potential for rich musical experiences. First and foremost, it would mean the ability to play and use music that you’ve purchased or collected across metaverse platforms. In the current market-dominant model of music consumption via streaming platforms, music is not purchased and owned, but rather subscribed to and accessed as a service, and only available for listening on one’s streaming platform of choice. Continuity of data across the metaverse could enable a return to a system of music consumption where users can own, play, and use their music widely and easily.

Projecting forward, it’s easy to see how artists might provide opportunities for fans and others to own and easily license and use their songs, samples, or stems to create remixes or entirely new pieces of music, for use across the metaverse. We spoke with Keatly Haldeman and Margaret Link of Dequency, a new music sync licensing startup which is doing just this, through the embedding of blanket song-use licenses within NFTs. These allow artists to set clear terms of use for their music, while also enabling purchasers to understand these limitations and easily deploy music across platforms and multiple licensed use cases.

Portability of data also produces many benefits related to the development of artist-fan interactions. For artists, the ability to interact with and use fan data across metaverse platforms will allow them to better understand their audience and tailor their offerings to fan desires in a modular way. This works both at an aggregate level, through understanding the music listening, interaction, and consumer habits of artist fanbases, and at the individual level, through understanding what individual fans are interested in and providing direct opportunities for them to fulfill these desires. In the aggregate, if an artist working in the pop genre can see that fans of Beyoncé uniformly love and respond to specific forms of metaverse engagement like scavenger hunts, then that artist can easily take a page out of Beyoncé’s playbook for their own metaverse building. Likewise, at the individual level, if an artist can see that a specific fan has attended one of their metaverse concerts on one platform via their digital ticket, then that artist could theoretically target that fan directly to offer them access to a pass to an experience on another platform.

On the fan benefit side, portability of data would allow for individuals to proudly port their social reputation as fans of particular artists across metaverse platforms, through items such as virtual merchandise (“verch”). The importance of this reputational use case should not be downplayed, given examples like Lil Nas X’s recent virtual concert in Roblox, which produced verch sales revenues nearing $10 million. For many fans and music lovers, the social reputation gained through having their avatar wear merchandise from their favorite artists is equal to that of purchasing or wearing their favorite artist’s T-shirt IRL.

Of course, there are many questions around data privacy related to these types of interactions that still need to be worked out, and we see the ability to mask or control which data is made public as an essential part of data being portable. In the case of owning one’s data, which as we mentioned above is not strictly necessary for portability of data to exist, there are unique advantages around having the sole ability to control how that data is used. If an individual is ensured true ownership and control over their data, then they can confidently contribute to building and developing metaverse infrastructure and experiences without fear of their work being made worthless overnight, through unilateral actions such as changes in terms of service taken by powerful, centralized organizations.

Other advantages of full data ownership include the ability for individuals to share and monetize their data on their own terms, as well as the ability to uniquely shape how their identity is represented in the metaverse. Whereas in real life (IRL) we are forced to shape a single, unified identity around our individual presence (we only have one body!), in the metaverse individuals may wish to embody multiple identities, and user ownership of data is essential for giving individuals control and agency over their personhood and identity across metaverse platforms and worlds. As part of our research, we spoke with the artist Panther Modern, who told us how they currently employ a multiple-identities approach in their work as a musician through the creation of two separate avatars, JA and JB, developed using motion-capture technology and used for both creation and virtual performances. Where JA acts more like Panther Modern IRL, JB doesn’t sing and focuses more on producing techno music. For Panther Modern, portability and ownership of data are essential to controlling how these identities develop, and to using them freely and fully in support of their creative ambitions.


Automated, user-level portability of data across metaverse worlds and platforms is still in its infancy. As mentioned above, data is currently made portable across internet platforms through centralized services such as Google Sign-In; however, this type of free data movement is generally in a limited form, and the ability for individuals to use their data as they wish is mainly relegated to simplifying login processes and the management of user accounts.

The advent of blockchain technology and crypto wallets such as MetaMask or Rainbow has opened up some new doors for data portability, as an increasing number of decentralized metaverse platforms allow users to sign in using their crypto wallets, and own parcels of land in the metaverse through holding representative NFTs in these wallets. However, portability of data across and between these separate metaverse worlds is still largely limited by the standardization of file formats and interoperability across platforms. While a fan may have purchased a really incredible virtual bucket hat from their favorite musical artist in Decentraland and have it linked to their crypto wallet, the ability to have their avatar wear the same hat in Voxels would only be possible if the two platforms use or allow for a matching file format around avatar-wearable verch.
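The file-format gap can be illustrated with a toy converter between two invented wearable schemas. Every field name below is hypothetical; the point is that portability only works where both platforms have agreed on (or someone has hand-built) a mapping for every field:

```python
# An invented "platform A" record for the bucket hat from the example above.
platform_a_item = {"asset": "bucket-hat", "slot": "head", "mesh": "hat.glb"}

def convert_a_to_b(item: dict) -> dict:
    """Map platform A's (hypothetical) fields onto platform B's (hypothetical) schema.

    Even enum values rarely line up across platforms, so each one needs
    its own translation table; a missing mapping means the item simply
    cannot travel.
    """
    slot_map = {"head": "headwear"}  # hand-maintained translation table
    return {
        "name": item["asset"],
        "category": slot_map[item["slot"]],
        "model_file": item["mesh"],
    }

print(convert_a_to_b(platform_a_item))
# {'name': 'bucket-hat', 'category': 'headwear', 'model_file': 'hat.glb'}
```

Shared standards (such as the VRM avatar format mentioned later in this report) aim to make converters like this unnecessary by giving every platform one schema to read and write.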

Music is made somewhat portable across digital platforms given the existence of common, standardized formats for audio such as .wav and .mp3. However, the fluid use of one’s library of music across metaverse platforms is similarly constrained by what individual platforms will allow in terms of formats and import options, as well as complications around ensuring that artists and copyright owners get paid for usage of their work. For instance, you cannot simply create a user account on a major metaverse platform such as Fortnite or Roblox and expect to bring your own library of music directly into either platform for use, as this would raise several licensing and rights considerations.

As we heard from rights experts Deborah Mannis-Gardner (DMG Clearances) and Stacey Haber (SHHH Media), music licensing considerations around the metaverse are still being worked out, and standard and simple approaches are essentially nonexistent at this point (though both are working with the Web3 Music Rights Group, a private industry body that is exploring these issues and hopes to make forward progress in the next year). In fact, these complications mimic those currently experienced by 2D livestreaming platforms like Twitch, YouTube, and Instagram Live, where it’s not uncommon for music or concert videos uploaded or streamed to the platforms to be removed owing to copyright-related issues. This raises the question: If mass-market, well-resourced social platforms haven’t been able to untangle the hornet’s nest of copyright law and develop an easy standard for music licensing, how can we possibly expect these newer metaverse platforms to develop a widely accepted licensing standard? Until these complex issues are better understood and legal approaches and practices standardized, simply using large swathes of music across the metaverse — let alone making that music truly portable — will remain difficult to achieve.

Nonetheless, some emerging metaverse platforms are thinking hard about how open standards and direct artist control of their rights and licensing will affect music. Greg LoPiccolo from ToneStone, a metaverse platform that focuses on accessible music creation, told us how they are developing their own unique file format, which they plan to open-source to allow music created on the platform to be moved off-platform in the form of NFTs. Their format breaks songs down into their constituent parts, or stems, and will allow artists to permission each part of a song (drums, bass, vocals, etc.) separately for licensing and usage tracking, enabling remixes and varied use by purchasers. This approach is extremely flexible, and the hope is that it will allow music to travel widely and be easily used across metaverse platforms, while also minimizing licensing issues by giving artists the easy ability to permission how their music is used and to track and benefit from these usages.
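A stem-level permissioning format of the kind described above might look something like the sketch below. To be clear, this structure is our own illustration of the general idea, not ToneStone’s actual format:

```python
# Hypothetical song manifest: each stem carries its own usage permissions,
# set by the artist, that downstream platforms and purchasers can check.
song = {
    "title": "Example Track",
    "stems": {
        "drums":  {"file": "drums.wav",  "remix_allowed": True},
        "bass":   {"file": "bass.wav",   "remix_allowed": True},
        "vocals": {"file": "vocals.wav", "remix_allowed": False},
    },
}

def remixable_stems(manifest: dict) -> list:
    """Return the stems a purchaser may reuse under the artist's terms."""
    return sorted(
        name for name, stem in manifest["stems"].items()
        if stem["remix_allowed"]
    )

print(remixable_stems(song))  # ['bass', 'drums']
```

A platform importing this manifest could then expose only the drums and bass for remixing while keeping the vocals locked, exactly the kind of per-part permissioning the open-source format is meant to enable.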


In the future, user-level data portability will ideally reflect full user ownership and control of their data, including the ability to easily port all aspects of user data and virtual identity across the hyperverse while also ensuring that the original creators of virtual items, such as songs or verch, are properly credited and compensated for their work.

As discussed in the above sections, this ideal state will require many advancements across numerous areas, including the coming together of hyperverse builders to agree upon data, file, user interface, and other standards that will enable interoperability between platforms (more on this further below). Whether and how this happens is itself a function of marketplace development, and how individual companies either work together to build a connected functional hyperverse, or alternatively build separately to maintain competitive advantages. Likewise, in the case of making music portable across the hyperverse, new, simplified approaches to music licensing will need to be developed that provide creators with ample protections, while also making it easy for individuals to openly use and innovate on top of existing musical compositions and their constituent parts.

[return to table of contents]


In the broadest sense, interoperability refers to the ability of a computer system to interact with parts of another system, especially when it comes to the exchange of data and information. Think of the different protocols, such as IMAP and POP3, that sit underneath email. IMAP excels at letting users access their email from multiple devices, while POP3 downloads messages to a single device, making it better suited to offline work and large local archives. But regardless of whether IMAP or POP3 underpins your email account, users can communicate and send emails to each other.

The social media platforms that have ruled our internet in the past 15 to 20 years each represent their own silo or walled garden. Within each of these silos, we are able to create specific identities — sometimes even multiple identities within a single platform through different profiles. Just think about the different types of things you post across Twitter, LinkedIn, Instagram, TikTok, etc.; they’re all part of who you are, but aimed at different audiences in different contexts.

Even most virtual worlds themselves are still walled gardens today, existing at various levels of interoperability within their own respective territory. If you think about IRL nightclubs being rebuilt in Minecraft, you can walk from one nightclub to the other within that platform, but moving from a Minecraft nightclub to, say, The Music Locker in Grand Theft Auto — let alone while keeping on the same avatar clothes and listening to the same music — is cumbersome if not impossible.

The promise of the future hyperverse is that each metaverse allows for interactions with other metaverses at the platform level. To ground this concept in a more concrete, real-world example, interoperability can also refer to cross-device continuity — i.e. picking up where you left off regardless of which device you’re using. In gaming, this means that you can continue from where you left off whether you were playing on your phone, PlayStation, or PC — if the game is available on all these devices, of course. (To understand how hard this is, note that it was only in January 2022 that Netflix was able to introduce the ability for people to synchronize deleting a movie or TV show from their Continue Watching row across all devices.)

In addition, if we think of our avatars as our new user profiles (to draw a direct analogy from Web2 social media platforms), essential to interoperability is being able to take your avatar(s) from one world into the next without a glitch. This is directly related to the principle of continuity of data that we discussed earlier in this piece, and requires the wide coordination of several different stakeholders across industries around a shared data standard to pull off. While file standards like VRM have emerged, broad adoption isn’t quite there.


In its ideal state, the metaverse as hyperverse would allow people to move their data from one engine to the next and from one platform to another. Before we get there, however, we can already define interoperability in a narrower sense, as an existing mode that allows for cross-play across devices and stores. For artists performing virtual concerts within a metaverse, this means that players, users, and their fans can all experience something together (within a single metaverse platform) whether they’re on a PC, mobile device, or gaming console. Core to the social aspect of live music is being able to share that experience with your friends, and broad device support is critical to making these experiences accessible to as many fans as possible.

Consider, for example, the difference in rationale behind Travis Scott and Ariana Grande performing in Fortnite. The former was bound to have a large constituency of his fanbase who were already players of the Fortnite Battle Royale game. Ariana Grande, however, was more likely to find a whole host of Fortnite players who might know her name as a major pop artist, but might not know or be into her music. Offering them an experience in their world and through whichever device they preferred gave her access to a whole new set of audiences. With access to massive new audiences (an estimated 3 billion gamers worldwide) enabled by hardware interoperability, artists are also positioned to directly monetize these new fans and listeners by selling music and verch to them directly on their preferred devices.

Looking forward, artists stand to benefit even further if the metaverse is able to achieve cross-world interoperability, allowing users to move freely between metaverse platforms. This would allow artists to take their audiences with them, regardless of where they build or host their musical experiences. It also means being able to easily reach new potential fans, regardless of where any individual fan began their metaverse experience. Artists and their teams would no longer need to consider which platform has the biggest potential audience for their music, as they’d have guaranteed access to audiences from across the metaverse, and to the money they might be willing to spend on music and related merchandise.

Summing up the massive opportunity for artists: with both hardware and metaverse platform interoperability, musicians can reach ALL of the users, regardless of HOW those users choose to access the metaverse and WHERE they choose to hang out in it.


As we mentioned above, the current state of interoperability for virtual worlds differs depending on which version we're speaking about: hardware interoperability or metaverse platform interoperability. At present, some metaverse platforms offer device interoperability, or cross-play, where users can access a single metaverse world from multiple devices and have their progress synced across them. One can, for instance, play Fortnite across multiple devices (though enabling this feature for Fortnite was not easy, and offers a case study in how difficult it will be to achieve full interoperability in the metaverse).

Hardware interoperability alone, though, does not enable visitors to travel between individual metaverse platforms. Cross-world interoperability, where players could, for instance, take their Roblox avatar and walk it right off the edge of the map and into the Voxels world, is currently non-existent. The existing metaverse landscape is one of individual, closed virtual worlds and AR experiences, where each world is its own walled garden: user data and history, in-game items, and the ability to roam across world borders are all limited by platform.

As we noted briefly above, there is already some movement towards developing the shared standards and technical infrastructure needed to enable metaverse platform interoperability. The recently convened Metaverse Standards Forum, for instance, features a partner list that includes some of the largest metaverse-focused companies. Nonetheless, the outcomes of these and other parallel processes for standard setting towards interoperability are still far off in the future — contingent on the coordination of leadership and engineering efforts across multiple different industries that may not all share the same economic incentives. At worst, such efforts can often feel a lot like this XKCD comic:

The XKCD comic: Standards


Achieving the full vision of an openly interconnected and interoperable hyperverse is, at this point, aspirational. As discussed above, it rests on the outcomes of a giant coordination problem among builders and leaders of all sizes with differing interests — from those developing entire platforms themselves, to those working only on specific aspects of the problem, like virtual avatar companies. These players will need to come together to develop shared file formats and open data standards, while also likely giving up some short-term competitive advantages (i.e. walled gardens are great for creating moats around revenue sources) in favor of the long-term communal benefits that emerge from interoperability, such as a better environment for innovation.

As a result of this complex environment, whether full interoperability with zero friction becomes a reality at all, even in the hyperverse future, is still a giant question mark. The most likely outcome is a potential middle ground, where proprietary hyperverse worlds are somewhat connected and interoperable with each other, but only in very specific ways. For instance, while it might make sense for centralized platforms to get onboard with a common standard for interoperability of virtual wearables (allowing some commerce to travel), it might not make sense to allow users to take all aspects of their virtual identity outside of the platform (keeping social identity more walled).

Likewise, legal considerations will also impact which parts of the hyperverse are made interoperable. As we discussed earlier in this piece, complexities around copyright and licensing make bringing music IP into individual virtual platforms challenging, let alone allowing that IP to travel freely across hyperverse borders. In the musical realm, there are just as many players requiring coordination on interoperability as in the virtual worldbuilding domain, with artists, labels, publishers, and CMOs/PROs all looking to drive innovation while also protecting their own established turf.

[return to table of contents]


9. Decentralization
In the context of the metaverse, decentralization can mean two things:

  1. Ownership decentralization; and,
  2. Technical (infrastructural) decentralization.

Ownership decentralization refers to metaverse platforms that are not owned, controlled, or governed by a central entity. This type of decentralization exists on a continuum and can invoke different kinds of "owners" — e.g. a diverse consortium of companies behind the scenes, versus a large user base of thousands or even millions of people. In most cases, when decentralization comes up in current discourse around the metaverse, it refers specifically to platforms such as Decentraland, Voxels, The Sandbox, and others that intend to decentralize ownership and governance capabilities to their users, often via cryptocurrency tokens.

A key argument in favor of ownership decentralization is that it produces a much more open and democratic metaverse, where users are in control of development, rather than having centralized entities making unilateral decisions. Even if metaverse worlds themselves are decentralized, though, they still need to be accessed through hardware devices (e.g. VR headsets, computers, mobile devices, gaming consoles) and distribution channels (e.g. iOS App Store, Google Play Store) that are more than likely to be designed, controlled, and sold by centralized players.

In the case of centralized distribution channels, many limit accessibility by charging high fees simply for the right to sell through their stores. For instance, both the iOS App Store and the Google Play Store currently take a 15% commission on the first $1 million of app store sales and 30% thereafter. Epic Games, the maker of Fortnite, and others have attempted to challenge these access bottlenecks, bringing lawsuits against both Apple and Google on antitrust and unfair-competition grounds.
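To make that tiered commission schedule concrete, here is a minimal sketch in Python. The function name and the simplified two-tier logic are illustrative only, and follow the 15%/30% split as described above; real store programs attach additional eligibility rules (annual resets, enrollment requirements) that this sketch ignores.

```python
def app_store_commission(gross_sales: float) -> float:
    """Commission under the tiered schedule described in the text:
    15% on the first $1M of gross sales, 30% on everything above that.
    Illustrative sketch only; returns dollars rounded to the cent."""
    first_tier = min(gross_sales, 1_000_000)          # portion taxed at 15%
    remainder = max(gross_sales - 1_000_000, 0)       # portion taxed at 30%
    return round(0.15 * first_tier + 0.30 * remainder, 2)

# A developer grossing $2.5M under this schedule would owe
# 0.15 * 1,000,000 + 0.30 * 1,500,000 = $600,000 in commission.
```

Under this schedule, a small developer grossing $500,000 would owe $75,000 (an effective 15% rate), while the effective rate climbs toward 30% as sales grow past the first million.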

Regardless of the outcomes of these cases, it's undeniable that the companies that control the hardware and distribution side will continue to have outsized power to determine who is able to distribute and access metaverse experiences. User ownership of metaverse platforms provides many benefits, but if large, centralized companies can control how these worlds are accessed, then true control for users will remain elusive.

Technical (infrastructure) decentralization refers to cases where the underlying technical frameworks on which metaverse platforms operate are distributed among many users, who are often also owners of that infrastructure. An example would be a metaverse platform featuring an in-world radio station that plays music stored directly on a blockchain or on a decentralized file storage system such as IPFS or Arweave.

Though decentralized infrastructure offers many benefits in terms of user ownership and security, among others, one big challenge is that accessing data through it is significantly slower than through centralized servers, causing metaverse platforms to suffer from time lags and poor user experience, which, as we discussed earlier in the "real-time rendered 3D virtual worlds" section, can make or break the enjoyment of a given metaverse experience.

While they are often conflated in media and public discourse, the metaverse is not the same thing as Web3. Web3 and blockchain-based technologies may, in the end, provide part of the technical layer that supports specific features of the metaverse, like decentralization and portability of data. Some have even argued that this is essential for achieving a "true," open metaverse. However, in our view, the metaverse can still exist without relying on blockchain technology; it just might be more fragmented and less open than many hope for. Case in point: existing metaverse platforms like Minecraft, Roblox, and Fortnite all operate without blockchain infrastructure. Endlesss founder Tim Exile described the difference between the two in our interview, arguing that while Web3 may provide one of many possible tech layers, "the metaverse is the UX layer" which "adds an intimate, social layer" to a virtual experience.


Why, specifically, is decentralization of the metaverse important for musicians? The primary reason is one that we've touched on at multiple points in this report: Without user ownership and control, centralized metaverse platforms can easily change the terms on which they interact with their users, including developers and creators, in ways that do not reflect the preferences or interests of those users. (For example, Ubisoft recently announced the decommissioning of online services for 15 of its titles, meaning those games will no longer support multiplayer experiences; crucially, downloadable content (DLC) that players have already paid for will no longer be available for redownload.) And users have little recourse, because they are already locked in and path-dependent on these platforms. With decentralization, artists maintain much more control over how they exist, build, and engage in the metaverse.

Decentralized ownership means artists can be sure that, wherever they are building, they maintain at least some control over and input into how those platforms evolve. Examples abound of popular, centralized social platforms making major changes that were then met with backlash from creators whose livelihoods depended on those platforms continuing to operate in a consistent manner. Though ownership decentralization does not mean that every platform change will benefit all users, it does make decision-making power more democratic. This gives musicians intent on building in the metaverse at least some peace of mind that they won't wake up tomorrow to find the terms of engagement changed entirely without their consent.

Likewise, technical decentralization can give artists and creators even more leverage to build however and wherever they like. Even if a decentralized metaverse platform made a change that was not beneficial to creators, if that platform stored its data and virtual assets via decentralized infrastructure, then creators could easily port their work over to other platforms that recognize the same file and data standards. This is why technical decentralization is often viewed as an enabling function for other important features of an open metaverse: It provides a structure for innovation, allowing creators to combine and compose cross-platform experiences in new and exciting ways.

Technical decentralization is also an essential part of ensuring longevity of data — which, for artists, means ensuring that their work and creations will continue to exist into the future. Imagine your favorite artist builds an immersive musical experience on a centralized metaverse platform that stores its data internally. It might be a virtual game that uses their music to enable play, or it might be an evolving concert experience, where users are able to control elements of each song modularly. If that platform goes out of business or for any reason decides to take its servers offline (which, to be clear, happens regularly), that artist and their fans lose the ability to access those musical experiences.

An immediate issue in this scenario is that the artist loses all of the resources they invested in developing and launching the experience. But there is an important long-term consideration at stake as well: historical context. When artistic creations are put on the internet, they produce context around the work itself, around the artist as a whole, and around later works that reference or build on the same ideas. This is how culture is created across time, with artists continuing to build on each other's ideas. When centralized platforms vanish, they take all of this context with them, leaving artists without both their work and the broader context it produced.


The current state of decentralization across the metaverse varies widely by platform. Many of the existing platforms with the largest user bases, such as Roblox, Fortnite, and Minecraft, are fully centralized, with ownership and governance held by the founding companies and data and infrastructure based on centralized servers.

However, several startup metaverse platforms, including Decentraland, Voxels, Webaverse, and The Sandbox, have sprung up promising decentralized alternatives. These platforms usually use cryptocurrency tokens and/or NFTs to represent in-platform ownership of digital assets and data, as well as, in some cases, platform governance rights. These experiments currently sit at different points along the path to full decentralization (a process often referred to as "progressive decentralization"), on both the ownership and technical levels. Decentraland has launched a DAO, with membership based on holding its in-game currency $MANA, which provides users with governance rights. Webaverse is using $SILK as an access token and also distributing NFTs as an incentive to build on its platform, tying them to specific functionality such as owning your own "world." The Sandbox has similarly launched a currency, $SAND, and promises that holders will eventually receive governance rights over its metaverse. Voxels is taking decentralization more slowly, launching Scarcity Island, an area within its metaverse where it is testing decentralized governance on a smaller scale before taking it platform-wide.


The future state of the hyperverse with respect to decentralization is hard to predict, and hinges on a combination of technological, social, political, and financial factors. On the one hand, wider adoption of blockchain technology might create the conditions for more decentralized hyperverse experiences to develop. On the other hand, political and social factors will continue to influence how this technology is structured and regulated, with the potential to either encourage or constrain decentralization of the hyperverse. If the SEC in the United States, for instance, decides to tightly regulate cryptocurrencies as securities, this might create headwinds against future decentralization of the metaverse by producing bureaucratic and tax-related obstacles for artists and builders who want to take advantage of these technologies.

Wagner James Au, a researcher and journalist who has covered Second Life and the metaverse for many years, told us in an interview that they expect the hyperverse to land more on the centralized side of the ledger: "There's no market movement toward decentralization, so we'll probably wind up with several competing centralized metaverse platforms." Nonetheless, popular discourse around the problems with centralized metaverses continues to grow, and new experiments in decentralized metaverses pop up every day. In fact, Neal Stephenson, the author who coined the term "metaverse" back in 1992, recently announced their participation in the development of a new, decentralized metaverse platform called Lamina1, billed explicitly as directed at "helping get artists and other value creators paid properly for their work."

As we mentioned at the start, this work was less about being a "primer" on or "definition" of the metaverse. Rather, it was meant to provide a shared set of artist- and fan-centric design principles for building musical metaverse experiences.

We are investing our collective energy in understanding how to build the proper foundation for artists and their teams to use music in the metaverse. We are committed to continuing to study the evolution of this realm, as experimentation and practice allow for iterative refinements and new use cases. We'll be talking with artists, rights holders, developers, and other music-industry stakeholders on an ongoing basis to keep capturing how they are using the technologies available and what they are able to accomplish with them.

We hope this provides our industry with a clearer, more realistic, and more actionable roadmap for what truly innovative, digital-native music metaverse experiences could be.

[return to table of contents]


👑 🧠 🎙️ 🔎 Brodie Conley, Chrissy Greco, Cherie Hu, Yung Spielburg, Tom Vieira
👑 🧠 💻 🔎 Alexander Flores
🧠 🎙️ 🔎 Katherine Rodgers, Kristin Juel, Lindsey Lonadier, Maarten Walraven
🧠 🤝 🔎 Panther Modern
🧠 🔎 Chinua Green, Julie Kwak, Mat Ombler, Tony Rovello, Demi Wu
🤝 🔎 Abhijit Nath, Christina Calio
🎙️ Dorothée Parent-Roy, Brooke Jackson, Cosmin Gafta, Diana Gremore, Duncan Byrne, Eric Peterson, JhennyArt, Josh Dalton, Mary Maguire, Muñeca Diaz
🔎 Natalie Crue, Robin Lynn, Chris Nunes, Gabriel Appleton, Jonathan Larson, Yanti
💻 Ana Carolina
🤝 Alex Kane, Anne McKinnon, Coldie, Dan Radin, Dani Balcells, Daouda Leonard, Deborah Mannis-Gardner, Dylan Marcus, Ernest Lee, EZ, Gavin Johnson, Greg LoPiccolo, Ian Prebo, Jacqueline Bosnjak, Jaguar Twin, Jillian Jones, Jon Vlassopulos, Jonathan Mann, Josh Hubberman, Keatly Haldeman, Margaret Link, Meredith Gardner, Mike Darlington, Peacenode, Portrait XO, Rohan Paul, Roman Rappak, Shawn Ullman, Shelley Van, Soundromeda, Spinkick.eth, Stacey Haber, Tim Exile, Tropix, Vandal, Wackozacco, Wagner James Au
🎮 Alexander (15), Jackson (15), Vondell (18), Ava (8), Noah (14), Olivia (6), Toma Zaharia (10), Valentina (14), Luca Zaharia (14)

👑 Project leads
🧠 Meta-synthesis analysts + writers + editors
🎙️ Interviewers
💻 Tech + visual design
🔎 Project development + background research
🤝 Interviewees + meetup/demo guides
🎮 Young gamer interviewees — all connected to us through W&M members 🙂