In the DSC November 2 editorial, I posed a question I’ve wondered about for a while. Why is it so hard to create a shared-reality world, whether it’s called Virtual Reality, Artificial Reality, Extended Reality, Digital Twins, or any of a number of other terms? And for that matter, what does it matter to data scientists?
It turns out that the answer to the first question is highly relevant to data scientists. In order to share a virtual world, you need a common conceptual framework for how things are represented in that world.
This is not just a matter of saying that two meshes (3D models) render the same way. Instead, it means that the communication protocols describe the same kinds of actions, events, objects, and environments from one system to the next. It means that coordinate systems (where you are on the planet, if you’re even on the same planet) must be, well, coordinated. The consistent framework even comes down to how you send messages back and forth, how you signal changes from one environment to the next, and how you ensure that there’s a way of moving from one universe (verse) to the next.
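To make that concrete, consider what even a single shared event would have to look like. The sketch below is hypothetical TypeScript, not drawn from any existing metaverse standard; the field names and vocabularies are assumptions, and the point is only that both ends of the wire have to agree on all of them.

```typescript
// Hypothetical message shape for a shared virtual world. None of these
// names come from an existing standard; they only illustrate the kind
// of agreement two systems would need before they can share anything.

// A position is meaningless without the frame it is expressed in.
interface Coordinate {
  frame: "earth-fixed" | "scene-local"; // which coordinate system applies
  x: number;
  y: number;
  z: number;
}

// Every event names its verse, the object it concerns, the action taken,
// and where and when it happened.
interface WorldEvent {
  verse: string;        // which universe ("verse") the event belongs to
  objectId: string;     // stable identifier for the object acted upon
  action: "create" | "move" | "destroy";
  position: Coordinate;
  timestamp: string;    // ISO 8601, so clocks can be reconciled
}

// Two platforms can only exchange this event if they agree on every field:
// the vocabulary of actions, the coordinate frame, and the identity scheme
// for objects and verses.
const doorMoved: WorldEvent = {
  verse: "example-verse",
  objectId: "door-42",
  action: "move",
  position: { frame: "scene-local", x: 1.5, y: 0, z: -2.0 },
  timestamp: "2021-11-02T12:00:00Z",
};
```

A second platform receiving doorMoved can only act on it correctly if it interprets "scene-local" coordinates and "move" the same way the sender does, which is exactly the agreement a shared standard has to pin down.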
What makes this process more complicated is that you also have to make sure that people who shouldn’t have access to your personal information don’t get it, and that this holds for everyone within the respective universe. Over the last twenty years, the read-only web became the read-write web, but even within that context, the enforcement of the permissions that govern that capability is still mostly piecemeal and platform-dependent.
To go beyond that, to create the web of access rights and permissions needed to allow even the simplest of actions, will require either universal adoption of a single commercial platform (unlikely in the extreme) or the rise of a consistent standard that different platforms can use to accomplish the same thing independently of implementation. There are currently a number of such standards that have been or are being proposed in this particular area, some exhaustive, others focused on specific aspects of such a shared world. If the history of standardization is any guide, none of these, by themselves, will survive in their current forms.
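To illustrate what "the same thing independently of implementation" could mean for permissions, here is another hypothetical TypeScript sketch, again not based on any particular proposed standard: the access rule is plain data, so any conforming platform could evaluate it identically.

```typescript
// Hypothetical, platform-neutral access rule: expressed as data rather
// than code, so any implementation can evaluate it the same way. Not
// drawn from any particular proposed standard.

interface AccessRule {
  resource: string;          // e.g. "profile/location"
  allowedRoles: string[];    // roles permitted to read the resource
  verses: string[] | "all";  // which verses the rule applies in
}

interface Requester {
  userId: string;
  roles: string[];
  verse: string;
}

// Grants access only if the requester holds an allowed role and is in a
// verse the rule covers; everyone else is denied by default.
function canRead(rule: AccessRule, who: Requester): boolean {
  const verseOk = rule.verses === "all" || rule.verses.includes(who.verse);
  const roleOk = who.roles.some((role) => rule.allowedRoles.includes(role));
  return verseOk && roleOk;
}

// Example: location data readable only by declared friends, in any verse.
const locationRule: AccessRule = {
  resource: "profile/location",
  allowedRoles: ["friend"],
  verses: "all",
};

console.log(canRead(locationRule, { userId: "a1", roles: ["friend"], verse: "example-verse" }));   // true
console.log(canRead(locationRule, { userId: "b2", roles: ["stranger"], verse: "example-verse" })); // false
```

The design choice that matters here is that the rule, not the code, is what gets standardized and shared; each platform supplies its own enforcement, but every platform reaches the same answer.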
However, a standard by itself should increasingly be seen not as the final word, but rather as a model or prototype for people to examine, explore, implement, and test. Just as a machine learning model should be tested exhaustively against all kinds of scenarios and use cases, so too should a model of the metaverse be tested broadly and treated as an ensemble effort.
My suspicion is that the seeds are being sown for a project that will take a decade to reach fruition, but that’s okay. The Internet of today, arguably far more robust than it was in 1991, still took a decade for even the basic framework to firm up, and close to three decades to get to the point where most browsers implement more or less a single reference standard (the Mozilla model). Good engineering takes time, and the Metaverse is easily one of the largest software projects ever attempted.
Yet to achieve the Metaverse, we have to take the first tentative steps now.