Why the Metaverse Failed
11 Sep
Written By Roderick Kennedy
The research firm Gartner has a way of visualizing the development and adoption of new technologies. After an initial gold rush of work and investment, there follows a "trough of disillusionment", where the bold claims of the early days haven't panned out, and investors, journalists and even the technologists themselves feel that things aren't going anywhere - or at least not fast enough to justify the hype.
[Image: the Gartner Hype Cycle. By Jeremy Kemp at English Wikipedia, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=10547051]
In the Gartner Hype Cycle, this Slough of Despond is followed by the "Slope of Enlightenment", where the technology continues to develop and its best practices begin to emerge. Finally, there's the "Plateau of Productivity", where it reaches the Nirvana of actual use in the real world and value to the economy.
Of course there are a number of problems with this model, if we can even call it that. The Y-axis is "expectations", which is a pretty nebulous thing to measure. In the full flush of hype, you could be halfway up the peak, or you could be at the top and ready to drop. You only know some time later where you were.
And the whole graph presupposes that this technology will end up being useful. If every Trough of Disillusionment was guaranteed to be followed by an upward slope, why would we even get disillusioned?
So you could equally be in the trough before enlightenment kicks in, or you could be on the tail end of an idea that just isn't going to work, and will never find its Plateau. Think: the Apple Newton, or... jetpacks.
Now, we certainly seem to be in one or other of these two situations with the Spatial Internet, or as some prefer to call it, the Metaverse.
The idea of a Spatial Internet goes back a long way. In Neuromancer, William Gibson imagined hackers "jacking in" to the network itself through a brain-computer interface, experiencing a "consensual hallucination": a direct view onto the data and communication structures of the "matrix". By contrast, Neal Stephenson's Snow Crash envisaged a simulated environment: what you see is not the raw data made visual, but a virtual world recreating aspects of the real one, where you can own property, do business, and socialize. This latter model took hold in the minds of many key thinkers in Silicon Valley and beyond.
In 1994, only three or four years after the Web burst onto the scene and made the Internet navigable to ordinary people, Dave Raggett, Mark Pesce and Brian Behlendorf published a proposal for "VRML: Extending WWW to support Platform Independent Virtual Reality", the first substantial technological effort to make the networked Metaverse a reality.
There was a whole hype cycle around VRML. But by the time of the dotcom crash, the tech writer Clay Shirky noted:
“The Quake 3-D engine is for something, while VRML is a technology in search of a use. The Quake engine’s original use is immaterial; once you have a world-modeling protocol… you’ve got the whole ball of wax — spaces, movement, objects, avatars, even a simple physics… once a tool is good for one thing, it’s easy to see how to make it good for another thing.”
And so VRML reached its own Trough of Disillusionment, from which it has yet to recover. But contrary to Gartner's single-cycle picture, this was not the first such trough for VR. In 1984, Jaron Lanier founded VPL Research, which launched products such as the DataGlove and the EyePhone (an early headset). There was much fanfare, and films like The Lawnmower Man built the hype around VR, but the hardware was expensive, and didn't find much use outside of military training.
So what we really see is multiple peaks and troughs. But will we ever see the plateau?
When I think of the Spatial Internet, I tend to imagine something like what Pesce and his colleagues proposed in VRML: a shared, common, navigable network. But what we have at present is not that. Many millions of venture dollars and person-years of effort have been put into downloadable, installable applications running on XR hardware. It's the mobile phone model: there's an app store, you buy an app, you run it. If you want to do something different, you run a different app.
Now, there are strong efforts at standardization: file formats like glTF and USD offer a way for applications to share data, so it's possible for users to port some aspects of their personal data between apps.
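As a concrete illustration (my sketch, not taken from any particular app): because glTF is an open, engine-neutral format, any runtime with a conforming loader can read the same asset. Here's a minimal TypeScript example using three.js's GLTFLoader; the asset URL is hypothetical.

```typescript
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

// Any conforming glTF loader can parse the same asset; three.js is just one example.
const scene = new THREE.Scene();
const loader = new GLTFLoader();

loader.load(
  'https://example.com/avatar.glb',       // hypothetical asset URL
  (gltf) => scene.add(gltf.scene),        // drop the parsed scene graph into our scene
  undefined,                              // no progress callback needed here
  (err) => console.error('Failed to load glTF asset:', err)
);
```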
What we do not have right now, though, is a navigable network. When Tim Berners-Lee created the World Wide Web, he presented two interlocking technologies. HTTP transfers whole files from a server to a client. HTML allows textual data to be formatted on arbitrary screens, and provides a set of common guiderails.
Guiderails are key. Everyone knows what a link looks like on a web page. Everyone knows that when you click a link, it takes you to a new server, a new URL, and that this is a one-way system - the receiving site doesn't need to give permission to the linking site; it treats each request afresh. Everyone knows that images can be embedded, and that it's up to the browser to choose how to display them, according to a clear set of rules that HTML defines.
So you know when navigating the Web that each page you go to follows these same rules, and will be amenable to how you are used to viewing and interacting with sites.
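To make the one-way model concrete, here's a minimal TypeScript sketch (my illustration, not part of any spec): following a link is just a fresh HTTP request to the target URL, with no handshake or permission required from the destination.

```typescript
// A Web link is one-way: the client simply issues a fresh HTTP GET.
// The receiving server needs no prior relationship with the linking site.
async function followLink(href: string): Promise<string> {
  const response = await fetch(href);  // stateless request, treated afresh by the server
  if (!response.ok) {
    throw new Error(`HTTP ${response.status} for ${href}`);
  }
  return response.text();              // the whole document is transferred to the client
}
```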
There are no guiderails for the Spatial Internet. Because there's no protocol: we're still using HTTP for WebXR, when we really ought to have a realtime protocol that's built for spatial use-cases.
What should a link look like? Should it be one-way? I suggest: not always. Spatial applications are much more data-heavy than websites. When you follow a link or portal to a different site, you need to know that the site is a) available, and b) offers some way back to where you were before.
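To sketch what such guiderails might mean in practice, here's a purely hypothetical TypeScript shape for a two-way spatial link. None of these names come from Teleport VR or any existing standard, and a real spatial protocol would replace the HTTP probe with something realtime.

```typescript
// Hypothetical shape for a two-way spatial link: every name here is an
// illustrative assumption, not part of any existing protocol.
interface SpatialLink {
  target: string;          // URL of the destination spatial server
  bidirectional: boolean;  // does the destination advertise a way back?
  returnAnchor?: string;   // where to re-enter the origin world on return
}

// Before rendering a portal as traversable, probe the destination:
// (a) is it reachable, and (b) if two-way, does it name a return path?
async function probePortal(link: SpatialLink): Promise<boolean> {
  try {
    const res = await fetch(link.target, { method: 'HEAD' }); // cheap availability check
    return res.ok && (!link.bidirectional || link.returnAnchor !== undefined);
  } catch {
    return false; // destination unreachable: render the portal as closed
  }
}
```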
In 1997, researchers at the US Naval Postgraduate School proposed VRTP, a protocol for VRML intended to meet the needs of spatial internet applications (https://faculty.nps.edu/brutzman/vrtp/). But because VRML was hitting its own trough soon afterwards, VRTP was never developed to completion.
If there is to be a Spatial Internet, we have to address this fundamental challenge. It's what we're doing with Teleport VR, and by working with the standards bodies I hope we'll see Teleport, or something like it, adopted. From here, it could go either way: the last line of that Gartner curve has not yet been drawn.