The term metaverse has been inescapable this year.
But Intel is giving the trend a reality check. In his December 14 blog post, Raja Koduri, Intel’s senior vice president and general manager for accelerated computing systems and graphics, said that, from a computing standpoint, we are still quite far from the popular idea of the metaverse: an alternate digital world where we can live, indistinguishable from the real one.
“Truly persistent and immersive computing, at scale and accessible by billions of humans in real time, will require even more: a 1,000-times increase in computational efficiency from today’s state of the art,” said Koduri.
Koduri said, “Indeed, the metaverse may be the next major platform in computing after the world wide web and mobile.” He cited advancements in computer-generated animation, VR and AR displays that “have progressed rapidly in recent years,” higher dependence on technology because of the pandemic, as well as new decentralized finance tech that “encourage everyone to play a role in creating these metaverses.”
While the computing power we have now seems impressive, he puts things in perspective, showing that current computing capabilities fall short of the requirements needed for a persistent, immersive, visual world where billions can co-exist simultaneously.
Explained Koduri: “Consider what is required to put two individuals in a social setting in an entirely virtual environment: convincing and detailed avatars with realistic clothing, hair and skin tones – all rendered in real time and based on sensor data capturing real world 3D objects, gestures, audio and much more; data transfer at super high bandwidths and extremely low latencies; and a persistent model of the environment, which may contain both real and simulated elements.”
It seems doable; after all, we already have massively multiplayer video games with realistic graphics. But Intel’s definition of the metaverse involves extremely realistic virtual reality serving not just a few thousand concurrent players but hundreds of millions.
“Now, imagine solving this problem at scale – for hundreds of millions of users simultaneously – and you will quickly realize that our computing, storage and networking infrastructure today is simply not enough to enable this vision,” Koduri said in his blog post.
“We need several orders of magnitude more powerful computing capability, accessible at much lower latencies across a multitude of device form factors. To enable these capabilities at scale, the entire plumbing of the internet will need major upgrades,” he added.
Koduri offered a look at how they’re taking on the problem, describing their three-layered approach: “Within the meta intelligence layer, our work focuses on a unified programming model and software development tools and libraries that are open in order to enable developers to deploy complex applications more easily. The meta ops layer describes the infrastructure layer that delivers compute to users beyond what is available to them locally. And finally, the meta compute layer is the raw horsepower necessary to power these metaverse experiences.”
Currently, and at a much smaller scale, we can already achieve a simpler metaverse with less realistic visuals and smaller populations.
One current example: The Verge, discussing Koduri’s blog post, points to Meta’s flagship VR space, Horizon Worlds, which caps a single space at 20 participants and whose visuals are far from hyperrealistic. The Verge also cites a Quartz interview in which Koduri estimated the computing gains we can expect over the next five years: just eight to 10 times current capabilities.
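Those two figures imply a rough timeline. If we treat the projected eight-to-tenfold gain every five years as a steady compounding rate (an assumption made here purely for illustration; neither Koduri nor The Verge frames it this way), the 1,000-times target is still well over a decade away:

```python
import math

# Assumption for illustration: computing capability multiplies by a constant
# factor every five years, per Koduri's 8x-10x five-year estimate.
TARGET = 1000  # Koduri's stated gap versus today's state of the art

def years_to_target(five_year_factor: float, target: float = TARGET) -> float:
    """Years needed if capability grows by five_year_factor every 5 years."""
    return 5 * math.log(target) / math.log(five_year_factor)

for factor in (8, 10):
    print(f"{factor}x per 5 years -> ~{years_to_target(factor):.1f} years to {TARGET}x")
# 10x per five years reaches 1,000x in exactly 15 years; 8x takes about 16.6.
```

Under this back-of-the-envelope assumption, even the optimistic end of Koduri’s estimate puts a 1,000-fold gain around 15 years out.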
Simply put – at least according to one company that has long been at the forefront of consumer computing – a fully realized, The Matrix-like digital world is still a long way off given what current computing infrastructure can deliver. – Rappler.com