
NVIDIA to Transcend the Metaverse Future With Advanced Innovations
November 22, 2021 News


Written by: Muhammad Zulhusni, Journalist, AOPG

Since Mark Zuckerberg announced the rebranding of Facebook and declared that the metaverse would be the successor to the mobile internet, the metaverse has become an increasingly popular concept in the tech world. Even NVIDIA is getting in on the act.

At a recent NVIDIA GTC press pre-briefing, Richard G. Kerris, Vice President of Omniverse Platform Development and General Manager of Media & Entertainment at NVIDIA, delved into the topics of Omniverse, the metaverse, and digital twins, stating that the next era of innovation will help us visualise things that haven’t yet been built. Virtual worlds, he argued, are what make up the next generation of the web, and NVIDIA built Omniverse as the platform for creating them.

What Can We Expect From NVIDIA’s Omniverse?

The Omniverse can be used for a variety of purposes by customers. Creators, designers, and engineers can connect major design tools, assets, and projects in a shared virtual space to collaborate and iterate. In addition, developers and software providers can easily create powerful tools on the modular platform to extend its functionality. During the briefing, NVIDIA announced some exciting new Omniverse platform features:

  • Showroom: Showroom allows anyone with an RTX-based system to interact, play, and learn from various Omniverse demos.
  • FARM: FARM is a systems layer that enables multi-GPU, multi-agent rendering, and simulation.
  • AR: The ability to blend Omniverse content with what’s on your computer and with the real world around you.
  • VR: The first full-fidelity VR experience with real-time, full-frame ray tracing.

Aside from that, NVIDIA announced that Omniverse Enterprise will be available soon. “This is a big breakthrough. An enterprise subscription sold by our global system manufacturers means that an enterprise-ready, multi-GPU scalable 3D design collaboration platform will be available for companies worldwide. And what it brings you is the ability to work alongside one another or across different locations as teams, and to take your work as an enterprise customer to the next level,” Richard explained.

To go along with that, NVIDIA also announced:

  • Modulus: A framework for creating physics-ML models for digital twins, helping businesses develop and maintain physically accurate, true-to-life digital-twin simulations. This matters because it lets them complete simulation work far more quickly.
  • Omniverse Replicator: A powerful synthetic data-generation engine that generates physically simulated synthetic data for training deep neural networks. What makes this so significant is that synthetic data is critical for the future of AI. Replicator enables NVIDIA to generate diverse, massive, and accurate data sets from which to build high-quality, high-performing and safe AI.
  • Omniverse Avatar: Omniverse Avatar is a platform for creating interactive AI avatars. It connects the company’s speech AI, computer vision, natural language understanding, recommendation engines, and simulation technologies. Avatars created in the platform are interactive characters with ray-traced 3D graphics that can see, speak, converse, and understand naturally spoken intent. To make this possible, NVIDIA connects several of its core SDKs:
    • Riva for speech AI.
    • Metropolis for perception.
    • Merlin for recommendations.
    • Fleet Command for scaling.
    • Omniverse for animation and rendering.
  • Omniverse Avatar “Project Tokkio”: Project Tokkio is an AI-enabled customer service avatar platform. It is a reference application that allows NVIDIA to demonstrate interactive customer support, leveraging technologies such as Riva, Metropolis, Merlin, Fleet Command, and Omniverse for fully interactive, instant dialogue and customer recognition. To interact with the physical world, Project Tokkio can be deployed in physical robots and talking kiosks.
  • Omniverse Avatar “Project Maxine”: A platform for AI-powered video conferencing, live streaming, telepresence, virtual collaboration, and content creation. It is a set of highly optimised AI SDKs for video effects, audio effects, and Augmented Reality (AR). The AI modular features can be chained into existing pipelines or used to create new customised and scalable end-to-end pipelines.
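The SDK chain above can be pictured as a pipeline that passes a user utterance through successive stages. The sketch below is purely illustrative: the stage names mirror the SDKs named in the briefing (Riva, Merlin, Omniverse), but every function here is a mock stand-in, not the real SDK API.

```python
# Hypothetical avatar pipeline in the spirit of Omniverse Avatar.
# Each stage is a mock; the real SDKs (Riva, Metropolis, Merlin,
# Omniverse) expose very different interfaces.
from dataclasses import dataclass

@dataclass
class AvatarContext:
    audio: str                   # raw user utterance (stand-in for audio)
    transcript: str = ""         # filled by the speech stage
    intent: str = ""             # filled by the understanding stage
    recommendation: str = ""     # filled by the recommendation stage
    rendered_response: str = ""  # filled by the rendering stage

def speech_to_text(ctx):         # Riva-like speech AI stage (mock)
    ctx.transcript = ctx.audio.lower().strip()
    return ctx

def understand_intent(ctx):      # NLU stage (mock keyword matching)
    ctx.intent = "order_food" if "order" in ctx.transcript else "chat"
    return ctx

def recommend(ctx):              # Merlin-like recommender stage (mock)
    ctx.recommendation = {"order_food": "daily special"}.get(ctx.intent, "small talk")
    return ctx

def render_avatar(ctx):          # Omniverse-like animation stage (mock)
    ctx.rendered_response = f"[avatar speaks] Suggesting: {ctx.recommendation}"
    return ctx

def run_pipeline(audio):
    ctx = AvatarContext(audio=audio)
    for stage in (speech_to_text, understand_intent, recommend, render_avatar):
        ctx = stage(ctx)
    return ctx

print(run_pipeline("I'd like to ORDER lunch").rendered_response)
```

The design point the briefing makes is that the value is in the chaining: each stage enriches a shared context, so new capabilities can be slotted in without rewriting the others.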

Intelligent Robots to Become a Part of Everyday Life

The briefing did not just cover virtual innovations, but robots as well. Deepu Talla, Vice President and General Manager of Embedded and Edge Computing at NVIDIA, believes that they will play significant roles in our lives. However, he said that when most people think of robots, they think of the hardware, the computer that powers the robots. Although this is correct, he says it is not the end of the story. According to Deepu, there are many layers and software technologies that must come together to create a true robot.

“In order to build a robot, [it must] perceive the world, reason, and then perform the final action. To run these neural networks, Artificial Intelligence requires a lot of data. So, we use both real-world data and synthetic data to teach the robot, and we use Isaac Sim to do the synthetic data training. Then, while doing the AI training, you need a supercomputer or a lot of compute, and NVIDIA solutions are available,” said Deepu.

As part of the Omniverse, he also stated that simulation is a critical component of robotics. Experimenting with robots in simulation is safer than in the real world. As the importance of data quality grows, NVIDIA is releasing the new Omniverse Replicator for Isaac Sim application, which is based on the recently announced Omniverse Replicator synthetic data-generation engine. These new Isaac Sim features enable ML engineers to generate high-quality synthetic datasets for training robust deep-learning perception models.
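The idea behind Replicator-style synthetic data is domain randomization: vary scene parameters widely while keeping ground-truth labels exact and free. The toy sketch below captures only that concept; the class names and parameter ranges are invented for illustration, and a real Replicator pipeline renders labelled images rather than dicts.

```python
# Toy sketch of domain-randomized synthetic data generation, the idea
# behind Omniverse Replicator for Isaac Sim. Each "sample" is just a dict
# of randomized scene parameters plus a ground-truth label; in simulation,
# labels come for free.
import random

def generate_sample(rng, object_classes):
    cls = rng.choice(object_classes)
    return {
        "label": cls,                                # perfect ground truth
        "position": (rng.uniform(-1, 1), rng.uniform(-1, 1)),
        "lighting_lux": rng.uniform(100, 2000),      # randomized lighting
        "texture_id": rng.randrange(50),             # randomized appearance
    }

def generate_dataset(n, seed=0):
    rng = random.Random(seed)
    classes = ["pallet", "forklift", "worker"]       # illustrative classes
    return [generate_sample(rng, classes) for _ in range(n)]

dataset = generate_dataset(1000)
```

Seeding the generator also makes datasets reproducible, which matters when ML engineers need to trace a perception model’s behaviour back to the data it was trained on.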

NVIDIA also unveiled the NVIDIA Jetson AGX Orin™, the world’s smallest, most powerful, and energy-efficient AI supercomputer for robotics, autonomous machines, medical devices, and other types of embedded computing at the edge.

Jetson AGX Orin, based on the NVIDIA Ampere architecture, provides 6x the processing power while maintaining form factor and pin compatibility with its predecessor, Jetson AGX Xavier™. It can perform 200 trillion operations per second, similar to a GPU-enabled server but in the palm of your hand.
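The 6x claim is easy to sanity-check: Jetson AGX Xavier is commonly rated at roughly 32 INT8 TOPS (that figure is an assumption here, not from the briefing), so 200 TOPS is in the right ballpark.

```python
# Back-of-envelope check of the "6x" generational claim.
orin_tops = 200    # trillion operations per second (from the article)
xavier_tops = 32   # assumed predecessor figure, ~32 INT8 TOPS
speedup = orin_tops / xavier_tops
print(f"{speedup:.2f}x")  # roughly the 6x quoted
```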

“As robotics and embedded computing transform manufacturing, healthcare, retail, transportation, smart cities and other essential sectors of the economy, the demand for processing continues to surge,” said Deepu. “Jetson AGX Orin addresses this need, enabling the 850,000 Jetson developers and over 6,000 companies building commercial products on it to create and deploy autonomous machines and edge AI applications that once seemed impossible.”