University Researchers Collaborate with NVIDIA to Address Graphics Challenges

NVIDIA’s latest university collaborations in graphics research have produced a reinforcement learning model that smoothly simulates athletic movements, ultra-thin holographic glasses for virtual reality, and a real-time rendering technique for objects illuminated by hidden light sources.

These projects, along with more than a dozen others, will be on display at SIGGRAPH 2022, taking place August 8-11 in Vancouver and online. NVIDIA researchers have 16 technical papers accepted at the conference, representing work with 14 universities, including Dartmouth College, Stanford University, the Swiss Federal Institute of Technology Lausanne, and Tel Aviv University.

The papers span the breadth of graphics research, with advances in neural content creation tools, display and human perception, the mathematical foundations of computer graphics, and neural rendering.

Neural Tool for Versatile Simulated Characters

When a reinforcement learning model is used to develop a physics-based animated character, the AI typically learns one skill at a time: walking, running, or perhaps cartwheeling. But researchers from UC Berkeley, the University of Toronto, and NVIDIA have created a framework that lets the AI learn a whole repertoire of skills, demonstrated with a warrior character who can wield a sword, use a shield, and get back up after a fall.

Achieving these smooth, lifelike motions for animated characters is usually tedious and labor-intensive, with developers starting from scratch to train the AI for each new task. As described in this paper, the research team enabled the reinforcement learning AI to reuse previously learned skills to respond to new scenarios, improving efficiency and reducing the need for additional motion data.
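
To make the idea concrete, here is a minimal sketch of that kind of skill reuse, assuming a pretrained low-level controller exposed through a latent skill space. The class names, dimensions, and wiring below are illustrative assumptions, not the paper’s actual code.

```python
import torch
import torch.nn as nn

# Minimal sketch of skill reuse: a frozen low-level controller maps the
# character state plus a latent "skill" vector to joint actions, while a
# small task policy is trained to pick skills for each new objective.
# All names and dimensions here are hypothetical.

STATE_DIM, SKILL_DIM, ACTION_DIM = 64, 16, 28

class LowLevelController(nn.Module):
    """Pretrained on motion data; frozen when learning new tasks."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + SKILL_DIM, 256), nn.ReLU(),
            nn.Linear(256, ACTION_DIM))

    def forward(self, state, skill):
        return self.net(torch.cat([state, skill], dim=-1))

class TaskPolicy(nn.Module):
    """Learns only to choose skill vectors for the new task."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, SKILL_DIM), nn.Tanh())

    def forward(self, state):
        return self.net(state)

controller = LowLevelController()
for p in controller.parameters():   # reuse skills: no gradient updates
    p.requires_grad_(False)

policy = TaskPolicy()
state = torch.randn(1, STATE_DIM)           # placeholder character state
action = controller(state, policy(state))  # joint targets for the simulator
print(action.shape)                         # torch.Size([1, 28])
```

Training only the small task policy on top of frozen skills is what avoids starting from scratch for each new behavior.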

Tools like this can be used by creators in the fields of animation, robotics, games, and therapeutics. At SIGGRAPH, NVIDIA researchers will also present papers on 3D neural tools for surface reconstruction from point clouds and interactive shape editing, plus 2D tools for AI to better understand gaps in vector sketches and improve the visual quality of time-lapse videos.

Bringing Virtual Reality to Lightweight Glasses

Most VR users access 3D digital worlds by strapping on bulky head-mounted displays, but researchers are working on lightweight alternatives that look like standard eyeglasses.

A collaboration between NVIDIA and Stanford researchers has packed the technology needed for 3D holographic images into a wearable display just a few millimeters thick. At 2.5 millimeters, the display is less than half the thickness of other thin VR displays, known as pancake lenses, which use a technique called folded optics that can only support 2D images.

The researchers accomplished this feat by approaching display quality and size as a computational problem, and co-designing the optics with an AI-powered algorithm.

While previous VR displays required a certain distance between a magnifying eyepiece and a display panel to create a hologram, this new design uses a spatial light modulator, a tool that can create holograms right in front of the user’s eyes without needing that gap. Additional components – a pupil-replicating waveguide and a geometric phase lens – further reduce the device’s footprint.

It’s one of two VR collaborations between Stanford and NVIDIA at the conference, with another paper proposing a new computer-generated holography framework that improves image quality while optimizing bandwidth usage. A third paper in this area of display and perception research, co-authored with scientists from New York University and Princeton University, measures how rendering quality affects how quickly users react to information on screen.

Lightbulb Moment: New Levels of Real-Time Lighting Complexity

Accurately simulating the paths of light in a scene in real time has long been considered the “holy grail” of computer graphics. Work detailed in a paper from the University of Utah School of Computing and NVIDIA raises the bar, introducing a path-resampling algorithm that enables real-time rendering of scenes with complex lighting, including hidden light sources.

Consider walking into a dark room with a glass vase on a table, indirectly lit by a floor lamp located outside. The glossy surface creates a long light path, with rays bouncing multiple times between the light source and the viewer’s eye. Computing these light paths is usually too complex for real-time applications like games, so it has mostly been reserved for film and other offline rendering applications.

The paper highlights the use of statistical resampling techniques, in which the algorithm reuses computations thousands of times while tracing these complex light paths, to efficiently approximate the light paths in real time. The researchers applied the algorithm to a classically difficult scene in computer graphics: a set of indirectly lit teapots made of metal, ceramic, and glass.
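
At its core, this family of methods builds on resampled importance sampling. The toy sketch below illustrates the statistical trick in one dimension, with stand-in functions rather than anything from the paper: draw many cheap candidates, keep one in proportion to how promising it looks, and reweight so the estimate stays unbiased.

```python
import random

def ris_estimate(target, f, propose, pdf, num_candidates=32):
    """Estimate the integral of f by resampling candidates drawn from an
    easy proposal distribution toward a target density that mimics f."""
    xs = [propose() for _ in range(num_candidates)]
    ws = [target(x) / pdf(x) for x in xs]   # resampling weights
    total = sum(ws)
    if total == 0.0:
        return 0.0
    # Keep one candidate proportionally to its weight...
    x = random.choices(xs, weights=ws, k=1)[0]
    # ...and correct for the resampling so the estimator stays unbiased.
    return f(x) / target(x) * (total / num_candidates)

# Toy 1D example: integrate f(x) = x^2 over [0, 1) with a uniform proposal.
f = lambda x: x * x
target = lambda x: x * x + 1e-6   # cheap approximation of f
estimate = sum(ris_estimate(target, f, random.random, lambda x: 1.0)
               for _ in range(10000)) / 10000
print(estimate)  # converges toward 1/3
```

In a renderer, the candidates would be light paths and the target function a cheap approximation of each path’s contribution to the pixel.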

Related NVIDIA papers at SIGGRAPH include a new sampling strategy for inverse volume rendering, a novel mathematical representation for 2D shape manipulation, software to create samplers with improved uniformity for rendering and other applications, and a way to turn biased rendering algorithms into more efficient unbiased ones.

Neural Rendering: NeRFs and GANs Power Synthetic Scenes

Neural rendering algorithms learn from real-world data to create synthetic images – and NVIDIA research projects are developing cutting-edge tools to do this in 2D and 3D.

In 2D, the StyleGAN-NADA model, developed in collaboration with Tel Aviv University, generates images with specific styles based on user-supplied text prompts, without requiring example images for reference. For instance, a user could generate images of vintage cars, turn their dog into a painting, or transform houses into huts.
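
The key ingredient is a directional loss in CLIP’s shared image-text embedding space. The sketch below shows that loss in isolation, assuming OpenAI’s CLIP package is installed; the prompts are illustrative, and the StyleGAN generator and training loop are omitted.

```python
import torch
import clip  # OpenAI's CLIP (pip install git+https://github.com/openai/CLIP.git)

# Sketch of a directional CLIP loss for text-guided generator adaptation:
# push the edit direction between two batches of images to match the
# direction between a source and a target text prompt in CLIP space.
# The prompts and image tensors below are placeholders.

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

with torch.no_grad():
    tokens = clip.tokenize(["photo of a dog", "painting of a dog"]).to(device)
    e_src, e_tgt = model.encode_text(tokens).float()
    text_dir = torch.nn.functional.normalize(e_tgt - e_src, dim=-1)

def directional_loss(img_frozen, img_trainable):
    """img_*: image batches preprocessed to CLIP's input size (N, 3, 224, 224)."""
    f_src = model.encode_image(img_frozen).float()
    f_tgt = model.encode_image(img_trainable).float()
    img_dir = torch.nn.functional.normalize(f_tgt - f_src, dim=-1)
    # 1 - cosine similarity between the image and text edit directions
    return (1.0 - (img_dir * text_dir).sum(dim=-1)).mean()
```

During adaptation, img_frozen would come from a frozen copy of the generator and img_trainable from the copy being fine-tuned toward the target domain.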

And in 3D, researchers from NVIDIA and the University of Toronto are developing tools that can support the creation of large-scale virtual worlds. Instant Neural Graphics Primitives, the NVIDIA paper behind the popular Instant NeRF tool, will be presented at SIGGRAPH.

NeRFs, 3D scenes reconstructed from a collection of 2D images, are just one capability of the neural graphics primitives technique. It can be used to represent any complex spatial information, with applications including image compression, highly accurate representations of 3D shapes, and ultra-high-resolution images.
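
Much of the technique’s speed comes from a multiresolution hash encoding that replaces one big neural network with small trainable feature tables. Here is a deliberately simplified sketch of the lookup, using nearest-vertex sampling where the actual method interpolates between grid corners; the table size and resolutions are illustrative.

```python
import numpy as np

# Simplified sketch of a multiresolution hash encoding: each 3D point
# indexes small trainable feature tables at several grid resolutions,
# and the looked-up features are concatenated as input to a tiny MLP.

PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)
TABLE_SIZE = 2 ** 14        # entries per level (illustrative)
FEATURES = 2                # feature channels per entry
LEVELS = [16, 32, 64, 128]  # grid resolutions, coarse to fine

tables = [(np.random.randn(TABLE_SIZE, FEATURES) * 1e-4).astype(np.float32)
          for _ in LEVELS]

def spatial_hash(ijk):
    """XOR the integer grid coordinates scaled by large primes, mod table size."""
    h = np.bitwise_xor.reduce(ijk.astype(np.uint64) * PRIMES)
    return int(h % TABLE_SIZE)

def encode(xyz):
    """xyz in [0, 1)^3 -> concatenated per-level features."""
    feats = []
    for res, table in zip(LEVELS, tables):
        ijk = np.floor(np.asarray(xyz) * res).astype(np.uint64)
        feats.append(table[spatial_hash(ijk)])   # nearest-vertex lookup
    return np.concatenate(feats)

print(encode([0.25, 0.5, 0.75]).shape)  # (8,): len(LEVELS) * FEATURES
```

Because each lookup is constant-time and the tables are tiny compared with a dense grid, training and rendering become fast enough for interactive use.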

That work is paired with a University of Toronto collaboration that compresses 3D neural graphics primitives much as JPEG compresses 2D images. This can help users store and share 3D maps and entertainment experiences between small devices such as phones and robots.

There are more than 300 NVIDIA researchers worldwide, with teams focused on topics including AI, computer graphics, computer vision, self-driving cars, and robotics. Learn more about NVIDIA Research.