Facebook open-sources ‘hyper-realistic’ VR

Facebook wants to advance its hyper-realistic VR tech by handing over the code to the community.
20 December 2018

Facebook’s virtual reality (VR) research team, Facebook Reality Labs (FRL), has announced the public release of DeepFocus, its AI-driven system for rendering hyper-realistic focus effects on VR platforms.

DeepFocus works with advanced prototype headsets, rendering blur in real time and at various focal distances.

Defocus blur (also known as retinal blur) is important for achieving realism and depth perception in VR. So far, DeepFocus is the first system able to produce the effect in real time.

“When someone wearing a DeepFocus-enabled headset looks at a nearby object, it immediately appears crisp and in focus, while background objects appear out of focus, just as they would in real life,” FRL said.

Using deep learning, the company developed a convolutional neural network that produces an image with accurate retinal blur as soon as the eye looks at a different part of a scene.

“Some traditional approaches, such as using an accumulation buffer, can achieve physically accurate defocused blur. But they can’t produce the effect in real time for sophisticated, rich content, because the processing demands are too high for even state-of-the-art chips,” said FRL.

“Instead, we solved this problem using deep learning. We developed a novel end-to-end convolutional neural network that produces the image with accurate retinal blur as soon as the eye looks at different parts of a scene.”

Relying on standard RGB-D color and depth input, DeepFocus can work with nearly all existing VR functions. It is also compatible with the three types of headsets currently being explored in the VR research community: varifocal displays (including Facebook’s own Half Dome), multifocal displays, and light field displays.
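The brute-force approach FRL contrasts its network against can be sketched in a few lines: split an RGB-D frame into depth layers, blur each layer with a radius driven by its distance from the focal plane, and composite the results. The NumPy sketch below is illustrative only — the function names, the layering scheme, and the `aperture` constant are assumptions, not DeepFocus internals — but it shows why such methods struggle in real time: every depth layer triggers a full-frame filter pass.

```python
import numpy as np

def box_blur(img, radius):
    """Separable box blur with edge padding; radius in pixels."""
    out = img.astype(float)
    if radius < 1:
        return out
    k = 2 * radius + 1
    for axis in (0, 1):
        pad = [(0, 0)] * out.ndim
        pad[axis] = (radius, radius)
        padded = np.pad(out, pad, mode="edge")
        # Windowed sums via a cumulative sum, prefixed with a zero slice.
        csum = np.cumsum(padded, axis=axis)
        zero = np.zeros_like(np.take(csum, [0], axis=axis))
        csum = np.concatenate([zero, csum], axis=axis)
        hi = np.take(csum, np.arange(k, csum.shape[axis]), axis=axis)
        lo = np.take(csum, np.arange(csum.shape[axis] - k), axis=axis)
        out = (hi - lo) / k
    return out

def naive_defocus(rgb, depth, focal_depth, n_layers=4, aperture=6.0):
    """Depth-layered defocus blur for an RGB-D frame.

    Splits the depth map into layers, blurs each layer with a radius
    from a thin-lens-style circle-of-confusion term, and composites.
    'aperture' is an arbitrary scale factor, not a real camera value.
    """
    out = np.array(rgb, dtype=float)
    edges = np.linspace(depth.min(), depth.max(), n_layers + 1)
    for i in range(n_layers):
        upper = (depth <= edges[i + 1]) if i == n_layers - 1 else (depth < edges[i + 1])
        mask = (depth >= edges[i]) & upper
        if not mask.any():
            continue
        d = depth[mask].mean()  # representative depth of this layer
        radius = int(round(aperture * abs(1.0 / d - 1.0 / focal_depth)))
        out[mask] = box_blur(rgb, radius)[mask]
    return out
```

Pixels at the focal depth get a blur radius of zero and stay sharp, while distant layers are smoothed — the effect DeepFocus learns to approximate in a single network pass.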

Open-sourcing the product means that a broad swath of VR developers can now contribute to the advancement and adoption of this technique, potentially leading VR towards much more lifelike visuals in a range of applications, from gaming and training to education and design.

Another aspect that FRL wants to improve relates to efficiency and power consumption. Currently, the demo runs on four GPUs (graphics processing units), but the team wants to dial that requirement down to just one.

According to FRL vision scientist Marina Zannoli, the end goal of the project is “to deliver visual experiences that are indistinguishable from reality”.

While crucial for adding realism, retinal blur is also considered an important factor in making VR experiences more comfortable. The system is all about “all-day immersion”, according to FRL, which means addressing the eye strain and visual fatigue that have so far hampered the technology’s adoption.

There is one caveat to the announcement: DeepFocus is released under a Creative Commons license that precludes commercial exploitation.

That means that while everyone is free to download, use and modify the source code, they are not permitted to use it commercially — a clause that will likely confine commercial work with the technology to Facebook’s own systems.