Simulators soar to new heights in a mixed reality world

Real-world video combined with eye-tracked, high-resolution digital imagery provides an immersive training environment for rescue crews, forest firefighters, logistics teams, and more.
1 September 2022

Cloud-based environment: developers of custom simulators have a wide range of tools that can be deployed to build highly immersive and productive training experiences for users. Image credit: Shutterstock.

Microsoft Flight Simulator was a classic computer game back in the 1980s, allowing players to explore 8-bit skies around Chicago, Los Angeles, New York City, and Seattle. Decades later, PC and Xbox Series X versions offer players the virtual freedom to fly all around the world in rich detail that is almost unimaginable to fans of the original title. But even this mind-blowing jump in performance barely scratches the surface of what’s possible in the commercial simulator space. Here, developers are devising a fascinating array of hardware and software combinations with ultra-realistic controllers that can even mimic the feel of a hoisting cable. Commercial simulators are being deployed to train pilots to put out forest fires, let construction workers practice operating giant cranes and huge earth-moving equipment, help rescue crews manage challenging weather conditions, and much more besides.

Compelling composite imagery

High on the list of big breakthroughs in the simulator world is the skill with which developers can combine live video and digitally rendered images, with breathtaking results. “Mixed reality didn’t exist five years ago,” Ryan Binns, Chief Architect at BISim, told TechHQ. “Your brain goes with it because everything feels right.” Users of the latest rigs can see their own hands activating hardware switches, moving real-world joysticks, or turning an actual steering wheel – all captured by a video feed from externally mounted headset cameras. The stereo view provides depth perception, and more magic happens when operators turn their heads to look out of a cockpit window, for example.

Thanks to mixed reality engines, simulator displays can swap in digitally rendered ‘pass-through’ images of the virtual world outside, triggered as users turn their heads to look around. One way of doing this is to use chroma key compositing (or ‘green screens’) to mark out cockpit windows and other digitally rendered portions of the scene. But with this approach, the intersections between the real and virtual worlds can still sometimes distract users, breaking the spell. For a more believable experience, modelers today instead leverage computer-aided design (CAD) information – which precisely locates the various real-world cockpit switches, buttons, and other interactive controls – to improve the simulation. This approach ensures perfectly crisp, realistic edges between video and digitally rendered imagery.
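
To make the idea concrete, here’s a minimal sketch of per-pixel compositing in Python, assuming NumPy arrays for the camera feed and the rendered scene; the chroma-key function and the CAD-derived window mask are illustrative stand-ins, not any vendor’s actual pipeline.

```python
# A minimal sketch of per-pixel mixed reality compositing, assuming NumPy
# arrays for the live camera feed and the rendered virtual scene. The
# chroma-key function and CAD-derived mask are illustrative stand-ins.
import numpy as np

def chroma_key_mask(frame: np.ndarray, key=(0, 255, 0), tol=60) -> np.ndarray:
    """True where a pixel is close to the key colour (a 'green screen')."""
    diff = frame.astype(np.int16) - np.array(key, dtype=np.int16)
    return np.abs(diff).sum(axis=-1) < tol

def composite(camera: np.ndarray, rendered: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Show the virtual world where mask is True, live video elsewhere."""
    return np.where(mask[..., None], rendered, camera)

h, w = 720, 1280
camera = np.zeros((h, w, 3), dtype=np.uint8)        # stand-in video frame
rendered = np.full((h, w, 3), 200, dtype=np.uint8)  # stand-in virtual scene

# A CAD-derived mask fixes the window regions ahead of time, so edges stay
# crisp regardless of how studio lighting falls on the physical cockpit.
cad_window_mask = np.zeros((h, w), dtype=bool)
cad_window_mask[100:400, 200:1000] = True           # cockpit window region

frame_out = composite(camera, rendered, cad_window_mask)
```

The difference between the two approaches comes down to where the mask originates: a chroma key recomputes it from pixel colours every frame, while a CAD-derived mask is fixed geometry, which is why its edges hold up.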

Playing into this visually rich display are improvements in virtual reality (VR) headsets, which are an increasingly popular choice in the simulator sector thanks to their compact size. Headsets are much easier to transport and accommodate than larger monitors or bulkier projection setups. Today’s headsets, which push field-of-view (FOV) performance much closer to that of human binocular vision, are also highly immersive. But this performance comes with challenges.

Eye for detail

Boosting the available peripheral vision means that graphics cards need to handle many more pixels, on top of the requirement to provide high-definition images throughout. But rather than just throwing more hardware at the problem, developers can make use of eye-tracking functionality that’s already built into the headsets. “Rather than render the whole scene at maximum resolution, systems can provide the highest definition directly in front of the wearer’s pupils,” Binns explains.
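
The technique Binns describes is commonly known as foveated rendering. Here’s a rough sketch of the idea in Python, with tile positions, radii, and scale factors that are purely illustrative assumptions:

```python
# A rough sketch of foveated rendering: render at full resolution only
# near the tracked gaze point, progressively coarser further out.
# Tile positions, radii, and scale factors are illustrative assumptions.
import math

def resolution_scale(tile_center, gaze, fovea_radius=150, mid_radius=400):
    """Return a render-scale factor for a screen tile given the gaze point."""
    dist = math.dist(tile_center, gaze)
    if dist < fovea_radius:
        return 1.0   # full resolution directly in front of the pupils
    if dist < mid_radius:
        return 0.5   # half resolution in the near periphery
    return 0.25      # quarter resolution in the far periphery

gaze = (960, 540)  # as reported by the headset's eye tracker
for tile in [(960, 540), (1200, 600), (100, 100)]:
    print(tile, resolution_scale(tile, gaze))
```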

Graphics performance can also be lifted by deploying machine learning, which helps developers manage the demands of fast-moving scenes. Examples include Deep Learning Super Sampling (DLSS) – intelligent upscaling technology available with Nvidia chipsets that squeezes more out of the hardware. And one of the quests driving the deployment of faster, more capable graphics processing is the need to minimize latency. Too much of a lag between the wearer’s movements or actions within the simulator and the response of the images rendered inside the headset will soon make a user feel nauseous – a common complaint with off-the-shelf consumer devices. Also, trainees could be using professional simulators for hours at a time, which is another reason why the technology has to be many steps ahead.
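
As a back-of-envelope illustration, the sketch below sums a hypothetical motion-to-photon pipeline against the roughly 20 ms comfort threshold often cited for VR; all of the stage timings are assumptions, not measurements from any real system.

```python
# A back-of-envelope motion-to-photon latency check, assuming the roughly
# 20 ms comfort threshold often cited for VR. All stage timings here are
# illustrative assumptions, not measurements from any real system.
def motion_to_photon_ms(tracking_ms, render_ms, compositing_ms, display_ms):
    return tracking_ms + render_ms + compositing_ms + display_ms

BUDGET_MS = 20.0
total = motion_to_photon_ms(tracking_ms=2.0, render_ms=9.0,
                            compositing_ms=3.0, display_ms=5.0)
print(f"{total:.1f} ms of {BUDGET_MS:.1f} ms budget:",
      "OK" if total <= BUDGET_MS else "risk of nausea")
```

Upscaling techniques like DLSS attack the render term: drawing fewer pixels and intelligently upscaling them shrinks the largest slice of the budget.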

Replaying the benefits

When he’s not helping to build new simulators, Ryan Binns is up in the air flying real aircraft. He’s been piloting fixed-wing planes for 15 years and trains new learners in his spare time. It offers valuable insight for his day job and means that when he says that today’s custom simulators are incredibly realistic, you can confidently take him at his word. Simulators offer a range of benefits to users, allowing trainees to practice in a safe environment and giving teachers the opportunity to play scenarios back and talk through the experience with students – activities that are either impossible or certainly much more complicated (and expensive) to achieve in a real-world setting. The scale of the experience that’s possible in the virtual world is another plus point.
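
One plausible way to support that replay capability is to log timestamped control inputs during a session and feed them back later; the sketch below is a minimal illustration, with event fields and control names that are assumptions rather than any simulator’s actual data model.

```python
# A minimal sketch of scenario recording for instructor debriefs: log
# timestamped control inputs so a session can be replayed step by step.
# The event fields and control names are assumptions for illustration.
from dataclasses import dataclass, field

@dataclass
class ControlEvent:
    t: float        # seconds since the scenario started
    control: str    # e.g. "collective", "rudder", "winch"
    value: float

@dataclass
class ScenarioLog:
    events: list = field(default_factory=list)

    def record(self, t: float, control: str, value: float) -> None:
        self.events.append(ControlEvent(t, control, value))

    def replay(self):
        # Yield events in time order so they can be fed back into the sim.
        yield from sorted(self.events, key=lambda e: e.t)

log = ScenarioLog()
log.record(1.5, "winch", 1.0)
log.record(0.0, "collective", 0.4)
for event in log.replay():
    print(event)
```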

So-called ‘whole earth simulators’ are making the digital realm a much more practical resource for companies. “You can navigate credibly in a sim using real-world landmarks,” said Binns. “And the availability of this data is getting better and better.” A key area of expertise for simulator providers is being able to ingest the variety of information streams that now exist – for example, satellite imagery – and blend them into a believable whole earth rendering. Smart techniques being rolled out include augmenting photo-imagery – which can only provide so much detail when captured from high up in space – with higher-resolution, accurately placed digital features that can be overlaid. Unlike a background photo, digital assets can be changed over time, adding to the level of realism.
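
A simplified sketch of that layering idea: keep a coarse satellite base layer covering the whole globe, and pick the sharpest accurately placed overlay available at each point. The layer names, resolutions, and bounds below are illustrative assumptions.

```python
# A simplified sketch of layered whole-earth imagery: a global satellite
# base layer plus accurately placed higher-resolution overlays. Layer
# names, resolutions, and bounds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ImageryLayer:
    name: str
    metres_per_pixel: float  # lower means sharper imagery
    bounds: tuple            # (min_lat, min_lon, max_lat, max_lon)

    def covers(self, lat: float, lon: float) -> bool:
        min_lat, min_lon, max_lat, max_lon = self.bounds
        return min_lat <= lat <= max_lat and min_lon <= lon <= max_lon

def best_layer(layers, lat, lon):
    """Pick the sharpest layer covering this point (base layer covers all)."""
    return min((l for l in layers if l.covers(lat, lon)),
               key=lambda l: l.metres_per_pixel)

layers = [
    ImageryLayer("satellite_base", 15.0, (-90, -180, 90, 180)),
    ImageryLayer("airport_detail", 0.3, (51.4, -0.5, 51.5, -0.4)),
]
print(best_layer(layers, 51.47, -0.45).name)  # -> airport_detail
```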

Today’s industrial simulation experience is a practical and compelling one, where cutting-edge technology is deployed on the features that matter most. This could mean custom controllers that capture the feel of cargo swaying in the wind, or snow algorithms repurposed to create ash layers convincing enough to train firefighters. These digital worlds benefit from a wide range of talents, including the software developers building the code, the engineering teams integrating the hardware, and the subject matter experts who validate the final experience.