Gesture recognition — changing how we play with tech

Not just for gamers, gesture-recognition tech is providing an advantage everywhere from the highway to the emergency room.
2 November 2020

The Microsoft Kinect in action. Source: Shutterstock

  • Once the preserve of sci-fi flicks, gesture-based technology is a rapidly growing market
  • Not just for gamers anymore, the technology now has genuine practical value, whether that’s on the road or in the ER

“Science fiction’s always been the kind of first level alert to think about things to come,” Steven Spielberg once said; “Every science fiction movie I have ever seen, any one that’s worth its weight in celluloid, warns us about things that ultimately come true.”

In his 2002 hit sci-fi film Minority Report, set in the year 2054, there is a moment when actor Tom Cruise, playing police chief John Anderton, manipulates surveillance footage and other data with a quick swipe of his hand through the air. He can effortlessly and naturally turn it, shrink it, rewind it, and push it aside.

Before the film’s production, the world-renowned director invited fifteen experts to consider technologies that could feasibly be developed by that time. 2054 doesn’t sound as far away as it did 18 years ago, and that advanced gesture-based technology, in particular, seems a bit less mind-blowing. As we rapidly digitize, leaving paper in the past, gesture recognition continues to ascend as a viable alternative way to interact with technology, alongside the likes of voice recognition.

What is gesture recognition?

Gesture recognition is a type of perceptual computing user interface that allows computers to capture human gestures, interpret them as commands, and execute them. In general, a ‘gesture’ is any non-verbal movement intended to convey a message; in gesture recognition, it is any physical movement that a motion sensor can interpret, whether the pinch of fingers or the kick of a leg.
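
The idea can be illustrated with a minimal, purely hypothetical sketch: given a series of (x, y) hand positions reported by a motion sensor, a rule classifies the movement as a swipe and maps it to a command. Real systems use machine-learned models over camera or depth data; the function and threshold below are illustrative assumptions, not any vendor’s API.

```python
def classify_swipe(path, min_distance=50.0):
    """Classify a list of (x, y) points as a swipe gesture, or None.

    A toy rule: look at the net displacement between the first and
    last tracked positions; if it's large enough, its dominant axis
    and direction name the gesture.
    """
    if len(path) < 2:
        return None
    dx = path[-1][0] - path[0][0]
    dy = path[-1][1] - path[0][1]
    if max(abs(dx), abs(dy)) < min_distance:
        return None  # movement too small to count as a gesture
    if abs(dx) >= abs(dy):
        return "swipe_right" if dx > 0 else "swipe_left"
    return "swipe_down" if dy > 0 else "swipe_up"

# An interface layer then maps recognized gestures to commands:
COMMANDS = {"swipe_left": "previous item", "swipe_right": "next item"}

gesture = classify_swipe([(10, 100), (60, 102), (140, 105)])
print(gesture)                 # swipe_right
print(COMMANDS.get(gesture))   # next item
```

In production this classification step is where most of the engineering effort goes, since hands must first be detected and tracked reliably before any rule or model can name the gesture.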

While we might not use it every day, the technology is already in consumers’ hands through game consoles like the Nintendo Switch, PlayStation, and Xbox, as well as smart TVs, in some cases removing the need for traditional inputs like keys and buttons.

In fact, Microsoft was one of the first to deliver the technology at mainstream scale with the Kinect for Xbox 360, released in November 2010. It captured body and hand motions in real time, freeing gamers from traditional controllers and supporting multiple users within a small room. Today, Kinect is part of Microsoft’s cloud platform, Azure, where the device and its software development kit (SDK) are aimed at enterprise users and markets like logistics, robotics, healthcare, and retail.

Other technology behemoths have launched their own products. Google has provided gesture interactions on its mobile phones and smart speakers, while Huawei launched gesture control on its flagship mobile phone, the Mate 30. Apple filed a patent last year covering the use of gestures on smart speakers. These are just a few examples from multinationals; many other companies have adopted the technology over the last decade.

Becoming serious business

Combined with technologies like artificial intelligence (AI) and the Internet of Things (IoT), and with ever more accurate accelerometers, sensors, and infrared cameras, gesture recognition continues to advance and is becoming a sizeable industry in itself. According to a report by MarketsandMarkets, the gesture recognition market is projected to grow from US$9.8 billion in 2020 to US$32.3 billion by 2025, a compound annual growth rate of more than 25%.

In the automotive sector, manufacturers are keenly exploring how the technology can make interactions with infotainment systems more natural, so drivers don’t have to take their eyes off the road. Gesture recognition has an advantage over voice recognition, which can be distorted in a noisy cabin. Since 2016, BMW’s 7 Series line has offered gesture recognition that lets drivers turn the volume up or down, accept or reject a phone call, and change the angle of the multi-camera view.

Meanwhile, as our homes and workplaces become increasingly smart and touchless, gesture recognition is being applied to let users operate things like lights and heating without touching a switch or dial. In shopping malls and retail outlets, the technology can let shoppers navigate maps or catalogs, especially in an era of heightened demand for contactless interaction.

In the healthcare sector, gesture recognition has vast potential for maintaining the boundary between what is and isn’t sterile, while enabling surgeons to access the information and imaging required for the patient and procedure. In these settings, surgeons simply can’t interact with touchscreens or keyboards once scrubbed in without breaking asepsis.

Doctors and nurses could access a patient’s MRI scan, or make notes by ‘writing’ in the air.