
Google's AI Solution for Hand & Finger Tracking Could Be Huge for Smartglasses



While early attempts at consumer smartglasses relied on touchpads and handheld or wearable controllers for user input, it is the gesture control interfaces of the HoloLens 2 and the Magic Leap One that represent the future of smartglasses input.

A new machine learning model developed by Google's research division could make it possible to bring the complex hand gesture controls common in high-end AR systems to lightweight smartglasses, without the added bulk and cost of dedicated depth and motion sensors.

This week, the Google AI team introduced its latest approach to hand and finger tracking, which uses the cross-platform, open-source MediaPipe framework to process video directly on mobile devices (rather than in the cloud) and map 21 keypoints of the hand and fingers via machine learning models.
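For a sense of what that output looks like in practice, here is a minimal sketch using MediaPipe's Python "Hands" solution to pull the 21 landmarks from a live camera feed entirely on-device. The parameter values are illustrative defaults, not the exact configuration described in Google's post.

```python
# Minimal sketch: extracting the 21 hand/finger keypoints with MediaPipe's
# Python "Hands" solution. Confidence thresholds are illustrative defaults.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

cap = cv2.VideoCapture(0)  # local webcam; all processing stays on-device
with mp_hands.Hands(max_num_hands=2,
                    min_detection_confidence=0.5,
                    min_tracking_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures frames in BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            for hand in results.multi_hand_landmarks:
                # Each detected hand exposes 21 normalized (x, y, z) landmarks.
                for lm in hand.landmark:
                    print(lm.x, lm.y, lm.z)
cap.release()
```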

"We hope that the creation of this hand-perception functionality will lead to creative development for the broader research and development community. Use cases that stimulate new applications and new research pathways," the team wrote in a blog post detailing the approach has been.

Images via Google

Google's approach to hand and finger tracking divides the task among three machine learning models. Instead of training a model to detect the hand itself, which can appear in a wide range of sizes and poses, the Google researchers used a palm detection model. The team achieved an average precision of nearly 96% with this approach.
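Conceptually, the three stages compose as shown below. The function names (detect_palms, predict_landmarks, classify_gesture) are stand-ins for the palm detection, hand landmark, and gesture recognition models; they are not part of any published MediaPipe API.

```python
# Hypothetical composition of the three-stage pipeline described above.
def track_hands(frame):
    gestures = []
    # Stage 1: a palm detector finds bounding boxes for palms, which vary
    # far less in shape than whole hands with articulated fingers.
    for palm_box in detect_palms(frame):
        hand_crop = crop(frame, palm_box)
        # Stage 2: a landmark model regresses 21 keypoint coordinates
        # within the cropped hand region.
        keypoints = predict_landmarks(hand_crop)   # list of 21 (x, y, z)
        # Stage 3: a classifier maps per-finger poses to a known gesture.
        gestures.append(classify_gesture(keypoints))
    return gestures
```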

Using the detected palm, a second machine learning model identifies the coordinates of 21 hand and knuckle keypoints for the hand or hands in the camera view. A third algorithm then determines the visible gesture by deriving the pose of each finger and comparing the result against a set of predefined hand gestures, including counting gestures and a variety of common hand signs.
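A simplified illustration of that last step: derive each finger's open or closed state from the landmarks and match the pattern against a few predefined gestures. This is a toy heuristic for an upright hand, not Google's actual classifier; only the landmark indexing (wrist first, fingertips at indices 4, 8, 12, 16, 20) follows MediaPipe's published 21-point layout.

```python
# Toy gesture classifier built on the 21 hand landmarks.
FINGERTIPS = {"thumb": 4, "index": 8, "middle": 12, "ring": 16, "pinky": 20}

def finger_states(landmarks):
    """landmarks: list of 21 (x, y) points in image coordinates, wrist first."""
    states = {}
    for name, tip in FINGERTIPS.items():
        joint = tip - 2  # joint two points below the fingertip
        # A finger counts as extended if its tip sits above (smaller y than)
        # the joint below it -- a rough test that assumes an upright hand.
        states[name] = landmarks[tip][1] < landmarks[joint][1]
    return states

GESTURES = {
    (False, True, False, False, False): "pointing",
    (False, True, True, False, False): "victory",
    (True, True, True, True, True): "open palm",
    (False, False, False, False, False): "fist",
}

def classify(landmarks):
    key = tuple(finger_states(landmarks)[f] for f in FINGERTIPS)
    return GESTURES.get(key, "unknown")
```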

Images via Google

In other words, this machine learning approach can run on Android or iOS devices without dedicated motion or depth sensors. In addition, the team has open sourced the models so that other developers and researchers can build on them, and it plans to improve their accuracy and performance over time.

In the near term, the hand tracking system could help developers match the AR experiences of Snapchat and Facebook, which have turned hand recognition and tracking into selfie camera effects.

Google could also use the technology, combined with the Soli radar sensor in the Pixel 4, to create unique AR experiences similar to the Animojis on the iPhone X, which rely on a combination of Apple's ARKit and its TrueDepth camera.

Image via Google

It is in smartglasses, though, that the technology could matter most. Eliminating the motion and depth sensor array would allow hardware makers to approximate the user input methods of the HoloLens 2 and the Magic Leap One.

More and more technology companies are relying on AI to solve the form factor and functionality equation for AR wearables. Even Microsoft matches the AI-driven surface detection of ARKit and ARCore with the new scene understanding capabilities of the HoloLens 2. The software approach could be the key to making smartglasses lean enough to be worn all day, rather than just in the comfort of the user's home or office.

Don't Miss: The Future of Apple's Augmented Reality Smartglasses & the Android Copies to Follow

