During its presentation at Unite Berlin, Magic Leap gave attendees a crash course in developing for Magic Leap One (ML1). We learned a lot more about how the device works and what we can expect from it.
Alessia Laidacker, Interaction Director, and Brian Schwab, head of the Interaction Lab, led the session, which was designed to teach developers how to create content for the new device and operating system.
Here's what we learned...
More Content Partners Unveiled
In recent months, Magic Leap has announced its work with Sigur Ros and the NBA. But during the Unite Berlin event, two new content partners were unveiled.
Meow Wolf, a company that specializes in transforming art galleries into augmented reality experiences, will bring its talents to Magic Leap, as will game developer Funomena, which is working on some "outside the box" content (not necessarily games) for the device. During the Q&A session, Schwab explained Shaquille O'Neal's description of his ML1 experience at Recode's Code Media conference in February.
The experience was a virtual screen that let Shaq watch an NBA game with 3D assets on the table in front of him. Shaq was also able to interact with those elements to pull up in-game information while the screen hovered before him in the room.
Schwab addressed the elephant in the room: the ML1's field of view will be limited, though he didn't reveal specifics. He emphasized that developers, knowing the field of view limitation, should be prepared to adapt to it with a "less is more" approach. He also noted that filling the user's field of view can distract from an otherwise immersive experience and draw attention to the display's boundaries, leaving users feeling like they're watching TV rather than experiencing an augmentation of the real world. And keeping people grounded in the real world is Magic Leap's goal for immersive content designed for the device.
ML1 Will Track User Experience Data
While Magic Leap has already mentioned that head pose, hand gestures, eye tracking, and voice commands are all part of the ML1 user interface, the Unite Berlin presentation revealed a fifth element that developers will have access to: geo/temporal information from user interactions.
"We actually collect some of this information over time, so we can give you trends both per person and [for] multiple users in one area," said Schwab. "We have access to new streams of information that feed back into a better user experience."
Developers can use this information to call up interactive content that draws users' attention toward the action or toward other content.
Based on this user input, Magic Leap has developed its own transmodal interaction model, which combines the position and movement of the head, eyes, and hands with the context of an action to identify target objects or focus areas. It's called TAMDI, short for Target, Acquisition, Manipulation, Deactivation, and Integration, a cyclical process that allows developers to measure user input and return the correct interactive result.
As a hypothetical, let's say I'm playing a game of three-card monte in ML1. With the transmodal interaction model, the app can tell from my eye, head, and hand positions that I'm choosing (or even canceling my selection of) the middle card.
The app can also track my eyes and head to verify that I'm actually following the cards rather than just making a lucky guess.
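Conceptually, a transmodal selection step like the three-card monte example could fuse gaze, head, and hand directions into one confidence score per candidate target. The sketch below is purely illustrative: the function names, weights, and coordinate conventions are our own assumptions, not Magic Leap's API.

```python
import math

def normalize(v):
    """Return the unit-length version of a 3D vector."""
    mag = math.sqrt(sum(c * c for c in v))
    return tuple(c / mag for c in v)

def alignment(direction, origin, target):
    """Cosine similarity between a pointing direction and the ray
    from origin to target (1.0 = pointing straight at it)."""
    d = normalize(direction)
    to_target = normalize(tuple(t - o for t, o in zip(target, origin)))
    return sum(a * b for a, b in zip(d, to_target))

def pick_target(eye, head, hand, origin, candidates, weights=(0.5, 0.2, 0.3)):
    """Score each candidate by a weighted blend of eye, head, and
    hand alignment and return the best-scoring one."""
    def score(pos):
        return (weights[0] * alignment(eye, origin, pos)
                + weights[1] * alignment(head, origin, pos)
                + weights[2] * alignment(hand, origin, pos))
    return max(candidates, key=score)

# Three cards on a table: left, middle, and right of the user.
cards = [(-0.3, 0.0, 1.0), (0.0, 0.0, 1.0), (0.3, 0.0, 1.0)]
# Eye and hand point straight ahead; the head drifts slightly left.
chosen = pick_target(eye=(0.0, 0.0, 1.0), head=(-0.05, 0.0, 1.0),
                     hand=(0.0, 0.0, 1.0), origin=(0.0, 0.0, 0.0),
                     candidates=cards)
print(chosen)  # → (0.0, 0.0, 1.0), the middle card
```

Weighting the eye direction most heavily reflects the intuition that gaze is usually the strongest signal of intent, with the hand confirming the selection; a real system would of course tune these signals per interaction.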
Magic Kit Is Magic Leap's AR Toolkit
Earlier this year, we reported that Magic Leap had trademarked the term "Magic Kit." Now we know more about what that name means.
Magic Kit is a toolkit, not unlike ARKit, ARCore, or Microsoft's Mixed Reality Toolkit, that allows developers to take advantage of the ML1's capabilities, such as interacting with the mapped environment and the full range of user input methods.
Environment Toolkit, a plugin that comes with Magic Kit, helps developers account for obstacles and objects in a room and define how content interacts with and navigates the space. The Environment Toolkit also gives developers tools to identify seats, hiding places, and room corners so that content behaves contextually and enhances the overall sense of immersion.
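To illustrate the kind of contextual reasoning described, here is a minimal, hypothetical sketch, not Magic Leap's API, that labels flat surfaces by their orientation and height so content could, say, sit on a seat or lurk behind a corner. The thresholds are illustrative guesses.

```python
def classify_surface(normal, height_m):
    """Roughly label a flat surface from its unit normal and its
    height above the floor. Thresholds are illustrative only."""
    nx, ny, nz = normal
    if abs(ny) > 0.9:            # facing up/down -> horizontal surface
        if height_m < 0.1:
            return "floor"
        if height_m < 0.6:
            return "seat"        # chair or sofa height
        if height_m < 1.2:
            return "table"
        return "shelf"
    return "wall"                # mostly vertical surface

surfaces = [((0.0, 1.0, 0.0), 0.45),   # horizontal, 45 cm up
            ((0.0, 1.0, 0.0), 0.75),   # horizontal, 75 cm up
            ((1.0, 0.0, 0.0), 1.00)]   # vertical
labels = [classify_surface(n, h) for n, h in surfaces]
print(labels)  # → ['seat', 'table', 'wall']
```

A placement routine could then query for the nearest "seat" surface before spawning a character that sits down, which is the sort of contextual behavior the toolkit is said to enable.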
The company's Interaction Lab will distribute a package of Magic Kit sample code and resources to help developers take advantage of the ML1's features.
Spatial Mapping Takes a Different Path than HoloLens
Magic Leap introduces a new concept for environment mapping with a mesh type called BlockMesh, which is available in the Lumin SDK for Unity via the MLSpatialMapper prefab.
"BlockMesh spatially divides the real world into a set of cubic blocks whose axes are aligned with the coordinate system origin of the current head-tracking map," explained Laidacker on stage, referring to a graphic during the presentation.
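The partition Laidacker describes can be pictured as a simple mapping from world position to a cubic block index. The following sketch is only an illustration of the idea, not the Lumin SDK's actual implementation; the block size is an arbitrary assumption.

```python
import math

BLOCK_SIZE = 1.0  # edge length of one cubic block, in meters (illustrative)

def block_index(x, y, z, size=BLOCK_SIZE):
    """Map a world-space point to the integer (i, j, k) index of the
    axis-aligned cubic block that contains it."""
    return (math.floor(x / size),
            math.floor(y / size),
            math.floor(z / size))

# Points in the same block share an index, so a change at a point only
# invalidates that one block's mesh, not the whole environment.
print(block_index(0.2, 0.5, 0.9))   # → (0, 0, 0)
print(block_index(1.4, -0.3, 2.7))  # → (1, -1, 2)
```

Because each block's triangle mesh is independent of its neighbors, regenerating the mesh after an environment change only touches the blocks whose indices are affected.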
"The mesh blocks are generated from the internal reconstruction model. Although each block is internally an interconnected triangle mesh, the meshes of neighboring blocks are not connected to one another, which allows areas of the mesh to be updated quickly and easily when changes occur in the environment."
Gesture Recognition Is Also Different from the HoloLens Approach
During the gestures section, Laidacker offered insight into how ML1 recognizes the eight available gestures for user input: when developers enable gestures, ML1 takes the head position into account and switches the depth sensor to scan the near field rather than the far field, where ML1 otherwise scans the environment.
In contrast, Microsoft built custom silicon into HoloLens, the Holographic Processing Unit (HPU), which automatically detects hands, switches to the near zone when a hand is detected, and returns to the far zone when the hand disappears.
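As a rough mental model of the difference, the HoloLens behavior described above amounts to a tiny state machine driven by hand detection, whereas ML1's switch happens when the developer enables gestures. This sketch is a hypothetical illustration of the HoloLens-style switching, not either device's actual firmware.

```python
class DepthSensing:
    """Toggle a depth sensor between far-field environment scanning
    and near-field hand tracking (illustrative model only)."""

    def __init__(self):
        self.mode = "far"  # default: scan the environment

    def on_hand_detected(self, hand_present):
        # Switch to the near field while a hand is visible,
        # then fall back to the far field when it disappears.
        self.mode = "near" if hand_present else "far"
        return self.mode

sensor = DepthSensing()
print(sensor.on_hand_detected(True))   # → near
print(sensor.on_hand_detected(False))  # → far
```

The practical consequence for ML1 developers is the opposite direction of control: gesture recognition is opt-in per experience rather than triggered automatically by the hardware.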
The Controller Is Here to Stay... For Now
During the Q&A, when asked whether the controller is here to stay, Schwab confirmed that it is, at least for the near future.
Although he believes ML1 has a strong set of gestures, and the team is working to improve them and add more to the menu of options, a controller is a familiar input paradigm that users find comfortable, and a better input method in some cases, such as input requiring high precision or controlling content outside the field of view. The controller also offers a better feedback mechanism via its haptic motor.
"One of the reasons the controller is here to stay is that it gives you a bit of haptic feedback right into those nerves," said Schwab. "It's also a higher-fidelity track for the moment."
Anyone who has used HoloLens gestures to precisely place, scale, or align objects in 3D space can appreciate an input method with higher accuracy and lower latency, even if it takes a little of the magic away from gesture control.
Laidacker added that the mix of controller input and hand gestures (or the omission of one input or the other) is left to the developer's discretion. The duo believes many developers will prefer hand gestures as a more natural way of interacting, especially for experiences aimed at non-gamers, a segment of the consumer market largely unfamiliar with handheld, gaming-style controller dynamics.
There was a lot to digest in the presentation, but those were the biggest revelations. If you want more, we've embedded the video of the entire session (starting at the three-hour, 45-minute mark) below for your own edification.