Magic Leap One: First Hands-On Impressions from a HoloLens Developer



In a surprising turn of expectation management, Magic Leap not only managed to deliver the Magic Leap One I ordered on Wednesday, but to deliver it the same day, by 4 p.m. PT.

After spending about an hour running through the setup, poking around the interface, and trying some of the launch apps, I thought it would be helpful to share a quick list of first impressions, as someone who has spent a lot of time with a HoloLens over the last couple of years, and to answer some of the burning questions I had about the device.

Meshing

The Magic Leap One takes a different approach to meshing (aka spatial mapping) than the HoloLens. It works in chunks of cubic regions, which overlap slightly to fill in micro-gaps. For many scenarios, the results are much cleaner than the HoloLens's triangle mesh, which builds a single giant rippled wireframe from every point it scans.
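
To make the difference concrete, here is a minimal sketch of the block-based idea as I understand it (my own illustration, not Magic Leap's actual API; the block size is an assumed value): scanned points are bucketed into fixed-size cubic blocks that can be meshed and updated independently, so a new scan only touches the blocks it intersects.

```python
import numpy as np

BLOCK_SIZE = 0.5  # edge length of each cubic region in meters (assumed value)

def block_key(point, size=BLOCK_SIZE):
    """Map a 3D point to the integer index of the cubic block containing it."""
    return tuple(np.floor(np.asarray(point, dtype=float) / size).astype(int))

def bucket_points(points, size=BLOCK_SIZE):
    """Group scanned points by block; each block can then be meshed on its own."""
    blocks = {}
    for p in points:
        blocks.setdefault(block_key(p, size), []).append(p)
    return blocks

# Two points land in different blocks, so a later rescan of one point
# only forces a re-mesh of the single block it falls in.
scan = [(0.1, 0.2, 0.3), (1.7, 0.1, 0.4)]
print(sorted(bucket_points(scan).keys()))  # [(0, 0, 0), (3, 0, 0)]
```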

The Magic Leap One's mesh is very accurate when it comes to anything sharp. It clings to straight edges and corners in a grid of small squares and reliably detects large flat surfaces, walls, and even corners. Sharp, protruding 90-degree corners occasionally get a 45-degree beveled edge, but they still reflect the geometry of sharp straight edges, and especially flat surfaces, more accurately than the HoloLens with its more jagged, lumpy triangle mesh. In areas where it can't get a read (black non-reflective surfaces, mirrors, windows, etc.), the ML1 fills in the holes by approximately extending the walls and floors it already knows about. And oh yes ... it's super fast. Eyeballing it, I'd say it's 3x to 5x faster than the HoloLens at filling gaps in the mesh while scanning a room.

When it comes to odd shapes like lamps and computer monitors, you get the same kind of close-but-not-quite approximations as on the HoloLens. The Magic Leap mesh tries to fit whatever it sees into each tiny block of its grid, which results in fewer sharp, pointy protrusions than the HoloLens, but sometimes at the expense of capturing tight shapes.

While both the HoloLens and the Magic Leap have trouble mapping black surfaces, the Magic Leap fared a bit worse in my office. My black office chair and mini-fridge were ignored entirely, while the HoloLens at least tried. Although its mesh of the chair looks more like a stubby mushroom than a chair, it managed to capture my mini-fridge after viewing it from several angles. It seems the HoloLens cameras have a slight edge at picking up surfaces that reflect very little light.

TL;DR: Better with flat surfaces and edges than the HoloLens, but worse with black, non-reflective furniture, and less forgiving in direct sunlight or outdoors.

World Position Lock

The HoloLens is known for how well digital objects "stick" to the real world. It accomplishes this by tracking your position at high frequency and scaling its 60 fps rendered output up to 240 fps on the display (one color field at a time), adjusting each rendered frame four times for small head movements over that frame's lifetime. You can shake your head fast, jump up and down, tilt your head to any angle, whatever ... the windows and items you place in your room will almost always stick. It is brilliant.
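
In code, that late-stage correction amounts to re-warping the last rendered frame with the freshest head pose just before each color field is lit. Here is a minimal single-axis sketch of the idea (my reading of the technique described above, not Microsoft's implementation):

```python
import numpy as np

def yaw(theta):
    """Rotation about the vertical (y) axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def reprojection(render_yaw, latest_yaw):
    """Corrective rotation mapping the rendered view onto the latest head pose."""
    return yaw(latest_yaw) @ yaw(render_yaw).T

# One 60 fps frame is rendered with the head at yaw 0.0 rad; its four
# color fields go out at 240 Hz while the head keeps drifting:
frame_yaw = 0.0
for field, head_yaw in enumerate((0.001, 0.002, 0.003, 0.004)):
    R = reprojection(frame_yaw, head_yaw)
    # In a real pipeline, R would drive a re-sampling (homography) of the
    # already-rendered image just before this color field is lit.
    print(f"field {field}: corrective yaw ~ {np.arctan2(R[0, 2], R[0, 0]):.4f} rad")
```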

If you've watched the Magic Leap footage floating around online, you'll have noticed that the Magic Leap One has a bit of drift. I can confirm that the subtle divergence you see in those clips is a pretty accurate representation of what you'll see on the device. It's subtle, but it's there. It's enough to notice, if you look for it, almost every time you move. But once you get absorbed in an app, you'll find you stop thinking about it, and for the most part it's stable enough not to be annoying.

It is far less shaky than the Meta 2 and roughly comparable to ARKit and ARCore. Clever apps can hide it by giving characters subtle idle animations, so they float and sway rather than stand perfectly still. You won't really notice it on free-floating objects like jellyfish, UFOs, or goldfish. But you will notice it a little when you move around things that are supposed to be attached to a surface. If you really push it by, say, shaking your head quickly from side to side or jumping up and down, the drift is obvious. I don't know if it's a frame-rate limitation or something that can be improved over time with software updates, but I hope it's the latter.

TL;DR: Passable. On par with ARKit and ARCore, but not as solid as the HoloLens. I hope this can be improved in a software update.

Setup

When you power the unit on for the first time, you won't see anything through the lenses until the boot process completes. The built-in eye trackers then automatically measure your interpupillary distance (IPD) by having you focus on a series of points around your field of view (even at different depths).
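
As a toy illustration of what an automatic IPD measurement could reduce to (my own sketch, not Magic Leap's calibration algorithm; the sample values are invented): while you fixate each calibration point, the eye trackers report pupil-center positions, and the IPD falls out as the mean left-right distance.

```python
import numpy as np

# Hypothetical pupil-center samples in meters (headset frame), one
# left/right pair per fixation target.
left_eye = np.array([[-0.0325, 0.000, 0.0],
                     [-0.0322, 0.001, 0.0]])
right_eye = np.array([[0.0321, 0.000, 0.0],
                      [0.0324, 0.001, 0.0]])

# IPD estimate: average distance between matched pupil centers.
ipd = np.linalg.norm(right_eye - left_eye, axis=1).mean()
print(f"estimated IPD: {ipd * 1000:.1f} mm")  # ~64.6 mm
```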

Optics

If you've been following the news, you've heard a number of people at Magic Leap say that a camera can't capture exactly what you see with your own eyes. This is true. I've tried to shoot some photos and videos through the lens with my camera, and the holograms always come out fuzzy and blown out with a glowing halo effect. Viewed with your own eyes, the resolution is high enough to be sharp without any screen-door effect, almost exactly like the HoloLens. You can't see individual pixels, though fine details in the distance have the subtle shimmer of anti-aliasing around their curves that you can see in virtually every 3D application on any platform. It also has much less of a neon rainbow effect across its waveguides than the HoloLens, even when the field of view (FoV) is filled with large, flat, white web pages (though, to be fair, I have an early Wave 1 HoloLens, so the rainbow effect in mine may be more noticeable than in others).

The FoV is noticeably larger than the HoloLens's. It's not the full peripheral view some had hoped for, but it's a welcome and noticeable step in the right direction. If you've seen the wildly inaccurate FoV comparison image, with overlaid pictures of a cat in a living room, that has been circulating since Magic Leap's FoV was revealed, you'll find the real thing exceeds those lowered expectations. If you want your peripheral vision completely covered, you'll probably have to wait a few years. It's a shame the hype was so overblown, since the FoV really is better than the HoloLens's; but because everyone's expectations were so high, many find it disappointing.

TL;DR: It matches or beats the HoloLens in every way.

Depth of Field

Although nothing jumps out at you right away, I wanted to test whether the headset actually renders at multiple depths. When the unit first starts, you see floating islands with a spaceman leaping between them and hot-air balloons in the distance. I walked up to one of the islands, closed one eye, and focused on the tree, with a balloon in the distance behind it. The balloon appeared slightly blurry and didn't feel like it blended with the foreground that occluded it.

Then I focused on the balloon and got the feeling I was looking past the tree. I'll have to test this with a telephoto camera at some point to be sure it isn't just my mind processing them as two separate distances. But it felt like they were not all rendered at the same depth.

TL;DR: Needs more testing.

Eye Tracking

I'm excited to dive into eye tracking and multimodal inputs. The only thing I've noticed so far is the automatic IPD setting, but I've only spent about an hour with the device. I did notice that, in place of the gaze-centered cursor dot the HoloLens uses for pointing, Lumin OS uses touchpad-like, controller-based input, and interacting with web browser windows is far more intuitive and much faster than on the HoloLens, where the air-tap push-and-pull gestures tend to have just enough latency to feel sluggish. It still tracks your gaze, though: look at a browser window and your cursor appears there, ready to move. Turn your head to look at another window and your cursor is there too. It just works, without you thinking about it.
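
The window-hopping cursor behavior is easy to picture in code. Here is a minimal sketch of the idea (my own illustration, not Lumin OS internals): the cursor attaches to whichever window the head/gaze ray currently hits, and the controller's touchpad then moves it within that window.

```python
import numpy as np

class Window:
    """A flat app window floating at a fixed depth, facing the user."""
    def __init__(self, name, center, width, height):
        self.name = name
        self.center = np.asarray(center, dtype=float)  # meters, headset frame
        self.width, self.height = width, height

    def hit(self, origin, direction):
        """True if the gaze ray crosses this window's plane inside its bounds."""
        origin = np.asarray(origin, dtype=float)
        direction = np.asarray(direction, dtype=float)
        if direction[2] <= 0:  # ray must travel toward the window plane
            return False
        t = (self.center[2] - origin[2]) / direction[2]
        if t < 0:
            return False
        p = origin + t * direction
        return (abs(p[0] - self.center[0]) <= self.width / 2 and
                abs(p[1] - self.center[1]) <= self.height / 2)

def focused_window(windows, origin, direction):
    """The window the user is looking at (if any); the cursor lives there."""
    return next((w for w in windows if w.hit(origin, direction)), None)

browser = Window("browser", center=(0.0, 0.0, 2.0), width=1.2, height=0.8)
gallery = Window("gallery", center=(1.5, 0.0, 2.0), width=1.0, height=0.8)

# Looking straight ahead parks the cursor in the browser window:
w = focused_window([browser, gallery], origin=(0, 0, 0), direction=(0, 0, 1))
print(w.name)  # browser
```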

I also tried moving just my eyes from one window to the other, but I couldn't get the cursor to change windows without moving my head at all. I'm not sure whether this is part of the multimodal input approach, or whether they simply don't use eye tracking for window focus in Lumin OS, but I'll dig deeper later.

TL;DR: An exciting new feature among mixed-reality headsets.

The Controller

The controller is very responsive, and the digital selection beams that extend from its far end stretch and bend effortlessly, feeling firmly tethered to it. In Lumin OS, app selection follows your gaze, and the controller then changes the selection within the app. I got a little annoyed by some confusing selection behavior when I opened the main menu in front of another app, but otherwise you really don't have to think about it.

You need to keep the controller out a little in front of you if you don't want tracking to hiccup while it switches hemispheres. I'd like the hemispheres to be tilted downward to avoid that, but it's not that big a deal. I found it less tiring and more precise than air tapping.

TL;DR: The extra precision is a godsend. The latency when moving between the hemispheres is a bit annoying.

Gesture Control

The Lumin OS shell doesn't seem to be usable without the controller: there are no gaze-and-air-tap features enabled, at least none that I could find. I found this surprising, since gesture support and hand-point tracking are detailed in the Magic Leap documentation, which describes a much more flexible range of controller-free input options than the HoloLens's "ready," "air tap," and "air tap and drag." Support for those gestures appears to be app-specific, though, and at least on day one the controller seems to be required for interacting with Lumin OS and its prisms. I may be wrong given my limited use so far, but I'd like to see a standard minimum set of gestures honored across all apps and environments, so a controller-free experience is at least an option. That's especially critical in settings where controllers aren't really practical, such as operating rooms or factory floors. Apps can be built to support rich gesture interaction, but if you can't launch those apps without the controller, you still need the controller. In fact, after the first boot, you're prompted to pull the controller's trigger before the Lumin OS shell even starts.
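
To make the "standard minimum set" idea concrete, here is a hypothetical sketch of shell-level gesture bindings that would work in every app (the gesture names are borrowed from the HoloLens set mentioned above; nothing here comes from the actual Magic Leap SDK):

```python
from enum import Enum, auto

class Gesture(Enum):
    READY = auto()         # hand raised, pre-selection (as on HoloLens)
    AIR_TAP = auto()       # select
    TAP_AND_DRAG = auto()  # move / scroll

# Shell-level bindings honored everywhere, so launching an app never
# requires picking up the controller:
SHELL_BINDINGS = {
    Gesture.READY: "show cursor",
    Gesture.AIR_TAP: "activate focused item",
    Gesture.TAP_AND_DRAG: "move prism / scroll",
}

def handle(gesture: Gesture) -> str:
    """Resolve a recognized gesture to its shell action."""
    return SHELL_BINDINGS.get(gesture, "ignored")

print(handle(Gesture.AIR_TAP))  # activate focused item
```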

That's all I have for now. We'll have more updates for you in the coming days and weeks, so check back often or follow us on Twitter.

Don't Miss: Magic Leap Finally Makes the Leap from Fantasy to Reality, Pre-Orders at $2,295

Cover image by Bryan Crow/Next Reality

