While the next-generation HoloLens does not yet have a release date, we now have a better idea of how big the jump in depth sensor performance will be.
At the recent Computer Vision and Pattern Recognition Conference, held in Salt Lake City, Utah, in June, Microsoft researchers introduced the new HoloLens Research Mode, which gives developers access to device sensor data.
During the tutorial, the researchers showed the audience a preview of the depth sensor feed from the Kinect for Azure project, which Microsoft announced earlier this year as the sensor for the next version of HoloLens.
Video from this presentation has now been released. The footage shows the level of detail that the Kinect sensor can achieve when rendering a point cloud, with lanyards and wrinkles in clothing being visible in the data feed.
The sensor's higher frame rate and longer range are also apparent: it captures spectators up to eight rows back, while the point cloud (bottom right) shows details of chairs and people.
Compare this to the Research Mode footage from the current-generation HoloLens in the same presentation (at 18:00 in the presentation video, embedded at the bottom of this page), and the improvement is clear.
According to reports, HoloLens 2.0 is due sometime next year. Based on the capabilities shown in this early preview, it looks worth the wait.