It can be fairly challenging to put on a HoloLens properly for the first time. And a headset that is not properly adjusted can result in a blurry image or a truncated field of view.
This was often an issue when I was showing off our HoloLens creations. Sometimes the headset is just not put on correctly, the user loses half of the field of view, and the whole point of the demonstration is lost. Of course, as the demonstrator – the person not wearing the HoloLens – I didn’t see what the user saw or didn’t see, so we ended up with an awkward conversation.
“Can you see the little blue dot in the middle?” or “Is the edge of the holograms’ display area sharp or blurry?” or “Are you sure you’re not seeing about *this* much of the hologram?”
To help with this issue, the HoloLens calibration tool has a first step that asks you to adjust the device on your head until you see all edges. But that doesn’t help us when we have to demonstrate our own app, does it?
So, after doing hundreds of in-person HoloLens demos, I decided it’d be nice to replicate this functionality for our own apps. And thus, the HeadsetAdjustment scene was born. It is currently a PR to the HoloToolkit, but hopefully it’ll be merged soon, making it my second HoloToolkit contribution.
The user will see a similar invitation to adjust the headset so that all edges are visible. They can then proceed to the actual experience by air tapping or saying “I’m ready”. Simple!
The Developer’s Side
First, a huge thanks to @thebanjomatic for his tips on finetuning the scene!
The headset adjustment feature is implemented as a separate scene, which can be found at HoloToolkit/Utilities/Scenes/HeadsetAdjustment.unity. The simplest usage scenario is when you don’t want to modify anything and just use it as is. For this, all you have to do is add the HeadsetAdjustment scene as your first scene, and your “real app” as the second. The HeadsetAdjustment scene will automatically proceed to the next scene when the user air taps or says the “I’m ready” phrase.
Of course, you can customize the experience to your liking. To change the text that’s displayed, you can edit the UITextPrefab properties here:
By default, the next scene is selected automatically based on the scenes included in the “Scenes in Build” window of Unity. In the above example, the HeadsetAdjustment scene is #0 (meaning it is loaded and started first), and the actual app loaded after the adjustment is the GazeRulerTest – the #1 scene.
However, you may want to override this. The HeadsetAdjustment script allows you to specify the next scene by name in the NextSceneName property. If you enter anything here, it’ll override the default behavior of finding the next scene by number, and it’ll load the scene with the name provided in this field.
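The selection rule above can be sketched as follows. This is a minimal Python sketch, not the actual HoloToolkit C# implementation; the function and the “MainScene” name are purely illustrative:

```python
def pick_next_scene(build_scenes, current_index, next_scene_name=""):
    """Choose the scene to load after headset adjustment.

    build_scenes: scene names, in "Scenes in Build" order.
    next_scene_name: optional override; if non-empty, it wins.
    """
    if next_scene_name:
        return next_scene_name              # explicit override by name
    return build_scenes[current_index + 1]  # default: next scene by number

# With HeadsetAdjustment as scene #0, as in the example above:
scenes = ["HeadsetAdjustment", "GazeRulerTest"]
print(pick_next_scene(scenes, 0))               # GazeRulerTest
print(pick_next_scene(scenes, 0, "MainScene"))  # MainScene
```

In other words, leaving NextSceneName empty gives you the build-order behavior, and filling it in short-circuits the lookup.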
You can also customize the voice command the user can use to proceed in the Speech Input Source.
Now you have a way to ensure that the person you’re demoing your latest creation to has the best possible experience. Enjoy!
Most people will have a huge smile on their face when you show the HoloLens to them. With way over a hundred demos behind me, only 3 or 4 didn’t agree right away that mixed reality is the future.
So, demonstrating HoloLens is a very gratifying job. But it can be pretty frustrating, too, because:
You can’t see what the user sees, and can’t offer help or explanations;
If there are more people in the room, they’ll be bored as they have no idea what the current lucky person is experiencing.
Luckily, Microsoft has added a way to wirelessly project the so-called Mixed Reality view from the HoloLens. This overlays the GPU-generated holograms that the user sees on the video coming from the RGB camera, and streams the result to a computer in real time.
The problem is that to get this streaming right, you need a lot of things working together. Most of all, you need a fast and reliable Wi-Fi connection. But even then, there is usually a delay measurable in seconds, often as much as 5-6 seconds. This makes it extremely difficult to explain what you’re seeing on stage (because you have to explain what you saw 5 seconds ago, which is what the audience sees right now). And when you’re trying to help somebody, it can be downright frustrating, because you say stuff like “yes, there it is… oh wait, there it was 6 seconds ago, move back… not there… the other one… now air-tap… let’s wait a little until I can see that you successfully air-tapped…”. In a business scenario this even comes across as unprofessional, and can make the HoloLens look like it’s not ready yet.
To illustrate the delay, I put on the HoloLens, and launched the Mixed Reality Capture Live Preview in the HoloLens Device Portal. Then I opened up and closed the Start Menu. Conditions were fairly good, so I “only” experienced a 4 second delay:
After months of experimenting, and dozens of demonstrations, we at 360world found a way to reduce the latency significantly. This has worked for us a dozen times already, under varying circumstances, including our own office, a client’s office and even in very busy conference locations. And the even better news is that it is easy to implement.
Step 1 – Use Windows 10 Anniversary Edition
You should use Windows 10 Anniversary Edition on the computer you want to stream the Mixed Reality Capture to (you can use this computer’s screen or a projector to share the results with a larger audience). The reason is that Step 3 relies on a new Anniversary Edition feature.
Step 2 – Enable Mobile Hotspot on the Computer
This is a new feature in the Windows 10 Anniversary Edition. You can access it from Settings / Network & Internet, and Mobile Hotspot:
As you can see, the computer has to be connected to the Internet (and has to have a Wi-Fi adapter) for the Mobile hotspot to be enabled. If you don’t see the above warning, all is OK – turn on the “Share my Internet connection with other devices” checkbox.
Step 3 – Connect the HoloLens to the Mobile Hotspot on your Computer
And here comes the trick: once the hotspot is set up, you need to connect your HoloLens directly to the hotspot:
Now air-tap on the Advanced options link, and make a note of the IP address of the HoloLens. You will need it soon.
Step 4 – the HoloLens Windows App
For the best streaming results, forget the device portal. What you need is the Microsoft HoloLens app from the Windows Store. This app has most (but not all) of the features of the Device Portal, and seems to perform much better when it comes to live streaming the Mixed Reality Capture.
Once you have the app installed, click the + button, and add your HoloLens to the form. This is where you need the IP address of the HoloLens you – hopefully – noted earlier:
You should now see the HoloLens you just added, and hopefully it is online. If not, make sure that the HoloLens is still turned on. You may need to wait a few seconds before the HoloLens app can detect the device.
Click on the connected HoloLens, and you are ready for the final step.
Step 5 – Enjoy!
Click on the first icon, called “Live Stream”, and the live stream should start. To further reduce latency (and avoid audio issues), you may want to turn off the audio in the … menu.
Here is the result:
As you can see, the latency is well below 1 second – in fact, about 0.5 seconds! This is more than satisfactory for any kind of live demo, or for helping a first-time HoloLens user.
In fact, depending on how noisy the airwaves around you are, you can even switch it to High Quality mode from the default “Balanced” setting.
This solution is still not foolproof. The connection is wireless, and packet loss can add more and more delay to the stream. If this happens, just navigate away from the live stream and quickly come back, and you’re as good as new.
To sum up:
Use Windows 10 Anniversary Edition
Turn on Mobile Hotspot
Connect your HoloLens to the Mobile Hotspot on your computer and make sure you know the IP address
Install the Microsoft HoloLens app on your computer, and connect it to your HoloLens
A lot has happened this week in the Augmented Reality (AR) / Mixed Reality (MR) space. On February 29, Microsoft opened up HoloLens Developer Edition preorders for a lucky few, and more importantly, published a ton of videos, white papers and developer documentation. This gave us an unprecedented amount of information to parse, and let us learn a ton about the capabilities and limits of the device.
Meta – the other very interesting player in this space – opened up a few days later, on March 2. They also started preorders for their developer kit (devkit), the journalist embargo was lifted, and for the first time, we got to see the Meta 2 glasses in action – at least on video.
In this post, I’ll try to piece together all the information I came across during these few frantic days of research. I’ll show what’s common and what’s different in Meta’s and HoloLens’ approach, devices and specifications, and provide an educated comparison based on the data available.
And this is the key. While I had about 15 minutes of hands-on time with HoloLens back in November, the device and its software have probably changed since then. As for Meta, all I have to go on is the data available from Meta itself, the reports of journalists, and examining videos frame by frame to make educated guesses. I have never seen a Meta 2 headset in person, much less had actual time using it. While I’m pretty sure that what I’ll write about is fairly accurate, there are bound to be some inaccuracies or even misinformation here. If you find some, or do not agree with my conclusions, please feel free to comment, and I’ll try to keep this post up-to-date as long as it is practical to do so. This post will be a work in progress for a while, as more information becomes available and people point out my mistakes – or perhaps Meta hits me with a headset to play with (hint, hint).
With that out of the way, let’s get started and see how Meta 2 and HoloLens compare!
To Tether or not to Tether
The Meta headset is tethered. The HoloLens is not. This may seem trivial, but in my opinion, this is the most important contrast between the two devices – and a lot of the other differences come down to it. So, let’s see what this means.
The HoloLens is a standalone computer – a fact that Microsoft is very proud of. Just like a tablet or a phone, the only time it needs to be attached to a wire is when you’re charging it. During actual use, you are free to move around, jump up and down, leave your desk or walk long distances. This kind of freedom opens up several use cases – walk around a factory floor or a storage space while the device shows you directions and which crate to open; go to the kitchen while keeping a Skype video conversation going on the right and the recipe on the left; or bring the device up to the space station, and have an expert on Earth look over your shoulder and instruct you by drawing 3D pointers.
Meta’s tethered experience ties you to the desk (unless you strap a powerful laptop to your back, which has been done). You can stand up, of course, but you can only move about 9 feet, and you run the risk of unplugging the device or pulling your laptop off the table.
On the other hand, the tethered approach has great advantages. You are not limited to the computing power in your headset (which is about the same as a tablet or mobile phone). You can use an immensely powerful desktop computer with multiple high-end graphics cards and CPUs and an infinite power supply.
All of this power comes with great – well, not responsibility, but additional cost. We’ll talk about pricing later, but let’s just mention here that you’ll need a pretty powerful, gaming-grade PC with an i7 processor and a GTX 960 graphics card to get the most out of the Meta 2 headset.
It is worth mentioning that Meta is actively working to create a tetherless device down the road – but this post is about what has already been announced, and the Meta 2 is tethered now.
One would think that Meta would have advantages on the weight front, since you don’t have to wear an entire computer and batteries on your head.
HoloLens weighs 579 grams. Meta’s headset weighs in at 420 grams, but that’s without the head straps and cables. I’ve no idea why Meta left out the head straps from the calculation, since it is definitely something your neck will have to support – but in any case, I’d estimate that weight-wise, the two devices are pretty much at the same level.
What’s more important for long-term use is the actual way your head has to support that weight. I only have personal experience with HoloLens, but its weight distribution and strapping mechanism make you forget all about the weight in just a few minutes. Both allow for glasses to be worn underneath – something that is very important to me personally, and I suppose to a lot of other potential users. Both have a ratchet system to tighten the straps around your head, although Meta’s ratchet seems to be very loud, based on one of the videos. Meta also uses Velcro to adjust the top strap – I imagine that people with more hair than me may find this an issue.
All in all, I can’t decide whether the Meta or the HoloLens is more comfortable to wear in the long run. My guess is that there aren’t going to be extreme differences in this regard – not counting Meta’s tethered nature, which is bound to cause some inconvenient moments until one gets used to literally being tied to the desk. There are also some potential eye fatigue issues that I’ll touch on later.
As mentioned before, Meta 2 requires a hefty PC – and it needs to run Windows 8.1 or newer. Meta behaves like a second screen connected to that PC through an HDMI 1.4 cable, so anything Windows displays on that screen will be shown to the user. It is up to the developer to fill that screen with a stereoscopic image that actually makes visual sense. The best way to do this is by using Unity – a game development tool that is quickly becoming the de facto standard for creating virtual reality and augmented reality experiences. It’s been shown that you can also place Microsoft Office, Adobe Creative Suite or Spotify around you on virtual screens and interact with them, removing the need for extra monitors. How well this works in practice remains to be seen, but one Meta engineer has already discarded three of his four monitors in favor of holographic ones.
There’s not much more to go on when it comes to the development experience of Meta. They have time though – their devkit will not be shipping until 2016 Q3.
Microsoft’s HoloLens is a standalone computer, running Windows 10. The same Windows 10 that’s available on desktop, tablets, phones and even Xbox. Of course, the shell (the actual end user experience) is customized for every device. For example, this is the Start menu of HoloLens:
Running a full-blown Windows 10 on HoloLens has some distinct advantages. HoloLens can run any UWP (Universal Windows Platform) app from the same Windows Store that the phones, tablets and PCs use. This means that you can simply pin the standard 2D weather app right next to your window, and you can get weather information by just looking at it. Or pin a browser with the recipe to the wall above your stove. When it comes to running 2D applications with HoloLens, it is less about creating floating screens and windows around you (although you can do that too), and more about pinning the apps on walls, top of tables and other real world objects.
As for development, Microsoft has just published an insane amount of developer documentation and videos, which I am still in the process of reading through. As you’d expect from a software company, the documentation is very detailed and long. But what’s more important, the platform seems to be pretty mature, too. For example, I was just informed by my friend and fellow MVP, James Ashley, that Microsoft has built an entire suite of APIs that facilitate automated testing of holographic applications.
For more involved development, the #1 recommended tool is also Unity. This is great news, since this will make a lot of the experiences created for one device easily transferable to another one. At least from a technical perspective, because – as I’ll detail more later – adapting the user experience to the widely different approaches of these headsets is going to be a much larger challenge. But a developer can also choose to create experiences using C++ and DirectX – technologies that even AAA games use. Not that you’ll be able to run the latest, graphically demanding games on a HoloLens hardware – it has a much weaker CPU and GPU, and performance is further limited by the fact that the HoloLens has no active cooling (fans), and will shut down any app that dangerously increases the device’s temperature.
If you do want to run AAA games on HoloLens though, you can take advantage of the game streaming feature of Xbox One. You can just pin a virtual TV on your wall, and stream the Xbox game to your headset. I expect to see similar techniques to stream desktop applications from your computer in the future.
Resolution, Field of View
Field of View is the area in front of you that contains holograms. With Mixed Reality devices, the FoV is very important – you want the holograms to cover as much of your vision as possible in order for them to feel more real. After all, if images just appear as you move your head, it breaks the illusion, and can make you feel a bit confused.
Ever since its introduction, HoloLens’ field of view has been under criticism. Some compared it to looking through a mail slot. Based on data in the just-released developer documentation, I finally have a way to calculate the FoV of HoloLens.
According to the documentation, HoloLens has more than 2500 light points per radian. Assuming that “light points” is basically a fancy term for pixels, this means that HoloLens can display approximately 43.6 points per degree. This is a measurement similar to DPI (dots per inch) for 2D displays, such as phones, although I don’t know how to scientifically convert between the two.
Another part of the HoloLens documentation states that it has a 1268×720 resolution (per eye). So, if we have 43.6 points per degree and a 1268×720 resolution, we have a field of view of 29.1×16.5 degrees, which ends up being about 33.4 degrees of diagonal field of view. If my calculations are correct, that is. They may very well not be, since Microsoft has given us another number: 2.3 million light points total. 2×1268×720 (calculating with 2 eyes) is actually less than that – it is 1.826 million. So, there is a chance that my calculations are off by 20-30%. (Thank you, James, for bringing this to my attention.)
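For the curious, here is the arithmetic above as a small Python snippet. The 2500 points per radian and the 1268×720 per-eye resolution come from Microsoft’s documentation; everything else is derived:

```python
import math

points_per_radian = 2500                       # from the HoloLens documentation
ppd = points_per_radian / (180 / math.pi)      # points per degree, ~43.6

width_px, height_px = 1268, 720                # per-eye resolution, per the docs
fov_h = width_px / ppd                         # horizontal FoV, ~29.1 degrees
fov_v = height_px / ppd                        # vertical FoV, ~16.5 degrees
fov_diag = math.hypot(fov_h, fov_v)            # diagonal FoV, ~33.4 degrees

light_points_total = 2 * width_px * height_px  # both eyes: 1,825,920
print(round(ppd, 1), round(fov_diag, 1), light_points_total)
```

As the last line shows, 2×1268×720 falls well short of the quoted 2.3 million light points, which is exactly the 20-30% discrepancy mentioned above.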
Let’s see the Meta 2! Meta is not shy about its field of view; in fact, this is one of its biggest selling points. Meta claims 90 degrees of diagonal FoV, which is not only almost 3 times as large as the HoloLens’, it is pretty much the same as the Samsung Gear VR headset! 90 degrees is huge compared to pretty much every other AR device – most manufacturers struggle to even reach 40-50 degrees.
For a larger field of view, you need more pixels to keep images and text sharp. Meta has 2560×1440 pixels on the display that gets reflected into your eyes. And that is for both eyes combined, so one eye gets 1280×1440, which is “only” twice as much as the HoloLens display. With a much bigger field of view, though, we end up with about 21 pixels per degree, approximately half of HoloLens’ 43. This means that while the experience will be much more immersive, individual pixels will be twice as large. Whether that is enough remains to be seen – I haven’t read any complaints about pixelation, though. One thing is for sure: you’ll definitely want to move close to your virtual screens so that they fill your vision when reading normal-sized text. The larger pixel count also means more work for the GPU – another point where the tethered nature of Meta is an advantage, and one likely reason why HoloLens has a limited FoV.
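The same back-of-the-envelope math works for the Meta 2, using the two manufacturer-provided figures (2560×1440 split between the eyes, 90-degree diagonal FoV):

```python
import math

width_px, height_px = 2560 // 2, 1440      # per eye: 1280 x 1440
diag_px = math.hypot(width_px, height_px)  # diagonal resolution, ~1927 px
ppd = diag_px / 90                         # spread over a 90-degree diagonal
print(round(ppd, 1))                       # ~21.4 pixels per degree
```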
Here is a handy table to sum all of this up – values I calculated or deduced are marked “(calc)”, manufacturer-provided numbers “(mfr)”:

                                 HoloLens (could be ~30% higher)   Meta 2
# of pixels per eye              1268×720 (mfr)                    1280×1440 (calc)
Diagonal field of view (deg)     33.4 (calc)                       90 (mfr)
Pixels per degree                43.6 (calc)                       21 (calc)
An important way of interacting with HoloLens is speech. HoloLens is a standalone Windows 10 computer, and thus the applications you create can support speech commands and even integrate with Cortana. Technically, there’s nothing stopping you from using speech commands on Meta either, but this hasn’t been shown in the videos I saw – and you’d need a decent microphone on your PC. HoloLens has an array of 4 microphones that go wherever you go to clearly pick up your speech and filter out ambient noise.
Let’s talk about manipulating holograms and activating buttons! This is probably the area where the two products differ the most. Both HoloLens and Meta are able to see the user’s hand and use it as gesture input, without needing any additional devices. (Although HoloLens comes with a Bluetooth clicker that has a single button you can press.) However, that’s where the similarities end.
Meta thinks that your hands are made to manipulate the environment, and thus they should be the tool for interacting with holograms, too. With Meta, you touch a virtual object to move or rotate it, push your finger forward to press a button, close your fist in a grabbing motion and move your hand to move things around in the virtual world. Meta wants to remove complexity from computing with this natural approach and direct interaction. Direct interaction (touch screens) is what made phones and tablets so popular and easy to understand, as opposed to the indirect model of a computer mouse.
This is a great concept on paper, but if the reactions of the journalists who actually had hands-on time with the device are anything to go by, it needs more refinement before it works the way Meta intended. Engadget says this “feature didn’t work great… the gesture experience needs to be refined before it launches”. TechCrunch calls the hand tracking control “a bit more brutish than I would hope”, and praises Leap Motion’s technology in comparison (Leap Motion specializes in 3D hand tracking). Still, the fact that Leap Motion is doing such a great job gives hope that Meta will nail it as well.
HoloLens takes an entirely different approach. Microsoft stuck to the long-standing tradition of a point-and-click interface. However, instead of moving a mouse around, you move your gaze – more precisely, your head. For selecting, you perform an air-tap gesture, which is analogous to a mouse click.
For moving and rotating things, you first select the operation you want to perform, then pinch in the air and move your hand. As I said in my previous post, this takes some time to get used to, but works fairly reliably once you’ve learned the ropes.
Meta’s approach is certainly more appealing and natural. However, even if Meta works out the kinks, you will have trouble interacting with virtual objects that are out of your arm’s reach. With HoloLens, you can put a hologram to the other side of the room and just gaze (point) and click (air tap) to perform an action.
So, in order to properly interact with your holograms, Meta needs them to be close to you, within an arm’s reach. With HoloLens, you can fill your room with digital goodies, and keep interacting with them.
If you look at something close, such as your nose, your eyes cross a bit. If you look at something far away, your eyes point almost parallel. Similarly, depending on whether you look near or far, muscles change the shape of your eyes’ lenses to make the light focus exactly on your retina.
Neither HoloLens nor Meta 2 takes these effects into account, at least not in a dynamic fashion. To lessen eye strain, the HoloLens documentation actually suggests that you place holograms approximately 2 meters from the user (between 2 and 5 meters), and cut the 3D image when the user gets closer than 0.5 meters. Technically you can display holograms outside of this range, but Microsoft warns that the discrepancy between the “crossedness” of your eyes and the lenses focused at 2 meters may cause stress and fatigue. My guess is that this is one of the reasons why Microsoft opted for the gaze-and-air-tap interaction model.
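To get a feel for why these distances matter, here is a quick calculation of the vergence angle (how much the eyes cross) at different hologram distances. The 63 mm interpupillary distance is my assumption of a typical value, not a number from either vendor:

```python
import math

IPD_M = 0.063  # assumed typical interpupillary distance, in meters

def vergence_deg(distance_m):
    """Angle between the two eyes' lines of sight, in degrees."""
    return math.degrees(2 * math.atan((IPD_M / 2) / distance_m))

for d in (0.5, 2.0, 5.0):
    print(f"{d} m -> {vergence_deg(d):.2f} degrees")
```

At 2 to 5 meters the eyes are nearly parallel (under 2 degrees of convergence), so a fixed-focus display causes little conflict; at 0.5 meters the eyes must cross over 7 degrees while the lenses stay focused for 2 meters, which is where the stress and fatigue come from.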
With Meta, virtual objects that you interact with should be kept inside the 0.5-meter threshold (arm’s length). There is even a demo where you lean inside a holographic shoe. I have no idea how Meta’s lenses are focused, or how much overlap the eyes have for eye crossing – but the demo certainly looks cool.
Understanding the Environment
Environment awareness for mixed reality means that the hardware and software understand the environment the user is in. The device knows that there is a table 2 meters in front of me, which is 1 meter high and has such and such dimensions. It understands where the walls are and how the furniture is laid out. It sees a person in front of it.
Environment awareness is important when it comes to placing objects (holograms) in the virtual world. If your virtual pet runs through the sofa or the walls as if it wasn’t there, it ruins the illusion. If you throw a holographic ball, you expect it to bounce off the floor, the walls and the furniture.
This is an area where I could barely find any information on the Meta 2 headset, apart from a few seconds of video showing a ball bouncing off a table.
The situation is different with the HoloLens. Environment awareness is key to the HoloLens experience. When your gaze cursor moves around the room, it travels the walls and the furniture, just as if you were projecting a small laser circle.
When you place a Skype “window” or a video player, it snaps to the walls (if you want it to). When you place a 3D hologram on a table, you don’t have to move it up and down so that it sits precisely on the table. Even games can take advantage of environment scanning, turning your living room into a level in a game – and every room will have different gameplay depending on the layout of the furniture, placement of the walls, and so on.
Environment understanding works by scanning the room and keeping this scan continuously updated. HoloLens can store the results of this scan, and even handle large spaces by only loading the area you are in as you walk down a long corridor. It can also adapt to changes in the environment, albeit there are indications that this adaptation may be slow. A developer can access this 3D model (mesh) of the scanned environment and react accordingly. When using the physics engine of a tool such as Unity, it takes just a few mouse clicks to make a hologram collide with and bounce off real-world objects.
One of the things that amazed me (and journalists) when I tried HoloLens was that if I placed a Hologram somewhere, it simply stayed there. No matter how much I moved around or jumped – the hologram stayed right where I put it.
This is an extremely difficult technical problem to get right. Our mind is trained to expect this behavior with real world objects, so any discrepancies will immediately be revealed and the magic will be broken. To keep the illusion, the device has to be extremely precise in following even the slightest movement of your head in any direction. Microsoft uses four “environment understanding” cameras, an Inertial Measurement Unit (IMU), and has even developed a custom chip – the Holographic Processing Unit – to help with this problem (and some others).
To appreciate the quality of tracking HoloLens provides, take a look at the video below. It is recorded on the HoloLens itself, by combining the front camera’s feed with the generated 3D “hologram” overlay. You won’t find a single glitch or jump here. Microsoft is even making an app called “Actiongram” available, which can record similar mixed reality videos – something that is pretty difficult and time-consuming to do with the standard tools of the movie industry.
On the other hand, based on the videos I saw, Meta’s tracking is not yet perfect (but it is close).
Road to VR, who – unlike me – had some actual time with the Meta 2 noticed this, too. They said “If you turn your head about the scene with any reasonable speed, you’ll see the AR world become completely de-synced from the real world as the tracking latency simply fails to keep up. Projected AR objects will fly off the table until you stop turning your head, at which point they’ll slide quickly back into position. The whole thing is jarring and means the brain has little time to build the AR object into its map of the real world, breaking immersion in a big way.”
Sound, especially spatial sound, is very important in both VR and MR experiences. Sound can be a subtle indicator that something is happening outside of your field of vision. Microsoft has invested a lot into providing the illusion of sound coming from any direction and distance, and it convinced the people who tried it. Meta also has a “four speaker near-ear audio” system, but it hasn’t been mentioned in the videos or reports I’ve seen. When I asked Meta on Twitter, they confirmed that it is there to “create an immersive 3D audio experience”.
In any case, adding spatial sound to an object is probably just as simple with Meta as it is with HoloLens. If you’re using Unity, all you have to do is attach a sound to an object (a simple drag-and-drop operation), and the system will take care of all the complicated calculations that make it sound like an alien robot has just broken through your apartment wall at 7 o’clock.
Collaboration between Multiple Users
Both Meta and HoloLens have shown examples of multiple users existing and cooperating within the same holographic space. Meta has even shown a hologram being passed from one user’s hand to another’s.
At TED, both companies have shown a kind of holographic “video” call, where the other participant could be seen as a 3D hologram. Microsoft has also demonstrated collaboration among builders, engineers, or even scientists studying the Mars surface. Some of these demos had both participants in the same physical space, others were working together remotely.
Microsoft is also creating a special version of Skype for HoloLens, which has been piloted on the International Space Station. The astronaut can call experts on the ground, who will see what he sees through the front camera on the HoloLens. Then, the expert can draw arrows pointing out points of interest, or even create small diagrams on the wall to help the HoloLens user solve an issue. The interesting thing here is that the expert doesn’t even need a HoloLens, only a special Skype app that allows him to draw directly in the 3D space of the astronaut.
Microsoft does note though that more than 5 HoloLens devices in the same room may cause interference. With devkits limited to 2 orders per developer, and priced at $3,000, this is not going to be a problem for a while.
Price and Availability
During the last few months, Microsoft has been collecting applications for the developer kit. Anticipating huge demand, developers had to (and still can) apply and convince Microsoft that they are worthy of the privilege of spending a sizable sum – $3,000 – on a developer kit that will probably be obsolete in a year or less. Still, there is huge interest, and Microsoft is shipping the devices in waves – I’ve even heard of a wave 5, which is pretty scary, since waves can take 1-2 months to ship completely. The HoloLens Developer Edition is all set to start shipping on March 31, but only to US and Canada developers.
Meta has also started taking preorders for their developer kit. Meta’s device only costs $949 – plus the expensive, $1000+ gaming computer you need to plug it into. But at least you can use that computer for other things, such as driving your Oculus Rift VR headset or gaming.
The downside is, Meta will not ship until Q3 2016. Being 6 months away from an actual shipping date has its risks: it means that the device or its software is not yet ready, and/or the manufacturing process and logistics still need work. Solving these issues can take longer than expected, which can lead to further delays – and while I’m hoping it won’t be the case, there is a chance that the Meta 2 devkit will only ship in Q4 or even next year. But once they do ship, I expect them to get a large number of devices into the hands of developers fast. Oculus has had 250,000 developers, so with Meta not being limited to North America and costing only one third of an arm and a leg, they have a chance of reaching similar numbers.
The reason I love this tech is that the use cases are pretty much infinite. And even if 50% of those turn out to have feasibility issues due to technology limitations, the rest is still huge. Every aspect of life, every profession can and will be touched by the grandchildren of the devices I talked about.
I’ve already mentioned a lot of use cases for both devices. But I think it is worth inspecting what the companies themselves emphasize.
Meta’s vision is clear. By removing abstractions such as files, windows, etc., Meta wants to simplify computing and get rid of the complexity that the last 30 years of computing have built up. They are doing this by making the hand and direct manipulation the primary method of interaction. They are also aiming to get rid of the monitors on the workspace – instead of using multiple monitors, you place virtual monitors or even just floating apps all around you, and if you want to access your emails, you just look at where you put the email app. Still, you will be tethered to your desktop for a while, which is something you should keep in mind when deciding whether a certain use case is a fit for the Meta 2.
Meta’s field of view is vastly better than what HoloLens has to offer, and by plugging it into a computer, it has access to a powerful workstation and graphics card, and you don’t have to worry about it running out of battery.
On the other hand, the superior tracking, the environment understanding feature, the ability to interact with holograms that are further from you, speech control, and being tetherless are advantages that open up use cases for HoloLens that are simply not possible with the Meta 2 (as known today).
Having pretty much surrendered the smartphone war to iOS and Android, Microsoft does not want to be left behind on the next big paradigm shift. So, they are firing on all cylinders – aiming not only at productivity, but experimenting with entertainment and games as well. Building on top of the Windows 10 ecosystem also helps a lot. And with their huge amount of resources, they are creating polished experiences that go beyond simple research experiments in all promising areas. However, Meta shouldn’t be discounted from this race – with the current hype, they are sure to secure another round of investment or be bought outright soon. And even if they don’t, the enthusiastic community will help take Meta (and HoloLens as well) to new places.
If you thought that at the end of this post, after more than 5,000 words, I would tell you that the Meta or the HoloLens is better – well, you were mistaken. Both are amazing pieces of hardware, filled with genius-level ideas and technology, and an insane amount of research. If you want to jump right in as a developer, have the money, and live in the USA: go for HoloLens. If you are intrigued by the Meta 2’s superior visual capabilities, don’t need HoloLens’ untethered freedom, and are willing to wait a little longer, the Meta 2 is probably the device for you.
In any case, what you will get is a taste of the Future.
I am 42 years old. I grew up with home computers and started this adventure with a ZX Spectrum that had a total of 48 KBytes (yes, kilobytes) of RAM, and an 8 bit CPU running at a whopping 3.5 Megahertz. I lived through the rise of the PC, the Internet and the smartphone revolution. All of these were life changing.
By now, I have a pretty good sense of when a similar revolution is approaching. And my spider sense is tingling – the next big thing is right around the corner. It is called Holographic Computing, Augmented Reality, Mixed Reality – even its name is not agreed upon yet. Once again – for the fifth time in my life – technology is on the verge of profoundly changing our lives. And if you are like me, and yearn to live and even form the sci-fi future of your childhood – this is the area to be in.
It took a lot of emails, whining, talking to the right people (thank you!), some luck, and the dedication of our MVP Lead, Andrew DeBerry, and others – but finally, the rest of the Kinect for Windows MVP group (or Emerging Experiences MVP group, as we call ourselves now) and I got a chance to experience HoloLens in person during the MVP Summit in November.
This was a pretty big deal for me. I got into the small Kinect Emerging Experiences group because of my interest in and passion for new, almost green-field ways of human-computer interaction. And HoloLens is the first mixed reality device that has a chance of being widely available. It has never stopped tickling my imagination since it was first introduced. I’ve read every review and personal account from those who were lucky enough to actually try it. I had a pretty good idea of what to expect – but I desperately wanted to experience it for myself.
This post is the summary of my experiences. It will be long, but worth it.
The Holographic Academy is Microsoft’s dedicated showcase area for HoloLens. It is located in building 37 on the Microsoft Campus. Once we left all our bags and phones in the nearby lockers, we (about 20 of us) gained entry to a rather large room. The room was dimly lit, but there was more than enough light for us to see well. In the center of the room was a round stage, on which stood a tall, muscular guy – looking just like a drill sergeant from the movies. Around him were about 5 or 6 stations, each with computers, a few HoloLens devices, a table, TVs hanging on the wall, a couch, and a Microsoft employee to help. We were directed to split into groups of 5, and each group went to a different station. My group ended up being only four people, and we had three HoloLens devices, so chances were good 🙂
The air-click gesture
The “sergeant” started to speak loudly and with authority, but fortunately without any of the scary drill-sergeant overtones. He turned out to be a pretty nice guy, welcoming us to the Holographic Academy and walking us through the basic gestures: “air tapping” (clicking) and “bloom”, which opens the Start menu. Once it became clear that we weren’t about to go through rigorous boot camp training, and we got our congratulations for mastering the air gestures, my attention started to wander. I looked at the device right next to me on the table, connected to the nearby PC for charging. It looked exactly like the one in the pictures, so no surprises there. However, I could see the display area – it was about 3×2 centimeters, though I had no way of actually measuring it. Still, it looked much larger than that of the Epson AR glasses I had tried a couple of weeks earlier.
Putting it on
Soon, we had the chance to put on the device. It has an internal band that can be expanded and contracted to fit the size of your head. The purpose of this band is to bear the weight of the device and distribute it evenly on your head – it’d be way too heavy to just rest on your nose. The actual device can then be tilted independently of this holder unit – in fact, one of the first surprises for me, as a lifetime wearer of glasses, was that the nose bridge isn’t even supposed to rest on your nose. All in all, HoloLens sat on my head comfortably, and I soon forgot about its weight. Ah, and we didn’t have to measure my pupil distance – that seems to be something the device no longer needs, or does automatically.
The head band can be expanded and contracted using the wheel in the middle. It feels very premium and comfortable.
On the left side, there are two buttons to control the volume, and on the right side, two buttons control the brightness.
This is what the volume and brightness buttons look like
When I first looked through it, I saw a pale blue border indicating the actual screen. Well, it is not really a screen – more on that later. Soon, HoloLens started booting, and I saw the familiar light blue Windows logo as it did so. Yep, it’s Windows 10 all right!
At this point I was pretty disappointed with the field of view… it was not just small, it looked like a 16:4 aspect ratio. I soon realized that the top half of the screen faded away, while the bottom had a very sharp edge. I started moving the headset around, and yes, it seemed like either my glasses or my eyebrows were actually obstructing the view. As I tilted the movable part of the headset down a bit, I managed to get a full 16:9 aspect ratio display out of it. I know this because I could position myself so that the entire display area covered one of the TV sets on the wall.
Another very interesting observation: you can not only tilt the glass, but also move it closer to or further from your eyes. The travel distance is about 5-8 centimeters, which is quite a lot. And while I did this, the actual field of view did not change! If HoloLens had a display in front of my eyes, I’d have expected it to shrink as I moved it away. But the perceived size of the display area remained the same – this suggests that HoloLens actually uses some kind of projection, and not simply a small display in front of you.
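A quick back-of-the-envelope calculation shows why this observation is so telling. The numbers below are my own hypothetical figures, not measured HoloLens specs: if the combiner were an ordinary screen of fixed width, its angular size would shrink dramatically as you move it from, say, 3 cm to 8 cm from the eye.

```python
import math

def angular_size_deg(width_m: float, distance_m: float) -> float:
    """Horizontal angular size of a flat panel of the given width,
    viewed head-on from the given distance."""
    return math.degrees(2 * math.atan(width_m / (2 * distance_m)))

# Hypothetical numbers: a 3 cm wide panel viewed from 3 cm,
# then moved 5 cm further away (8 cm total travel).
near = angular_size_deg(0.03, 0.03)  # ~53 degrees
far = angular_size_deg(0.03, 0.08)   # ~21 degrees

# A plain near-eye screen would lose over half its apparent size;
# since the hologram area stayed constant, the imagery is likely
# collimated or projected, like in a head-up display.
print(near, far)
```

Since the perceived display area did not shrink at all over that travel distance, the light reaching the eye must behave as if it comes from much further away – consistent with a projection-based optical system rather than a simple screen.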
In less than a minute, the logo disappeared, and a small 3D graphic took its place – a line drawing, resembling three mountains, made out of just a couple of lines. A text saying that HoloLens is scanning the environment was displayed. And soon, magic started to happen.
Scanning the Room
The spatial mapping process
The room scanning (Spatial Mapping) looks somewhat like what you see on the videos – triangles made of light start to cover the real objects in front of you. There are a couple of differences though:
The triangles were not filled blue triangles, but outlines, made of white light
It took about 20-30 seconds for the scan to finish
The triangles had sides of about 5-10 real-life centimeters. This is more than enough to discover the walls, furniture and other objects in the room, but the resulting mesh is pretty low resolution for anything beyond that. I have no idea whether this mesh is used only to position the device in 3D space, whether HoloLens is capable of higher resolution environment mapping, or whether apps can actually access this information.
During the “boot camp”, we were asked to first use the “Origami” app, and then we were told that we could experiment freely, although a lot of the apps on the device might not work, or not work well. So, when the scan finished, the Start menu was presented, hanging in the air, fixed in space in front of me. The way HoloLens interaction works is somewhat like a mouse – you have a pointer which you direct with your head, and the air-click gesture to activate whatever the pointer is over. All the usual effects – “mouseover” and click animations, and even sounds – are in place.
The way you move the pointer is by moving your head – not your gaze. The pointer sits in the middle of the screen and looks like a small circle. The air-click gesture can be performed pretty much anywhere in front of your body. However, simply bending your finger down and up is not enough – it is no coincidence that we were trained “hard” to move our entire finger and touch our thumbs. If you do the gesture right, it works well, and detection is pretty reliable.
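Under the hood, this kind of head-gaze cursor boils down to casting a ray from the head along its forward direction and placing the cursor where the ray first hits a surface. In a Unity app you would use the camera transform and `Physics.Raycast`; the snippet below is my own plain-Python sketch of the same geometry (the function name and the wall-as-a-plane setup are illustrative, not HoloLens code).

```python
import numpy as np

def gaze_cursor(head_pos, head_forward, plane_point, plane_normal):
    """Place the cursor where the head's forward ray hits a plane.

    Returns the 3D hit point, or None if the ray is parallel to
    the plane or the plane lies behind the user.
    """
    head_forward = head_forward / np.linalg.norm(head_forward)
    denom = np.dot(plane_normal, head_forward)
    if abs(denom) < 1e-9:
        return None  # looking parallel to the wall
    t = np.dot(plane_normal, plane_point - head_pos) / denom
    if t < 0:
        return None  # wall is behind the user
    return head_pos + t * head_forward

# User at the origin at ~1.7 m eye height, looking straight
# ahead (+z), at a wall 2 m away facing back toward them.
hit = gaze_cursor(np.array([0.0, 1.7, 0.0]),
                  np.array([0.0, 0.0, 1.0]),
                  np.array([0.0, 0.0, 2.0]),
                  np.array([0.0, 0.0, -1.0]))
print(hit)  # cursor lands at (0, 1.7, 2.0) on the wall
```

The real device raycasts against the spatial mapping mesh rather than an idealized plane, which is why the cursor sticks to tables and couches, not just walls.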
So, as a newbie Holographic Academy graduate, I obediently moved my head so that the pointer was over the Origami app, and clicked – I mean, air-tapped.
An early version of the Origami app. We were shown a much more refined and complex version.
The Origami app starts out as a holographic cube. It moves along with your gaze, and as you look around, it stays mostly in center – but sticks to the floor, the walls and the tables. I moved my head so that it was on the table, and put it there, using the air-tap gesture.
This was the first time I actually examined the holograms themselves. And I think Microsoft chose a very good name when they decided to call these things “holograms”. They look exactly like what you’d expect after watching too many Star Wars movies: a perfect illusion of 3D objects hanging in space. Still, you’d never mistake a HoloLens hologram for a real-life object. There is one fundamental difference: holograms are actually made of light. Real objects reflect light, and thus they are not bright in a dark room (except for lamps and some tricky lighting, but I digress). Holograms are made of light themselves, but their light is not reflected by the furniture around them.
But that’s where the similarities with R2-D2’s projection in Kenobi’s cave end. Because the holograms in HoloLens’s field of view are absolutely amazing. The holograms stick in place. You can move your head around, and they remain exactly where you put them. You can move yourself, and examine the hologram from every direction. You can jump up and down (believe me, I tried, looking more like an idiot than usual), and they are still there. No jumps, no glitches, no nothing. The holograms are where you put them, and they stay there. They are also extremely solid – there’s barely any transparency to be seen. Of course, this also depends on the brightness level set on the device.
The 3D illusion is also perfect. If you go to an IMAX movie, the 3D can be breathtaking – but you’ll never confuse it with reality. You always know that it’s an illusion. Not so with HoloLens. The holographic objects are “just there”. You don’t have to convince your brain that you’re looking at a 3D thing, because you ARE. There is none of that over-emphasized “look, I am 3D” feeling that you get with 3D movies. Things are just naturally there, and naturally 3D. And this is extremely important to keep the illusion alive – because holograms need to work together with, and next to, the real, three-dimensional world. This is the real mind-blowing part of the HoloLens tech – the 3D illusion is so perfect, you don’t have to suspend your disbelief, because there is no disbelief to begin with. Except for the field of view…
OK, back to the Origami experience, which I placed on a small table earlier. There are two slopes in the hologram, seemingly made of folded paper, both of which have an origami ball suspended above them. If you air-tap, the balls fall down on the ramps, roll down, and there is an explosion at the bottom of the ramp as the balls hit the table. Then, the table “opens up”, and through the hole you can peek into a new world. The world has origami birds, clouds, mountains and a blue sky – and you’re looking at it from above. The illusion is perfect, it’s as if you opened a Portal in Valve’s game, and are now looking down from the sky. You can walk around the table, and peek into this portal from every direction – the illusion stays impeccable.
One of the speakers
The Origami experience (along with some others) emits 3D spatial sound, which gives you an important cue about what’s happening around you, and even behind you. Unfortunately, there was something wrong with my device, and I couldn’t get the spatial sound illusion working, even though my spatial hearing is fine in the real world. This was probably some sound driver issue limited to the device I tried. Others in my group had no sound problems, but my HoloLens actually rebooted itself when I tried to launch the Cortana app (remember, HoloLens is Windows 10). Voice commands didn’t work for me either – there was supposed to be a “reset world” command for the Origami experience, which wasn’t recognized even when the helper in our area said it leaning close to my headset. BTW, try doing that with a VR headset – it would really freak out the wearer.
The “Holograms” app
After I had enough of the Origami app, I performed the “bloom” gesture again to bring back the Start menu.
(This is the Bloom gesture, but the resulting Start menu doesn’t look like this.)
The next app I tried was a simple one – you could select holograms from a list and place them anywhere in space. If I recall correctly, the app was called “Holograms”. Once I selected a hologram, it followed my gaze (meaning where my head was pointing, not my eyes), and stuck to any surface I was looking at. I could then fix the hologram in place with an air-tap. Some of these holograms were just static 3D objects; some were animated. If I wanted to move a hologram, I had to air-tap on it, and a surrounding box would appear, with text options below it to delete, move or resize the hologram. Deleting worked much as you’d expect. However, I had some cognitive issues with resizing. The problem is that the cursor is usually moved with your gaze (head), but when resizing, I had to perform a pinch gesture with my fingers and move my hand in 3D space. Basically, when performing a “drag and drop” operation you have to move your hands – but in every other case, the pointer moves with your gaze. Multiple times I moved my head to move the resize handles when I should have used my hands. The reason for this design is understandable – gaze is a 2D pointer, but when you want to move stuff around, you want to move it in 3D. HoloLens perfectly followed my hand movement in all three dimensions, but the experience was still somehow confusing to my mouse-trained brain.
You can see the difference between moving the pointer with your gaze for clicking, and with your hand for dragging. Takes a while to get used to. Also, see how any Universal Windows App can run on HoloLens?
The first thing I did was to place a rainbow Hologram on the floor, about 2-3 meters from me. As I said before, the 3D illusion was perfect, but you couldn’t confuse the Hologram with anything real, because it was made of light (and the objects around didn’t get lit by the light the Hologram radiated).
The next thing I tried was to put a holographic space suit helmet on the head of a fellow MVP, patiently sitting on the couch and waiting for her turn. I could easily move, rotate and resize the helmet to fit it on her neck. The hologram completely blocked out her head, I couldn’t see her face – until she moved a bit and was only half covered with the helmet 🙂 Still, the illusion was great, and I could move around and look at her helmet from all directions. Now that I think of it, I must have looked like a freaky stalker, staring at her head in my futuristic glasses, with mouth half open in wonder… Sorry 🙂
One thing I didn’t try, though, was placing large holograms. I guess I am so used to small screens that it didn’t even occur to me that computer-generated objects can be as big as a human, or even larger. Just imagine using this technology to design clothing, or any machinery, and seeing the result in 3D, in real time, in the real environment!
I also placed a small ballerina hologram on the floor, next to the small orange table. As I moved around the table, the table started to cover the hologram, just as a real object would! Well, mostly… because of the low resolution environment mesh, the surface of the round table was pretty sparsely modeled – and therefore it couldn’t hide the hologram perfectly. This was probably the weakest holographic experience I had: a real-life object was supposed to hide the hologram, but couldn’t, because its mesh was too crude.
I even played with covering the depth sensors on the glass (there are two on each side, looking somewhat outwards). When I covered one, the tracking remained stable. However, when I covered the other one as well, tracking was lost, and I was back to the initial environment scanning, where the triangle-based 3D mesh of the environment was built up again.
The next thing I tried was launching the Edge browser. It worked as you’d expect – a 2D window, floating in front of you in space. Selecting links and navigating was simple enough. However, text clarity was not perfect; I could see a disturbing ghost image on smaller text. It may have been because I have pretty strong glasses, with more than 1 diopter of difference between my eyes. I should have closed one eye to see if the text became clearer – but unfortunately, I didn’t think of that while I was there. Scrolling the browser was simple – a drag-and-drop gesture similar to the one used for resizing objects.
The Last Mind Blowing Experience
At this point I was running out of time, so the last thing I tried was launching the Photos app. You know, the Windows 10 app that has all your photos? I was expecting another “2D app floating in space”. But boy, was I wrong!
It started out much like the Edge browser – you select the app from the Start menu, place it in space, and start interacting with it. But the thing about HoloLens is that you can move close to the holograms. And that’s what I did: I moved closer to the app, looking for the point where the field of view limitations would hinder the experience. But instead, I found something else. I found that the actual photos were in front of the app! Field of view concerns totally forgotten, I moved even closer. And yes – the Photos app was not the same app running on “standard” Windows 10. It looked similar, but the UI elements were actually in 3D! The photos floated in front of the app’s background and cast small shadows. I moved to the side, and I could actually see the gap between the photos and the rest of the app. And the rest of the app actually had some thickness to it! Not just a thin sheet of paper floating in the air – the app’s toolbar had thickness, and the app’s background was also a solid 3D object.
To me, this app was the most mind-blowing thing I saw. I was expecting most of the other things, based on what I had read and heard of HoloLens before. I just wanted to see those with my own eyes. But… seeing an app I use every day in 3D, with real substance, real depth – wow. Just wow. It sounds cliché, but these final few minutes with a solid, 3D version of an everyday 2D app made me realize that this is the device that really transforms computing into a new dimension.
Well, it is not exactly a new year just yet, but we are pretty close. A lot has happened to me, and a lot has changed in my professional career during this last year or so – and as a result, I am taking up blogging again.
One of the big changes is in my MVP status: I am proud to say that I am now part of the Emerging Experiences MVP group, focusing on what Microsoft calls More Personal Computing: Cortana, inking, gesture driven computing (Kinect), Microsoft Band, Windows Hello and of course, Holographic Computing with HoloLens. The Emerging Experiences group is part of the larger Windows Developer MVP group, which is awesome, since I have been doing, and will be doing, a lot of things in this area, too.
One of the biggest projects I am finishing up is my course on Pluralsight, called “Introduction to Universal Windows Platform Development with XAML”. I’ve put a lot of work into this course, and I am very proud of the result.
What About the Old Blog?
I have to leave my old place at http://vbandi.dotneteers.net – that blog runs on a very outdated blog engine on my own server, and I just can’t keep maintaining it. I’ll do my best to at least preserve the old posts somehow – I still get a decent amount of traffic, and sometimes my own searches take me back to that old site. But new posts will happen here.
And what will be Here?
I’ve always been fascinated by how computing can be made available to an ever growing audience – by new human-computer interaction paradigms and user experience. I thoroughly enjoy living in the sci-fi world of my childhood – there’s so much stuff possible today that was only a dream 20 years ago. And what’s even better, I get to be a small part of bringing this future to life, making it happen. Being a software developer is the closest you can be today to being a real-life wizard. Making thoughts materialize, turning ideas into joyful moments for millions, making people more productive or even changing or saving lives – it’s all possible through software. It is a difficult profession though – sometimes even creating something small can take months of hard work, rather than just whispering a few magic words or waving a wand. But this is the kind of magic that works!
I don’t have a planned set of topics. I’ll go wherever my professional career and my passion for the magic of technology take me. Right now, that is my UWP course on Pluralsight and my fascination with HoloLens and other emerging experiences. It may be something entirely different a few months from now. The only way to find out is to begin and walk the walk. I’d be happy to have you along for the ride!