I tried the Vision Pro today. I had heard mixed reviews, mostly about how heavy it is and people getting headaches or vertigo. Those challenges are real. Even still, the experience was pretty incredible. I might need to go back for another demo, so I can choose my own adventure rather than staying on the standard demo path.
The eye-tracking selection was natural and pretty effective, though my eyes got a bit tired and I eventually got a mild headache. Pinch-to-select was also quite good, but when I wanted to cross my arms, the device's cameras could no longer see my pinch and I had to put my hands back in my lap. An Apple Watch could probably solve this somehow.
The 8K video footage was something different. Spoiler alert: they do a cheap trick where they film a ball flying right at you, and you instinctively have to put your hands up to protect yourself. It feels like you are there, more than anything I’ve experienced outside the real thing.
This achievement is remarkable but makes me a bit uneasy. As Apple solves the challenges of device weight and motion sickness, we may end up wearing a device like this all the time. Combined with generative models, it will be increasingly possible to transform the aesthetics of reality to our liking. I could see this pushing people further into their devices and away from the real world. We're already pretty into our phones, but at least we know the difference between our phone and reality (to say nothing of what we read on our phones). The Vision Pro makes you start to forget that what's in front of your face isn't real. Powerful.