I write this post immediately after watching Apple unveil the highly anticipated Vision Pro headset, geeking out in my living room. Even with all the hype surrounding the device, I never expected it to be as impressive as what I witnessed during today’s keynote. The downward-facing cameras, the lenses, the EyeSight display, and the small battery pack were just a few of the details that stood out to me as state-of-the-art. And then, just when I thought it couldn’t get any better, they dropped collaborations with Disney and Unity to top it all off.

I am an XR developer by experience; most, if not all, of my work experience has had something to do with AR/VR. I took a class in augmented reality in my freshman year to learn the fundamentals, and I fell even further in love with the technology than I already was. During that time, I got involved with a great team at my school, the University of Miami, known by two names: the XR Garage and the Innovate Team. With this team, I began developing AR applications for both mobile and Magic Leap headsets, and I have had my own Magic Leap for over a year now. For the majority of that time, however, I have been the lead developer on an iOS/Android application that uses Apple ARKit (now extended to the Vision Pro) and Google ARCore to deploy an AI application in 3D; I worked on everything from the design to the front end, back end, and machine learning. This unique experience allowed me to begin my development journey outside the classroom via Unity. However, as a computer engineering and computer science student, I plan to work with devices and systems at levels lower than applications once I get my degree. And while I am only entering my junior year, I am doing whatever I can to put myself in the best possible position to break into this fantastic industry post-graduation.

I’m fortunate enough to own a few devices in the Apple ecosystem: a MacBook Pro as my daily computing device, an iPad Pro as my jack-of-all-trades notebook/second screen, and, of course, an iPhone (with AirPods Pro) to boot. As a student, I take advantage of every opportunity around me to acquire these things that make my life a million times easier, from taking notes in class to doing homework to reading textbooks; I literally never have any paper in my backpack, just my MacBook and my iPad. While I typically wait until after developer conferences to watch recaps of new hardware and software like most people, my interest in XR had me counting down the days until this event, and I had to see the unveiling of the new headset for myself.

Apple’s Software & Hardware Updates

Tim Cook took center stage navigating the keynote while letting other Apple leaders like Craig Federighi and Kate Bergeron discuss the various products Apple presented. First was the new MacBook Air, a larger-screened version of the everyday, lightweight laptop, now with the M2 chip. Pretty cool. Apple also lowered the prices of the 13-inch MacBook Air models (both M1 and M2) to keep the lineup affordable below the new model’s $1299 price tag. The M2 is a fantastic piece of tech that has pushed computing speed forward like never before, which is exciting both for Apple users and for anyone watching the competition. The new Mac Studio now comes with the M2 Max or the M2 Ultra, two M2 Max chips fused together for peak performance, and the new Mac Pro ships with the M2 Ultra. Having worked with Arm processors and systems-on-chip in class, I know it’s not as simple as sticking two chips together, but doubling everything an SoC provides can only be a good thing, especially knowing that the M2 Ultra supports up to 192GB of unified memory. I can’t imagine what the average person would do with all that memory once this kind of technology becomes standard. Given these devices’ price tags and capabilities ($6999 for the Mac Pro), it would be rare to see anything beyond the Mac Studio in anyone’s home office, but I can always be proven wrong.

(Image: iOS 17 — apple.com)

Next, iOS 17 was announced, mainly bringing updates to the Phone, Messages, and FaceTime apps. My favorite takeaways were the upgraded “ducking” autocorrect powered by AI, contact posters, live voicemail, improved AirDrop and SharePlay (this one’s big for me), easier replying in Messages, the new Check In feature, and the ability to share contact posters over AirDrop for quicker networking. The new StandBy mode is also super cool, though it shines most on the iPhone 14 Pro models since it takes advantage of their always-on displays. One of my favorite additions to iOS 17 is the Journal app, a great new inclusion to promote mindfulness. I’ve been hearing a lot lately about the benefits of journaling, so an app dedicated purely to that purpose is great, instead of having to journal alongside your notes or in a text editor like Microsoft Word.

The conference’s most “meh” moment was when Apple revealed the new iPad features, specifically when they started by telling us that the Health app is coming to iPad. Since I don’t believe the iPad is convenient enough for updating your health and workout information compared to the iPhone, it felt anticlimactic; given how little iPad-specific functionality it contains, it’s an app that should have already been there. Health is adding mental health evaluations, though, which is fantastic. Aside from this, iPadOS 17 brings over features previously added to the iPhone, such as lock screen customization, plus some great extra PDF functionality for the iPad. I use GoodNotes to house all of my notes and PDFs since it uses the cloud, allowing me to quickly pull up anything on my MacBook and instantly open it on my iPad. Maybe the iPad’s new features will rival apps like these, but for now, I don’t think I’ll be going back and forth between apps when GoodNotes works perfectly fine for me.

macOS Sonoma is coming out with really cool screen savers, as Apple does with every update. This time, instead of dark-mode/light-mode variants of the same view, they are live, slowly moving screen savers that settle into your desktop once you unlock your device. Dope; I’ll definitely be using those once it comes out. Private browsing in Safari got more private, porting games to the Mac became more convenient for developers, and widgets moved to the desktop. Overall, a great update. The best part is the new video conferencing features, which let users present themselves in different ways alongside their shared screen, with one option using your camera to place your shared screen behind your body and in front of your background for real-time presentation.

Aside from these significant changes, Apple is releasing AirPods software updates with better, personalized, situation-aware noise cancellation. Find My is coming to the Apple TV remote, FaceTime is coming to Apple TV, and widgets are coming to the Apple Watch, alongside new watch faces and dedicated activity features for cycling and hiking.

On the Vision Pro

Now, for the main event: the Apple Vision Pro. I haven’t wanted to own a device this badly at launch since I learned about the original Oculus Rift in 2014. While that one turned out to be a disappointment for my younger self, now that I’m older I can take full advantage of working with this new product as it hits consumer shelves. This behemoth of a device, at $3499, is more than anyone today could wish for in a mixed reality headset compared to the current state of the art.

(Image: Vision Pro — apple.com)

It transcends the mixed reality headset category into the realm of the spatial computer. Current headsets designed primarily to “augment” your reality have many technological limitations, from the clipped field of view to the heat, heaviness, low resolution, and low frame rate synonymous with these devices. Apple aims to sidestep these problems with a different format of immersion. Both of Magic Leap’s headsets let users see the literal world around them, with digital content overlaid on top. Google Glass and Microsoft’s HoloLens series also employed this type of technology to integrate real life with digital objects. Apple’s device, however, is different. If you put on the Vision Pro with the device off, you won’t be able to see anything. What you see of the real world is the work of many cameras reconstructing the scene in front of you, displayed so convincingly that you can’t tell the difference. This is the same passthrough approach Meta uses in its Quest series, but it has never been their flagship feature; they focus more on total immersion and virtual reality. Because of this, their passthrough is lacking, and the mainline Quest headsets didn’t even support color passthrough until the new Quest 3.

Once you put on the headset, there are no controllers. The headset is controlled entirely by your body, specifically your eyes, voice, and hands. With the eye-tracking technology inside the headset, your eyes can be used accurately as a cursor. From there, once your eyes settle on the virtual element you want, you simply perform hand gestures to interact with it, whether it’s a search bar, a button, or a 3D object. The most fascinating part of the hand gestures is that you don’t actually need your hands in view to use them as input, as you do with the Meta Quest or Magic Leap. The Vision Pro comes with downward-facing cameras that can see your hands even when you’re not looking at them, allowing for subtle interactions with the environment without ever lifting your arms.
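To make that concrete, here is a minimal sketch in plain SwiftUI (my own illustration, not code Apple showed). The interesting part is what you don’t have to write: per the keynote, looking at a button and pinching your fingers triggers the same action a finger tap would on an iPhone.

```swift
import SwiftUI

// A minimal sketch: nothing here is Vision Pro-specific. The point is that,
// per Apple's presentation, gaze + pinch maps onto the same tap events this
// ordinary button already handles, so existing UI code keeps working.
struct GreetingView: View {
    @State private var greeted = false

    var body: some View {
        VStack(spacing: 20) {
            Text(greeted ? "Hello, spatial computing!" : "Look at the button, then pinch")
            Button("Say hello") {
                greeted = true   // fired by gaze + pinch instead of a touch, no controller involved
            }
        }
        .padding()
    }
}
```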

The next cool thing about the headset is something Apple calls “EyeSight.” This front-facing feature lets people looking at your headset see your eyes, rendered in real time on an outward-facing display. It enables you to interact naturally and less awkwardly with others while wearing the device: people can actually see when you are looking at them through the headset, and the display also makes clear when you are immersed in something, so those interactions aren’t awkward on either side.

The displays are also more impressive than those in any other headset on the market. Apple fits 64 OLED pixels into the space of a single iPhone pixel, with each pixel just 7.5 microns wide (roughly a tenth the width of a human hair, so pretty damn small). Across both panels, that comes out to 23 million pixels displayed in front of you, meaning each of your eyes has more pixels in front of it than a 4K TV. In front of these screens is a curved lens system that enables the proper viewing experience. With this, Apple may have topped the rest of the industry on graphics, as most, if not all, current headsets don’t have a high enough resolution to be considered “retina-like.”
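A quick back-of-the-envelope check on that 4K comparison, using only the figures quoted in the keynote (my own arithmetic, not Apple’s):

```swift
// My own rough check of the keynote numbers, not official Apple math.
let totalPixels = 23_000_000.0           // "23 million pixels" across both panels
let perEye      = totalPixels / 2        // ~11.5 million pixels per eye
let fourKPanel  = 3_840.0 * 2_160.0      // ~8.3 million pixels on a 4K TV
print(perEye > fourKPanel)               // true: each eye really does beat a 4K panel
```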

Because of the insane number of sensors installed on the device, Apple developed a new chip called the R1, which sits alongside the M2 chip that serves as the device’s central processing power. The R1 processes all the input from the device’s sensors to produce a “virtually lag-free” display. Thanks to this chip, the M2 doesn’t have to waste processing power crunching sensor data, and new images are streamed to the displays in less than 12 milliseconds.

(Image: Vision Pro — dataconomy.com)

A crazy number of things were said about the headset during the conference, and I urge you to read more about it if you’re interested. The integration with the Apple ecosystem makes the headset a unique product, unlike anything I’ve ever seen. But the part that got me the most excited is the announced portability from Unity to the Vision Pro. The Vision Pro was designed to take advantage of applications already created for iPhone and iPad, so it comes with ARKit and RealityKit built in; these are already the standard in AR application development and are now extended to include Vision Pro development. That was to be expected, but I got excited once they revealed that apps created in Unity with ARKit would port easily to the Vision Pro. Most applications currently designed for AR use Unity as their engine because the packages the company provides make it easy to integrate AR dependencies and build for different operating systems, such as iOS, Android, and Lumin OS for Magic Leap. Because of this, Unity is the industry standard for cross-platform XR development, and it is also where all of my prior XR development experience lies.
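For context, this is the kind of ARKit setup existing iOS AR apps are built around today; a minimal, hypothetical sketch rather than visionOS code, and the promise from the keynote is that these same frameworks carry over to Vision Pro development.

```swift
import ARKit

// A minimal, iOS-style ARKit session of the kind current AR apps rely on.
// Apple's claim is that ARKit and RealityKit now extend to the Vision Pro,
// so code built around these frameworks should have a clear path to the headset.
let session = ARSession()
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = [.horizontal, .vertical]  // find floors, tables, walls
session.run(configuration)
```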

Ralph Hauwert, SVP at Unity, joined Apple’s Platforms State of the Union to provide further information on Unity’s collaboration with Apple. Once the SDK for the Vision Pro is released, not only will developers be able to create new apps intended for the Vision Pro, but we will also be able to port our existing AR applications to it easily, since it will use standard Unity frameworks like AR Foundation and the same ARKit and RealityKit that developers are used to, all within the same Unity workflow we already know. Because of this, there will be no shortage of AR applications available for the device at launch, and I hope to release some projects of my own by then to take part.

All in all, this new device is nothing short of revolutionary. So the only question left is: how will the public receive it? Are we ready to start buying headsets, or will this excellent device end up flopping like all the rest before it? Only time will tell, but I am hopeful about this product’s future. This is my first real blog post, and I hope it interests anyone who comes across it; I plan on writing more in the future. For now, it’s back to my summer activities: interning, networking, applying, and LeetCoding.