Louis Rosenberg is the CEO & Chief Scientist at Unanimous AI. He first got his hands dirty in the virtual world in 1991 as a PhD student working in a virtual reality lab at NASA. He used a variety of early VR systems to model interocular distance (the distance between the eyes) and optimize depth perception in software. Despite being a true believer in the potential of virtual reality (VR), he found the experience lacking. Not because of the low fidelity, as he knew that would steadily improve, but because it felt confining and claustrophobic to have a scuba mask strapped to his face for any extended period.

According to him, even when he used early 3D glasses — shuttering glasses for viewing 3D on flat monitors — the sense of confinement didn’t go away. He still had to keep his gaze forward, as if wearing blinders to the real world. All he wanted back then was to take the blinders off and allow the power of virtual reality to be splashed across his real physical surroundings.

This sent him down a path to create the Virtual Fixtures system for the U.S. Air Force, a platform that enabled users to manually interact with virtual objects that were accurately integrated into their perception of a real environment. This was before phrases like “augmented reality” or “mixed reality” had been coined. But even in those early days, watching users enthusiastically experience the prototype system, he was convinced the future of computing would be a seamless merger of real and virtual content displayed all around us.

Cut to 30 years later, and the word “metaverse” has suddenly become all the rage. At the same time, the hardware for virtual reality is significantly cheaper, smaller, and lighter, and has much higher fidelity. And yet, the same problems he experienced three decades ago still exist. Like it or not, wearing a scuba mask is not pleasant for most people, making you feel cut off from your surroundings in a way that’s just not natural.

This is why the metaverse, when broadly adopted, will be an augmented reality environment accessed using see-through lenses. This will hold true even though full virtual reality hardware will offer significantly higher fidelity. The fact is, visual fidelity is not the factor that will govern broad adoption. Instead, adoption will be driven by which technology offers the most natural experience to our perceptual system. And the most natural way to present digital content to the human perceptual system is by integrating it directly into our physical surroundings.

Of course, a minimum level of fidelity is required, but what’s far more important is perceptual consistency. By this, he means that all sensory signals (i.e. sight, sound, touch, and motion) feed a single mental model of the world within our brain. With augmented reality, this can be attained with low visual fidelity, as long as virtual elements are spatially and temporally registered to our surroundings in a convincing way. And because our sense of distance (i.e. depth perception) is relatively coarse, it’s not hard for this registration to be convincing.

But for virtual reality, providing a unified sensory model of the world is much harder. This might sound surprising because it’s far easier for VR hardware to provide high-fidelity visuals without lag or distortion. But unless you’re using elaborate and impractical hardware, your body will be sitting or standing still while most virtual experiences involve motion. This inconsistency forces your brain to build and maintain two separate models of your world — one for your real surroundings and one for the virtual world that is presented in your headset.

When Louis Rosenberg tells people this, they push back, forgetting that regardless of what’s happening in their headset, their brain still maintains a model of their body sitting on a chair, facing a particular direction in a particular room, with their feet touching the floor. Because of this perceptual inconsistency, their brain is forced to maintain two mental models. There are ways to reduce the effect, but it’s only when we merge real and virtual worlds into a single consistent experience (i.e. foster a unified mental model) that this truly gets solved.

This is why augmented reality will inherit the earth. It will not only overshadow virtual reality as our primary gateway to the metaverse but will also replace the current ecosystem of phones and desktops as our primary interface to digital content. After all, walking down the street with your neck bent, staring at a phone in your hand, is not the most natural way to deliver content to the human perceptual system. Augmented reality is. For that reason, AR hardware and software will become dominant within 10 years, overshadowing phones and desktops in our lives.

This will unleash amazing opportunities for artists and designers, entertainers, and educators, as they are suddenly able to embellish our world in ways that defy constraint (see Metaverse 2030 for examples). Augmented reality will also give us superpowers, enabling each of us to alter our world with the flick of a finger or the blink of an eye. And it will feel deeply real, as long as designers focus on consistent perceptual signals feeding our brains and worry less about absolute fidelity.

As for what the future holds, the vision currently portrayed by large platform providers of a metaverse filled with cartoonish avatars is misleading. Yes, virtual worlds for socializing will become increasingly popular, but they will not be the means through which immersive media transforms society. The true metaverse — the one that becomes the central platform of our lives — will be an augmented world. And by 2030 it will be everywhere.

To know more about news related to Augmented Reality in the Metaverse, check out Lexyom’s Legal News publications.

Legally Yours,