2026 Trends To Watch: Physical AI, Spatial Computing And The VR Boom


SOURCE: FORBES.COM
NOV 25, 2025

By Robert C. Wolcott, Contributor.

I explore business, leadership and humanity in our technological age.



VR circa 2016. Facebook founder and chief Mark Zuckerberg (C), Axel Springer CEO Mathias Doepfner (R), publisher Friede Springer (2nd R) and other guests try virtual reality devices at the Axel Springer Award in Berlin, 2016. (Photo by Kay Nietfeld/POOL/AFP via Getty Images)


The next really big thing (beyond AI) is almost upon us, and most people haven’t noticed. After our three-year LLM obsession, the wider horizons of AI and robotics are starting to percolate beyond the lab. They’ll eventually transform Virtual Reality (VR).

Zuckerberg wasn’t wrong about the metaverse. He was early. Over the next three to five years, VR will reemerge transformed by the convergence of Generative AI, Physical AI and spatial computing. VR, back and vindicated.

Life, B.C. (Before ChatGPT)

In late 2022 (thanks to Sten Saluveer) I spoke at Estonia’s Black Nights Film Festival, a gathering of edgy, leading-edge masters of film production. I predicted that within a decade—we didn’t know when—technology would enable us to generate video on demand, in real time, for an audience of one. Eventually, entire lived experiences would emerge in real time in VR. These artists and technicians found the vision inspiring—and terrifying—but it wasn’t yet possible.



Image from the mainstage of the Black Nights Film Festival (Pimedate Ööde filmifestival, PÖFF), hosted in Tallinn, Estonia each year in November since 1996.


Until it was. On November 30, 2022, ChatGPT launched. I had no idea LLMs had become so good, so fast, though I had been watching weak signals for years, including signals from a nonprofit lab called OpenAI.

Today, AI masters like Fei-Fei Li and Yann LeCun are moving beyond LLMs. As LeCun—who will step down as Meta’s Chief AI Scientist at the end of 2025—asserts, “LLMs are not a path to human-level AI.”

They and others are doubling down on “Physical AI”—the fusion of perception, reasoning and control in 3D space that lets machines act autonomously in the physical world—and on spatial computing, which uses mapping, tracking and 3D representations to give computers and humans shared visions of physical environments. Physical AI enables action; spatial computing provides spatial frames of reference in which action occurs.


Physical AI and spatial computing are combining with the next wave of VR, what some call VR+, including AR and mixed reality. Not the metaverse of empty virtual Walmarts or $3,000 headsets, but something far more compelling.

Butterfly Wings & VR Futures

This past week, a cryptic deal between unexpected partners—healthcare and generative AI—signaled this shift. Few noticed. Even fewer understood the implications.

On November 17, Butterfly Network (BFLY), the handheld-ultrasound pioneer, announced a five-year co-development and licensing agreement with Midjourney, one of the most influential AI image-generation labs. Midjourney will pay Butterfly a $15 million upfront fee and $10 million annually for access to its ultrasound-on-chip platform and software—plus milestone payments and revenue-sharing tied to future hardware.


BFLY's public Form 8-K filing regarding its Co-Development and Licensing Agreement with Midjourney, Inc.


Butterfly originally designed its chip to collapse a cart-sized imaging machine into a handheld probe. The system is a spatial sensor, not just a medical component. Midjourney is licensing the platform for next-generation sensing and spatial understanding. This deal marks Butterfly’s shift to a sensing platform for AI at the edge.

Why This Deal Matters

Butterfly’s ultrasound-on-chip adds depth, motion and sensing (even through surfaces) that cameras alone can’t achieve, expanding spatial computing’s perception, eventually enabling richer VR environments.

Midjourney’s R&D roadmap envisions next-generation models for video and 3D environment generation, building towards—as CEO David Holz referenced in internal 2024 “Office Hours” commentary—“holodeck-like” worlds. Within that arc, the Butterfly–Midjourney partnership represents an early step toward VR systems that can perceive the world, generate environments and respond in real time.


David Holz speaks onstage during the 2013 SXSW Music, Film + Interactive Festival on March 9, 2013 in Austin, Texas. Holz would go on to found Midjourney in 2021. (Photo by Bobby Longoria/Getty Images for SXSW)


Midjourney hasn’t disclosed what exactly they’re developing. My speculation: whatever launches will integrate acoustic, visual and other sensors to blend people, AI agents and objects seamlessly into VR environments. Perception, intelligence and experiences at the VR edge.

Next-Generation Metaverse

Meta has reported nearly $70 billion in accumulated losses (read: R&D investments) in Reality Labs since Mark Zuckerberg founded the lab in 2021. Apple’s Vision Pro arrived with fanfare and a $3,499 price tag but lacked content and mass adoption. Google launched and later abandoned its Daydream VR platform.

For many observers, the Metaverse failed.

Fortunately, Meta, Apple and Google can afford to lose billions. Their investments catalyzed R&D and inspired entrepreneurs and researchers to push further still.

The past few years delivered some missing ingredients. Midjourney and OpenAI’s Sora now create photorealistic scenes increasingly aligned with physical laws. Spatial mapping continues to improve. Edge AI chips deliver real-time inference and—like Butterfly’s—low-power physical sensing in small packages.

Meanwhile, Figure and Tesla are building general-purpose humanoid robots that learn multi-step tasks from large-scale models rather than scripts. NVIDIA’s Omniverse provides a simulation fabric for training on millions of scenarios before a robot ever touches a home or factory floor. These systems share a pattern: tight loops between sensing, world models and action.

While mostly developed for robots, these capabilities are eminently applicable to creating far better VR. No one knows how long this will take, but the consensus among experts I’ve interviewed is that no physical laws prevent VR experiences from becoming indistinguishable from reality.

Future VR interfaces might include headsets, room-scale installations, smart glasses or as-yet-unimagined form factors. Eventually, interfaces might disappear altogether through brain-computer interfaces (BCIs) now under development by companies like Neuralink, Synchron and Precision Neuroscience.


Moran Cerf speaks at the New York Times Building in 2023 in New York. (Photo by Charles Sykes/Invision/AP)


My long-time collaborator, Columbia University neuroscientist Moran Cerf, explains why he believes we’ll see invisible interfaces in our lifetimes. “Making BCIs work poses wicked engineering challenges. Humans have ways of solving those.” Add our AI partners, and we’re off to the VR races.

Freed from face-mounted screens and no longer limited to pre-rendered worlds, VR systems will perceive your physical space and immerse you in responsive, generative environments—humans and robots included.

Life A.D. (After Diffusion): New Worlds On Demand

Yann LeCun’s next act signals that AI must grow a body, not just better autocomplete. After a decade leading Meta’s FAIR lab, he is leaving to launch a startup focused on a “new paradigm of AI architectures” capable of understanding physical worlds, reasoning over time and executing complex actions. Of course it’s about robots, but it’s part of much wider horizons.

Fei-Fei Li’s new company, World Labs, describes itself as “a spatial intelligence company, building frontier models that can perceive, generate and interact with the 3D world.” Its first product, Marble, turns text, photos and video into persistent, editable 3D environments. Li argues, “world models must be able to generate worlds consistent in perception, geometry and physics,” with GenAI maintaining spatial coherence over time.

Synthesize LeCun’s Physical AI with Li’s spatial intelligence and VR, and the future becomes visible. One more thread I’ll leave to you, the reader: take a look at Google’s nano-banana concept and consider how it relates to future VR.


LONDON, ENGLAND - NOVEMBER 5: King Charles III (top row, 2nd left) poses for a group photo with the recipients of the 2025 Queen Elizabeth Prize for Engineering, (top row, left-right) Professor Yoshua Bengio, Dr. Yann LeCun, Professor Geoffrey Hinton, (bottom row, left-right) Jensen Huang, Dr. Fei-Fei Li, Dr. Bill Dally and Professor John Hopfield, honored for their contributions to the development of modern machine learning in the field of Artificial Intelligence, during a reception at St James' Palace, November 5, 2025 in London, England. (Photo by Yui Mok - Pool/Getty Images)


Their work advances toward a concept Moran Cerf and I introduced in 2014: Post-Virtual. Beyond the point where we cannot distinguish VR from “default reality,” we enter the Post-Virtual world. Post-Virtual is the hard Turing Test for VR, a threshold likely to be surpassed later this century.

VR’s Coming Smartphone Moment

Emerging AI systems don’t just ‘see’ ordinary video. They consume multimodal imaging combining visible light, infrared, ultrasound (hence the BFLY-Midjourney deal) and more, enabling editable models of the world. And they’ll generate new worlds, which we’ll access via VR platforms to come.

Business leaders should begin experimenting now with spatial computing and generative world models. The organizations that learn fastest will help shape this new medium.

The coming metaverse won’t be cartoon avatars in empty virtual Walmarts. It will be collaborative, lived environments that you understand—and that understand you. Inspiring—and terrifying.