Machine vision now a bedrock for intelligent machines
SOURCE: ELECTRONICS360.GLOBALSPEC.COM
JAN 22, 2026
Peter Brown
21 January 2026

Computer vision is rapidly integrating real-time inference and computing into a new class of industrial imaging systems called perception-driven machines. Source: ASUS
At CES 2026 in Las Vegas, Nevada, numerous announcements from machine vision vendors showcased how they are packaging vision hardware with real-time inference and integrated computing.
This is evolving machine vision from a feature of industrial imaging into a core sensory system for intelligent machines, allowing robots to perceive and then act rather than just see objects.
As a branch of AI, computer vision uses machine learning to analyze and understand images, videos and other visual cues from cameras, lidar, radar and other sensors, enabling robots (as well as vehicles and other systems) to interact with the physical world.
Computer vision converts images into data (pixels, color, intensity), and algorithms then process that data to identify features like edges, shapes and textures. Machine learning and deep learning models, trained on millions of labeled examples, then recognize objects, classify scenes or predict outcomes. This allows machines to interpret visual information and act on it.
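As an illustration of the first stage of that pipeline (raw pixels to edge features), the sketch below applies a standard Sobel filter in plain NumPy. It is a minimal, self-contained example for explanation only, not any vendor's implementation; the synthetic test image and function name are assumptions.

```python
import numpy as np

def sobel_edges(img: np.ndarray) -> np.ndarray:
    """Edge strength as the magnitude of Sobel x/y gradients."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # vertical-gradient kernel
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)

# Synthetic 8x8 "image": dark left half, bright right half.
img = np.zeros((8, 8))
img[:, 4:] = 1.0

edges = sobel_edges(img)
# Edge response concentrates at the brightness boundary; flat regions stay zero.
```

In a real system this feature map would be fed to a trained model rather than inspected directly, but the principle, turning intensities into structure, is the same.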
The idea of these so-called perception-driven systems is to interpret raw sensor data and convert it into actionable understanding. So, they capture the images as traditional machine vision would, but then take the data in real time to inform decisions in autonomous and intelligent machines.
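The capture-then-decide flow described above can be sketched as a toy perception-to-action loop. The `Detection` type, labels and distance thresholds below are purely hypothetical illustrations, not taken from any product covered in this article.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # e.g. "pedestrian", "vehicle"
    distance_m: float # estimated range from the sensor

def decide(detections: list[Detection]) -> str:
    """Convert perception output into an action for the machine."""
    for d in detections:
        # Pedestrians get a wide safety margin; anything else a narrow one.
        if d.label == "pedestrian" and d.distance_m < 5.0:
            return "stop"
        if d.distance_m < 2.0:
            return "stop"
    return "proceed"

frame = [Detection("vehicle", 12.0), Detection("pedestrian", 3.5)]
action = decide(frame)  # the pedestrian at 3.5 m triggers a stop
```

The point is the coupling: perception output flows directly into a decision in the same real-time loop, rather than being logged for later inspection as in traditional industrial imaging.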
Leopard Imaging Inc. showcased at CES 2026 its portfolio of AI vision solutions that combine intelligent vision with physical AI. The company said this allows AI to move beyond virtual computation into real-world perception and action. These end-to-end systems bridge algorithms and physical hardware for next-generation intelligent robotics in warehouses, logistics and more.
Another perception-driven system was introduced by Stradvision in its first AI-based vision perception collaboration with Seeing Machines at CES 2026. The two companies showcased a joint front-camera perception system that can detect and recognize objects from video inputs and link them to intelligent behavior.
The collaboration will use Stradvision’s SVNet FrontVision solution along with an Nvidia GPU designed to detect and recognize vehicles, pedestrians and other road users using video input.
"This collaboration is about establishing a common technical and strategic reference point," said Philip Vidal, Chief Business Officer at Stradvision. "We are enabling clear, system-level discussions around perception performance, scalability, and integration pathways. CES provides a natural forum to align early and explore how complementary vision technologies may work together as OEMs define next-generation ADAS architectures."
The result is perception-driven systems that can identify vehicles, pedestrians and other road users in real time and link those detections to intelligent behavior.

e-con Systems launched the Darsi Pro at CES 2026 that features multi-sensory capabilities to give perception-driven systems more options for a range of applications. Source: e-con Systems
Other perception-driven systems are focusing on multi-sensory capabilities to allow their use across an even greater range of applications.
Such a system was introduced by e-con Systems at CES 2026. Called the Darsi Pro, the system is an edge AI vision system that uses Nvidia’s Jetson platform for mobility, robotics and intelligent traffic management systems.
Darsi Pro delivers up to 100 tera operations per second (TOPS) of AI performance and integrates cloud-based device management, multi-sensors and a rugged industrial build.
The multi-sensor fusion — cameras, lidar, radar and inertial measurement units (IMUs) — offers real-time perception and decision making rather than just isolated imaging.
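One common, simple way to fuse readings from multiple sensors is inverse-variance weighting, in which the lower-noise sensor pulls the estimate harder and the fused uncertainty is smaller than either input's. The sketch below fuses two hypothetical range readings; the numbers and the `fuse` helper are illustrative assumptions, not e-con Systems' actual algorithm.

```python
def fuse(measurements: list[tuple[float, float]]) -> tuple[float, float]:
    """Inverse-variance weighted fusion of independent estimates.

    measurements: (value, variance) pairs, e.g. from lidar and radar.
    Returns the fused value and its combined (smaller) variance.
    """
    weights = [1.0 / var for _, var in measurements]
    total = sum(weights)
    value = sum(w * v for (v, _), w in zip(measurements, weights)) / total
    return value, 1.0 / total

# Hypothetical readings: lidar says 10.0 m (var 0.04), radar says 10.4 m (var 0.16).
dist, var = fuse([(10.0, 0.04), (10.4, 0.16)])
# The fused estimate sits closer to the more precise lidar reading,
# and its variance is lower than either sensor's alone.
```

Production systems typically use Kalman or particle filters that also track motion over time, but the same principle applies: combining modalities yields a more reliable perception than any single sensor.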
Leopard Imaging also introduced a multi-sensor imaging platform called Dragonfly that includes an image signal processor (ISP) for improved image quality, real-time performance and direct data output. The company said this eliminates the need for external image processing.
These perception-driven systems use multiple sensor modalities so that machines can understand their environments, for example detecting objects in poor lighting or against complex backgrounds.
These CES 2026 announcements suggest that perception-driven machines will combine features such as real-time inference, multi-sensor fusion, cloud-based device management and rugged edge hardware.
These are likely the first of many announcements combining perception-driven technology with AI vision stacks and high-performance edge computing. As these systems deploy and become more commonplace, they will open up new ways to gather data and use it in real time for actionable analysis in the industrial imaging sector, as well as in autonomous vehicles, logistics and more.
To contact the author of this article, email PBrown@globalspec.com