Mobile robot technology is moving fast, and generative AI is poised to transform AMR vision systems, enabling more integrated solutions.
Advanced machine learning and visual processing are major trends in autonomous mobile robot (AMR) development. According to Gartner’s Emerging Tech Impact Radar, advanced computer vision and AI acceleration will be among the most important influencers within the next two years.
“In the past decade, machine vision changed a lot due to technological improvements. In the beginning stage, machine vision was rule-based, which is still very efficient. But about five years ago, deep learning highly improved accuracy and performance. Now, generative AI is changing machine vision again,” said Magic Pao, AVP of Industrial Cloud and Video Group at Advantech, during a keynote panel at the 2023 Advantech Industrial IoT World Partner Conference in Taipei, Taiwan.
Vision is critical for AMRs to navigate complex environments, avoid obstacles (including human workers), and identify and handle items correctly. AMR vision systems typically consist of:
- Cameras that capture image and video footage of the robot’s surroundings
- Other sensors, such as LiDAR and 3D sensors, that help the robot understand its surroundings, develop depth perception, and detect objects
- An onboard processor that analyzes visual and sensor data. It runs computer vision algorithms to make sense of the data, identify objects, detect anomalies, and determine the robot’s position and orientation. It may also run simultaneous localization and mapping (SLAM) algorithms to create maps and localize the robot within them (see the sketch after this list).
- A communication interface to exchange data with other robots and systems, such as a central control system
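To make that division of labor concrete, here is a minimal Python sketch of the onboard processing loop: flagging close-range LiDAR returns as obstacles and dead-reckoning a pose that a full SLAM stack would correct. It is illustrative only and not tied to any vendor’s software; all class and function names are invented for the example.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class Pose2D:
    """Robot pose estimate in the map frame."""
    x: float
    y: float
    theta: float  # heading in radians


class VisionPipeline:
    """Skeleton of the onboard processing loop described above."""

    def __init__(self):
        self.pose = Pose2D(0.0, 0.0, 0.0)

    def detect_obstacles(self, scan: np.ndarray, max_range: float = 1.5) -> np.ndarray:
        """Flag LiDAR returns closer than max_range (meters) as obstacles."""
        return scan < max_range

    def update_pose(self, odom_delta: tuple[float, float, float]) -> Pose2D:
        """Dead-reckoning update: rotate the body-frame motion into the map
        frame. A real SLAM stack would correct this estimate by matching
        sensor data against the map."""
        dx, dy, dtheta = odom_delta
        c, s = np.cos(self.pose.theta), np.sin(self.pose.theta)
        self.pose.x += c * dx - s * dy
        self.pose.y += s * dx + c * dy
        self.pose.theta += dtheta
        return self.pose
```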
AMR vision systems must be high-speed and low-latency to be effective and to ensure safe operation around humans. They also need to transmit and process large amounts of data over varying distances. Gigabit Multimedia Serial Link (GMSL) technology is therefore often preferred: it carries high-bandwidth video from cameras to the onboard processor and out to external devices and systems, and it allows data from multiple sensors and sources to be integrated for more complex, real-time decision-making. The latest generation, GMSL3, supports up to 12 Gbps and can transmit data up to 15 m from the host processor.
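On an embedded host such as a Jetson, GMSL cameras attached through a deserializer typically enumerate as standard V4L2 devices, so a multi-channel capture loop can be sketched with off-the-shelf OpenCV. The device indices below are assumptions for illustration, not a reference to any specific capture card.

```python
import cv2

# GMSL cameras behind a deserializer usually appear as ordinary V4L2
# devices (/dev/video0, /dev/video1, ...). Indices here are assumed.
CAMERA_INDICES = [0, 1, 2, 3]

caps = [cv2.VideoCapture(i) for i in CAMERA_INDICES]

try:
    for _ in range(100):  # bounded loop for the sketch
        frames = []
        for cap in caps:
            ok, frame = cap.read()
            if not ok:
                break  # dropped frame; a real system would log and recover
            frames.append(frame)
        if len(frames) != len(caps):
            continue
        # Hand the batch of frames to downstream perception here.
finally:
    for cap in caps:
        cap.release()
```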
“We chose GMSL as our technology to empower our computing unit with visual capabilities,” said Jacky Liu, advanced computer vision head of Industrial Cloud and Video Group at Advantech. “It’s not a new technology and has already been widely used in the automotive industry with features such as high data rates and long-distance transmission. Its robustness and data integrity ensure reliable operation in challenging environments, and the scalability and synchronization allow you to run multiple data streams. With GMSL, our current video capture card can support up to eight channels, even 3D cameras, simultaneously.”
In early 2023, Advantech partnered with e-con Systems, an OEM camera solution provider, on a GMSL camera and AI computing system for automated guided vehicle (AGV) and AMR applications. The MIC-733-AO is an industrial AI inference system based on NVIDIA Jetson AGX Orin. It delivers AI performance of up to 275 TOPS, a wide range of I/O interfaces for 5G/4G connectivity, and multiple video inputs, including two GMSL channels. The GMSL camera from e-con Systems transmits video data over long distances, and the solution can integrate multiple cameras into a single Jetson system.
“Today, the whole value chain is compressed, and each role is trying to integrate more things,” said Pao. “We integrate video acquisition software, called the Advantech CamNavi SDK, with NVIDIA ISAAC SDK and ROS 2 (Robot Operating System) into one software package. We don’t just pre-install them — we also pre-build and pre-train them. To adapt it for robotics, AMRs, and cobots, we just put application software on top and enable the solution.”
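Advantech has not published the CamNavi SDK’s API, but the shape of such a package can be sketched as a ROS 2 node that captures one GMSL channel and publishes it for downstream perception. Plain OpenCV stands in for the vendor capture layer here, and the device index and topic name are assumptions.

```python
import cv2
import rclpy
from cv_bridge import CvBridge
from rclpy.node import Node
from sensor_msgs.msg import Image


class GmslCameraNode(Node):
    """Publishes frames from one GMSL channel as a ROS 2 image topic.
    OpenCV capture is a generic stand-in for a vendor SDK layer."""

    def __init__(self):
        super().__init__("gmsl_camera")
        self.pub = self.create_publisher(Image, "camera/image_raw", 10)
        self.bridge = CvBridge()
        self.cap = cv2.VideoCapture(0)  # assumed device index
        self.timer = self.create_timer(1.0 / 30.0, self.tick)  # ~30 FPS

    def tick(self):
        ok, frame = self.cap.read()
        if ok:
            self.pub.publish(self.bridge.cv2_to_imgmsg(frame, encoding="bgr8"))


def main():
    rclpy.init()
    rclpy.spin(GmslCameraNode())


if __name__ == "__main__":
    main()
```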
During the keynote panel at the World Partner Conference, Pao reiterated that generative AI is changing machine vision and that NVIDIA’s generative AI SDK is ready on the MIC-733-AO. With generative AI, vision systems can improve data augmentation, search functionalities, and image reconstruction. For example, a company may not have enough images to train an AMR to accurately detect item defects or damage. With generative AI, however, the system needs only a few example images and can generate numerous defect images with varied characteristics to retrain the AMR.
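As an illustration of that augmentation workflow, here is a sketch using an off-the-shelf image-to-image diffusion pipeline from the open-source diffusers library. The model, prompt, and file names are placeholders for the example, not what Advantech or NVIDIA ship.

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Load a publicly available image-to-image diffusion pipeline.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# One real defect photo seeds the synthetic variants (assumed file name).
seed_image = Image.open("defect_example.jpg").convert("RGB")

variants = []
for i in range(20):
    # Low strength keeps the defect recognizable while varying texture,
    # lighting, and background for augmentation.
    result = pipe(
        prompt="scratched metal surface, industrial lighting",
        image=seed_image,
        strength=0.4,
        generator=torch.Generator("cuda").manual_seed(i),
    )
    variants.append(result.images[0])

for i, img in enumerate(variants):
    img.save(f"defect_aug_{i:02d}.png")
```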
“Accelerating AI from developer to deployment is very important,” said Elvis Lee, industrial AI platform manager of Industrial Cloud and Video Group at Advantech. “GPU performance has increased 1,000 times compared to five years ago, and data has increased 10,000 billion times within the past 10 years. The data is moving very fast compared to previous technology.”
While moderating an afternoon panel, Lee mentioned that NVIDIA recently surveyed its customers to understand what sensors and sensor interfaces they use in AMR projects. LiDAR sensors and USB, MIPI, Ethernet, and IR cameras are among the most popular. Mixed-sensor integration is also popular, but driver and compatibility issues and long integration times can delay projects.
“GMSL does not define a consistent camera parameter data structure, so people spend a lot of project time ensuring compatibility between the GMSL capture card and a device,” said Liu during Lee’s panel. “Currently, we are working on defining a camera parameter profile and plan to integrate it into our unit by the second half of 2024. We believe this will help customers easily select a camera, shorten the overall project time, and make the configuration more flexible.”
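The profile Advantech is defining has not been published, but a hypothetical version shows what a consistent camera parameter structure might look like. Every field name and value below is an assumption for illustration.

```python
import json
from dataclasses import asdict, dataclass


@dataclass
class CameraProfile:
    """Hypothetical GMSL camera parameter profile; the actual schema
    Advantech is defining has not been made public."""
    model: str
    serializer: str               # serializer chip on the camera side
    resolution: tuple[int, int]   # width, height in pixels
    frame_rate: int               # frames per second
    pixel_format: str             # e.g. "UYVY", "RG10"
    lane_count: int               # GMSL link configuration
    trigger_mode: str             # "free_run" or "external"


profile = CameraProfile(
    model="example-gmsl-cam",
    serializer="MAX9295",
    resolution=(1920, 1080),
    frame_rate=30,
    pixel_format="UYVY",
    lane_count=2,
    trigger_mode="external",
)

# A capture card could load this at startup instead of requiring
# per-camera driver tweaks to establish compatibility.
print(json.dumps(asdict(profile), indent=2))
```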
Advantech isn’t alone in its integration efforts, and as Pao noted, it takes the entire value chain to see widespread industry impact. “The whole value chain is very long. If you start from the chip vendor, system manufacturer, systems integrator, and toward the end user, it’s a long journey,” he said.
For example, on day two of the conference, Pyong W. Pak, CEO of Movensys, spoke about his company’s all-in-one software controller platform for AGVs and AMRs and how it opens doors for engineers.
“One thing that we’ve heard from our customers is that vision engineers do not know about motion control, and motion control engineers do not know about vision. So, we’re working on an all-in-one solution that ties vision and motion control,” said Pak.
Pak also noted that according to Interact Analysis, though more than four million mobile robots will be installed worldwide by 2027, only 14% of warehouses will deploy at least one AMR by then. However, as vision technology evolves and more companies prioritize integration, adoption barriers may dissolve and make advanced automation a reality for more warehouses.
Advantech
advantech.com