
Sensor Fusion in Autonomous Mobile Robots – enabling AMRs to navigate safely and precisely

We can imagine how complex it is to fuse all of these sensory channels together and teach machines to make decisions independently. Let us look at what sensor fusion in AMRs is and the challenges it solves in keeping AMRs working seamlessly and safely on the ground.

What is sensor fusion?

Sensor fusion is the method of combining data from various sensors to make more informed and precise decisions, avoid collisions, and achieve interruption-free movement. These sensors typically include vision cameras, RADAR, LiDAR, accelerometers, gyroscopes, GPS, infrared, and ultrasonic sensors.

Sensor fusion is applied in any AMR that needs object detection, obstacle avoidance, navigation, and locomotion control. It helps reduce the noise and uncertainty that affect sensory data captured in silos. For example, a camera blinded by strong light exposure cannot capture clear images, and lidar returns may degrade in rainy or foggy conditions. Sensor fusion reduces such noise in the processed data and increases the level of accuracy.

A simple everyday example of sensor fusion is the mobile phone, which gives users an accurate indoor or outdoor location by integrating GPS, accelerometer, gyroscope, and compass data.
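To make the idea concrete, here is a minimal sketch of one of the simplest fusion schemes a phone-style device might use: a complementary filter that blends a fast-but-drifting gyroscope integration with a noisy-but-drift-free accelerometer tilt estimate. The readings, time step, and blending factor below are made-up values for illustration, not data from any specific device.

```python
import math

def complementary_filter(angle_prev, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """Fuse gyroscope and accelerometer readings into a single tilt-angle estimate.

    The gyroscope is integrated for short-term accuracy, while the
    accelerometer's gravity-based angle corrects the long-term drift.
    """
    gyro_angle = angle_prev + gyro_rate * dt      # fast, but drifts over time
    accel_angle = math.atan2(accel_x, accel_z)    # noisy, but drift-free
    return alpha * gyro_angle + (1 - alpha) * accel_angle

# Hypothetical samples taken 10 ms apart: gyro rate in rad/s, accel in m/s^2
angle = 0.0
for gyro_rate, ax, az in [(0.10, 0.05, 9.80), (0.12, 0.07, 9.79), (0.09, 0.06, 9.81)]:
    angle = complementary_filter(angle, gyro_rate, ax, az, dt=0.01)
print(f"fused tilt estimate: {angle:.4f} rad")
```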

Types of sensors involved in AMRs

The most common types of sensors used in AMRs are the camera, radar, lidar, and ultrasonic sensor. The camera mimics the human sense of vision and provides a view of the surrounding environment. Other sensors such as radar and ultrasonic complement it with data on object presence, distance, and orientation.

  • LiDAR: The name combines light and radar; the sensor uses laser pulses to build a 3D picture of the surroundings. Autonomous systems leverage lidar to detect the path and objects on it, and to analyze markings so that collisions along the path can be avoided. The major advantage of lidar over a camera is that it offers a longer range and is not affected by darkness or harsh lighting conditions. Read more about LiDAR here.

  • RADAR: Radar uses electromagnetic radio waves to identify the distance, speed, and orientation of objects from the reflection of those waves. The major advantage of radar is that its signals are not affected by weather conditions, lighting, or noise.

  • Camera: The camera provides high-resolution real-world images with colour and shape information to identify and classify objects. The major disadvantage is that it only captures 2D information and cannot estimate the depth of objects. It also offers poor visibility in extreme weather conditions such as snowstorms, heavy rain, or floods, and in darkness or under bright lights.

  • Ultrasonic: Ultrasonic sensors use sound waves to accurately detect objects in the close surroundings. The distance to a solid object is measured from the time the emitted sound takes to echo back after reflecting off the object. It gives precise measurements in the dark, irrespective of the colour of the object.
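Since the ultrasonic sensor works on the round-trip time of the echo, the distance calculation itself is simple. Below is a minimal sketch of that calculation; the speed of sound and the echo time are assumed example values, not readings from a specific sensor.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees Celsius

def ultrasonic_distance(echo_time_s: float) -> float:
    """Distance to the object from the round-trip echo time.

    The pulse travels to the object and back, so the one-way
    distance is half of the total path covered.
    """
    return SPEED_OF_SOUND * echo_time_s / 2.0

# A hypothetical 5.8 ms round trip corresponds to roughly 1 m of clearance
print(f"{ultrasonic_distance(0.0058):.2f} m")
```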

What are the challenges that sensor fusion solves?

Autonomous mobile robots should work independently in various deployment environments, whether as an inventory management robot in a grocery store, a vacuum cleaning robot at home, or a material handling robot in a warehouse. To operate independently, AMRs must obtain accurate information about moving or stationary objects, humans, other robots, and so on. As we have seen above, each sensor provides data that is unique to its characteristics, but with sensor fusion, real-time information is derived so that the robot can move safely in an unknown environment.

Sensor fusion solves key challenges that robots face in operation, such as navigation, path planning, localization, and collision avoidance.

  • Navigation for an AMR is the continuous determination of its own position by fusing different sensor inputs such as GPS location, accelerometer, and gyroscope data. This data tells the robot about its entire movement from the origin, its current location, and its further path of movement.
  • Another challenge sensor fusion solves is mapping and localization. The robot should know the entire path it needs to travel to execute a task and where it needs to return after completing that task. With sensor fusion, accurate path planning can be achieved even in extreme conditions: if the camera stops providing accurate data, the robot can rely on radar/lidar data and keep operating seamlessly.
  • Collision avoidance and human presence detection are also imperative for an AMR to work safely and autonomously in structured and unstructured environments (see the sketch after this list).
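As referenced in the last bullet, the sketch below shows one very simple way fused range readings could feed a collision-avoidance check: take the closest valid return across sensors and stop if it falls under a safety threshold. The sensor names, readings, and threshold are assumptions for illustration only, not a production safety policy.

```python
# Latest range readings (in metres) from several hypothetical sensors;
# None means that sensor currently has no valid return (e.g. a blinded camera).
readings = {"lidar": 1.20, "ultrasonic_front": 0.45, "radar": 1.35, "camera_depth": None}

SAFETY_DISTANCE_M = 0.5  # assumed minimum clearance before the robot must stop

def nearest_obstacle(ranges):
    """Fuse the individual range readings by taking the closest valid return."""
    valid = [r for r in ranges.values() if r is not None]
    return min(valid) if valid else None

closest = nearest_obstacle(readings)
if closest is not None and closest < SAFETY_DISTANCE_M:
    print(f"Obstacle at {closest:.2f} m - stop and replan")
else:
    print("Path clear - continue")
```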

Each sensor has its own challenges: cameras are very good at identifying objects of different shapes and colours but perform poorly in foggy and rainy conditions, while radar and lidar work well in poor lighting conditions and still give accurate data. Sensor fusion overcomes these individual sensor limitations and helps the AMR achieve a higher level of autonomy even in an unknown environment.

Algorithms

To overcome the unique challenges of each sensor discussed above, the sensor fusion algorithm combines their data to determine the accurate position of objects. The algorithm can also give priority to specific sensor data under particular conditions. For example, lidar does not give accurate data in foggy and rainy conditions but radar does, so the algorithm prioritizes the data coming from the radar sensor. Sensor fusion algorithms are designed so that they deliver accurate data for the given use case.
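A minimal sketch of this kind of prioritization is shown below: each sensor's distance estimate is combined through a weighted average, and the weights change with the driving condition so that radar dominates in fog or rain. The weight tables, condition names, and readings are illustrative assumptions, not a real fusion policy.

```python
# Illustrative per-condition weights: how much each sensor's distance
# estimate counts towards the fused value.
WEIGHTS = {
    "clear":       {"camera": 0.4, "lidar": 0.4, "radar": 0.2},
    "fog_or_rain": {"camera": 0.1, "lidar": 0.2, "radar": 0.7},  # radar takes priority
}

def fuse_distance(estimates, condition):
    """Weighted average of per-sensor distance estimates for the given condition."""
    weights = WEIGHTS[condition]
    total = sum(weights[name] for name in estimates)
    return sum(weights[name] * value for name, value in estimates.items()) / total

estimates = {"camera": 10.4, "lidar": 10.1, "radar": 9.8}   # hypothetical readings (m)
print(fuse_distance(estimates, "clear"))         # all sensors contribute
print(fuse_distance(estimates, "fog_or_rain"))   # radar dominates the fused result
```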

There are a few widely used algorithms for sensor fusion in autonomous mobile robots. One of them is the Kalman filter, which is used to estimate unknown values from highly noisy sensor data and is applied in navigation and positioning. For example, in a robot's navigation, the filter only needs its last position and speed estimate, together with the latest measurement, to compute the current position and predict future positions.
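The sketch below shows this idea in one dimension: a constant-velocity Kalman filter that carries only the previous position and velocity estimate forward and corrects it with each new, noisy position measurement. The motion model, noise levels, and measurements are made-up values for illustration.

```python
import numpy as np

dt = 1.0                                  # time step between measurements (s)
F = np.array([[1.0, dt], [0.0, 1.0]])     # constant-velocity motion model
H = np.array([[1.0, 0.0]])                # we only measure position
Q = np.eye(2) * 0.01                      # assumed process noise
R = np.array([[0.5]])                     # assumed measurement noise

x = np.array([[0.0], [1.0]])              # initial state: position 0 m, velocity 1 m/s
P = np.eye(2)                             # initial uncertainty

for z in [1.1, 1.9, 3.2, 3.9, 5.1]:       # hypothetical noisy position readings (m)
    # Predict: propagate the last estimate through the motion model
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: correct the prediction with the new measurement
    y = np.array([[z]]) - H @ x           # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

print(f"estimated position {x[0, 0]:.2f} m, velocity {x[1, 0]:.2f} m/s")
```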

Another widely used approach builds on the Central Limit Theorem (CLT) to average values over large numbers of readings: the bigger the sample size, the more accurate the average value of the parameter becomes, with the sample averages forming a bell curve. Convolutional neural networks, prediction algorithms, and measurement models are also used in sensor fusion to derive the past, present, and future state of the robot.
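The averaging effect is easy to see with simulated readings standing in for a real sensor: as the sample size grows, the mean of the noisy measurements settles near the true value. The true distance and noise level below are assumed purely for illustration.

```python
import random

TRUE_DISTANCE = 2.00   # metres, the value the noisy sensor is measuring
NOISE_STD = 0.10       # assumed standard deviation of the sensor noise

def averaged_reading(n_samples):
    """Average n noisy readings; a larger n gives an estimate closer to the truth."""
    readings = [random.gauss(TRUE_DISTANCE, NOISE_STD) for _ in range(n_samples)]
    return sum(readings) / n_samples

for n in (1, 10, 100, 1000):
    print(f"{n:5d} samples -> {averaged_reading(n):.3f} m")
```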

Conclusion

Sensor fusion is imperative for robots to make decisions independently and move through structured and unstructured environments. eInfochips has in-depth expertise in working with and integrating various sensors across the automotive, consumer, and industrial domains. Our team has worked on different sensor fusion applications such as object and pedestrian detection, adaptive cruise control, and park assist, porting, optimizing, and testing them in different environments.

Contact our autonomous machine solutions experts to learn more.


Vihar Soni

Vihar Soni works as an Assistant Product Manager and focuses on the Digital Engineering portfolio at eInfochips. Vihar works on cutting-edge technologies such as the Internet of Things (IoT), Artificial Intelligence (AI), and Machine Learning (ML). He has close to seven years of experience in Product Management, Go-To-Market Strategies, and Solution Consulting. He likes to read about new technology trends in his free time.
