An Efficient Algorithm for Cleaning Robots Using Vision Sensors
1  Faculty of Engineering, Kitami Institute of Technology, Kitami, Hokkaido, Japan
2  Faculty of Engineering, Hokkaido University, Sapporo, Japan


Public places such as hospitals and factories are required to maintain high standards of hygiene and cleanliness. Traditionally, the cleaning task has been performed by people. However, due to factors such as a shortage of workers, the unavailability of 24-hour service, and the health risks of working with toxic cleaning chemicals, autonomous robots have been considered as alternatives. In recent years, cleaning robots such as the Roomba have gained popularity. These robots have limited battery power, and therefore efficient cleaning is important. Efforts are thus being undertaken to improve the efficiency of cleaning robots.

The most rudimentary type of cleaning robot is one with bump sensors and encoders, which simply keeps cleaning the room until the battery is depleted. Some researchers have attached sensors such as Lidar and cameras to the robot and used the sensory information for intelligent cleaning. Some approaches first build a map of the environment and then plan systematic paths to cover the floor. Other approaches use dirt sensors attached to the robot to clean only the untidy portions of the floor. Researchers have also proposed mounting cameras on the robot to detect dirt before cleaning. However, a critical limitation of all previous works is that the robot cannot know whether the floor is clean unless it actually visits that place. Hence, timely information on whether the room needs cleaning is not available, which is a major obstacle to achieving efficiency.

To overcome these limitations, we propose a novel approach that uses external cameras which can communicate with the robot. The external cameras are fixed in the room and detect through image processing whether the floor is untidy. The camera node determines the exact areas and coordinates of the portions of the floor that must be cleaned, and communicates this information to the cleaning robot over a wireless network. The robot then plans the shortest path through the untidy areas, minimizing battery usage. The camera node and the robot work in a master-slave architecture, in which the camera is the master instructing the robot about the areas to clean. The camera node comprises a Raspberry Pi embedded computer, and the robot is programmed on ROS (Robot Operating System). The ROS Master acts as a name service in the ROS computation graph, storing topic and service registration information for the ROS nodes. The communication protocol is TCPROS, which uses standard TCP/IP sockets. Unlike previous works that use on-board robot sensors, the novel contribution of the proposed work lies in using external cameras and in the intelligent communication between the camera node and the robot.
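The path-planning step can be illustrated with a greedy nearest-neighbor tour over the dirty-spot coordinates reported by the camera node. This is only a minimal sketch under our own assumptions; the paper does not specify the exact planner, and the function name and coordinate format below are hypothetical:

```python
import math

def plan_cleaning_path(robot_pos, dirty_spots):
    """Greedy nearest-neighbor tour over the (x, y) floor coordinates
    reported by the camera node. Illustrative sketch only: a greedy
    tour approximates, but does not guarantee, the shortest path."""
    path = []
    current = robot_pos
    remaining = list(dirty_spots)
    while remaining:
        # Visit the closest not-yet-cleaned dirty spot next.
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        path.append(nxt)
        current = nxt
    return path
```

For example, starting at `(0, 0)` with dirty spots `[(5, 5), (1, 0), (2, 2)]`, the sketch visits `(1, 0)`, then `(2, 2)`, then `(5, 5)`.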

The proposed method gives cleaning robots access to a ‘bird's-eye view’ of the environment for efficient cleaning. We demonstrate how ordinary webcams can be used for dirt detection. The dirt detection algorithm uses a combination of the ‘sum of absolute differences’ and histogram comparison in the RGB and HSV color spaces. We test the algorithm with different types, sizes, and colors of dirt. The proposed cleaning algorithm targets homes, factories, hospitals, airports, universities, and other public places. The scope of our current work is limited to indoor environments; however, an extension to outdoor environments is straightforward. In this paper, we demonstrate the algorithm with actual sensors in real-world scenarios.
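The ‘sum of absolute differences’ (SAD) part of the dirt test can be sketched as follows, comparing blocks of the current camera frame against a clean reference frame. This pure-Python illustration works on a single grayscale channel; the block size, threshold, and function names are our assumptions, and the full algorithm additionally uses histogram comparison in RGB and HSV:

```python
def sum_abs_diff(block_ref, block_cur):
    """Sum of absolute pixel differences between a clean reference
    block and the corresponding block of the current frame."""
    return sum(abs(r - c)
               for row_r, row_c in zip(block_ref, block_cur)
               for r, c in zip(row_r, row_c))

def detect_dirty_blocks(ref_frame, cur_frame, block=2, threshold=30):
    """Divide the frame into block x block cells and flag cells whose
    SAD against the clean reference exceeds the threshold. Returns the
    (row, col) index of the top-left pixel of each flagged cell."""
    dirty = []
    h, w = len(ref_frame), len(ref_frame[0])
    for y in range(0, h, block):
        for x in range(0, w, block):
            ref_blk = [row[x:x + block] for row in ref_frame[y:y + block]]
            cur_blk = [row[x:x + block] for row in cur_frame[y:y + block]]
            if sum_abs_diff(ref_blk, cur_blk) > threshold:
                dirty.append((y, x))
    return dirty
```

The flagged cell indices, mapped to floor coordinates via the camera calibration, are what the camera node would send to the robot.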

Keywords: Autonomous Cleaning Robots; Sensor Network; Image Processing