Automatic control of an electric wheelchair using a depth camera and indoor localization
Abstract
In this thesis, an electric wheelchair is developed to automatically approach a goal within unknown indoor environments. The wheelchair uses two encoders and a Microsoft Kinect camera as its input sensors to sense the surrounding environment: the encoders are used for wheel odometry-based localization, and the Kinect depth camera is used for obstacle avoidance. Instead of the conventional joystick controller, new hardware is designed to control the wheelchair. The hardware reads the encoder data and communicates with a computer, sending the encoder data and receiving the commands that drive the wheelchair. The main processing unit of the system is a software application written in C++. The program processes all input data, receives a target defined by the user through a graphical user interface, and outputs commands that route the wheelchair to the target while avoiding collisions with obstacles. A proportional-integral-derivative (PID) controller is implemented in the software to drive the wheelchair using its 2-D orientation angle.
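As a rough illustration of this control scheme (not the thesis code; the class and parameter names kp, ki, kd, baseSpeed are illustrative assumptions), a PID controller on the heading error could map the 2-D orientation angle from odometry to a left/right wheel speed correction:

// Minimal sketch of PID heading control, assuming the orientation angle
// comes from wheel odometry and the reference angle is the bearing to the goal.
#include <cmath>

struct HeadingPid {
    double kp, ki, kd;        // PID gains, tuned experimentally
    double integral = 0.0;    // accumulated heading error
    double prevError = 0.0;   // heading error from the previous control step

    // theta: current orientation of the wheelchair, thetaRef: bearing to the goal,
    // dt: control period in seconds; returns a steering correction.
    double update(double theta, double thetaRef, double dt) {
        double error = std::atan2(std::sin(thetaRef - theta),
                                  std::cos(thetaRef - theta)); // wrap to [-pi, pi]
        integral += error * dt;
        double derivative = (error - prevError) / dt;
        prevError = error;
        return kp * error + ki * integral + kd * derivative;
    }
};

// Usage: the correction is split between the two wheels around a base speed, e.g.
//   double u = pid.update(theta, thetaRef, 0.05);
//   double vLeft  = baseSpeed - u;
//   double vRight = baseSpeed + u;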
Experiments were conducted indoors at International University – Vietnam National University HCMC to evaluate the quality of the system. The results showed that PID motion control based on the orientation of the wheelchair effectively routes the wheelchair to a defined goal. Moreover, the Kinect camera is effective for indoor obstacle avoidance. However, some drawbacks of the Kinect camera, namely its limited field of view, range, and accuracy, impose limitations on the system, such as failing to detect obstacles that are very close or located near the sides of the wheelchair. In addition, the errors of odometry-based localization accumulate over time, leading to inaccurate navigation. A visual localization method based on the Kinect camera is proposed to improve the accuracy of the localization system. Two consecutive Kinect depth images are converted to point clouds and segmented into planes. The two sets of planes are then matched to estimate a rotation matrix, which is used as the initial guess for point-to-plane iterative closest point (ICP) registration. Nevertheless, the processing time per registration is too long for real-time application. Therefore, further studies should focus on improving the localization system.
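As a rough sketch of the final registration step (assuming the Point Cloud Library is used; the thesis' own implementation may differ), the rotation estimated from plane matching can be passed to point-to-plane ICP as the initial guess. The plane segmentation and matching that produce initialGuess are omitted here.

// Sketch of point-to-plane ICP seeded with an initial guess, assuming PCL.
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/registration/icp.h>
#include <Eigen/Core>

Eigen::Matrix4f registerFrames(
    const pcl::PointCloud<pcl::PointNormal>::Ptr& source,   // cloud from frame k
    const pcl::PointCloud<pcl::PointNormal>::Ptr& target,   // cloud from frame k-1
    const Eigen::Matrix4f& initialGuess)                    // rotation from plane matching
{
    // The point-to-plane error metric needs per-point normals (PointNormal type).
    pcl::IterativeClosestPointWithNormals<pcl::PointNormal, pcl::PointNormal> icp;
    icp.setInputSource(source);
    icp.setInputTarget(target);
    icp.setMaximumIterations(30);

    pcl::PointCloud<pcl::PointNormal> aligned;
    icp.align(aligned, initialGuess);        // start from the plane-matching estimate

    return icp.hasConverged() ? icp.getFinalTransformation()
                              : initialGuess; // fall back to the guess if ICP fails
}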