Initially earmarked for covert military operations, unmanned aerial vehicles (UAVs), or drones, have since gained tremendous popularity, which has broadened the scope of their use. In fact, "remotely piloted" drones have largely given way to "autonomous" drones across a variety of fields. One such application is in rescue missions following a natural or man-made disaster. However, this often requires the drones to land safely on uneven terrain, a maneuver that can be very difficult to execute.
“While it is desirable to automate the landing using a depth camera that can gauge terrain unevenness and find suitable landing spots, a framework serving as a useful base needs to be developed first,” observes Dr. Chinthaka Premachandra from Shibaura Institute of Technology (SIT), Japan, whose research group studies potential applications of camera-based quadrocopter drones.
Accordingly, Dr. Premachandra and his team set out to design an automatic landing system, which they detail in their latest study published in IEEE Access. To keep things simple, they upgraded a standard radio-controlled (RC) drone with the necessary hardware and software and equipped it with a simple 2D camera for detecting a symbolized landing pad.
“The challenges in our project were two-fold. On the one hand, we needed a robust and cost-effective image-processing algorithm to provide position feedback to the controller. On the other, we required a fail-safe switch logic that would allow the pilot to abort the autonomous mode whenever required, preventing accidents during tests,” explains Dr. Premachandra.
Eventually, the team arrived at a design comprising the following components: a commercial flight controller (for attitude control), a Raspberry Pi 3B+ (for autonomous position control), a modified wide-angle Raspberry Pi camera v1.3 (for horizontal position feedback), a servo gimbal (for controlling the camera's orientation), a Time-of-Flight (ToF) module (as a feedback sensor for drone height), a multiplexer (for switching between manual and autonomous modes), an "anti-windup" PID controller (for height control), and two PD controllers (for horizontal movement control).
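The paper itself is the authoritative source for the controller design, but a minimal Python sketch can illustrate the "anti-windup" idea: when the motor command saturates, the integral term is stopped from growing without bound. All class names, gains, and limits below are illustrative assumptions, not the authors' implementation.

class AntiWindupPID:
    """PID controller with conditional-integration anti-windup:
    when the output saturates, the integral stops accumulating."""

    def __init__(self, kp, ki, kd, out_min, out_max):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error

        # Tentatively integrate, then clamp the total output.
        self.integral += error * dt
        unclamped = self.kp * error + self.ki * self.integral + self.kd * derivative
        output = max(self.out_min, min(self.out_max, unclamped))

        # Anti-windup: if the output saturated, undo this step's
        # integration so the integral term does not "wind up".
        if output != unclamped:
            self.integral -= error * dt
        return output

# Example: hold a 1.5 m hover from ToF height readings at 50 Hz.
# (Gains and limits are placeholders, not the paper's values.)
height_pid = AntiWindupPID(kp=1.2, ki=0.4, kd=0.6, out_min=-1.0, out_max=1.0)
# thrust_cmd = height_pid.update(setpoint=1.5, measurement=tof_height_m, dt=0.02)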
In addition, they implemented an image-processing algorithm that detected a distinctive landing symbol (in the shape of an "H") in real time and converted its pixel position into physical coordinates, generating horizontal position feedback. Interestingly, they found that introducing an adaptive "region of interest," sized using the camera's vertical distance to the landing symbol, greatly reduced the computing time per frame, from 12-14 milliseconds to a mere 3 milliseconds!
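To illustrate, here is a hedged Python sketch of the two ideas just described: mapping the detected symbol's pixel position to a physical horizontal offset with a pinhole-camera model, and searching only an adaptive region of interest sized from the current height. The function names, focal length, and pad size are illustrative assumptions, not values from the paper.

def pixel_to_metric(u_px, v_px, height_m, fx, fy, u0, v0):
    """Map the symbol's pixel position to a horizontal offset in
    meters, assuming a downward-facing pinhole camera at height_m."""
    x_m = (u_px - u0) * height_m / fx
    y_m = (v_px - v0) * height_m / fy
    return x_m, y_m

def adaptive_roi(frame, last_center, height_m, fx=500.0,
                 pad_size_m=0.4, margin=2.0, min_half=20):
    """Search window centered on the last detection, sized from the
    pad's expected apparent size at the current height
    (pinhole relation: pixels ~ fx * pad_size_m / height_m)."""
    h, w = frame.shape[:2]
    half = max(min_half, int(margin * fx * pad_size_m / max(height_m, 0.1)))
    u, v = last_center
    u0, v0 = max(0, u - half), max(0, v - half)
    u1, v1 = min(w, u + half), min(h, v + half)
    # Return the crop plus its offset, so detections can be mapped
    # back to full-frame coordinates.
    return frame[v0:v1, u0:u1], (u0, v0)

Because only the small cropped window is processed each frame instead of the full image, this kind of scheme is a plausible route to the millisecond-scale speedup the team reports.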
Following detection, the system accomplished the landing in two steps: first flying toward the landing spot and hovering over it at a constant height, and then descending vertically. Both steps were fully automated and controlled by the Raspberry Pi module.
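A compact way to picture this two-phase logic is as a small state machine evaluated on each control-loop iteration. The Python sketch below, including its thresholds and state names, is an illustrative assumption rather than the authors' code.

APPROACH, DESCEND, DONE = range(3)

def landing_step(state, x_off_m, y_off_m, height_m):
    """One control-loop iteration of the two-phase landing:
    center over the pad at constant height, then descend."""
    CENTER_TOL = 0.10  # meters; how well-centered before descending
    TOUCHDOWN = 0.05   # meters; height treated as touchdown

    if state == APPROACH:
        # The two PD controllers (not shown) drive the horizontal
        # offsets to zero while the height PID holds altitude.
        if abs(x_off_m) < CENTER_TOL and abs(y_off_m) < CENTER_TOL:
            return DESCEND
    elif state == DESCEND:
        # The height setpoint ramps down; drift is still corrected.
        if height_m < TOUCHDOWN:
            return DONE
    return state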
While examining the landings, the research team noticed a disturbance in the landing behavior, which they attributed to aerodynamic lift acting on the quadrocopter. They overcame this problem by boosting the gain of the PID controller. Overall, the performance during the landing process indicated a properly functioning autonomous system.
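One simple way to realize such a gain boost is to schedule the gain on the measured height, increasing it once the drone is close enough to the ground for the lift disturbance to matter. The threshold and boost factor in this Python sketch are illustrative assumptions, not the authors' tuning.

def scheduled_gain(base_kp, height_m, boost=1.5, threshold_m=0.3):
    """Return a larger proportional gain once the drone is close
    enough to the ground for the lift disturbance to matter."""
    return base_kp * boost if height_m < threshold_m else base_kp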
With these results, Dr. Premachandra and his team look forward to upgrading their system with a depth camera, enabling drones to find even more applications in daily life. "Our study was primarily motivated by the application of drones in rescue missions. But it shows that drones can, in the future, find use in indoor operations such as indoor transportation and inspection, which can reduce a lot of manual labor," concludes Dr. Premachandra.
###
Reference
Title of original paper: Development of an Automated Camera-Based Drone Landing System
Journal: IEEE Access
DOI: 10.1109/ACCESS.2020.3034948
About Shibaura Institute of Technology (SIT), Japan
Shibaura Institute of Technology (SIT) is a private university with campuses in Tokyo and Saitama. Since the establishment of its predecessor, Tokyo Higher School of Industry and Commerce, in 1927, it has maintained “learning through practice” as its philosophy in the education of engineers. SIT was the only private science and engineering university selected for the Top Global University Project sponsored by the Ministry of Education, Culture, Sports, Science and Technology and will receive support from the ministry for 10 years starting from the 2014 academic year. Its motto, “Nurturing engineers who learn from society and contribute to society,” reflects its mission of fostering scientists and engineers who can contribute to the sustainable growth of the world by exposing their over 8,000 students to culturally diverse environments, where they learn to cope, collaborate, and relate with fellow students from around the world.
Website: https:/
About Professor Chinthaka Premachandra from SIT, Japan
Chinthaka Premachandra heads the Image Processing and Robotics Laboratory at SIT, where he became an Associate Professor in the Department of Electronic Engineering, Graduate School of Engineering and Science, in 2018. His laboratory conducts research in image processing and robotics. His research interests include AI, computer vision, pattern recognition, high-speed image processing, camera-based intelligent transport systems, terrestrial robotic systems, flying robotic systems, and the integration of terrestrial and flying robots. He received the FIT Best Paper Award from IEICE in 2009 and the FIT Young Researchers Award from IPSJ in 2010.
Funding Information
This study was funded by the Branding Research Fund of SIT, Japan.