Semi-autonomous Exploration Robot in Disaster Area [ October 2018 ~ Present ]
Semantic Survey Map Building Framework
In this research, we propose a semi-autonomous mobile robot system that builds a wide-area survey map including semantic information in order to carry out damage monitoring in disaster areas such as the Fukushima Daiichi Nuclear Power Station. To this end, the following technologies are developed, and seamless integration based on the SMLO loop is realized as shown in Fig. 1.
• A sensor system that measures heat sources, radiation sources, water sources, and other substance information, as well as color and shape information, in the environment.
• A SLAM (simultaneous localization and mapping) scheme that generates a precise wide-area semantic survey map for learning-based motion generation of the mobile robot.
• A route generation system that performs reinforcement learning based on the map built by the SLAM scheme. An operator can control the robot semi-automatically along the generated route.
The generated semantic survey map can be used for the prevention of secondary disasters and for recovery planning, given that it contains information useful for understanding the disaster environment.
Fig. 1 Conceptual image of semantic survey map building process based on SMLO loop.
• Misaki Sugawara, Hiromitsu Fujii, Hitoshi Kono, and Yonghoon Ji, "3D Visualization of Near-Infrared Information for Water-Source Survey Map Building by a Teleoperated Robot," Proceedings of the 20th SICE System Integration Division Annual Conference (SI2019), Takamatsu, Japan, December 2019. (in Japanese)
• Hiromitsu Fujii, Misaki Sugawara, Hitoshi Kono, and Yonghoon Ji, "3D Visualization of Near-Infrared Information for Detecting Water Source," Proceedings of the Fukushima Research Conference 2019 on Remote Technologies for Nuclear Facilities (FRC2019), Fukushima, Japan, p. 5, October 2019.
• Hitoshi Kono, Tomohisa Mori, Yonghoon Ji, Hiromitsu Fujii, Tsuyoshi Suzuki, "Development of Perilous Environment Estimation System by Rescue Robot Using On-board LiDAR for Teleoperator," Proceedings of the 2019 IEEE/SICE International Symposium on System Integrations (SII2019), pp. 7-10, Paris, France, January 2019. [Link]
• Yonghoon Ji, Hiromitsu Fujii, and Hitoshi Kono, "Semantic Survey Map Building Framework Using Semi-autonomous Mobile Robot in Disaster Area," Proceedings of the Fukushima Research Conference 2018 on Remote Technologies for Nuclear Facilities (FRC2018), Fukushima, Japan, p. 20, October 2018.
Underwater Robotics [ April 2015 ~ Present ]
3D Reconstruction of Underwater Environment Using Acoustic Camera
In recent years, waterfront development, such as construction and reclamation projects related to airports, ports, and submarine tunnels, has become considerably more critical. To conduct such heavy work, underwater construction machines operated by divers are used in underwater environments. However, hazards may prohibit human access, and the limited field of vision caused by turbidity and lack of illumination makes underwater operations difficult. To complete tasks such as inspection, removal of hazardous materials, or excavation, a remotely controlled robot equipped with a system for reconstructing the underwater environment in 3D is required, as shown in Fig. 1.
Recently, the development of acoustic cameras, such as the dual frequency identification sonar (DIDSON) and the adaptive resolution imaging sonar (ARIS), which can generate high-resolution and wide-range images, has facilitated our understanding of underwater situations. This type of sonar sensor is relatively small, can easily be mounted on an underwater robot, and can gather information over a relatively large area quickly. The acoustic camera can also be mounted on the arm of a crawler-type robot, so the robot can fulfill complex underwater tasks, such as manipulation, even in turbid water.
In this research, a novel dense 3D mapping paradigm for an acoustic camera in underwater environments is proposed. As a result, it is possible to build a dense 3D map of the underwater environment precisely and robustly.
Fig. 1 Example of underwater construction using a remotely controlled crawler-type robot based on dense 3D mapping of the surrounding environment with an acoustic camera.
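The core difficulty of acoustic-camera mapping, the lost elevation angle, can be sketched in a few lines. This uses a simplified spherical projection model; the actual sensor geometry and beam pattern are more involved.

```python
import math

def to_acoustic_image(x, y, z):
    """Project a 3D point in the sonar frame onto an acoustic image.

    An acoustic camera (e.g., DIDSON or ARIS) measures range r and
    azimuth theta, but integrates the echo over the elevation angle
    phi, so phi is lost in the image.
    """
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.atan2(y, x)   # azimuth: preserved in the image
    phi = math.asin(z / r)     # elevation: lost in the image
    return r, theta, phi

def from_spherical(r, theta, phi):
    """Inverse mapping, used here only to generate test points."""
    return (r * math.cos(phi) * math.cos(theta),
            r * math.cos(phi) * math.sin(theta),
            r * math.sin(phi))

# Two points with the same range and azimuth but different elevations
# fall on the same image pixel -- the ambiguity that multi-view and
# dense mapping schemes must resolve.
p_low = from_spherical(2.0, 0.3, 0.1)
p_high = from_spherical(2.0, 0.3, 0.4)
r1, th1, _ = to_acoustic_image(*p_low)
r2, th2, _ = to_acoustic_image(*p_high)
```

Because `(r1, th1)` equals `(r2, th2)` while the two 3D points differ in height, a single acoustic image cannot recover 3D structure on its own.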
• Yusheng Wang, Yonghoon Ji, Hanwool Woo, Yusuke Tamura, Hiroshi Tsuchiya, Atsushi Yamashita, and Hajime Asama, "Rotation Estimation of Acoustic Camera Based on Illuminated Area in Acoustic Image," Proceedings of the 12th IFAC Conference on Marine Systems (CAMS2019), pp. 217-222, Daejeon, Korea, September 2019.
• Yusheng Wang, Yonghoon Ji, Hanwool Woo, Yusuke Tamura, Atsushi Yamashita, and Hajime Asama, "Three-dimensional Underwater Environment Reconstruction with Graph Optimization Using Acoustic Camera," Proceedings of the 2019 IEEE/SICE International Symposium on System Integrations (SII2019), pp. 28-33, Paris, France, January 2019. [Link]
• Yusheng Wang, Yonghoon Ji, Hanwool Woo, Yusuke Tamura, Atsushi Yamashita, and Hajime Asama, "3D Occupancy Mapping Framework Based on Acoustic Camera in Underwater Environment," Proceedings of the 12th IFAC Symposium on Robot Control (SYROCO2018), pp. 1-7, Budapest, Hungary, August 2018. (IFAC PaperOnLine, Vol. 51, No. 22, pp. 324-339, August 2018) [Link]
• Ngoc Trung Mai, Yonghoon Ji, Hanwool Woo, Yusuke Tamura, Atsushi Yamashita, and Hajime Asama, "Acoustic Image Simulator Based on Active Sonar Model in Underwater Environment," Proceedings of the 15th International Conference on Ubiquitous Robots (UR2018), pp. 781-786, Hawaii, USA, June 2018. [Link]
• Ngoc Trung Mai, Hanwool Woo, Yonghoon Ji, Yusuke Tamura, Atsushi Yamashita, and Hajime Asama, "3D Reconstruction of Line Features Using Multi-view Acoustic Images in Underwater Environment," Proceedings of 2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI2017), pp. 312-317, Daegu, Korea, November 2017. [Link]
• Ngoc Trung Mai, Hanwool Woo, Yonghoon Ji, Yusuke Tamura, Atsushi Yamashita, and Hajime Asama, "3-D Reconstruction of Underwater Object Based on Extended Kalman Filter by Using Acoustic Camera Images," Preprints of the 20th World Congress of the International Federation of Automatic Control, pp. 1066-1072, Toulouse, France, July 2017. (IFAC PaperOnLine, Vol. 50, No. 1, pp. 1043-1049, July 2017) [Link]
• Yonghoon Ji, Seungchul Kwak, Atsushi Yamashita, and Hajime Asama, "Acoustic Camera-based 3D Measurement of Underwater Objects through Automated Extraction and Association of Feature Point," Proceedings of the 2016 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI2016), pp. 224-230, Baden-Baden, Germany, September 2016. [Link]
• Ngoc Trung Mai, Hanwool Woo, Yonghoon Ji, Yusuke Tamura, Atsushi Yamashita, and Hajime Asama, "Construction of a 3D Measurement Method for Underwater Objects Based on an Extended Kalman Filter Using Acoustic Camera Images," Proceedings of the 34th Annual Conference of the Robotics Society of Japan (RSJ2016), RSJ2016AC1C3-06, pp. 1-4, Yamagata, Japan, September 2016. (in Japanese)
• Seungchul Kwak, Yonghoon Ji, Atsushi Yamashita, and Hajime Asama, "3-D Reconstruction of Underwater Objects Using Arbitrary Acoustic Views," Proceedings of the 2016 11th France-Japan congress on Mechatronics 9th Europe-Asia congress on Mechatronics 17th International Conference on Research and Education in Mechatronics (MECHATRONICS-REM2016), pp. 74-79, Compiegne, France, June 2016. [Link]
• Seungchul Kwak, Yonghoon Ji, Atsushi Yamashita, and Hajime Asama, "3D Shape Reconstruction of Underwater Objects Using Multi-view Acoustic Camera Images," Proceedings of the 31st ICROS Annual Conference, pp. 1-2, Seoul, Korea, March 2016. (in Korean)
• Seungchul Kwak, Yonghoon Ji, Atsushi Yamashita, and Hajime Asama, "3-D Reconstruction of Underwater Object: Analytical System for Extracting Feature Points Using Two Different Acoustic Views," Proceedings of the 2015 JSME/RMD International Conference on
Advanced Mechatronics (ICAM2015), pp.197-198, Tokyo, Japan, December 2015. [Link]
• Seungchul Kwak, Yonghoon Ji, Atsushi Yamashita, and Hajime Asama, "3D Measurement of Feature Points on Underwater Objects Using Acoustic Camera Images from Two Viewpoints," Proceedings of the 33rd Annual Conference of the Robotics Society of Japan (RSJ2015), pp. 1-4, Tokyo, Japan, September 2015. (in Japanese)
• Seungchul Kwak, Yonghoon Ji, Atsushi Yamashita, and Hajime Asama, "Development of Acoustic Camera-Imaging Simulator Based on Novel Model,"
Proceedings of the 2015 IEEE International Conference on Environment and Electrical Engineering (EEEIC2015), pp. 1719-1724, Rome, Italy, June 2015. [Link]
Motion Planning for Off-Road UGVs [ April 2015 ~ December 2018 ]
Adaptive Motion Planning Based on Vehicle Characteristics and Regulations
In recent years, autonomous mobile robots and UGVs (unmanned ground vehicles) have attracted the attention of many researchers and are becoming capable of dealing with various environments. Safe and reliable motion planning is one of the most important requirements for such unmanned robots. However, there have been very few studies on outdoor motion planning methodologies for off-road environments, despite the fact that such planning is indispensable for operating unmanned robots on rough terrain, such as disaster sites where hazards prohibit human access.
When a UGV navigates autonomously in an off-road environment with rough terrain, avoiding accidents such as collisions and turnovers is essential for safe navigation. Therefore, the UGV must avoid such risks and select a route within the traversable area. Hence, estimating traversability and planning appropriate motion on rough terrain are key tasks for meeting these requirements. In this respect, we aim to propose a novel motion planning methodology that enables UGVs to navigate safely to a destination in convoluted environments, including rough terrain.
When designing a novel motion planner, we need to consider the following.
• All DoFs of the vehicle pose, i.e., the full 6-DoF pose (position and orientation), including the height direction and the roll and pitch angles.
• The unique characteristics of each vehicle, such as its size, minimum turning radius, and maximum travelable inclination angle, which depend on the driving speed.
• Regulations required for vehicle operation in different situations, such as maintaining the driving speed and suppressing changes of posture.
• A feasible processing time for finding a solution, even in relatively large-scale environments.
The purpose of this research is to establish a novel motion planner for off-road UGVs that addresses all the aforementioned issues. Specifically, when the user specifies the initial pose and the target pose of the UGV with respect to an environmental map composed of a 3D point cloud provided a priori, the motion planner should solve, offline, the problem of generating a path that connects these two states. We propose an adaptive methodology for global motion planning. Here, "adaptive" means that the methodology can perform planning that satisfies different conditions defined by the vehicle characteristics and the regulations. A random-sampling-based scheme (Mov. 1) is applied to carry out global path planning. Regarding the scale of the environment map, we treat maps spanning several hundred meters as large-scale in this study, and the proposed motion planner was applied to environmental maps of this size. Experimental results (Fig. 1) showed that the proposed off-road motion planner can generate an appropriate path that satisfies the vehicle characteristics and the predefined regulations.
Mov. 1 Random sampling based scheme for global motion planning.
Fig. 1 Experimental results in simulation environment. (a) All generated nodes, output path as solution, and changes of several variables for each node on generated motion in case of low speed regulation. (b) All generated nodes, output path as solution, and changes of several variables for each node on generated motion in case of high speed regulation.
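As a rough illustration of random-sampling-based planning, the sketch below grows an RRT-style tree over a synthetic height field and rejects edges steeper than a hypothetical inclination limit. The terrain function, the numeric limits, and the 2D simplification are all invented for the demo; they are not the planner described above, which handles the full 6-DoF pose and the regulations.

```python
import math
import random

MAX_STEP = 0.5                    # tree-extension step [m] (illustrative)
MAX_INCLINE = math.radians(25.0)  # hypothetical travelable inclination limit

def terrain_height(x, y):
    # stand-in for the a-priori 3D point-cloud map: gentle slope plus a bump
    return 0.1 * x + 0.5 * math.exp(-((x - 5.0) ** 2 + (y - 5.0) ** 2))

def edge_ok(p, q):
    # vehicle-characteristic check: reject edges steeper than the limit
    d = math.hypot(q[0] - p[0], q[1] - p[1])
    if d == 0.0:
        return False
    dz = terrain_height(*q) - terrain_height(*p)
    return abs(math.atan2(dz, d)) <= MAX_INCLINE

def rrt(start, goal, iters=3000, seed=0):
    rng = random.Random(seed)
    nodes, parent = [start], {start: None}
    for _ in range(iters):
        s = (rng.uniform(0.0, 10.0), rng.uniform(0.0, 10.0))
        near = min(nodes, key=lambda n: math.hypot(n[0] - s[0], n[1] - s[1]))
        d = math.hypot(s[0] - near[0], s[1] - near[1])
        if d == 0.0:
            continue
        k = min(MAX_STEP, d) / d
        new = (near[0] + (s[0] - near[0]) * k, near[1] + (s[1] - near[1]) * k)
        if not edge_ok(near, new):
            continue               # the sampled extension violates a constraint
        nodes.append(new)
        parent[new] = near
        if math.hypot(new[0] - goal[0], new[1] - goal[1]) < MAX_STEP:
            path = [goal, new]     # trace parents back to the start
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
    return None

path = rrt((0.0, 0.0), (9.0, 9.0))
```

Swapping `edge_ok` for a richer check (roll/pitch limits, turning radius, speed-dependent regulations) is how such a sampler becomes "adaptive" in the sense used above.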
• Shinya Katsuma, Hanwool Woo, Yonghoon Ji, Yusuke Tamura, Atsushi Yamashita, and Hajime Asama, "Efficient Motion Planning for Mobile Robots Dealing with Changes in Rough Terrain," Proceedings of the 1st IFAC Workshop on Robot Control (WROCO2019), pp. 460-463, Daejeon, Korea, September 2019.
• Shinya Katsuma, Hanwool Woo, Yonghoon Ji, Yusuke Tamura, Atsushi Yamashita, and Hajime Asama, "Motion Planning for Mobile Robots Efficiently Coping with Environmental Changes in Rough Terrain," Proceedings of the JSME Robotics and Mechatronics Conference 2019 (ROBOMECH2019), Hiroshima, Japan, June 2019. (in Japanese)
• Yonghoon Ji, Yusuke Tanaka, Yusuke Tamura, Mai Kimura, Atsushi Umemura, Yoshiharu Kaneshima, Hiroki Murakami, Atsushi Yamashita, and Hajime Asama, "Adaptive Motion Planning Based on Vehicle Characteristics and Regulations for Off-Road UGVs," IEEE Transactions on Industrial Informatics, Vol. 15, No. 1, pp. 599-611, ISSN 1551-3203, January 2019. [doi:10.1109/TII.2018.2870662] (Impact Factor 7.377)
• Yuki Doi, Yonghoon Ji, Yusuke Tamura, Yuki Ikeda, Atsushi Umemura, Yoshiharu Kaneshima, Hiroki Murakami, Atsushi Yamashita and Hajime Asama, "Robust Path Planning against Pose Errors for Mobile Robots in Rough Terrain," Advances in Intelligent Systems and Computing 867, Intelligent Autonomous Systems 15 (Marcus Strand, Rudiger Dillmann, Emanuele Menegatti and Stefano Ghidoni (Eds.)) (Proceedings of the 15th International Conference IAS-15, Held July 2018, Baden-Baden (Germany)), Springer, pp. 27-39, ISSN. 2194-5357, January 2019 (Online: eISSN. 2194-5365). [doi:10.1007/978-3-030-01370-7_3]
• Yuki Doi, Yonghoon Ji, Yusuke Tamura, Yuki Ikeda, Atsushi Umemura, Yoshiharu Kaneshima, Hiroki Murakami, Atsushi Yamashita, and Hajime Asama, "Robust Path Planning Considering Pose Errors for Mobile Robots Traveling on Rough Terrain," Proceedings of the 18th SICE System Integration Division Annual Conference (SI2017), pp. 3438-3443, Sendai, Japan, December 2017. (in Japanese)
• Yusuke Tanaka, Yonghoon Ji, Yusuke Tamura, Mai Kimura, Atsushi Umemura, Yoshiharu Kaneshima, Hiroki Murakami, Atsushi Yamashita, and Hajime Asama, "Path Planning for Unmanned Ground Vehicles on Rough Terrain Using a 3D Environment Map," Proceedings of the 22nd Robotics Symposia, pp. 203-204, Gunma, Japan, March 2017. (in Japanese)
• Yusuke Tanaka, Yonghoon Ji, Hitoshi Kono, Yusuke Tamura, Shuhei Emoto, Hajime Itano, Hiroki Murakami, Atsushi Yamashita, and Hajime Asama, "Course Direction Determination Method for Mobile Robots Traveling on Rough Terrain Based on Environment Measurements by Multiple Mobile Robots," Proceedings of the 21st Robotics Symposia, pp. 250-255, Nagasaki, Japan, March 2016. (in Japanese)
• Yusuke Tanaka, Yonghoon Ji, Yusuke Tamura, Atsushi Yamashita, and Hajime Asama, "Course Detection from Integrated 3D Environment Measurement by Multiple Mobile Robots," Proceedings of the 2015 JSME/RMD International Conference on Advanced Mechatronics (ICAM2015), pp.237-238, Tokyo, Japan, December 2015. [Link]
• Yusuke Tanaka, Yonghoon Ji, Atsushi Yamashita, and Hajime Asama, "Traversability Estimation and Motion Generation on Rough Terrain According to the Performance of Mobile Robots," Journal of the Japan Society for Precision Engineering, Vol. 81, No. 12, pp. 1119-1126, ISSN 1348-8724, December 2015 (Online: eISSN 1881-8722). [doi:10.2493/jjspe.81.1119] (in Japanese)
• Yusuke Tanaka, Yonghoon Ji, Atsushi Yamashita, and Hajime Asama, "Fuzzy Based Traversability Analysis for a Mobile Robot on Rough Terrain," Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA2015), pp. 3965-3970, Seattle, USA, May 2015. [Link]
• Yusuke Tanaka, Yonghoon Ji, Atsushi Yamashita, and Hajime Asama, "Traversability Estimation Method for Rough Terrain Using Fuzzy Inference for Mobile Robots," Proceedings of the JSME Robotics and Mechatronics Conference 2015 (ROBOMECH2015), 1P1-J06, pp. 1-4, Kyoto, Japan, May 2015. (in Japanese)
• Yusuke Tanaka, Yonghoon Ji, Atsushi Yamashita, and Hajime Asama, "Course Direction Decision Method for Mobile Robots Based on Traversability Estimation of Rough Terrain Using Fuzzy Inference," Proceedings of the 32nd Annual Conference of the Robotics Society of Japan (RSJ2014), RSJ2014AC2D2-01, pp. 1-4, Fukuoka, Japan, September 2014. (in Japanese)
Construction of Intelligent Space [ April 2014 ~ May 2019 ]
Automatic Calibration of Camera Sensor Network
Figure 1 (a) illustrates an example of the map information built by typical simultaneous localization and mapping (SLAM) schemes. However, in human-robot coexistence environments, such map information is a static model and cannot deal with dynamic scenes, because it does not reflect changes in the environment (e.g., moving objects). On the other hand, an intelligent space, as illustrated in Fig. 1 (b), constructs a distributed sensor network in the environment and can monitor what is occurring in it.
Distributed sensor networks installed in the environment can recognize various events that occur in the space, so such an intelligent space can be of much service in human-robot coexistence environments. Distributed camera sensor networks (i.e., multi-camera systems) provide the most general infrastructure for constructing such an intelligent space. To obtain reliable information from such a system, pre-calibration of all the cameras in the environment (i.e., determining the absolute position and orientation of each camera) is essential but extremely tedious. This research proposes an automatic calibration method for camera sensor networks based on the 3D texture map information of a given environment, as shown in Fig. 1 (a). In other words, it solves a global localization problem for the poses of the cameras given the 3D texture map. The proposed complete 6-DoF calibration system uses only the environment map information; therefore, it can calibrate the external camera parameters easily. The results shown in Mov. 1 demonstrate that the proposed system successfully calibrates the complete external camera parameters.
Fig. 1 Environmental information: (a) static information from the map and (b) dynamic information from the sensor network, which are components of the intelligent space.
Mov. 1 Experimental results of automatic calibration of camera sensor network using wireless IP camera.
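To make the idea of map-based calibration concrete, here is a heavily simplified sketch: a camera on a plane observes bearings to landmarks whose positions are known from the map, and a coarse grid search over the pose space recovers the camera pose. The landmark coordinates, search ranges, and planar 3-DoF simplification are invented for illustration; the actual method estimates full 6-DoF poses against 3D texture maps.

```python
import math

# Landmark positions known from the environment map (illustrative values)
LANDMARKS = [(2.0, 1.0), (4.0, 3.0), (1.0, 4.0), (5.0, 0.5)]

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))

def bearings(pose, landmarks):
    """Bearing of each landmark as seen by a camera at (x, y, yaw)."""
    x, y, yaw = pose
    return [wrap(math.atan2(ly - y, lx - x) - yaw) for lx, ly in landmarks]

def calibrate(observed, step=0.25):
    """Coarse grid search over the pose space: a stand-in for the global
    optimization that localizes each camera against the map."""
    best, best_err = None, float("inf")
    for ix in range(25):
        for iy in range(25):
            for ith in range(36):
                pose = (ix * step, iy * step, ith * math.pi / 18.0)
                err = sum(wrap(o - e) ** 2
                          for o, e in zip(observed, bearings(pose, LANDMARKS)))
                if err < best_err:
                    best, best_err = pose, err
    return best

true_pose = (3.0, 2.0, math.pi / 18.0)   # ground truth for the demo only
est = calibrate(bearings(true_pose, LANDMARKS))
```

With four landmarks the bearing residuals vanish only at the true pose, so the search recovers it exactly (it lies on the grid); a real system would refine a coarse estimate with continuous optimization instead of brute force.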
Indoor Positioning System Based on Distributed Camera Sensor Network
The importance of accurate position estimation in mobile robot navigation cannot be overemphasized. In outdoor environments, a global positioning system (GPS) is widely used to measure the position of moving objects. However, satellite-based GPS does not work indoors. This research proposes an indoor positioning system (IPS) that uses calibrated camera sensor networks for mobile robot navigation.
The IPS information is obtained by generating a bird's-eye image from multiple camera images; thus, the proposed IPS can provide accurate position information when the moving object is detected from multiple camera views. We evaluate the proposed IPS in a real environment with a wireless camera sensor network. The results shown in Mov. 2 demonstrate that the proposed IPS based on the camera sensor network can provide accurate position information of moving objects.
Mov. 2 Experimental results of IPS for mobile robot localization.
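The bird's-eye projection underlying the IPS can be sketched with a planar homography; the matrix entries below are made-up values standing in for the result of camera calibration, and the averaging fusion is one simple choice among several.

```python
def apply_homography(H, u, v):
    """Map an image pixel (u, v) to ground-plane coordinates (x, y)
    through a 3x3 homography obtained from the calibrated camera."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w

# Hypothetical homographies of two calibrated cameras (illustrative values)
H_cam1 = [[0.010, 0.000, -1.0], [0.000, 0.012, -2.0], [0.0, 0.0005, 1.0]]
H_cam2 = [[0.009, 0.001, -0.5], [0.001, 0.011, -1.5], [0.0, 0.0004, 1.0]]

# When both cameras detect the same moving object, each detection is
# projected to the ground plane and the estimates are fused (here by a
# plain average; a confidence-weighted fusion is equally possible).
p1 = apply_homography(H_cam1, 320.0, 240.0)
p2 = apply_homography(H_cam2, 150.0, 300.0)
fused = ((p1[0] + p2[0]) / 2.0, (p1[1] + p2[1]) / 2.0)
```

The projection is exact only for points on the ground plane, which is why the IPS tracks objects moving on the floor.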
• Yonghoon Ji, Atsushi Yamashita, Kazunori Umeda, and Hajime Asama, "Automatic Camera Pose Estimation Based on a Flat Surface Map," Proceedings of the SPIE 11172, 14th International Conference on Quality Control by Artificial Vision (QCAV2019), Vol. 11172, pp. 111720X-1-111720X-6, Mulhouse, France, May 2019. [Link]
• Yonghoon Ji, Atsushi Yamashita, Kazunori Umeda, and Hajime Asama, "External Camera Parameter Estimation Method Using Line Information in Man-made Environments," Proceedings of the 19th SICE System Integration Division Annual Conference (SI2018), 3B3-17, pp. 2598-2600, Osaka, Japan, December 2018. (SI2018 Excellent Presentation Award; in Japanese)
• Yonghoon Ji, Atsushi Yamashita, and Hajime Asama, "Automatic Calibration of Camera Sensor Network Based on 3D Texture Map Information," Robotics and Autonomous Systems, Vol. 87, pp. 313-328, ISSN 0921-8890, January 2017 (Online: October 5 2016). [doi:10.1016/j.robot.2016.09.015](Impact Factor 2.928)
• Yonghoon Ji, Atsushi Yamashita, and Hajime Asama, "Indoor Positioning System Based on Distributed Camera Sensor Networks for Mobile Robot," Advances in Intelligent Systems and Computing 531, Intelligent Autonomous Systems 14 (Weidong Chen, Koh Hosoda, Emanuele Menegatti, Masahiro Shimizu and Hesheng Wang (Eds.)) (Proceedings of the 14th International Conference IAS-14, Held July 2016, Shanghai (China)), Springer, pp. 1089-1101, ISSN. 2194-5357, February 2017 (Online: eISSN. 2194-5365). [doi:10.1007/978-3-319-48036-7]
• Yonghoon Ji, Atsushi Yamashita, and Hajime Asama, "Indoor Positioning System Based on Camera Sensor Networks for Mobile Robot Localization in Indoor Environments," Journal of Institute of Control, Robotics and Systems, Vol. 22, No. 11, pp. 952-959, ISSN 1976-5622, November 2016 (Online: eISSN 2233-4335). [Link] (in Korean)
• Yonghoon Ji, Atsushi Yamashita, and Hajime Asama, "Indoor Positioning System Based on 3D Map Information Using Camera Networks," Proceedings of the 31st ICROS Annual Conference, pp. 1-2, Seoul, Korea, March 2016. (in Korean)
• Yonghoon Ji, Atsushi Yamashita, and Hajime Asama, "Automatic Camera Pose Estimation Based on Textured 3D Map Information," Proceedings of the 2015 JSME/RMD International Conference on Advanced Mechatronics (ICAM2015), pp. 100-101, Tokyo, Japan, December 2015. [Link] (ICAM2015 Honorable Mention)
• Yonghoon Ji, Atsushi Yamashita, and Hajime Asama, "Automatic Calibration and Trajectory Reconstruction of Mobile Robot in Camera Sensor Network," Proceedings of the 11th Annual IEEE International Conference on Automation Science and Engineering (CASE2015), pp. 206-211, Gothenburg, Sweden, August 2015. [Link]
• Yonghoon Ji, Atsushi Yamashita, and Hajime Asama, "Automatic Calibration of Camera Networks by a Mobile Robot: Performance Improvement Using Map Information in Intelligent Space," Proceedings of the JSME Robotics and Mechatronics Conference 2015 (ROBOMECH2015), 2A1-P06, pp. 1-2, Kyoto, Japan, May 2015. (in Japanese)
• Yonghoon Ji, Atsushi Yamashita, and Hajime Asama, "Simultaneous Self-localization of a Mobile Robot and Automatic Camera Calibration in Intelligent Space," Proceedings of the 20th Robotics Symposia, pp. 172-177, Karuizawa, Japan, March 2015. (in Japanese)
• Yonghoon Ji, Atsushi Yamashita, and Hajime Asama, "Performance Improvement of Monte Carlo Localization for Mobile Robots through Environmental Intelligence," Proceedings of the 32nd Annual Conference of the Robotics Society of Japan (RSJ2014), RSJ2014AC3J1-06, pp. 1-4, Fukuoka, Japan, September 2014. (in Japanese)
Military UGV [ January 2010 ~ March 2013 ]
Platform and Sensor Configuration
Pioneer 3AT (all terrain)
• One of the most popular outdoor research robot platforms
• Length: 0.65 m, Height: 0.2 m, Width: 0.66 m
• Max speed: 0.7 m/s, slope mobility: 25°, max payload: 30 kg
Fig. 1 Platform and sensor configuration
DSM (digital surface model)
• The most popular type of map for representing outdoor environments, generated using an aerial mapping system
• A digital representation of the ground surface using a 2D grid
• Each grid cell stores a single elevation value (2.5D)
• Many discrepancies exist between the DSM and the real environment
Fig. 2 Example of DSM built by aerial mapping system
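A 2.5D DSM of this kind reduces to a grid holding one elevation per cell, which is also the source of the discrepancies: structures with more than one surface (e.g., overhangs or bridges) cannot be represented. A minimal sketch (cell size and coordinates are arbitrary demo values):

```python
class DSM:
    """2.5D digital surface model: one elevation value per 2D grid cell."""

    def __init__(self, cols, rows, resolution):
        self.resolution = resolution                  # cell size [m]
        self.elev = [[None] * cols for _ in range(rows)]

    def _cell(self, x, y):
        return int(x / self.resolution), int(y / self.resolution)

    def update(self, x, y, z):
        c, r = self._cell(x, y)
        self.elev[r][c] = z     # only a single elevation survives (2.5D)

    def elevation(self, x, y):
        c, r = self._cell(x, y)
        return self.elev[r][c]

dsm = DSM(cols=20, rows=20, resolution=0.5)
dsm.update(1.2, 3.4, 0.0)   # ground return in a cell
dsm.update(1.3, 3.4, 5.0)   # later return from a structure above it
# the ground elevation is overwritten: one value per cell
```

Losing the ground return under the structure is exactly the limitation that the multi-level local 3D map described next is designed to overcome.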
Local 3D Map
• An accurate representation of the real outdoor environment, built by a robot with a tilting laser scanner
• Each grid cell contains the number of surface levels and the minimum and maximum elevation at each level
• ICP (iterative closest point)-based integration of local maps (outdoor SLAM)
Fig. 3 ICP-based outdoor 3D SLAM
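The local-map integration step can be illustrated with a minimal point-to-point ICP. This 2D sketch (the actual system registers 3D laser scans) alternates nearest-neighbor matching with a closed-form rigid alignment:

```python
import math

def best_rigid(pairs):
    """Closed-form 2D rigid transform minimizing point-to-point error."""
    n = len(pairs)
    mx = sum(p[0] for p, _ in pairs) / n
    my = sum(p[1] for p, _ in pairs) / n
    gx = sum(q[0] for _, q in pairs) / n
    gy = sum(q[1] for _, q in pairs) / n
    sxx = sum((p[0]-mx)*(q[0]-gx) + (p[1]-my)*(q[1]-gy) for p, q in pairs)
    sxy = sum((p[0]-mx)*(q[1]-gy) - (p[1]-my)*(q[0]-gx) for p, q in pairs)
    th = math.atan2(sxy, sxx)                 # optimal rotation
    c, s = math.cos(th), math.sin(th)
    tx, ty = gx - (c*mx - s*my), gy - (s*mx + c*my)   # optimal translation
    return lambda p: (c*p[0] - s*p[1] + tx, s*p[0] + c*p[1] + ty)

def icp_2d(src, dst, iters=10):
    pts = list(src)
    for _ in range(iters):
        # match each source point to its nearest destination point
        pairs = [(p, min(dst, key=lambda q, p=p: (q[0]-p[0])**2 + (q[1]-p[1])**2))
                 for p in pts]
        T = best_rigid(pairs)   # align under current correspondences
        pts = [T(p) for p in pts]
    return pts

# a square "scan", and the same scan after a small rotation + translation
src = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
a, t = math.radians(5.0), (0.3, -0.2)
dst = [(math.cos(a)*x - math.sin(a)*y + t[0],
        math.sin(a)*x + math.cos(a)*y + t[1]) for x, y in src]
aligned = icp_2d(src, dst)
```

With the small initial offset, the nearest-neighbor matches are already correct, so the alignment converges in a single iteration; real scans need outlier rejection and a good initial guess from odometry.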
Combination of DSM and Satellite Image for Virtual Reality
• Texture mapping onto the DSM using a satellite image
• The environment becomes much easier to understand than before the satellite image is combined
Fig. 4 Combination of DSM and satellite image for virtual reality
Particle filter-based outdoor localization
• Localization by matching the environment model against sensor data
• The reference map is built by an aerial mapping system or by a robot with a tilting laser scanner
• Monte Carlo localization (MCL) based on a range sensor for map matching
Fig. 5 Concept of map matching-based outdoor localization
Mov. 1 Particle filter-based local localization based on DSM
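The MCL idea can be sketched in one dimension: particles carry position hypotheses, odometry drives the prediction, and agreement between a reference elevation profile (standing in for the DSM) and the measured elevation weights the particles before resampling. The terrain function and noise levels are invented for the demo:

```python
import math
import random

def ref_elevation(x):
    # reference elevation profile -- a stand-in for the DSM
    return math.sin(x) + 0.3 * math.sin(3.1 * x)

def mcl(motions, measurements, n=500, sigma=0.05, seed=1):
    rng = random.Random(seed)
    particles = [rng.uniform(0.0, 10.0) for _ in range(n)]  # global init
    for u, z in zip(motions, measurements):
        # predict: apply odometry with motion noise
        particles = [p + u + rng.gauss(0.0, 0.02) for p in particles]
        # update: weight by agreement between map and measurement
        w = [math.exp(-(ref_elevation(p) - z) ** 2 / (2.0 * sigma ** 2))
             for p in particles]
        # resample in proportion to the weights
        particles = rng.choices(particles, weights=w, k=n)
    return sum(particles) / n

# simulate a robot starting at x = 2.0 and moving 0.5 per step
x, motions, zs = 2.0, [0.5] * 6, []
for u in motions:
    x += u
    zs.append(ref_elevation(x))

est = mcl(motions, zs)   # should converge near the true position x = 5.0
```

The real system matches 3D range scans against the DSM rather than a scalar elevation, but the predict-weight-resample cycle is the same.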
Accurate update of DSM by using local 3D map
• Overcomes the limitation of the DSM representation
• The 2.5D DSM and the local 3D map can be represented at once
Fig. 6 Effect of updating DSM: non-updated DSM built by aerial mapping system, and updated DSM fused with local elevation map
Mov. 2 Accurate update of DSM by using local 3D map
• Yong-Ju Lee, Yong-Hoon Ji, Jae-Bok Song, and Sang-Hyun Joo, "Performance Improvement of ICP-based Outdoor SLAM Using Terrain Classification," Proceedings of the International Conference on Advanced Mechatronics (ICAM2010), pp. 243-246, Osaka, Japan, October 2010.
• Yong-Hoon Ji, Sung-Ho Hong, Jae-Bok Song, and Ji-Hoon Choi, "DSM Update for Robust Outdoor Localization Using ICP-based Scan Matching with COAG Features of Laser Range Data," Proceedings of the IEEE/SICE International Symposium on System Integration (SII2011), pp. 1245-1250, Kyoto, Japan, December 2011.
Surveillance Robot [ January 2011 ~ December 2011 ]
Platform and Sensor Configuration
• Length: 0.65 m, Height: 0.2 m, Width: 0.66 m
• 2 tracks for driving and steering, 2 flipper arms (capable of stair climbing)
• Max speed: 1.5 m/s, slope mobility: 45°, max payload: 15 kg
Fig. 1 Platform and sensor configuration
GPS-based outdoor localization
• Extended Kalman filter (EKF)-based sensor fusion
• Odometry and roll, pitch, and yaw from the IMU: used in the prediction step of the EKF
• GPS: used in the update step of the EKF
Fig. 2 EKF-based outdoor localization by using wheel odometry and GPS information
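The predict/update cycle can be sketched per axis with a scalar Kalman filter; the real system uses a full EKF with the IMU attitude folded into the motion model, and the noise values below are illustrative only.

```python
def kf_fuse(odometry, gps, q=0.04, r=1.0):
    """Per-axis Kalman filter: odometry drives the prediction step,
    GPS fixes drive the update step (a linear sketch of the EKF cycle)."""
    x, p = 0.0, 1.0                  # state estimate and its variance
    for u, z in zip(odometry, gps):
        # predict with odometry (process noise q grows the variance)
        x, p = x + u, p + q
        # update with the GPS fix (measurement noise r)
        k = p / (p + r)              # Kalman gain
        x = x + k * (z - x)
        p = (1.0 - k) * p
    return x, p

# odometry drifts by +0.1 per step; GPS is unbiased
odo = [1.1] * 10
gps = [float(i + 1) for i in range(10)]   # true position after each step
est, cov = kf_fuse(odo, gps)
```

The fused estimate stays close to the true position of 10 instead of drifting to the 11 that dead reckoning alone would report, which is the point of the fusion.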
Gradient method-based outdoor global path planning
• Optimal path generation using map information from the robot's initial position to the goal
• Extended 2D gradient method
• The local minimum problem is avoided
• A traversability map is used
Fig. 3 Global path extraction by gradient method
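A minimal version of the gradient method: propagate cost-to-goal over the whole traversability grid (here with Dijkstra as the wavefront), then follow the steepest descent from the start. Because the cost field is global, the descent cannot get trapped in a local minimum. The grid contents are illustrative:

```python
import heapq

def gradient_path(grid, start, goal):
    """grid[r][c] == 0: traversable, 1: obstacle. Returns a cell path."""
    rows, cols = len(grid), len(grid[0])
    INF = float("inf")
    cost = [[INF] * cols for _ in range(rows)]
    cost[goal[0]][goal[1]] = 0.0
    pq = [(0.0, goal)]
    while pq:                       # wavefront: cost-to-goal everywhere
        d, (r, c) = heapq.heappop(pq)
        if d > cost[r][c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1.0
                if nd < cost[nr][nc]:
                    cost[nr][nc] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    path = [start]                  # steepest descent on the cost field
    while path[-1] != goal:
        r, c = path[-1]
        nxt = min(((r + dr, c + dc)
                   for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                   if 0 <= r + dr < rows and 0 <= c + dc < cols),
                  key=lambda p: cost[p[0]][p[1]])
        if cost[nxt[0]][nxt[1]] >= cost[r][c]:
            return None             # goal unreachable from the start
        path.append(nxt)
    return path

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
path = gradient_path(grid, (0, 0), (2, 3))
```

The extended variant described above replaces the binary obstacle grid with graded traversability costs, so the descent also prefers safer cells, not just shorter routes.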
Implementation of a manipulator on tracked robot
• Tracked mobile robot + 4-DOF manipulator based on stabilization control
• Efficient unmanned surveillance
• Absorbs vibration on rugged terrain while driving
Mov. 1 Manipulator with stabilization control
• Environment: indoor and outdoor
• Localization, path planning, and motion control algorithms are integrated
Mov. 2 Autonomous navigation of tracked robot
• Jae-Bok Song, Yong-Hoon Ji, Jae-Kwan Ryu, Jong-Won Kim, and Joo-Hyun Baek, "Apparatus for Estimating Location of Moving Object for Autonomous Driving," Korean Intellectual Property Office (KIPO), #10-2012-0025468.
• Jae-Bok Song, Yong-Hoon Ji, Jae-Kwan Ryu, Jong-Won Kim, and Joo-Hyun Baek, "Method for Estimating Location of Mobile Robot," Korean Intellectual Property Office (KIPO), #10-2012-0025469.
Transportation Robot [ July 2010 ~ April 2012 ]
• Length: 0.8 m, Height: 1.0 m, Width: 0.5 m
• 2 motors for steering and another 2 for propulsion
• Top speed: 1.0 m/s (flat ground, no rider)
Fig. 1 Transportation robot platform
• Image processing, segmentation, clustering, and labeling methods are used
• Lanes are extracted stably by picking up features generated through segmentation and clustering
• The extracted lane markers are useful for local localization outdoors
Fig. 2 Image processing to extract lane feature
• DWA (dynamic window approach)
• Simultaneous obstacle avoidance algorithm
• Dynamic window: the area in velocity space reachable without collision with obstacles during the given time step, given the robot's current velocity
• The velocity in the dynamic window that reaches the goal point fastest is selected
Fig. 3 Determining DWA velocity from objective function
Mov. 1 DWA-based obstacle avoidance
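The velocity-selection step of the DWA can be sketched as follows; the sampling resolution, objective weights, and limits are illustrative stand-ins, not the tuned values of the actual controller.

```python
import math

def dwa_select(pose, v, w, goal, obstacles,
               v_max=1.0, w_max=1.5, acc_v=0.5, acc_w=1.0, dt=0.5):
    """Choose the (v, w) inside the dynamic window maximizing a weighted
    sum of goal heading, obstacle clearance, and forward velocity."""
    best, best_score = (0.0, 0.0), -float("inf")
    for i in range(7):                # sample velocities reachable within dt
        for j in range(7):
            cv = v - acc_v * dt + i * (2.0 * acc_v * dt / 6.0)
            cw = w - acc_w * dt + j * (2.0 * acc_w * dt / 6.0)
            cv = max(0.0, min(v_max, cv))
            cw = max(-w_max, min(w_max, cw))
            # forward-simulate one step along the commanded arc
            x, y, th = pose
            th2 = th + cw * dt
            x2 = x + cv * math.cos(th2) * dt
            y2 = y + cv * math.sin(th2) * dt
            clearance = min((math.hypot(ox - x2, oy - y2)
                             for ox, oy in obstacles), default=10.0)
            if clearance < 0.3:       # inadmissible: too close to an obstacle
                continue
            heading = -abs(math.atan2(goal[1] - y2, goal[0] - x2) - th2)
            score = 1.0 * heading + 0.4 * min(clearance, 2.0) + 0.2 * cv
            if score > best_score:
                best, best_score = (cv, cw), score
    return best

# robot moving at 0.5 m/s toward a goal straight ahead, obstacle off-path
v_cmd, w_cmd = dwa_select((0.0, 0.0, 0.0), 0.5, 0.0, (5.0, 0.0), [(1.0, 1.0)])
```

Pruning velocities whose clearance is too small enforces the "reach without collision" condition of the window, while the weighted score trades speed against heading and safety.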
• Yong-Hoon Ji, Ji-Hun Bae, Jae-Bok Song, Joo-Hyun Baek, and Jae-Kwan Ryu, "Outdoor Localization through GPS Data and Matching of Lane Markers for a Mobile Robot," Journal of Institute of Control, Robotics and Systems, Vol. 18, No. 6, pp. 594-600, June 2012.