
Selected Research


Operation of Mobile Robot in Snowy Environment [ June 2021 ~ Present ]

Research Background
Semantic Segmentation Using a GAN on Snow-covered Roads
To reduce the burden on the elderly in heavy-snowfall areas, it is important to support snow-removal work with robots. In this research, we propose a motion control method that allows an autonomous mobile robot equipped with an RGB camera to follow snow-covered roads. To this end, we improve the road detection accuracy of semantic segmentation in snowy environments by using a generative adversarial network (GAN). In addition, the central point of the detected road area is extracted as a sub-goal for the motion control of the mobile robot, so that the robot can follow snow-covered roads stably.
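As a rough illustration of the sub-goal idea above, the following Python sketch extracts the central point of a binary road mask produced by a segmentation network and turns its horizontal offset into a yaw-rate command; the mask format, the gain k_p, and the sign convention are assumptions, not the implementation used in the paper.

```python
import numpy as np

def extract_subgoal(road_mask):
    """Centroid (u, v) of the road pixels in a binary H x W mask, or None."""
    vs, us = np.nonzero(road_mask)
    if us.size == 0:
        return None
    return int(us.mean()), int(vs.mean())

def steering_command(road_mask, k_p=0.005):
    """Proportional yaw-rate command that keeps the sub-goal centered."""
    subgoal = extract_subgoal(road_mask)
    if subgoal is None:
        return 0.0                           # no road detected: do not turn
    u, _ = subgoal
    error = road_mask.shape[1] / 2.0 - u     # horizontal offset from center
    return k_p * error                       # sign convention is assumed
```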


Mov. 1 Semantic segmentation-based road detection using a fake image translated by the GAN.


Mov. 2 Autonomous navigation of mobile robot in snowy environment.


Related Papers
• 高城 友豪, 池 勇勳, "積雪環境における移動ロボットの行動生成―積雪路面におけるGANによる舗道の検出精度向上―," 日本機械学会ロボティクス・メカトロニクス講演会22講演論文集 (ROBOMECH2022), 札幌, June 2022.
• Yugo Takagi and Yonghoon Ji, "Motion Control of Mobile Robot Based on Semantic Segmentation of GAN-generated Fake Image for Snow-covered Environment," Proceedings of the 19th International Conference on Ubiquitous Robots (UR2022), Jeju, Korea, July 2022.


Semi-autonomous Exploration Robot in Disaster Area [ October 2018 ~ Present ]

Research Background
Semantic Survey Map Building Framework
In this research, we propose a semi-autonomous mobile robot system that builds a wide-area survey map including semantic information to carry out damage monitoring in disaster areas such as the Fukushima Daiichi Nuclear Power Station. To this end, the following technologies are developed and seamlessly integrated in an SMLO loop, as shown in Fig. 1.

• A sensor system that measures heat sources, radiation sources, water sources, and other substance information, as well as color and shape information in the environment.
• A SLAM (simultaneous localization and mapping) scheme that generates a precise wide-area semantic survey map for learning-based motion generation of the mobile robot.
• A route generation system that performs reinforcement learning on the map built by the SLAM scheme; an operator can then control the robot semi-automatically along the generated route.

The generated semantic survey map can be used for the prevention of secondary disasters and for recovery planning, since it contains useful information about the disaster environment.


Fig. 1 Conceptual image of semantic survey map building process based on SMLO loop.


Mov. 1 Environmental sensing and reinforcement learning by the SMLO system (courtesy of Kono Lab., Tokyo Polytechnic University).


Mov. 2 Semantic survey map building framework using semi-autonomous mobile robot in disaster area
[https://clads.jaea.go.jp/video/]

3D Temperature Mapping by Mobile Robot
In the decommissioning of nuclear facilities, it is important to investigate the temperature distribution inside them. In this study, we propose a method to generate 3D map information that includes the temperature distribution, using a mobile robot equipped with a depth camera based on NIR (near-infrared) light and a thermography based on FIR (far-infrared) light. To this end, a novel calibration scheme based on line information from the depth camera and the thermography is proposed: given that the NIR and FIR images share a similar mechanism of projecting infrared rays onto the image plane, the line distributions in both images can be considered similar. The experimental results demonstrate that our mapping framework can generate reliable 3D temperature maps.
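To make the mapping step concrete, here is a minimal sketch of how temperature values from the thermography could be attached to the depth camera's point cloud once the extrinsic calibration is known; the matrix names and the pinhole projection model are assumptions, and the line-based calibration itself is not shown.

```python
import numpy as np

def attach_temperature(points_depth, T_thermo_from_depth, K_thermo,
                       thermal_image):
    """Append a temperature value to each 3D point from the depth camera.

    points_depth: N x 3 points in the depth-camera frame.
    T_thermo_from_depth: 4 x 4 extrinsic matrix from the calibration
        described above (assumed given).
    K_thermo: 3 x 3 intrinsic matrix of the thermography.
    thermal_image: H x W array of temperatures.
    Returns N x 4 [x, y, z, temperature]; points projecting outside the
    thermal image get NaN.
    """
    n = points_depth.shape[0]
    homo = np.hstack([points_depth, np.ones((n, 1))])
    p_thermo = (T_thermo_from_depth @ homo.T)[:3]        # 3 x N
    uv = K_thermo @ p_thermo
    uv = uv[:2] / uv[2]                                  # pixel coordinates
    u, v = np.round(uv).astype(int)
    h, w = thermal_image.shape
    valid = (p_thermo[2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    temp = np.full(n, np.nan)
    temp[valid] = thermal_image[v[valid], u[valid]]
    return np.hstack([points_depth, temp[:, None]])
```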


Mov. 3 3D Temperature mapping by fusion of depth camera and thermography mounted on mobile robot.



Mov. 4 3D Temperature mapping by fusion of LiDAR and thermography mounted on exploration robot.


Related Papers
• 畠山 佑太, 藤井 浩光, 堂前 雅仁, 河野 仁, 池 勇勳, "暗所探査における視野明瞭化のための温度情報と偏光情報を統合した3次元計測システム," 第22回システムインテグレーション部門講演会(SI2021), 鹿児島, pp. 33-36, December 2021. (SI2021優秀講演賞受賞)
• 佐藤 弘和, 池 勇勳, 藤井 浩光, 河野 仁, "転移強化学習における環境適応性能向上を目的とした転移率自動調整法," 2021年電気学会電子・情報・システム部門大会, 富山, pp.706-710, September 2021.
• 河野 仁, 池 勇勳, 藤井 浩光, "被災地情報収集のための半自律移動ロボットを用いたセマンティックサーベイマップ生成システムの開発," 2021年電気学会電子・情報・システム部門大会, 富山, pp.702-705, September 2021.
• Ryosuke Kataoka, Isao Tadokoro, Yonghoon Ji, Hiromitsu Fujii, Hitoshi Kono, and Kazunori Umeda, "Performance Improvement of SLAM Based on Global Registration Using LiDAR Intensity and Measurement Data of Puddle," Proceedings of the 18th International Conference on Ubiquitous Robots (UR2021), pp. 553-556, Gangneung, Korea, July 2021. [doi:10.1109/UR52253.2021.9494671]
• 岡 翔平, 池 勇勳, 藤井 浩光, 河野 仁, "被災地環境における直線情報に基づく深度カメラとサーモグラフィの融合による移動ロボットを用いた3次元温度情報マッピング," 日本機械学会ロボティクス・メカトロニクス講演会21講演論文集 (ROBOMECH2021), 大阪, June 2021.
• 片岡 良介, 田所 功, 池 勇勳, 藤井 浩光, 河野 仁, 梅田 和昇, "LiDARの反射強度及び溜水の計測情報を利用した大域的点群位置合わせによるSLAMの性能向上," 日本機械学会ロボティクス・メカトロニクス講演会21講演論文集 (ROBOMECH2021), 大阪, June 2021.
• 長坂 拓海, 藤井 浩光, 河野 仁, 池 勇勳, "環境情報の3次元提示のための距離センサを用いた異種複数センサ統合," 日本機械学会ロボティクス・メカトロニクス講演会21講演論文集 (ROBOMECH2021), 大阪, June 2021.
• Ryosuke Kataoka, Ryuki Suzuki, Yonghoon Ji, Hiromitsu Fujii, Hitoshi Kono, and Kazunori Umeda, "ICP-based SLAM Using LiDAR Intensity and Near-infrared Data," Proceedings of the 2021 IEEE/SICE International Symposium on System Integration (SII2021), pp. 199-104, Iwaki, Japan, January 2021. [doi:10.1109/IEEECONF49454.2021.9382647]
• Ryuki Suzuki, Ryosuke Kataoka, Yonghoon Ji, Hiromitsu Fujii, Hitoshi Kono, and Kazunori Umeda, "SLAM Using ICP and Graph Optimization Considering Physical Properties of Environment," Proceedings of the 2020 21st International Conference on Research and Education in Mechatronics (REM2020), Cracow, Poland, December 2020. [doi:10.1109/REM49740.2020.9313074]
• 菅原 岬, 藤井 浩光, 河野 仁, 池 勇勳, "水の近赤外線の吸光特性を用いた水系領域の3次元提示," 第21回計測自動制御学会システムインテグレーション部門講演会 (SI2020), 1A3-10, pp. 118-112, オンライン, December 2020. (SI2020優秀講演賞受賞)
• 片岡 良介, 鈴木 龍紀, 池 勇勳, 藤井 浩光, 河野 仁, 梅田 和昇, "LiDARの反射強度及び溜水の計測情報を利用したICPによるSLAM," 第38回日本ロボット学会学術講演会予稿集 (RSJ2020), オンライン, October 2020.
• 長坂 拓海, 藤井 浩光, 河野 仁, 池 勇勳, "水面の揺動による偏光情報の変化を用いた溜水位置のステレオ計測," 日本機械学会ロボティクス・メカトロニクス講演会20講演論文集 (ROBOMECH2020), 金沢, May 2020.
• 佐藤 弘和, 大津 亮二, 池 勇勳, 藤井 浩光, 河野 仁, "強化学習における転移学習のための転移率自動推定法," 日本機械学会ロボティクス・メカトロニクス講演会20講演論文集 (ROBOMECH2020), 金沢, May 2020.
• 鈴木 龍紀, 片岡 良介, 池 勇勳, 藤井 浩光, 河野 仁, 梅田 和昇, "環境の持つ物理的属性を考慮した 3 次元地図生成," 日本機械学会ロボティクス・メカトロニクス講演会20講演論文集 (ROBOMECH2020), 金沢, May 2020.
• 片岡 良介, 鈴木 龍紀, 池 勇勳, 藤井 浩光, 河野 仁, 梅田 和昇, "環境の持つ物理的属性を考慮したICPとループ閉じ込みにおけるSLAM," 第25回ロボティクスシンポジア, 函館, March 2020. (Reviewed)
• 菅原 岬, 藤井 浩光, 河野 仁, 池 勇勳, "遠隔操作ロボットによる水源サーベイマップ構築のための近赤外線情報の3次元可視化," 第20回計測自動制御学会システムインテグレーション部門講演会講演論文集 (SI2019), 高松, December 2019.
• Hiromitsu Fujii, Misaki Sugawara, Hitoshi Kono, and Yonghoon Ji, "3D Visualization of Near-Infrared Information for Detecting Water Source," Proceedings of the Fukushima Research Conference 2019 on Remote Technologies for Nuclear Facilities (FRC2019), Fukushima, Japan, p. 5, October 2019.
• Hitoshi Kono, Tomohisa Mori, Yonghoon Ji, Hiromitsu Fujii, Tsuyoshi Suzuki, "Development of Perilous Environment Estimation System by Rescue Robot Using On-board LiDAR for Teleoperator," Proceedings of the 2019 IEEE/SICE International Symposium on System Integrations (SII2019), pp.7-10, Paris, France, January 2019. [Link]
• Yonghoon Ji, Hiromitsu Fujii, and Hitoshi Kono, "Semantic Survey Map Building Framework Using Semi-autonomous Mobile Robot in Disaster Area," Proceedings of the Fukushima Research Conference 2018 on Remote Technologies for Nuclear Facilities (FRC2018), Fukushima, Japan, p. 20, October 2018.


Underwater Robotics [ April 2015 ~ Present ]

Research Background
3D Reconstruction of Underwater Environment Using Acoustic Camera
In recent years, waterfront development, such as construction and reclamation projects related to airports, ports, and submarine tunnels, has become considerably more critical. To conduct such heavy work, underwater construction machines operated by divers are used. However, hazards may prohibit human access, and the limited field of vision caused by turbidity and lack of illumination makes underwater operations difficult. To complete tasks such as inspection, removal of hazardous materials, or excavation work, a remotely controlled robot equipped with a system for 3D reconstruction of the underwater environment is required, as shown in Fig. 1.


Fig. 1 Example of underwater construction using a remotely controlled underwater crawler-type robot based on dense 3D mapping of the surrounding environment using an acoustic camera.

Recently, the development of forward-looking sonars, also known as acoustic cameras, such as the dual-frequency identification sonar (DIDSON) and the adaptive resolution imaging sonar (ARIS), which can generate high-resolution and wide-range images, has facilitated our understanding of underwater situations. This type of sonar is relatively small, can easily be mounted on an underwater robot, and can gather information over a relatively large area considerably faster. The acoustic camera can also be mounted on the arm of a crawler-type robot, so the robot can fulfill complex underwater tasks, such as manipulation, even in turbid water.

In this research, a novel dense 3D mapping paradigm for an acoustic camera in underwater environments is proposed. As a result, it is possible to build a dense 3D map of the underwater environment precisely and robustly, as shown in Mov. 1.

Mov. 1 3D underwater mapping using the acoustic camera mounted on the robot arm.


First, a 3D local map is generated from each viewpoint of the acoustic camera, as shown in Mov. 2. Here, at each viewpoint, a rotation around the acoustic axis (i.e., a roll rotation of the acoustic camera), which is effective for the probability updates, is performed by a rotator mounted on the camera. Then, odometry (i.e., the movement of the acoustic camera) is estimated from the transform matrices between consecutive local maps, without requiring internal sensor data.
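The odometry step can be illustrated with a small registration sketch: assuming Open3D is available, consecutive 3D local maps are aligned by ICP and the resulting transform is taken as the camera motion. The voxel size and correspondence threshold below are placeholder values, not the paper's settings.

```python
import numpy as np
import open3d as o3d  # assumed available for point-cloud registration

def odometry_from_local_maps(prev_map, curr_map, voxel=0.1):
    """Estimate the acoustic-camera motion between two consecutive 3D
    local maps by point-to-point ICP; no internal (inertial) sensor is
    used. Returns the 4 x 4 transform from the current frame to the
    previous one."""
    src = curr_map.voxel_down_sample(voxel)
    tgt = prev_map.voxel_down_sample(voxel)
    result = o3d.pipelines.registration.registration_icp(
        src, tgt,
        max_correspondence_distance=0.5,   # assumed threshold [m]
        init=np.eye(4),
        estimation_method=o3d.pipelines.registration
            .TransformationEstimationPointToPoint())
    return result.transformation
```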

Mov. 2 3D local map generation from each viewpoint of the acoustic camera by roll rotation.

Finally, a graph optimization process is performed to realize accurate pose estimation of each viewpoint and to generate a 3D global map simultaneously. As shown in Mov. 3, it is thus possible to build a dense 3D map of the underwater environment precisely and robustly.
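A minimal pose-graph sketch of this optimization step, again assuming Open3D: consecutive viewpoints are chained by the estimated odometry, loop-closure edges (detected elsewhere) are added as uncertain constraints, and the graph is optimized. The transform direction conventions and the identity information matrices are simplifying assumptions.

```python
import numpy as np
import open3d as o3d

def optimize_viewpoints(odometry, loop_edges):
    """odometry: list of relative 4 x 4 transforms between consecutive
    viewpoints. loop_edges: (i, j, T_ij) loop-closure constraints,
    assumed detected elsewhere (e.g., by re-registering distant maps)."""
    reg = o3d.pipelines.registration
    graph = reg.PoseGraph()
    pose = np.eye(4)
    graph.nodes.append(reg.PoseGraphNode(pose))
    for k, T in enumerate(odometry):
        pose = pose @ T                      # accumulate viewpoint pose
        graph.nodes.append(reg.PoseGraphNode(pose))
        graph.edges.append(reg.PoseGraphEdge(k, k + 1, T,
                                             np.eye(6), uncertain=False))
    for i, j, T_ij in loop_edges:            # loop closures
        graph.edges.append(reg.PoseGraphEdge(i, j, T_ij,
                                             np.eye(6), uncertain=True))
    reg.global_optimization(graph,
                            reg.GlobalOptimizationLevenbergMarquardt(),
                            reg.GlobalOptimizationConvergenceCriteria(),
                            reg.GlobalOptimizationOption())
    return graph   # optimized poses are in graph.nodes[k].pose
```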


Mov. 3 Dense 3D global map built by graph optimization process.

Additionally, we propose another novel approach that estimates the missing dimension (i.e., the unknown elevation angle) in 2D acoustic images based on a deep neural network for 3D reconstruction, as shown in Mov. 4. Here, the deep neural network is trained using simulated images. To mitigate the sim-to-real gap, a neural style transfer method is used to generate a realistic image dataset for training.


Mov. 4 3D reconstruction based on elevation angle estimation using a deep neural network.


Forward-looking Sonar Simulator
The difficulty and high cost of acquiring acoustic images in real experiments encourage researchers to consider generating simulated acoustic image datasets. Therefore, we develop a novel simulator that generates realistic acoustic datasets for forward-looking sonars, as shown in Mov. 4 and Fig. 2. We first build a user-friendly acoustic image simulator based on 3D modeling software. Then, CycleGAN is applied to generate realistic acoustic images based on the dataset produced by the simulator.
[https://github.com/sollynoay/Sonar-simulator-blender]
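For intuition about what such a simulator must reproduce, the sketch below renders an idealized acoustic image by binning scene points by range and azimuth while discarding elevation; the fan geometry, the resolutions, and the 1/r^2 intensity model are assumptions and far simpler than the sound-wave simulation of Fig. 2.

```python
import numpy as np

def acoustic_image(points, r_max=10.0, n_r=512,
                   fov_az=np.deg2rad(30), n_az=256,
                   fov_el=np.deg2rad(14)):
    """Idealized forward-looking-sonar image from N x 3 scene points
    given in the sonar frame (x forward, y left, z up). Each point in
    the fan-shaped beam contributes to a (range, azimuth) bin; the
    elevation angle is discarded, which is exactly the missing dimension
    discussed above."""
    x, y, z = points.T
    r = np.linalg.norm(points, axis=1)
    az = np.arctan2(y, x)
    el = np.arctan2(z, np.hypot(x, y))
    keep = (r < r_max) & (np.abs(az) < fov_az / 2) & (np.abs(el) < fov_el / 2)
    img = np.zeros((n_r, n_az))
    ri = (r[keep] / r_max * n_r).astype(int)
    ai = ((az[keep] + fov_az / 2) / fov_az * n_az).astype(int)
    # Assumed reflection model: intensity falls off as 1 / r^2.
    np.add.at(img, (ri, ai), 1.0 / np.maximum(r[keep] ** 2, 1e-6))
    return img
```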


Fig. 2 Configuration of the simulator, which can simulate sound waves in an active sonar system.

ACMarker: Acoustic Camera-Based Fiducial Marker System
ACMarker, shown in Mov. 5, is an acoustic camera-based fiducial marker system designed for underwater environments. Optical camera-based fiducial marker systems have been widely used in computer vision and robotics applications such as augmented reality (AR), camera calibration, and robot navigation. However, in underwater environments, the performance of optical cameras is limited owing to water turbidity and illumination conditions. We propose methods to recognize a simply designed marker and to estimate the relative pose between the acoustic camera and the marker. The proposed system can be applied to various underwater tasks such as object tracking and localization of unmanned underwater vehicles.

The markers can be placed directly on the walls of an underwater structure or on the seabed with a concrete or plaster base. This facilitates the navigation of autonomous underwater vehicles (AUVs), as well as underwater structure inspection. The contributions of the system can be summarized as follows (a detection sketch follows the list):

• We propose detection and ID identification methods based on simply designed square markers.
• We propose a method to accurately and precisely estimate the 6-DoF relative pose between the acoustic camera and the marker.
• Detection and pose estimation can be processed from a single image and should work in real time.
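As a hypothetical front end for the detection step (not the published ACMarker pipeline), the following OpenCV sketch finds bright square candidates in a remapped acoustic image; the thresholds and shape filters are assumptions, and ID decoding and acoustic pose estimation are left out.

```python
import cv2
import numpy as np

def detect_square_markers(acoustic_img, min_area=200.0):
    """Candidate square markers in a (fan-to-Cartesian remapped) acoustic
    image; returns a list of 4 x 2 corner arrays."""
    img = cv2.normalize(acoustic_img, None, 0, 255,
                        cv2.NORM_MINMAX).astype(np.uint8)
    img = cv2.medianBlur(img, 5)                 # suppress speckle noise
    _, binary = cv2.threshold(img, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    corners = []
    for c in contours:
        approx = cv2.approxPolyDP(c, 0.05 * cv2.arcLength(c, True), True)
        if len(approx) == 4 and cv2.contourArea(approx) > min_area:
            corners.append(approx.reshape(4, 2))
    return corners
```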


Mov. 5 ACMarker system.

Related Papers
• Yusheng Wang, Yonghoon Ji, Hiroshi Tsuchiya, Hajime Asama, and Atsushi Yamashita, "Learning Pseudo Front Depth for 2D Forward-Looking Sonar-based Multi-view Stereo," Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022), October 2022.
• Yusheng Wang, Yonghoon Ji, Dingyu Liu, Hiroshi Tsuchiya, Atsushi Yamashita and Hajime Asama, "Simulator-aided Edge-based Acoustic Camera Pose Estimation," OCEANS 2022 Chennai, February 2022.
• Dingyu Liu, Yusheng Wang, Yonghoon Ji, Hiroshi Tsuchiya, Atsushi Yamashita, and Hajime Asama, "Development of Image Simulator for Forward-looking Sonar Using 3D Rendering," Proceedings of the SPIE 11794, 15th International Conference on Quality Control by Artificial Vision (QCAV2021), Vol. 11794, pp. 117940H, Tokushima, Japan, May 2021. [doi:10.1117/12.2590004]
• Yusheng Wang, Yonghoon Ji, Dingyu Liu, Hiroshi Tsuchiya, Atsushi Yamashita, and Hajime Asama, "Elevation Angle Estimation in 2D Acoustic Images Using Pseudo Front View," IEEE Robotics and Automation Letters, Vol. 6, No. 2, pp. 1535-1542, April 2021. (Impact Factor 3.6) [doi:10.1109/LRA.2021.3058911]
• Dingyu Liu, Yusheng Wang, Yonghoon Ji, Hiroshi Tsuchiya, Atsushi Yamashita, and Hajime Asama, "CycleGAN-based Realistic Image Dataset Generation for Forward-looking Sonar," Advanced Robotics, Vol. 35, 2021. (Impact Factor 1.5) [doi:10.1080/01691864.2021.1873845]
• Yusheng Wang, Yonghoon Ji, Hanwool Woo, Yusuke Tamura, Hiroshi Tsuchiya, Atsushi Yamashita, and Hajime Asama, "Acoustic Camera-based Pose Graph SLAM for Dense 3-D Mapping in Underwater Environments," IEEE Journal of Oceanic Engineering, Vol. 46, No. 3, pp. 829-847, July 2021. (Impact Factor 3.005) [doi:10.1109/JOE.2020.3033036]
• Yusheng Wang, Yonghoon Ji, Dingyu Liu, Yusuke Tamura, Hiroshi Tsuchiya, Atsushi Yamashita, and Hajime Asama, "ACMarker: Acoustic Camera-based Fiducial Marker System in Underwater Environment," IEEE Robotics and Automation Letters, Vol. 5, No. 4, pp. 5018-5025, October 2020. (Impact Factor 3.6) (SICE International Young Authors Award for IROS2020 (SIYA-IROS2020) (Yusheng Wang)) [doi:10.1109/LRA.2020.3005375]
• Yusheng Wang, Yonghoon Ji, Hanwool Woo, Yusuke Tamura, Hiroshi Tsuchiya, Atsushi Yamashita, and Hajime Asama, "Planar AnP: A Solution to Acoustic-n-Point Problem on Planar Target," Global OCEANS 2020, Singapore, Singapore, October 2020. [doi:10.1109/IEEECONF38699.2020.9389267]
• Yusheng Wang, 池 勇勳, 劉 丁瑜, 田村 雄介, 土屋 洋, 山下 淳, 淺間 一, "音響カメラに基づいた水中環境における人工マーカシステムの開発," 日本機械学会ロボティクス・メカトロニクス講演会20講演論文集 (ROBOMECH2020), 金沢, May 2020.
• Yusheng Wang, Yonghoon Ji, Hanwool Woo, Yusuke Tamura, Hiroshi Tsuchiya, Atsushi Yamashita, and Hajime Asama, "Rotation Estimation of Acoustic Camera Based on Illuminated Area in Acoustic Image," Proceedings of the 12th IFAC Conference on Marine Systems (CAMS2019), pp. 217-222, Daejeon, Korea, September 2019.
• Yusheng Wang, Yonghoon Ji, Hanwool Woo, Yusuke Tamura, Atsushi Yamashita, and Hajime Asama, "Three-dimensional Underwater Environment Reconstruction with Graph Optimization Using Acoustic Camera," Proceedings of the 2019 IEEE/SICE International Symposium on System Integrations (SII2019), pp. 28-33, Paris, France, January 2019. [Link]
• Yusheng Wang, Yonghoon Ji, Hanwool Woo, Yusuke Tamura, Atsushi Yamashita, and Hajime Asama, "3D Occupancy Mapping Framework Based on Acoustic Camera in Underwater Environment," Proceedings of the 12th IFAC Symposium on Robot Control (SYROCO2018), pp. 1-7, Budapest, Hungary, August 2018. (IFAC-PapersOnLine, Vol. 51, No. 22, pp. 324-339, August 2018) [Link]
• Ngoc Trung Mai, Yonghoon Ji, Hanwool Woo, Yusuke Tamura, Atsushi Yamashita, and Hajime Asama, "Acoustic Image Simulator Based on Active Sonar Model in Underwater Environment," Proceedings of the 15th International Conference on Ubiquitous Robots (UR2018), pp. 781-786, Hawaii, USA, June 2018. [Link]
• Ngoc Trung Mai, Hanwool Woo, Yonghoon Ji, Yusuke Tamura, Atsushi Yamashita, and Hajime Asama, "3D Reconstruction of Line Features Using Multi-view Acoustic Images in Underwater Environment," Proceedings of 2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI2017), pp. 312-317, Daegu, Korea, November 2017. [Link]
• Ngoc Trung Mai, Hanwool Woo, Yonghoon Ji, Yusuke Tamura, Atsushi Yamashita, and Hajime Asama, "3-D Reconstruction of Underwater Object Based on Extended Kalman Filter by Using Acoustic Camera Images," Preprints of the 20th World Congress of the International Federation of Automatic Control, pp. 1066-1072, Toulouse, France, July 2017. (IFAC-PapersOnLine, Vol. 50, No. 1, pp. 1043-1049, July 2017) [Link]
• Yonghoon Ji, Seungchul Kwak, Atsushi Yamashita, and Hajime Asama, "Acoustic Camera-based 3D Measurement of Underwater Objects through Automated Extraction and Association of Feature Point," Proceedings of the 2016 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI2016), pp. 224-230, Baden-Baden, Germany, September 2016. [Link]
• マイ ゴクチュン, 禹 ハンウル, 池 勇勳, 田村 雄介, 山下 淳, 淺間 一, "音響カメラ画像を用いた拡張カルマンフィルタに基づく水中物体の3次元計測手法の構築," 第34回日本ロボット学会学術講演会予稿集 (RSJ2016), RSJ2016AC1C3-06, pp. 1-4, 山形, September 2016.
• Seungchul Kwak, Yonghoon Ji, Atsushi Yamashita, and Hajime Asama, "3-D Reconstruction of Underwater Objects Using Arbitrary Acoustic Views," Proceedings of the 2016 11th France-Japan congress on Mechatronics 9th Europe-Asia congress on Mechatronics 17th International Conference on Research and Education in Mechatronics (MECHATRONICS-REM2016), pp. 74-79, Compiegne, France, June 2016. [Link]
• 곽 승철, 지 용훈, Atsushi Yamashita, and Hajime Asama, "다시점의 음향카메라 영상을 이용한 수중물체의 3차원 형상 복원," 2016 제31회 제어・로봇・시스템학회 학술대회, pp. 1-2, 서울, March 2016.
• Seungchul Kwak, Yonghoon Ji, Atsushi Yamashita, and Hajime Asama, "3-D Reconstruction of Underwater Object: Analytical System for Extracting Feature Points Using Two Different Acoustic Views," Proceedings of the 2015 JSME/RMD International Conference on Advanced Mechatronics (ICAM2015), pp.197-198, Tokyo, Japan, December 2015. [Link]
• 郭 承澈, 池 勇勳, 山下 淳, 淺間 一, "2視点における音響カメラ画像を用いた水中物体の特徴点の3次元計測," 第33回日本ロボット学会学術講演会予稿集 (RSJ2015), pp. 1-4, 東京, September 2015.
• Seungchul Kwak, Yonghoon Ji, Atsushi Yamashita, and Hajime Asama, "Development of Acoustic Camera-Imaging Simulator Based on Novel Model," Proceedings of the 2015 IEEE International Conference on Environment and Electrical Engineering (EEEIC2015), pp. 1719-1724, Rome, Italy, June 2015. [Link]


Motion Planning for Off-Road UGVs [ April 2015 ~ December 2018 ]

Research Background
Adaptive Motion Planning Based on Vehicle Characteristics and Regulations
In recent years, autonomous mobile robots and UGVs have attracted the attention of many researchers and are becoming capable of dealing with various environments. Safe and reliable motion planning is one of the most important requirements for such unmanned robots. However, there have been very few studies on establishing an outdoor motion planning methodology for off-road environments, despite the undeniable fact that it is an indispensable requirement for operating unmanned robots on rough terrain, such as disaster sites where hazards prohibit human access.

When a UGV navigates autonomously in an off-road environment with rough terrain, the importance of avoiding accidents such as collisions and turnovers cannot be overemphasized. The UGV is therefore required to avoid such risks and to select a route within the traversable area. Hence, estimating traversability and planning appropriate motion on rough terrain are essential to meet these requirements. In this respect, we aim to propose a novel motion planning methodology that enables UGVs to navigate safely to a destination within convoluted environments, including rough terrain.

When designing a novel motion planner, we need to consider the following:
• All DoFs, namely the 6-DoF vehicular pose (position and orientation), including the height direction and the roll and pitch angles.
• The unique characteristics of each vehicle, such as its size, minimum turning radius, or maximum travelable inclination angle, depending on the driving speed.
• Regulations necessary for vehicular operation in different situations, such as maintaining the driving speed and suppressing changes of posture.
• Feasible processing time to identify a solution, even in relatively large-scale environments.

The purpose of this research is to establish a novel motion planner for off-road UGVs that addresses all the aforementioned issues. Specifically, when the user specifies the initial pose and the target pose of the UGV with respect to an environmental map composed of a 3D point cloud provided a priori, the motion planner solves, offline, the problem of generating a path that connects these two states. We propose an adaptive methodology for global motion planning; here, "adaptive" means that the proposed methodology performs appropriate planning that satisfies the different conditions defined by the vehicle characteristics and the regulations. A random-sampling-based scheme (Mov. 1) is applied to carry out global path planning. Regarding the scale of the environment map, we treated scales spanning several hundred meters as large-scale in this study, and the proposed motion planner was applied to environmental maps of this size. Experimental results (Fig. 1) showed that the proposed off-road motion planner can generate an appropriate path that satisfies the vehicle characteristics and the predefined regulations.
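The sketch below illustrates, under assumed placeholder thresholds, how such a planner can reject sampled candidate nodes that violate the vehicle characteristics (minimum turning radius) or the regulations (maximum inclination); the terrain query interface is hypothetical.

```python
import numpy as np

MIN_TURN_RADIUS = 4.0             # [m], placeholder vehicle characteristic
MAX_INCLINATION = np.deg2rad(20)  # placeholder regulation

def candidate_is_feasible(p_prev, p_new, yaw_prev, yaw_new, terrain):
    """Accept a sampled motion only if it satisfies the constraints.

    terrain(x, y) is assumed to return (height, roll, pitch) interpolated
    from the a-priori 3D point-cloud map.
    """
    # 1) Turning-radius constraint: arc length / heading change >= R_min.
    dist = np.linalg.norm(np.asarray(p_new) - np.asarray(p_prev))
    dyaw = abs((yaw_new - yaw_prev + np.pi) % (2 * np.pi) - np.pi)
    if dyaw > 1e-6 and dist / dyaw < MIN_TURN_RADIUS:
        return False
    # 2) Inclination constraint from the 6-DoF pose on the terrain.
    _, roll, pitch = terrain(*p_new)
    if max(abs(roll), abs(pitch)) > MAX_INCLINATION:
        return False
    return True
```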


Mov. 1 Random sampling based scheme for global motion planning.

Fig. 1 Experimental results in the simulation environment: (a) all generated nodes, the output path, and the changes of several variables for each node on the generated motion under the low-speed regulation; (b) the same under the high-speed regulation.

Autonomous End-to-end Motion Control of Exploration Robot Based on Deep Reinforcement Learning
In this research, we propose a novel approach that allows exploration robots to solve navigation problems in complex environments including rough terrain. It is difficult for robots to navigate autonomously without prior information such as an environmental map. Recently, advances in deep reinforcement learning (DRL) have made it possible to achieve autonomous motion control without such a map. In this respect, we apply DRL to realize fully autonomous navigation on rough terrain for the exploration robot. The robot generates suitable control motions from observed depth information through a neural network.
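A minimal sketch of such an end-to-end network, assuming PyTorch: a small convolutional encoder maps a depth image to a normalized (linear, angular) velocity command. The architecture and input size are illustrative, not the one used in the paper, and the DRL training loop is omitted.

```python
import torch
import torch.nn as nn

class DepthPolicy(nn.Module):
    """Illustrative end-to-end policy: depth image in, (v, w) out.
    In DRL this network would be trained by an actor-critic or
    Q-learning objective, not with supervised labels."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten())
        self.head = nn.Sequential(
            nn.Linear(64 * 7 * 7, 256), nn.ReLU(),
            nn.Linear(256, 2), nn.Tanh())   # normalized (v, w) in [-1, 1]

    def forward(self, depth):               # depth: (B, 1, 84, 84)
        return self.head(self.encoder(depth))

# Usage: one observation in, one velocity command out
# (scale the output by the robot's v_max and w_max).
policy = DepthPolicy()
cmd = policy(torch.rand(1, 1, 84, 84))
```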


Mov. 2 Autonomous motion control using DRL for an exploration robot on rough terrain.

Related Papers
• Zijie Wang, Yonghoon Ji, Hiromitsu Fujii, and Hitoshi Kono, "Autonomous Motion Control Using Deep Reinforcement Learning for Exploration Robot on Rough Terrain," Proceedings of the 2022 IEEE/SICE International Symposium on System Integration (SII2022), pp. 1021-1025, Narvik, Norway, January 2022. [doi:10.1109/SII52469.2022.9708814]
• Shinya Katsuma, Hanwool Woo, Yonghoon Ji, Yusuke Tamura, Atsushi Yamashita, and Hajime Asama, "Efficient Motion Planning for Mobile Robots Dealing with Changes in Rough Terrain," Proceedings of the 1st IFAC Workshop on Robot Control (WROCO2019), pp. 460-463, Daejeon, Korea, September 2019.
• 勝間 慎弥, 禹 ハンウル, 池 勇勳, 田村 雄介, 山下 淳, 淺間 一, "不整地の環境変化に効率的に対応する移動ロボットの動作計画," 日本機械学会ロボティクス・メカトロニクス講演会19講演論文集 (ROBOMECH2019), 広島, June 2019.
• Yonghoon Ji, Yusuke Tanaka, Yusuke Tamura, Mai Kimura, Atsushi Umemura, Yoshiharu Kaneshima, Hiroki Murakami, Atsushi Yamashita, and Hajime Asama, "Adaptive Motion Planning Based on Vehicle Characteristics and Regulations for Off-Road UGVs," IEEE Transactions on Industrial Informatics, Vol. 15, No. 1, pp. 599-611, ISSN 1551-3203, January 2019. [doi:10.1109/TII.2018.2870662] (Impact Factor 7.377)
• Yuki Doi, Yonghoon Ji, Yusuke Tamura, Yuki Ikeda, Atsushi Umemura, Yoshiharu Kaneshima, Hiroki Murakami, Atsushi Yamashita and Hajime Asama, "Robust Path Planning against Pose Errors for Mobile Robots in Rough Terrain," Advances in Intelligent Systems and Computing 867, Intelligent Autonomous Systems 15 (Marcus Strand, Rudiger Dillmann, Emanuele Menegatti and Stefano Ghidoni (Eds.)) (Proceedings of the 15th International Conference IAS-15, Held July 2018, Baden-Baden (Germany)), Springer, pp. 27-39, ISSN. 2194-5357, January 2019 (Online: eISSN. 2194-5365). [doi:10.1007/978-3-030-01370-7_3]
• 土居 悠輝, 池 勇勳, 田村 雄介, 池田 裕樹, 梅村 篤志, 金島 義治, 村上 弘記, 山下 淳, 淺間 一, "不整地走行移動ロボットの位置誤差を考慮したロバストな経路計画," 第18回計測自動制御学会システムインテグレーション部門講演会講演論文集 (SI2017), pp. 3438-3443, 仙台, December 2017.
• 田中 佑典, 池 勇勳, 田村 雄介, 木村 麻衣, 梅村 篤志, 金島 義治, 村上 弘記, 山下 淳, 淺間 一, "3次元環境地図を用いた不整地走行無人車両の経路計画," 第22回ロボティクスシンポジア講演予稿集, pp. 203-204, 群馬, March 2017.
• 田中 佑典, 池 勇勳, 河野 仁, 田村 雄介, 江本 周平, 板野 肇, 村上 弘記, 山下 淳, 淺間 一, "複数台移動ロボットによる環境計測結果に基づいた不整地走行のための移動ロボットの進路方向決定手法の構築," 第21回ロボティクスシンポジア, pp. 250-255, 長崎, March 2016
• Yusuke Tanaka, Yonghoon Ji, Yusuke Tamura, Atsushi Yamashita, and Hajime Asama, "Course Detection from Integrated 3D Environment Measurement by Multiple Mobile Robots," Proceedings of the 2015 JSME/RMD International Conference on Advanced Mechatronics (ICAM2015), pp.237-238, Tokyo, Japan, December 2015. [Link]
• 田中 佑典, 池 勇勳, 山下 淳, 淺間 一, "移動ロボットの性能に応じた走行可能性推定が可能な不整地に対する走行可能性推定および行動生成手法," 精密工学会誌, Vol. 81, No. 12, pp. 1119-1126, ISSN 1348-8724, December 5 2015 (Online: eISSN 1881-8722). [doi:10.2493/jjspe.81.1119]
• Yusuke Tanaka, Yonghoon Ji, Atsushi Yamashita, and Hajime Asama, "Fuzzy Based Traversability Analysis for a Mobile Robot on Rough Terrain," Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA2015), pp. 3965-3970, Seattle, USA, May 2015. [Link]
• 田中 佑典, 池 勇勳, 山下 淳, 淺間 一, "ファジィ推論を利用した移動ロボットのための不整地の走行可能性推定手法の構築," 日本機械学会ロボティクス・メカトロニクス講演会15 講演論文集 (ROBOMECH2015), 1P1-J06, pp. 1-4, 京都, May 2015.
• 田中 佑典, 池 勇勳, 山下 淳, 淺間 一, "ファジィ推論を利用した不整地の走行可能性推定に基づく移動ロボットの進路方向判断手法の構築," 第32回日本ロボット学会学術講演会予稿集 (RSJ2014), RSJ2014AC2D2-01, pp. 1-4, 福岡, September 2014


Construction of Intelligent Space [ April 2014 ~ May 2019 ]

Research Background
Automatic Calibration of Camera Sensor Network
Figure 1 (a) illustrates an example of the map information built by typical simultaneous localization and mapping (SLAM) schemes. However, in human-robot coexistence environments, such map information is a static model and cannot deal with dynamic environments, because it cannot reflect changes in the environment (e.g., moving objects). On the other hand, the concept of an intelligent space, illustrated in Fig. 1 (b), which constructs a distributed sensor network in the external environment, can monitor what is occurring in it.

Distributed sensor networks installed in external environments can recognize various events that occur in the space, so such an intelligent space can be of much service in human-robot coexistence environments, as shown in Fig. 1 (b). Distributed camera sensor networks with multi-camera systems provide the most general infrastructure for constructing such an intelligent space. To obtain reliable information from such a system, pre-calibration of all the cameras in the environment (i.e., determining the absolute position and orientation of each camera) is essential but extremely tedious. This research considers an automatic calibration method for camera sensor networks based on 3D texture map information of a given environment, as shown in Fig. 1 (a). In other words, this research solves a global localization problem for the poses of the camera sensor network given the 3D texture map information. The complete 6-DoF calibration system proposed in this research uses only the environment map information; therefore, the proposed scheme calibrates the parameters easily. The results shown in Mov. 1 demonstrate that the proposed system can successfully calibrate the complete external camera parameters.
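One way to realize this global localization, sketched below under the assumption that 3D-2D correspondences between the texture map and a camera image have already been established, is a standard PnP solution with OpenCV; the correspondence search, which is the hard part of the problem, is not shown.

```python
import cv2
import numpy as np

def calibrate_camera_pose(map_points_3d, image_points_2d, K):
    """Recover one camera's absolute 6-DoF pose from correspondences
    between the 3D texture map and its image.

    map_points_3d: N x 3 world points from the map; image_points_2d:
    N x 2 matching pixels; K: 3 x 3 camera intrinsic matrix.
    Returns the 4 x 4 camera-from-world transform.
    """
    ok, rvec, tvec = cv2.solvePnP(
        map_points_3d.astype(np.float64),
        image_points_2d.astype(np.float64),
        K, distCoeffs=None, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)       # rotation vector -> matrix
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, tvec.ravel()
    return T
```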

Fig. 1 Environmental information: (a) static information from the map and (b) dynamic information from the sensor network, which is a component of the intelligent space.


Mov. 1 Experimental results of automatic calibration of camera sensor network using wireless IP camera.

Indoor Positioning System Based on Distributed Camera Sensor Network
The importance of accurate position estimation in mobile robot navigation cannot be overemphasized. In outdoor environments, a global positioning system (GPS) is widely used to measure the position of moving objects; however, satellite-based GPS does not work indoors. This research proposes an indoor positioning system (IPS) that uses calibrated camera sensor networks for mobile robot navigation.

The IPS information is obtained by generating a bird's-eye image from multiple camera images; thus, the proposed IPS can provide accurate position information when a moving object is detected from multiple camera views. We evaluate the proposed IPS in a real environment with a wireless camera sensor network. The results shown in Mov. 2 demonstrate that the proposed IPS based on the camera sensor network can provide accurate position information of moving objects.
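A minimal sketch of the bird's-eye step, assuming each camera's floor homography has already been derived from its calibrated pose: images are warped onto the floor plane, and a detected object's centroid is converted to metric coordinates. The scale and function names below are assumptions.

```python
import cv2
import numpy as np

def birdseye(image, H_floor, out_size=(500, 500)):
    """Warp one calibrated camera image onto the floor plane.
    H_floor: 3 x 3 image-to-floor homography (assumed precomputed from
    the camera's calibrated pose); warped views from several cameras
    can then be overlaid."""
    return cv2.warpPerspective(image, H_floor, out_size)

def object_position(mask_birdseye, meters_per_pixel=0.01):
    """Centroid of a detected object in the fused bird's-eye image,
    converted to metric floor coordinates (scale is an assumption)."""
    vs, us = np.nonzero(mask_birdseye)
    if us.size == 0:
        return None
    return us.mean() * meters_per_pixel, vs.mean() * meters_per_pixel
```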


Mov. 2 Experimental results of IPS for mobile robot localization.

Related Papers
• Yonghoon Ji, Atsushi Yamashita, Kazunori Umeda, and Hajime Asama, "Automatic Camera Pose Estimation Based on a Flat Surface Map," Proceedings of the SPIE 11172, 14th International Conference on Quality Control by Artificial Vision (QCAV2019), Vol. 11172, pp. 111720X-1-111720X-6, Mulhouse, France, May 2019. [Link]
• 池 勇勳, 山下 淳, 梅田 和昇, 淺間 一, "人工物環境における直線情報を用いたカメラの外部パラメータ推定法," 第19回計測自動制御学会システムインテグレーション部門講演会講演論文集 (SI2018), 3B3-17, pp. 2598-2600, 大阪, December 2018. (SI2018優秀講演賞受賞)
• Yonghoon Ji, Atsushi Yamashita, and Hajime Asama, "Automatic Calibration of Camera Sensor Network Based on 3D Texture Map Information," Robotics and Autonomous Systems, Vol. 87, pp. 313-328, ISSN 0921-8890, January 2017 (Online: October 5 2016). [doi:10.1016/j.robot.2016.09.015] (Impact Factor 2.928)
• Yonghoon Ji, Atsushi Yamashita, and Hajime Asama, "Indoor Positioning System Based on Distributed Camera Sensor Networks for Mobile Robot," Advances in Intelligent Systems and Computing 531, Intelligent Autonomous Systems 14 (Weidong Chen, Koh Hosoda, Emanuele Menegatti, Masahiro Shimizu and Hesheng Wang (Eds.)) (Proceedings of the 14th International Conference IAS-14, Held July 2016, Shanghai (China)), Springer, pp. 1089-1101, ISSN. 2194-5357, February 2017 (Online: eISSN. 2194-5365). [doi:10.1007/978-3-319-48036-7]
• 지 용훈, Atsushi Yamashita, Hajime Asama, "실내 환경에서의 이동로봇의 위치추정을 위한 카메라 센서 네트워크 기반의 실내 위치 확인 시스템," 제어・로봇・시스템학회 논문지, Vol. 22, No. 11, pp. 952-959, ISSN 1976-5622, November 2016 (Online: eISSN 2233-4335). [Link]
• 지 용훈, Atsushi Yamashita, and Hajime Asama, "카메라 네트워크를 활용한 3차원 지도정보 기반의 실내 위치 확인 시스템," 2016 제31회 제어・로봇・시스템학회 학술대회, pp. 1-2, 서울, March 2016.
• Yonghoon Ji, Atsushi Yamashita, and Hajime Asama, "Automatic Camera Pose Estimation Based on Textured 3D Map Information," Proceedings of the 2015 JSME/RMD International Conference on Advanced Mechatronics (ICAM2015), pp. 100-101, Tokyo, Japan, December 2015. [Link] (ICAM2015 Honorable Mention)
• Yonghoon Ji, Atsushi Yamashita, and Hajime Asama, "Automatic Calibration and Trajectory Reconstruction of Mobile Robot in Camera Sensor Network," Proceedings of the 11th Annual IEEE International Conference on Automation Science and Engineering (CASE2015), pp. 206-211, Gothenburg, Sweden, August 2015. [Link]
• 池 勇勳, 山下 淳, 淺間 一, "移動ロボットによるカメラネットワークの自動キャリブレーション-知能化空間における地図情報による性能向上-," 日本機械学会ロボティクス・メカトロニクス講演会15講演論文集 (ROBOMECH2015), 2A1-P06, pp. 1-2, 京都, May 2015.
• 池 勇勳, 山下 淳, 淺間 一, "知能化空間での移動ロボットによる自己位置推定と自動カメラキャリブレーションの同時実行," 第20回ロボティクスシンポジア講演予稿集, pp. 172-177, 軽井沢, March 2015.
• 池 勇勳, 山下 淳, 淺間 一, "環境知能化による移動ロボットのモンテカルロ位置推定法の性能向上," 第32回日本ロボット学会学術講演会予稿集 (RSJ2014), RSJ2014AC3J1-06, pp. 1-4, 福岡, September 2014.


Military UGV [ January 2010 ~ March 2013 ]

Platform and Sensor Configuration
Pioneer 3AT (all terrain)
• One of the most popular outdoor robot platforms
• Length: 0.65 m, Height: 0.2 m, Width: 0.66 m
• Max speed: 0.7 m/s; slope mobility: 25°; max payload: 30 kg

Fig. 1 Platform and sensor configuration
Research Contents
DSM (digital surface model)
• One of the most popular map formats for representing outdoor environments, generated by an aerial mapping system
• Digital representation of the ground surface using a 2D grid
• Each grid cell stores a single elevation value (2.5D)
• There are many discrepancies between the DSM and the real environment

Fig. 2 Example of DSM built by aerial mapping system

Local 3D Map
• Accurate representation of the real outdoor environment, built by a robot with a tilting laser scanner
• Each grid cell contains the number of surface levels and the minimum and maximum elevation at each level
• ICP (iterative closest point)-based integration of local maps (outdoor SLAM)

Fig. 3 ICP-based outdoor 3D SLAM

Combination of DSM and Satellite Image for Virtual Reality
• Texture mapping on the DSM using a satellite image
• The environment becomes much easier to understand after combining the satellite image

Fig. 4 Combination of DSM and satellite image for virtual reality

Particle filter-based outdoor localization
• Localization by matching the environment model and sensor data
• The reference map is built by an aerial mapping system or by a robot with a tilting laser scanner
• Monte Carlo localization (MCL): based on a range sensor for map matching (see the sketch below)
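The sketch below shows one predict-update-resample cycle of MCL in this setting; the motion noise, the likelihood interface to the reference map, and the resampling threshold are assumptions.

```python
import numpy as np

def mcl_update(particles, weights, control, scan, ref_map,
               motion_noise=0.05):
    """One MCL cycle. particles: N x 3 array of (x, y, yaw) hypotheses;
    control: odometry increment (dx, dy, dyaw) in the robot frame;
    ref_map.likelihood(pose, scan) is assumed to score how well the
    range scan matches the DSM / 3D reference map at a candidate pose."""
    n = len(particles)
    # Predict: apply odometry with added noise.
    c, s = np.cos(particles[:, 2]), np.sin(particles[:, 2])
    particles[:, 0] += c * control[0] - s * control[1]
    particles[:, 1] += s * control[0] + c * control[1]
    particles[:, 2] += control[2]
    particles += np.random.normal(0, motion_noise, particles.shape)
    # Update: weight particles by map-matching likelihood.
    weights = np.array([ref_map.likelihood(p, scan) for p in particles])
    weights /= weights.sum()
    # Resample (systematic) when the effective sample size drops.
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = np.searchsorted(np.cumsum(weights),
                              (np.arange(n) + np.random.rand()) / n)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights
```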

Fig. 5 Concept of map matching-based outdoor localization



Mov. 1 Particle filter-based local localization based on DSM

Accurate update of DSM by using local 3D map
• Overcomes the limitations of the DSM representation
• The 2.5D DSM and the local 3D map can be represented at once (a fusion sketch follows)
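A simple fusion sketch, assuming the local 3D map is already registered in the world frame: observed cells of the DSM grid are updated from the robot's points. Taking the per-cell maximum is one possible update policy, not necessarily the one used here.

```python
import numpy as np

def update_dsm(dsm, local_points, origin, cell=0.5):
    """Update DSM cells with elevations measured by the robot.

    dsm: 2D array of elevations from the aerial mapping system.
    local_points: N x 3 points of the robot's local 3D map (world frame).
    origin: world (x, y) of dsm[0, 0]; cell: grid resolution [m]
    (both are assumptions). Cells the robot actually observed are
    refreshed, reducing the DSM-vs-reality discrepancies.
    """
    ix = ((local_points[:, 0] - origin[0]) / cell).astype(int)
    iy = ((local_points[:, 1] - origin[1]) / cell).astype(int)
    ok = (ix >= 0) & (ix < dsm.shape[1]) & (iy >= 0) & (iy < dsm.shape[0])
    # Keep the maximum elevation per cell, matching the 2.5D representation.
    np.maximum.at(dsm, (iy[ok], ix[ok]), local_points[ok, 2])
    return dsm
```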

Fig. 6 Effect of updating DSM: non-updated DSM built by aerial mapping system, and updated DSM fused with local elevation map



Mov. 2 Accurate update of DSM by using local 3D map

Related Papers
• Yong-Ju Lee, Yong-Hoon Ji, Jae-Bok Song, and Sang-Hyun Joo, “Performance Improvement of ICP-based Outdoor SLAM Using Terrain Classification,” Proceedings of the International Conference on Advanced Mechatronics (ICAM2010), pp. 243-246, Osaka, Japan, October 2010.
• Yong-Hoon Ji, Sung-Ho Hong, Jae-Bok Song, and Ji-Hoon Choi, “DSM Update for Robust Outdoor Localization Using ICP-based Scan Matching with COAG Features of Laser Range Data,” Proceedings of the IEEE/SICE International Symposium on System Integration (SII2011), pp. 1245-1250, Kyoto, Japan, December 2011.


Surveillance Robot [ January 2011 ~ December 2011 ]

Platform and Sensor Configuration
Jaguar
• Length: 0.65 m, Height: 0.2 m, Width: 0.66 m
• 2 tracks for driving and steering, plus 2 flipper arms (stair climbing is possible)
• Max speed: 1.5 m/s; slope mobility: 45°; max payload: 15 kg

Fig. 1 Platform and sensor configuration
Research Contents
GPS-based outdoor localization
• Extended Kalman filter (EKF)-based sensor fusion
• Odometry and roll, pitch, and yaw from the IMU: used in the EKF prediction step
• GPS: used in the EKF update step (see the sketch below)
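A minimal planar EKF sketch of this fusion, with a unicycle motion model: odometry and the IMU yaw rate drive the prediction, and a GPS position fix drives the update. The covariances Q and R are assumptions.

```python
import numpy as np

def ekf_step(x, P, u, z_gps, Q, R, dt):
    """One EKF cycle for the planar pose x = (x, y, yaw).
    u = (v, yaw_rate) from wheel odometry and the IMU drives the
    prediction; z_gps = (x, y) drives the update."""
    th = x[2]
    F = np.array([[1.0, 0.0, -dt * u[0] * np.sin(th)],
                  [0.0, 1.0,  dt * u[0] * np.cos(th)],
                  [0.0, 0.0, 1.0]])               # motion-model Jacobian
    x = x + dt * np.array([u[0] * np.cos(th), u[0] * np.sin(th), u[1]])
    P = F @ P @ F.T + Q                           # predicted covariance
    H = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])               # GPS observes (x, y)
    y = z_gps - H @ x                             # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x = x + K @ y
    P = (np.eye(3) - K @ H) @ P
    return x, P
```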

Fig. 2 EKF-based outdoor localization by using wheel odometry and GPS information

Gradient method-based outdoor global path planning
• Optimal path generation using map information, from the robot's initial position to the goal
• Extended 2D gradient method
• The local minimum problem can be avoided
• A traversability map is used (see the sketch after this list)
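The sketch below captures the key property named above: the cost-to-goal field is propagated over the traversability map with a Dijkstra-style wavefront, so steepest descent from the start cannot be trapped in a local minimum. Grid connectivity and positive per-cell costs are simplifying assumptions.

```python
import heapq
import numpy as np

def gradient_path(cost_map, start, goal):
    """Wavefront from the goal, then steepest descent from the start.
    cost_map: 2D array of positive traversal costs (np.inf = obstacle);
    start, goal: (row, col) tuples."""
    h, w = cost_map.shape
    dist = np.full((h, w), np.inf)
    dist[goal] = 0.0
    pq = [(0.0, goal)]
    while pq:                                    # Dijkstra propagation
        d, (r, c) = heapq.heappop(pq)
        if d > dist[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and d + cost_map[rr, cc] < dist[rr, cc]:
                dist[rr, cc] = d + cost_map[rr, cc]
                heapq.heappush(pq, (dist[rr, cc], (rr, cc)))
    if not np.isfinite(dist[start]):
        return None                              # goal unreachable
    path, cur = [start], start
    while cur != goal:                           # follow the gradient
        r, c = cur
        cur = min(((r + dr, c + dc) for dr, dc in
                   ((1, 0), (-1, 0), (0, 1), (0, -1))
                   if 0 <= r + dr < h and 0 <= c + dc < w),
                  key=lambda p: dist[p])
        path.append(cur)
    return path
```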

Fig. 3 Global path extraction by gradient method

Implementation of a manipulator on a tracked robot
• Tracked mobile robot + 4-DOF manipulator with stabilization control
• Efficient unmanned surveillance
• Absorbs vibration on rugged terrain while driving

Mov. 1 Manipulator with stabilization control

Autonomous navigation
• Environment: indoor and outdoor
• Localization, path planning, and motion control algorithms are integrated

Mov. 2 Autonomous navigation of tracked robot

Related Patents
• Jae-Bok Song, Yong-Hoon Ji, Jae-Kwan Ryu, Jong-Won Kim, and Joo-Hyun Baek, “Apparatus for estimating location of moving object for autonomous driving,” Korean Intellectual Property Office (KIPO), #10-2012-0025468.
• Jae-Bok Song, Yong-Hoon Ji, Jae-Kwan Ryu, Jong-Won Kim, and Joo-Hyun Baek, “Method for estimating location of mobile robot,” Korean Intellectual Property Office (KIPO), #10-2012-0025469.


Transportation Robot [ July 2010 ~ April 2012 ]

Platform
Trabot
• Length: 0.8 m, Height: 1.0 m, Width: 0.5 m
• 2 motors for steering and another 2 for propulsion
• Top speed: 1.0 m/s (flat ground, no rider)

Fig. 1 Transportation robot platform

Research Contents
Lane extraction
• Image processing: segmentation, clustering, and labeling methods are used
• Lanes are extracted stably by picking up features generated by segmentation and clustering
• Extracted lane markers are useful for local localization outdoors (see the sketch below)
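An illustrative OpenCV pipeline for the steps listed above (segmentation, clustering via connected components, labeling); the intensity threshold and shape filters are assumptions tuned for white lane paint.

```python
import cv2
import numpy as np

def extract_lane_markers(bgr):
    """Return centroids of lane-marker candidates in a road image."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY)
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN,
                              np.ones((3, 3), np.uint8))
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    # Keep elongated components as lane-marker candidates (labeling step).
    return [centroids[i] for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] > 100
            and stats[i, cv2.CC_STAT_HEIGHT] > 2 * stats[i, cv2.CC_STAT_WIDTH]]
```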

Fig. 2 Image processing to extract lane feature

Obstacle avoidance
• DWA (dynamic window approach)
• Obstacle avoidance runs simultaneously with goal-directed motion
• Dynamic window: the area in velocity space that the robot can reach, without colliding with an obstacle, during the given time step at the given robot velocity
• The velocity within the dynamic window that reaches the goal point fastest is selected (see the sketch below)
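A compact DWA sketch: candidate velocities inside the dynamic window are forward-simulated, inadmissible (colliding) arcs are discarded, and the rest are scored by goal progress, clearance, and speed. All weights and limits below are placeholders.

```python
import numpy as np

def dwa_select(pose, v0, w0, goal, obstacles, dt=0.1, horizon=2.0,
               a_max=0.5, alpha_max=1.0, v_max=1.0, w_max=1.0):
    """Pick the (v, w) in the dynamic window with the best objective value.
    obstacles: M x 2 array of obstacle positions."""
    best, best_score = (0.0, 0.0), -np.inf
    for v in np.linspace(max(0.0, v0 - a_max * dt),
                         min(v_max, v0 + a_max * dt), 5):
        for w in np.linspace(max(-w_max, w0 - alpha_max * dt),
                             min(w_max, w0 + alpha_max * dt), 11):
            x, y, th = pose                      # forward-simulate the arc
            traj = []
            for _ in range(int(horizon / dt)):
                th += w * dt
                x += v * np.cos(th) * dt
                y += v * np.sin(th) * dt
                traj.append((x, y))
            traj = np.array(traj)
            if len(obstacles):
                d = np.min(np.linalg.norm(obstacles[None] - traj[:, None],
                                          axis=2))
            else:
                d = np.inf
            if d < 0.3:                          # discard colliding arcs
                continue
            progress = -np.linalg.norm(np.array(goal) - traj[-1])
            score = progress + 0.5 * min(d, 2.0) + 0.2 * v
            if score > best_score:
                best, best_score = (v, w), score
    return best                                  # (v, w) command to execute
```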

Fig. 3 Determining DWA velocity from objective function


Mov. 1 DWA-based obstacle avoidance

Related Paper
• Yong-Hoon Ji, Ji-Hun Bae, Jae-Bok Song, Joo-Hyun Baek, and Jae-Kwan Ryu, “Outdoor Localization through GPS Data and Matching of Lane Markers for a Mobile Robot,” Journal of Institute of Control, Robotics and Systems, Vol. 18, No. 6, pp. 594-600, June 2012.