Citation: MU Qi, LIANG Xin, GUO Yuanjie, WANG Yuhao, LI Zhanli. An edge-aware enhanced visual SLAM method for underground coal mines[J]. Coal Geology & Exploration.

An edge-aware enhanced visual SLAM method for underground coal mines

More Information
  • Received Date: August 22, 2024
  • Revised Date: February 04, 2025

Abstract

[Objective] Low illumination, weak textures, and degraded structural features are common in underground coal mines, leaving visual SLAM (Simultaneous Localization and Mapping) systems with too few effective features or high mismatch rates and severely limiting localization accuracy and robustness. [Methods] An edge-aware enhancement-based visual SLAM method is therefore proposed. First, an edge-aware constrained low-light image enhancement module is constructed: the Retinex algorithm is optimized with an adaptive-scale gradient domain guided filter to produce images with clear textures and uniform illumination, markedly improving feature extraction under low and uneven lighting. Next, an edge-aware enhanced feature extraction and matching module is built into the visual odometry to improve feature detectability and matching accuracy in weakly textured and weakly structured environments: point and line features are extracted with the ORB (Oriented FAST and Rotated BRIEF) and EDLines (Edge Drawing Lines) algorithms, and precise matching is achieved through GMS (Grid-based Motion Statistics) and ratio-test strategies. Finally, the method is evaluated against ORB-SLAM2 and ORB-SLAM3 on the TUM dataset and a real-world underground coal mine dataset, covering image enhancement, feature matching, and localization. [Results and Conclusions] The results show that: (1) on the TUM dataset, the proposed method reduces the root mean square error (RMSE) of the absolute and relative trajectory errors by 4%–38.46% and 8.62%–50% relative to ORB-SLAM2, and by 0%–61.68% and 3.63%–47.05% relative to ORB-SLAM3, respectively; (2) on the real-world underground coal mine dataset, the localization trajectory of the proposed method is closer to the camera's reference trajectory; (3) the proposed method effectively improves the accuracy and robustness of visual SLAM in the feature-degraded scenes of underground coal mines, offering a technical solution for applying visual SLAM in coal mines. Research on visual SLAM methods for feature-degraded underground environments is important for advancing the robotization of mobile equipment in coal mines.
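The enhancement module couples Retinex decomposition with an edge-preserving filter. The sketch below is only a rough illustration: a minimal single-scale Retinex variant that uses OpenCV's standard guided filter (cv2.ximgproc.guidedFilter from opencv-contrib-python) as a stand-in for the paper's adaptive-scale gradient domain guided filter; the radius and eps values are illustrative assumptions, not the paper's settings.

```python
# Minimal single-scale Retinex sketch with a guided-filter illumination estimate.
# Assumes opencv-contrib-python (provides cv2.ximgproc); parameters are illustrative.
import cv2
import numpy as np

def retinex_guided_enhance(bgr, radius=16, eps=1e-2):
    """Enhance a low-light 8-bit BGR image and return an 8-bit BGR result."""
    img = bgr.astype(np.float32) / 255.0 + 1e-6                  # avoid log(0)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Edge-preserving illumination estimate (stand-in for the paper's
    # adaptive-scale gradient domain guided filter).
    illum = cv2.ximgproc.guidedFilter(gray, gray, radius, eps)
    illum = np.clip(illum, 1e-3, 1.0)
    # Retinex decomposition: reflectance = log(image) - log(illumination).
    reflect = np.log(img) - np.log(illum[..., None])
    # Stretch the reflectance back to [0, 255] for downstream feature extraction.
    reflect = cv2.normalize(reflect, None, 0, 255, cv2.NORM_MINMAX)
    return reflect.astype(np.uint8)
```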
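The front end combines ORB point features and EDLines line segments with GMS and ratio-test match filtering. The following sketch shows one plausible arrangement of these off-the-shelf components, not the paper's implementation: it assumes opencv-contrib-python, where GMS is exposed as cv2.xfeatures2d.matchGMS and EDLines through cv2.ximgproc.createEdgeDrawing (cv2.ximgproc.createFastLineDetector is an alternative if EdgeDrawing is unavailable); parameter values are placeholders.

```python
# Sketch of the point/line feature front end (assumes opencv-contrib-python and
# 8-bit grayscale input frames); parameter values are placeholders.
import cv2

def extract_and_match(img1, img2, ratio=0.75):
    """Return GMS-filtered ORB matches and EDLines segments for two frames."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Lowe-style ratio test on brute-force Hamming matches.
    bf = cv2.BFMatcher(cv2.NORM_HAMMING)
    good = []
    for pair in bf.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])

    # GMS keeps only matches supported by consistent motion in their grid cells.
    size1 = (img1.shape[1], img1.shape[0])   # (width, height)
    size2 = (img2.shape[1], img2.shape[0])
    gms = cv2.xfeatures2d.matchGMS(size1, size2, kp1, kp2, good,
                                   withRotation=False, withScale=False,
                                   thresholdFactor=6)

    # EDLines segments via the Edge Drawing detector.
    ed = cv2.ximgproc.createEdgeDrawing()
    ed.detectEdges(img1)
    lines1 = ed.detectLines()
    ed.detectEdges(img2)
    lines2 = ed.detectLines()
    return gms, lines1, lines2
```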
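For context on the reported figures, the accuracy metrics are RMSE values of the absolute and relative trajectory errors. The snippet below is a simplified, translation-only computation of both, assuming the estimated and ground-truth trajectories are already time-associated and aligned; tools such as evo or the TUM benchmark scripts handle association and SE(3)/Sim(3) alignment, and the standard relative pose error also accounts for rotation.

```python
# Simplified trajectory-error metrics (translation-only), assuming the estimated
# and ground-truth positions are already time-associated and aligned (N, 3) arrays.
import numpy as np

def ate_rmse(est_xyz, gt_xyz):
    """RMSE of the absolute trajectory error over the position sequences."""
    err = est_xyz - gt_xyz
    return float(np.sqrt(np.mean(np.sum(err ** 2, axis=1))))

def rpe_rmse(est_xyz, gt_xyz, delta=1):
    """RMSE of a translation-only relative error over a fixed frame offset
    (the standard RPE also composes the rotational parts)."""
    d_est = est_xyz[delta:] - est_xyz[:-delta]
    d_gt = gt_xyz[delta:] - gt_xyz[:-delta]
    err = np.linalg.norm(d_est - d_gt, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))
```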