Show simple item record

dc.contributor.author: Poudel, Rajeev
dc.contributor.author: de Lima, Luciano Netto
dc.contributor.author: Andrade, Fabio Augusto de Alcantara
dc.date.accessioned: 2024-04-10T11:58:14Z
dc.date.available: 2024-04-10T11:58:14Z
dc.date.created: 2024-01-17T12:45:59Z
dc.date.issued: 2023
dc.identifier.citation: Poudel, R., Lima, L., & Andrade, F. (2023, January 3 & 7). A Novel Framework To Evaluate and Train Object Detection Models for Real-Time Victims Search and Rescue at Sea With Autonomous Unmanned Aerial Systems Using High-Fidelity Dynamic Marine Simulation Environment [Paper presentation]. Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) Workshops, 2023 (pp. 239-247). Waikoloa.
dc.identifier.isbn: 978-1-6654-9346-8
dc.identifier.uri: https://hdl.handle.net/11250/3125810
dc.description.abstract: This work presents a novel framework for controlling an Unmanned Aerial System (UAS) while detecting objects in real time, with visible detections containing class names, bounding boxes, and confidence scores, in a configurable high-fidelity sea simulation environment. Major attributes such as the number of floating human victims and debris, ocean waves and shading, weather conditions (rain, snow, and fog), sun brightness and intensity, and camera exposure and brightness can easily be manipulated. Developed using Unreal Engine, Microsoft AirSim, and the Robot Operating System (ROS), the framework was first used to find the best possible configuration of UAS flight altitude and camera brightness for high average prediction confidence of human victim detection. Autonomous real-time test missions were then carried out to calculate the accuracies of two pretrained You Only Look Once Version 7 (YOLOv7) models: YOLOv7 retrained on the SeaDronesSee Dataset (YOLOv7-SDS) and YOLOv7 originally trained on the Microsoft COCO Dataset (YOLOv7-COCO), which achieved high values of 97.8% and 93.79%, respectively. Furthermore, it is proposed that the framework developed in this study can be reverse-engineered for autonomous real-time training with automatic ground-truth labeling of images from the game engine, which already holds the details of every object placed in the environment in order to render them onto the screen. This would avoid the cumbersome and time-consuming manual labeling of the large amounts of synthetic data that can be extracted with this framework, which could be a groundbreaking achievement in the field of maritime computer vision.
dc.language.iso: eng
dc.relation.ispartof: 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
dc.relation.uri: https://openaccess.thecvf.com/content/WACV2023W/MaCVi/html/Poudel_A_Novel_Framework_To_Evaluate_and_Train_Object_Detection_Models_WACVW_2023_paper.html
dc.title: A Novel Framework To Evaluate and Train Object Detection Models for Real-Time Victims Search and Rescue at Sea With Autonomous Unmanned Aerial Systems Using High-Fidelity Dynamic Marine Simulation Environment
dc.title.alternative: A Novel Framework To Evaluate and Train Object Detection Models for Real-Time Victims Search and Rescue at Sea With Autonomous Unmanned Aerial Systems Using High-Fidelity Dynamic Marine Simulation Environment
dc.type: Chapter
dc.description.version: acceptedVersion
dc.source.pagenumber: 239-247
dc.identifier.doi: https://doi.org/10.1109/WACVW58289.2023.00030
dc.identifier.cristin: 2228632
cristin.ispublished: true
cristin.fulltext: preprint
cristin.qualitycode: 1


