{"guid":"0c025aaa-29c7-53fa-84c8-b1a4bd8597ca","title":"Investigating the capability of UAV imagery in AI-assisted mapping of Refugee Camps in East Africa","subtitle":null,"slug":"state-of-the-map-2022-academic-track-19438-investigating-the-capability-of-uav-imagery-in-ai-assisted-mapping-of-refugee-camps-in-east-africa","link":"https://2022.stateofthemap.org/sessions/FRJXCQ/","description":"This pilot project is connected to a larger initiative to open-source the assisted mapping platform for Humanitarian OpenStreetMap (HOTOSM) based on Very High Resolution (VHR) drone imagery. The study test and evaluate multiple U-Net based architectures on building segmentation of Refugee Camps in East Africa.\n\nIntroduction\n\nRefugee camps and informal settlements provide accomodation to some of\nthe most vulnerable population, the majority of which are located in Sub-\nSaharan East Africa (UNHCR, 2016). Many of these settlements often lack\nup-to-date maps of which we take for granted in developed settlements. Hav-\ning up-to-date maps are important for assisting administration tasks such as\npopulation estimates and infrastructure development in data impoverished\nenvironments, and thereby encourages economic productivity (Herfort et al.,\n2021). The data inequality between the developed and developing countries\nare often resulted from a lack of commercial interest, especially with the\nrecent trend of corporate OSM mappers (Anderson et al., 2019, Veselovsky\net al., 2021). Such disparity can be reduced using assisted mapping tech-\nnology. To extract geospatial and imagery characteristics of dense urban\nenviornments, a combination of VHR satellite imagery and Machine Learn-\ning (ML) are commonly used (Taubenböck et al., 2018). Classical ML based\nmethods that exploit the textual (e.g. GLCM), spectral, and morphological\ncharacteristics of VHR imagery are based on the principles of Computer\nVision (CV). 
Although many have shown promising results in satellite VHR (1m to 5m resolution) scenarios, such as differentiating slum from non-slum areas (Kuffer et al., 2016; Wurm et al., 2021), in VHR drone imagery (5cm to 10cm resolution) results might suffer from noise caused by the environment and drone-specific problems such as motion artefacts and litter. Recent advances in CV-based Deep Learning might be able to address these issues (Chen et al., 2021; Carrivick et al., 2016).\n\nPurpose of the study\n\nThe study is connected to a larger initiative to open-source the assisted mapping platform in the current Humanitarian OpenStreetMap (HOTOSM) ecosystem. This study is a pilot project to investigate the capabilities of applying semantic segmentation to community open-sourced VHR drone imagery collected by the partner organisation OpenAerialMap. The study aims to rigorously assess the various components and inputs that would contribute to the ML-based mapping system, and to produce a detailed class-based accuracy assessment (Congalton \u0026 Green, 2019). This pilot study focuses on two camps in East Africa, where data are available and the camps lie within a similar savannah ecosystem. This enables highly detailed method testing and analysis of the transferability of results between the two camps.\n\nData and Methodology\n\nThe first camp is located in Dzaleka, Dowa, Malawi, which is sub-divided into the Dzaleka North and Dzaleka main camps. The Dzaleka camps are home to around 40,000 refugees, mainly coming from the African Great Lakes region. The Dzaleka North camp is characterised by newer, spatially well-planned buildings with metal-sheeted roofs, while the southern main camp is characterised by complex, dense mud-walled buildings with stone-lined thatched roofs (UNHCR, 2014). 
The second camp, the Kalobeyei settlement, is a sub-camp of Kakuma, located in the rural county of Turkana, North-West Kenya. The Kalobeyei settlement was home to approximately 34,849 refugees as of 2019. This camp is significantly more spacious and is characterised by spatially well-planned metal-sheeted roofs (UNHCR \u0026 DANIDA, 2019). VHR drone imagery was provided for both camps, and vector labels produced by HOTOSM volunteers were provided for the Dzaleka and Dzaleka North camps.\n\nSince CV-based Deep Learning is highly dependent on the quality of the labelled reference data, especially when performing pixel-based semantic segmentation, it is of crucial importance that care is taken to produce highly accurate labels that ensure successful training (Ng, 2018). A large quantity of the available labels was not produced with such a task in mind; imperfections in labelling around existing drone artefacts could cause the trained model to misclassify such pixels. In order to train a model which performs well on drone imagery, motion artefacts will be a significant feature for the model to learn. The combination of data availability has allowed a unique set of research questions concerning input data quality and experiment setup to surface. Therefore, to test the performance of U-Net and a few of its variations, an additional set of label data was created to supplement the imperfect labels of the Dzaleka camps. Initially, the models will be trained on the pixel-perfect and less complex Kalobeyei dataset; this will then be followed by introducing the more complex Dzaleka datasets. A comparison of baseline performance between the U-Net variations (Ronneberger et al., 2015) and the Open-Cities-AI-Challenge (OCC) winning model is conducted. The baseline experiment aims to keep the hyperparameters (e.g. optimiser, learning rate, weight decay, etc.) 
constant to obtain an objective view of the architectural responses on the same dataset setup. This will provide a clear picture of the feasibility of the approach and of how to take the project further, so that further resources can be justified to scale future experiments.\n\nFindings and Discussion\n\nInitial baseline experiments on the Kalobeyei dataset, and on Kalobeyei combined with the Dzaleka camps, seem to suggest limited transferability from the OCC model. This suggests that the OCC model is perhaps over-generalised to the competition test dataset. Despite achieving very high confidence on metal-sheeted rooftops, it does not detect any of the more complicated thatched roofs common in the Dzaleka camp. The OCC model also struggles with some of the more obscure drone motion artefacts occurring at the edge of the imagery in the Kalobeyei camp. Meanwhile, the Precision and Recall statistics favour the other variations or further transfer training of the OCC model, and the U-Net with an EfficientNet-B1 encoder pretrained with ImageNet weights. However, the validation loss suggests there might be little room for improvement in further transfer training of the OCC model.\n\nPrecision and Recall have both reached above 0.7 in most experiments, which outlines the general capability of the strategies used. However, there are still significant variations among different architectures and setups. 
The\nnext step is to perform systemic fine-tuning to increase the confidence level\nof the appropriate architectures.","original_language":"eng","persons":["Christopher Chan"],"tags":["sotm2022","19438","2022","OSM","OpenStreetMap"],"view_count":44,"promoted":false,"date":"2022-08-21T10:40:00.000+02:00","release_date":"2022-10-11T00:00:00.000+02:00","updated_at":"2026-01-14T10:30:12.201+01:00","length":356,"duration":356,"thumb_url":"https://static.media.ccc.de/media/events/sotm/2022/19438-0c025aaa-29c7-53fa-84c8-b1a4bd8597ca.jpg","poster_url":"https://static.media.ccc.de/media/events/sotm/2022/19438-0c025aaa-29c7-53fa-84c8-b1a4bd8597ca_preview.jpg","timeline_url":"https://static.media.ccc.de/media/events/sotm/2022/19438-0c025aaa-29c7-53fa-84c8-b1a4bd8597ca.timeline.jpg","thumbnails_url":"https://static.media.ccc.de/media/events/sotm/2022/19438-0c025aaa-29c7-53fa-84c8-b1a4bd8597ca.thumbnails.vtt","frontend_link":"https://media.ccc.de/v/state-of-the-map-2022-academic-track-19438-investigating-the-capability-of-uav-imagery-in-ai-assisted-mapping-of-refugee-camps-in-east-africa","url":"https://api.media.ccc.de/public/events/0c025aaa-29c7-53fa-84c8-b1a4bd8597ca","conference_title":"State of the Map 
2022","conference_url":"https://api.media.ccc.de/public/conferences/sotm2022","related":[],"recordings":[{"size":27,"length":356,"mime_type":"video/webm","language":"eng","filename":"sotm2022-19438-eng-Investigating_the_capability_of_UAV_imagery_in_AI-assisted_mapping_of_Refugee_Camps_in_East_Africa_webm-hd.webm","state":"new","folder":"webm-hd","high_quality":true,"width":1920,"height":1080,"updated_at":"2022-10-11T22:15:37.717+02:00","recording_url":"https://cdn.media.ccc.de/events/sotm/2022/webm-hd/sotm2022-19438-eng-Investigating_the_capability_of_UAV_imagery_in_AI-assisted_mapping_of_Refugee_Camps_in_East_Africa_webm-hd.webm","url":"https://api.media.ccc.de/public/recordings/63011","event_url":"https://api.media.ccc.de/public/events/0c025aaa-29c7-53fa-84c8-b1a4bd8597ca","conference_url":"https://api.media.ccc.de/public/conferences/sotm2022"},{"size":12,"length":356,"mime_type":"video/webm","language":"eng","filename":"sotm2022-19438-eng-Investigating_the_capability_of_UAV_imagery_in_AI-assisted_mapping_of_Refugee_Camps_in_East_Africa_webm-sd.webm","state":"new","folder":"webm-sd","high_quality":false,"width":720,"height":576,"updated_at":"2022-10-11T22:10:15.029+02:00","recording_url":"https://cdn.media.ccc.de/events/sotm/2022/webm-sd/sotm2022-19438-eng-Investigating_the_capability_of_UAV_imagery_in_AI-assisted_mapping_of_Refugee_Camps_in_East_Africa_webm-sd.webm","url":"https://api.media.ccc.de/public/recordings/63006","event_url":"https://api.media.ccc.de/public/events/0c025aaa-29c7-53fa-84c8-b1a4bd8597ca","conference_url":"https://api.media.ccc.de/public/conferences/sotm2022"},{"size":8,"length":356,"mime_type":"video/mp4","language":"eng","filename":"sotm2022-19438-eng-Investigating_the_capability_of_UAV_imagery_in_AI-assisted_mapping_of_Refugee_Camps_in_East_Africa_sd.mp4","state":"new","folder":"h264-sd","high_quality":false,"width":720,"height":576,"updated_at":"2022-10-11T22:06:53.716+02:00","recording_url":"https://cdn.media.ccc.de/events/sotm/2022/h26
4-sd/sotm2022-19438-eng-Investigating_the_capability_of_UAV_imagery_in_AI-assisted_mapping_of_Refugee_Camps_in_East_Africa_sd.mp4","url":"https://api.media.ccc.de/public/recordings/63003","event_url":"https://api.media.ccc.de/public/events/0c025aaa-29c7-53fa-84c8-b1a4bd8597ca","conference_url":"https://api.media.ccc.de/public/conferences/sotm2022"},{"size":5,"length":356,"mime_type":"audio/mpeg","language":"eng","filename":"sotm2022-19438-eng-Investigating_the_capability_of_UAV_imagery_in_AI-assisted_mapping_of_Refugee_Camps_in_East_Africa_mp3.mp3","state":"new","folder":"mp3","high_quality":false,"width":0,"height":0,"updated_at":"2022-10-11T22:05:54.767+02:00","recording_url":"https://cdn.media.ccc.de/events/sotm/2022/mp3/sotm2022-19438-eng-Investigating_the_capability_of_UAV_imagery_in_AI-assisted_mapping_of_Refugee_Camps_in_East_Africa_mp3.mp3","url":"https://api.media.ccc.de/public/recordings/63002","event_url":"https://api.media.ccc.de/public/events/0c025aaa-29c7-53fa-84c8-b1a4bd8597ca","conference_url":"https://api.media.ccc.de/public/conferences/sotm2022"},{"size":18,"length":356,"mime_type":"video/mp4","language":"eng","filename":"sotm2022-19438-eng-Investigating_the_capability_of_UAV_imagery_in_AI-assisted_mapping_of_Refugee_Camps_in_East_Africa_hd.mp4","state":"new","folder":"h264-hd","high_quality":true,"width":1920,"height":1080,"updated_at":"2022-10-11T22:04:32.224+02:00","recording_url":"https://cdn.media.ccc.de/events/sotm/2022/h264-hd/sotm2022-19438-eng-Investigating_the_capability_of_UAV_imagery_in_AI-assisted_mapping_of_Refugee_Camps_in_East_Africa_hd.mp4","url":"https://api.media.ccc.de/public/recordings/63000","event_url":"https://api.media.ccc.de/public/events/0c025aaa-29c7-53fa-84c8-b1a4bd8597ca","conference_url":"https://api.media.ccc.de/public/conferences/sotm2022"}]}