Identifying refugee settlements using satellite images of high spatial and temporal resolution: A comparison of sensors and approaches

Andreas Braun1, Stefan Lang2, Edith Rogenhofer3

1 CDL GEOHUM, University of Salzburg, 5020 Salzburg, Austria; Department of Geography, University of Tübingen, 72070 Tübingen, Germany

2 CDL GEOHUM, University of Salzburg, 5020 Salzburg, Austria

3 Satellite Image Team, GIS Centre, Médecins Sans Frontières (MSF) Austria


The increasing number of satellites offering daily image acquisition at very high resolution (VHR) provides new insights into the dynamics of fine-scale changes at temporary settlements. This article presents first experiences with spaceborne mapping of refugee settlements using daily revisiting radar (ICEYE) and optical (PlanetScope) imagery of a study area in the Northeastern Democratic Republic of Congo, which hosts villages and informal dwellings of forcibly displaced persons. The information content of the sensors is systematically compared with respect to mapping land use, delineating built-up areas, and identifying camps. Validation against independent reference data shows that many of the lightweight dwellings are not visible in the radar image due to their natural construction materials. The best distinction between built-up areas and bare soil was achieved by merging optical and radar data, resulting in a classification accuracy of 94.1%. The classification is based on four predefined classes and identifies urban structures more accurately than the two-class approach, which either underestimates or overestimates built-up areas. Accuracies between 83.4% and 92.8% were achieved for 10 identified campsites. The results are a promising step toward operationally identifying refugee settlements for emergency response.



Methods of Earth observation (EO) are increasingly used in the humanitarian domain because they allow information about large areas to be retrieved objectively and reliably at regular intervals (Lang et al., 2020). This is especially helpful for areas difficult to reach due to their remoteness or terrain, and it provides the quick information required by decision makers for emergency response (Denis et al., 2016). In general, geospatial techniques have become part of the daily work of many non-governmental organizations (NGOs), assisting in surveying, mapping, analyzing, and monitoring situations related to human displacement and natural disasters (Convergne & Snyder, 2015; Herfort et al., 2021; Rogenhofer et al., 2020). Geospatial techniques, and satellite imagery in particular, have a wide range of applications, from mapping settlements and infrastructure (Bjorgo, 2000; Ghorbanzadeh et al., 2021; Tomaszewski et al., 2016) and monitoring natural resources, disasters, and armed conflicts (Joyce et al., 2009; Wendt et al., 2015; Witmer, 2015) to assessing the environmental impacts caused by displaced persons (Braun et al., 2016; Hagenlocher et al., 2012).

Knowledge about the location of unplanned or informal settlements is a key piece of information for effectively providing aid. In areas of conflict or natural disaster in particular, such settlements evolve rapidly in rural areas, and humanitarian organizations often do not immediately know their locations. Synthetic aperture radar (SAR) satellites can be valuable in this context because they deliver images regardless of cloud cover or daylight and therefore allow the quick provision of information in emergency situations (Braun, 2019). These systems employ microwaves, which are sensitive to the physical characteristics of objects and surfaces. However, their suitability for mapping informal settlements and buildings made of natural construction materials remains little understood and investigated (Braun, 2020).

Fostered by technological progress and shifting user demand toward higher spatial and temporal resolutions, new imaging satellite missions are designed as constellations of several identical sensors that allow images to be provided at daily intervals (Curzi et al., 2020). As such large fleets can only be realized through commercial trade-offs in the development and construction phase, the radiometric and geometric quality of the delivered images cannot compete with the flagship satellite programs of national or international space agencies (Helder et al., 2020). Bearing this in mind, the overall aim of this study is to evaluate the information content of such missions and their potential contribution to humanitarian mapping. We use images from two selected missions with passive and active sensors (PlanetScope and ICEYE), acquired over a rural area of the Democratic Republic of Congo (DRC) that features both urban areas and settlements of displaced persons.

Data and Methods

Study Area

The test site selected for this study is located in the Northeastern Democratic Republic of Congo, close to Lake Albert and the border with Uganda (the red dot on the small map in Figure 1), and includes the settlements of Nizi, Lopa, and Kilomines. As part of Ituri Province, this region has faced ongoing violence since the early 2000s, mostly related to ethnic conflicts between the Hema and Lendu communities (Tusiime, 2019; Vlassenroot, 2004). In December 2017, the violence generated by armed groups that had claimed to defend the interests of the Hema and Lendu communities in 2000 reignited and has since displaced more than one million people in the region. Due to the frequent movement of the population, however, exact numbers are impossible to know. After the most recent displacement, around 200,000 people sought safety and shelter in camps in the area of Nizi, with hundreds of thousands more living with host families in the area. The people in the camps rely on humanitarian aid for medical assistance, food, shelter, and water (Médecins Sans Frontières [Doctors Without Borders; MSF], 2020). The area contains several known locations of settlements of displaced people within and outside of the villages (yellow letters on the large map in Figure 1). The blue box in Figure 1 denotes the footprint of the used data, covering an area of around 30 square kilometers. The savanna ecosystem contains low vegetation and small-scale agricultural fields with fractional forest stands. Patches of bare soil appear along the roads and near the urban settlements, especially near spots of artisanal mining (Pottier, 2003).

Figure 1. Study area.


Satellite Images and Preprocessing

A multispectral image of the PlanetScope mission (Planet Team, 2017) acquired on October 19, 2020 serves as the main input for this study. It contains four bands (i.e., red, green, blue, infrared) at a spatial resolution of 3.0 meters (Figure 2a). This commercial mission consists of over 350 operating nanosatellites capable of imaging the Earth's entire landmass once per day (Safyan, 2020). Although the achieved pixel resolution is insufficient for mapping individual buildings, its benefit for humanitarian action lies in its temporal resolution, which allows crisis situations and newly emerging camps of people on the run to be mapped. However, as with every optical imaging system, its applicability is severely limited by cloudy conditions. As an alternative, radar satellites are active systems whose transmitted and received signals penetrate cloud cover. In this study, an image of the ICEYE mission acquired in Spotlight mode on October 19, 2020 is used, providing a spatial resolution of 0.76 m (Ignatenko et al., 2020). Compared to optical sensors, SAR only contains one band of information (i.e., backscatter intensity at vertical polarization; Figure 2b). The SAR signal was radiometrically calibrated to Sigma0 and geometrically terrain-corrected within the open-source Sentinel-1 Toolbox provided by the European Space Agency (ESA, 2021). An intensity-driven adaptive neighborhood filter (Vasile et al., 2006) was applied to reduce the speckle effect. As a last step, the Planet image was co-registered and resampled to the spatial resolution of the radar image using bilinear interpolation. The images of both commercial missions were granted freely for scientific use, as stated in the acknowledgments section.
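The actual preprocessing was carried out in ESA's Sentinel-1 Toolbox. Purely as an illustration of the underlying operations, the following is a minimal numpy/scipy sketch of radiometric scaling to a dB backscatter quantity and speckle reduction, using a basic Lee filter as a simplified stand-in for the intensity-driven adaptive neighborhood filter actually applied; the calibration factor is a hypothetical placeholder for the per-product constant found in the image metadata:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def calibrate_sigma0(dn, calibration_factor):
    """Scale raw digital numbers to a Sigma0-like backscatter value in dB.
    `calibration_factor` is a hypothetical per-product metadata constant."""
    sigma0 = (dn.astype(np.float64) ** 2) * calibration_factor
    return 10.0 * np.log10(np.maximum(sigma0, 1e-10))  # convert to dB

def lee_filter(img, size=5):
    """Basic Lee speckle filter: weighted blend of each pixel with its local
    mean, where the weight grows with local variance (edges are preserved,
    homogeneous areas are smoothed). A simplified stand-in for the adaptive
    neighborhood filter of Vasile et al. (2006)."""
    mean = uniform_filter(img, size)
    sq_mean = uniform_filter(img ** 2, size)
    local_var = np.maximum(sq_mean - mean ** 2, 0.0)
    overall_var = img.var()
    weight = local_var / (local_var + overall_var + 1e-10)
    return mean + weight * (img - mean)
```

In practice these steps run on the full intensity raster before terrain correction; the sketch omits the geocoding stage, which requires orbit metadata and a DEM.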

Figure 2. Comparing the optical (left) and radar (right) images.


Reference Data

The footprints of buildings in parts of the study area were extracted from very high-resolution (VHR) imagery acquired on January 1, 2020 (WorldView-3; orange polygon in Figure 3) and April 8, 2020 (Pléiades; red polygon in Figure 3) through research collaborations with humanitarian organizations (Lang et al., 2018; Tiede et al., 2013). Of the 32,000 buildings contained in the dataset, roughly 15,000 lie within the study area. These building footprints were used to train the different classifiers, as described in the next section. As also shown in Figure 3, parts of the study area (blue polygon) are covered by neither VHR product and therefore have no building footprints. In these areas, buildings were manually digitized by visually interpreting openly accessible imagery dated January 2021 (Google Earth, 2021) to serve as independent validation data.

Furthermore, the locations of several refugee camps in the area are available as point data, 10 of which are located inside the study area (yellow letters in Figure 3).

Figure 3. Spatial extent of the reference building footprints.


The building footprints were overlaid with the optical and radar data for a first visual inspection, and several potential error sources for the subsequent information extraction were identified.

Firstly, while buildings can be identified as aggregates of bright pixels in the optical image (Figure 4a), many do not fully intersect with the building footprints retrieved from the VHR data. Besides the different spatial resolutions, the main reasons for this are the imprecise geocoding of both images and terrain-induced geometric distortions (Dave et al., 2015). Despite this problem, which arises whenever information retrieved from different sensors is combined (Chen et al., 2003), we deliberately left the data geometrically uncorrected as delivered by the providers and only applied the standard terrain correction to the radar image, because the applicability of these daily data sources is to be tested in an operational context. This keeps the findings of this study applicable to the workflows of humanitarian decision making as well.

Secondly, the signatures of buildings in the SAR image (Figure 4b) are inconsistent, ranging from star-like patterns (also known as the cardinal effect; Weydahl, 2002) and overly bright rectangular shapes to pixels with no indication of a building at all. This is because many building materials in the study area have low electric conductivity and therefore produce only low backscatter intensity (Franceschetti et al., 2002). Accordingly, dwellings covered by wood, grass (huts), or canvas (tents) in particular are barely visible in the SAR image because their physical properties resemble those of the bare and vegetated surroundings.

Lastly, topographic variations in the study area cause irregular patterns in the SAR image (Figure 2b); this complicates distinguishing between natural surfaces (darker) and fabricated structures (brighter). These effects can only be reduced with a digital elevation model (DEM) of similar spatial resolution (Small, 2011). However, no high-resolution DEM was available for the study area, and the openly available DEMs (SRTM, ALOS World 3D, Copernicus DEM) all have an insufficient resolution of 30 meters.

These findings lead to the conclusion that neither the SAR nor the optical imagery permits single dwellings to be reliably extracted at this stage. Therefore, more general approaches for classifying urban areas are presented in the following section.

Figure 4. Building footprints (green) from the village of Nizi in both optical (left) and radar (right) images.



Specific objectives and methods of information extraction are systematically compared to evaluate the contributions from these two sensors for humanitarian mapping. Those objectives, called land use and land cover (LULC), Binary, and Camp, are briefly outlined in Table 1.

In the first step, a supervised LULC classification was conducted based on manually collected training polygons for four predefined classes: urban (built-up areas, villages, and refugee camps), forest (trees and large bushes), vegetation (grassland and agriculture), and open (unpaved roads and bare soil). Further details on collecting training samples are provided below.

The second objective (Binary) follows the same principle but is based on only two predefined classes: urban (built-up areas, villages, and refugee camps) and non-urban (all other surfaces). It was designed to test whether fewer classes and more aggregated training areas are sufficient for identifying rural settlements for rapid decision making, as distinguishing between forest and vegetation is of limited relevance in human displacement settings.

The third objective (Camp) validates the results from the Binary classification exclusively for the informal camp settlements in the study area (Figure 1) to test how well these are represented by the pixels classified as urban. The next section describes the underlying training samples used for both the Binary and Camp classifications.

Table 1. Objectives of the Camp-Related Mapping Activities Used in This Study


The three objectives are pursued with five different approaches (see Table 2). All were conducted in the System for Automated Geoscientific Analyses (SAGA; Conrad et al., 2015) software environment. These approaches are combinations of input data and extraction method based on the random forest (RF) classifier, which repeatedly draws random subsets from the training data to find suitable thresholds separating the classes through series of nested decision rules (Breiman, 2001). Its applicability for classifying remote sensing imagery has been demonstrated in numerous cases (Belgiu & Drăguţ, 2016; Pal, 2005). As a popular way of image fusion, a principal component analysis (PCA) approach to image sharpening based on a covariance matrix (Cheng & Hsia, 2003) was selected. This technique combines the spatial resolution of the radar imagery with the multispectral information of the PlanetScope imagery, resulting in four bands of increased detail while maintaining the color content of the four input bands (i.e., red, green, blue, infrared). Lastly, an image segmentation was performed as an alternative to the pixel-based approaches, and the segments were processed within an object-based image analysis (OBIA; Blaschke et al., 2008; Blaschke et al., 2014) approach, generating classified image objects of homogeneous informational content and several pixels in size.
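The classifications themselves were run in SAGA; for a code-level view of the two core operations, the following is a hedged sketch of PCA-based sharpening and random forest classification using scikit-learn as a stand-in for the study's tooling. All array names and the random training data are purely illustrative:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

def pca_sharpen(ms_bands, pan):
    """PCA-based sharpening: substitute the first principal component of the
    multispectral bands with the higher-resolution (radar) band, matched in
    mean and standard deviation, then invert the transform.
    ms_bands: (n_pixels, 4), pan: (n_pixels,), on a common grid."""
    pca = PCA(n_components=ms_bands.shape[1])
    pcs = pca.fit_transform(ms_bands)
    pc1_mean, pc1_std = pcs[:, 0].mean(), pcs[:, 0].std()
    pcs[:, 0] = (pan - pan.mean()) / (pan.std() + 1e-10) * pc1_std + pc1_mean
    return pca.inverse_transform(pcs)

# Supervised random forest classification on per-pixel band values
# (random stand-in data; real training pixels come from the labeled polygons)
rng = np.random.default_rng(0)
X_train = rng.random((200, 4))       # band values at training pixels
y_train = rng.integers(0, 4, 200)    # 0=urban, 1=forest, 2=vegetation, 3=open
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
```

The same pattern applies to the Binary objective by collapsing the labels to urban/non-urban before fitting.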

Table 2. Methodological Approaches Used in This Study


In the end, all five approaches from Table 2 were evaluated against the three objectives in Table 1 based on their overall accuracy (OA, the percentage of correctly classified pixels) and Cohen's kappa, which accounts for the statistical chance of this agreement occurring (Cohen, 1960). Furthermore, the producer's accuracy (PA) and user's accuracy (UA) are calculated for each class. While PA states how many of the validation pixels of each class were classified correctly (and thus, inversely, how many were missed [i.e., error of omission]), UA denotes how many of the classified pixels per class are correct (its inverse corresponding to falsely assigned classes [i.e., error of commission]). For example, a PA of 0.7 for urban means the approach predicted 70% of the urban areas inside the validation data while missing 30%. In turn, a UA of 0.8 for urban means 80% of all pixels classified as urban are actually urban areas, while 20% belong to a different class. Together, these measures give a broader understanding of the quality of the produced maps.
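All four measures follow directly from a confusion matrix; a minimal sketch (rows as reference, columns as prediction; not the study's own tooling) could look as follows:

```python
import numpy as np

def accuracy_measures(cm):
    """Derive OA, Cohen's kappa, and per-class producer's (PA) and user's (UA)
    accuracies from a confusion matrix `cm` (rows: reference, cols: predicted)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    oa = np.trace(cm) / n
    # expected chance agreement for Cohen's kappa
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2
    kappa = (oa - pe) / (1 - pe)
    pa = np.diag(cm) / cm.sum(axis=1)   # 1 - PA = error of omission
    ua = np.diag(cm) / cm.sum(axis=0)   # 1 - UA = error of commission
    return oa, kappa, pa, ua
```

For a two-class matrix [[70, 30], [20, 80]], for instance, this yields OA = 0.75 and κ = 0.5, while the first class has PA = 0.70 (30% omitted) and UA ≈ 0.78 (22% falsely assigned).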


Land Use and Land Cover (LULC) Classification

As outlined in the previous section, the supervised classification was conducted based on training polygons (Figure 5a). To ensure a degree of objectivity, the training polygons were selected based on the radiometric indices and thresholds described in Braun and Hochschild (2017). A second dataset was created to independently validate the classified maps (Figure 5b). The sizes of the polygons were determined by the relative share of each class in the study area and the equations for the required number of samples proposed by Congalton (2001); as such, the size of the validation dataset is a defined multiple of the training dataset per class. Table 3 provides a summary of the training and validation data.

Table 3. Spatial Representation of Training and Validation Sites used for the Supervised LULC Classification


Figure 5. Spatial distribution of training and validation sites used for the supervised LULC classification.


Figure 6 provides the results of the LULC classification and shows how the spatial distribution of the predicted LULC classes differs with respect to the selected input datasets and methods. One major finding is that the radar image is suitable for distinguishing between forest and urban, but insufficient for delineating urban areas within this approach. This confirms the indications from the right image in Figure 4, where many of the lightweight buildings in the study area do not produce distinctive backscatter. However, it also shows that the urban area is best delineated when combining optical and radar data. To support these findings at a statistical level, Table 4 shows the results of the accuracy assessment based on the validation polygons: The approach based on SAR data alone (ICEYE) provides the poorest results (OA = 0.546). The highest OA is achieved by combining both products within the RF classifier (Planet + ICEYE, OA = 0.941), which outperforms all other approaches by at least 10%. The PA for urban in particular is distinctly higher in this approach (0.824) than in the others, showing that combining optical and SAR data helps distinguish urban from open, two classes that are both bright and therefore similar in the optical data alone.

Table 4 furthermore shows that, regardless of the chosen approach, the UAs for urban are higher than the PAs. This means urban areas are largely assigned correctly but underestimated in all cases. Lastly, neither the fusion of optical and SAR bands through principal component analysis (PCA) nor the OBIA approach significantly increases the accuracy of the LULC classification.

Table 4. Accuracy of the supervised LULC classification.



Figure 6. Spatial outcomes of the supervised LULC classification.



Binary Classification

Unlike the LULC classification, the Binary classification is trained not on manually collected samples but on the actual spatial distribution of the building footprints derived from the VHR images (bright red in Figure 7a). Where no reference data were available, building footprints were manually digitized for independent validation (dark red in Figure 7b). To account for the potential offset between the building footprints and the underlying satellite data (as shown in Figure 4), as well as for differences resulting from the coarser spatial resolution, a variable buffer was added to each building at a distance equal to the square root of its area (e.g., a building with an area of 49 m² was buffered by an additional 7 m), as shown in Figure 7b.
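Assuming the footprints are available as vector polygons, this buffering rule can be sketched with shapely; the 7 m × 7 m building below reproduces the hypothetical 49 m² example from the text:

```python
from math import sqrt
from shapely.geometry import Polygon

def buffer_footprint(footprint):
    """Buffer a building footprint by the square root of its area to absorb
    co-registration offsets between footprints and satellite imagery."""
    return footprint.buffer(sqrt(footprint.area))

# a 7 m x 7 m building (49 m²) is buffered by an additional 7 m
building = Polygon([(0, 0), (7, 0), (7, 7), (0, 7)])
buffered = buffer_footprint(building)
```

Larger buildings thus receive proportionally larger buffers, which keeps the tolerance roughly in line with the scale of the feature itself.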

Accordingly, compared to the LULC classification, the Binary classification is based on a larger and more solid set of training and reference data and therefore allows a more robust and accurate assessment.

Figure 7. Training and validation data of the Binary classification.


The results from the Binary classification are shown in Figure 8 and Table 5. Similar to the LULC classification (Figure 6 and Table 4), the approach based on radar data alone (ICEYE) has the lowest accuracy. Its OA of 0.847 indicates that 84.7% of the map is correct, but this is driven by the large proportion of non-urban areas, while the urban areas are strongly underestimated (PA = 0.003; κ = 0.004). Again, this confirms that, despite its spatial resolution of 0.74 meters, radar data alone are not useful for predicting the presence of buildings, or urban areas in general, in this context. Surprisingly, while the simple combination approach (Planet+ICEYE) outperformed the others in the LULC classification, it does not stand out in the Binary classification; in general, integrating the SAR data did not increase the classification accuracy. Except for the SAR-only approach, the PAs for urban range between 0.529 and 0.558, indicating that nearly half of the urban pixels in the validation data were overlooked (underestimated). One explanation is that open areas and built-up areas are very similar due to their bright color and physical characteristics (natural surfaces). At the same time, the UAs for urban in the Binary classification range between 0.713 and 0.753 and are therefore distinctly lower than in the LULC classification (between 0.924 and 0.963). This means 25% to 30% of all urban pixels are falsely assigned (overestimated) in the Binary classification, compared to only 4% to 8% in the LULC classification. These false assignments are clearly visible as the many red patches outside the existing villages in Figure 8, throughout all approaches except ICEYE.

These findings lead to the conclusion that a binary classification into urban and non-urban delivers less accurate results than a classification with multiple classes, at least under the conditions present in the current study area. Evidently, offering more predefined classes to the classifier contributes to a more precise delineation of urban areas. However, neither objective allows an analysis of how well these approaches detected refugee camps in the area. This is evaluated in the following section.

Table 5. Accuracy of the Supervised Binary Classification


Camp Classification

As outlined previously, no separate classification is produced in this section; instead, the results from the Binary classification presented in the previous section (Figure 8) are evaluated for their accuracy regarding the 10 refugee camps in the study area. For this purpose, a rectangular bounding box was created around each refugee camp and manually divided into areas containing the classes Camp or No Camp (Figure 9). The camp locations were taken from the reference data shown in Figure 3, and the outlines of the informal settlements were digitized based on visual interpretation to ensure the highest quality of the validation dataset. In this regard, a camp was defined as an aggregate of buildings distinguished from their surroundings by size (mostly smaller than regular urban buildings) or pattern (regularly aligned tents or makeshift dwellings). Village areas were excluded from both the Camp and No Camp validation areas so that the accuracy assessment only measures how successfully camps were included in the Binary classification. Table 6 shows the overall results: Surprisingly, the best results are achieved by the Planet approach, which only uses the RF classifier on optical data (OA = 0.889; κ = 0.714). Of all areas evaluated as camp, 81.1% were identified correctly (PA), meaning only 18.9% were overlooked, while 23% of the pixels classified as camp were falsely assigned (1 − UA). The table also shows that a single SAR image is insufficient for predicting the presence of urban areas or camps in particular. It furthermore shows that merging optical and SAR data does not improve classification accuracy, nor does the OBIA approach. A more detailed camp-wise evaluation is given in the following section.

Table 6. Accuracy of the Supervised Binary Classification Evaluated for All Camp Areas



Figure 8. Spatial results of the supervised binary classification.


Table 7 shows the PA for each of the 10 camps in the study area and reveals the error of omission at the different locations. The PAs range between 0.635 and 0.971, with only three out of the 10 camps below 0.75, indicating that less than 25% of the camp areas were overlooked by the Binary classification in most cases. Again, the Planet approach based on optical data alone produced the highest accuracies, leading to the conclusion that radar data do not contribute to more accurate camp mapping. The OBIA approach, despite being more powerful in general, did not provide better results for most camps. No general relationship between the approach, the camp, and its situation within the study area can be identified from the table. However, camps situated within the larger villages (B, C, or J) have higher PAs than the more isolated ones outside the urban areas (A, I, or H), as their underestimation is less likely. No correlation could be determined between the dwelling structure (regular or irregular) and accuracy.

Table 7. Producer Accuracies (PAs) of the Supervised Binary Classification Evaluated for Each Camp


Table 8 shows the UAs for all camps, representing the proportion of areas correctly assigned to the class Camp. Average values above 0.85 mean that only 15% of the areas classified as camp were overestimated. Again, the ICEYE approach shows the lowest scores. Its high UAs for Camps A, B, and H result from the few small clusters of camp pixels inside the validation areas being assigned correctly; however, because all camps were strongly underestimated in the radar-based approach (Table 7), these high UAs are rather misleading. This underlines that UA alone is not an ideal measure for identifying displaced people's settlements, as it only reflects the error of commission (overestimation of this class). Again, no clear pattern explains why Camps B, H, and I have lower UAs, as their overestimation partly occurred in urban and forest areas. Similar to Table 7, no approach clearly outperforms the others. This indicates that, as long as the Planet data were involved, all results were equally suitable for mapping camps without larger overestimations.

Table 8. User Accuracies (UAs) of the Supervised Binary Classification Evaluated for Each Camp


Overall, Camps F and J achieved both high PAs and UAs. Both settings consist of different land cover types (urban, vegetation, forest, and open) and dwellings of different sizes and patterns. This again leads to the conclusion that no particularly favorable composition of camp areas and their surroundings exists in the study area. In conclusion, high classification accuracies can be achieved with satellites providing daily images, both with regard to the error of omission (underestimation) and the error of commission (overestimation). However, whether the observed accuracies are sufficient for emergency situations remains to be discussed. These aspects are addressed in the next section.

Figure 9. Validation areas of the camp classification (red: camp, black: no camp).


To analyze whether dwelling density has an impact on a camp's classification accuracy, we plotted the UAs and PAs of all 10 camp sites from the Planet+ICEYE approach (see Tables 7 and 8) against their dwelling densities, as calculated from the building reference dataset. As shown in Figure 10, a positive correlation exists between building density and PA (blue, R² = 0.674). This indicates that the chance of omission errors (i.e., overlooking camp areas in the classification) decreases with higher dwelling densities. This is plausible because the buildings are mostly smaller than the spatial resolution of the satellite images: The closer they stand together, the more distinct their per-pixel signal. In turn, camp areas consisting of rather scattered buildings run a higher risk of being underestimated. In contrast, no clear relationship was observable between dwelling density and UA (red, R² = 0.097). This means the error of commission (i.e., falsely classifying pixels as camp) is not directly related to the number of buildings in an area. However, both relationships rest on a weak statistical basis (N = 10) and should be interpreted with care. Studies with larger samples of camp areas with different interior structures and building materials should be carried out to verify these indications.
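The reported R² values correspond to a simple least-squares fit; the sketch below shows how such a coefficient of determination is computed. The density and accuracy values are illustrative placeholders only, not the study's measurements:

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination for a simple linear fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    y_hat = slope * x + intercept
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# illustrative values (hypothetical dwelling densities and camp PAs)
density = np.array([5, 12, 18, 25, 30, 41, 47, 55, 63, 70], dtype=float)
pa = np.array([0.64, 0.66, 0.71, 0.74, 0.78, 0.80, 0.85, 0.88, 0.93, 0.97])
r2 = r_squared(density, pa)
```

With N = 10 such a fit is, as noted above, only indicative; a single outlying camp can shift R² considerably.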

Figure 10. Relationship between camp detection accuracy and building density.


Discussion and Outlook

This study aimed to assess the general mapping potential and interpretability of two satellite missions capable of providing daily imagery in a humanitarian context. While the PlanetScope mission already achieves daily coverage of large parts of the Earth's landmasses, the ICEYE constellation does not acquire images systematically, but only upon tasking by users. Still, both can deliver rapid information in emergency situations. Their information content was assessed for several objectives and approaches, and an accuracy assessment was performed based on independent validation datasets.

As its major finding, the present study has demonstrated the benefits of both optical and radar sensors for mapping remote and rural areas and has systematically compared their contributions to the quality of the results. However, given the defined objectives (Table 1) and selected approaches (Table 2), the study was not primarily designed to mitigate the commonly known limitations of the sensors. For optical imagery, this is the dependence on cloud-free conditions. Admittedly, with satellite missions like PlanetScope that provide daily images, the chances of obtaining usable data are higher than with missions of longer revisit times. However, this reaches its limits in areas with constantly high cloud cover, such as West and Central Africa as well as large parts of Southeast Asia (Sudmanns et al., 2019), areas that are long-term hotspots of humanitarian work. This raises the need for robust workflows based on radar imagery, which is unaffected by cloud cover. In this study, this benefit was not evident because the optical image happened to be cloud-free. However, the dependence on cloud-free conditions can be a problem, especially in crisis situations (Füreder et al., 2015). Meanwhile, radar data are prone to radiometric effects resulting from strong topographic variations (Figure 2b). Accordingly, the increasing spatial resolution of SAR data can only be fully exploited if a digital elevation model of comparable quality is available during preprocessing to remove these radiometric distortions. Independent of terrain, the physical nature of surfaces plays a stronger role for radar data. As one result, the short wavelength of ICEYE (X-band, ~3 cm) was not sensitive to the surface properties of the dwellings in the study area, despite its high spatial resolution of 0.7 meters theoretically allowing their delineation.
This is because the natural construction materials of the buildings do not produce the strong backscatter intensity observed in cities of industrialized countries (Auer & Gernhardt, 2014; Ferro et al., 2011; Weydahl, 2002; Zhao et al., 2013). To make better use of VHR SAR imagery in a humanitarian context, adapted approaches must be found that consider the construction materials of the region, for example based on interferometric features (Dubois et al., 2016), multi-temporal information (Amitrano et al., 2016), or convolutional neural networks (Shahzad et al., 2018). For mapping LULC, however, the integrated use of optical and SAR imagery helps distinguish between bare soil and urban areas that are otherwise similar within the optical spectrum. The overall accuracy of the multi-class LULC classification was higher than that of the binary classification separating only urban and non-urban areas. Whether the fusion of optical and radar data is worthwhile for small improvements in classification accuracy depends on the user scenario.
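The benefit of fusing optical and SAR features can be illustrated with a minimal sketch. All values below are invented for demonstration: two classes that overlap in four optical bands but differ in X-band backscatter (low return from smooth bare soil, strong double-bounce return from buildings), classified with scikit-learn’s `RandomForestClassifier` as a stand-in for the supervised classifier.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

def sample(n):
    """Invented pixel samples: 4 optical bands + 1 SAR backscatter (dB)."""
    soil = np.hstack([rng.normal(0.30, 0.03, (n, 4)),    # reflectances
                      rng.normal(-14.0, 1.0, (n, 1))])   # smooth bare soil
    built = np.hstack([rng.normal(0.31, 0.03, (n, 4)),   # nearly identical
                       rng.normal(-6.0, 1.0, (n, 1))])   # double-bounce return
    X = np.vstack([soil, built])
    y = np.repeat([0, 1], n)
    return X, y

X_train, y_train = sample(400)
X_test, y_test = sample(400)

rf_opt = RandomForestClassifier(n_estimators=100, random_state=0)
rf_fused = RandomForestClassifier(n_estimators=100, random_state=0)

# Optical bands alone barely separate the classes; adding the
# backscatter feature makes the separation nearly perfect
acc_optical = rf_opt.fit(X_train[:, :4], y_train).score(X_test[:, :4], y_test)
acc_fused = rf_fused.fit(X_train, y_train).score(X_test, y_test)
```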

Several aspects of the study design and results merit discussion. The observed misregistration between the building footprints and the input satellite imagery (Figure 4) was mitigated by applying a variable buffer; however, this can lead to errors during both the training and validation of the classifiers. The validation of the camp classification (Table 8) is particularly sensitive to these small shifts because they are proportionally large relative to the comparably small validation areas (Figure 9). The development of operational approaches applicable in crisis situations should therefore consider integrating automated co-registration of all input products (Sun et al., 2020). Moreover, the temporal difference between the satellite images (both from October 19, 2020) and the VHR images used to extract the building footprints can lead to inconsistencies, especially in the lower part of the study area where most of the camps were located (Figure 3).
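The core of such automated co-registration is estimating the offset between two images. As a strongly simplified sketch (pure translation, no rotation or scaling, unlike the full SAR-to-GIS registration of Sun et al., 2020), a phase-correlation shift estimate can be implemented with plain NumPy FFTs:

```python
import numpy as np

def phase_correlation_shift(ref, mov):
    """Estimate the integer (row, col) shift of `mov` relative to `ref`
    via the normalized cross-power spectrum (phase correlation)."""
    cross = np.fft.fft2(mov) * np.conj(np.fft.fft2(ref))
    cross /= np.abs(cross) + 1e-12            # keep phase information only
    corr = np.fft.ifft2(cross).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
    # Wrap shifts beyond half the image size to negative offsets
    for axis, size in enumerate(corr.shape):
        if peak[axis] > size // 2:
            peak[axis] -= size
    return tuple(int(p) for p in peak)

# Synthetic check: displace a random "image" by 3 rows and -5 columns
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
mov = np.roll(ref, shift=(3, -5), axis=(0, 1))
estimated = phase_correlation_shift(ref, mov)
```

In practice, subpixel refinement and terrain-dependent distortions would have to be handled on top of this translation estimate.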

This study showed that the a priori selection of four classes results in higher overall accuracies than aiming for only two, but further studies may investigate the ideal number of classes, in particular the number at which urban areas are classified most accurately, as systematically analyzed by Ma et al. (2017). This study also did not address how the urban class could be divided into traditional and refugee dwellings for a more detailed picture of the population distribution and their corresponding needs. This could be achieved with more elaborate, object-based approaches that take into account the textures of the segmented image objects, which are expected to differ between formal and informal buildings. In this respect, the ideal segmentation scale for the OBIA should also be determined more carefully (Drǎguţ et al., 2010).

We intentionally refrained from increasing the classification accuracy through post-processing because we wanted to maintain the comparability of the initial results as combinations of input data, objectives (Table 1), and approaches (Table 2). However, various techniques exist for increasing the quality of the classified maps, depending on the method. As demonstrated by Seebach et al. (2013), the accuracy of pixel-based binary classifications (as shown in Figure 8 and Table 5) can be improved by applying morphological filters, which at the same time reduce overestimations of changes assessed in multi-temporal analyses. Multi-class results require different techniques, such as majority filtering, object-based voting, relearning algorithms, or random field filtering (Huang et al., 2014), which differ in their complexity and potential to increase the classification accuracy. Errors in object-based results (Figure 6, bottom left) in particular are easy to reduce with automated loops based on further parameters such as shape, neighborhood, or manually defined class thresholds (Rastner et al., 2014).

From a user perspective, it must be evaluated whether the achieved accuracies are sufficient for emergency situations. As outlined in the beginning, situations exist in which people flee from violent conflicts or natural disasters and humanitarian organizations have little indication of their number or location. Whenever displaced people establish makeshift dwellings, the proposed approach could be used to locate them so that medical assistance, shelter, and relief items can be provided more quickly. Identifying the locations of displaced populations as early as possible is crucial for planning and providing the required humanitarian assistance in a timely fashion. However, a general spatial indication is still necessary because the satellites used have limited footprints. 
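The morphological cleaning of binary classification maps mentioned above (cf. Seebach et al., 2013) can be sketched in a few lines. The example below is purely illustrative: an invented built-up patch with salt-and-pepper misclassifications, cleaned with an opening (removing isolated false positives) followed by a closing (filling small holes).

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)

# Invented binary built-up map: one settlement patch,
# plus roughly 2% salt-and-pepper classification noise
truth = np.zeros((60, 60), dtype=bool)
truth[20:40, 20:40] = True
noisy = truth ^ (rng.random(truth.shape) < 0.02)  # flip ~2% of the pixels

# Opening removes isolated false positives; closing fills small holes
kernel = np.ones((3, 3), dtype=bool)
cleaned = ndimage.binary_closing(ndimage.binary_opening(noisy, kernel), kernel)

errors_before = int((noisy != truth).sum())
errors_after = int((cleaned != truth).sum())
```

The structuring element (here a 3×3 square) controls the minimum object size that survives; for VHR imagery of small dwellings it must be chosen conservatively, or genuine buildings will be filtered out along with the noise.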
In such situations, the accuracies of the proposed approaches are sufficient because they allow the quick identification of settlements within an operational workflow. As a recent example, the violent conflict in the Tigray region of Ethiopia starting in November 2020 had displaced around two million Ethiopians by May 2021. In the early stages of the conflict, the locations of these people were unknown, but this information was urgently requested in order to provide medical aid and document the violence against civilians (Annys et al., 2021). The potential of radar data for detecting the evolution of rural settlements has already been demonstrated on a global scale by Esch et al. (2017). However, as reported by van den Hoek and Friedrich (2021), global datasets such as the High Resolution Settlement Layer (HRSL) or the World Settlement Footprint (WSF) currently lack the precision to detect refugee settlements, nor are they updated at intervals that allow responses to humanitarian emergencies. Accordingly, high or very high-resolution radar data (see Braun, 2021, for an overview of capable satellites) are the ideal input for providing short-term information on newly emerging settlements in humanitarian crisis situations, and the proposed method could serve as a suitable framework. In such scenarios, at least two images (before and during the conflict) are systematically compared and classified to identify changes in rural dwellings. As these situations require a workflow ready for operational use (Lang et al., 2020), the advances of object-based image analysis could be combined with the transferability of convolutional neural networks (CNNs), as described by Saha et al. (2021), who developed an unsupervised method to identify changes in buildings. 
This reduces the dependence on labeled training sites as presented in this study (Figure 5) and therefore makes detection more objective and transferable. This is especially desirable given the challenges related to speckle and geometric distortions in radar data, which complicate both the visual interpretation of changes and their automated detection based on backscatter thresholding. First approaches highlighting the emergence of new buildings based on openly available Sentinel-1 imagery have already been tested successfully in urban agglomerations (Zhang et al., 2019) but need to be refined for rural areas with smaller buildings constructed from lighter materials.
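A common baseline for such bi-temporal SAR comparisons is log-ratio change detection, which converts multiplicative speckle into additive noise so that a simple threshold becomes meaningful. The sketch below uses entirely synthetic intensities (gamma-distributed speckle over a constant background, with one invented patch brightening in the second image to mimic new dwellings); it is a conceptual illustration, not the workflow of the cited studies.

```python
import numpy as np

rng = np.random.default_rng(7)

dims = (50, 50)

def speckle():
    """Multiplicative gamma speckle with mean 1 (multi-looked intensity)."""
    return rng.gamma(shape=9.0, scale=1.0 / 9.0, size=dims)

# Synthetic intensity pair: constant background with speckle;
# a 10x10 patch brightens in the second image (hypothetical new dwellings)
before = 0.05 * speckle()
after = 0.05 * speckle()
after[10:20, 10:20] *= 8.0

# Log-ratio: large positive values indicate increased backscatter
log_ratio = np.log(after / before)
changed = log_ratio > 1.0  # threshold in log space (~e-fold increase)

detection_rate = changed[10:20, 10:20].mean()
background = changed.copy()
background[10:20, 10:20] = False
false_alarm_rate = background.sum() / (changed.size - 100)
```

In operational settings, the threshold would be derived from the speckle statistics rather than fixed, and spatial filtering would be applied to suppress isolated false alarms.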

On the other hand, the high classification accuracies of the two vegetation classes are encouraging with respect to ecosystem monitoring. The proposed classification schemes can thus be modified for more ecosystem-focused approaches (e.g., the impact of emerging settlements on their natural surroundings; Braun et al., 2016; Braun & Hochschild, 2017), especially because such impacts are often used as an argument against hosting refugees in certain areas, even without evidence of a negative ecological effect (Kibreab, 1997). Another aspect is food security, which can be monitored and supported by strategically managing resources based on geospatial techniques and satellite-retrieved information (Enenkel et al., 2015); this study showed that data of such quality are highly suitable for these tasks. As documented in other studies, a linear relationship exists between shortwave radar backscatter intensity and the development stages of grassland and crops (Asilo et al., 2019; Fontanelli et al., 2013). This could be exploited to assess landscape dynamics related to displacement. Lastly, the sudden decrease of low vegetation or emerging patches of vegetation-free areas of conspicuous shape could serve as indicators of new settlements, even if the dwellings themselves do not produce a distinctive signal (e.g., makeshift tents; Braun, 2021).
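Exploiting such a linear backscatter-growth relationship amounts to fitting and inverting a regression line. The sketch below uses invented numbers (slope, intercept, and noise level are assumptions, not values from the cited studies) to show how a leaf area index could be estimated from X-band backscatter once the relationship has been calibrated:

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented calibration data: X-band backscatter (dB) rises linearly
# with leaf area index (LAI); slope/intercept/noise are hypothetical
lai = rng.uniform(0.5, 4.0, 40)
sigma0_db = -16.0 + 2.5 * lai + rng.normal(0.0, 0.4, 40)

# Fit the linear model, then invert it to estimate LAI from backscatter
slope, intercept = np.polyfit(lai, sigma0_db, 1)
estimated_lai = (sigma0_db - intercept) / slope
```

In a monitoring application, the fitted model would be applied to newly acquired scenes to track grassland or crop development over time.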

Given all these potentials, if the aim of humanitarian NGOs is to map gradual changes of existing settlements, overestimations or underestimations of only 10%, as reported in this study, can already lead to false conclusions that camp growth has or has not taken place. Approaches using time series could be more appropriate in such settings (Tuna et al., 2019). Ultimately, the application context determines whether emergency situations benefit from robust and simple methods or whether more data or more complex techniques are necessary to achieve higher quality. The answer to this question must be found in collaborations between scientists and users.

Lastly, while discussing all technical aspects of camp mapping, we want to underline the points raised by Rothe et al. (2021), who stated that all visual technologies can have political effects. While mapping refugee camps may assist humanitarian work, it may also affect the people located in these areas by exposing them to potential threats. Rothe et al. (2021) stressed that the mapped camp space is more than a closed area of informal dwellings; it also comprises cultures, infrastructures, practices, technologies, and, most importantly, people, all of which cannot be captured by such approaches. We agree and emphasize that the overall aim of humanitarian remote sensing must not be forgotten: supporting the decision making of those providing help and ensuring that populations in need receive assistance. In the future, more integrated solutions should be developed that involve both the scientists who develop methods and the humanitarian workers who evaluate the results qualitatively, so that the generated products are not only correct but also useful.

Figures & Tables

Figure 1. Study area.
Figure 2. Comparison of the optical (left) and radar (right) images.
Figure 3. Spatial extent of the reference building footprints.
Figure 4. Building footprints (green) from the village of Nizi in both optical (left) and radar (right) images.
Figure 5. Spatial distribution of training and validation sites used for the supervised LULC classification.
Figure 6. Spatial results of the supervised LULC classification.
Figure 7. Training and validation data of the binary classification.
Figure 8. Spatial results of the supervised binary classification.
Figure 9. Validation areas of the camp classification (red: camp; black: no camp).
Figure 10. Relationship between camp detection accuracy and building density.
Table 1. Objectives of the camp-related mapping activities used in this study.
Table 2. Methodological approaches used in this study.
Table 3. Spatial representation of training and validation sites used for the supervised LULC classification.
Table 4. Accuracy of the supervised LULC classification.
Table 5. Accuracy of the supervised binary classification.
Table 6. Accuracy of the supervised binary classification evaluated for all camp areas.
Table 7. Producer accuracies (PAs) of the supervised binary classification evaluated for each camp.
Table 8. User accuracies (UAs) of the supervised binary classification evaluated for each camp.


Amitrano, D., Belfiore, V., Cecinati, F., Di Martino, G., Iodice, A., Mathieu, P.‑P., Medagli, S., Poreh, D., Riccio, D., & Ruello, G. (2016). Urban areas enhancement in multitemporal SAR RGB images using adaptive coherence window and texture information. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 9(8), 3740–3752.

Annys, S., VandenBempt, T., Negash, E., Sloover, L. de, Ghekiere, R., Haegeman, K., Temmerman, D., & Nyssen, J. (2021). Tigray: Atlas of the humanitarian situation: Version 2.1.

Asilo, S., Nelson, A., Bie, K. de, Skidmore, A., Laborte, A., Maunahan, A., & Quilang, E. J. P. (2019). Relating X-band SAR backscattering to leaf area index of rice in different phenological phases. Remote Sensing, 11(12), 1462.

Auer, S., & Gernhardt, S. (2014). Linear signatures in urban SAR images—Partly misinterpreted? IEEE Geoscience and Remote Sensing Letters, 11(10), 1762–1766.

Belgiu, M., & Drăguţ, L. (2016). Random forest in remote sensing: A review of applications and future directions. ISPRS Journal of Photogrammetry and Remote Sensing, 114, 24–31.

Bjorgo, E. (2000). Refugee camp mapping using very high spatial resolution satellite sensor images. Geocarto International, 15(2), 79–88.

Blaschke, T., Hay, G. J., Kelly, M., Lang, S., Hofmann, P., Addink, E., Feitosa, R. Q., van der Meer, F., van der Werff, H., & van Coillie, F. (2014). Geographic object-based image analysis–towards a new paradigm. ISPRS Journal of Photogrammetry and Remote Sensing, 87, 180–191.

Blaschke, T., Lang, S., & Hay, G. (2008). Object-based image analysis: spatial concepts for knowledge-driven remote sensing applications. Springer Science & Business Media.

Braun, A. (2019). Radar satellite imagery for humanitarian response: Bridging the gap between technology and applications (Unpublished doctoral dissertation). Eberhard-Karls Universität Tübingen, Tübingen.

Braun, A. (2020). Spaceborne radar imagery – An under-utilized source of information for humanitarian relief. Journal of Humanitarian Engineering, 8(1).

Braun, A. (2021). Extraction of dwellings of displaced persons from VHR radar imagery – A review on current challenges and future perspectives. GI_Forum, 1, 201–208.

Braun, A., & Hochschild, V. (2015). Combined use of SAR and optical data for environmental assessments around refugee camps in semiarid landscapes. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XL-7/W3, 777–782.

Braun, A., & Hochschild, V. (2017). A SAR-based index for landscape changes in African savannas. Remote Sensing, 9(4), 359.

Braun, A., Lang, S., & Hochschild, V. (2016). Impact of refugee camps on their environment: A case study using multi-temporal SAR data. Journal of Geography, Environment and Earth Science International, 4(2), 1–17.

Breiman, L. (2001). Random Forests. Machine Learning, 45(1), 5–32.

Chen, H., Arora, M. K., & Varshney, P. K. (2003). Mutual information-based image registration for remote sensing data. International Journal of Remote Sensing, 24(18), 3701–3706.

Cheng, S. C., & Hsia, S. C. (2003). Fast algorithms for color image processing by principal component analysis. Journal of Visual Communication and Image Representation, 14(2), 184–203.

Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1), 37–46.

Congalton, R. G. (2001). Accuracy assessment and validation of remotely sensed and other spatial information. International Journal of Wildland Fire, 10(4), 321–328.

Conrad, O., Bechtel, B., Bock, M., Dietrich, H., Fischer, E., Gerlitz, L., Wehberg, J., Wichmann, V., & Böhner, J. (2015). System for automated geoscientific analyses (SAGA): Version 7.7.0. Geoscientific Model Development, 8(7), 1991–2007.

Convergne, E., & Snyder, M. R. (2015). Making maps to make peace: Geospatial technology as a tool for UN peacekeeping. International Peacekeeping, 22(5), 565–586.

Curzi, G., Modenini, D., & Tortora, P. (2020). Large constellations of small satellites: A survey of near future challenges and missions. Aerospace, 7(9), 133.

Dave, C. P., Joshi, R., & Srivastava, S. S. (2015). A survey on geometric correction of satellite imagery. International Journal of Computer Applications, 116(12).

Denis, G., Boissezon, H. de, Hosford, S., Pasco, X., Montfort, B., & Ranera, F. (2016). The evolution of earth observation satellites in Europe and its impact on the performance of emergency response services. Acta Astronautica, 127, 619–633.

Drǎguţ, L., Tiede, D., & Levick, S. R. (2010). ESP: a tool to estimate scale parameter for multiresolution image segmentation of remotely sensed data. International Journal of Geographical Information Science, 24(6), 859–871.

Dubois, C., Thiele, A., & Hinz, S. (2016). Building detection and building parameter retrieval in InSAR phase images. ISPRS Journal of Photogrammetry and Remote Sensing, 114, 228–241.

Enenkel, M., See, L., Karner, M., Álvarez, M., Rogenhofer, E., Baraldès-Vallverdú, C., Lanusse, C., & Salse, N. (2015). Food security monitoring via mobile data collection and remote sensing: Results from the Central African Republic. PloS One, 10(11), e0142030.

Esch, T., Heldens, W., Hirner, A., Keil, M., Marconcini, M., Roth, A., Zeidler, J., Dech, S., & Strano, E. (2017). Breaking new ground in mapping human settlements from space – The global urban footprint. ISPRS Journal of Photogrammetry and Remote Sensing, 134, 30–42.

European Space Agency. (2021). SNAP – ESA sentinel application platform: Version 8.0.3. Retrieved from

Ferro, A., Brunner, D., Bruzzone, L., & Lemoine, G. (2011). On the relationship between double bounce and the orientation of buildings in VHR SAR images. IEEE Geoscience and Remote Sensing Letters, 8(4), 612–616.

Fontanelli, G., Paloscia, S., Zribi, M., & Chahbi, A. (2013). Sensitivity analysis of x-band sar to wheat and barley leaf area index in the Merguellil Basin. Remote Sensing Letters, 4(11), 1107–1116.

Franceschetti, G., Iodice, A., & Riccio, D. (2002). A canonical problem in electromagnetic backscattering from buildings. IEEE Transactions on Geoscience and Remote Sensing, 40(8), 1787–1801.

Füreder, P., Lang, S., Rogenhofer, E., Tiede, D., & Papp, A. (2015). Monitoring displaced people in crisis situations using multi-temporal vhr satellite data during humanitarian operations in South Sudan. GI_Forum, 1, 391–401.

Ghorbanzadeh, O., Tiede, D., Wendt, L., Sudmanns, M., & Lang, S. (2021). Transferable instance segmentation of dwellings in a refugee camp – integrating CNN and OBIA. European Journal of Remote Sensing, 54(sup1), 127–140.

Google Earth. (2021). Satellite image of Nizi, DRC: 1.728703° N 30.308399° E [Eye altitude 3.83 km].

Hagenlocher, M., Lang, S., & Tiede, D. (2012). Integrated assessment of the environmental impact of an IDP camp in Sudan based on very high-resolution multi-temporal satellite imagery. Remote Sensing of Environment, 126, 27–38.

Helder, D., Anderson, C., Beckett, K., Houborg, R., Zuleta, I., Boccia, V., Clerc, S., Kuester, M., Markham, B., & Pagnutti, M. (2020). Observations and recommendations for coordinated calibration activities of government and commercial optical satellite systems. Remote Sensing, 12(15), 2468.

Herfort, B., Lautenbach, S., Porto de Albuquerque, J., Anderson, J., & Zipf, A. (2021). The evolution of humanitarian mapping within the OpenStreetMap community. Scientific Reports, 11(1), 3037.

Huang, X., Lu, Q., Zhang, L., & Plaza, A. (2014). New postprocessing methods for remote sensing image classification: A systematic study. IEEE Transactions on Geoscience and Remote Sensing, 52(11), 7140–7159.

Ignatenko, V., Laurila, P., Radius, A., Lamentowski, L., Antropov, O., & Muff, D. (2020). ICEYE microsatellite SAR constellation status update: Evaluation of first commercial imaging modes. In 2020 IEEE International Geoscience & Remote Sensing Symposium: Proceedings (pp. 3581–3584). IEEE.

Joyce, K. E., Belliss, S. E., Samsonov, S. V., McNeill, S. J., & Glassey, P. J. (2009). A review of the status of satellite remote sensing and image processing techniques for mapping natural hazards and disasters. Progress in Physical Geography: Earth and Environment, 33(2), 183–207.

Kibreab, G. (1997). Environmental causes and impact of refugee movements: A critique of the current debate. Disasters, 21(1), 20–38.

Lang, S., Füreder, P., Riedler, B., Wendt, L., Braun, A., Tiede, D., Schoepfer, E., Zeil, P., Spröhnle, K., Kulessa, K., Rogenhofer, E., Bäuerl, M., Öze, A., Schwendemann, G., & Hochschild, V. (2020). Earth observation tools and services to increase the effectiveness of humanitarian assistance. European Journal of Remote Sensing, 53(2), 67–85.

Lang, S., Füreder, P., & Rogenhofer, E. (2018). Earth observation for humanitarian operations. In C. Al-Ekabi & S. Ferretti (Eds.), Yearbook on space policy 2016 (pp. 217–229). Springer International Publishing.

Ma, L., Li, M., Ma, X., Cheng, L., Du, P., & Liu, Y. (2017). A review of supervised object-based land-cover image classification. ISPRS Journal of Photogrammetry and Remote Sensing, 130, 277–293.

Médecins Sans Frontières [Doctors Without Borders]. (2020). Displaced by community violence, living in dire conditions in Ituri province. Retrieved from

Pal, M. (2005). Random forest classifier for remote sensing classification. International Journal of Remote Sensing, 26(1), 217–222.

Planet Team. (2017). Planet application program interface: In space for life on earth. Retrieved from

Pottier, J. (2003). Emergency in Ituri, DRC: Political complexity, land and other challenges in restoring food security. Paper presented at FAO International Workshop on Food Security in Complex Emergencies: Building policy frameworks to address longer-term programming challenges. Symposium conducted at the meeting of Food and Agriculture Organization of the United Nations, Tivolo.

Rastner, P., Bolch, T., Notarnicola, C., & Paul, F. (2014). A comparison of pixel- and object-based glacier classification with optical satellite images. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 7(3), 853–862.

Rogenhofer, E., Laborderie, S. de, Vyncke, J., Braun, A., & Schwendemann, G. (2020). EO data supporting MSF’s medical interventions. In S. Ferretti (Ed.), Studies in Space Policy. Space capacity building in the XXI century (Vol. 22, pp. 147–163). Springer International Publishing.

Rothe, D., Fröhlich, C., & Rodriguez Lopez, J. M. (2021). Digital humanitarianism and the visual politics of the refugee camp: (Un)seeing control. International Political Sociology, 15(1), 41–62.

Safyan, M. (2020). Planet’s dove satellite constellation. In J. N. Pelton (Ed.), Handbook of small satellites (pp. 1–17). Springer International Publishing.

Saha, S., Bovolo, F., & Bruzzone, L. (2021). Building change detection in VHR SAR images via unsupervised deep transcoding. IEEE Transactions on Geoscience and Remote Sensing, 59(3), 1917–1929.

Seebach, L., Strobl, P., Vogt, P., Mehl, W., & San-Miguel-Ayanz, J. (2013). Enhancing post-classification change detection through morphological post-processing – A sensitivity analysis. International Journal of Remote Sensing, 34(20), 7145–7162.

Shahzad, M., Maurer, M., Fraundorfer, F., Wang, Y., & Zhu, X. X. (2018). Buildings detection in VHR SAR images using fully convolution neural networks. IEEE Transactions on Geoscience and Remote Sensing, 57(2), 1100–1116.

Small, D. (2011). Flattening gamma: Radiometric terrain correction for SAR Imagery. IEEE Transactions on Geoscience and Remote Sensing, 49(8), 3081–3093.

Sudmanns, M., Tiede, D., Augustin, H., & Lang, S. (2019). Assessing global sentinel-2 coverage dynamics and data availability for operational earth observation applications using the EO-Compass. International Journal of Digital Earth, 13(7), 768–784.

Sun, Y., Montazeri, S., Wang, Y., & Zhu, X. X. (2020). Automatic registration of a single SAR image and GIS building footprints in a large-scale urban area. ISPRS Journal of Photogrammetry and Remote Sensing, 170, 1–14.

Tiede, D., Füreder, P., Lang, S., Hölbling, D., & Zeil, P. (2013). Automated analysis of satellite imagery to provide information products for humanitarian relief operations in refugee camps – From scientific development towards operational services. Photogrammetrie – Fernerkundung – Geoinformation, 3, 185–195.

Tomaszewski, B., Tibbets, S., Hamad, Y., & Al-Najdawi, N. (2016). Infrastructure evolution analysis via remote sensing in an urban refugee camp – Evidence from Za’atari. Procedia Engineering, 159, 118–123.

Tuna, C., Merciol, F., & Lefevre, S. (2019, May 22-24). Monitoring urban growth with spatial filtering of satellite image time-series. Paper presented at Joint Urban Remote Sensing Event (JURSE), Vannes France. IEEE.

Tusiime, N. (2019). The complexity of ethnic conflict: Hema and Lendu case study. Malmö University.

van den Hoek, J., & Friedrich, H. K. (2021). Satellite-based human settlement datasets inadequately detect refugee settlements: A critical assessment at thirty refugee settlements in Uganda. Remote Sensing, 13(18), 3574.

Vasile, G., Trouvé, E., Lee, J. S., & Buzuloiu, V. (2006). Intensity-driven adaptive-neighborhood technique for polarimetric and interferometric SAR parameters estimation. IEEE Transactions on Geoscience and Remote Sensing, 44(6), 1609–1621.

Vlassenroot, K. (2004). The politics of rebellion and intervention in Ituri: The emergence of a new political complex? African Affairs, 103(412), 385–412.

Wendt, L., Hilberg, S., Robl, J., Braun, A., Rogenhofer, E., Dirnberger, D., Strasser, T., Füreder, P., & Lang, S. (2015). Using remote sensing and GIS to support drinking water supply in refugee/IDP camps. Journal for Geographic Information Science, 1, 449–458.

Weydahl, D. J. (2002). Backscatter changes of urban features using multiple incidence angle Radarsat images. Canadian Journal of Remote Sensing, 28(6), 782–793.

Witmer, F. D. W. (2015). Remote sensing of violent conflict: eyes from above. International Journal of Remote Sensing, 36(9), 2326–2352.

Zhang, H., Cao, H., Wang, C., Dong, Y., Zhang, B., & Li, L. (2019). Detection of land use change in urban agglomeration using sentinel-1 SAR data. Paper presented at 10th International Workshop on the Analysis of Multitemporal Remote Sensing Images (pp. 1–4). IEEE.

Zhao, L., Zhou, X., & Kuang, G. (2013). Building detection from urban SAR image using building characteristics and contextual information. EURASIP Journal on Advances in Signal Processing, 2013(1), 1–16.
