Motivated by applications in political science, we propose the novel vision task of estimating the legibility of international borders in overhead imagery. We collect a global imagery dataset, propose several baselines, and evaluate performance on a crowdsourced validation dataset.
Key aspects of international policy, such as those pertaining to migration and trade, manifest in the physical world at international political borders; for this reason, borders are of interest to political scientists studying the impacts and implications of these policies. While some prior efforts have worked to characterize features of borders using trained human coders and crowdsourcing, these are limited in scale by the need for manual annotations. In this paper, we present a new task, dataset, and baseline approaches for estimating the legibility of international political borders automatically and on a global scale. Our contributions are to (1) define the border legibility estimation task; (2) collect a dataset of overhead (aerial) imagery for the entire world's international borders; (3) propose several classical and deep-learning-based approaches to establish a baseline for the task; and (4) evaluate our algorithms against a validation dataset of crowdsourced legibility comparisons. Our results on this challenging task confirm that while low-level features can often explain border legibility, mid- and high-level features are also important. Finally, we show preliminary results of a global analysis of legibility, confirming some of the political and geographic influences on legibility.
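To make the notion of a low-level legibility score concrete, the sketch below shows one possible classical proxy: comparing pixel-intensity histograms on the two sides of a border within an image tile, so that tiles with strongly contrasting sides score as more "legible." This is only an illustrative assumption-based example, not the paper's actual baseline; the function name, inputs (a grayscale tile and a boolean side mask), and the chi-squared distance are hypothetical choices for demonstration.

import numpy as np

def histogram_dissimilarity(img: np.ndarray, side_a: np.ndarray, bins: int = 32) -> float:
    """Chi-squared distance between intensity histograms of the two border sides.

    `img` is a grayscale tile (H x W), `side_a` a boolean mask of the same shape
    marking pixels on one side of the border. Higher values indicate a larger
    appearance difference across the border in this toy low-level proxy.
    """
    a = img[side_a].astype(np.float64)
    b = img[~side_a].astype(np.float64)
    ha, _ = np.histogram(a, bins=bins, range=(0, 255), density=True)
    hb, _ = np.histogram(b, bins=bins, range=(0, 255), density=True)
    eps = 1e-12  # avoid division by zero in empty bins
    return float(0.5 * np.sum((ha - hb) ** 2 / (ha + hb + eps)))

if __name__ == "__main__":
    # Synthetic example: the left half is darker than the right half, so the
    # "border" down the middle should receive a relatively high score.
    rng = np.random.default_rng(0)
    tile = np.concatenate(
        [rng.integers(0, 100, size=(64, 32)), rng.integers(150, 255, size=(64, 32))],
        axis=1,
    ).astype(np.uint8)
    mask = np.zeros((64, 64), dtype=bool)
    mask[:, :32] = True
    print(histogram_dissimilarity(tile, mask))

Scores like this could be compared across pairs of tiles to mimic the crowdsourced pairwise legibility comparisons used for validation; the paper's reported baselines may use different features and models.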
@InProceedings{Ortega_2023_WACV,
    author    = {Ortega, Trevor and Nelson, Thomas and Crane, Skyler and Myers-Dean, Josh and Wehrwein, Scott},
    title     = {Computer Vision for International Border Legibility},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2023},
    pages     = {3838-3847}
}
This work was supported in part by the National Science Foundation under Grant No. 1917573. The authors thank Andrew Dunn, Nate Maassen, and Vivian White for their help with data collection.