In recent years, there has been an exponential increase in aerial motion imagery due to advances in airborne sensor technologies, the rising adoption of manned and unmanned aerial vehicles (UAVs), and the emergence of new applications associated with these technologies, including aerial surveillance, traffic monitoring, search and rescue, disaster relief, and precision agriculture. There is a growing need for robust aerial image and video analysis capabilities to take full advantage of this data and to address the pressing needs of these applications. Novel methods, particularly those relying on artificial intelligence/machine learning (AI/ML) approaches, coupled with rapid advances in computational hardware (more powerful, lighter-weight, lower-energy, lower-cost computing platforms), are revolutionizing the image processing, pattern recognition, and computer vision fields.

Download the Call for Papers (PDF)

Keynote Speaker

Avideh Zakhor
University of California-Berkeley, Dept. of Electrical Engineering and Computer Sciences
Title: EO/IR processing of unmanned aerial vehicle captured imagery [video]
Abstract: In this talk, I will describe a number of image processing pipelines associated with drone imagery. I start with a reality capture pipeline for 3D building reconstruction from RGB imagery captured via an unmanned aerial vehicle. This is an important problem with applications in urban planning, emergency response, disaster planning, and building energy efficiency. We leverage commercial software to construct a 3D point cloud from RGB drone imagery, which is then used in conjunction with image processing and geometric methods to extract a building footprint. The footprint is then extruded vertically based on the heights of the segmented rooftops, yielding an ultra-compact 3D model of the building. Next, I describe a method to estimate window-to-wall ratio (WWR), which has been shown to critically influence heat loss, solar gain, and daylighting levels, with implications for visual and thermal comfort as well as building energy performance. I will then describe a pipeline for semantic segmentation of 3D point clouds, obtained via photogrammetry from aerial RGB camera images, into buildings and vegetation. The motivation is to compute the minimum distance between vegetation and buildings, which can pose a fire hazard in fire-prone regions such as California. Finally, I will describe challenges in combining EO and IR imagery to detect ground objects using unmanned aerial vehicles.
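The fire-hazard analysis mentioned in the abstract hinges on computing the minimum distance between segmented building and vegetation point clouds. The abstract does not specify the implementation; one common and efficient approach is a nearest-neighbor query with a k-d tree, sketched below (the function name and toy coordinates are illustrative, and SciPy's `cKDTree` is an assumed choice, not the speaker's stated method):

```python
import numpy as np
from scipy.spatial import cKDTree

def min_building_vegetation_distance(building_pts, vegetation_pts):
    """Minimum Euclidean distance between two labeled point clouds.

    building_pts, vegetation_pts: (N, 3) arrays of XYZ coordinates,
    e.g. from a semantically segmented photogrammetry point cloud.
    """
    tree = cKDTree(building_pts)       # index building points once
    dists, _ = tree.query(vegetation_pts)  # nearest building point per vegetation point
    return float(dists.min())

# Toy example: a two-point "wall" and a tree 5 m away along the x-axis
building = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 3.0]])
tree_pts = np.array([[5.0, 0.0, 0.0], [5.0, 0.0, 2.0]])
print(min_building_vegetation_distance(building, tree_pts))  # → 5.0
```

Indexing the (typically larger) building cloud once and querying each vegetation point against it keeps the computation near O(M log N), which matters for photogrammetry clouds with millions of points.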

Panel Discussion on AI for Aerial Imagery

Raghuveer Rao
US Army Research Laboratory
Michael McDonald
Defence Research and Development Canada (DRDC), NATO
Mike McGinty
TerraSense Analytics
Lars Sommer
Fraunhofer IOSB, Karlsruhe Institute of Technology