Classification and Change Detection of high-resolution Point Clouds

Florian Politz, M. Sc.

Main Supervisor: M. Sester; Co-Supervisor:  C. Heipke


National Mapping Agencies hold area-wide, quality-controlled Airborne Laserscanning (ALS) point cloud data sets with different point densities, which are usually classified into at least the classes ground and non-ground. These classes represent the minimum information required to derive Digital Terrain Models (DTMs) and Digital Surface Models (DSMs) from these point clouds. ALS point clouds offer high positional and height accuracy, but a comparatively low point density of only about 4-16 points/m². The Arbeitsgemeinschaft der Vermessungsverwaltungen (AdV) is discussing an update cycle of 10 years for ALS point clouds. In addition, the National Mapping Agencies derive 3D point clouds from highly overlapping digital aerial images using the so-called "Dense Image Matching" (DIM) method, which yields point clouds at the resolution of the ground sampling distance, corresponding to about 100 points/m². These aerial images are acquired in a 2-3 year flight cycle.

Due to the different recording sensors, namely laser scanners and cameras, ALS and DIM point clouds differ not only in geometry but also in their specific properties and behaviour. While laser scanning records the reflection properties of the hit objects in the form of an intensity as well as echo properties, DIM point clouds are enriched by radiometric information from the aerial images. The laser pulse can penetrate vegetation, so ALS point clouds contain both the vegetation and the terrain structures below it. Due to image correlation, DIM point clouds are usually limited to a surface model; they only contain terrain points below vegetation if this terrain is visible in a sufficient number of images. The behaviour with respect to water also differs between the two point cloud types. Because of the reflectance behaviour of the laser pulse on water, its signal only returns to the receiver when shot close to nadir. As a result, ALS point clouds contain hardly any points on water. In DIM point clouds, the entire water surface is reconstructed; however, due to the lack of texture on the water surface, many pixels are matched erroneously, which leads to a large dispersion in height in the final point cloud.

The project focuses on the integration of these two point cloud types in the form of common processing procedures using the example of point cloud classification and change detection. Methods will be developed which are robust with respect to point accuracy and point resolution and which consider the specific behaviour of the two point cloud types for certain object types such as vegetation and water, thereby achieving high quality results independent of the point cloud type.

For a joint processing of the two point cloud types at a nationwide level, a purely geometric representation of the point clouds in raster format was chosen to counteract their irregular point distribution and point density; at the same time, the point cloud-specific attributes are kept out of the process. Using a suitable model, the points within a raster cell are divided, depending on their height, into up to two point sets. The height distributions within these subsets are then approximated by normal distributions, which finally represent the points (see Figure 1). As the developed methods should be applicable nationwide, different normalization methods were investigated, which reduce the influence of the terrain height on the geometry to different extents and thus move the mean values of the lower distributions towards zero. This geometric representation is used for both the classification and the change detection task.

Fig. 1: Sketch of exemplary height distributions.
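The per-cell representation described above can be illustrated with a minimal sketch. The exact criterion for splitting a cell's points into two height subsets is not specified in this summary; the gap-based split and its 2 m threshold below are assumptions for illustration only.

```python
import numpy as np

def cell_height_distributions(heights, gap=2.0):
    """Represent the point heights within one raster cell by up to two
    normal distributions (mean, std).

    The split at the largest height gap (and the `gap` threshold) is an
    assumption for this sketch, not the project's actual criterion."""
    h = np.sort(np.asarray(heights, dtype=float))
    if h.size == 0:
        return []
    gaps = np.diff(h)
    if gaps.size and gaps.max() > gap:
        # Two well-separated height clusters, e.g. canopy above terrain.
        i = int(np.argmax(gaps)) + 1
        subsets = [h[:i], h[i:]]
    else:
        # A single compact cluster, e.g. open terrain.
        subsets = [h]
    return [(float(s.mean()), float(s.std())) for s in subsets]
```

For example, a cell containing ground returns around 0 m and canopy returns around 10 m would yield two distributions, while a flat-terrain cell would yield one.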

For the classification, a neural network with an encoder-decoder structure is used as a classifier; it takes the height distributions as input and classifies the point clouds into the classes ground, building, water, non-ground and bridge. The network was trained on point cloud data from Rostock in Germany and tested in other areas without additional fine-tuning. The network is trained ten times, and the resulting ensemble of networks decides the respective class(es) within each grid cell. By applying the proposed normalization methods to the height distributions, turning them into heights above ground, the once-trained neural network achieved a classification accuracy of up to 93.4% for an ALS point cloud in the corresponding test area near Rostock. When applied to other areas, the classifier's overall accuracy was, in most cases, only about 1.5% lower than on the test set of the training area. When the same network was trained on DIM point clouds, the classification accuracy in the test area of Rostock even reached up to 97.6%.

Fig. 2: ALS point cloud from Brunswick in Germany, which is classified with the network trained on ALS data. Classes: ground - grey, buildings - red, water - blue, non-ground - green, bridge - purple.
Fig. 3: DIM point cloud of the test area near Rostock in Germany, which is classified with the network trained on DIM data. Classes: ground - grey, buildings - red, water - blue, non-ground - green.
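The ensemble decision described above can be sketched as a simple majority vote over the ten trained networks' per-cell predictions. How the project actually combines the networks (vote, probability averaging, or otherwise) is not stated here; the majority vote and its tie-breaking rule (lowest class index wins) are assumptions.

```python
import numpy as np

def ensemble_vote(predictions, num_classes=5):
    """Majority vote over an ensemble of per-cell class predictions.

    `predictions` has shape (n_networks, n_cells), with integer class
    labels. Ties are broken towards the lowest class index, which is an
    assumption of this sketch."""
    preds = np.asarray(predictions)
    counts = np.apply_along_axis(
        lambda col: np.bincount(col, minlength=num_classes),
        0, preds)                       # shape: (num_classes, n_cells)
    return counts.argmax(axis=0)        # winning class per cell
```

With three networks predicting [0, 1], [0, 2] and [1, 2] for two cells, the vote yields class 0 for the first cell and class 2 for the second.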

The height distributions and the classification results are used for the change detection, which also takes their level of uncertainty into account. The height distributions of two point clouds are tested statistically for a significant difference, which decides whether a change occurred between the datasets. The results can then be filtered using the class information of both point clouds for each raster cell.

Fig. 4: Detected height changes between two ALS point clouds from 2012 and 2016 for all classes or only classified buildings.
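A per-cell statistical change test of the kind described above can be sketched as a two-sample z-test on the fitted height distributions of the two epochs. The text only states that the distributions are tested for a significant change; the z-test, the use of point counts per cell, and the 5% significance level are assumptions of this sketch.

```python
from math import sqrt, erf

def significant_height_change(mu1, sigma1, n1, mu2, sigma2, n2, alpha=0.05):
    """Two-sample z-test on the mean heights of one raster cell in two
    epochs, using the per-epoch standard deviations and point counts.

    This is an illustrative stand-in for the project's actual test."""
    se = sqrt(sigma1**2 / n1 + sigma2**2 / n2)
    if se == 0.0:
        return mu1 != mu2
    z = abs(mu1 - mu2) / se
    # Two-sided p-value from the standard normal CDF.
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0))))
    return p < alpha
```

A newly built structure raising a cell's mean height from 0 m to 5 m would be flagged as a significant change, while a few centimetres of difference within the height noise would not.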
Florian Politz, M. Sc.
Institut für Kartographie und Geoinformatik
Appelstraße 9A
30167 Hannover
Room 616