PhD-Projects


1 | Alternative integrity measures based on interval mathematics

M.Sc. Hani Dbouk

Main Supervisor: Schön; Co-Supervisor: Neumann

Project description

This project deals with the development of alternative integrity measures based on interval mathematics, fuzzy theory and imprecise random variables. For the typical sensors used in our central experimentation facility, remaining systematic errors have to be bounded by error bands and subsequently by intervals. The theoretical studies have to be validated by dedicated experiments. In a next step, these error contributions must be transferred to the estimated parameters and navigation states in order to derive guaranteed bounds. In addition to the interval extension of the classical estimators and the determination of rotation-invariant uncertainty regions by zonotopes, the focus is on set inversion methods. Furthermore, the advantages of this approach compared to the classical, purely stochastic integrity description should be highlighted. Rules of thumb and best-practice procedures should be proposed for multi-sensor systems used in the central experimentation facility. This topic is linked to and interacts with the topics on filtering (2), representation in maps (3, 8), and the time uncertainty treated in topic 9.
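The core idea of bounding remaining systematics by intervals can be illustrated with a minimal sketch (a hypothetical toy example, not the project's actual implementation): interval arithmetic propagates error bands through a computation and yields a guaranteed enclosure of the result.

```python
# Minimal interval-arithmetic sketch: propagate unknown-but-bounded
# sensor errors to a derived quantity with a guaranteed enclosure.
class Interval:
    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Sum of intervals: endpoints add.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Product: take min/max over all endpoint combinations.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# A range observation with a +/- 0.5 m remaining systematic error band
# (made-up numbers), scaled by a bounded calibration factor:
rho = Interval(20000.0 - 0.5, 20000.0 + 0.5)
scale = Interval(0.999, 1.001)
bounded = rho * scale   # enclosure guaranteed to contain the true value
print(bounded)
```

The guarantee is conservative by construction: the true value can never fall outside the computed interval, which is exactly the property a deterministic integrity measure needs.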

2 | Development of a filter model with integrity measures

M.Sc. Ligang Sun

Main Supervisor: Neumann; Co-Supervisor: Schön

Project description

This project deals with the development of methods and procedures for reliable solutions with integrity measures for dynamic sensor networks in which both the observations and the system knowledge are superimposed by random and unknown but bounded (UBB) uncertainties. For this purpose, the nonlinear observation and system equations of the filter are extended to the imprecise case and solved by methods that consider both random and UBB uncertainties. This includes extending the available basic concepts with elements of guaranteed parameter estimation as well as filtering for merging the knowledge from the observations and about the system. A particular focus of this project is on the propagation of imprecision, with the goal of substantially reducing the overestimation that usually occurs with interval mathematics by finding suitable reformulations. These methods are transferred to selected multi-sensor systems in the experimental laboratory. As input values, measurements collected in projects of the Geodetic Institute can be used. Real measurements as well as measurements of superior accuracy from a reference measurement system are available; these data may serve for the validation of the methods.

Currently, we have developed two set-based filtering models: the Ellipsoidal and Gaussian Kalman Filter (EGKF) for discrete-time nonlinear systems and the Zonotopic and Gaussian Kalman Filter (ZGKF) for hybrid LTI systems. Here are the estimated results when applying the EGKF to multi-sensor system (MSS) and Mapathon data sets.

 

Future work includes the application of these set-based filtering models to real data sets, e.g., state estimation for multi-sensor systems or overtaking strategies for autonomous vehicles.
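The interplay of the two uncertainty types can be sketched with a deliberately simplified, one-dimensional stand-in for such a set-based filter (these are not the actual EGKF/ZGKF equations): the state carries a Gaussian variance for random noise and, in parallel, an interval radius for the UBB part, and both are propagated through predict and update.

```python
# 1-D toy filter: Gaussian variance (random noise) and interval
# radius (unknown-but-bounded error) propagated side by side.
def predict(x, var, radius, u, q_var, q_radius):
    # Prediction: state shifts, both uncertainty measures grow.
    return x + u, var + q_var, radius + q_radius

def update(x, var, radius, z, r_var, r_radius):
    # Kalman-style update; the UBB radii are combined with the same
    # gain, giving a conservative enclosure in this simplified model.
    k = var / (var + r_var)              # Kalman gain
    x_new = x + k * (z - x)
    var_new = (1 - k) * var
    radius_new = (1 - k) * radius + k * r_radius
    return x_new, var_new, radius_new

x, var, radius = 0.0, 1.0, 0.2
x, var, radius = predict(x, var, radius, u=1.0, q_var=0.1, q_radius=0.05)
x, var, radius = update(x, var, radius, z=1.2, r_var=0.5, r_radius=0.1)
print(x, var, radius)
```

The update shrinks both the variance and the UBB radius, which mirrors the intended behavior of the set-based filters: measurements tighten the guaranteed set as well as the stochastic confidence region.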

3 | Semantic segmentation of point clouds using transfer learning

M.Sc. Torben Peters

Main Supervisor: Sester; Co-Supervisor: Brenner

Project description

Many state-of-the-art solutions in the field of artificial intelligence are based on deep learning. In autonomous driving, deep learning is used, among other things, for motion planning, object classification, or even end-to-end learning. In classical supervised learning, such a network is trained with data from a specific domain for the given task. However, if one domain intersects with another, the knowledge can be transferred to another task; this procedure is called transfer learning. Autonomous cars often use different sensors to solve related tasks. In this project, we want to fuse different sensor information in order to solve semantic segmentation in 3D. The problem statement is to map and control the information flow between different domains while preserving the quality of the data. Furthermore, classification in 3D is still considered problematic since there is no common solution. To that end, we want to advance the field of 3D point cloud classification using state-of-the-art deep learning techniques.
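The transfer-learning pattern itself (reuse a frozen feature extractor, refit only a small head on the target domain) can be shown in a stdlib-only toy. Here the deep network is replaced by hand-crafted point features and the head is a simple perceptron; the data, features, and labels are all invented for illustration.

```python
# Toy transfer-learning sketch: frozen "feature extractor" + small
# head refit on the target domain (not the project's deep network).
import random

random.seed(0)

def features(point):
    # Stand-in for frozen pretrained layers: fixed features of a
    # 3D point (a real system would use learned deep features).
    x, y, z = point
    return [x, y, z, x * x + y * y, 1.0]

def train_head(data, labels, epochs=200, lr=0.1):
    # Refit only the linear head (perceptron) on the new domain.
    w = [0.0] * 5
    for _ in range(epochs):
        for p, t in zip(data, labels):
            f = features(p)
            pred = 1 if sum(wi * fi for wi, fi in zip(w, f)) > 0 else 0
            for i in range(5):
                w[i] += lr * (t - pred) * f[i]
    return w

# Toy target domain: label 3D points by height (ground vs. object),
# keeping a margin around the boundary so the task is separable.
raw = [(random.random(), random.random(), random.random()) for _ in range(60)]
data = [p for p in raw if abs(p[2] - 0.5) > 0.1]
labels = [1 if p[2] > 0.5 else 0 for p in data]
w = train_head(data, labels)
```

Only the five head weights are trained; the feature function never changes, which is the essential division of labor in transfer learning.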

4 | Optimal collaborative positioning

M.Sc. Nicolas Garcia Fernandez

Main Supervisor: Schön; Co-Supervisor: Heipke

Project Description

Collaborative Positioning (CP) is a promising technique in which a group of dynamic nodes (pedestrians, vehicles, etc.) equipped with different (time-synchronized) sensors can increase the quality of the Positioning, Navigation and Timing (PNT) information by exchanging navigation information as well as performing measurements between nodes or to elements of the environment (street furniture, buildings, etc.). The robustness of positioning increases, i.e. accuracy, integrity, continuity and availability improve compared to single-node positioning such as standalone GNSS or tightly coupled GNSS + IMU solutions. Hence, the navigation system can be considered as a geodetic network in which some of the nodes change their position and in which the links between nodes are defined by measurements carried out with additional sensors (V2X measurements). In order to gain insights into the behavior of such networks and to evaluate the benefits of CP with respect to single-vehicle approaches, a realistic simulation tool for collaborative navigation systems was developed. We combine satellite navigation, inertial navigation, laser scanning, photogrammetry and odometry to obtain an algorithm that fuses multi-sensor data and evaluates the correlations and dependencies of the estimated parameters and observations. The simulation tool allows us to assess the advantages and disadvantages of the different sensor fusion algorithms capable of executing CP techniques. The validation of the simulation tool with real data ensures that the conclusions drawn from the analysis lead to an improvement in the robustness of the estimation, which can be translated into safe and precise navigation.
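Why exchanging measurements between nodes helps can be seen in a one-dimensional toy (hypothetical numbers, not the simulation tool): a node fuses its own GNSS fix with the position implied by a well-positioned neighbour plus an inter-node range, and the fused variance drops below both inputs.

```python
# 1-D collaborative-positioning toy: inverse-variance fusion of a
# node's own fix with a neighbour-plus-range estimate.
def fuse(own_pos, own_var, neighbour_pos, neighbour_var, rng, rng_var):
    # Second estimate of our position implied by neighbour + range:
    implied = neighbour_pos + rng
    implied_var = neighbour_var + rng_var
    # Inverse-variance weighted combination of the two estimates:
    w = implied_var / (own_var + implied_var)
    fused = w * own_pos + (1 - w) * implied
    fused_var = own_var * implied_var / (own_var + implied_var)
    return fused, fused_var

# Own GNSS fix is poor (4 m^2); the neighbour is well positioned
# (0.25 m^2) and the V2X range is precise (0.04 m^2):
pos, var = fuse(own_pos=10.3, own_var=4.0,
                neighbour_pos=0.1, neighbour_var=0.25,
                rng=10.0, rng_var=0.04)
print(pos, var)
```

The fused variance is smaller than either input variance, which is the single-link version of the network effect the project studies at scale.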

5 | Dynamic Control Information for the relative Positioning of Nodes in a Sensor Network

M.Sc. Max Coenen

Main Supervisor: Rottensteiner; Co-Supervisor: Heipke

Project Description

 

Motivation: Autonomous driving comes with the need to handle highly dynamic environments. To ensure safe navigation and to enable collaborative positioning and interactive motion planning together with other traffic participants, 3D scene reconstruction and the identification and reconstruction of moving objects, especially of vehicles, are fundamental tasks. Furthermore, collaborative motion planning and vehicle positioning requires knowledge about the relative poses between cars for them to be used as vehicle-to-vehicle (V2V) measurements.

Enabling the communication and transmission of relative poses between the vehicles allows incorporating them as dynamic control information to enhance the positioning. This leads to the need of techniques for precise 3D object reconstruction to derive the poses of other vehicles relative to the position of the observing vehicle. In this context, stereo cameras provide a cost-effective solution for sensing a vehicle's surroundings.

Consequently, this project is mainly based on stereo images acquired by a stereo camera rig mounted on the moving vehicle as observations and has the goal to detect and identify other sensor nodes, i.e. other vehicles in this case, and to determine their relative poses.

Most of the existing techniques for vehicle detection and pose estimation are restricted to a coarse estimation of the viewpoint in 2D, whereas the precise determination of vehicle pose, especially of the orientation, and vehicle shape is still an open problem, that is addressed here.

Goal: The goal of this project is to propose a method for precise 3D reconstruction of vehicles in order to reason about the relative vehicle poses in 3D, i.e. the position and rotation of the vehicles with respect to the observing vehicle, and to leverage the determined shape for the identification of other sensor nodes.

For the detection of vehicles, we combine a generic 3D object detection approach with a state-of-the-art image object detector. To reason about the vehicle poses and shapes, a deformable vehicle model is learned from CAD vehicle models (Fig. 1). Given the vehicle detections our aim is to fit such a deformable vehicle model to each detection, thus finding the pose and shape parameters representing the model that describes the information derived from the images best. The model fitting is based on the shape prior, reconstructed 3D points, image features and automatically derived scene knowledge. Fig. 2 shows qualitative results of our vehicle reconstruction approach.
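The deformable-model fitting reduces, at its core, to estimating a few shape coefficients in a linear model. A one-dimensional, stdlib-only sketch with made-up numbers (not the actual CAD-learned model) shows the least-squares fit of a single deformation coefficient:

```python
# Toy deformable-shape fit: observed ≈ mean_shape + a * mode, solve
# for the scalar shape coefficient a by least squares.
mean_shape = [0.0, 1.0, 2.0, 3.0]   # mean vertex positions (made up)
mode = [0.0, 0.1, 0.2, 0.3]         # one deformation mode ("length")

def fit_coefficient(observed):
    # Closed-form 1-parameter least squares: a = <mode, obs-mean> / <mode, mode>.
    num = sum(m * (o - s) for m, o, s in zip(mode, observed, mean_shape))
    den = sum(m * m for m in mode)
    return num / den

# Points "reconstructed from stereo" for a longer-than-average car:
obs = [0.0, 1.05, 2.10, 3.15]
a = fit_coefficient(obs)
shape = [s + a * m for s, m in zip(mean_shape, mode)]
```

The real fit additionally optimizes pose and uses image features and scene knowledge, but the shape part follows this linear-subspace structure with a handful of modes instead of one.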

6 | Collaborative Pedestrian Tracking

M.Sc. Uyen Nguyen

Main Supervisor: Heipke; Co-Supervisor: Rottensteiner

Project description

People detection and tracking are significant for applications related to autonomous driving, robotics, safety surveillance, etc. Most tracking work focuses on assigning detections and generating a complete trajectory rather than improving the geometric accuracy of the resulting trajectories in world coordinates. However, in many practical applications such as autonomous driving, the geometric accuracy is a significant factor that needs to be taken into account.

In this project, we address the pedestrian tracking problem using stereo images. With stereopsis information, the 3D position of tracked people in object space can be estimated, which is significant for applications related to autonomous driving. Moreover, we also extend the multi-person tracking problem from a single viewpoint to multiple perspectives, so that missing information from one viewpoint can be compensated by the others. A scenario was set up in which multiple moving cars collaboratively carried out the tracking task, illustrated in Fig. 1.

Our tracking system is based on the tracking-by-detection method, which comprises three phases: first, an object detector is run on each image independently; second, corresponding detections in different frames are associated with each other; finally, a filtering step is employed to smooth the trajectories based on their previous states. Fig. 2 shows exemplary trajectories of tracked pedestrians back-projected into image space.
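The three phases can be sketched in miniature (1-D positions, greedy nearest-neighbour association, and exponential smoothing as a stand-in for the filter; the real system uses a proper detector and filter):

```python
# Tracking-by-detection in miniature: phase 1 (detection) is assumed
# given; phases 2 and 3 are shown below.
def associate(prev_dets, curr_dets, gate=2.0):
    # Phase 2: greedy nearest-neighbour association between frames,
    # rejecting matches farther apart than the gate.
    pairs, used = [], set()
    for i, p in enumerate(prev_dets):
        best, best_d = None, gate
        for j, c in enumerate(curr_dets):
            if j in used:
                continue
            d = abs(c - p)
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            pairs.append((i, best))
            used.add(best)
    return pairs

def smooth(track, alpha=0.5):
    # Phase 3: exponential smoothing as a simple filter stand-in.
    out = [track[0]]
    for z in track[1:]:
        out.append(alpha * z + (1 - alpha) * out[-1])
    return out

frames = [[0.0, 5.0], [0.4, 5.1], [0.9, 5.3]]   # detections per frame
pairs = associate(frames[0], frames[1])          # [(0, 0), (1, 1)]
trajectory = smooth([0.0, 0.4, 0.9])             # one pedestrian's track
```

In the project, association works on 3D positions from stereo and across multiple collaborating vehicles, but the frame-to-frame structure is the same.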

7 | Integrity information-based georeferencing

M.Sc. Sören Vogel

Main Supervisor: Neumann; Co-Supervisor: Brenner

Project Description

 

Mainly due to the absence of accurate and reliable GNSS observations, georeferencing of kinematic multi-sensor systems (MSS) with integrity aspects within complex indoor environments (e.g. common office spaces) as well as in inner-city outdoor environments (e.g. with many high buildings) is very challenging and expensive. Thus, demands for real-time processing or high accuracy are difficult to meet. This dissertation project deals with the development of a general mathematical approach for georeferencing a kinematic MSS based on various kinds of information. The research tasks comprise in particular the examination of an optimal integration of laser-scanner-based object space information as well as a mathematical formulation of prior information based on geometrical (in)equality constraints (e.g. total tolerances in the building industry based on standards). In addition, a permanent guarantee of integrity for the MSS by consideration of such independent geometrical information will be ensured. Application and validation of the theoretical approach will be done with simulated data as well as measured scenarios. It is intended to transfer the methodological developments to a multi-sensor system which is also used in Topic 2. Thus, it is possible to compare the results of the specific modelling based on integrity measures with the results obtained by means of geometrical (in)equality constraints. There are relations to the filter model with integrity measures (Topic 2), to the relative positioning of sensor nodes (Topic 5), as well as to the collaborative acquisition of environmental information (Topic 8).
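How an inequality constraint acts as prior information can be shown with a deliberately small example (hypothetical numbers): an estimated geometric quantity is reconciled with a tolerance interval, and a violated constraint doubles as an integrity flag.

```python
# Toy (in)equality-constraint prior: project an estimate onto a
# tolerance interval and flag violations.
def constrain(estimate, lower, upper):
    # Clip the unconstrained estimate to the feasible interval; a
    # violated constraint also indicates a potential integrity issue.
    ok = lower <= estimate <= upper
    return min(max(estimate, lower), upper), ok

# Nominal wall length 5.00 m with a +/- 2 cm total tolerance
# (made-up values in the spirit of building-industry standards):
value, consistent = constrain(5.035, lower=4.98, upper=5.02)
print(value, consistent)   # clipped to 5.02, flagged inconsistent
```

In the project the constraints enter a full adjustment of the MSS trajectory rather than a single scalar, but the role of the tolerance interval is the same: it bounds the solution and exposes inconsistencies.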

8 | Massively collaborative acquisition of dynamic environments and their transformation into digital maps (Brenner/Sester)


Main Supervisor: Brenner; Co-Supervisor: Sester

Project Description

Usually, the determination of the position and orientation of platforms uses observations which are obtained relative to the environment. Therefore, the acquisition and provision of a common model of the environment is a fundamental part of state estimation. In turn, the required observations are obtained from a large number of platforms in a collaborative manner. It is thus an essential task to investigate the aggregation of individually (per platform) obtained maps into a common representation, under consideration of integrity constraints. A major aspect of this is the consideration of the dynamics of the environment, for example by representing decaying or periodic state transitions by appropriate latent variables. This ensures that conflicting observations, which are due to multitemporal effects, can still be handled consistently. In particular, the effect of discrete decisions will be considered, which arise, for example, during the assignment of observations to the existing map data (the data association problem), as well as the representation of the overall map by several alternative models. This topic is directly related to topic 3 (multiscale maps), as well as to alternative integrity measures (topic 1), filtering (topics 2 and 7), collaborative positioning (topics 4 and 6) and the handling of temporal aspects (topic 9).
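The "decaying state transition" idea for dynamic map elements can be illustrated with one latent variable (an invented model, not the project's): the confidence in a map feature decays exponentially since its last observation, with a half-life that depends on how persistent that feature type is.

```python
# Toy latent-variable model for dynamic map elements: confidence in
# a feature decays over time and is refreshed by re-observation.
def confidence(last_conf, dt, half_life=30.0):
    # Exponential decay; the half-life would be feature-type-specific
    # (e.g. parked cars decay quickly, building corners very slowly).
    return last_conf * 0.5 ** (dt / half_life)

c = confidence(1.0, dt=30.0)   # one half-life after last observation
print(c)                       # -> 0.5
c = 1.0                        # a re-observation resets the confidence
```

Conflicting observations then stop being contradictions: an old observation of a now-absent object simply carries a low confidence, so aggregation across platforms and epochs stays consistent.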

9 | Bounded-Error Visual-Lidar Odometry on Mobile Robots Under Consideration of Spatiotemporal Uncertainties

M.Sc. Rafael Voges

Main Supervisor: Wagner; Co-Supervisor: Brenner

Project Description

 

To localize without GPS information, mobile robots have to compute their ego-motion incrementally using different sensors such as cameras, laser scanners or inertial measurement units (IMUs). To this end, this project aims at developing a visual-lidar odometry algorithm that computes – in contrast to established approaches – not a point-valued pose, but a bounded domain that is guaranteed to contain the true pose. To properly fuse information from these different sensors, we have to assume unknown but bounded errors not only for each individual sensor, but also for inter-sensor properties such as the transformation between sensor coordinate systems and offsets or drifts between sensor clocks. Using intervals for these errors is more adequate than stochastic error modeling, since transformation errors or sensor clock offsets are naturally bounded (e.g. +/- 1 cm or +/- 10 ms). Furthermore, in contrast to stochastic error modeling, unknown systematic errors that often arise during inter-sensor calibration are perfectly compatible with the assumption of unknown but bounded errors.

Thus, the first goal of this project is to investigate approaches to determine these inter-sensor properties – which we also call spatiotemporal calibration parameters – under interval uncertainty. Subsequently, we develop an approach that fuses information from camera, laser scanner and IMU in a bounded error context to compute a robot’s ego-motion. This allows us to determine guaranteed bounds for a robot’s true pose. These bounds can then be used to limit the search space of traditional stochastic approaches making them more reliable, and thus are especially relevant for safety-critical systems such as autonomous cars. Finally, we have to account for the previously determined spatiotemporal calibration parameters in the visual-lidar odometry algorithm.
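A one-dimensional sketch (invented numbers, far simpler than the actual visual-lidar pipeline) shows how bounded velocity error and a bounded clock offset together widen the pose enclosure at every dead-reckoning step while keeping the guarantee:

```python
# 1-D bounded-error dead reckoning: the pose is an interval that is
# guaranteed to contain the true position; a bounded clock offset
# between sensors widens the enclosure accordingly.
def propagate(pos_lo, pos_hi, v_lo, v_hi, dt, clock_err=0.01):
    # Velocity and clock offset are unknown but bounded, so the
    # travelled distance lies between the extreme combinations.
    d_lo = min(v_lo * (dt - clock_err), v_lo * (dt + clock_err))
    d_hi = max(v_hi * (dt - clock_err), v_hi * (dt + clock_err))
    return pos_lo + d_lo, pos_hi + d_hi

lo, hi = 0.0, 0.0
for _ in range(10):                          # ten odometry steps
    lo, hi = propagate(lo, hi, v_lo=0.9, v_hi=1.1, dt=0.1)
print(lo, hi)   # enclosure guaranteed to contain the true pose
```

The enclosure grows monotonically under pure dead reckoning; in the full algorithm, camera and lidar observations contract it again, and the resulting bounds can gate the search space of a stochastic estimator as described above.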
