Toolkit "Collaborative Integrity"

In the research training group i.c.sens, methodological foundations as well as integrity and collaboration concepts for dynamic sensor networks in connection with digital maps are developed, with a focus on navigation information. The results will be made available in the form of cross-application building blocks for collaborative integrity that can be integrated into concrete applications with little effort, thus opening up new fields of application, including cost-sensitive ones. In addition to the generic, initially purely "methodological toolkit", which realizes integrity and collaboration for the task of state estimation, later cohorts will bundle the developed concepts into a cross-application "collaborative integrity toolkit", which will be verified by means of exemplary applications (e.g. autonomous driving).

One of the main goals of the research training group is to derive executable software modules from the algorithms for data analysis and/or data transformation developed by the PhD students, which can also be reused in other application contexts. In connection with the practical implementation, questions arise in the areas of software engineering and system architecture. A common feature of these software modules is the ability to transform integrity parameters of the input into integrity parameters of the output, i.e. to take into account quantified uncertainties/accuracies of the input data (e.g. measurement data from physical sensors or upstream processes) by methods such as error propagation within the algorithms. The result of such a module is then not only the most probable solution of the implemented algorithm, but also a measure of the module's confidence that the result is correct. If complex algorithms consist of successive, independent sub-steps, these are transformed into individual, sequentially executable modules. This allows the methods developed in the research training group to be used in safety-critical applications where guarantees for the correctness of calculations or a reliable quantification of the expected error are required.
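As a minimal illustration of such a module (a hypothetical example, not one of the actual toolkit modules), the following Python sketch propagates input standard deviations through a simple distance computation using first-order error propagation:

```python
import math

def distance_with_uncertainty(x, y, sigma_x, sigma_y):
    """Toy 'integrity-aware' module: computes d = sqrt(x^2 + y^2)
    and propagates the input standard deviations to the output via
    first-order (linearized) error propagation."""
    d = math.hypot(x, y)
    # Partial derivatives of d with respect to x and y.
    dd_dx = x / d
    dd_dy = y / d
    # Variance propagation, assuming uncorrelated inputs.
    sigma_d = math.sqrt((dd_dx * sigma_x) ** 2 + (dd_dy * sigma_y) ** 2)
    return d, sigma_d

# The module returns both the solution and a quantified uncertainty:
d, sigma = distance_with_uncertainty(3.0, 4.0, 0.1, 0.1)
```

A downstream module can then consume both the value and its uncertainty, rather than the value alone.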

Software engineering challenges

The core of the modular concept is the interoperability of the developed software modules with each other: if the corresponding interfaces match, they can be linked to form a processing chain in which the integrity of all processing steps is guaranteed throughout. This results in the need for uniform communication channels, i.e. either programming-language-independent inter-process communication or a uniform programming language or environment for developing the modules. A side effect of this practical requirement is the reusability of modules within the research training group: as the work of the group progresses, PhD students will be able to access a growing number of already implemented, integrity-aware variants of algorithms that can be used across applications and reuse them as components in their own analyses.
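The interface idea can be sketched in a few lines of Python (a hypothetical illustration, not the toolkit's actual interface specification): every module consumes and produces a value together with its uncertainty, so modules with matching interfaces can be chained freely:

```python
import math

# Each "module" shares the same interface: it consumes and produces a
# (value, standard_deviation) pair, so any modules can be chained.

def scale(factor):
    def module(value, sigma):
        # Linear operation: the uncertainty scales with |factor|.
        return factor * value, abs(factor) * sigma
    return module

def add_bias(bias, bias_sigma):
    def module(value, sigma):
        # Adding an uncertain bias: variances of independent errors add.
        return value + bias, math.sqrt(sigma ** 2 + bias_sigma ** 2)
    return module

def run_chain(modules, value, sigma):
    # Because the interfaces match, the output of each module can be
    # fed directly into the next one.
    for m in modules:
        value, sigma = m(value, sigma)
    return value, sigma

result = run_chain([scale(2.0), add_bias(1.0, 0.3)], 10.0, 0.2)
```

The uncertainty at the end of the chain reflects every processing step, which is the property the toolkit aims for.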

In preparation for the coordinated development of software modules for the toolkit, the first cohort already evaluated suitable common development environments (programming languages and frameworks) and developed first prototypes for individual concrete algorithms. The experience gained in this process is the starting point for the joint specification and development activities starting in cohort 2.


In this context, a decision was made early on to use ROS (Robot Operating System) as the common framework for the toolkit modules. ROS is an open-source "meta operating system" that emulates typical operating system functionalities, including its own inter-process communication interface, and thus allows the execution of distributed processes (on one computer or within a computer network) independent of the hardware and operating systems used. These processes are loosely coupled via generic, text-based communication channels (so-called topics), which are realized via the publish/subscribe design pattern. Due to its popularity in robotics, numerous freely available modules exist for ROS that realize functionalities frequently required in robotics. This includes drivers for a large number of the sensors used in the experiments of the research training group, which allow direct integration of the sensor data into the further processing modules. A useful feature of ROS in this context is the generation of chronological recordings (so-called rosbags) of the communication on certain topics (e.g. sensor data), which can be replayed at a later time. This allows algorithms to be developed and tested on real data at any time after the actual experiment.
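The publish/subscribe pattern underlying ROS topics can be illustrated with a minimal, plain-Python message bus (this sketch is not ROS itself; in ROS the equivalent functionality is provided by the client libraries, e.g. rospy):

```python
from collections import defaultdict

class TopicBus:
    """Minimal publish/subscribe bus illustrating the loose coupling
    that ROS achieves with topics."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Publisher and subscribers never reference each other directly;
        # they only agree on the topic name and the message format.
        for callback in self._subscribers[topic]:
            callback(message)

bus = TopicBus()
received = []
bus.subscribe("/scan", received.append)
bus.publish("/scan", {"ranges": [1.2, 1.3, 1.1]})
```

Because modules only agree on topic names and message formats, a recorded message stream (as in a rosbag) can later be replayed into the same subscribers without any code changes.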

In connection with the development of the first prototypes in cohort 1, the additional effort required to transform the algorithms developed in the PhD projects into executable ROS modules has already been investigated. Direct development in a ROS environment represents the simplest case. Most of the other languages used by the PhD students for development (C++, Python, MATLAB) support ROS directly; here, the porting effort is reduced to integrating the corresponding ROS libraries and redirecting inputs and outputs via ROS topics (possibly several times when an algorithm is decomposed into independent modules).

Coordination of software development

From cohort 2 on, the development of ROS modules is part of every doctoral project. In joint programming workshops (so-called "hackathons"), the necessary programming activities are coordinated and synchronized. In this context, common interfaces and data types (especially for the communication of integrity parameters) are identified and formally specified. For the doctoral students, these events also provide an opportunity to discuss their own problems and solutions with the other members of the research training group.


In the following video for the module "Bounded-Error Visual-LiDAR Odometry", Raphael Voges demonstrates the support of visual odometry (with a mono camera) by fusing the image data with a LiDAR sensor with a 180° field of view. The point cloud is used to combine depth information with image features, which allows estimation of the 6-DOF transformation (i.e. the change in vehicle pose) between successive images. GNSS measurements are used to support the calculated pose: the relative pose changes are accumulated until the next GNSS measurement arrives and are then corrected by it.
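The accumulate-then-correct scheme can be sketched as follows (an illustrative 2D simplification of the 6-DOF case; the correction policy shown is a hypothetical example, not Mr. Voges' actual method):

```python
import math

def compose(pose, delta):
    """Compose a 2D pose (x, y, heading) with a relative motion
    (dx, dy, dheading) expressed in the vehicle frame."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

# Accumulate odometry increments between two GNSS fixes...
pose = (0.0, 0.0, 0.0)
for delta in [(1.0, 0.0, math.pi / 2), (1.0, 0.0, 0.0)]:
    pose = compose(pose, delta)

# ...then replace the drifting absolute position by the GNSS fix,
# keeping the locally estimated heading (illustrative policy only).
gnss_fix = (0.9, 1.1)
pose = (gnss_fix[0], gnss_fix[1], pose[2])
```

Between corrections the pose estimate drifts with the accumulated odometry error; each GNSS measurement bounds that drift again.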

A special feature of Mr. Voges' approach is the use of interval mathematics to model all errors: bounding intervals are assumed as guaranteed value ranges for all measurements; nothing is known about the position of the actual measurement values within these intervals. The intervals are taken into account in all operations (using the SIVIA algorithm, "set inversion via interval analysis", and so-called contractors); the results of each calculation step are again intervals. The advantage of this method is the guarantee that the calculated vehicle pose actually lies within the interval limits (which represent the maximum possible error), provided that the initial assumptions about the maximum errors of the sensor observations are correct.
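The basic mechanism of interval arithmetic, on which SIVIA and contractors build, can be illustrated in a few lines of Python (a didactic sketch that ignores outward rounding, which a real implementation would need for fully rigorous guarantees):

```python
def iadd(a, b):
    """Interval addition: [a_lo + b_lo, a_hi + b_hi]."""
    return (a[0] + b[0], a[1] + b[1])

def imul(a, b):
    """Interval multiplication: take the min/max over all corner
    products, which guarantees enclosure of every possible true
    product of values from the two intervals."""
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

# A measured range of 10 m with a guaranteed +/- 0.2 m error bound...
r = (9.8, 10.2)
# ...scaled by an uncertain calibration factor within [0.99, 1.01]:
calibrated = imul(r, (0.99, 1.01))
# The true calibrated range is guaranteed to lie inside the result.
```

Every operation returns an interval that encloses all possible true values, which is exactly the property that yields the guaranteed pose bounds described above.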

The second example is the ROS node for the "Simulation Framework for Collaborative Navigation" by Nicolas Garcia Fernandez, which integrates observations from automotive sensors (vehicle-to-vehicle or vehicle-to-environment (V2X) measurements) to maximize the self-localization accuracy of all vehicles through collaborative exchange of information.
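Why exchanging observations improves self-localization can be illustrated with the scalar special case of a Kalman update, inverse-variance weighting (a didactic simplification, not the C-EKF used in the framework):

```python
def fuse(est_a, var_a, est_b, var_b):
    """Fuse two independent estimates of the same quantity by
    inverse-variance weighting (the scalar Kalman update)."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# A vehicle's own position estimate fused with an estimate derived
# from a V2X observation of a neighboring vehicle:
est, var = fuse(10.0, 4.0, 11.0, 4.0)
```

The fused variance is always smaller than either input variance, so every exchanged observation tightens the estimate of each participating vehicle.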

The first video depicts the measurement generation step of a single vehicle within the developed framework:

The vehicle frame is represented by the red, green and blue (x, y and z) axes and moves at variable velocity in an urban scenario represented by a 3D city model with level of detail 2.

The vehicle is assumed to be equipped with a GNSS receiver (line-of-sight and non-line-of-sight paths to the received GPS satellites are displayed in orange), an IMU (measurements not depicted), a 2D laser scanner (its measurement geometry depicted by blue circles) and a pair of stereo cameras (represented by the two conic projections depicted by the red and green triangles for the left and right cameras, respectively). Finally, the exteroceptive sensor measurements are computed by ray-to-plane intersections (magenta "x"). As the vehicle moves, it can be seen how both the satellite geometry and the exteroceptive sensor measurement geometry vary significantly.
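The ray-to-plane intersection used to generate the exteroceptive measurements can be sketched as follows (an illustrative implementation, not the framework's actual code):

```python
def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    """Intersect a ray origin + t * direction (t >= 0) with the plane
    defined by a point and a normal; returns the hit point or None."""
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    denom = dot(direction, plane_normal)
    if abs(denom) < 1e-12:
        return None  # ray runs parallel to the plane
    diff = [p - o for p, o in zip(plane_point, origin)]
    t = dot(diff, plane_normal) / denom
    if t < 0:
        return None  # plane lies behind the sensor
    return tuple(o + t * d for o, d in zip(origin, direction))

# A laser ray along +x hitting a building facade at x = 5 m:
hit = ray_plane_intersection((0, 0, 0), (1, 0, 0), (5, 0, 0), (1, 0, 0))
```

Casting one such ray per scanner beam or camera pixel against the planes of the city model yields the simulated measurements shown in the video.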

The second video depicts the dynamic network estimation with a plane-based Collaborative Extended Kalman Filter (C-EKF), using the data acquired as explained in the previous video. The top panel shows the simulated collaborative scenario (three vehicles) from a top view, where the characteristics of the environment and the trajectories can be examined.

The bottom panels show the following parameters obtained from the estimation of Vehicle 1 (red trajectory):

  • The quality of the satellite geometry, expressed by the Horizontal Dilution of Precision (HDOP) values, in which some periods of signal interruption are also visible (e.g. 200 s after the start).
  • The cross-track (blue) and along-track (magenta) deviations of the estimated position with respect to the nominal trajectory.
  • The deviations of the estimated heading with respect to the nominal values.
  • The deviations of the estimated velocities in the north (red) and east (green) directions with respect to the nominal values.