Articles & Conferences
Scientific Publications


This page presents and links scientific publications by the project partners, e.g. in journals and at conferences.


Workshop at the Intelligent Vehicles 2016 Conference

As part of the OFP project, Prof. Cristóbal Curio and Dr. Michael Schilling are organizing a "Workshop on holistic interfaces for environmental fusion models". The workshop takes place on 19 June 2016 at the Intelligent Vehicles conference in Gothenburg.


Connecting the virtual fusion model to the real world 

Various automated driving prototypes are currently in operation all over the world to explore future mobility. Yet all of these prototypes remain far from series production, for several reasons.

While sensor capabilities are good and work well in the stand-alone solution of a single car manufacturer or a single supplier, one main challenge on the way to series production is the huge variety of interfaces to the fusion system. This diversity prevents quick and easy integration into other near-series technologies. The interface problem is not limited to technical interfaces, e.g. between sensors and the fusion platform or between the fusion platform and actuators; it also appears in V2X interface contents, HMI interfaces to the driver, and communication with other traffic participants that have no V2X interface.

The challenge of interfacing a virtual world, e.g. the environmental fusion model of a car, with the real world is also a well-known problem in robotics, so robotics research may provide good solutions for the automotive world.

Within this workshop, the need for holistic interfaces to an environmental fusion model will be discussed on several levels:

  • technical interfaces of sensors and actuators
  • information fusion levels
  • multimodal environment perception
  • content of V2X communication needed for specific solutions
  • HMI interfaces with the driver
  • prediction of the intention of other traffic participants, e.g. of pedestrians at a cross-walk, and their interaction with an autonomous vehicle

The workshop aims to promote discussion about all kinds of interfaces related to an environmental fusion model, and about how standardization processes can help to shorten the time until autonomous vehicles become reality on our streets.
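To make the idea of a standardized interface to an environmental fusion model concrete, here is a minimal sketch in Python. All type and field names (`FusedObject`, `existence_prob`, etc.) are hypothetical illustrations, not an interface defined by the project:

```python
from dataclasses import dataclass, field
from enum import Enum

class ObjectClass(Enum):
    UNKNOWN = 0
    VEHICLE = 1
    PEDESTRIAN = 2
    CYCLIST = 3

@dataclass
class FusedObject:
    """One tracked object in the environmental fusion model."""
    track_id: int
    object_class: ObjectClass
    x_m: float             # longitudinal position, metres
    y_m: float             # lateral position, metres
    vx_mps: float          # longitudinal velocity, m/s
    vy_mps: float          # lateral velocity, m/s
    existence_prob: float  # confidence that the object exists, 0..1

@dataclass
class FusionFrame:
    """One time-stamped snapshot of the fused environment model."""
    timestamp_us: int
    objects: list = field(default_factory=list)

    def pedestrians(self):
        """All tracks classified as pedestrians, e.g. for intention prediction."""
        return [o for o in self.objects
                if o.object_class is ObjectClass.PEDESTRIAN]
```

The point of such a shared schema is that sensors, fusion platform, HMI and V2X components could all consume the same object list instead of each pair of components defining its own interface.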



09:30-10:00  Welcome and introduction: “Holistic Interfaces in an Open Fusion Platform”, Michael Schilling (Hella) 
10:00-10:30  Invited talk: “Measuring the World: Designing Robust Vehicle Localization for Autonomous Driving”, Frank Schuster, Martin Haueis, Christoph G. Keller (Daimler)
Autonomous vehicles rely on map data for trajectory planning and to extend the knowledge of the environment beyond the sensor range. In order to use the map data, it is essential to solve the localization problem in the map. To address this problem in the real world, different environmental conditions have to be considered. It turns out that a key aspect of localization is to find a suitable representation of the world that can be used for data association between map and sensor measurements.
In this presentation we show, through multiple examples, that besides choosing a suitable sensor setup and data extraction, the mapping algorithms and the map representation are equally important for achieving high accuracy and reliability. The actual choice of these factors depends on the use case.
As this talk focuses on the mapping process, it also shows how map representations can be handled differently to incorporate environmental changes and to model changing environments, both on a small scale and on large datasets provided by mapping companies.
A case study conducted in a dynamic environment demonstrates such a specific localization design for a radar equipped autonomous prototype.
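The abstract above stresses data association between map and sensor measurements as the core of localization. As a sketch of the simplest variant of that step, here is a greedy nearest-neighbour association with a distance gate; the function, its parameters, and the gate value are illustrative assumptions, not the Daimler system:

```python
import math

def associate(detections, landmarks, gate_m=2.0):
    """Greedily match sensor detections to map landmarks.

    detections, landmarks: lists of (x, y) positions in a common frame.
    gate_m: pairs farther apart than this distance are rejected.
    Returns a list of (detection_index, landmark_index) pairs.
    """
    pairs = []
    used = set()  # landmarks already assigned to a detection
    for i, det in enumerate(detections):
        best_j, best_dist = None, gate_m
        for j, lm in enumerate(landmarks):
            if j in used:
                continue
            dist = math.hypot(det[0] - lm[0], det[1] - lm[1])
            if dist < best_dist:
                best_j, best_dist = j, dist
        if best_j is not None:
            used.add(best_j)
            pairs.append((i, best_j))
    return pairs
```

Real localization pipelines replace the Euclidean gate with a statistical one (e.g. a Mahalanobis distance against the pose uncertainty), which is exactly where the choice of map representation discussed in the talk comes in.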
10:30-11:00  Coffee break 
11:00-11:30 Contributed paper: “The need for a sensor fusion to address all ASIL levels at the time”, Rolf Johansson (SPTR), Jonas Nilsson (Volvo Cars)
In order to perform safety assessment of vehicles for highly automated driving, it is critical that the vehicle can be proven to adapt its driving according to the sensed objects that might become a hindrance. There is a complicated relation between the confidence in which hindrances might exist, coming out of a sensor fusion block, and the tactical decisions about driving style made by the autonomous vehicle. A good strategy that enables safety assessment according to ISO 26262 implies that the sensor fusion block should continuously address its safety requirements with all the ASIL attribute values at the same time. In this paper we argue why every functional safety requirement allocated to a sensor fusion block should preferably be instantiated four times, each with a different ASIL value.
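The paper's proposal, one instance of each functional safety requirement per ASIL value, can be sketched as a small data model. The class names and the example requirement text are hypothetical, chosen only to illustrate the fourfold instantiation:

```python
from dataclasses import dataclass
from enum import Enum

class Asil(Enum):
    """The four ASIL attribute values of ISO 26262 (QM omitted)."""
    A = "ASIL A"
    B = "ASIL B"
    C = "ASIL C"
    D = "ASIL D"

@dataclass(frozen=True)
class SafetyRequirement:
    req_id: str
    text: str
    asil: Asil

def instantiate_per_asil(req_id, text):
    """Instantiate one functional safety requirement four times,
    once per ASIL value, as the paper argues a fusion block should."""
    return [SafetyRequirement(f"{req_id}-{a.name}", text, a) for a in Asil]
```

Each instance can then state what the fusion block guarantees at that integrity level, e.g. a higher-ASIL instance with a more conservative existence confidence than its ASIL-A sibling.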
11:30-12:00  Invited talk: “Examining Pedestrian Intentions at Urban Crosswalks”, Benjamin Voelz (Bosch)
A comprehensive scene understanding is crucial for future fully automated vehicles. Urban traffic scenarios involving pedestrians remain especially challenging. This talk tackles one part of the problem: the prediction of pedestrian intentions at urban crosswalks. For safety and comfort reasons it is essential to identify the pedestrians' intentions as early as possible. In particular, cars approaching the crosswalk at high velocity can then either adjust their velocity with small accelerations or avoid an unnecessary stop completely. This talk analyzes the behavior of pedestrians by means of machine learning algorithms based on real-world trajectories. A basic intention recognition algorithm that utilizes a large feature set is introduced. The algorithm predicts the pedestrians' intention to cross the street at a particular crosswalk. Additionally, the features are analyzed regarding their relevance for this underlying classification task. An evaluation is carried out on a large dataset containing pedestrian trajectories recorded at different crosswalks. The results provide a detailed analysis of both typical and challenging (or atypical) pedestrian trajectories and their influence on the prediction performance.
12:00-12:30 Invited talk: “Towards purposeful intention prediction of pedestrians”, Dennis Ludl, David Randler, Björn Browatzki, Cristóbal Curio (Reutlingen University)
Automated driving requires dedicated perception algorithms to infer the intention of weaker traffic participants, especially those of pedestrians, in order to guarantee safe and seamless navigation through urban environments. In scenarios where vehicles and pedestrians operate in a shared space, such as parking lots or residential areas, the incorporation of communication processes between both parties becomes crucial. We present latest work on the development of visual perception methods for an extended interpretation of pedestrian behavior including the understanding of dynamic gestures with consumer-grade hardware such as monocular cameras.
14:00-14:30  Invited talk: “Developing software architectures for autonomous vehicles”, Sebastian Ohl (Elektrobit)
Lately, many companies have entered the field of developing autonomous driving functions. Starting this development from scratch takes considerable effort and requires highly qualified resources. To ease this process and help developers focus on the function the user actually experiences, we propose a reference architecture for highly automated driving with standardized open software interfaces. Using this architecture and its interfaces, developers can reuse software components from other projects or from suppliers on the market to develop their brand's special experience.
14:30-15:00 Invited talk: “Predictive Video Processing for ADAS”, Rudolf Mester, VSI Lab (Linköping University and Frankfurt University)
Understanding the world around us while we are moving means continuously maintaining a dynamically changing representation of the environment, making predictions about what to see next, and correctly processing those perceptions which were surprising relative to our predictions. This principle is valid both for animate beings and for technical systems that successfully participate in traffic. The VSI Lab at Frankfurt University puts special emphasis on this recursive/predictive approach to visual perception in ongoing projects for ADAS and autonomous driving. In our opinion, this approach leads to particularly efficient systems, since computational resources can be focused on 'surprising' (thus rare) observations, and since it allows for a large reduction of search spaces in typical visual matching and tracking tasks. Furthermore, since the environment representation is closely coupled to the measuring process, and not a distant result at the end of a long processing pipeline, it allows for a simplified fusion of information from different sensors. This of course implies a tighter coupling between sensor data processing and interpretation. The talk will present examples of such predictive/recursive processing structures and put the pros and cons up for discussion.
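The predict-then-correct loop described above is the classic recursive estimation pattern. A one-dimensional constant-velocity Kalman step, sketched below under assumed noise parameters, shows the two ingredients the abstract highlights: the prediction (which tells a matcher where to look) and the innovation (the "surprise" that drives the correction):

```python
def kf_step(x, v, p, z, q=0.01, r=1.0, dt=1.0):
    """One predict/update cycle of a 1-D constant-velocity Kalman filter.

    x: current position estimate      v: (assumed known) velocity
    p: estimate variance              z: new measurement
    q: process noise, r: measurement noise, dt: time step.
    Returns (new_estimate, new_variance, innovation).
    """
    # Predict: where the tracked feature should appear next.
    x_pred = x + v * dt
    p_pred = p + q
    # Update: correct only by the surprising part of the measurement.
    k = p_pred / (p_pred + r)    # Kalman gain
    innovation = z - x_pred      # the 'surprise'
    x_new = x_pred + k * innovation
    p_new = (1 - k) * p_pred
    return x_new, p_new, innovation
```

The search-space reduction mentioned in the talk follows directly: a matcher only needs to scan a window around `x_pred` whose size is set by `p_pred`, rather than the whole image.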
15:00-15:30  Coffee break
15:30-16:00  Invited talk: “Challenges in vision-based fully automated valet parking”, Ulrich Schwesinger (ETH-Zürich)
Automated valet parking offers great potential to pave the way for driverless vehicles, as it provides immediate benefits to customers and enables us to better adapt the complexity of the environment to the technical possibilities. Yet offering automated valet parking services on parking lots shared with other traffic participants at a reasonable price is still a challenging endeavor. This talk will detail the efforts undertaken in the European project "V-Charge", targeting automated valet parking with close-to-market sensors. Robust visual localization under changing weather and lighting conditions, 360° object detection from monocular cameras, and motion planning in mixed traffic are among the project's achievements and will be presented together with remaining challenges.
16:00-16:30  Discussion Round with all Speakers, Moderators: Cristóbal Curio & Michael Schilling
16:30-16:45  Workshop closing


Funded by the BMBF