Learn more about our research areas – the future of automotive safety & user experience.
The next decade will bring game-changing innovations to the automotive industry. With our research, we anticipate the needs of the market before they arise.
RESEARCH AREAS – OVERVIEW
IN-CABIN MONITORING FOR AUTOMATED DRIVING
From Level 3 onwards, the way we use vehicles will change – and so will their systems for safety and human interaction. Our research will provide Tier-1s and OEMs with in-cabin analysis software that meets these future needs.
Our research in this area focuses on:
- Advanced driver availability monitoring systems for Level 3
- Whole cabin safety & UX during autonomous driving
- Safety in automated shuttles
The system targeted within this project will be based on a realistic and highly complex task from the automotive industry – the interior analysis of vehicles (interior monitoring) – and will enable emotion3D to investigate important system aspects of camera networks (e.g. dependencies of individual processing stages or effects of hardware configurations on the overall performance). The envisaged end result is a system concept optimised in terms of quality, robustness and adaptability for the analysis of 3D scenes using camera networks. The system will be scalable and used for monitoring vehicle interiors of conventional as well as autonomous vehicles.
Read more about the project on the official site of the FFG (Austrian Research Promotion Agency).
Funding agency: This project is funded by the FFG within the Early Stage 2019 program.
The i3DOC project researches an embedded vision platform for new applications based on stereo and 360° vision in industrial and safety-relevant environments. Automotive-grade solutions are currently available only to a limited extent: they either address individual aspects or are based on PC architectures, which makes them largely unsuitable for mobile or cost-sensitive applications. The aim of this project is to conceptually develop an embedded architecture for safety-relevant 360° 3D vision applications and to validate the selected architectural approaches.
Partners: Mission Embedded | emotion3D
Funding agency: This project is funded by the Wirtschaftsagentur Wien as part of the funding initiative Co-Create 2017 within the FORSCHUNG program.
SAFETY SYSTEMS & TRUSTFUL AI
In-cabin analysis enables a broad range of innovative automotive safety systems. In terms of active safety, monitoring the driver for drowsiness and distraction can greatly increase driving safety. Passive safety systems benefit as well: the performance of systems such as airbags can be vastly improved by precise real-time information on each occupant.
It is essential that this increased safety is provided for everybody, i.e. that it works equally well for every person who sits down in a car. The topic of trustful and ethical AI must therefore be considered during development of the analysis algorithms. The frameworks and tools we currently develop ensure that all safety systems work without bias and provide optimal protection for everybody.
Worldwide, over 1.4 million people die each year in road accidents (WHO), and millions more suffer injuries. Mandatory passive safety systems trigger airbags and tension seat belts in the event of a crash to reduce the number of fatalities and severe injuries. However, these systems follow a “few-sizes-fit-all” development approach and thus perform best for a small number of specified body physiques – the most common being the “average male”: 175 cm, 78 kg. This is suboptimal for everybody who deviates from these averages – children, elderly people and even women.
Studies have shown that seatbelt-wearing female occupants are 73% more likely to suffer serious injuries than seatbelt-wearing male occupants (Univ. of Virginia), and that female occupants face an up to 17% higher risk of being killed in an accident than male occupants (NHTSA).
As long as passive safety systems cannot distinguish between the occupant’s individual characteristics, it is impossible to achieve optimal protection for everybody.
For the first time, touchless 3D imaging sensors are used to derive precise real-time information about each occupant, such as body position and pose, body physique, age, gender, etc. Based on this information, the Smart-RCS computes the optimal deployment strategy tailored to each individual occupant.
By taking those relevant factors into account, Smart-RCS optimizes the protective function while simultaneously mitigating the risks of doing unnecessary harm.
Smart-RCS aims to disrupt the passive safety systems market by introducing personalized and situation-aware protection.
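As an illustrative sketch only – the class, field names and thresholds below are assumptions for this example, not the actual Smart-RCS logic – a personalized deployment decision based on occupant information could look like this:

```python
from dataclasses import dataclass

# Hypothetical occupant description derived from 3D in-cabin sensing.
# All attributes and thresholds are illustrative assumptions.
@dataclass
class Occupant:
    height_cm: float
    weight_kg: float
    distance_to_airbag_cm: float  # estimated from 3D body position
    is_child: bool

def deployment_strategy(o: Occupant) -> dict:
    """Choose airbag parameters tailored to one occupant (sketch)."""
    if o.is_child or o.distance_to_airbag_cm < 20:
        # Child or out-of-position occupant: suppress deployment
        # to avoid doing unnecessary harm.
        return {"deploy": False, "inflation_level": 0}
    # Scale inflation with body mass: lighter occupants get a
    # softer deployment than the "average male" baseline.
    level = 1 if o.weight_kg < 60 else 2
    return {"deploy": True, "inflation_level": level}
```

The point of the sketch is the contrast with a “few-sizes-fit-all” system: the decision is a function of the individual occupant rather than a fixed calibration.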
Learn more on our project website: www.smart-rcs.eu
Funding agency: This project is funded by the European Commission within the Horizon 2020 FTI program.
The Safe.ICM project deals with the specification and analysis of complex machine-learning algorithms. Several simple, well-defined subtasks are linked together to describe a more complex task. For example, it is difficult to specify an algorithm that accurately captures a driver's level of attention without additional knowledge. However, if attention is broken down into detailed aspects – such as gaze direction, head orientation and detected objects in the field of view – an accurate description becomes possible.
In addition to precisely describing a complex context, this decomposition allows a meaningful evaluation of the information. For example, accuracy can be evaluated under several different conditions. In a further step, this analysis of the networks enables exact statements regarding the diversity, non-discrimination, fairness and robustness of the algorithms.
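The decomposition idea can be sketched in a few lines. This is a minimal illustration under assumed names and weights – the sub-signals, weighting and thresholds are not the project's actual model, only an example of composing interpretable aspects into one estimate:

```python
# Sketch: a complex quantity ("driver attention") composed from simple,
# individually measurable and individually evaluable sub-signals.
# Weights and the 15-degree threshold are illustrative assumptions.

def attention_score(gaze_on_road: bool,
                    head_yaw_deg: float,
                    objects_in_view: list) -> float:
    """Combine interpretable sub-signals into an attention estimate in [0, 1]."""
    score = 0.0
    if gaze_on_road:
        score += 0.5                 # gaze direction toward the road
    if abs(head_yaw_deg) < 15:
        score += 0.25                # head roughly facing forward
    if "road" in objects_in_view:
        score += 0.25                # road present in the field of view
    return score
```

Because each input is a simple, well-defined detection, each can be validated separately – e.g. gaze-direction accuracy evaluated per lighting condition or per demographic group – which is exactly what makes statements about fairness and robustness of the overall system possible.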
Accordingly, the goal of this project is a framework for developing and optimizing trustworthy AI applications that increases the transparency, diversity, non-discrimination, fairness and robustness of machine-learning algorithms. Since traceability is essential for the use of algorithms in the automotive industry and in safety-critical environments, the framework will be applied, tested and continuously improved using algorithms for vehicle in-cabin analysis.
Funding agency: This project has received funding from aws, by the means of the “Nationalstiftung für Forschung, Technologie und Entwicklung”.
INTERIOR/EXTERIOR SENSOR FUSION
In-cabin analysis and exterior analysis are a powerful combination for providing innovative safety and driving-experience features. In the future, ADAS such as automatic emergency braking will need to know the status of the driver and the passengers.
Within this field, we are working with large Tier-1 partners on innovative approaches to provide occupant-aware advanced driver assistance systems.
The resulting integrated, intelligent vision system concept delivers the following outcomes for advanced automation steps:
– The headlamps provide tailored, adaptive scenery illumination to the assigned lamp cameras, strongly improving image quality and therefore environment perception.
– The stereoscopic, high-resolution camera system provides significantly improved data for glare-free high-beam control, such as object classification, position (horizontal, vertical), distance, direction of movement and lane.
– The stereoscopic camera system enables greatly improved processing of data relevant for autonomous driving, such as detected and labelled lanes, topology, free space, obstacles, traffic participants and pedestrians.
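The distance information a stereoscopic camera provides follows the standard pinhole stereo relation Z = f·B/d (depth from focal length, camera baseline and pixel disparity). A minimal sketch, with illustrative example values that are not taken from this project's hardware:

```python
def depth_from_disparity(focal_px: float,
                         baseline_m: float,
                         disparity_px: float) -> float:
    """Standard stereo triangulation: Z = f * B / d.

    focal_px     -- focal length in pixels
    baseline_m   -- distance between the two cameras in metres
    disparity_px -- horizontal pixel offset of a point between the images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Illustrative values: f = 1200 px, B = 0.3 m, d = 3.6 px gives Z = 100 m,
# showing why long-range depth requires resolving very small disparities.
```

This relation also explains the emphasis on high resolution: at long range, depth accuracy is limited by how finely sub-pixel disparity can be measured.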
Read more about the project on the official site of TU Vienna.
Partners: ZKW | TU Vienna | emotion3D
Funding agency: This project is funded by the FFG within the IKT der Zukunft program.
The safe detection of vulnerable road users (pedestrians, cyclists) is an important social goal as well as a demanding technical challenge, which is currently insufficiently solved, especially in poor visibility conditions (night, fog), at long distances (>100m) and when approaching from the side (crossing situations). The SmartProtect project addresses this problem by means of a holistic, integrative approach that includes software, a combination of imaging and distance-detecting sensor technology and component design with interactive modular architecture. The main characteristics of the novel automotive perception system are fused sensor data with prediction function and automated, problem-specific control of headlights and infrared lighting, supported by reactive camera and LIDAR systems, which are installed in the same component housing and cover different detection angles. The envisaged multimodal systems will be evaluated in real driving conditions and in representative, reproducible test scenarios.
Partners: ZKW | TU Vienna | emotion3D
Funding agency: This project is funded by the FFG within the MdZ-2019 program.
SIMULATION-BASED AI DEVELOPMENT
Training, testing, improving and validating algorithms requires large amounts of high-quality data. Collecting this data in real life is labor-intensive and raises data-privacy questions. We therefore investigate alternative ways to create data, such as computer-graphics simulations.
The aim of the project is to provide a cost- and time-efficient simulation workflow for the development of environment analysis applications for vehicle interiors. Our innovative approach simulates complex dynamic vehicle interior scenarios (e.g. generation and animation of 3D person models, materials and surfaces, matching sensor configurations, etc.) and in particular enables the production of synthetic data for training, validation and testing purposes. This avoids the need for costly data acquisition and manual annotation and enables the replication of a variety of scenarios, environmental conditions and sensor modalities. In contrast to real (i.e. non-simulated) data sets, the planned simulation workflow can also capture rare or potentially dangerous scenarios (e.g. situations during a collision or microsleep). In addition to applications in the automotive sector, we also see operator safety in the transport sector (e.g. bus, train) and commercial vehicle sector (e.g. trucks, construction machinery) as promising areas of application for the proposed simulation workflow.
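The core of such a workflow is that scenarios are sampled from a parameterized configuration space, so ground-truth labels come for free from the parameters themselves. A minimal sketch – the parameter names, ranges and sample format are assumptions for illustration, not the project's actual pipeline:

```python
import random

# Illustrative scenario space for interior scenes; in a real workflow each
# sample would drive a renderer that produces the image plus its annotations.
SCENARIO_SPACE = {
    "occupant_height_cm": (140.0, 200.0),
    "seat_recline_deg": (0.0, 40.0),
    "illumination_lux": (1.0, 10000.0),
}

def sample_scene(rng: random.Random) -> dict:
    """Draw one interior scenario; the parameters double as labels."""
    return {name: rng.uniform(lo, hi)
            for name, (lo, hi) in SCENARIO_SPACE.items()}

def generate_dataset(n: int, seed: int = 0) -> list:
    """Reproducible batch of synthetic scenarios (fixed seed)."""
    rng = random.Random(seed)
    return [sample_scene(rng) for _ in range(n)]
```

Two properties of this setup matter for the project's goals: rare or dangerous scenarios (e.g. microsleep, in-crash poses) are just another region of the parameter space, and a fixed seed makes every dataset exactly reproducible for validation.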
Partners: BECOM Systems | Rechenraum | TU Vienna | emotion3D
Funding agency: This project is funded by the FFG within the MdZ-2020 program.