INNOVATIONS

Learn more about our innovation areas, all aiming for the next level of automotive safety & user experience.

The next decade will bring game-changing innovations to the automotive industry. Through our research, we anticipate market needs ahead of time.

 

RESEARCH AREAS – OVERVIEW

In-Cabin Analysis
(DMS & OMS)

In-Cabin Monitoring for Automated Driving

Safety Systems
& Trustful AI

Simulation-based
AI Development

Sensor Fusion &
Interior/Exterior Fusion

IN-CABIN ANALYSIS (DMS & OMS)
We are constantly working on innovations in the field of general in-cabin analysis for driver and occupant monitoring. emotion3D aims to be at the forefront of innovation to offer our customers and partners new possibilities for their vehicles.

RESEARCH PROJECTS

UNISCOPE 3D

The UNISCOPE-3D project aims to develop highly innovative software algorithms that enable robust and accurate three-dimensional (3D) human body tracking and analysis using a single camera. By leveraging advanced computer vision techniques and Deep Learning (DL) algorithms, the software will (i) extract 3D keypoints inside and on the surface of human bodies in 3D space from single (monocular) camera images and (ii) exploit this information to generate a comprehensive understanding of complex 3D human actions, movements, and interactions in real-world environments.
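As an illustration of step (i), the geometric core of lifting a 2D image keypoint into 3D space can be sketched with the standard pinhole camera model. This is a minimal, hypothetical sketch: in practice the per-keypoint depth is supplied by the learned model, and all numbers below are made up.

```python
def backproject(u, v, z, fx, fy, cx, cy):
    """Pinhole back-projection: lift a pixel (u, v) with estimated
    depth z (metres) to a 3D point (x, y, z) in camera coordinates."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

# Two 2D keypoints; the depth would come from the learned model
# (stubbed as a constant here for illustration).
keypoints_2d = [(640.0, 360.0), (700.0, 360.0)]
depth = 1.2
keypoints_3d = [backproject(u, v, depth, fx=800.0, fy=800.0, cx=640.0, cy=360.0)
                for u, v in keypoints_2d]
```

The hard part, estimating the depth from a single monocular image, is exactly what the Deep Learning component supplies; the back-projection itself is simple geometry.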

This understanding of humans is essential for a wide range of industries and applications. In this project, we apply and evaluate it in emotion3D’s core business area of automotive Occupant Monitoring where it can enable advanced human-machine interaction and user experience, intelligent and personalized safety (e.g. personalized airbag deployment) and automation. 

Partners: TU Vienna 

Funding agency: This project is funded by the FFG within the Basisprogramm. 

Roadguard

There is a growing consensus among experts that, while DMS is a valuable safety feature, it should be complemented by comprehensive, context-aware solutions and adherence to rigorous regulations to truly achieve the ambitious goals set by initiatives like the EU Vision Zero for road fatalities. 

Currently, the systems available in European series vehicles are limited in their ability to detect and respond to different types of driver inattention. Traditional vehicle sensors are not yet equipped to meet the requirements of partially automated driving or of contextual warning mechanisms. As a result, there is still considerable room for improvement in developing more advanced systems that better assist drivers across driving scenarios. More specifically, Roadguard is working towards combining interior with exterior sensing capabilities. 

The system utilizes advanced AI-empowered edge devices to monitor drivers and road users, setting new safety benchmarks. The project’s key contribution involves developing a holistic perception and assessment system that seamlessly integrates data from diverse sensors. This comprehensive approach addresses regulatory challenges and establishes our USP—providing unparalleled safety for all road users, especially the vulnerable, and contributing to responsible and technologically advanced mobility in the EU. 

Partners: Virtual Vehicle | ZKW | leiwand.ai | motobit

Funding agency: This project is funded by the FFG within the Digital Technologies-2023 program. 

MOSAIC

MOSAIC’s goal is to push technological independence further in the landscape of automated systems by tackling the challenge of integrating diverse perception hardware configurations. This ensures that automated systems can perceive their surroundings in a non-invasive manner and with high accuracy, while avoiding single points of failure and reducing complexity. 

emotion3D will contribute to the MOSAIC project by developing advanced perception technologies to enhance cognitive system intelligence for automated systems. This effort involves integrating multiple state-of-the-art sensor technologies, such as camera and radar sensors, and accompanying, sophisticated algorithms to significantly improve the system’s capacity to perceive and interpret its environment with high accuracy. This approach reduces the risk of single points of failure and contributes to the increase of the overall robustness and reliability of automated systems. 
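As a toy illustration of why multi-sensor redundancy reduces single-point-of-failure risk, consider a naive late fusion of detection confidences from a camera and a radar, assuming (unrealistically) independent errors. Real fusion pipelines are far more elaborate; this sketch only shows the underlying probabilistic intuition.

```python
def fuse_confidence(cam_conf, radar_conf):
    """Toy late-fusion rule: probability that at least one of two
    (assumed independent) sensors correctly detects an object.
    Illustrative only; not an actual MOSAIC algorithm."""
    return 1.0 - (1.0 - cam_conf) * (1.0 - radar_conf)

# A camera at 90% and a radar at 80% confidence together exceed
# either sensor alone, so one degraded sensor no longer sinks the system.
fused = fuse_confidence(0.9, 0.8)
```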

Partners: 48 European partners including Infineon | Ford Otomotiv | TTTech | TTControl | NXP | AVL 

Funding agency: This project is funded by the HORIZON-JU-Chips-2024-2-RIA Program. 

A-IQ Ready
A-IQ Ready addresses two major trends: “Internet of Things (IoT): from $1 billion to $1 trillion in revenues” and “From Cloud to Edge”. A-IQ Ready will apply three disruptive technologies – quantum sensors, neuromorphic acceleration and AI in multi-agent systems – to build the edge continuum as the digital backbone for Society 5.0. A-IQ Ready proposes cutting-edge quantum sensing, edge-continuum orchestration of AI and distributed collaborative intelligence technologies to implement the vision of intelligent and autonomous ECS and to deliver A-IQ IoT for the digital age. 

Within this project, emotion3D is optimizing driver monitoring functions towards planning the driver’s tasks in order to predictively help avoid dangerous situations. 

Partners: 49 European partners including Huawei | TTTech | Synopsys | Mercedes-Benz | AVL 

Funding agency: This project is funded by the HORIZON-JU-Chips-2024-2-RIA Program. 

More information available at https://www.aiqready.eu/ 

Empathic Vehicle
The aim of this project is to develop an innovative system that has the ability to recognize the emotional states and body language of a vehicle’s occupants. By using modern technologies such as computer vision and machine learning (ML), the vehicle should be able to understand the emotions, moods and needs of the occupants and react accordingly. As part of the project, extensive user studies are being carried out in laboratory environments, but also in real vehicles and using several additional sensors. On the one hand, this will create data sets that are currently not available and, on the other hand, new methods of artificial intelligence for emotion-based support of human driving behavior will be researched and evaluated. By integrating emotion and body language recognition into the vehicle system, the project contributes both to improving road safety and to enhancing the user experience (UX), two key selling points of vehicles. 

Partners: TU Vienna | University of Applied Sciences Technikum Vienna 

Funding agency: This project is funded by Wirtschaftsagentur Vienna within the program

IN-CABIN MONITORING FOR AUTOMATED DRIVING
From Level 3 onwards, the way we use vehicles will change, and their systems for safety and human interaction will have to change with it. Our research provides Tier-1s and OEMs with in-cabin analysis software that meets these future needs.

Our research in this area focuses on:

  • Advanced driver availability monitoring systems for Level 3
  • Whole cabin safety & UX during autonomous driving
  • Safety in automated shuttles

RESEARCH PROJECTS

SafePassenger3D
The system targeted within this project will be based on a realistic and highly complex task from the automotive industry – the interior analysis of vehicles (interior monitoring) – and will enable emotion3D to investigate important system aspects of camera networks (e.g. dependencies of individual processing stages or effects of hardware configurations on the overall performance). The envisaged end result is a system concept optimised in terms of quality, robustness and adaptability for the analysis of 3D scenes using camera networks. The system will be scalable and used for monitoring vehicle interiors of conventional as well as autonomous vehicles.

Read more about the project on the official site of the FFG (Austrian Research Promotion Agency).

Funding agency: This project is funded by the FFG within the Early Stage 2019 program.

i3DOC
The i3DOC project researches an embedded vision platform for new applications based on stereo and 360° vision in industrial and safety-relevant environments. Automotive-grade solutions are currently available only to a limited extent: they either address individual aspects only or are based on PC architectures, making them largely unsuitable for mobile or cost-sensitive applications. The aim of this project is to design an embedded architecture for 360° 3D vision in safety-relevant applications and to validate the selected architectural approaches.

Partners: Mission Embedded | emotion3D

Funding agency: This project is funded by the Wirtschaftsagentur Wien as part of the funding initiative Co-Create 2017 within the FORSCHUNG program.

SAFETY SYSTEMS & TRUSTFUL AI
In-cabin analysis enables a broad range of innovative automotive safety systems. In terms of active safety, monitoring the driver for drowsiness and distraction can hugely increase driving safety. However, active safety systems are not the only beneficiaries of in-cabin analysis information: the performance of passive safety systems such as airbags can also be vastly improved by precise real-time information on each occupant.

It is essential that this increased safety is provided for everybody, i.e. that it works equally well for every person who sits down in a car. Thus, trustful and ethical AI must be considered during the development of the analysis algorithms. The frameworks and tools we currently develop ensure that these safety systems operate without bias and provide optimal protection for everybody.

RESEARCH PROJECTS

Smart-RCS
Worldwide, over 1.4 million people die each year in road accidents (WHO) and millions more suffer injuries. Mandatory passive safety systems trigger airbags and tighten seat belts in the event of a crash to reduce the number of fatalities and severe injuries. However, these systems follow a “few-sizes-fit-all” development approach and thus perform best for a small number of specified body physiques – the most common one being the “average male”: 175 cm, 78 kg. This is suboptimal for everybody who deviates from these averages – children, elderly people and even women.

Studies have shown that seatbelt-wearing female occupants are at 73% higher risk of serious injury than seatbelt-wearing male occupants (Univ. of Virginia). Female occupants are also at up to 17% higher risk of being killed in an accident than male occupants (NHTSA).

As long as passive safety systems cannot distinguish between the occupant’s individual characteristics, it is impossible to achieve optimal protection for everybody.

For the first time, touchless 3D imaging sensors are used to derive precise real-time information about each occupant, such as body position and pose, body physique, age, gender, etc. Based on this information, the Smart-RCS computes the optimal deployment strategy tailored to each individual occupant.

By taking those relevant factors into account, Smart-RCS optimizes the protective function while simultaneously mitigating the risks of doing unnecessary harm.
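The principle of a personalized deployment strategy can be sketched as a simple decision rule over occupant attributes. The thresholds and category names below are invented placeholders for illustration, not actual Smart-RCS calibration values.

```python
def deployment_strategy(height_cm, weight_kg, distance_to_airbag_cm, is_child):
    """Toy decision logic for a personalized airbag deployment level.
    All thresholds are illustrative, not real calibration values."""
    if is_child or distance_to_airbag_cm < 25:
        return "suppress"        # deployment could do more harm than good
    if height_cm < 160 or weight_kg < 55:
        return "reduced_power"   # smaller occupant: gentler inflation
    return "full_power"
```

A real system would of course drive far more parameters (timing, vent control, belt pre-tensioning) from far richer occupant state, but the mapping from sensed attributes to a tailored strategy is the core idea.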

Smart-RCS aims to disrupt the passive safety systems market by introducing personalized and situation-aware protection.

Learn more on our project website: www.smart-rcs.eu

Funding agency: This project is funded by the European Commission within the Horizon 2020 FTI program.

Safe.ICM
The Safe.ICM project deals with the specification and analysis of complex algorithms from the field of machine learning. Several simple, detailed tasks are linked together to describe a more complex task in a simple way. For example, it is difficult to describe an algorithm that accurately captures a driver’s level of attention without additional knowledge. However, if attention is broken down into detailed aspects such as gaze direction, head orientation and detected objects in the field of view, an accurate description becomes possible.
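This decomposition idea can be illustrated with a toy scoring function that combines such detailed aspects into a single attention estimate. The weights and thresholds below are invented for illustration only and do not reflect the actual Safe.ICM algorithms.

```python
def attention_score(gaze_on_road, head_yaw_deg, distracting_object):
    """Toy composition of detailed sub-aspects into one driver-attention
    estimate in [0, 1]; weights and thresholds are illustrative."""
    score = 1.0
    if not gaze_on_road:
        score -= 0.5
    if abs(head_yaw_deg) > 30.0:     # head turned well away from the road
        score -= 0.3
    if distracting_object:           # e.g. a phone in the field of view
        score -= 0.4
    return max(score, 0.0)
```

Because each sub-aspect is observable on its own, each can also be evaluated on its own, which is what makes statements about fairness and robustness tractable.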

In addition to the precise description of a complex context, this also allows a meaningful evaluation of the information. For example, accuracy can be evaluated under several different conditions. In a further step, this evaluation or analysis of the networks enables exact statements with regard to the diversity, non-discrimination, fairness and robustness of the algorithms.

Accordingly, the goal of this project is to develop a framework for developing and optimizing trustworthy AI applications to increase the transparency, diversity, non-discrimination, fairness, and robustness of machine learning algorithms. Since traceability is essential for the use of algorithms in the automotive industry and in safety-critical environments, the framework developed in this project will be applied, tested, continuously improved and optimized using algorithms for vehicle in-cabin analysis.

Funding agency: This project has received funding from aws, by the means of the “Nationalstiftung für Forschung, Technologie und Entwicklung”.

ACCOMPLISH
ACCOMPLISH aims at increasing the readiness of enterprises of any size to face an era of unprecedented regulatory scrutiny by simplifying, integrating, and automating compliance in their data/AI operations, their data/AI assets, their solutions/platforms, and their overall organisations. ACCOMPLISH brings forward a one-stop-shop mentality in multifaceted compliance from the regulatory, ethics, environmental, and industry perspectives, effectively steering enterprises through the maze of regulations and standards while bringing end-to-end visibility on the immutable and constantly up-to-date compliance/certification status of an enterprise, its solutions, and its data/AI assets to any interested stakeholder.

ACCOMPLISH contributes to current research and advances state-of-the-art techniques and technologies across a number of research paths. These include AI-based, automated assessment, recommendation and certification of compliance at different levels, ranging from the organisation and its data/AI operations to integrated systems/solutions and selected datasets/models; and compliant-by-design and compliant-by-default (on-the-job) data/AI operations, from data harvesting, retention, security, bias detection and quality assurance to AI/ML model design, training, evaluation, execution and observability/monitoring, which act as exemplary compliance technology enablers and complement the underlying data spaces.

ACCOMPLISH designs a novel AI-based compliance and certification framework cross-cutting the different regulatory/legal, environmental, cybersecurity and business/industry-specific compliance perspectives while always ensuring a human-in-the-loop approach. 

Partners: 24 European partners including MAN | Ubitech | Tofas Turk Otomobil | Whirlpool | PWC 

Funding agency: This project is funded by the HORIZON-CL4-2024-DATA-01-01 Program. 

INTERIOR/EXTERIOR FUSION
In-cabin analysis and exterior analysis are a powerful combination for providing innovative safety and driving-experience features. In the future, it will be important for ADAS such as automatic emergency braking systems to know the status of the driver and the passengers.

Within this field, we are working with large Tier-1 partners on innovative approaches to provide occupant-aware advanced driver assistance systems.

RESEARCH PROJECTS

CarVisionLight
The integrated, intelligent vision system developed in this project delivers outcomes and findings for the following automation steps:

– Aimed at strongly improved image perception and thus environment perception, the headlamps provide tailored, adaptive scenery illumination to the assigned lamp cameras.

– The stereoscopic, high-resolution camera system provides significantly improved information for glare-free high-beam control, such as object classification, position (horizontal, vertical), distance, direction of movement and lane.

– The stereoscopic camera system enables highly improved processing of data relevant for autonomous driving, such as labelled lanes, topology, free space, obstacles, traffic participants, pedestrians, etc.

Read more about the project on the official site of TU Vienna.

Partners: ZKW | TU Vienna | emotion3D

Funding agency: This project is funded by the FFG within the IKT der Zukunft program.

SmartProtect
The safe detection of vulnerable road users (pedestrians, cyclists) is an important social goal as well as a demanding technical challenge, which is currently insufficiently solved, especially in poor visibility conditions (night, fog), at long distances (>100m) and when approaching from the side (crossing situations). The SmartProtect project addresses this problem by means of a holistic, integrative approach that includes software, a combination of imaging and distance-detecting sensor technology and component design with interactive modular architecture. The main characteristics of the novel automotive perception system are fused sensor data with prediction function and automated, problem-specific control of headlights and infrared lighting, supported by reactive camera and LIDAR systems, which are installed in the same component housing and cover different detection angles. The envisaged multimodal systems will be evaluated in real driving conditions and in representative, reproducible test scenarios.

Partners: ZKW | TU Vienna | emotion3D

Funding agency: This project is funded by the FFG within the MdZ-2019 program.

SIMULATION-BASED AI DEVELOPMENT
For training, testing, algorithm improvement and validation, a large amount of high-quality data is required. Generating this data in real life takes great effort and raises data-privacy questions. Thus, we are heavily investigating alternative ways to create data, such as computer-graphics simulations.

RESEARCH PROJECTS

SyntheticCabin
The aim of the project is to provide a cost- and time-efficient simulation workflow for the development of environment analysis applications for vehicle interiors. Our innovative approach simulates complex dynamic vehicle interior scenarios (e.g. generation and animation of 3D person models, materials and surfaces, matching sensor configurations, etc.) and in particular enables the production of synthetic data for training, validation and testing purposes. This avoids the need for costly data acquisition and manual annotation and enables the replication of a variety of scenarios, environmental conditions and sensor modalities. In contrast to real (i.e. non-simulated) data sets, the planned simulation workflow can also capture rare or potentially dangerous scenarios (e.g. situations during a collision or microsleep). In addition to applications in the automotive sector, we also see operator safety in the transport sector (e.g. bus, train) and commercial vehicle sector (e.g. trucks, construction machinery) as promising areas of application for the proposed simulation workflow.
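The kind of scenario randomization such a workflow relies on can be sketched as follows. The parameter names and value lists are hypothetical placeholders for the variables the actual simulation would vary (occupant models, materials, sensor configurations, and so on).

```python
import random

def sample_cabin_scene(rng):
    """Sample one synthetic in-cabin scenario description.  The
    parameter lists are illustrative placeholders for the kinds of
    variables a simulation workflow would randomize."""
    return {
        "occupant_height_cm": rng.randint(100, 200),
        "seat": rng.choice(["driver", "front_passenger",
                            "rear_left", "rear_right"]),
        "lighting": rng.choice(["day", "dusk", "night_ir"]),
        "pose": rng.choice(["upright", "leaning", "reaching",
                            "microsleep"]),   # includes rare/dangerous cases
    }

rng = random.Random(42)   # fixed seed for reproducible datasets
scenes = [sample_cabin_scene(rng) for _ in range(1000)]
```

Unlike real recordings, every sampled scene comes with perfect ground-truth annotations for free, and rare or dangerous poses can be generated as often as training requires.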

Partners: BECOM Systems | Rechenraum | TU Vienna | emotion3D

Funding agency: This project is funded by the FFG within the MdZ-2020 program.

SUPPORTERS