37th Digital Avionics Systems Conference (DASC)

From 23rd to 27th of September 2018, the 37th Digital Avionics Systems Conference (DASC) took place in London, England. The MALORCA team presented some of the project outcomes in the Human Factors & Performance for Aerospace track. We are proud to report that our paper titled “Semi-supervised Adaptation of Assistant Based Speech Recognition Models for different Approach Areas” was awarded best paper of this track.

Paper content:
Air Navigation Service Providers (ANSPs) are replacing paper flight strips with different digital solutions. The commands instructed by air traffic controllers (ATCos) are then available in computer-readable form. However, those systems require manual controller inputs, i.e. ATCos’ workload increases. The Active Listening Assistant (AcListant®) project has shown that Assistant Based Speech Recognition (ABSR) is a potential solution to reduce this additional workload. However, developing an ABSR application for a specific target domain usually requires a large amount of manually transcribed audio data in order to achieve task-sufficient recognition accuracies. The MALORCA project developed an initial basic ABSR system and semi-automatically tailored its recognition models for both the Prague and Vienna approach areas by machine learning from automatically transcribed audio data. Command recognition error rates were reduced from 7.9% to under 0.6% for Prague and from 18.9% to 3.2% for Vienna.
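
The paper itself describes the full semi-supervised learning pipeline; the core idea, keeping only automatically transcribed utterances whose recognition confidence passes a threshold and feeding them back into model training, can be sketched roughly as follows. All names in this sketch (`adapt_models`, `Hypothesis`, the `recognize` and `retrain` callables) are hypothetical placeholders for illustration, not the actual MALORCA implementation.

```python
# Hypothetical sketch of the semi-supervised adaptation idea described above:
# keep only automatically transcribed utterances whose recognition confidence
# exceeds a threshold and feed them back into model training.
from dataclasses import dataclass
from typing import Callable, Iterable, List, Tuple


@dataclass
class Hypothesis:
    words: str         # recognized word sequence
    confidence: float  # recognizer confidence in [0, 1]


def adapt_models(
    models: object,
    audio: Iterable[object],
    recognize: Callable[[object, object], Hypothesis],
    retrain: Callable[[object, List[Tuple[object, str]]], object],
    threshold: float = 0.9,
) -> object:
    """One pass of semi-supervised adaptation on untranscribed audio."""
    selected: List[Tuple[object, str]] = []
    for utterance in audio:
        hyp = recognize(models, utterance)           # automatic transcription
        if hyp.confidence >= threshold:              # keep only reliable outputs
            selected.append((utterance, hyp.words))  # treat output as pseudo-label
    return retrain(models, selected) if selected else models
```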

The full paper can be found here.
© 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Another paper on topics related to MALORCA, also involving DLR, was presented by the team of the PJ.16-04 project.

Paper content:
Nowadays, Automatic Speech Recognition (ASR) applications are increasingly successful in the air traffic control (ATC) domain. Paramount to achieving this is collecting enough data for speech recognition model training. Thousands of hours of ATC communication are recorded every day. However, transcribing these data sets is resource intensive: it means writing down the sequence of spoken words and, more importantly, interpreting the relevant semantics. Many different approaches for command transcription, including CPDLC (Controller Pilot Data Link Communications), currently exist in the ATC community, a fact that complicates, for example, the exchange of transcriptions. The partners of the SESAR-funded solution PJ.16-04 are currently developing a common ontology for the transcription of controller-pilot communications, which will harmonize the integration of ASR into controller working positions. The resulting ontology is presented in this paper.
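
To make the idea of a common transcription ontology concrete, the sketch below shows what a machine-readable annotation of a controller utterance could look like. The field names, structure, and example values are assumptions chosen for illustration only; the actual PJ.16-04 ontology is defined in the paper itself.

```python
# Illustrative sketch only: a possible structure for pairing a spoken word
# sequence with its extracted semantics.  Not the actual PJ.16-04 ontology.
from dataclasses import dataclass
from typing import List


@dataclass
class AnnotatedInstruction:
    callsign: str  # addressed aircraft, e.g. "DLH2AB" (example value)
    command: str   # instruction type, e.g. "DESCEND"
    value: str     # instructed value, e.g. "70"
    unit: str      # unit or qualifier, e.g. "FL"


@dataclass
class Transcription:
    words: str                                # spoken word sequence
    instructions: List[AnnotatedInstruction]  # extracted semantics


# Hypothetical usage example
example = Transcription(
    words="lufthansa two alfa bravo descend flight level seven zero",
    instructions=[AnnotatedInstruction("DLH2AB", "DESCEND", "70", "FL")],
)
```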

The full paper can be found here.
© 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.