Paper in "Frontiers in Big Data"

Improving the data quality of applications that use machine learning (ML) not only helps to increase their performance, but also makes it possible to use more efficient models. One of the most frequent data quality problems is missing values. In this peer-reviewed article, Sebastian Jäger, Arndt Allhorn, and Felix Bießmann compare, across a wide variety of datasets, different methods that can be used to predict these missing entries, which we encounter again and again in the context of sustainable product recommendations. Finally, recommendations for a wide range of situations are given, based on the results of our experiments.

Abstract

With the increasing importance and complexity of data pipelines, data quality has become one of the key challenges in modern software applications. The importance of data quality has been recognized beyond the field of data engineering and database management systems (DBMSs). For machine learning (ML) applications, too, high data quality standards are crucial to ensure robust predictive performance and responsible usage of automated decision making. One of the most frequent data quality problems is missing values. Incomplete datasets can break data pipelines and, when not detected, can have a devastating impact on downstream ML applications. While statisticians and, more recently, ML researchers have introduced a variety of approaches to impute missing values, comprehensive benchmarks comparing classical and modern imputation approaches under fair and realistic conditions are underrepresented. Here, we aim to fill this gap. We conduct a comprehensive suite of experiments on a large number of datasets with heterogeneous data and realistic missingness conditions, comparing both novel deep learning approaches and classical ML imputation methods when either only the test data or both the training and test data are affected by missing values. Each imputation method is evaluated with respect to its imputation quality and the impact imputation has on a downstream ML task. Our results provide valuable insights into the performance of a variety of imputation methods under realistic conditions. We hope that our results help researchers and engineers guide their choice of data preprocessing methods for automated data quality improvement.
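To make the evaluation protocol described in the abstract concrete, below is a minimal sketch of such a benchmark in Python. This is not the paper's actual code: the scikit-learn imputers (SimpleImputer, KNNImputer, IterativeImputer) stand in for the classical methods, the deep learning approaches are omitted, and the `make_missing` helper with its MCAR-style (missing completely at random) pattern is an illustrative assumption. Only the scenario where both train and test data are affected by missing values is shown.

```python
# Minimal imputation-benchmark sketch (illustrative, not the paper's code).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import SimpleImputer, KNNImputer, IterativeImputer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def make_missing(X, frac=0.2):
    """Discard a random fraction of cells (MCAR, the simplest missingness pattern)."""
    X = X.copy()
    mask = rng.random(X.shape) < frac
    X[mask] = np.nan
    return X, mask

# Scenario: both training and test data contain missing values.
X_train_miss, _ = make_missing(X_train)
X_test_miss, test_mask = make_missing(X_test)

imputers = {
    "mean": SimpleImputer(strategy="mean"),
    "knn": KNNImputer(n_neighbors=5),
    "iterative": IterativeImputer(random_state=0),
}

for name, imputer in imputers.items():
    X_train_imp = imputer.fit_transform(X_train_miss)
    X_test_imp = imputer.transform(X_test_miss)

    # Imputation quality: RMSE between imputed and true values on the discarded cells.
    rmse = np.sqrt(np.mean((X_test_imp[test_mask] - X_test[test_mask]) ** 2))

    # Downstream impact: accuracy of a classifier trained and evaluated on imputed data.
    clf = RandomForestClassifier(random_state=0).fit(X_train_imp, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test_imp))
    print(f"{name:>9}: imputation RMSE={rmse:.3f}, downstream accuracy={acc:.3f}")
```

In this setup, the two evaluation criteria from the abstract map directly to the imputation RMSE on the discarded cells and the downstream classifier's accuracy.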