DistRDF2ML - Scalable Distributed In-Memory Machine Learning Pipelines for RDF Knowledge Graphs
Authors: Carsten Felix Draschner, Farshad Bakhshandegan Moghaddam, Claus Stadler, Jens Lehmann, Hajira Jabeen
Year of publication: 2023
Subject: Computer science, Machine learning, Artificial intelligence, Big data, RDF, Knowledge Graphs, Feature vector, Data preprocessing, Scalable Semantic Processing, Pipeline (software), Metadata, Explainable AI, Software design, Distributed Computing, Source lines of code
Source: CIKM: Proceedings of the 30th ACM International Conference on Information & Knowledge Management
Description: This paper presents DistRDF2ML, a generic, scalable, and distributed framework for creating in-memory data preprocessing pipelines for Spark-based machine learning on RDF knowledge graphs. The framework introduces software modules that transform large-scale RDF data into ML-ready, fixed-length numeric feature vectors. The developed modules are optimized for the multimodal nature of knowledge graphs. DistRDF2ML follows the software design and usage principles of common data science stacks and offers an easy-to-use package for creating machine learning pipelines. The modules used in the pipeline, the hyper-parameters, and the results are exported as a semantic structure that can be used to enrich the original knowledge graph. This semantic representation of metadata and machine learning results increases the reusability, explainability, and reproducibility of the machine learning pipelines. The entire DistRDF2ML framework is open source, integrated into the holistic SANSA stack, documented in Scala-docs, and covered by unit tests. DistRDF2ML demonstrates its scalable design across different processing-power configurations and (hyper-)parameter setups in various experiments. The framework brings the three worlds of knowledge graph engineers, distributed computation developers, and data scientists closer together and enables all of them to create explainable ML pipelines in a few lines of code.
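The central preprocessing step the abstract describes is turning multimodal RDF triples into fixed-length numeric feature vectors keyed by entity. The following is a minimal, dependency-free Python sketch of that idea only; it is not the DistRDF2ML API (which is Scala/Spark-based), and the triples, predicate list, and aggregation rules are illustrative assumptions:

```python
# Conceptual sketch (NOT the DistRDF2ML API): building fixed-length
# numeric feature vectors per subject from multimodal RDF triples.
from collections import defaultdict

# Hypothetical toy triples (subject, predicate, object); object may be
# a numeric literal or a URI reference.
triples = [
    ("ex:alice", "ex:age", 34),
    ("ex:alice", "ex:knows", "ex:bob"),
    ("ex:alice", "ex:knows", "ex:carol"),
    ("ex:bob",   "ex:age", 41),
    ("ex:bob",   "ex:knows", "ex:alice"),
]

def feature_vectors(triples, predicates):
    """One fixed-length vector per subject: numeric literals are kept
    as-is, URI objects are aggregated by counting -- a simple stand-in
    for multimodal feature extraction."""
    by_subject = defaultdict(lambda: defaultdict(list))
    for s, p, o in triples:
        by_subject[s][p].append(o)
    vectors = {}
    for s, props in by_subject.items():
        vec = []
        for p in predicates:  # fixed predicate order => fixed length
            objs = props.get(p, [])
            if objs and isinstance(objs[0], (int, float)):
                vec.append(float(objs[0]))      # numeric literal
            else:
                vec.append(float(len(objs)))    # count of object links
        vectors[s] = vec
    return vectors

vecs = feature_vectors(triples, ["ex:age", "ex:knows"])
print(vecs["ex:alice"])  # [34.0, 2.0]
```

In the actual framework, this transformation runs distributed on Spark DataFrames, which is what makes it viable for large-scale knowledge graphs.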
Database: OpenAIRE
External link: