DAT4.ZERO: Training materials
Training and knowledge transfer activities are a joint effort of the consortium partners and the training and knowledge transfer team, which supports and coordinates the planning and organisation of activities and maintains an overview of activities and training material.
Early in the project, we established a shared knowledge foundation among the project partners. Then, we combined insights and outcomes from project tasks and collaborative efforts among partners in presentations, training sessions, and demonstrations. In the final phase, we transferred the consolidated knowledge and technical solutions to the predetermined target audiences and stakeholders, so that they can be further deployed post-project as part of the project exploitation.
We have conducted training activities at different levels, ranging from a generic knowledge foundation (such as the trainings on “Design of Experiment” and on “Supplier Management Process for ZDM”) to DAT4.ZERO solution-oriented training sessions and technology transfer (such as the trainings on “In-line measurements of aluminium profile”, “3D characterisation of cylindrical runout”, the “Nerve Edge computing platform”, and the “Data Management System (DQM-DMS)”).
The following list highlights the main training modules, which are available upon request.
|Training module||Description||Work package||Lead Organization||Contact|
|In-line measurements of aluminium profile||Metrology is an essential part of the manufacturing process to test conformance and ensure reliability. Traditional metrology is often carried out after the parts are manufactured, for example using contact measurement systems, gauge blocks or FARO arms, to measure the form or surface of a part; it is often thought of as a non-value-added process that consumes time, holds up stock and results in scrap/rework. This causes real pain to manufacturers as they try to achieve faster launches to market and on-time deliveries. With recent advances in optical measurement capabilities, a smart-metrology system can be introduced to reduce lead times and provide reliable information for effective decision-making. A smart-metrology system integrates seamlessly with the manufacturing line to capture and analyse data in real time without the need to stop the manufacturing line.||WP2||Taraz, UK||Waiel Elmadih |
|3D runout characterisation||Cylindrical rotational errors are typically characterised using runout: a 1D numerical value that defines undesired variation from a rotational centre. Methods have been developed to expand traditional runout tolerancing assessments to three dimensions, accounting not only for the magnitude of error but also its direction. This unlocks the possibility of assembly optimisations that account for directional error and reduce waste by making more parts fit-for-purpose. This presentation will give an overview of the three-dimensional runout characterisation method, as well as the software validation steps taken to assure its value. This research was performed at the University of Nottingham for the Dentsply use case.||WP2||University of Nottingham||Luke Todhunter |
|3D-DaVa: Enhancing 3D Point Cloud Data Reliability for Industrial Applications||The escalating incorporation of three-dimensional (3D) point cloud data across various industrial applications highlights the necessity of assuring its reliability. However, along with the error-prone process of object digitization, the actual data volume, pattern complexity, environmental disturbances, and equipment inaccuracies can introduce significant challenges in assessing the quality of 3D point cloud data. In light of these challenges, there is a pressing need for a generally applicable, comprehensive, and robust solution to effectively validate the integrity of point cloud data and assess its quality. Such a solution would empower professionals to rely on this data for critical decision-making, covering applications within many domains, such as manufacturing, the automotive industry, and robotics.||WP2||SINTEF Digital||Adela Nedisan Videsjorden (firstname.lastname@example.org)|
|Nerve Edge Platform||This training gives participants a good high-level understanding of the capabilities of an industrial IoT edge platform like Nerve. It is not intended primarily as a product presentation but rather as an explanation of which capabilities we consider most important (and why) for an industrial edge computing platform, and how the Nerve architecture addresses them. The training covers use cases and trends influencing industrial edge computing, industrial edge computing architecture, an overview of Nerve's technical infrastructure and components, and a live demo of a few Nerve GUI elements.||WP3||TTTech||Francesca Flamigni |
|Edge-based Data Profiling and Repair as a Service||With the proliferation of IoT devices and the consequent exponential growth in data generation, ensuring data quality has become a critical challenge in IoT applications. Erroneous data can significantly impact the reliability and effectiveness of decision-making processes and downstream analytics. Leveraging the computational abilities of edge devices enables data profiling and repair tasks at the edge, allowing for immediate remediation of erroneous data within the data stream and improved scalability through distributed repair across multiple edge devices. Cloud-based data profiling and repair methods have been extensively researched, but limited computational resources constrain their applicability at edge/fog devices. To overcome this limitation and enhance generalizability, Machine Learning (ML) offers a promising solution, allowing sensor substitution, missing value prediction, and corrupt data replacement. ML-based data repair techniques can be flexibly deployed at the edge using containerized repair services for real-time data repair.||WP3||SINTEF Digital||Phu Nguyen |
|Data Management System (Data modelling and semantic data tagging)||This training introduces participants to the use of, and integration with, the Data Management System (DMS) module, developed by Holonix for the DQM Platform. The main objective of the DMS is to allow a structured, time-ordered acquisition of IoT-related data, in particular from advanced industrial machines, gateways or other correlated applications. Particular care is given to the security of the system, as it is tasked with uplifting data from the field layer to the cloud layer, and to its scalability, as IoT data typically grows large, with many records from many different sources.||WP3||Holonix||Stefano Borgia |
|Training on the Data Visualization Modules||This training will focus on the DQM Core Data Visualization Modules developed in WP3. These modules, integrated in the pilot instances of the DQM Core, consist of Graphical User Interfaces supporting users in data checking, filtering and visualization. They represent the front end of the data management layer of the DQM platform and provide the basis for improved decision making on the shop floor and in engineering. Data Visualization Modules are built following common IT principles, but each of them is customized according to the specific requirements of a DAT4.Zero use case.||WP3||Holonix||Stefano Borgia |
|The DQM Analytics Toolkit||This training focuses on the DQM Analytics Toolkit developed in WP4, a modular toolkit for supporting data analytics. The toolkit builds on top of the foundational data layer, the DQM Core platform, and provides the basis for improved decision making on the shop floor and in engineering. It consists of a set of independent modules that, together, provide support for data preparation, data analytics, and data visualisation.||WP4||KIT-AR / SkillAugment||Felix Mannhardt (email@example.com)|
|Data fusion and analytics solutions for knowledge extraction, correlation, and pattern analysis||Manufacturing has enabled the mechanized mass production of the same (or similar) products by replacing craftsmen with assembly lines of machines. The quality of each product in an assembly line greatly hinges on continual observation and error compensation during machining, using sensors that measure quantities such as the position and torque of a cutting tool and vibrations due to possible imperfections in the cutting tool and raw material. Patterns observed in sensor data from a (near-)optimal production cycle should ideally recur in subsequent production cycles with minimal deviation. Manually labelling and comparing such patterns is an insurmountable task due to the massive amount of streaming data that a production process can generate. We present UDAVA, an unsupervised machine learning pipeline that automatically discovers process behaviour patterns in sensor data for a reference production cycle. https://github.com/SINTEF-9012/Udava||WP4||SINTEF Digital||Erik Johannes Husom |
|Machine Learning to predict Product Quality based on Structure-Borne Sound Data||The complexity of products is increasing, requiring high-precision components for key functions. Microgears, with their complex geometry, pose manufacturing challenges, as their large geometric deviations impact functionality, leading to unwanted noise and vibrations in the final product. However, there are no readily available integrated measuring methods for quality control of all microgears produced. The proposed approach integrates an acoustic emission sensor into the hobbing process of microgears to predict process parameters, geometric variables, and functional variables of the produced gears using machine learning.||WP4||KIT||Ali Bilen |
|XAI Quality Analytics Module||Monitoring of quality is a key concern in ZDM, and digital support tools are a key enabling technology to avoid and amend product faults. By combining inline measurements with automated analysis of process conformity and item variation, it becomes possible to provide more rapid feedback on deviations in products or processes. However, when using automated judgements of quality that are based on machine learning and training data, one needs to make sure that uncertainty and decision information is conveyed to the system users to facilitate understanding, trust, and corrections. This training webinar uses an industrial pilot case, the Benteler extrusion line, to describe the use of an XAI-based quality analytics system. The webinar presents key monitoring tasks, the pilot case, and the main methodology, before going through a data example following the different screens and drilldowns to facilitate explanations of quality judgements. The goal is to build a better understanding of the tool and of how different types of graphics and information can support explainability in AI-based systems. The module was developed for both Benteler and Enki and can be generalized for other manufacturers. |
|Grinding defects prediction toolkit||This training session will focus on the development of the Grinding Defects Prediction Toolkit within WP5. The toolkit aims to address grinding defects associated with thermal damage and various patterns that emerge on the ground surface, such as waviness. It is composed of multiple stages across the DAT4.ZERO work packages, including data acquisition from diverse sources (WP2), communication and architecture for acquired data (WP3), data analytics (WP4), and process modelling to forecast the occurrence of grinding defects (WP5). The training will feature a conceptual demonstration, primarily tailored to the Fersa use case, but with applicability to other grinding processes. While thermal damage prediction will be the main topic, a brief overview of other defects will also be provided.||WP5||Ideko||Jorge Alvarez Ruiz |
|Developed supplier selection process||This training shows an adapted supplier management process that can ease the integration of external supplier production data through data-exchange support over the entire business lifecycle, especially in supplier evaluation (“give suppliers who are capable and/or willing to exchange production data a bonus in evaluation”) as well as in supplier qualification (“help suppliers to set up relevant skills for data exchange, like implementing single-part traceability”). The overall goal is to improve model accuracy and performance and to reduce the causes of defects external to the company. This should be achieved by utilizing shared supplier production data, which is more readily available through a supplier management process that also focuses on supplier data.||WP5||KIT||Rainer Silbernagel (firstname.lastname@example.org)|
|Design of Experiment||Intro to Design of Experiments:
• Why is it not used routinely in R&D?
• Types of designs and purposes
• Short intro to Analysis of Variance (ANOVA)
• Factorial designs
• Optimization designs
• Designs with constraints
• Case studies
• Hands-on demo with optimization |
|WP6||IdleTechs||Frank Westad |
|Tool Demo: REPTILE-a Tool for Replay-driven Continual Learning in IIoT||We present an automated Machine Learning (ML) tool designed as a continual learning pipeline to adapt to evolving data streams in the Industrial Internet of Things (IIoT). This tool creates ML experiences, starting with training a neural network model. It then iteratively refines this model using fresh data while judiciously replaying pertinent historical data segments. When applied to IIoT sensor data, our tool ensures sustained ML performance amid evolving data dynamics while preventing the undue accumulation of obsolete sensor data. We have successfully assessed our tool across three industrial datasets and affirm its efficacy in dynamic knowledge retention and adaptation.||WP6||SINTEF Digital||Erik Johannes Husom |
|Tool Demo: Automated Behavior Labeling for IIoT Data||We present an automated data analysis tool for IIoT applications that discovers process behaviour patterns in sensor data. It takes time-varying sensor data from reference production cycles and performs clustering on summary-statistic feature vectors derived from raw sensor data over configurable window sizes. It automatically labels the raw sensor data based on distinct behaviour modes represented by the clusters. The tool wraps, as a web service deployed in a Docker container, the AI model represented by the clusters/behaviour modes discovered in the reference sensor data. We have successfully evaluated the tool over four industrial datasets. Demo video: ||WP6||SINTEF Digital||Erik Johannes Husom |
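The UDAVA pipeline and the behaviour-labeling demo above share one core idea: derive summary-statistic feature vectors from fixed-size windows of raw sensor data, then cluster those vectors so that each cluster becomes a behaviour-mode label. The following is a minimal sketch of that idea using scikit-learn and synthetic data; the function names, window features, and two-mode signal are illustrative assumptions, not taken from the UDAVA codebase.

```python
import numpy as np
from sklearn.cluster import KMeans

def window_features(signal, window_size):
    """Summary-statistic feature vectors over non-overlapping windows."""
    n = len(signal) // window_size
    windows = signal[: n * window_size].reshape(n, window_size)
    return np.column_stack([
        windows.mean(axis=1),
        windows.std(axis=1),
        windows.min(axis=1),
        windows.max(axis=1),
    ])

def label_behaviour(signal, window_size=100, n_modes=2, seed=0):
    """Cluster windows into behaviour modes; one label per window."""
    features = window_features(signal, window_size)
    model = KMeans(n_clusters=n_modes, n_init=10, random_state=seed)
    return model.fit_predict(features), model

# Synthetic reference production cycle: idle, machining, idle again
rng = np.random.default_rng(0)
signal = np.concatenate([
    rng.normal(0.0, 0.1, 1000),  # idle: low mean, low variance
    rng.normal(5.0, 1.0, 1000),  # machining: higher mean and variance
    rng.normal(0.0, 0.1, 1000),  # idle again
])
labels, model = label_behaviour(signal)
```

A new production cycle can then be scored with `model.predict(window_features(new_signal, 100))`, and a deviation flagged when its label sequence departs from the reference cycle's.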
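The "Design of Experiment" outline above can be made concrete with the smallest factorial case: a two-factor full factorial, where each factor is run at a coded low (-1) and high (+1) level and a main effect is estimated as the mean response at the high level minus the mean at the low level. Below is a minimal numpy sketch with noise-free synthetic responses; the factor names, coefficients, and numbers are purely illustrative.

```python
import itertools
import numpy as np

# 2^2 full factorial design: two factors (e.g. temperature, pressure)
# at coded levels -1 and +1, giving four runs.
levels = [-1, +1]
design = np.array(list(itertools.product(levels, levels)))

# Synthetic responses: yield = 50 + 4*temperature + 2*pressure
# (noise-free for clarity; real runs would include replication).
response = 50 + 4 * design[:, 0] + 2 * design[:, 1]

def main_effect(design, response, factor):
    """Mean response at the high level minus mean at the low level."""
    high = response[design[:, factor] == +1].mean()
    low = response[design[:, factor] == -1].mean()
    return high - low

effect_temp = main_effect(design, response, 0)      # 2x the true coefficient
effect_pressure = main_effect(design, response, 1)
```

With replicated runs and real noise, an ANOVA F-test (for example via `scipy.stats.f_oneway`) would then judge whether such effects stand out from run-to-run variation, which is the topic of the module's ANOVA introduction.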