Contamination detection in industrial 3D printing


AI-based detection of nozzle contamination in industrial 3D printing

Input

Infrared images of 3D printer nozzle

Goal

Optimise manufacturing process by reducing the need for human visual inspections

Output

Detection and classification of nozzle contamination


Starting Point


During 3D printing, the printing material is heated and extruded through a nozzle, which deposits the material layer by layer to build up the 3D-printed object. Unwanted material underneath the nozzle can contaminate future prints, so the nozzle must be inspected visually at regular intervals to avoid costly reprints.

The purpose of this project is to use AI-based algorithms to automatically detect if the area under the printing nozzle is contaminated.


Challenges


Industrial 3D printers are typically housed inside specially designed, heated enclosures. Installing lighting so that prints can be monitored with optical cameras is therefore costly and cumbersome. A more elegant solution is to monitor the nozzle with an infrared camera: since the heated nozzle is at a much higher temperature than its surroundings, the region of interest shows up very clearly in an infrared image without any additional lighting. Training a machine learning model to automatically detect contamination around a printing nozzle requires a specialised dataset, and the creation and curation of a custom dataset of infrared 3D-printing images presents its own challenges and opportunities:

  • The images collected are precisely the type of images the model will later classify, so the dataset contains no "redundant data".

  • Image classification models are typically trained on datasets of millions of images, but a self-created dataset cannot realistically reach that size.

  • The images collected may vary substantially when they come from separate sources, for example from different printers; one common way of reducing this variation is sketched below.
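
One standard way to reduce source-to-source intensity differences is to normalise each image individually before it is fed to the model. The sketch below illustrates this idea with a hypothetical PyTorch dataset class; the class name, file handling and normalisation scheme are illustrative assumptions and not the project's actual data pipeline.

    import numpy as np
    import torch
    from PIL import Image
    from torch.utils.data import Dataset

    class InfraredNozzleDataset(Dataset):
        """Hypothetical dataset of infrared nozzle images collected from several printers."""

        def __init__(self, image_paths, labels):
            self.image_paths = image_paths
            self.labels = labels  # 0 = clean, 1 = contaminated

        def __len__(self):
            return len(self.image_paths)

        def __getitem__(self, idx):
            # Load the infrared image as a single-channel floating-point array.
            img = np.array(Image.open(self.image_paths[idx]).convert("L"), dtype=np.float32)

            # Per-image standardisation: zero mean and unit variance, which reduces
            # absolute intensity differences between images from different printers.
            img = (img - img.mean()) / (img.std() + 1e-8)

            return torch.from_numpy(img).unsqueeze(0), self.labels[idx]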


Solution


A common technique for training machine learning models on small amounts of data is data augmentation: each image is randomly modified before it is passed to the model during training. Such modifications include geometric transformations, such as rotations and reflections, as well as photometric changes, such as brightness and contrast adjustments. The augmentations are designed to mimic the variation seen in real images and effectively enlarge the training set.
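
As a rough illustration of such a pipeline, the snippet below uses torchvision transforms (an assumption, since the source does not specify the framework), with transform choices and parameters that are illustrative rather than the project's exact configuration.

    from PIL import Image
    from torchvision import transforms

    # Illustrative augmentation pipeline for single-channel infrared images.
    # Each transform is applied randomly, so the model sees a slightly different
    # version of every image in every training epoch.
    train_transforms = transforms.Compose([
        transforms.RandomHorizontalFlip(p=0.5),                 # reflections
        transforms.RandomRotation(degrees=10),                  # small rotations
        transforms.ColorJitter(brightness=0.2, contrast=0.2),   # intensity variation
        transforms.ToTensor(),
    ])

    # Hypothetical usage on a grayscale infrared image.
    img = Image.open("nozzle_example.png").convert("L")
    augmented = train_transforms(img)  # a different randomly augmented tensor each call

Because a rotated or slightly brighter nozzle image is still a physically plausible observation, the model effectively trains on a much larger and more varied set of images than was actually collected.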


Technical Background


To be a bit more technical, our image classification model is based on the popular ResNet architecture. ResNet is a deep convolutional neural network that has found widespread use in image classification tasks. The simplest such model is ResNet18, consisting of 18 weight layers together with residual (skip) connections, which allow gradients to propagate more easily through the network and make deep models much easier to train.
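
As a minimal sketch (assuming PyTorch and torchvision, which the source does not specify), a ResNet18 backbone might be adapted to single-channel infrared images and a two-class clean/contaminated output roughly as follows:

    import torch.nn as nn
    from torchvision import models

    # Start from a standard ResNet18, here initialised with ImageNet weights
    # (transfer learning is a common choice when the custom dataset is small).
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # Infrared images have a single channel, so replace the first convolution,
    # which expects 3-channel RGB input.
    model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)

    # Replace the final fully connected layer with a two-class head
    # (clean vs. contaminated area under the nozzle).
    model.fc = nn.Linear(model.fc.in_features, 2)

Replacing the first convolution discards its pretrained weights, but the rest of the backbone still transfers, which is usually the main benefit when training data is scarce.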
