Microscopic printing analysis for source printer classification based on a deep learning approach
Abstract
Detecting counterfeit printed documents from scanned evidence can be a challenging
task. The form of a printed pattern at the microscopic scale depends on the printing source and the printing
substance used. This study presents a comprehensive analysis of printing patterns at a
microscopic scale, considering factors such as printing direction, printing substrate (uncoated and
coated paper), and printing method (conventional offset, waterless offset, and electrophotography).
Through the investigation, it is observed that printing direction has minimal influence, while
shape descriptor indices prove effective in distinguishing printing materials and processes at the
microscopic scale. To address this identification problem, deep learning techniques are employed,
specifically the ResNet family of deep neural network architectures. Several ResNet
variants, namely ResNet50, ResNet101, and ResNet152, are evaluated as the backbone
of a classification model. The models are trained on a comprehensive dataset of
microscopic printed images with various printing patterns from different source printers. The
experimental results demonstrate that the ResNet101 and ResNet152 variants consistently
outperform others in accurately discerning printer sources based on microscopic printed patterns.
The findings of this study lay the foundation for a pre-trained model with accurate
identification performance, enabling the source printer of a printed document to be identified. The potential
applications of this research extend to the fields of printer forensics, document authentication, and
microscopic printing analysis.
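
As an illustration of the backbone setup described above, the following is a minimal sketch of adapting a ResNet variant (here ResNet101 via torchvision) as the backbone of a printer-source classifier. The number of classes, the pretrained weights, and the preprocessing pipeline are assumptions for illustration, not details taken from the study.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Assumed number of source-printer classes; the study's exact class count is not stated here.
NUM_CLASSES = 3  # e.g. conventional offset, waterless offset, electrophotography

def build_printer_classifier(num_classes: int = NUM_CLASSES) -> nn.Module:
    """Build a classifier with a ResNet101 backbone pretrained on ImageNet."""
    backbone = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V2)
    # Replace the final fully connected layer to match the number of printer classes.
    backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
    return backbone

# Typical preprocessing for an ImageNet-pretrained backbone (an assumption,
# not the preprocessing used in the study).
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

if __name__ == "__main__":
    model = build_printer_classifier()
    dummy = torch.randn(1, 3, 224, 224)  # one microscopic-scale image patch, resized
    logits = model(dummy)
    print(logits.shape)  # torch.Size([1, NUM_CLASSES])
```

Swapping `resnet101` for `resnet50` or `resnet152` changes only the backbone constructor; the rest of the classification head and preprocessing remain the same, which is how the variants compared in the abstract would typically be evaluated against one another.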