An Explainable Deep Learning Model for Illegal Dress Code Detection and Classification
Abstract
This study introduces an explainable deep learning model for detecting and classifying dress code violations, leveraging a custom dataset of 130 images categorized into four classes: illegal male dressing, illegal female dressing, legal male dressing, and legal female dressing. The proposed model is built on a pre-trained MobileNetV2 architecture, fine-tuned to achieve a training accuracy of 100% and a validation accuracy of 90%. The model's performance is further validated through a confusion matrix, demonstrating robust classification capabilities, particularly for the legal male and female dress codes, with minor misclassifications in the illegal categories. To ensure interpretability, SHAP (SHapley Additive exPlanations) and Gradient Magnitude Heatmaps are employed, providing insights into the model's decision-making process. The SHAP visualizations reveal pixel-level contributions to the predictions, while the Gradient Magnitude Heatmaps highlight regions of sensitivity, emphasizing the model's focus on patterns distributed across the images. The agreement between the two techniques supports the reliability of the model's feature extraction and its generalizability. The proposed approach not only achieves high classification accuracy but also integrates explainability techniques to enhance transparency and trust, making it suitable for socially sensitive applications. The results demonstrate the effectiveness of combining high-performance deep learning models with robust explainability frameworks to address complex classification challenges.
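The gradient-magnitude saliency idea mentioned above can be illustrated with a minimal sketch. This toy example uses a linear classifier over flattened 8×8 grayscale inputs as a stand-in for the fine-tuned MobileNetV2 (all shapes, the random weights, and the linear model itself are illustrative assumptions, not the paper's actual model): for a linear model, the gradient of a class logit with respect to the input is just that class's weight row, so the heatmap is its absolute value reshaped to the image grid.

```python
import numpy as np

# Toy stand-in for the fine-tuned classifier: a linear model over flattened
# 8x8 "images" with 4 classes (illegal/legal x male/female dressing).
# Hypothetical shapes and weights; the paper's model is MobileNetV2 on photos.
rng = np.random.default_rng(0)
H = W_IMG = 8
NUM_CLASSES = 4
W = rng.normal(size=(NUM_CLASSES, H * W_IMG))  # class weight matrix

def predict_logits(x_flat):
    """Class scores for a flattened input image."""
    return W @ x_flat

def gradient_magnitude_heatmap(target_class):
    # For a linear model, d(logit_c)/d(x) equals the weight row W[c], so
    # the gradient-magnitude saliency map is |W[c]| reshaped to the grid.
    # (A deep network would require backpropagation to obtain this gradient.)
    return np.abs(W[target_class]).reshape(H, W_IMG)

x = rng.normal(size=H * W_IMG)          # one synthetic input image
pred = int(np.argmax(predict_logits(x)))
heatmap = gradient_magnitude_heatmap(pred)
print(heatmap.shape)  # (8, 8)
```

In the paper's setting the same quantity would be obtained by backpropagating the predicted logit through the network to the input pixels; brighter heatmap cells mark pixels to which the prediction is most sensitive.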