A Lightweight CNN Model for Vision Based Fire Detection on Embedded Systems
Abstract
Existing fire detection systems are generally sensor-based. Such traditional sensor-based systems have a limited detection range, slow response, high false alarm rates and an inability to provide rich descriptive information. To overcome these shortcomings, computer vision-based methods for fire detection have been proposed. These vision-based systems offer faster response, larger surveillance coverage, affordable cost and less human interference. However, their performance is affected by the complexity of the scene under observation, irregular lighting and low-quality frames. The limitations of both traditional sensor-based and vision-based fire detection systems can be addressed by using convolutional neural networks (CNNs). Despite their superior performance in various computer vision tasks, the computational complexity of CNN models remains a key concern for deployment on embedded platforms, which are characterised by limited resources and memory. This paper presents a computationally inexpensive CNN model for fire detection that can be easily deployed on embedded hardware platforms such as FPGAs. SqueezeNet, a CNN model pre-trained on the ImageNet dataset, was modified and trained using a transfer learning approach to classify fire images. SqueezeNet is a computationally light CNN architecture that is 18 layers deep and 5.2 MB in size, making it a good choice for embedded applications. It yielded an accuracy of 95% on the benchmark dataset, which is better than state-of-the-art feature-based approaches. The model's performance could be enhanced further through experimentation with a larger dataset.