DEEPLNET: A LIGHTWEIGHT CONVOLUTIONAL NEURAL NETWORK FOR REAL-TIME FACIAL EMOTION RECOGNITION

Authors

  • Khalil Ur Rahman, School of Computer Science and Software Engineering, Hohai University, Nanjing, China
  • Saba Yousha, Department of Information and Communication Technology, Mehran University of Engineering & Technology, Jamshoro, Sindh, Pakistan
  • Israr Ahmed, Department of Mathematics, Shah Abdul Latif University, Khairpur, Sindh, Pakistan
  • Abdul Razaq, M.Phil Scholar, Department of Computer Science, National College of Business Administration and Economics, Rahim Yar Khan Campus

DOI:

https://doi.org/10.71146/kjmr913

Keywords:

Deep Learning, CNN, Facial Emotion Recognition, Lightweight Model, Real-Time Processing

Abstract

This paper introduces DeepLNet, a lightweight convolutional neural network capable of recognizing facial emotions in real time with high accuracy and low computational cost. The model was trained and evaluated on a facial emotion dataset split into 70% training, 15% validation, and 15% testing data. DeepLNet achieved an overall accuracy of 92%, exceeding both a traditional CNN (85%) and a MobileNet-based model (89%). The evaluation metrics were balanced, with a precision of 91%, a recall of 90%, and an F1-score of 90.5%, indicating reliable classification. Confusion-matrix analysis showed the highest accuracy for happiness (94%) and surprise (93%), with slightly lower accuracy for fear (89%) and anger (88%). In terms of computational efficiency, the proposed model uses 2.8 million parameters versus 8.5 million in standard CNNs, while processing 35 frames per second (FPS), making it suitable for real-time applications. Performance under varying conditions showed an accuracy of 93% in controlled settings, 88% under low-light conditions, and 86% in multi-face scenes. These findings indicate that DeepLNet strikes a good balance between accuracy and efficiency and is therefore well suited for deployment on resource-limited devices such as mobile phones and embedded systems. The paper highlights the promise of lightweight deep learning models for real-time emotion recognition applications.
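The 70/15/15 split described in the abstract can be sketched as follows. This is a minimal illustration of the protocol, not the paper's code; the fixed shuffle seed and list-based dataset are assumptions for reproducibility of the example:

```python
import random

def split_dataset(samples, train=0.70, val=0.15, seed=42):
    """Shuffle and partition samples into train/validation/test subsets.

    The remaining fraction (1 - train - val) becomes the test set,
    matching the 70/15/15 protocol described in the abstract.
    """
    items = list(samples)
    random.Random(seed).shuffle(items)
    n = len(items)
    # round() avoids floating-point truncation (e.g. int(1000 * 0.70) == 699)
    n_train = round(n * train)
    n_val = round(n * val)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

train_set, val_set, test_set = split_dataset(range(1000))
# 700 / 150 / 150 samples
```

In practice a stratified split (equal emotion-class proportions in each subset) is often preferred for imbalanced emotion datasets, though the abstract does not state which variant was used.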
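The reported F1-score is consistent with the stated precision and recall: F1 is their harmonic mean. A quick consistency check (not the paper's evaluation code):

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Using the abstract's reported values: precision 91%, recall 90%
f1 = f1_score(0.91, 0.90)
print(round(f1 * 100, 1))  # 90.5, matching the reported F1-score
```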
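Throughput figures like the 35 FPS cited above are typically measured by timing repeated inference over a stream of frames. A minimal sketch, assuming a stand-in `dummy_predict` function since the model itself is not reproduced here:

```python
import time

def measure_fps(predict, frames, warmup=5):
    """Average frames per second over a batch, after a short warm-up."""
    for frame in frames[:warmup]:
        predict(frame)  # warm-up runs are excluded from timing
    start = time.perf_counter()
    for frame in frames[warmup:]:
        predict(frame)
    elapsed = time.perf_counter() - start
    return (len(frames) - warmup) / elapsed

# Hypothetical stand-in for the real model's per-frame inference call.
def dummy_predict(frame):
    return "happy"

fps = measure_fps(dummy_predict, list(range(105)))
```

Warm-up iterations matter on real hardware because the first few inferences often include one-off costs (cache fills, GPU kernel compilation) that would deflate the measured rate.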

Published

2026-05-07

Section

Engineering and Technology

How to Cite

DEEPLNET: A LIGHTWEIGHT CONVOLUTIONAL NEURAL NETWORK FOR REAL-TIME FACIAL EMOTION RECOGNITION. (2026). Kashf Journal of Multidisciplinary Research, 3(05). https://doi.org/10.71146/kjmr913