gms | German Medical Science

65th Annual Meeting of the German Association for Medical Informatics, Biometry and Epidemiology (GMDS), Meeting of the Central European Network (CEN: German Region, Austro-Swiss Region and Polish Region) of the International Biometric Society (IBS)

06.09. - 09.09.2020, Berlin (online conference)

Automated pain detection: Comparing the performance of multiple convolutional neural networks

Meeting Abstract

  • Michael Pantförder - Fraunhofer-Institut für Software- und Systemtechnik ISST, Dortmund, Germany
  • Joel Tokple - Technische Universität Dortmund, Dortmund, Germany

Deutsche Gesellschaft für Medizinische Informatik, Biometrie und Epidemiologie. 65th Annual Meeting of the German Association for Medical Informatics, Biometry and Epidemiology (GMDS), Meeting of the Central European Network (CEN: German Region, Austro-Swiss Region and Polish Region) of the International Biometric Society (IBS). Berlin, 06.-09.09.2020. Düsseldorf: German Medical Science GMS Publishing House; 2021. DocAbstr. 107

doi: 10.3205/20gmds186, urn:nbn:de:0183-20gmds1868

Published: February 26, 2021

© 2021 Pantförder et al.
This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 License. See license information at http://creativecommons.org/licenses/by/4.0/.


Text

Background: Communicatively or cognitively impaired individuals may not be able to express the presence of pain adequately, leading to potential underrecognition and undertreatment of pain [1], [2]. Examination of facial expressions has proven to be a valuable objective method to infer the existence of pain [3], but in a clinical setting this may not be practical for medical staff because of the complexity and time consumption of the task. Automated pain detection can address these issues. The objective of this work was to evaluate the role Convolutional Neural Networks (CNNs) can play in the development of an image-based method for pain detection.

Methods: The dataset utilized for training and testing CNN models was the UNBC-McMaster Shoulder Pain Expression Archive Database [4]. It contains 48,398 images, each of which depicts one of 25 individuals suffering from chronic shoulder pain. Each image was coded by certified FACS [4] coders and PSPI [4] (pain) levels were calculated subsequently. As the dataset was highly imbalanced, PSPI levels were merged into three classes: a no-pain class, a mild-pain class and a notable-pain class. The dataset was then split into a training and a test set by using all images depicting one specific individual for the test set and the remaining images for training. This approach allows models to be evaluated on their ability to detect pain in an individual not contained in the training set, while providing more data for training. To reduce irrelevant information in the images, everything surrounding an individual's face was cropped out using the Multi-task Cascaded Convolutional Network [5]. To increase the size of the dataset, images were augmented (horizontal flipping, rotation, and changes of brightness, contrast, saturation and hue). Five CNNs were trained for the task: two based on the AlexNet [6] architecture and three based on the ResNet [7] architecture. Transfer learning approaches were also used (ImageNet [8] and VGGFace2 [9]).
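The class merging and subject-exclusive split described above can be sketched as follows. The PSPI cut-offs used here are an assumption for illustration, as the abstract does not state the exact thresholds:

```python
# Sketch of the preprocessing described above. Records are assumed to be
# (subject_id, pspi_level) pairs; the PSPI thresholds for the three classes
# are an assumption, not taken from the abstract.

def pspi_to_class(pspi):
    """Map a PSPI pain level to one of three classes (assumed thresholds)."""
    if pspi == 0:
        return "no-pain"
    elif pspi <= 2:
        return "mild-pain"
    return "notable-pain"

def subject_exclusive_split(records, test_subject):
    """Hold out all images of one individual for testing; train on the rest."""
    train = [r for r in records if r[0] != test_subject]
    test = [r for r in records if r[0] == test_subject]
    return train, test

# Toy example: three subjects, five images.
records = [(1, 0), (1, 3), (2, 1), (2, 6), (3, 0)]
train, test = subject_exclusive_split(records, test_subject=3)
```

Splitting by subject rather than by image prevents the model from being evaluated on faces it has already seen during training.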

Results: All models had considerable difficulties distinguishing between no-pain and mild-pain. For the notable-pain class, the ResNet-50-ImageNet CNN model achieved promising results for clinical application, with good recall (0.81) and arguably acceptable precision (0.6), yielding an F1 score of 0.69. ResNet-50-VGGFace2 and AlexNet-plain (plain: no transfer learning) came close with F1 scores of 0.66 and 0.59, respectively, but did not perform well on recall (both 0.51; precision 0.93 and 0.7). ResNet-50-plain scored far worse with an F1 score of 0.5 (recall 0.38; precision 0.73), and AlexNet-ImageNet scored lowest, almost completely failing to detect signs of notable pain (F1 score 0.17; recall 0.09; precision 1).
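The reported F1 scores can be reproduced from the precision and recall values given above, since F1 is the harmonic mean of the two:

```python
# Recompute the F1 scores reported in the text from the stated
# precision and recall values for the notable-pain class.

def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# (precision, recall) pairs as reported in the Results section.
results = {
    "ResNet-50-ImageNet": (0.6, 0.81),
    "ResNet-50-VGGFace2": (0.93, 0.51),
    "AlexNet-plain": (0.7, 0.51),
    "ResNet-50-plain": (0.73, 0.38),
    "AlexNet-ImageNet": (1.0, 0.09),
}
for model, (p, r) in results.items():
    print(model, round(f1(p, r), 2))
# Rounded to two decimals this gives 0.69, 0.66, 0.59, 0.5 and 0.17,
# matching the F1 scores in the text.
```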

Conclusion: The results show that promising performance can be obtained even though only a small amount of data was available for training. It can therefore be concluded that CNNs have the potential to contribute significantly to facial-expression-based automated pain detection. The models may be improved by extending the transfer learning approaches and by cropping out even more of the background, preserving only the information contained in the oval shape of the face. Aligning all faces in the training and test sets should also be experimented with.
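The face alignment suggested above is commonly done by rotating each image so that the line through the eye centres becomes horizontal. A minimal sketch, assuming eye landmark coordinates are available (e.g. from the MTCNN detector already used for cropping; the coordinates below are made up for illustration):

```python
import math

def alignment_angle(left_eye, right_eye):
    """Angle (in degrees) by which to rotate an image so that the
    eyes lie on a horizontal line. Eyes are (x, y) pixel coordinates."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

# Hypothetical landmarks: the right eye sits 8 pixels lower than the left,
# so the face is slightly tilted and the image would be rotated by ~11.3 deg.
angle = alignment_angle((30, 40), (70, 48))
```

The rotation itself would then be applied with an image library such as Pillow or OpenCV; normalizing pose this way removes in-plane rotation as a source of variation the CNN would otherwise have to learn.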

The authors declare that they have no competing interests.

The authors declare that an ethics committee vote is not required.


References

1.
Achterberg WP, Pieper MJ, van Dalen-Kok AH, et al. Pain management in patients with dementia. Clin Interv Aging. 2013;8:1471-1482. DOI: 10.2147/CIA.S36739
2.
McGuire BE, Daly P, Smyth F. Chronic pain in people with an intellectual disability: under-recognised and under-treated? J Intellect Disabil Res. 2010;54(3):240-245. DOI: 10.1111/j.1365-2788.2010.01254.x
3.
Williams AC. Facial expression of pain: an evolutionary account. Behav Brain Sci. 2002;25(4):439-488. DOI: 10.1017/s0140525x02000080
4.
Lucey P, et al. Painful data: The UNBC-McMaster shoulder pain expression archive database. In: Face and Gesture 2011. IEEE; 2011. p. 57-64. DOI: 10.1109/FG.2011.5771462
5.
Zhang K, et al. Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Processing Letters. 2016;23(10):1499-1503.
6.
Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. In: Advances in neural information processing systems. 2012.
7.
He K, et al. Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.
8.
Deng J, et al. ImageNet: A large-scale hierarchical image database. In: 2009 IEEE conference on computer vision and pattern recognition. IEEE; 2009.
9.
Cao Q, et al. VGGFace2: A dataset for recognising faces across pose and age. In: 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018). IEEE; 2018.