A novel dynamic gesture understanding algorithm fusing convolutional neural networks with hand-crafted features
Liu, Yanhong; Song, Shouan; Yang, Lei; Bian, Guibin; Yu, Hongnian
Abstract
Dynamic gestures have attracted much attention in recent years due to their user-friendly interactive characteristics. However, accurate and efficient dynamic gesture understanding remains challenging because of complex scenarios and motion information. Conventional handcrafted features are computationally cheap but can only extract low-level image features, which leads to performance degradation in complex scenes. In contrast, deep learning-based methods have a stronger feature expression ability and can capture more abstract, high-level image features, but they rely critically on large amounts of training data. To address these issues, a novel dynamic gesture understanding algorithm based on feature fusion is proposed for accurate dynamic gesture prediction, leveraging the complementary advantages of handcrafted features and transfer learning. To cope with small-scale dynamic gesture data, transfer learning is introduced to capture effective feature expressions. To precisely model the critical temporal information of dynamic gestures, a novel feature descriptor is proposed for effective feature expression in both the spatial and temporal domains. On this basis, a decision-level fusion framework based on the support vector machine (SVM) and Dempster–Shafer (DS) evidence theory is constructed to combine handcrafted and deep features and realize high-precision dynamic gesture understanding. To verify the effectiveness and robustness of the proposed algorithm, analysis and comparison experiments are performed on the public Cambridge gesture dataset and the Northwestern University hand gesture dataset, on which the proposed algorithm achieves prediction accuracies of 99.50% and 96.97%, respectively. Experimental results show that the proposed framework exhibits better recognition performance than related prediction algorithms.
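For readers unfamiliar with decision-level fusion via DS evidence theory, the following minimal Python sketch illustrates how two classifiers' per-class probability outputs can be combined with Dempster's rule when mass is assigned only to singleton class hypotheses. The function name, the example probabilities, and the direct use of SVM class probabilities as mass functions are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def dempster_combine(m1: np.ndarray, m2: np.ndarray) -> np.ndarray:
    """Combine two mass vectors over singleton class hypotheses
    with Dempster's rule. With masses restricted to singletons,
    two focal elements intersect only when they are the same class,
    so the combined mass is the normalized element-wise product."""
    joint = m1 * m2                  # agreement mass on each class
    conflict = 1.0 - joint.sum()     # K: total conflicting mass
    if np.isclose(conflict, 1.0):
        raise ValueError("sources are in total conflict")
    return joint / (1.0 - conflict)  # renormalize by 1 - K

# Hypothetical per-class probabilities from two SVM branches,
# e.g. one on transfer-learned CNN features, one on handcrafted features.
p_cnn = np.array([0.70, 0.20, 0.10])
p_hand = np.array([0.60, 0.30, 0.10])

fused = dempster_combine(p_cnn, p_hand)
print(fused, fused.argmax())  # fused belief and predicted class index
```

In this toy case, the two sources agree on class 0, so the fused belief for that class (about 0.86) exceeds either individual probability, which is the behavior that makes DS combination attractive for decision-level fusion of complementary feature streams.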
Citation
Liu, Y., Song, S., Yang, L., Bian, G., & Yu, H. (2022). A novel dynamic gesture understanding algorithm fusing convolutional neural networks with hand-crafted features. Journal of Visual Communication and Image Representation, 83, Article 103454. https://doi.org/10.1016/j.jvcir.2022.103454
| Journal Article Type | Article |
|---|---|
| Acceptance Date | Feb 5, 2022 |
| Online Publication Date | Feb 15, 2022 |
| Publication Date | Feb 2022 |
| Deposit Date | Feb 21, 2022 |
| Journal | Journal of Visual Communication and Image Representation |
| Print ISSN | 1047-3203 |
| Publisher | Elsevier |
| Peer Reviewed | Peer Reviewed |
| Volume | 83 |
| Article Number | 103454 |
| DOI | https://doi.org/10.1016/j.jvcir.2022.103454 |
| Keywords | Dynamic gesture understanding, Transfer learning, Feature fusion, Dempster–Shafer evidence theory, Support vector machine |
| Public URL | http://researchrepository.napier.ac.uk/Output/2847075 |