Application of Deep Learning Models Based on EfficientDet and OpenPose in User-Oriented Motion Rehabilitation Robot Control

Authors

  • Mingxiu Sui, University of Iowa
  • Liheng Jiang, New York University
  • Tianyi Lyu, College of Engineering, Northeastern University
  • Han Wang, 1000 Elements Way, Irvine
  • Li Zhou, Faculty of Management, McGill University
  • Peiyuan Chen, School of Electrical Engineering and Computer Science, Oregon State University
  • Ammar Alhosain, Department of Public Health and Community Medicine, Sahlgrenska Academy, University of Gothenburg

Keywords:

Computer Vision, Deep Learning, Robot, Rehabilitation Therapies, EfficientDet

Abstract

This study addresses critical challenges in rehabilitation robotics, specifically environmental adaptability, precision in action recognition, and personalized patient care. We introduce the EfficientDet-OpenPose-DRL network, a novel integration of EfficientDet for accurate human and object detection, OpenPose for precise motion tracking, and Deep Reinforcement Learning (DRL) for optimizing rehabilitation strategies. The main contribution of this article is this integrated model, which enhances environmental perception and action recognition while improving adaptive decision-making in real-time rehabilitation scenarios, enabling personalized, adaptable rehabilitation through advanced computer vision and deep learning techniques. Experimental results demonstrate significant improvements over existing methods, offering enhanced precision, safety, and patient-specific rehabilitation outcomes. This work contributes to the advancement of human-centric rehabilitation technologies, paving the way for more effective and interactive healthcare solutions.
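To make the pipeline described in the abstract concrete, the sketch below shows one plausible perception-to-action cycle: EfficientDet locates the patient, OpenPose extracts body keypoints from the patient crop, and a DRL policy maps the resulting pose state to robot commands. This is a minimal illustration, not the paper's implementation; the wrapper classes (EfficientDetDetector, OpenPoseEstimator, DRLPolicy) and the flattened-keypoint state representation are assumptions introduced here for clarity.

```python
import numpy as np

# Hypothetical wrappers standing in for the pretrained components named in the
# abstract; the paper's actual models, weights, and interfaces are not shown.
class EfficientDetDetector:
    def detect(self, frame: np.ndarray) -> list:
        """Return bounding boxes (x1, y1, x2, y2) for the patient and nearby objects."""
        raise NotImplementedError("load a pretrained EfficientDet here")

class OpenPoseEstimator:
    def keypoints(self, crop: np.ndarray) -> np.ndarray:
        """Return an (N, 2) array of body keypoints for one person crop."""
        raise NotImplementedError("load a pretrained OpenPose here")

class DRLPolicy:
    def select_action(self, state: np.ndarray) -> np.ndarray:
        """Map a pose-derived state vector to robot control commands."""
        raise NotImplementedError("load a trained DRL policy here")

def control_step(frame, detector, pose, policy):
    """One perception-to-action cycle: detect -> track pose -> choose robot action."""
    boxes = detector.detect(frame)
    if not boxes:
        return None                           # no patient visible; hold current posture
    x1, y1, x2, y2 = boxes[0]                 # assume the first box is the patient
    kps = pose.keypoints(frame[y1:y2, x1:x2]) # keypoints within the patient crop
    state = kps.flatten().astype(np.float32)  # simple state: flattened keypoints
    return policy.select_action(state)
```

In a full system this loop would run per camera frame, with the DRL policy trained offline or online against a reward reflecting rehabilitation progress and safety constraints, as the abstract suggests.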

Published

2024-10-21

How to Cite

Application of Deep Learning Models Based on EfficientDet and OpenPose in User-Oriented Motion Rehabilitation Robot Control. (2024). Journal of Intelligence Technology and Innovation, 2(3), 47-77. https://itip-submit.com/index.php/JITI/article/view/68