Access Type

WSU Access

Date of Award

January 2025

Degree Type

Dissertation

Degree Name

Ph.D.

Department

Industrial and Manufacturing Engineering

First Advisor

Sara Masoud

Abstract

This dissertation addresses the critical challenge of enabling robots to understand and anticipate human behavior in shared workspaces, a cornerstone of Industry 5.0's human-centric manufacturing. The research developed and evaluated a Virtual Reality (VR)-driven framework for collecting high-fidelity human movement data and applying advanced neural network models to human intention recognition and trajectory prediction.

The study pursued three main objectives. First, an immersive VR platform was developed to capture detailed human movements in simulated manufacturing tasks. An unsupervised classification framework combining Dynamic Time Warping (DTW) and k-means clustering demonstrated the platform's utility for initial intention analysis, achieving an average accuracy of 85%. Second, supervised deep learning models were investigated for robust human intention recognition. Using VR-collected data, Convolutional Neural Networks (CNNs), hybrid CNN-Long Short-Term Memory (CNN-LSTM) networks, and CNN-Transformer models were trained. The CNN-Transformer model significantly outperformed the others, achieving near-perfect F1-scores (0.998) for classifying seven manufacturing-related activities and affirming the potential for high-accuracy intention recognition. Third, a framework for indoor human trajectory prediction was devised. Neural network models trained on diverse walking patterns were evaluated using Average Displacement Error (ADE) and Final Displacement Error (FDE); the CNN-LSTM and CNN-Transformer models demonstrated promising accuracy in forecasting future human positions, which is crucial for proactive robot navigation.

Key contributions include: (1) a validated VR-driven platform for human-robot interaction (HRI) research; (2) novel unsupervised and supervised learning frameworks for intention recognition, with exceptional performance from the CNN-Transformer model; (3) an effective neural network-based framework for human trajectory prediction; and (4) rich datasets and methodological insights for VR-based HRI studies. This work advances collaborative robot capabilities, fostering safer, more intuitive, and efficient human-robot partnerships. Future work will target real-world deployment, model generalization, and richer contextual understanding.
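To illustrate the kind of unsupervised pipeline the first objective describes, the sketch below clusters motion sequences with DTW-based k-means. It is a minimal sketch, not the dissertation's implementation: the tslearn library, the synthetic data, and every parameter choice (sequence length, feature count, number of clusters) are assumptions for illustration.

```python
# Minimal sketch: cluster motion sequences with DTW-based k-means.
# Assumes the tslearn library; data shapes and parameters are illustrative.
import numpy as np
from tslearn.clustering import TimeSeriesKMeans

rng = np.random.default_rng(0)

# Placeholder for VR-captured movement data:
# 60 sequences, 100 time steps, 3 coordinates (e.g., hand x/y/z).
sequences = rng.standard_normal((60, 100, 3)).cumsum(axis=1)

# k-means over Dynamic Time Warping distances; k=7 mirrors the
# seven manufacturing-related activities named in the abstract.
model = TimeSeriesKMeans(n_clusters=7, metric="dtw", random_state=0)
labels = model.fit_predict(sequences)

print(labels[:10])  # cluster index assigned to each movement sequence
```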
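For the second objective, a hybrid CNN-Transformer classifier might look like the PyTorch sketch below: 1-D convolutions extract local motion features, and a Transformer encoder models long-range temporal structure before a classification head. Layer sizes, depths, and the mean-pooling step are assumptions, not the dissertation's architecture.

```python
# Minimal sketch of a CNN-Transformer intention classifier in PyTorch.
# Layer sizes and pooling strategy are assumptions for illustration.
import torch
import torch.nn as nn

class CNNTransformerClassifier(nn.Module):
    def __init__(self, n_features=3, n_classes=7, d_model=64):
        super().__init__()
        # 1-D convolutions extract local motion patterns along time.
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, d_model, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(d_model, d_model, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # Transformer encoder captures long-range temporal dependencies.
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                    # x: (batch, time, features)
        z = self.conv(x.transpose(1, 2)).transpose(1, 2)
        z = self.encoder(z)                  # (batch, time, d_model)
        return self.head(z.mean(dim=1))      # pool over time, then classify

model = CNNTransformerClassifier()
logits = model(torch.randn(8, 100, 3))  # 8 sequences, 100 steps, 3 coords
print(logits.shape)                     # torch.Size([8, 7])
```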
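The two trajectory-prediction metrics named in the third objective have standard definitions: ADE averages the L2 error over every predicted time step, while FDE measures the L2 error at the final step only. The sketch below computes both; the array shapes and example paths are illustrative assumptions.

```python
# ADE/FDE for trajectory prediction; array shapes are assumptions.
import numpy as np

def ade(pred, true):
    """Average Displacement Error; pred/true: (time_steps, 2) positions."""
    return np.linalg.norm(pred - true, axis=-1).mean()

def fde(pred, true):
    """Final Displacement Error: L2 distance at the last predicted step."""
    return np.linalg.norm(pred[-1] - true[-1])

true_path = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
pred_path = np.array([[0.0, 0.1], [1.0, 0.2], [2.0, 0.4]])
print(ade(pred_path, true_path))  # 0.2333... (mean of 0.1, 0.2, 0.4)
print(fde(pred_path, true_path))  # 0.4
```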
