Developing Soft Robotics To Assist Users With Hand-Object Interactions
For hand robots that assist with activities of daily living, this study presents a learning-based method that perceives user intentions for wearable robots from a first-person-view camera. We introduce a deep learning approach, the Vision-based Intention Detection network from the EgOcentric view (VIDEO-Net). VIDEO-Net takes sequential image inputs and outputs user intentions such as "grasping" and "releasing". To verify practical deployment, our approach is tested on both a spinal cord injury (SCI) patient and healthy subjects. This work is a collaboration with Prof. Kyu-Jin Cho's group at Seoul National University.
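The frame-to-intention pipeline described above can be sketched as a sliding window over an egocentric video stream followed by a classifier. The sketch below is illustrative only: the window length, the 16-D feature extractor, and the linear softmax head are placeholders I introduce for the example, not the actual VIDEO-Net architecture, which is a trained deep network described in the publication.

```python
# Minimal sketch of window-based intention classification from an
# egocentric stream. All model components here are placeholders.
from collections import deque
import numpy as np

INTENTIONS = ["rest", "grasping", "releasing"]
WINDOW = 8  # number of consecutive frames per classification (assumed)

rng = np.random.default_rng(0)
W = rng.normal(size=(len(INTENTIONS), 16))  # placeholder classifier weights

def extract_features(frame):
    """Placeholder for a learned visual encoder: 16-D feature per frame."""
    return np.resize(frame.astype(float).ravel(), 16) / 255.0

def classify(window_feats, weights):
    """Average features over the window, then apply a linear softmax head."""
    pooled = np.mean(window_feats, axis=0)
    logits = weights @ pooled
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return INTENTIONS[int(np.argmax(probs))], probs

# Simulated egocentric stream: classify once the window is full.
buffer = deque(maxlen=WINDOW)
for t in range(WINDOW):
    frame = rng.integers(0, 256, size=(32, 32))  # stand-in for a camera frame
    buffer.append(extract_features(frame))

label, probs = classify(np.stack(buffer), W)
print(label, probs.round(3))
```

The design point this sketch preserves is that intention is inferred from a short temporal window of vision alone, so the robot can act before any hand motion occurs.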

Related publications

1. D Kim, B B Kang, K B Kim, H Choi, J Ha, K-J Cho, S Jo, Eyes are faster than hands: a soft wearable robot learns user intention from the egocentric view, Science Robotics, 4, eaav2949, 2019 [LINK]