You searched for id:"oai:etd.ohiolink.edu:osu152207324664654". One record found.

The Ohio State University

1. Li, Ying. Efficient and Robust Video Understanding for Human-robot Interaction and Detection.

Degree: PhD, Electrical and Computer Engineering, 2018, The Ohio State University

Video understanding accomplishes various tasks that are fundamental to human-robot interaction and detection, including object tracking, action recognition, object detection, and segmentation. However, due to the large data volume of video sequences and the high complexity of visual algorithms, most visual algorithms sacrifice robustness to maintain high efficiency, especially in real-time applications. Achieving high robustness together with high efficiency in video understanding is challenging. In this dissertation, we explore efficient and robust video understanding for human-robot interaction and detection. Two important applications are health-risky behavior detection and human tracking for human-following robots. As a large portion of the world's population approaches old age, an increasing number of healthcare issues arise from unsafe abnormal behaviors such as falling and staggering. A system that can detect health-risky abnormal behavior of the elderly is thus of significant importance. In order to detect abnormal behavior with high accuracy and timely response, visual action recognition is explored and integrated with inertial-sensor-based behavior detection. The inertial-sensor-based detection is combined with a visual behavior detection algorithm not only to choose a small volume of the video sequence but also to provide a likelihood guide for different behaviors. The system works in a trigger-verification manner: an elder-carried mobile device, either a dedicated design or a smartphone, equipped with an inertial sensor is used to trigger the selection of relevant video data. The selected data is then fed into a visual verification module; in this way, selective utilization of video data is achieved and efficiency is guaranteed. By using selected data, the system can perform more complex visual analysis and achieve higher accuracy.
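The trigger-verification idea above can be illustrated with a minimal sketch. All function names, the acceleration threshold, and the frame-rate mapping below are illustrative assumptions, not the dissertation's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

ACC_THRESHOLD = 2.5  # assumed acceleration magnitude (in g) that flags a candidate event

def inertial_trigger(acc_samples):
    """Return indices of inertial samples whose acceleration magnitude exceeds
    the threshold -- these time stamps trigger selection of video data."""
    mags = np.linalg.norm(acc_samples, axis=1)
    return np.flatnonzero(mags > ACC_THRESHOLD)

def select_video_segment(frames, trigger_idx, fps_ratio=3, window=15):
    """Map an inertial trigger index to a short window of video frames,
    so only a small volume of the sequence reaches the visual module."""
    center = trigger_idx * fps_ratio
    lo, hi = max(0, center - window), min(len(frames), center + window)
    return frames[lo:hi]

def visual_verify(segment):
    """Stand-in for the visual action-recognition verifier; a real system
    would run a recognition model on the selected frames."""
    return len(segment) > 0

# Simulated data: 100 tri-axial accelerometer samples with one injected
# fall-like spike, and 300 dummy video "frames".
acc = rng.normal(0, 0.3, size=(100, 3))
acc[40] = [0.0, 0.0, 4.0]
frames = list(range(300))

triggers = inertial_trigger(acc)
for t in triggers:
    segment = select_video_segment(frames, t)
    print(int(t), visual_verify(segment))
```

The inertial stage is cheap and runs continuously; the expensive visual stage only runs on the short segments the trigger selects, which is where the efficiency gain comes from.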
A novel approach for robust human tracking by robots is also proposed. To ensure a close distance between the human and the robot in human-robot interaction, we propose to track part of the human body, particularly the human feet. Since the feet are two closely located objects with similar appearance, it is challenging to track both of them while maintaining high accuracy and robustness. An adaptive model of the human walking pattern is formulated to use natural human-body information to guide tracking of the target. By decomposing foot motion into local and global motions, a locomotion model is proposed. This model is integrated into an existing tracking algorithm, such as particle filtering, to improve accuracy and efficiency. Apart from the locomotion model, a phase-labeled exemplar pool, which associates a motion phase with foot appearance, is built to improve tracking performance. Human-robot interaction in a critical environment, specifically the nuclear environment, is also studied. In a nuclear environment, due to radiation damage, the… Advisors/Committee Members: Zheng, Yuan (Advisor).
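The local/global decomposition driving a particle filter can be sketched in one dimension. The dynamics, noise levels, and stride parameters below are illustrative assumptions standing in for the dissertation's locomotion model:

```python
import numpy as np

rng = np.random.default_rng(1)

def locomotion_model(particles, t, body_speed=1.0, stride_amp=0.5, freq=2.0, dt=0.05):
    """Propagate particles with a global component (forward body motion)
    plus a local component (periodic foot swing), as in the decomposition."""
    global_step = body_speed * dt
    local_step = stride_amp * np.sin(2 * np.pi * freq * t * dt) * dt
    noise = rng.normal(0, 0.02, size=particles.shape)
    return particles + global_step + local_step + noise

def update_weights(particles, observation, sigma=0.1):
    """Weight particles by the Gaussian likelihood of the observed foot position."""
    w = np.exp(-0.5 * ((particles - observation) / sigma) ** 2)
    return w / w.sum()

def resample(particles, weights):
    """Multinomial resampling: duplicate likely particles, drop unlikely ones."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

# Track a simulated foot whose true path follows the same decomposition.
particles = rng.normal(0, 0.1, size=200)
true_pos = 0.0
for t in range(50):
    true_pos += 1.0 * 0.05 + 0.5 * np.sin(2 * np.pi * 2.0 * t * 0.05) * 0.05
    obs = true_pos + rng.normal(0, 0.05)   # noisy foot-position measurement
    particles = locomotion_model(particles, t)
    weights = update_weights(particles, obs)
    particles = resample(particles, weights)

estimate = particles.mean()
print(abs(estimate - true_pos) < 0.3)
```

Because the motion prior encodes the walking pattern, particles are propagated toward where the foot is expected next, which is the mechanism by which the locomotion model improves a generic particle filter's accuracy and efficiency.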

Subjects/Keywords: Computer Engineering; Computer Science; video understanding; action recognition; object tracking; human-robot interaction



APA (6th Edition):

Li, Y. (2018). Efficient and Robust Video Understanding for Human-robot Interaction and Detection. (Doctoral Dissertation). The Ohio State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=osu152207324664654

Chicago Manual of Style (16th Edition):

Li, Ying. “Efficient and Robust Video Understanding for Human-robot Interaction and Detection.” 2018. Doctoral Dissertation, The Ohio State University. Accessed January 15, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu152207324664654.

MLA Handbook (7th Edition):

Li, Ying. “Efficient and Robust Video Understanding for Human-robot Interaction and Detection.” 2018. Web. 15 Jan 2019.

Vancouver:

Li Y. Efficient and Robust Video Understanding for Human-robot Interaction and Detection. [Internet] [Doctoral dissertation]. The Ohio State University; 2018. [cited 2019 Jan 15]. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu152207324664654.

Council of Science Editors:

Li Y. Efficient and Robust Video Understanding for Human-robot Interaction and Detection. [Doctoral Dissertation]. The Ohio State University; 2018. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu152207324664654
