
You searched for +publisher:"Purdue University" +contributor:("Juan Wachs"). Showing records 1 – 2 of 2 total matches.

No search limiters apply to these results.

▼ Search Limiters


Purdue University

1. Li, Yu-Ting. Embodied interaction with visualization and spatial navigation in time-sensitive scenarios.

Degree: PhD, Industrial Engineering, 2014, Purdue University

To paraphrase the theory of embodied cognition, all aspects of our cognition are determined primarily by contextual information and by the means of physical interaction with data and information. In hybrid human-machine systems involving complex decision making, continuously maintaining a high level of attention, while employing a deep understanding of the task performed as well as its context, is essential. According to the theory of embodied cognition proposed by Lakoff, utilizing embodied interaction with machines has the potential to promote thinking and learning. Additionally, a hybrid human-machine system utilizing natural and intuitive communication channels (e.g., gestures, speech, and body stances) should afford an array of cognitive benefits outstripping more static forms of interaction (e.g., a computer keyboard). This research proposes such a computational framework based on a Bayesian approach; the framework infers the operator's focus of attention from the operator's physical expressions. Specifically, this work aims to assess the effect of embodied interaction on attention during the solution of complex, time-sensitive, spatial navigational problems. Toward the goal of assessing the level of the operator's attention, we present a method linking the operator's interaction utility, inference, and reasoning. The level of attention was inferred through networks called Bayesian Attentional Networks (BANs). BANs are structures describing cause-effect relationships among the operator's attention, physical actions, and decision-making. The proposed framework also generated a representative BAN, called the Consensus (Majority) Model (CMM); the CMM consists of an iteratively derived graph agreed upon among candidate BANs obtained from experts and from an automatic learning process. Finally, the best combinations of interaction modalities and feedback were determined by the use of particular utility functions.
This methodology was applied to a spatial navigational scenario in which the operators interacted with dynamic images through a series of decision-making processes. Real-world experiments were conducted to assess the framework's ability to infer the operator's level of attention. Users were instructed to complete a series of spatial-navigational tasks using an assigned pairing of an interaction modality out of five categories (vision-based gesture, glove-based gesture, speech, feet, or body balance) and a feedback modality out of two (visual or auditory). Experimental results confirmed that physical expressions are a determining factor in the quality of the solutions to a spatial navigational problem. Moreover, it was found that the combination of foot gestures with visual feedback resulted in the best task performance (p < .001). Results also showed that the embodied interaction-based multimodal interface decreased execution errors occurring in the cyber-physical scenarios (p < .001). Therefore we conclude that appropriate use… Advisors/Committee Members: Juan Wachs, Eugenio Culurciello, Shimon Nof, Brad Duerstock.
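The abstract describes BANs as cause-effect structures linking an operator's attention to observable physical actions. Purely as a hypothetical sketch of that idea (the network structure, variables, and probabilities below are invented for illustration and are not the dissertation's actual models), a minimal two-node Bayesian network can infer a posterior over attention from one observed action:

```python
# Minimal sketch of Bayesian inference over a two-node network:
# Attention -> Action. All probabilities are invented placeholders;
# the dissertation's actual BANs and parameters are not reproduced.

# Prior over the operator's attention level.
p_attention = {"high": 0.6, "low": 0.4}

# Likelihood of observing each action given the attention level.
p_action_given_attention = {
    "high": {"crisp": 0.8, "hesitant": 0.2},
    "low": {"crisp": 0.3, "hesitant": 0.7},
}

def posterior_attention(observed_action):
    """P(attention | action) via Bayes' rule with normalization."""
    joint = {
        a: p_attention[a] * p_action_given_attention[a][observed_action]
        for a in p_attention
    }
    z = sum(joint.values())  # normalizing constant P(action)
    return {a: v / z for a, v in joint.items()}

post = posterior_attention("crisp")  # e.g. a deliberate gesture
```

With these toy numbers, observing a "crisp" action raises the posterior probability of high attention from the 0.6 prior to 0.8; a richer BAN would chain several such conditional tables between attention, actions, and decisions.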

Subjects/Keywords: Computer Engineering; Engineering; Industrial Engineering



APA (6th Edition):

Li, Y. (2014). Embodied interaction with visualization and spatial navigation in time-sensitive scenarios. (Doctoral Dissertation). Purdue University. Retrieved from https://docs.lib.purdue.edu/open_access_dissertations/323

Chicago Manual of Style (16th Edition):

Li, Yu-Ting. “Embodied interaction with visualization and spatial navigation in time-sensitive scenarios.” 2014. Doctoral Dissertation, Purdue University. Accessed January 19, 2020. https://docs.lib.purdue.edu/open_access_dissertations/323.

MLA Handbook (7th Edition):

Li, Yu-Ting. “Embodied interaction with visualization and spatial navigation in time-sensitive scenarios.” 2014. Web. 19 Jan 2020.

Vancouver:

Li Y. Embodied interaction with visualization and spatial navigation in time-sensitive scenarios. [Internet] [Doctoral dissertation]. Purdue University; 2014. [cited 2020 Jan 19]. Available from: https://docs.lib.purdue.edu/open_access_dissertations/323.

Council of Science Editors:

Li Y. Embodied interaction with visualization and spatial navigation in time-sensitive scenarios. [Doctoral Dissertation]. Purdue University; 2014. Available from: https://docs.lib.purdue.edu/open_access_dissertations/323


Purdue University

2. Yeum, Chul Min. Computer vision-based structural assessment exploiting large volumes of images.

Degree: PhD, Civil Engineering, 2016, Purdue University

Visual assessment is a process of understanding the state of a structure based on evaluations originating from visual information. Recent advances in computer vision, exploring new sensors, sensing platforms, and high-performance computing, have shed light on the potential of vision-based visual assessment for civil engineering structures. The use of low-cost, high-resolution visual sensors in conjunction with mobile and aerial platforms can overcome the spatial and temporal limitations typically associated with other forms of sensing in civil structures. GPU-accelerated and parallel computing also offer unprecedented speed and performance, accelerating the processing of the collected visual data. However, despite the enormous effort in past research to implement such technologies, many practical challenges remain before these techniques can be successfully applied in real-world situations. A major challenge lies in dealing with a large volume of unordered and complex visual data, collected under uncontrolled circumstances (e.g., lighting, cluttered regions, and variations in environmental conditions), when only a tiny fraction of the data is useful for conducting the actual assessment. This difficulty induces an undesirably high rate of false-positive and false-negative errors, reducing the trustworthiness and efficiency of implementation. To overcome the inherent challenges in using such images for visual assessment, high-level computer vision algorithms must be integrated with relevant prior knowledge and guidance, aiming for performance similar to that of humans conducting visual assessment. Moreover, the techniques must be developed and validated in the realistic context of a large volume of real-world images, which is likely to contain numerous practical challenges.
In this dissertation, the novel use of computer vision algorithms is explored to address two promising applications of vision-based visual assessment in civil engineering: visual inspection, and visual data analysis for post-disaster evaluation. For both applications, powerful techniques are developed to enable reliable and efficient visual assessment of civil structures, and they are demonstrated using a large volume of real-world images collected from actual structures. State-of-the-art computer vision techniques, such as structure-from-motion and convolutional neural networks, facilitate these tasks. The core techniques derived from this study are scalable and expandable to many other applications in vision-based visual assessment, and will serve to close the existing gaps between past research efforts and real-world implementations. Advisors/Committee Members: Shirley J. Dyke, Bedrich Benes, Zygmunt Pizlo, Santiago Pujol, Julio A. Ramirez, Juan Wachs.
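The core difficulty stated above, keeping the tiny useful fraction of a large unordered image set while controlling false-positive and false-negative rates, can be illustrated with a hypothetical triage step and the precision/recall it would be judged by. The scoring function, threshold, and image labels below are invented placeholders, not the dissertation's CNN-based method:

```python
# Hypothetical sketch: triaging a large unordered image collection so
# only likely-relevant images reach the expensive assessment stage.
# Scores, threshold, and labels are made up for illustration.

def triage(images, score_fn, threshold):
    """Keep only images whose relevance score clears the threshold."""
    return [img for img in images if score_fn(img) >= threshold]

def precision_recall(selected, relevant):
    """False-positive / false-negative trade-off as precision and recall."""
    selected, relevant = set(selected), set(relevant)
    tp = len(selected & relevant)  # true positives
    precision = tp / len(selected) if selected else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    return precision, recall

# Toy data: six image ids with invented relevance scores;
# only two are actually relevant to the assessment task.
scores = {"img1": 0.9, "img2": 0.1, "img3": 0.7,
          "img4": 0.2, "img5": 0.4, "img6": 0.05}
relevant = {"img1", "img3"}

kept = triage(scores, scores.get, threshold=0.3)
p, r = precision_recall(kept, relevant)  # one false positive survives
```

Lowering the threshold raises recall (fewer useful images discarded) at the cost of precision (more false positives passed along), which is exactly the trade-off the abstract says prior knowledge and guidance are meant to improve.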

Subjects/Keywords: Applied sciences; Computer vision; Convolutional neural network; Multi-view geometry; Structure-from-motion; Visual assessment; Civil Engineering



APA (6th Edition):

Yeum, C. M. (2016). Computer vision-based structural assessment exploiting large volumes of images. (Doctoral Dissertation). Purdue University. Retrieved from https://docs.lib.purdue.edu/open_access_dissertations/1036

Chicago Manual of Style (16th Edition):

Yeum, Chul Min. “Computer vision-based structural assessment exploiting large volumes of images.” 2016. Doctoral Dissertation, Purdue University. Accessed January 19, 2020. https://docs.lib.purdue.edu/open_access_dissertations/1036.

MLA Handbook (7th Edition):

Yeum, Chul Min. “Computer vision-based structural assessment exploiting large volumes of images.” 2016. Web. 19 Jan 2020.

Vancouver:

Yeum CM. Computer vision-based structural assessment exploiting large volumes of images. [Internet] [Doctoral dissertation]. Purdue University; 2016. [cited 2020 Jan 19]. Available from: https://docs.lib.purdue.edu/open_access_dissertations/1036.

Council of Science Editors:

Yeum CM. Computer vision-based structural assessment exploiting large volumes of images. [Doctoral Dissertation]. Purdue University; 2016. Available from: https://docs.lib.purdue.edu/open_access_dissertations/1036
