You searched for subject:(Human Robot Interaction)
Showing records 1 – 30 of 383 total matches.

Mississippi State University
1.
Sun, Yu-Wei.
Do people change their behavior when the handler is next to the robot?.
Degree: MS, Industrial and Systems Engineering, 2018, Mississippi State University
URL: http://sun.library.msstate.edu/ETD-db/theses/available/etd-02152018-152217/
▼ It is increasingly common for people to work alongside robots in a variety of situations. When a robot is completing a task, the handler of the robot may be present. It is important to know how people interact with the robot when the handler is next to the robot. Our study focuses on whether the handler’s presence can affect humans’ behavior toward the robot. Our experiment targets two different scenarios (handler present and handler absent) in order to find out how humans’ behavior toward the robot changes. Results show that in the handler-present scenario, people are less willing to interact with the robot. However, when people do interact with the robot, they tend to interact with both the handler and the robot. This suggests that researchers should consider the presence of a handler when designing for human-robot interactions.
Advisors/Committee Members: Lesley Strawderman, Ph.D., P.E. (chair).
Subjects/Keywords: Robot; Robotic; Human-robot interaction
APA (6th Edition):
Sun, Y. (2018). Do people change their behavior when the handler is next to the robot?. (Masters Thesis). Mississippi State University. Retrieved from http://sun.library.msstate.edu/ETD-db/theses/available/etd-02152018-152217/
Chicago Manual of Style (16th Edition):
Sun, Yu-Wei. “Do people change their behavior when the handler is next to the robot?.” 2018. Masters Thesis, Mississippi State University. Accessed December 14, 2019.
http://sun.library.msstate.edu/ETD-db/theses/available/etd-02152018-152217/.
MLA Handbook (7th Edition):
Sun, Yu-Wei. “Do people change their behavior when the handler is next to the robot?.” 2018. Web. 14 Dec 2019.
Vancouver:
Sun Y. Do people change their behavior when the handler is next to the robot?. [Internet] [Masters thesis]. Mississippi State University; 2018. [cited 2019 Dec 14].
Available from: http://sun.library.msstate.edu/ETD-db/theses/available/etd-02152018-152217/.
Council of Science Editors:
Sun Y. Do people change their behavior when the handler is next to the robot?. [Masters Thesis]. Mississippi State University; 2018. Available from: http://sun.library.msstate.edu/ETD-db/theses/available/etd-02152018-152217/

University of Manitoba
2.
Sanoubari, Elaheh.
A Machiavellian robot in the wild, exploiting the culture of passersby to gain more help.
Degree: Computer Science, 2018, University of Manitoba
URL: http://hdl.handle.net/1993/33679
▼ Robots are entering public spaces where they use social techniques to interact with people. Robots can nowadays be found in public spaces such as airports, shopping malls, museums, or hospitals, where they interact with the general public. As these social entities are sharing people’s personal spaces and influencing their perceptions and actions, we must consider how they interact with people.
The impact of a robot’s interaction on a person is mediated by many factors, including personal difference and interaction context (Young et al., 2011). For both of these factors, the cultural background of the person is a particularly important component. Culture is deeply intertwined with all aspects of our social behaviors and impacts how we perceive our day-to-day interactions. As such, social robots can use culturally-appropriate language to improve how they are perceived by human users (Wang et al., 2010).
In this work, we investigated whether a robot can use social techniques to adapt to people in order to get more help from them. More specifically, we investigated whether a robot can do so by exploiting knowledge of a person’s culture. We conducted an in-the-wild experiment to investigate whether a robot adapting to a passerby’s culture would affect how much help it can get from them. The results of this study indicate a significant increase in the duration of help when the robot adapts to match a passerby’s culture compared to when it mismatches.
The results of this experiment contribute to the design techniques of adaptive social robots by showing that it is possible for an agent to influence users’ actions by adapting to them. However, as this adaptation can happen without a person’s explicit knowledge, it is ethically questionable. By providing this proof of concept, our experiment sheds light on the discussions of the ethical aspects of robots interacting with humans in social contexts. Furthermore, we present the study design used for this experiment as a template for in-the-wild studies with cold-calling robots. We propose that researchers can use this template as a starting point and modify it for conducting their own similar robot in-the-wild research.
Advisors/Committee Members: Young, James (Computer Science) (supervisor), Bunt, Andrea (Computer Science) (examiningcommittee), Loureiro-Rodríguez, Veronica (Linguistics) (examiningcommittee).
Subjects/Keywords: Human-robot interaction
APA (6th Edition):
Sanoubari, E. (2018). A Machiavellian robot in the wild, exploiting the culture of passersby to gain more help. (Masters Thesis). University of Manitoba. Retrieved from http://hdl.handle.net/1993/33679
Chicago Manual of Style (16th Edition):
Sanoubari, Elaheh. “A Machiavellian robot in the wild, exploiting the culture of passersby to gain more help.” 2018. Masters Thesis, University of Manitoba. Accessed December 14, 2019.
http://hdl.handle.net/1993/33679.
MLA Handbook (7th Edition):
Sanoubari, Elaheh. “A Machiavellian robot in the wild, exploiting the culture of passersby to gain more help.” 2018. Web. 14 Dec 2019.
Vancouver:
Sanoubari E. A Machiavellian robot in the wild, exploiting the culture of passersby to gain more help. [Internet] [Masters thesis]. University of Manitoba; 2018. [cited 2019 Dec 14].
Available from: http://hdl.handle.net/1993/33679.
Council of Science Editors:
Sanoubari E. A Machiavellian robot in the wild, exploiting the culture of passersby to gain more help. [Masters Thesis]. University of Manitoba; 2018. Available from: http://hdl.handle.net/1993/33679

Delft University of Technology
3.
Jonkman, J.A.
Detecting Human Intention in Physical human robot interaction:.
Degree: 2009, Delft University of Technology
URL: http://resolver.tudelft.nl/uuid:e86d12c4-3185-499b-b1f5-2767086ca45d
▼ Human assist robotic systems cooperate with humans to perform a task. In human power amplifying systems, human and robot interact physically and the robotic arm augments the force of the human, while he still perceives a part of the mass of the controlled object. Human skill and robotic power are combined into one system, and due to the human control a flexible robotic system is obtained.
This research aims at detecting human intention in static physical human robot interaction by observing the changes in manipulator joint torques due to human interference, and translating these into an end point force vector called “human intention”. To avoid expensive 3D force sensors, an eddy current sensor technique is used to observe torque fluctuations in the joint actuators. This enables physical interaction from anywhere on the robot instead of solely at the tip, where the force sensor would be placed. The project includes designing a human-robot interface system in which the position controlled robotic manipulator physically interacts with its human controller by adapting to his intended movements.
Advisors/Committee Members: Erden, M.S., Tomiyama, T..
Subjects/Keywords: physical human robot interaction
APA (6th Edition):
Jonkman, J. A. (2009). Detecting Human Intention in Physical human robot interaction:. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:e86d12c4-3185-499b-b1f5-2767086ca45d
Chicago Manual of Style (16th Edition):
Jonkman, J A. “Detecting Human Intention in Physical human robot interaction:.” 2009. Masters Thesis, Delft University of Technology. Accessed December 14, 2019.
http://resolver.tudelft.nl/uuid:e86d12c4-3185-499b-b1f5-2767086ca45d.
MLA Handbook (7th Edition):
Jonkman, J A. “Detecting Human Intention in Physical human robot interaction:.” 2009. Web. 14 Dec 2019.
Vancouver:
Jonkman JA. Detecting Human Intention in Physical human robot interaction:. [Internet] [Masters thesis]. Delft University of Technology; 2009. [cited 2019 Dec 14].
Available from: http://resolver.tudelft.nl/uuid:e86d12c4-3185-499b-b1f5-2767086ca45d.
Council of Science Editors:
Jonkman JA. Detecting Human Intention in Physical human robot interaction:. [Masters Thesis]. Delft University of Technology; 2009. Available from: http://resolver.tudelft.nl/uuid:e86d12c4-3185-499b-b1f5-2767086ca45d

University of Illinois – Urbana-Champaign
4.
Jang Sher, Anum.
An embodied, platform-invariant architecture for robotic spatial commands.
Degree: MS, Mechanical Engineering, 2017, University of Illinois – Urbana-Champaign
URL: http://hdl.handle.net/2142/97491
▼ In contexts such as teleoperation, robot reprogramming, human-robot interaction, and neural prosthetics, conveying spatial commands to a robotic platform is often a limiting factor. Currently, many applications rely on joint-angle-by-joint-angle prescriptions. This inherently requires a large number of parameters to be specified by the user that scales with the number of degrees of freedom on a platform, creating high bandwidth requirements for interfaces. This thesis presents an efficient representation of high-level, spatial commands that specifies many joint angles with relatively few parameters based on a spatial architecture. To this end, an expressive command architecture is proposed that allows pose generation of simple motion primitives. In particular, a general method for labeling connected platform linkages, generating a databank of user-specified poses, and mapping between high-level spatial commands and specific platform static configurations are presented. Further, this architecture is platform-invariant: the same high-level, spatial command can have meaning on any platform. This has the particular advantage that our commands have meaning for human movers as well. In order to achieve this, we draw inspiration from Laban/Bartenieff Movement Studies, an embodied taxonomy for movement description. The final architecture is implemented for twenty-six spatial directions on a Rethink Robotics Baxter and an Aldebaran NAO. Two user studies have been conducted to validate the effectiveness of the proposed framework. Lastly, a workload metric is proposed to quantitatively assess the usability of a machine interface.
Advisors/Committee Members: LaViers, Amy (advisor).
Subjects/Keywords: Teleoperation; Human-robot interaction; Laban
APA (6th Edition):
Jang Sher, A. (2017). An embodied, platform-invariant architecture for robotic spatial commands. (Thesis). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/97491
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Jang Sher, Anum. “An embodied, platform-invariant architecture for robotic spatial commands.” 2017. Thesis, University of Illinois – Urbana-Champaign. Accessed December 14, 2019.
http://hdl.handle.net/2142/97491.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Jang Sher, Anum. “An embodied, platform-invariant architecture for robotic spatial commands.” 2017. Web. 14 Dec 2019.
Vancouver:
Jang Sher A. An embodied, platform-invariant architecture for robotic spatial commands. [Internet] [Thesis]. University of Illinois – Urbana-Champaign; 2017. [cited 2019 Dec 14].
Available from: http://hdl.handle.net/2142/97491.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Jang Sher A. An embodied, platform-invariant architecture for robotic spatial commands. [Thesis]. University of Illinois – Urbana-Champaign; 2017. Available from: http://hdl.handle.net/2142/97491
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

University of Manitoba
5.
Seo, Stela.
A simulated robot versus a real robot: an exploration of how robot embodiment impacts people's empathic responses.
Degree: Computer Science, 2015, University of Manitoba
URL: http://hdl.handle.net/1993/30248
▼ In designing and evaluating human-robot interactions and interfaces, researchers often use simulated robots because of the high cost of physical robots and the time required to program them. However, it is important to consider how interaction with a simulated robot differs from a real robot; that is, do simulated robots provide authentic interaction? We contribute to a growing body of work that explores this question and maps out simulated-versus-real differences, by explicitly investigating empathy: how people empathize with a physical or simulated robot when something bad happens to it. Empathy is particularly relevant to social human-robot interaction (HRI) and is integral to, e.g., companion and care robots.
To explore our question, we develop a convincing HRI scenario that induces people’s empathy toward a robot, and draw on psychology work for an empathy-measuring instrument. To formally evaluate our scenario and the empathy-measuring instrument in an HRI setting, we conduct a comparative user study: in one condition, participants experience the scenario which induces empathy, and in the other condition, we remove any empathy-inducing activities of the robot. With the validated scenario and empathy-measuring instrument, we conduct another user study to explore the difference between a real and a simulated robot in terms of people’s empathic response.
Our results suggest that people empathize more with a physical robot than a simulated one, a finding that has important implications for the generalizability and applicability of simulated HRI work. As part of our exploration, we additionally present an original and reproducible HRI experimental design to induce empathy toward robots, and experimentally validate an empathy-measuring instrument from psychology for use with HRI.
Advisors/Committee Members: Young, James E. (Computer Science) (supervisor), Hemmati, Hadi (Computer Science).
Subjects/Keywords: human-robot interaction; empathy
APA (6th Edition):
Seo, S. (2015). A simulated robot versus a real robot: an exploration of how robot embodiment impacts people's empathic responses. (Masters Thesis). University of Manitoba. Retrieved from http://hdl.handle.net/1993/30248
Chicago Manual of Style (16th Edition):
Seo, Stela. “A simulated robot versus a real robot: an exploration of how robot embodiment impacts people's empathic responses.” 2015. Masters Thesis, University of Manitoba. Accessed December 14, 2019.
http://hdl.handle.net/1993/30248.
MLA Handbook (7th Edition):
Seo, Stela. “A simulated robot versus a real robot: an exploration of how robot embodiment impacts people's empathic responses.” 2015. Web. 14 Dec 2019.
Vancouver:
Seo S. A simulated robot versus a real robot: an exploration of how robot embodiment impacts people's empathic responses. [Internet] [Masters thesis]. University of Manitoba; 2015. [cited 2019 Dec 14].
Available from: http://hdl.handle.net/1993/30248.
Council of Science Editors:
Seo S. A simulated robot versus a real robot: an exploration of how robot embodiment impacts people's empathic responses. [Masters Thesis]. University of Manitoba; 2015. Available from: http://hdl.handle.net/1993/30248

Florida Atlantic University
6.
Gonzalez Moya, Iker Javier.
A Collaborative Approach for Real-Time Measurements of Human Trust, Satisfaction and Frustration in Human-Robot Teaming.
Degree: MS, 2018, Florida Atlantic University
URL: http://fau.digital.flvc.org/islandora/object/fau:40720
▼ This thesis aims at real-time measurements of human trust, satisfaction, and frustration in human-robot teaming. Recent studies suggest that humans are inclined to have a negative attitude towards using autonomous systems. These findings elevate the necessity of conducting research to better understand the key factors that affect the levels of trust, satisfaction and frustration in Human-Robot Interaction (HRI). We utilized a new sequential and collaborative approach for HRI data collection that employed trust, satisfaction and frustration as the primary evaluative metrics. We also used haptic feedback through a soft actuator armband to help our human subjects control a robotic hand for grabbing or not grabbing an object during our interaction scenarios. Three experimental studies were conducted during our research, of which the first was related to the evaluation of the aforementioned metrics through a collaborative approach between the Baxter robot and human subjects. The second experiment embodied the evaluation of a newly fabricated 3D-finger for the I-Limb robotic hand through a nuclear-waste glove. The third experiment was based on the two previous studies and focused on real-time measurements of trust, satisfaction and frustration in human-robot teaming with the addition of pressure feedback to the system through soft actuators. In the last case, human subjects had more control over our robotic systems compared to earlier experiments, leading to a more collaborative interaction and teaming. The results of these experiments illustrated that human subjects can rebuild their trust and also increase their satisfaction levels while lowering their frustration levels after failures or any faulty behavior. Furthermore, our analyses showed that our methods are highly effective for collecting honest and genuine data from human subjects and lay the foundation for more-involved future research in the domain of human-robot teaming.
2018
Degree granted: Thesis (M.S.) – Florida Atlantic University, 2018.
Collection: FAU
Advisors/Committee Members: Nojoumian, Mehrdad (Thesis advisor), Florida Atlantic University (Degree grantor), College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science.
Subjects/Keywords: Human-robot interaction.; Haptic devices.
APA (6th Edition):
Gonzalez Moya, I. J. (2018). A Collaborative Approach for Real-Time Measurements of Human Trust, Satisfaction and Frustration in Human-Robot Teaming. (Masters Thesis). Florida Atlantic University. Retrieved from http://fau.digital.flvc.org/islandora/object/fau:40720
Chicago Manual of Style (16th Edition):
Gonzalez Moya, Iker Javier. “A Collaborative Approach for Real-Time Measurements of Human Trust, Satisfaction and Frustration in Human-Robot Teaming.” 2018. Masters Thesis, Florida Atlantic University. Accessed December 14, 2019.
http://fau.digital.flvc.org/islandora/object/fau:40720.
MLA Handbook (7th Edition):
Gonzalez Moya, Iker Javier. “A Collaborative Approach for Real-Time Measurements of Human Trust, Satisfaction and Frustration in Human-Robot Teaming.” 2018. Web. 14 Dec 2019.
Vancouver:
Gonzalez Moya IJ. A Collaborative Approach for Real-Time Measurements of Human Trust, Satisfaction and Frustration in Human-Robot Teaming. [Internet] [Masters thesis]. Florida Atlantic University; 2018. [cited 2019 Dec 14].
Available from: http://fau.digital.flvc.org/islandora/object/fau:40720.
Council of Science Editors:
Gonzalez Moya IJ. A Collaborative Approach for Real-Time Measurements of Human Trust, Satisfaction and Frustration in Human-Robot Teaming. [Masters Thesis]. Florida Atlantic University; 2018. Available from: http://fau.digital.flvc.org/islandora/object/fau:40720

Vanderbilt University
7.
Heard, Jamison.
An adaptive supervisory-based human-robot teaming architecture.
Degree: PhD, Electrical Engineering, 2019, Vanderbilt University
URL: http://etd.library.vanderbilt.edu/available/etd-08162019-171051/
▼ Changing the ways that robots interact with humans in uncertain, dynamic, and high-intensity environments (e.g., a NASA control room) is needed in order to realize effective human-robot teams. Dynamic domains require innovative human-robot teaming methodologies, which are adaptive in nature, due to varying task demands. These methodologies require mechanisms that can drive the robot’s interactions, such that the robot provides valuable contributions to achieving the task, while appropriately interacting with, but not hindering, the human. The human’s complete workload state can be used to determine robot interactions that may augment team performance, due to the relationship between workload and task performance. This dissertation developed a workload assessment algorithm capable of estimating overall workload and each workload component (e.g., cognitive, auditory, visual, speech, and physical) in order to provide meaningful information to an adaptive system. The developed algorithm estimated overall workload and each workload component accurately using data from two human-robot teaming evaluations: one peer-based and one supervisory-based. A non-stationary evaluation was conducted in order to validate the algorithm’s real-time capabilities. The workload assessment algorithm was incorporated into an adaptive human-robot teaming system architecture, which targeted adaptations towards a workload component. A pilot study demonstrated the adaptive system’s ability to improve task performance by adapting system autonomy and interactions.
Advisors/Committee Members: Alan Peters (committee member), Matthew Weinger (committee member), Terrence Fong (committee member), Julie A. Adams (chair), D. Mitchell Wilkes (committee member).
Subjects/Keywords: Human-Robot Interaction; Robotics
APA (6th Edition):
Heard, J. (2019). An adaptive supervisory-based human-robot teaming architecture. (Doctoral Dissertation). Vanderbilt University. Retrieved from http://etd.library.vanderbilt.edu/available/etd-08162019-171051/
Chicago Manual of Style (16th Edition):
Heard, Jamison. “An adaptive supervisory-based human-robot teaming architecture.” 2019. Doctoral Dissertation, Vanderbilt University. Accessed December 14, 2019.
http://etd.library.vanderbilt.edu/available/etd-08162019-171051/.
MLA Handbook (7th Edition):
Heard, Jamison. “An adaptive supervisory-based human-robot teaming architecture.” 2019. Web. 14 Dec 2019.
Vancouver:
Heard J. An adaptive supervisory-based human-robot teaming architecture. [Internet] [Doctoral dissertation]. Vanderbilt University; 2019. [cited 2019 Dec 14].
Available from: http://etd.library.vanderbilt.edu/available/etd-08162019-171051/.
Council of Science Editors:
Heard J. An adaptive supervisory-based human-robot teaming architecture. [Doctoral Dissertation]. Vanderbilt University; 2019. Available from: http://etd.library.vanderbilt.edu/available/etd-08162019-171051/

University of Cambridge
8.
Burke, Michael Glen.
Fast upper body pose estimation for human-robot interaction.
Degree: PhD, 2015, University of Cambridge
URL: https://www.repository.cam.ac.uk/handle/1810/256305 ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.690883
▼ This work describes an upper body pose tracker that finds a 3D pose estimate using video sequences obtained from a monocular camera, with applications in human-robot interaction in mind. A novel mixture of Ornstein-Uhlenbeck processes model, trained in a reduced dimensional subspace and designed for analytical tractability, is introduced. This model acts as a collection of mean-reverting random walks that pull towards more commonly observed poses. Pose tracking using this model can be Rao-Blackwellised, allowing for computational efficiency while still incorporating bio-mechanical properties of the upper body. The model is used within a recursive Bayesian framework to provide reliable estimates of upper body pose when only a subset of body joints can be detected. Model training data can be extended through a retargeting process, and better pose coverage obtained through the use of Poisson disk sampling in the model training stage. Results on a number of test datasets show that the proposed approach provides pose estimation accuracy comparable with the state of the art in real time (30 fps) and can be extended to the multiple user case. As a motivating example, this work also introduces a pantomimic gesture recognition interface. Traditional approaches to gesture recognition for robot control make use of predefined codebooks of gestures, which are mapped directly to the robot behaviours they are intended to elicit. These gesture codewords are typically recognised using algorithms trained on multiple recordings of people performing the predefined gestures. Obtaining these recordings can be expensive and time consuming, and the codebook of gestures may not be particularly intuitive. This thesis presents arguments that pantomimic gestures, which mimic the intended robot behaviours directly, are potentially more intuitive, and proposes a transfer learning approach to recognition, where human hand gestures are mapped to recordings of robot behaviour by extracting temporal and spatial features that are inherently present in both pantomimed actions and robot behaviours. A Bayesian bias compensation scheme is introduced to compensate for potential classification bias in features. Results from a quadrotor behaviour selection problem show that good classification accuracy can be obtained when human hand gestures are recognised using behaviour recordings, and that classification using these behaviour recordings is more robust than using human hand recordings when users are allowed complete freedom over their choice of input gestures.
Subjects/Keywords: 629.8; Technology; robot; human-robot interaction
APA (6th Edition):
Burke, M. G. (2015). Fast upper body pose estimation for human-robot interaction. (Doctoral Dissertation). University of Cambridge. Retrieved from https://www.repository.cam.ac.uk/handle/1810/256305 ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.690883
Chicago Manual of Style (16th Edition):
Burke, Michael Glen. “Fast upper body pose estimation for human-robot interaction.” 2015. Doctoral Dissertation, University of Cambridge. Accessed December 14, 2019.
https://www.repository.cam.ac.uk/handle/1810/256305 ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.690883.
MLA Handbook (7th Edition):
Burke, Michael Glen. “Fast upper body pose estimation for human-robot interaction.” 2015. Web. 14 Dec 2019.
Vancouver:
Burke MG. Fast upper body pose estimation for human-robot interaction. [Internet] [Doctoral dissertation]. University of Cambridge; 2015. [cited 2019 Dec 14].
Available from: https://www.repository.cam.ac.uk/handle/1810/256305 ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.690883.
Council of Science Editors:
Burke MG. Fast upper body pose estimation for human-robot interaction. [Doctoral Dissertation]. University of Cambridge; 2015. Available from: https://www.repository.cam.ac.uk/handle/1810/256305 ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.690883

Mississippi State University
9.
Crumpton, Joseph John.
Use of vocal prosody to express emotions in robotic speech.
Degree: PhD, Computer Science and Engineering, 2015, Mississippi State University
URL: http://sun.library.msstate.edu/ETD-db/theses/available/etd-05182015-152915/
▼ Vocal prosody (pitch, timing, loudness, etc.) and its use to convey emotions are essential components of speech communication between humans. The objective of this dissertation research was to determine the efficacy of using varying vocal prosody in robotic speech to convey emotion. Two pilot studies and two experiments were performed to address the shortcomings of previous HRI research in this area.
The pilot studies were used to determine a set of vocal prosody modification values for a female voice model using the MARY speech synthesizer to convey the emotions: anger, fear, happiness, and sadness. Experiment 1 validated that participants perceived these emotions along with a neutral vocal prosody at rates significantly higher than chance. Four of the vocal prosodies (anger, fear, neutral, and sadness) were recognized at rates approaching the recognition rate (60%) of emotions in person-to-person speech.
During Experiment 2 the robot led participants through a creativity test while making statements using one of the validated emotional vocal prosodies. The ratings of the robot’s positive qualities and the creativity scores by the participant group that heard non-negative vocal prosodies (happiness, neutral) did not significantly differ from the ratings and scores of the participant group that heard the negative vocal prosodies (anger, fear, sadness). Therefore, Experiment 2 failed to show that the use of emotional vocal prosody in a robot’s speech influenced the participants’ appraisal of the robot or the participants’ performance on this specific task.
At this time robot designers and programmers should not expect that vocal prosody alone will have a significant impact on the acceptability or the quality of human-robot interactions. Further research is required to show that multi-modal (vocal prosody along with facial expressions, body language, or linguistic content) expressions of emotions by robots will be effective at improving human-robot interactions.
Advisors/Committee Members: Cindy L. Bethel (chair), Derek T. Anderson (committee member), J. Edward Swan II (committee member), Byron J. Williams (committee member).
Subjects/Keywords: human-robot interaction; robot; speech synthesizer
APA (6th Edition):
Crumpton, J. J. (2015). Use of vocal prosody to express emotions in robotic speech. (Doctoral Dissertation). Mississippi State University. Retrieved from http://sun.library.msstate.edu/ETD-db/theses/available/etd-05182015-152915/
Chicago Manual of Style (16th Edition):
Crumpton, Joseph John. “Use of vocal prosody to express emotions in robotic speech.” 2015. Doctoral Dissertation, Mississippi State University. Accessed December 14, 2019.
http://sun.library.msstate.edu/ETD-db/theses/available/etd-05182015-152915/.
MLA Handbook (7th Edition):
Crumpton, Joseph John. “Use of vocal prosody to express emotions in robotic speech.” 2015. Web. 14 Dec 2019.
Vancouver:
Crumpton JJ. Use of vocal prosody to express emotions in robotic speech. [Internet] [Doctoral dissertation]. Mississippi State University; 2015. [cited 2019 Dec 14].
Available from: http://sun.library.msstate.edu/ETD-db/theses/available/etd-05182015-152915/.
Council of Science Editors:
Crumpton JJ. Use of vocal prosody to express emotions in robotic speech. [Doctoral Dissertation]. Mississippi State University; 2015. Available from: http://sun.library.msstate.edu/ETD-db/theses/available/etd-05182015-152915/

Brigham Young University
10.
Ashcraft, C Chace.
Moderating Influence as a Design Principle for Human-Swarm Interaction.
Degree: MS, 2019, Brigham Young University
URL: https://scholarsarchive.byu.edu/cgi/viewcontent.cgi?article=8406&context=etd
▼ Robot swarms have recently become of interest in both industry and academia for their potential to perform various difficult or dangerous tasks efficiently. As real robot swarms become more of a possibility, many desire swarms to be controlled or directed by a human, which raises questions regarding how that should be done. Part of the challenge of human-swarm interaction is the difficulty of understanding swarm state and how to drive the swarm to produce emergent behaviors. Human input could inhibit desirable swarm behaviors if their input is poor and has sufficient influence over swarm agents, affecting its overall performance. Thus, with too little influence, human input is useless, but with too much, it can be destructive. We suggest that there is some middle level, or interval, of human influence that allows the swarm to take advantage of useful human input while minimizing the effect of destructive input. Further, we propose that human-swarm interaction schemes can be designed to maintain an appropriate level of human influence over the swarm and maintain or improve swarm performance in the presence of both useful and destructive human input. We test this theory by implementing a piece of software to dynamically moderate influence and then testing it with a simulated honey bee colony performing nest site selection, simulated human input, and actual human input via a user study. The results suggest that moderating influence, as suggested, is important for maintaining high performance in the presence of both useful and destructive human input. However, while our software seems to successfully moderate influence with simulated human input, it fails to do so with actual human input.
Subjects/Keywords: Human swarm interaction; human robot interaction; swarms; robot swarms
APA (6th Edition):
Ashcraft, C. C. (2019). Moderating Influence as a Design Principle for Human-Swarm Interaction. (Masters Thesis). Brigham Young University. Retrieved from https://scholarsarchive.byu.edu/cgi/viewcontent.cgi?article=8406&context=etd
Chicago Manual of Style (16th Edition):
Ashcraft, C Chace. “Moderating Influence as a Design Principle for Human-Swarm Interaction.” 2019. Masters Thesis, Brigham Young University. Accessed December 14, 2019.
https://scholarsarchive.byu.edu/cgi/viewcontent.cgi?article=8406&context=etd.
MLA Handbook (7th Edition):
Ashcraft, C Chace. “Moderating Influence as a Design Principle for Human-Swarm Interaction.” 2019. Web. 14 Dec 2019.
Vancouver:
Ashcraft CC. Moderating Influence as a Design Principle for Human-Swarm Interaction. [Internet] [Masters thesis]. Brigham Young University; 2019. [cited 2019 Dec 14].
Available from: https://scholarsarchive.byu.edu/cgi/viewcontent.cgi?article=8406&context=etd.
Council of Science Editors:
Ashcraft CC. Moderating Influence as a Design Principle for Human-Swarm Interaction. [Masters Thesis]. Brigham Young University; 2019. Available from: https://scholarsarchive.byu.edu/cgi/viewcontent.cgi?article=8406&context=etd

Rutgers University
11.
Liu, Jingjing, 1985-.
Exploiting multispectral and contextual information to improve human detection.
Degree: PhD, Computer Science, 2017, Rutgers University
URL: https://rucore.libraries.rutgers.edu/rutgers-lib/55564/
▼ Human detection has various applications, e.g., autonomous driving, surveillance systems, and retail. In this dissertation, we first exploit multispectral images (i.e., RGB and thermal images) for human detection. We extensively analyze Faster R-CNN for the detection task and then model multispectral human detection as a fusion problem of convolutional networks (ConvNets). We design four distinct ConvNet fusion architectures that integrate two-branch ConvNets at different stages of the neural network, all of which yield better performance compared with the baseline detector. In the second part of this dissertation, we leverage instance-level contextual information in crowded scenes to boost the performance of human detection. Based on a context graph that incorporates both geometric and social contextual patterns from crowds, we apply a progressive potential propagation algorithm to discover weak detections that are contextually compatible with true detections while suppressing irrelevant false alarms. The method significantly improves the performance of shallow human detectors, obtaining results comparable to deep learning based methods.
Advisors/Committee Members: Metaxas, Dimitris N. (chair), Bekris, Kostas (internal member), Yu, Jingjin (internal member), Ratha, Nalini K. (outside member), School of Graduate Studies.
Subjects/Keywords: Robotics – Human factors; Human-robot interaction
APA (6th Edition):
Liu, Jingjing, 1. (2017). Exploiting multispectral and contextual information to improve human detection. (Doctoral Dissertation). Rutgers University. Retrieved from https://rucore.libraries.rutgers.edu/rutgers-lib/55564/
Chicago Manual of Style (16th Edition):
Liu, Jingjing, 1985-. “Exploiting multispectral and contextual information to improve human detection.” 2017. Doctoral Dissertation, Rutgers University. Accessed December 14, 2019.
https://rucore.libraries.rutgers.edu/rutgers-lib/55564/.
MLA Handbook (7th Edition):
Liu, Jingjing, 1985-. “Exploiting multispectral and contextual information to improve human detection.” 2017. Web. 14 Dec 2019.
Vancouver:
Liu, Jingjing 1. Exploiting multispectral and contextual information to improve human detection. [Internet] [Doctoral dissertation]. Rutgers University; 2017. [cited 2019 Dec 14].
Available from: https://rucore.libraries.rutgers.edu/rutgers-lib/55564/.
Council of Science Editors:
Liu, Jingjing 1. Exploiting multispectral and contextual information to improve human detection. [Doctoral Dissertation]. Rutgers University; 2017. Available from: https://rucore.libraries.rutgers.edu/rutgers-lib/55564/

Université Montpellier II
12.
Bussy, Antoine.
Approche cognitive pour la représentation de l’interaction proximale haptique entre un homme et un humanoïde : Cognitive approach for representing the haptic physical human-humanoid interaction.
Degree: Docteur es, SYAM - Systèmes Automatiques et Microélectroniques, 2013, Université Montpellier II
URL: http://www.theses.fr/2013MON20090
▼ Robots are very close to arriving in our homes. But before doing so, they must master physical interaction with humans, in a safe and efficient way. Such capacities are essential for them to live among us, and assist us in various everyday tasks, such as carrying a piece of furniture. In this thesis, we focus on endowing the biped humanoid robot HRP-2 with the capacity to perform haptic joint actions with humans. First, we study how human dyads collaborate to transport a cumbersome object. From this study, we define a global motion primitives model that we use to implement a proactive behavior on the HRP-2 robot, so that it can perform the same task with a human. Then, we assess the performance of our proactive control scheme by performing user studies. Finally, we expose several potential extensions to our work: self-stabilization of a humanoid through physical interaction, generalization of the motion primitives model to other collaborative tasks, and the addition of vision to haptic joint actions.
Advisors/Committee Members: Kheddar, Abderrahmane (thesis director), Crosnier, André (thesis director).
Subjects/Keywords: Haptique; Interaction Physique; Interaction Homme-Robot; Robot Humanoïde; Haptics; Physical Interaction; Human-Robot Interaction; Humanoid Robot
APA (6th Edition):
Bussy, A. (2013). Approche cognitive pour la représentation de l’interaction proximale haptique entre un homme et un humanoïde : Cognitive approach for representing the haptic physical human-humanoid interaction. (Doctoral Dissertation). Université Montpellier II. Retrieved from http://www.theses.fr/2013MON20090
Chicago Manual of Style (16th Edition):
Bussy, Antoine. “Approche cognitive pour la représentation de l’interaction proximale haptique entre un homme et un humanoïde : Cognitive approach for representing the haptic physical human-humanoid interaction.” 2013. Doctoral Dissertation, Université Montpellier II. Accessed December 14, 2019.
http://www.theses.fr/2013MON20090.
MLA Handbook (7th Edition):
Bussy, Antoine. “Approche cognitive pour la représentation de l’interaction proximale haptique entre un homme et un humanoïde : Cognitive approach for representing the haptic physical human-humanoid interaction.” 2013. Web. 14 Dec 2019.
Vancouver:
Bussy A. Approche cognitive pour la représentation de l’interaction proximale haptique entre un homme et un humanoïde : Cognitive approach for representing the haptic physical human-humanoid interaction. [Internet] [Doctoral dissertation]. Université Montpellier II; 2013. [cited 2019 Dec 14].
Available from: http://www.theses.fr/2013MON20090.
Council of Science Editors:
Bussy A. Approche cognitive pour la représentation de l’interaction proximale haptique entre un homme et un humanoïde : Cognitive approach for representing the haptic physical human-humanoid interaction. [Doctoral Dissertation]. Université Montpellier II; 2013. Available from: http://www.theses.fr/2013MON20090

University of Illinois – Chicago
13.
Javaid, Maria.
Communication through Physical Interaction: Robot Assistants for the Elderly.
Degree: 2015, University of Illinois – Chicago
URL: http://hdl.handle.net/10027/19370
▼ This research work is part of a broader research project which aims to build an effective and user-friendly communication interface for assistive robots that can help the elderly to have an independent life at home. Such a communication interface should incorporate multiple modalities of communication, since collaborative task-oriented human-human communication is inherently multimodal. For this purpose, data was collected from twenty collaborative task-oriented human-human communication sessions between a helper and an elderly person in a realistic setting (a fully functional studio apartment).
My research mainly focuses on collecting physical interaction data in an unobtrusive way during human-human interaction and analyzing that data to determine how it can be incorporated into a communication interface for assistive robots, particularly in the elderly care domain. Thus a data glove equipped with pressure sensors was developed. Based on the data collected from this glove, communication through physical interaction during collaborative manipulation of planar objects was studied. Subsequently, an algorithm was developed based on the laboratory data analysis which can classify four different stages of collaborative manipulation of a planar object.
This algorithm was later successfully validated on experiments performed in a realistic setting with subjects involved in performing activities of elderly care and determining human-human hand-over of a planar object in real time. Other than understanding communication through physical interaction, this research also presents methods for recognizing various physical manipulation actions that take place when an elderly person is helped by a caregiver in cooking and setting the dining table. This particular work was motivated by the natural language analysis of the data collected with a helper and an elderly person, which showed that knowledge of such physical manipulation actions helps to improve communication through natural language.
The physical interaction based classification methods are first developed through laboratory experiments and later successfully validated on experiments performed in a realistic setting.
Advisors/Committee Members: Zefran, Milos (advisor).
Subjects/Keywords: Human-Robot Interaction; Robot Assistants; Physical Interaction; Multimodal Communication; Haptic Communication
APA (6th Edition):
Javaid, M. (2015). Communication through Physical Interaction: Robot Assistants for the Elderly. (Thesis). University of Illinois – Chicago. Retrieved from http://hdl.handle.net/10027/19370
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Javaid, Maria. “Communication through Physical Interaction: Robot Assistants for the Elderly.” 2015. Thesis, University of Illinois – Chicago. Accessed December 14, 2019.
http://hdl.handle.net/10027/19370.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Javaid, Maria. “Communication through Physical Interaction: Robot Assistants for the Elderly.” 2015. Web. 14 Dec 2019.
Vancouver:
Javaid M. Communication through Physical Interaction: Robot Assistants for the Elderly. [Internet] [Thesis]. University of Illinois – Chicago; 2015. [cited 2019 Dec 14].
Available from: http://hdl.handle.net/10027/19370.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Javaid M. Communication through Physical Interaction: Robot Assistants for the Elderly. [Thesis]. University of Illinois – Chicago; 2015. Available from: http://hdl.handle.net/10027/19370
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Mälardalen University
14.
Ekhtiarabadi, Afshin Ameri.
Unified Incremental Multimodal Interface for Human-Robot Interaction.
Degree: Design and Engineering, 2011, Mälardalen University
URL: http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-13478
▼ Face-to-face human communication is a multimodal and incremental process. Humans employ different information channels (modalities) for their communication. Since some of these modalities are more error-prone for specific types of data, multimodal communication can benefit from the strengths of each modality and therefore reduce ambiguities during the interaction. Such interfaces can be applied to intelligent robots that operate in close relation with humans. With this approach, robots can communicate with their human colleagues in the same way they communicate with each other, thus leading to easier and more robust human-robot interaction (HRI). In this work we suggest a new method for implementing multimodal interfaces in the HRI domain and present the method employed on an industrial robot. We show that operating the system is made easier by using this interface.
Robot Colleague
Subjects/Keywords: Multimodal Interaction; Human-Robot Interaction; Human Computer Interaction; Människa-datorinteraktion (interaktionsdesign)
APA (6th Edition):
Ekhtiarabadi, A. A. (2011). Unified Incremental Multimodal Interface for Human-Robot Interaction. (Thesis). Mälardalen University. Retrieved from http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-13478
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Ekhtiarabadi, Afshin Ameri. “Unified Incremental Multimodal Interface for Human-Robot Interaction.” 2011. Thesis, Mälardalen University. Accessed December 14, 2019.
http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-13478.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Ekhtiarabadi, Afshin Ameri. “Unified Incremental Multimodal Interface for Human-Robot Interaction.” 2011. Web. 14 Dec 2019.
Vancouver:
Ekhtiarabadi AA. Unified Incremental Multimodal Interface for Human-Robot Interaction. [Internet] [Thesis]. Mälardalen University; 2011. [cited 2019 Dec 14].
Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-13478.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Ekhtiarabadi AA. Unified Incremental Multimodal Interface for Human-Robot Interaction. [Thesis]. Mälardalen University; 2011. Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-13478
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Georgia Tech
15.
IJtsma, Martijn.
Computational simulation of adaptation of work strategies in human-robot teams.
Degree: PhD, Aerospace Engineering, 2019, Georgia Tech
URL: http://hdl.handle.net/1853/61783
▼ Human-robot teams operating in complex work domains, such as space operations, need to adapt to maintain performance under a wide variety of work conditions. This thesis argues that from the start team design needs to establish team structures that allow flexibility in strategies for conducting the team’s collective work. In addition, team design needs to facilitate fluent coordination of work, fostering the interweaving of team members’ dependent actions in ways that account for the dynamic characteristics of the work and the work environment. This thesis establishes a methodology to analyze a team’s strategies based on computational modeling of a team’s collective work, including the teamwork required to coordinate dependent work between multiple team members. This approach consists of the systematic identification of feasible work strategies and the simulation of work models to address the dynamic and emergent nature of a team’s work. It provides a formative analysis tool to help designers predict and understand the effects of their design choices on a team’s feasible work strategies. Two case studies on space operations demonstrate how this approach can predict how work allocation and human-robot interaction modes can foster and/or limit the availability of appropriate work strategies.
Advisors/Committee Members: Pritchett, Amy R. (advisor), Feigh, Karen M. (advisor), Clarke, John-Paul B. (committee member), Lightsey, Glenn (committee member), Johnson, Matthew (committee member).
Subjects/Keywords: Human-robot teaming; Work allocation; Human-robot interaction; Adaptation; Coordination
APA (6th Edition):
IJtsma, M. (2019). Computational simulation of adaptation of work strategies in human-robot teams. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/61783
Chicago Manual of Style (16th Edition):
IJtsma, Martijn. “Computational simulation of adaptation of work strategies in human-robot teams.” 2019. Doctoral Dissertation, Georgia Tech. Accessed December 14, 2019.
http://hdl.handle.net/1853/61783.
MLA Handbook (7th Edition):
IJtsma, Martijn. “Computational simulation of adaptation of work strategies in human-robot teams.” 2019. Web. 14 Dec 2019.
Vancouver:
IJtsma M. Computational simulation of adaptation of work strategies in human-robot teams. [Internet] [Doctoral dissertation]. Georgia Tech; 2019. [cited 2019 Dec 14].
Available from: http://hdl.handle.net/1853/61783.
Council of Science Editors:
IJtsma M. Computational simulation of adaptation of work strategies in human-robot teams. [Doctoral Dissertation]. Georgia Tech; 2019. Available from: http://hdl.handle.net/1853/61783

Tampere University
16.
Bejarano, Ronal.
Design and implementation of a human-robot collaborative assembly workstation in a modular robotized production line.
Degree: 2019, Tampere University
URL: https://trepo.tuni.fi/handle/10024/117392
Over the last decades, the Industrial Automation domain at factory shop floors has experienced exponential growth in the use of robots. This change aims to increase efficiency at a reasonable cost. However, not all tasks formerly performed by humans in factories have been taken over by robots, especially those requiring a high level of dexterity. In fact, Europe is moving towards implementing efficient workspaces where humans can work safely, aided by robots. In this context, the industrial and research sectors have ambitious plans for solutions that involve coexistence and simultaneity at work between humans and collaborative robots, a.k.a. "cobots" or co-robots, permitting safe interaction within the same or interrelated manufacturing processes. Many cobot producers have started to present their products, but these arrived before industry had a clear picture of its needs for this particular technology. This work addresses how to demonstrate human-robot collaborative manufacturing, how to implement a dual-arm human-robot collaborative workstation, how to integrate such a workstation into a modular interconnected production line, and what the advantages and challenges of current HRC technologies at the shop floor are. It does so by documenting the formulation of a human-robot collaborative assembly process, implemented by designing and building an assembly workstation that exemplifies a scenario of interaction between a dual-arm cobot and a human operator assembling a product box, as part of a large-scale modular robotized production line. The model produced by this work is part of the research facilities at the Future Automation Systems and Technologies Laboratory at Tampere University.
Subjects/Keywords: Human-Robot Interaction;
Human-Robot Collaboration;
Industrial applications;
Cobots
APA (6th Edition):
Bejarano, R. (2019). Design and implementation of a human-robot collaborative assembly workstation in a modular robotized production line
. (Masters Thesis). Tampere University. Retrieved from https://trepo.tuni.fi/handle/10024/117392
Chicago Manual of Style (16th Edition):
Bejarano, Ronal. “Design and implementation of a human-robot collaborative assembly workstation in a modular robotized production line
.” 2019. Masters Thesis, Tampere University. Accessed December 14, 2019.
https://trepo.tuni.fi/handle/10024/117392.
MLA Handbook (7th Edition):
Bejarano, Ronal. “Design and implementation of a human-robot collaborative assembly workstation in a modular robotized production line
.” 2019. Web. 14 Dec 2019.
Vancouver:
Bejarano R. Design and implementation of a human-robot collaborative assembly workstation in a modular robotized production line
. [Internet] [Masters thesis]. Tampere University; 2019. [cited 2019 Dec 14].
Available from: https://trepo.tuni.fi/handle/10024/117392.
Council of Science Editors:
Bejarano R. Design and implementation of a human-robot collaborative assembly workstation in a modular robotized production line
. [Masters Thesis]. Tampere University; 2019. Available from: https://trepo.tuni.fi/handle/10024/117392

Rochester Institute of Technology
17.
Hazbar, Tuly.
Task Planning and Execution for Human Robot Team Performing a Shared Task in a Shared Workspace.
Degree: MS, Electrical Engineering, 2019, Rochester Institute of Technology
URL: https://scholarworks.rit.edu/theses/10198
A cyber-physical system is developed to enable a human-robot team to perform a shared task in a shared workspace. The system setup is suitable for the implementation of a tabletop manipulation task, a common human-robot collaboration scenario. The system integrates elements that exist in the physical (real) and the virtual world. In this work, we report the insights we gathered throughout our exploration in understanding and implementing task planning and execution for a human-robot team.
Advisors/Committee Members: Ferat Sahin, Gill R Tsouri, Dan Phillips.
Subjects/Keywords: Cognitive robotics; Digital twin; Human-robot collaboration; Human robot interaction
APA (6th Edition):
Hazbar, T. (2019). Task Planning and Execution for Human Robot Team Performing a Shared Task in a Shared Workspace. (Masters Thesis). Rochester Institute of Technology. Retrieved from https://scholarworks.rit.edu/theses/10198
Chicago Manual of Style (16th Edition):
Hazbar, Tuly. “Task Planning and Execution for Human Robot Team Performing a Shared Task in a Shared Workspace.” 2019. Masters Thesis, Rochester Institute of Technology. Accessed December 14, 2019.
https://scholarworks.rit.edu/theses/10198.
MLA Handbook (7th Edition):
Hazbar, Tuly. “Task Planning and Execution for Human Robot Team Performing a Shared Task in a Shared Workspace.” 2019. Web. 14 Dec 2019.
Vancouver:
Hazbar T. Task Planning and Execution for Human Robot Team Performing a Shared Task in a Shared Workspace. [Internet] [Masters thesis]. Rochester Institute of Technology; 2019. [cited 2019 Dec 14].
Available from: https://scholarworks.rit.edu/theses/10198.
Council of Science Editors:
Hazbar T. Task Planning and Execution for Human Robot Team Performing a Shared Task in a Shared Workspace. [Masters Thesis]. Rochester Institute of Technology; 2019. Available from: https://scholarworks.rit.edu/theses/10198

Universiteit Utrecht
18.
Wigdor, N.R.
Conversational Fillers for Response Delay Amelioration in Child-Robot Interaction.
Degree: 2014, Universiteit Utrecht
URL: http://dspace.library.uu.nl:8080/handle/1874/298433
Conversational Fillers (CFs) such as "um", "hmm", and "ah" were tested alongside iconic pensive or acknowledging gestures for their effectiveness at mitigating the negative effects associated with unwanted anthropomorphic robot response delay. Employing CFs in interactions with nine- and ten-year-old children was found to be effective at improving perceived speediness, aliveness, humanness, and likability without decreasing perceptions of intelligence, trustworthiness, or autonomy. The results also show that an experimenter covertly crafting a robot's vocalized response has a slower heart rate and a higher heart rate variability, an indication of a lower stress level, when the robot is filling the associated delay with CFs than when not.
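The core mechanism described above is simple to prototype. The sketch below is purely illustrative and not the study's implementation: `say`, `generate_response`, the filler list, and the timing thresholds are all assumptions.

```python
# Hedged sketch: bridge a robot's response delay with conversational fillers
# while the real response is still being generated in the background.
import random
import threading

FILLERS = ["um", "hmm", "ah"]

def say(text):
    # Stand-in for the robot's text-to-speech call.
    print(f"[robot says] {text}")

def respond_with_fillers(generate_response, grace=1.0, filler_gap=2.5):
    """Generate the response in the background and fill the silence with CFs."""
    result = {}
    worker = threading.Thread(target=lambda: result.update(text=generate_response()))
    worker.start()
    worker.join(timeout=grace)          # short grace period with no filler
    while worker.is_alive():
        say(random.choice(FILLERS))     # filler vocalization during the delay
        worker.join(timeout=filler_gap)
    say(result["text"])

# Example with a deliberately slow response generator:
if __name__ == "__main__":
    import time
    respond_with_fillers(lambda: (time.sleep(4), "I think it is the red block.")[1])
```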
Advisors/Committee Members: Marcel van Aken, John Jules Meyer.
Subjects/Keywords: Robot; conversational fillers; child-robot interaction; human-robot interaction; delay mitigation; filling
APA (6th Edition):
Wigdor, N. R. (2014). Conversational Fillers for Response Delay Amelioration in Child-Robot Interaction. (Masters Thesis). Universiteit Utrecht. Retrieved from http://dspace.library.uu.nl:8080/handle/1874/298433
Chicago Manual of Style (16th Edition):
Wigdor, N R. “Conversational Fillers for Response Delay Amelioration in Child-Robot Interaction.” 2014. Masters Thesis, Universiteit Utrecht. Accessed December 14, 2019.
http://dspace.library.uu.nl:8080/handle/1874/298433.
MLA Handbook (7th Edition):
Wigdor, N R. “Conversational Fillers for Response Delay Amelioration in Child-Robot Interaction.” 2014. Web. 14 Dec 2019.
Vancouver:
Wigdor NR. Conversational Fillers for Response Delay Amelioration in Child-Robot Interaction. [Internet] [Masters thesis]. Universiteit Utrecht; 2014. [cited 2019 Dec 14].
Available from: http://dspace.library.uu.nl:8080/handle/1874/298433.
Council of Science Editors:
Wigdor NR. Conversational Fillers for Response Delay Amelioration in Child-Robot Interaction. [Masters Thesis]. Universiteit Utrecht; 2014. Available from: http://dspace.library.uu.nl:8080/handle/1874/298433

Cranfield University
19.
Tang, Gilbert.
The development of a human-robot interface for industrial collaborative system.
Degree: PhD, 2016, Cranfield University
URL: http://dspace.lib.cranfield.ac.uk/handle/1826/10213
Industrial robots have been identified as one of the most effective solutions for optimising output and quality within many industries. However, a number of manufacturing applications involve complex tasks and inconstant components that prohibit the use of fully automated solutions in the foreseeable future.
A breakthrough in robotic technologies and changes in safety legislation have supported the creation of robots that coexist with and assist humans in industrial applications. It has been broadly recognised that human-robot collaborative systems would be a realistic solution for an advanced production system with a wide range of applications and high economic impact. This type of system can utilise the best of both worlds: the robot performs simple tasks that require high repeatability, while the human performs tasks that require judgement and the dexterity of the human hands. Robots in such a system operate as "intelligent assistants".
In a collaborative working environment, robot and human share the same working area and interact with each other. This level of interface requires effective ways of communication and collaboration to avoid unwanted conflicts. This project aims to create a user interface for an industrial collaborative robot system through the integration of current robotic technologies. The robotic system is designed for seamless collaboration with a human in close proximity, and is capable of communicating with the human via the exchange of gestures, as well as visual signals that operators can observe and comprehend at a glance.
The main objective of this PhD is to develop a Human-Robot Interface (HRI) for communication with an industrial collaborative robot during collaboration in proximity. The system is developed in conjunction with a small-scale collaborative robot system integrated from off-the-shelf components. It should be capable of receiving input from the human user via an intuitive method as well as indicating its status to the user effectively. The HRI is developed through a combination of hardware integration and software development. The software and the control framework were developed in a way that is applicable to other industrial robots in the future. The developed gesture command system is demonstrated on a heavy-duty industrial robot.
Subjects/Keywords: Human-robot interface; gesture control; human-robot interaction; system communication; teleoperation; automation; robot assistant
APA (6th Edition):
Tang, G. (2016). The development of a human-robot interface for industrial collaborative system. (Doctoral Dissertation). Cranfield University. Retrieved from http://dspace.lib.cranfield.ac.uk/handle/1826/10213
Chicago Manual of Style (16th Edition):
Tang, Gilbert. “The development of a human-robot interface for industrial collaborative system.” 2016. Doctoral Dissertation, Cranfield University. Accessed December 14, 2019.
http://dspace.lib.cranfield.ac.uk/handle/1826/10213.
MLA Handbook (7th Edition):
Tang, Gilbert. “The development of a human-robot interface for industrial collaborative system.” 2016. Web. 14 Dec 2019.
Vancouver:
Tang G. The development of a human-robot interface for industrial collaborative system. [Internet] [Doctoral dissertation]. Cranfield University; 2016. [cited 2019 Dec 14].
Available from: http://dspace.lib.cranfield.ac.uk/handle/1826/10213.
Council of Science Editors:
Tang G. The development of a human-robot interface for industrial collaborative system. [Doctoral Dissertation]. Cranfield University; 2016. Available from: http://dspace.lib.cranfield.ac.uk/handle/1826/10213

Cranfield University
20.
Tang, Gilbert.
The development of a human-robot interface for industrial collaborative system.
Degree: PhD, 2016, Cranfield University
URL: http://dspace.lib.cranfield.ac.uk/handle/1826/10213
;
http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.691026
Industrial robots have been identified as one of the most effective solutions for optimising output and quality within many industries. However, a number of manufacturing applications involve complex tasks and inconstant components that prohibit the use of fully automated solutions in the foreseeable future. A breakthrough in robotic technologies and changes in safety legislation have supported the creation of robots that coexist with and assist humans in industrial applications. It has been broadly recognised that human-robot collaborative systems would be a realistic solution for an advanced production system with a wide range of applications and high economic impact. This type of system can utilise the best of both worlds: the robot performs simple tasks that require high repeatability, while the human performs tasks that require judgement and the dexterity of the human hands. Robots in such a system operate as "intelligent assistants". In a collaborative working environment, robot and human share the same working area and interact with each other. This level of interface requires effective ways of communication and collaboration to avoid unwanted conflicts. This project aims to create a user interface for an industrial collaborative robot system through the integration of current robotic technologies. The robotic system is designed for seamless collaboration with a human in close proximity, and is capable of communicating with the human via the exchange of gestures, as well as visual signals that operators can observe and comprehend at a glance. The main objective of this PhD is to develop a Human-Robot Interface (HRI) for communication with an industrial collaborative robot during collaboration in proximity. The system is developed in conjunction with a small-scale collaborative robot system integrated from off-the-shelf components. It should be capable of receiving input from the human user via an intuitive method as well as indicating its status to the user effectively. The HRI is developed through a combination of hardware integration and software development. The software and the control framework were developed in a way that is applicable to other industrial robots in the future. The developed gesture command system is demonstrated on a heavy-duty industrial robot.
Subjects/Keywords: 629.8; Human-robot interface; gesture control; human-robot interaction; system communication; teleoperation; automation; robot assistant
APA (6th Edition):
Tang, G. (2016). The development of a human-robot interface for industrial collaborative system. (Doctoral Dissertation). Cranfield University. Retrieved from http://dspace.lib.cranfield.ac.uk/handle/1826/10213 ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.691026
Chicago Manual of Style (16th Edition):
Tang, Gilbert. “The development of a human-robot interface for industrial collaborative system.” 2016. Doctoral Dissertation, Cranfield University. Accessed December 14, 2019.
http://dspace.lib.cranfield.ac.uk/handle/1826/10213 ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.691026.
MLA Handbook (7th Edition):
Tang, Gilbert. “The development of a human-robot interface for industrial collaborative system.” 2016. Web. 14 Dec 2019.
Vancouver:
Tang G. The development of a human-robot interface for industrial collaborative system. [Internet] [Doctoral dissertation]. Cranfield University; 2016. [cited 2019 Dec 14].
Available from: http://dspace.lib.cranfield.ac.uk/handle/1826/10213 ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.691026.
Council of Science Editors:
Tang G. The development of a human-robot interface for industrial collaborative system. [Doctoral Dissertation]. Cranfield University; 2016. Available from: http://dspace.lib.cranfield.ac.uk/handle/1826/10213 ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.691026
21.
高橋, 達.
高齢者の発話機会の増加を目的としたソーシャルメディア仲介ロボット : Mediation Robots as Social Media for Increasing an Opportunity of Conversation for Elderly; コウレイシャ ノ ハツワ キカイ ノ ゾウカ オ モクテキ ト シタ ソーシャル メディア チュウカイ ロボット.
Degree: Nara Institute of Science and Technology / 奈良先端科学技術大学院大学
URL: http://hdl.handle.net/10061/8719
Subjects/Keywords: Human-Robot Interaction
APA (6th Edition):
高橋, . (n.d.). 高齢者の発話機会の増加を目的としたソーシャルメディア仲介ロボット : Mediation Robots as Social Media for Increasing an Opportunity of Conversation for Elderly; コウレイシャ ノ ハツワ キカイ ノ ゾウカ オ モクテキ ト シタ ソーシャル メディア チュウカイ ロボット. (Thesis). Nara Institute of Science and Technology / 奈良先端科学技術大学院大学. Retrieved from http://hdl.handle.net/10061/8719
Note: this citation may be lacking information needed for this citation format:
No year of publication.
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
高橋, 達. “高齢者の発話機会の増加を目的としたソーシャルメディア仲介ロボット : Mediation Robots as Social Media for Increasing an Opportunity of Conversation for Elderly; コウレイシャ ノ ハツワ キカイ ノ ゾウカ オ モクテキ ト シタ ソーシャル メディア チュウカイ ロボット.” Thesis, Nara Institute of Science and Technology / 奈良先端科学技術大学院大学. Accessed December 14, 2019.
http://hdl.handle.net/10061/8719.
Note: this citation may be lacking information needed for this citation format:
No year of publication.
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
高橋, 達. “高齢者の発話機会の増加を目的としたソーシャルメディア仲介ロボット : Mediation Robots as Social Media for Increasing an Opportunity of Conversation for Elderly; コウレイシャ ノ ハツワ キカイ ノ ゾウカ オ モクテキ ト シタ ソーシャル メディア チュウカイ ロボット.” Web. 14 Dec 2019.
Note: this citation may be lacking information needed for this citation format:
No year of publication.
Vancouver:
高橋 . 高齢者の発話機会の増加を目的としたソーシャルメディア仲介ロボット : Mediation Robots as Social Media for Increasing an Opportunity of Conversation for Elderly; コウレイシャ ノ ハツワ キカイ ノ ゾウカ オ モクテキ ト シタ ソーシャル メディア チュウカイ ロボット. [Internet] [Thesis]. Nara Institute of Science and Technology / 奈良先端科学技術大学院大学; [cited 2019 Dec 14].
Available from: http://hdl.handle.net/10061/8719.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
No year of publication.
Council of Science Editors:
高橋 . 高齢者の発話機会の増加を目的としたソーシャルメディア仲介ロボット : Mediation Robots as Social Media for Increasing an Opportunity of Conversation for Elderly; コウレイシャ ノ ハツワ キカイ ノ ゾウカ オ モクテキ ト シタ ソーシャル メディア チュウカイ ロボット. [Thesis]. Nara Institute of Science and Technology / 奈良先端科学技術大学院大学; Available from: http://hdl.handle.net/10061/8719
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
No year of publication.

University of Ottawa
22.
Zhi, Da.
Depth Camera-Based Hand Gesture Recognition for Training a Robot to Perform Sign Language.
Degree: 2018, University of Ottawa
URL: http://hdl.handle.net/10393/37768
This thesis presents a novel depth camera-based real-time hand gesture recognition system for training a human-like robot hand to interact with humans through sign language.
We developed a modular real-time Hand Gesture Recognition (HGR) system, which uses a multiclass Support Vector Machine (SVM) for training and recognition of static hand postures and N-Dimensional Dynamic Time Warping (ND-DTW) for dynamic hand gesture recognition. A 3D hand gesture training/testing dataset was recorded using a depth camera, tailored to accommodate the kinematic constructive limitations of the human-like robotic hand.
Experimental results show that the multiclass SVM method has an overall 98.34% recognition rate in the HRI (Human-Robot Interaction) mode and a 99.94% recognition rate in the RRI (Robot-Robot Interaction) mode, as well as the lowest average run time compared to the k-NN (k-Nearest Neighbour) and ANBC (Adaptive Naïve Bayes Classifier) approaches. In dynamic gesture recognition, the ND-DTW classifier displays better performance than the DHMM (Discrete Hidden Markov Model), with a 97% recognition rate and a significantly shorter run time.
In conclusion, the combination of multiclass SVM and ND-DTW provides an efficient solution for the real-time recognition of the hand gestures used to train a robot arm to perform sign language.
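To make the two classification stages concrete, here is a minimal sketch, not the thesis implementation: the feature shapes, labels, and the nearest-template DTW classifier are assumptions, and scikit-learn's SVC stands in for the multiclass SVM.

```python
# Hedged sketch: multiclass SVM for static postures, plus a simple
# multi-dimensional DTW distance for dynamic gesture templates.
import numpy as np
from sklearn.svm import SVC

def train_posture_classifier(X_train, y_train):
    """X_train: (n_samples, n_features) per-frame hand features; y_train: posture labels."""
    clf = SVC(kernel="rbf", decision_function_shape="ovr")  # one-vs-rest multiclass SVM
    clf.fit(X_train, y_train)
    return clf

def dtw_distance(a, b):
    """Alignment cost between two (T, D) sequences of D-dimensional frames."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify_gesture(sequence, templates):
    """templates: dict label -> reference (T, D) sequence; nearest template wins."""
    return min(templates, key=lambda label: dtw_distance(sequence, templates[label]))
```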
Subjects/Keywords: Human-Robot Interaction;
Hand Gesture Recognition
APA (6th Edition):
Zhi, D. (2018). Depth Camera-Based Hand Gesture Recognition for Training a Robot to Perform Sign Language
. (Thesis). University of Ottawa. Retrieved from http://hdl.handle.net/10393/37768
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Zhi, Da. “Depth Camera-Based Hand Gesture Recognition for Training a Robot to Perform Sign Language
.” 2018. Thesis, University of Ottawa. Accessed December 14, 2019.
http://hdl.handle.net/10393/37768.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Zhi, Da. “Depth Camera-Based Hand Gesture Recognition for Training a Robot to Perform Sign Language
.” 2018. Web. 14 Dec 2019.
Vancouver:
Zhi D. Depth Camera-Based Hand Gesture Recognition for Training a Robot to Perform Sign Language
. [Internet] [Thesis]. University of Ottawa; 2018. [cited 2019 Dec 14].
Available from: http://hdl.handle.net/10393/37768.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Zhi D. Depth Camera-Based Hand Gesture Recognition for Training a Robot to Perform Sign Language
. [Thesis]. University of Ottawa; 2018. Available from: http://hdl.handle.net/10393/37768
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Clemson University
23.
Spencer, David.
Analysis and Synthesis of Effective Human-Robot Interaction at Varying Levels in Control Hierarchy.
Degree: MS, Mechanical Engineering, 2015, Clemson University
URL: https://tigerprints.clemson.edu/all_theses/2289
Robot controller design is usually hierarchical, with both high-level task and motion planning and low-level control law design. In the presented works, we investigate methods for low-level and high-level control designs to guarantee joint performance of human-robot interaction (HRI). In the first work, a low-level method using the switched linear quadratic regulator (SLQR), an optimal control policy based on a quadratic cost function, is used. By incorporating measures of robot performance and human workload, it can be determined when to utilize the human operator in a way that improves overall task performance while reducing operator workload. This method is demonstrated via simulation using the complex dynamics of an autonomous underwater vehicle (AUV), showing that the method can successfully handle such scenarios while maintaining reduced workload. An extension of this work to path planning is also presented for the purposes of obstacle avoidance, with simulation showing human planning successfully guiding the AUV around obstacles to reach its goals. In the high-level approach, formal methods are applied to a scenario where an operator oversees a group of mobile robots as they navigate an unknown environment. Autonomy in this scenario uses specifications written in linear temporal logic (LTL) to conduct symbolic motion planning in a guaranteed safe, though very conservative, approach. A human operator, using gathered environmental data, is able to produce a more efficient path. To aid in task decomposition and real-time switching, a dynamic human trust model is used. Simulations are given showing the successful implementation of this method.
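As a point of reference for the low-level method, the sketch below shows the standard discrete-time LQR ingredient behind a switched-LQR policy: one gain per mode, selected at run time. The system matrices, weights, and mode names are illustrative assumptions, not the AUV model from the thesis.

```python
# Hedged sketch: per-mode LQR gains plus a run-time switch between them.
import numpy as np
from scipy.linalg import solve_discrete_are

def dlqr_gain(A, B, Q, R):
    """Feedback gain K minimizing sum(x'Qx + u'Ru) for x[k+1] = Ax + Bu."""
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

A = np.array([[1.0, 0.1], [0.0, 1.0]])      # toy double-integrator dynamics
B = np.array([[0.0], [0.1]])
K_auto = dlqr_gain(A, B, Q=np.diag([10.0, 1.0]), R=np.array([[1.0]]))
K_assist = dlqr_gain(A, B, Q=np.diag([1.0, 1.0]), R=np.array([[0.1]]))

def control(x, mode):
    K = K_auto if mode == "auto" else K_assist
    return -K @ x   # switched state feedback u = -K_mode x
```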
Advisors/Committee Members: Wang, Yue, Wagner, John, Humphrey, Laura.
Subjects/Keywords: Controls; Human-Robot Interaction; Trust; Mechanical Engineering
APA (6th Edition):
Spencer, D. (2015). Analysis and Synthesis of Effective Human-Robot Interaction at Varying Levels in Control Hierarchy. (Masters Thesis). Clemson University. Retrieved from https://tigerprints.clemson.edu/all_theses/2289
Chicago Manual of Style (16th Edition):
Spencer, David. “Analysis and Synthesis of Effective Human-Robot Interaction at Varying Levels in Control Hierarchy.” 2015. Masters Thesis, Clemson University. Accessed December 14, 2019.
https://tigerprints.clemson.edu/all_theses/2289.
MLA Handbook (7th Edition):
Spencer, David. “Analysis and Synthesis of Effective Human-Robot Interaction at Varying Levels in Control Hierarchy.” 2015. Web. 14 Dec 2019.
Vancouver:
Spencer D. Analysis and Synthesis of Effective Human-Robot Interaction at Varying Levels in Control Hierarchy. [Internet] [Masters thesis]. Clemson University; 2015. [cited 2019 Dec 14].
Available from: https://tigerprints.clemson.edu/all_theses/2289.
Council of Science Editors:
Spencer D. Analysis and Synthesis of Effective Human-Robot Interaction at Varying Levels in Control Hierarchy. [Masters Thesis]. Clemson University; 2015. Available from: https://tigerprints.clemson.edu/all_theses/2289

Vanderbilt University
24.
Young, Eric Michael.
Low-Cost, Intention-Detecting Robot to Assist the Movement of an Impaired Upper Limb.
Degree: MS, Mechanical Engineering, 2015, Vanderbilt University
URL: http://etd.library.vanderbilt.edu/available/etd-08192015-082043/
;
In recent years, robotic rehabilitation has proven to be beneficial for individuals with impaired limbs, particularly due to the potential of robotic therapists to be more accessible, consistent and cost-effective than their human counterparts. While pursuing better rehabilitation methods is a crucial endeavor, it is also important to acknowledge that many people need an alternative form of assistance for physical impairments, both while undergoing rehabilitation and in the unfortunate but common scenario of rehabilitation providing insufficient improvements. The aim of this thesis is to present a low-cost robotic assistive device which may serve as a complement to rehabilitation procedures. The proposed system determines the intended movement of a user's upper arm via eye-gaze inputs and force inputs, and physically assists said movement. In this manner, the system may provide immediate relief for someone suffering from physical impairments in their upper limbs, either as a complement to ongoing rehabilitation therapy or as a partial solution in the case of insufficient improvements from rehabilitation.
Advisors/Committee Members: Nilanjan Sarkar (chair), Thomas Withrow (committee member), Zachary Warren (committee member).
Subjects/Keywords: human-robot interaction; intention detection; robotic assistance
APA (6th Edition):
Young, E. M. (2015). Low-Cost, Intention-Detecting Robot to Assist the Movement of an Impaired Upper Limb. (Masters Thesis). Vanderbilt University. Retrieved from http://etd.library.vanderbilt.edu/available/etd-08192015-082043/ ;
Chicago Manual of Style (16th Edition):
Young, Eric Michael. “Low-Cost, Intention-Detecting Robot to Assist the Movement of an Impaired Upper Limb.” 2015. Masters Thesis, Vanderbilt University. Accessed December 14, 2019.
http://etd.library.vanderbilt.edu/available/etd-08192015-082043/ ;.
MLA Handbook (7th Edition):
Young, Eric Michael. “Low-Cost, Intention-Detecting Robot to Assist the Movement of an Impaired Upper Limb.” 2015. Web. 14 Dec 2019.
Vancouver:
Young EM. Low-Cost, Intention-Detecting Robot to Assist the Movement of an Impaired Upper Limb. [Internet] [Masters thesis]. Vanderbilt University; 2015. [cited 2019 Dec 14].
Available from: http://etd.library.vanderbilt.edu/available/etd-08192015-082043/ ;.
Council of Science Editors:
Young EM. Low-Cost, Intention-Detecting Robot to Assist the Movement of an Impaired Upper Limb. [Masters Thesis]. Vanderbilt University; 2015. Available from: http://etd.library.vanderbilt.edu/available/etd-08192015-082043/ ;

University of Illinois – Urbana-Champaign
25.
Heimerdinger, Madison Suzanne.
Influence of environmental context on affect recognition of stylized movements.
Degree: MS, Mechanical Engineering, 2017, University of Illinois – Urbana-Champaign
URL: http://hdl.handle.net/2142/99365
Modifying the style of movements will be an important component of robotic interaction as more and more robots move into human-facing scenarios where humans are (consciously or unconsciously) constantly monitoring the motion profile of counterparts in order to make judgments about the state of these counterparts. This thesis includes two main contributions: (1) the development of two MATLAB tools designed to aid in the creation and simulation of stylized movement trajectories in varied contexts, and (2) three user studies that explore the effects of environmental context on a human's perception of stylized movement.
First and foremost, the results from all of the user studies indicate that environmental contexts and stylized walking sequences both impact affect recognition. In the first two studies, participants were asked to categorize stimuli as one of seven affective labels. The results show that the labels were not applied consistently, and so it was concluded that the affect of a multi-dimensional stimulus cannot be adequately categorized using a single affective label. In the third study, the stimuli were evaluated on multiple scales and classified using ratings of valence and arousal rather than affective labels. The results were used to create a least squares model for the dataset that decomposed the affect ratings of animations to display the compound effects of stylized walking sequences and environmental contexts on affective ratings.
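The least squares decomposition mentioned above can be illustrated with a small sketch; the gait styles, contexts, and ratings here are invented placeholders rather than the thesis dataset, and the original tools were written in MATLAB rather than Python.

```python
# Hedged sketch: decompose valence ratings into additive gait-style and
# environmental-context effects via ordinary least squares.
import numpy as np

GAITS = ["neutral", "happy", "sad"]
CONTEXTS = ["office", "park", "alley"]

def design_row(gait, context):
    # Intercept plus one-hot indicators for gait style and context.
    return ([1.0]
            + [1.0 if gait == g else 0.0 for g in GAITS]
            + [1.0 if context == c else 0.0 for c in CONTEXTS])

# (gait, context, mean valence rating) observations -- placeholders only.
data = [("happy", "park", 0.8), ("sad", "alley", -0.6),
        ("neutral", "office", 0.1), ("happy", "office", 0.5),
        ("sad", "park", -0.2), ("neutral", "alley", -0.3)]
X = np.array([design_row(g, c) for g, c, _ in data])
y = np.array([r for _, _, r in data])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # intercept and effect estimates
```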
Advisors/Committee Members: LaViers Minnick, Amy (advisor).
Subjects/Keywords: Affect; Gait; Perception; Human-robot interaction (HRI)
APA (6th Edition):
Heimerdinger, M. S. (2017). Influence of environmental context on affect recognition of stylized movements. (Thesis). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/99365
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Heimerdinger, Madison Suzanne. “Influence of environmental context on affect recognition of stylized movements.” 2017. Thesis, University of Illinois – Urbana-Champaign. Accessed December 14, 2019.
http://hdl.handle.net/2142/99365.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Heimerdinger, Madison Suzanne. “Influence of environmental context on affect recognition of stylized movements.” 2017. Web. 14 Dec 2019.
Vancouver:
Heimerdinger MS. Influence of environmental context on affect recognition of stylized movements. [Internet] [Thesis]. University of Illinois – Urbana-Champaign; 2017. [cited 2019 Dec 14].
Available from: http://hdl.handle.net/2142/99365.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Heimerdinger MS. Influence of environmental context on affect recognition of stylized movements. [Thesis]. University of Illinois – Urbana-Champaign; 2017. Available from: http://hdl.handle.net/2142/99365
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Rice University
26.
Losey, Dylan Patrick.
Adaptive and Self-Adjusting Controllers for Safe and Meaningful Human-Robot Interaction during Rehabilitation.
Degree: MS, Engineering, 2016, Rice University
URL: http://hdl.handle.net/1911/96555
This thesis discusses the use of adaptive control within human-robot interaction, and in particular rehabilitation robots, in order to change the perceived closed-loop system dynamics and compensate for unexpected and changing subject behaviors. I first motivate the use of controllers during robotic rehabilitation through a human-subjects study, in which I juxtapose interaction controllers and a novel motor learning protocol, and find that haptic guidance and error augmentation can improve the retention of trained behavior after feedback is removed. Next, I develop an adaptive controller for rigid upper-limb rehabilitation robots, which uses sensorless force estimation to minimize the amount of robotic assistance while also bounding the subject's trajectory errors. Finally, I discuss the use of time domain adaptive control in the context of physically compliant rehabilitation robots – in particular, series elastic actuators – where I discover that adaptive techniques enable passively rendering virtual environments not achievable using existing practices. Each of these adaptive controllers is developed using the theoretical framework of Lyapunov stability analysis, and is tested on single degree-of-freedom robotic hardware. I conclude that adaptive control provides an avenue for safe robotic interaction, both through stability analysis and physical compliance, and can adjust to subjects of various impairment levels to ensure that training is meaningful, in the sense that desired trajectories, interactions, and long-term effects are achieved.
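For readers unfamiliar with Lyapunov-based adaptive control, the sketch below shows the idea on a single-DOF mass with unknown inertia; it is a textbook-style example under assumed gains and dynamics, not one of the thesis's controllers.

```python
# Hedged sketch: adaptive tracking control of a 1-DOF mass m*qdd = u with
# unknown m; the adaptation law keeps a standard Lyapunov function non-increasing.
import numpy as np

m_true = 2.0                      # unknown plant parameter
lam, kd, gamma = 4.0, 8.0, 0.5    # assumed gains
dt, T = 0.001, 5.0

q, qd, m_hat = 0.0, 0.0, 0.5      # state and parameter estimate
for k in range(int(T / dt)):
    t = k * dt
    q_des, qd_des, qdd_des = np.sin(t), np.cos(t), -np.sin(t)   # desired trajectory
    e, ed = q - q_des, qd - qd_des
    s = ed + lam * e                    # composite tracking error
    qdd_r = qdd_des - lam * ed          # reference acceleration
    u = m_hat * qdd_r - kd * s          # certainty-equivalent control law
    m_hat += -gamma * s * qdd_r * dt    # adaptation law
    qdd = u / m_true                    # plant response
    qd += qdd * dt
    q += qd * dt
```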
Advisors/Committee Members: O'Malley, Marcia K (advisor).
Subjects/Keywords: adaptive control; human-robot interaction; rehabilitation robotics
APA (6th Edition):
Losey, D. P. (2016). Adaptive and Self-Adjusting Controllers for Safe and Meaningful Human-Robot Interaction during Rehabilitation. (Masters Thesis). Rice University. Retrieved from http://hdl.handle.net/1911/96555
Chicago Manual of Style (16th Edition):
Losey, Dylan Patrick. “Adaptive and Self-Adjusting Controllers for Safe and Meaningful Human-Robot Interaction during Rehabilitation.” 2016. Masters Thesis, Rice University. Accessed December 14, 2019.
http://hdl.handle.net/1911/96555.
MLA Handbook (7th Edition):
Losey, Dylan Patrick. “Adaptive and Self-Adjusting Controllers for Safe and Meaningful Human-Robot Interaction during Rehabilitation.” 2016. Web. 14 Dec 2019.
Vancouver:
Losey DP. Adaptive and Self-Adjusting Controllers for Safe and Meaningful Human-Robot Interaction during Rehabilitation. [Internet] [Masters thesis]. Rice University; 2016. [cited 2019 Dec 14].
Available from: http://hdl.handle.net/1911/96555.
Council of Science Editors:
Losey DP. Adaptive and Self-Adjusting Controllers for Safe and Meaningful Human-Robot Interaction during Rehabilitation. [Masters Thesis]. Rice University; 2016. Available from: http://hdl.handle.net/1911/96555

University of Sydney
27.
Kaupp, Tobias.
Probabilistic Human-Robot Information Fusion.
Degree: 2008, University of Sydney
URL: http://hdl.handle.net/2123/2554
This thesis is concerned with combining the perceptual abilities of mobile robots and human operators to execute tasks cooperatively. It is generally agreed that a synergy of human and robotic skills offers an opportunity to enhance the capabilities of today's robotic systems, while also increasing their robustness and reliability. Systems which incorporate both human and robotic information sources have the potential to build complex world models, essential for both automated and human decision making.
In this work, humans and robots are regarded as equal team members who interact and communicate on a peer-to-peer basis. Human-robot communication is addressed using probabilistic representations common in robotics. While communication can in general be bidirectional, this work focuses primarily on human-to-robot information flow. More specifically, the approach advocated in this thesis is to let robots fuse their sensor observations with observations obtained from human operators. While robotic perception is well-suited for lower level world descriptions such as geometric properties, humans are able to contribute perceptual information on higher abstraction levels. Human input is translated into the machine representation via Human Sensor Models. A common mathematical framework for humans and robots reinforces the notion of true peer-to-peer interaction.
Human-robot information fusion is demonstrated in two application domains: (1) scalable information gathering, and (2) cooperative decision making. Scalable information gathering is experimentally demonstrated on a system comprised of a ground vehicle, an unmanned air vehicle, and two human operators in a natural environment. Information from humans and robots was fused in a fully decentralised manner to build a shared environment representation on multiple abstraction levels. Results are presented in the form of information exchange patterns, qualitatively demonstrating the benefits of human-robot information fusion.
The second application domain adds decision making to the human-robot task. Rational decisions are made based on the robots' current beliefs which are generated by fusing human and robotic observations. Since humans are considered a valuable resource in this context, operators are only queried for input when the expected benefit of an observation exceeds the cost of obtaining it. The system can be seen as adjusting its autonomy at run-time based on the uncertainty in the robots' beliefs. A navigation task is used to demonstrate the adjustable autonomy system experimentally. Results from two experiments are reported: a quantitative evaluation of human-robot team effectiveness, and a user study to compare the system to classical teleoperation. Results show the superiority of the system with respect to performance, operator workload, and usability.
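A toy sketch of the fusion idea follows: a human report is treated as one more observation with its own likelihood (a "human sensor model") and combined with the robot's sensor likelihood in a Bayesian update. The class set and the numbers are invented for illustration and are not taken from the thesis.

```python
# Hedged sketch: Bayesian fusion of a robot observation and a human report
# over a small set of discrete hypotheses about an object's class.
import numpy as np

classes = ["person", "vehicle", "vegetation"]
prior = np.array([1 / 3, 1 / 3, 1 / 3])

robot_likelihood = np.array([0.5, 0.3, 0.2])      # P(robot features | class)
human_sensor_model = np.array([0.8, 0.15, 0.05])  # P(operator reports "person" | class)

# Treat the two observations as conditionally independent given the class.
posterior = prior * robot_likelihood * human_sensor_model
posterior /= posterior.sum()
print(dict(zip(classes, posterior.round(3))))
```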
Subjects/Keywords: human-robot interaction
APA (6th Edition):
Kaupp, T. (2008). Probabilistic Human-Robot Information Fusion
. (Thesis). University of Sydney. Retrieved from http://hdl.handle.net/2123/2554
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Kaupp, Tobias. “Probabilistic Human-Robot Information Fusion
.” 2008. Thesis, University of Sydney. Accessed December 14, 2019.
http://hdl.handle.net/2123/2554.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Kaupp, Tobias. “Probabilistic Human-Robot Information Fusion
.” 2008. Web. 14 Dec 2019.
Vancouver:
Kaupp T. Probabilistic Human-Robot Information Fusion
. [Internet] [Thesis]. University of Sydney; 2008. [cited 2019 Dec 14].
Available from: http://hdl.handle.net/2123/2554.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Kaupp T. Probabilistic Human-Robot Information Fusion
. [Thesis]. University of Sydney; 2008. Available from: http://hdl.handle.net/2123/2554
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Rice University
28.
Losey, Dylan P.
Responding to Physical Human-Robot Interaction: Theory and Approximations.
Degree: PhD, Engineering, 2018, Rice University
URL: http://hdl.handle.net/1911/105912
This thesis explores how robots should respond to physical human interactions. From surgical devices to assistive arms, robots are becoming an important aspect of our everyday lives. Unlike earlier robots – which were developed for carefully regulated factory settings – today's robots must work alongside human end-users, and even facilitate physical interactions between the robot and the human. Within the current state-of-the-art, the human's intentionally applied forces are treated as unwanted disturbances that the robot should avoid, reject, or ignore: once the human stops interacting, these robots simply return to their original behavior. By contrast, we recognize that physical interactions are really an implicit form of communication: the human is applying forces and torques to correct the robot's behavior, and teach the robot how it should complete its task. Within this work, we demonstrate that optimally responding to physical human interactions results in robots that learn from these corrections and change their underlying behavior.
We first formalize physical human-robot interaction as a partially observable dynamical system, where the human's applied forces and torques are observations about the objective function that the robot should be optimizing, and, more specifically, the human's preferences for how the robot should behave. Solving this system defines the right way for a robot to respond to physical corrections. We derive three approximate solutions for real-time implementation on robotic hardware: these different approximations assume increasing amounts of structure, and consider cases where the robot is given (a) an arbitrary initial trajectory, (b) a parameterized initial trajectory, or (c) the task-related features. We next extend our approximations to account for noisy and imperfect end-users, who may accidentally correct the robot more or less than they intended. We enable robots to reason over what aspects of the human's interaction were intentional, and which of the human's preferences are still unclear. Our overall approach to physical human-robot interaction provides a theoretical basis for robots that both realize why the human is interacting and personalize their behavior in response to that end-user. The feasibility of our theoretical contributions is demonstrated through simulations and user studies.
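To give a flavour of how a physical correction can change a robot's objective rather than being rejected, here is a hedged sketch of one possible update rule, assuming a cost that is linear in hand-designed trajectory features; the features, step size, and trajectories are illustrative assumptions and do not reproduce the thesis's approximations.

```python
# Hedged sketch: shift objective weights so that a human-corrected trajectory
# looks cheaper than the planned one under cost c(traj) = theta . features(traj).
import numpy as np

def features(traj):
    """traj: (T, 3) waypoints -> illustrative features: path length, mean height."""
    length = np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1))
    height = np.mean(traj[:, 2])
    return np.array([length, height])

def update_weights(theta, traj_planned, traj_corrected, alpha=0.1):
    # Online update from a single physical correction.
    return theta - alpha * (features(traj_corrected) - features(traj_planned))

# Example: a push that keeps the end effector lower reduces the height weight.
planned = np.stack([np.linspace(0.0, 1.0, 20)] * 3, axis=1)
corrected = planned.copy()
corrected[:, 2] *= 0.5
theta = update_weights(np.array([1.0, 1.0]), planned, corrected)
```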
Advisors/Committee Members: O'Malley, Marcia K (advisor).
Subjects/Keywords: human-robot interaction; machine learning; optimal control
APA (6th Edition):
Losey, D. P. (2018). Responding to Physical Human-Robot Interaction: Theory and Approximations. (Doctoral Dissertation). Rice University. Retrieved from http://hdl.handle.net/1911/105912
Chicago Manual of Style (16th Edition):
Losey, Dylan P. “Responding to Physical Human-Robot Interaction: Theory and Approximations.” 2018. Doctoral Dissertation, Rice University. Accessed December 14, 2019.
http://hdl.handle.net/1911/105912.
MLA Handbook (7th Edition):
Losey, Dylan P. “Responding to Physical Human-Robot Interaction: Theory and Approximations.” 2018. Web. 14 Dec 2019.
Vancouver:
Losey DP. Responding to Physical Human-Robot Interaction: Theory and Approximations. [Internet] [Doctoral dissertation]. Rice University; 2018. [cited 2019 Dec 14].
Available from: http://hdl.handle.net/1911/105912.
Council of Science Editors:
Losey DP. Responding to Physical Human-Robot Interaction: Theory and Approximations. [Doctoral Dissertation]. Rice University; 2018. Available from: http://hdl.handle.net/1911/105912
29.
Weistroffer, Vincent.
Étude des conditions d'acceptabilité de la collaboration homme-robot en utilisant la réalité virtuelle : Assessing the acceptability of human-robot collaboration using virtual reality.
Degree: Docteur es, Informatique temps réel, robotique et automatique, 2014, Paris, ENMP
URL: http://www.theses.fr/2014ENMP0057
Whether in an industrial or everyday context, robots are becoming increasingly present in our environment and are now capable of interacting with humans. In industrial settings, robots notably assist assembly-line operators with tiring and dangerous tasks. Robots and operators thus come to share the same physical space (copresence) and to carry out tasks together (collaboration). While the safety of humans working near robots must be guaranteed at all times, it must also be determined whether such collaborative work is accepted by operators, in terms of usability and utility. The first research question of this thesis is to determine the important components that come into play in the acceptability of human-robot collaboration, from the operators' point of view. Different factors can influence this acceptability: the robots' appearance and movements, the safety distance, or the mode of interaction with the robot. In order to study as many factors as possible, we propose using virtual reality to conduct user studies in virtual environments. We use questionnaires to collect operators' subjective impressions and physiological measurements to estimate their affective state (stress, effort). The second research question is to determine whether a methodology based on virtual reality is relevant for this evaluation: do the results of tests in a virtual environment faithfully reflect the real situation? To address these questions, three case studies were set up and four experiments were conducted. Two of these experiments were reproduced in both real and virtual environments in order to assess the relevance of the results obtained in the virtual situation compared with the real situation.
Advisors/Committee Members: Fuchs, Philippe (thesis director).
Subjects/Keywords: Collaboration homme-Robot; Réalité virtuelle; Acceptabilité; Facteurs humains; Interaction homme-Robot; Immersion-Interaction; Human-Robot collaboration; Virtual reality; Acceptability; Human factors; Human-Robot interaction; Immersion-Interaction; 004
APA (6th Edition):
Weistroffer, V. (2014). Étude des conditions d'acceptabilité de la collaboration homme-robot en utilisant la réalité virtuelle : Assessing the acceptability of human-robot collaboration using virtual reality. (Doctoral Dissertation). Paris, ENMP. Retrieved from http://www.theses.fr/2014ENMP0057
Chicago Manual of Style (16th Edition):
Weistroffer, Vincent. “Étude des conditions d'acceptabilité de la collaboration homme-robot en utilisant la réalité virtuelle : Assessing the acceptability of human-robot collaboration using virtual reality.” 2014. Doctoral Dissertation, Paris, ENMP. Accessed December 14, 2019.
http://www.theses.fr/2014ENMP0057.
MLA Handbook (7th Edition):
Weistroffer, Vincent. “Étude des conditions d'acceptabilité de la collaboration homme-robot en utilisant la réalité virtuelle : Assessing the acceptability of human-robot collaboration using virtual reality.” 2014. Web. 14 Dec 2019.
Vancouver:
Weistroffer V. Étude des conditions d'acceptabilité de la collaboration homme-robot en utilisant la réalité virtuelle : Assessing the acceptability of human-robot collaboration using virtual reality. [Internet] [Doctoral dissertation]. Paris, ENMP; 2014. [cited 2019 Dec 14].
Available from: http://www.theses.fr/2014ENMP0057.
Council of Science Editors:
Weistroffer V. Étude des conditions d'acceptabilité de la collaboration homme-robot en utilisant la réalité virtuelle : Assessing the acceptability of human-robot collaboration using virtual reality. [Doctoral Dissertation]. Paris, ENMP; 2014. Available from: http://www.theses.fr/2014ENMP0057

University of Technology, Sydney
30.
Piyathilaka, Jayaweera Mudiyanselage Lasitha Chandana.
Affordance-map : learning hidden human context in 3D scenes through virtual human models.
Degree: 2016, University of Technology, Sydney
URL: http://hdl.handle.net/10453/43499
The ability to learn human context in an environment could be one of the most desirable fundamental abilities for a robot sharing workspaces with human co-workers. Arguably, a robot with appropriate human context awareness could lead to better human-robot interaction. This thesis addresses the problem of learning human context in indoor environments by looking only at the geometric features of the environment. The novelty of this concept is that it does not require observing real humans to learn human context. Instead, it uses virtual human models and their relationships with the environment to map hidden human affordances in 3D scenes.
The problem of affordance mapping is formulated as a multi-label classification problem with a binary classifier for each affordance type. Initial experiments showed that the SVM classifier is well suited for affordance mapping. However, the SVM classifier recorded suboptimal results when trained with imbalanced datasets. This imbalance occurs because, in all 3D scenes in the dataset, negative examples outnumber positive examples by a large margin. As a solution, a number of SVM learners designed to tolerate the class imbalance problem were tested for learning the affordance-map. These algorithms showed some tolerance to moderate class imbalances, but failed to perform well for some affordance types.
To mitigate these drawbacks, this thesis proposes the use of a Structured SVM (S-SVM) optimized for F1-score. This approach defines affordance-map building as a structured learning problem and outputs the optimal affordance-map for a given set of features (3D images). In addition, the S-SVM can be learned efficiently even on a large, extremely imbalanced dataset. Further, experimental results show that the S-SVM method outperformed previously used classifiers for mapping affordances.
Finally, this thesis presents two applications of the affordance-map. In the first application, the affordance-map is used by a mobile robot to actively search for computer monitors in an office environment. The orientation and location information of the human models inferred by the affordance-map is used to predict probable locations of computer monitors. Experimental results in a large office environment show that the affordance-map concept simplifies the robot's search strategy. In the second application, the affordance-map is used for context-aware path planning: its human context information allows a service robot to plan paths with minimal distraction to office workers.
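The per-affordance binary classification setup can be sketched as below; this uses scikit-learn's class weighting as a simple counter to the negative-class imbalance and does not reproduce the thesis's structured-SVM formulation. The affordance names and feature layout are assumptions.

```python
# Hedged sketch: one binary classifier per affordance type over geometric
# features, with class weighting to offset the scarce positive examples.
from sklearn.svm import LinearSVC

AFFORDANCES = ["sittable", "walkable", "workable"]

def train_affordance_maps(X, Y):
    """X: (n_locations, n_geometric_features); Y: dict affordance -> 0/1 labels."""
    models = {}
    for name in AFFORDANCES:
        clf = LinearSVC(class_weight="balanced")   # reweight the rare positive class
        clf.fit(X, Y[name])
        models[name] = clf
    return models

def predict_affordance_map(models, X):
    # Multi-label output: a location may afford several activities at once.
    return {name: clf.predict(X) for name, clf in models.items()}
```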
Subjects/Keywords: Robot.; Human context awareness.; Human robot interaction.; Virtual human models.; Affordance-map.; 3D scenes.
APA (6th Edition):
Piyathilaka, J. M. L. C. (2016). Affordance-map : learning hidden human context in 3D scenes through virtual human models. (Thesis). University of Technology, Sydney. Retrieved from http://hdl.handle.net/10453/43499
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Piyathilaka, Jayaweera Mudiyanselage Lasitha Chandana. “Affordance-map : learning hidden human context in 3D scenes through virtual human models.” 2016. Thesis, University of Technology, Sydney. Accessed December 14, 2019.
http://hdl.handle.net/10453/43499.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Piyathilaka, Jayaweera Mudiyanselage Lasitha Chandana. “Affordance-map : learning hidden human context in 3D scenes through virtual human models.” 2016. Web. 14 Dec 2019.
Vancouver:
Piyathilaka JMLC. Affordance-map : learning hidden human context in 3D scenes through virtual human models. [Internet] [Thesis]. University of Technology, Sydney; 2016. [cited 2019 Dec 14].
Available from: http://hdl.handle.net/10453/43499.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Piyathilaka JMLC. Affordance-map : learning hidden human context in 3D scenes through virtual human models. [Thesis]. University of Technology, Sydney; 2016. Available from: http://hdl.handle.net/10453/43499
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
◁ [1] [2] [3] [4] [5] … [13] ▶