You searched for subject:(Human robot interaction)
Showing records 1 – 30 of 476 total matches.

University of Manitoba
1.
Sanoubari, Elaheh.
A Machiavellian robot in the wild, exploiting the culture of passersby to gain more help.
Degree: Computer Science, 2018, University of Manitoba
URL: http://hdl.handle.net/1993/33679
Robots are entering public spaces where they use social techniques to interact with people. Robots can nowadays be found in public spaces such as airports, shopping malls, museums, or hospitals, where they interact with the general public. As these social entities share people's personal spaces and influence their perceptions and actions, we must consider how they interact with people. The impact of a robot's interaction on a person is mediated by many factors, including personal differences and interaction context (Young et al., 2011). For both of these factors, the cultural background of the person is a particularly important component. Culture is deeply intertwined with all aspects of our social behaviors and impacts how we perceive our day-to-day interactions. As such, social robots can use culturally appropriate language to improve how they are perceived by human users (Wang et al., 2010).
In this work, we investigated whether a robot can use social techniques to adapt to people in order to get more help from them. More specifically, we investigated whether a robot can do so by exploiting knowledge of a person's culture. We conducted an in-the-wild experiment to investigate whether a robot adapting to a passerby's culture affects how much help it can get from them. The results of this study indicate a significant increase in the duration of help when the robot adapts to match a passerby's culture compared to when it mismatches.
The results of this experiment contribute to the design of adaptive social robots by showing that it is possible for an agent to influence users' actions by adapting to them. However, as this adaptation can happen without a person's explicit knowledge, it is ethically questionable. By providing this proof of concept, our experiment sheds light on discussions of the ethical aspects of robots interacting with humans in social contexts. Furthermore, we present the study design used for this experiment as a template for in-the-wild studies with cold-calling robots. We propose that researchers can use this template as a starting point and modify it to conduct their own similar robot-in-the-wild research.
Advisors/Committee Members: Young, James (Computer Science) (supervisor), Bunt, Andrea (Computer Science) (examining committee), Loureiro-Rodríguez, Veronica (Linguistics) (examining committee).
Subjects/Keywords: Human-robot interaction
APA (6th Edition):
Sanoubari, E. (2018). A Machiavellian robot in the wild, exploiting the culture of passersby to gain more help. (Masters Thesis). University of Manitoba. Retrieved from http://hdl.handle.net/1993/33679
Chicago Manual of Style (16th Edition):
Sanoubari, Elaheh. “A Machiavellian robot in the wild, exploiting the culture of passersby to gain more help.” 2018. Masters Thesis, University of Manitoba. Accessed March 05, 2021.
http://hdl.handle.net/1993/33679.
MLA Handbook (7th Edition):
Sanoubari, Elaheh. “A Machiavellian robot in the wild, exploiting the culture of passersby to gain more help.” 2018. Web. 05 Mar 2021.
Vancouver:
Sanoubari E. A Machiavellian robot in the wild, exploiting the culture of passersby to gain more help. [Internet] [Masters thesis]. University of Manitoba; 2018. [cited 2021 Mar 05].
Available from: http://hdl.handle.net/1993/33679.
Council of Science Editors:
Sanoubari E. A Machiavellian robot in the wild, exploiting the culture of passersby to gain more help. [Masters Thesis]. University of Manitoba; 2018. Available from: http://hdl.handle.net/1993/33679

Mississippi State University
2.
Wuisan, Stephanie Julike.
Knowledge domains where robots are trusted.
Degree: MS, Computer Science and Engineering, 2015, Mississippi State University
URL: http://sun.library.msstate.edu/ETD-db/theses/available/etd-05202015-115731/
The general public is being exposed to robots more often every day. This thesis focused on advancing research by analyzing whether the type of information provided by a robot determined the level of trust humans have for the robot.
A study was conducted in which participants were asked to answer two different types of questions: mathematical/logical and ethical/social. The participants were divided into two conditions: controlled and misinformed. A humanoid robot provided its own spoken answer after the participants said their answers. The participants then had the chance to select whose answers they would like to keep. During the misinformed condition, there were times when the robot purposely gave incorrect answers. The results of the study support the hypothesis that participants were more likely to select the robot's answers when the question type was mathematical/logical, whether the robot provided a correct or incorrect response.
Advisors/Committee Members: Cindy L. Bethel (chair), Christopher Archibald (committee member), Deborah K. Eakin (committee member).
Subjects/Keywords: human robot interaction; information source
APA (6th Edition):
Wuisan, S. J. (2015). Knowledge domains where robots are trusted. (Masters Thesis). Mississippi State University. Retrieved from http://sun.library.msstate.edu/ETD-db/theses/available/etd-05202015-115731/
Chicago Manual of Style (16th Edition):
Wuisan, Stephanie Julike. “Knowledge domains where robots are trusted.” 2015. Masters Thesis, Mississippi State University. Accessed March 05, 2021.
http://sun.library.msstate.edu/ETD-db/theses/available/etd-05202015-115731/.
MLA Handbook (7th Edition):
Wuisan, Stephanie Julike. “Knowledge domains where robots are trusted.” 2015. Web. 05 Mar 2021.
Vancouver:
Wuisan SJ. Knowledge domains where robots are trusted. [Internet] [Masters thesis]. Mississippi State University; 2015. [cited 2021 Mar 05].
Available from: http://sun.library.msstate.edu/ETD-db/theses/available/etd-05202015-115731/.
Council of Science Editors:
Wuisan SJ. Knowledge domains where robots are trusted. [Masters Thesis]. Mississippi State University; 2015. Available from: http://sun.library.msstate.edu/ETD-db/theses/available/etd-05202015-115731/

Vanderbilt University
3.
Heard, Jamison.
An adaptive supervisory-based human-robot teaming architecture.
Degree: PhD, Electrical Engineering, 2019, Vanderbilt University
URL: http://hdl.handle.net/1803/13921
Changing the ways that robots interact with humans in uncertain, dynamic, and high-intensity environments (e.g., a NASA control room) is needed in order to realize effective human-robot teams. Dynamic domains require innovative human-robot teaming methodologies, which are adaptive in nature, due to varying task demands. These methodologies require mechanisms that can drive the robot's interactions, such that the robot provides valuable contributions to achieving the task while appropriately interacting with, but not hindering, the human. The human's complete workload state can be used to determine robot interactions that may augment team performance, due to the relationship between workload and task performance.
This dissertation developed a workload assessment algorithm capable of estimating overall workload and each workload component (e.g., cognitive, auditory, visual, speech, and physical) in order to provide meaningful information to an adaptive system. The developed algorithm estimated overall workload and each workload component accurately using data from two human-robot teaming evaluations: one peer-based and one supervisory-based. A non-stationary evaluation was conducted in order to validate the algorithm's real-time capabilities. The workload assessment algorithm was incorporated into an adaptive human-robot teaming system architecture, which targeted adaptations toward a workload component. A pilot study demonstrated the adaptive system's ability to improve task performance by adapting system autonomy and interactions.
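The abstract names five workload components feeding an overall estimate. As a rough illustration only (the dissertation's actual estimation algorithm is not given here), a minimal Python sketch of fusing per-component estimates into one score; the feature names, values, and weights are invented assumptions:

```python
# Minimal sketch: combine per-component workload estimates into an overall score.
# Weights and estimate values are illustrative, not the dissertation's algorithm.
COMPONENTS = ("cognitive", "auditory", "visual", "speech", "physical")

def overall_workload(component_estimates, weights=None):
    """Weighted mean of per-component workload estimates, each assumed in [0, 1]."""
    weights = weights or {c: 1.0 for c in COMPONENTS}
    total = sum(weights[c] * component_estimates[c] for c in COMPONENTS)
    return total / sum(weights[c] for c in COMPONENTS)

estimates = {"cognitive": 0.7, "auditory": 0.2, "visual": 0.5, "speech": 0.1, "physical": 0.3}
print(round(overall_workload(estimates), 3))  # an adaptive system could act on this value
```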
Advisors/Committee Members: Alan Peters (committee member), Matthew Weinger (committee member), Terrence Fong (committee member), D. Mitchell Wilkes (committee member), Julie A. Adams (Committee Chair).
Subjects/Keywords: Human-Robot Interaction; Robotics
APA (6th Edition):
Heard, J. (2019). An adaptive supervisory-based human-robot teaming architecture. (Doctoral Dissertation). Vanderbilt University. Retrieved from http://hdl.handle.net/1803/13921
Chicago Manual of Style (16th Edition):
Heard, Jamison. “An adaptive supervisory-based human-robot teaming architecture.” 2019. Doctoral Dissertation, Vanderbilt University. Accessed March 05, 2021.
http://hdl.handle.net/1803/13921.
MLA Handbook (7th Edition):
Heard, Jamison. “An adaptive supervisory-based human-robot teaming architecture.” 2019. Web. 05 Mar 2021.
Vancouver:
Heard J. An adaptive supervisory-based human-robot teaming architecture. [Internet] [Doctoral dissertation]. Vanderbilt University; 2019. [cited 2021 Mar 05].
Available from: http://hdl.handle.net/1803/13921.
Council of Science Editors:
Heard J. An adaptive supervisory-based human-robot teaming architecture. [Doctoral Dissertation]. Vanderbilt University; 2019. Available from: http://hdl.handle.net/1803/13921

University of Manitoba
4.
Seo, Stela.
A simulated robot versus a real robot: an exploration of how robot embodiment impacts people's empathic responses.
Degree: Computer Science, 2015, University of Manitoba
URL: http://hdl.handle.net/1993/30248
In designing and evaluating human-robot interactions and interfaces, researchers often use simulated robots because of the high cost of physical robots and the time required to program them. However, it is important to consider how interaction with a simulated robot differs from interaction with a real robot; that is, do simulated robots provide authentic interaction? We contribute to a growing body of work that explores this question and maps out simulated-versus-real differences by explicitly investigating empathy: how people empathize with a physical or simulated robot when something bad happens to it. Empathy is particularly relevant to social human-robot interaction (HRI) and is integral to, e.g., companion and care robots.
To explore our question, we develop a convincing HRI scenario that induces people's empathy toward a robot, and draw on psychology research for an empathy-measuring instrument. To formally evaluate our scenario and the empathy-measuring instrument in an HRI setting, we conduct a comparative user study: in one condition, participants experience the scenario that induces empathy, and in the other condition, we remove the robot's empathy-inducing activities. With the validated scenario and empathy-measuring instrument, we conduct another user study to explore the difference between a real and a simulated robot in terms of people's empathic responses.
Our results suggest that people empathize more with a physical robot than with a simulated one, a finding that has important implications for the generalizability and applicability of simulated HRI work. As part of our exploration, we additionally present an original and reproducible HRI experimental design for inducing empathy toward robots, and an experimentally validated empathy-measuring instrument from psychology for use in HRI.
Advisors/Committee Members: Young, James E. (Computer Science) (supervisor), Hemmati, Hadi (Computer Science).
Subjects/Keywords: human-robot interaction; empathy
APA (6th Edition):
Seo, S. (2015). A simulated robot versus a real robot: an exploration of how robot embodiment impacts people's empathic responses. (Masters Thesis). University of Manitoba. Retrieved from http://hdl.handle.net/1993/30248
Chicago Manual of Style (16th Edition):
Seo, Stela. “A simulated robot versus a real robot: an exploration of how robot embodiment impacts people's empathic responses.” 2015. Masters Thesis, University of Manitoba. Accessed March 05, 2021.
http://hdl.handle.net/1993/30248.
MLA Handbook (7th Edition):
Seo, Stela. “A simulated robot versus a real robot: an exploration of how robot embodiment impacts people's empathic responses.” 2015. Web. 05 Mar 2021.
Vancouver:
Seo S. A simulated robot versus a real robot: an exploration of how robot embodiment impacts people's empathic responses. [Internet] [Masters thesis]. University of Manitoba; 2015. [cited 2021 Mar 05].
Available from: http://hdl.handle.net/1993/30248.
Council of Science Editors:
Seo S. A simulated robot versus a real robot: an exploration of how robot embodiment impacts people's empathic responses. [Masters Thesis]. University of Manitoba; 2015. Available from: http://hdl.handle.net/1993/30248

University of Illinois – Urbana-Champaign
5.
Jang Sher, Anum.
An embodied, platform-invariant architecture for robotic spatial commands.
Degree: MS, Mechanical Engineering, 2017, University of Illinois – Urbana-Champaign
URL: http://hdl.handle.net/2142/97491
In contexts such as teleoperation, robot reprogramming, human-robot interaction, and neural prosthetics, conveying spatial commands to a robotic platform is often a limiting factor. Currently, many applications rely on joint-angle-by-joint-angle prescriptions. This inherently requires a large number of parameters to be specified by the user, scaling with the number of degrees of freedom on a platform and creating high bandwidth requirements for interfaces. This thesis presents an efficient representation of high-level, spatial commands that specifies many joint angles with relatively few parameters, based on a spatial architecture. To this end, an expressive command architecture is proposed that allows pose generation of simple motion primitives. In particular, a general method is presented for labeling connected platform linkages, generating a databank of user-specified poses, and mapping between high-level spatial commands and specific platform static configurations. Further, this architecture is platform-invariant: the same high-level, spatial command has meaning on any platform. This has the particular advantage that our commands have meaning for human movers as well. In order to achieve this, we draw inspiration from Laban/Bartenieff Movement Studies, an embodied taxonomy for movement description. The final architecture is implemented for twenty-six spatial directions on a Rethink Robotics Baxter and an Aldebaran NAO. Two user studies have been conducted to validate the effectiveness of the proposed framework. Lastly, a workload metric is proposed to quantitatively assess the usability of a machine interface.
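To illustrate the databank idea named in the abstract, here is a minimal sketch assuming a per-platform lookup table of user-specified poses keyed by a platform-invariant spatial command; the command labels and joint values are invented, not the thesis's twenty-six directions:

```python
# Sketch: one platform-invariant spatial command resolves to platform-specific joints.
# Joint names and angle values below are illustrative placeholders.
POSE_DATABANK = {
    "baxter": {
        "forward_high": {"s0": 0.0, "s1": -0.6, "e1": 0.3, "w1": 0.2},
        "side_low":     {"s0": 0.8, "s1":  0.4, "e1": 0.9, "w1": 0.0},
    },
    "nao": {
        "forward_high": {"LShoulderPitch": -0.5, "LElbowRoll": -0.3},
        "side_low":     {"LShoulderPitch":  0.6, "LElbowRoll": -1.0},
    },
}

def resolve_command(platform, spatial_command):
    """Map a platform-invariant spatial command to a stored joint configuration."""
    try:
        return POSE_DATABANK[platform][spatial_command]
    except KeyError as exc:
        raise ValueError(f"No pose stored for {platform!r} / {spatial_command!r}") from exc

print(resolve_command("nao", "forward_high"))
```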
Advisors/Committee Members: LaViers, Amy (advisor).
Subjects/Keywords: Teleoperation; Human-robot interaction; Laban
APA (6th Edition):
Jang Sher, A. (2017). An embodied, platform-invariant architecture for robotic spatial commands. (Thesis). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/97491
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Jang Sher, Anum. “An embodied, platform-invariant architecture for robotic spatial commands.” 2017. Thesis, University of Illinois – Urbana-Champaign. Accessed March 05, 2021.
http://hdl.handle.net/2142/97491.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Jang Sher, Anum. “An embodied, platform-invariant architecture for robotic spatial commands.” 2017. Web. 05 Mar 2021.
Vancouver:
Jang Sher A. An embodied, platform-invariant architecture for robotic spatial commands. [Internet] [Thesis]. University of Illinois – Urbana-Champaign; 2017. [cited 2021 Mar 05].
Available from: http://hdl.handle.net/2142/97491.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Jang Sher A. An embodied, platform-invariant architecture for robotic spatial commands. [Thesis]. University of Illinois – Urbana-Champaign; 2017. Available from: http://hdl.handle.net/2142/97491
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Mississippi State University
6.
Crumpton, Joseph John.
Use of vocal prosody to express emotions in robotic speech.
Degree: PhD, Computer Science and Engineering, 2015, Mississippi State University
URL: http://sun.library.msstate.edu/ETD-db/theses/available/etd-05182015-152915/
Vocal prosody (pitch, timing, loudness, etc.) and its use to convey emotions are essential components of speech communication between humans. The objective of this dissertation research was to determine the efficacy of using varying vocal prosody in robotic speech to convey emotion. Two pilot studies and two experiments were performed to address the shortcomings of previous HRI research in this area.
The pilot studies were used to determine a set of vocal prosody modification values for a female voice model using the MARY speech synthesizer to convey the emotions anger, fear, happiness, and sadness. Experiment 1 validated that participants perceived these emotions, along with a neutral vocal prosody, at rates significantly higher than chance. Four of the vocal prosodies (anger, fear, neutral, and sadness) were recognized at rates approaching the recognition rate (60%) of emotions in person-to-person speech.
During Experiment 2, the robot led participants through a creativity test while making statements using one of the validated emotional vocal prosodies. The ratings of the robot's positive qualities and the creativity scores of the participant group that heard non-negative vocal prosodies (happiness, neutral) did not significantly differ from the ratings and scores of the participant group that heard the negative vocal prosodies (anger, fear, sadness). Therefore, Experiment 2 failed to show that the use of emotional vocal prosody in a robot's speech influenced the participants' appraisal of the robot or the participants' performance on this specific task.
At this time, robot designers and programmers should not expect that vocal prosody alone will have a significant impact on the acceptability or the quality of human-robot interactions. Further research is required to show that multi-modal expressions of emotions by robots (vocal prosody along with facial expressions, body language, or linguistic content) will be effective at improving human-robot interactions.
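For illustration, a minimal sketch of an emotion-to-prosody lookup of the kind the pilot studies produced; the numeric values below are invented placeholders, not the validated MARY settings from the dissertation:

```python
# Sketch: map an emotion label to prosody modification values for an utterance.
# pitch_shift in semitones, rate as a tempo multiplier, volume as a gain factor.
# All numbers are illustrative, not the thesis's empirically derived settings.
PROSODY_BY_EMOTION = {
    "anger":     {"pitch_shift": -2.0, "rate": 1.15, "volume": 1.3},
    "fear":      {"pitch_shift":  3.0, "rate": 1.25, "volume": 1.0},
    "happiness": {"pitch_shift":  2.0, "rate": 1.10, "volume": 1.1},
    "sadness":   {"pitch_shift": -3.0, "rate": 0.85, "volume": 0.8},
    "neutral":   {"pitch_shift":  0.0, "rate": 1.00, "volume": 1.0},
}

def prosody_for(emotion):
    """Return prosody modification values for a robot utterance, defaulting to neutral."""
    return PROSODY_BY_EMOTION.get(emotion, PROSODY_BY_EMOTION["neutral"])

print(prosody_for("sadness"))
```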
Advisors/Committee Members: Cindy L. Bethel (chair), Derek T. Anderson (committee member), J. Edward Swan II (committee member), Byron J. Williams (committee member).
Subjects/Keywords: human-robot interaction; robot; speech synthesizer
APA (6th Edition):
Crumpton, J. J. (2015). Use of vocal prosody to express emotions in robotic speech. (Doctoral Dissertation). Mississippi State University. Retrieved from http://sun.library.msstate.edu/ETD-db/theses/available/etd-05182015-152915/
Chicago Manual of Style (16th Edition):
Crumpton, Joseph John. “Use of vocal prosody to express emotions in robotic speech.” 2015. Doctoral Dissertation, Mississippi State University. Accessed March 05, 2021.
http://sun.library.msstate.edu/ETD-db/theses/available/etd-05182015-152915/.
MLA Handbook (7th Edition):
Crumpton, Joseph John. “Use of vocal prosody to express emotions in robotic speech.” 2015. Web. 05 Mar 2021.
Vancouver:
Crumpton JJ. Use of vocal prosody to express emotions in robotic speech. [Internet] [Doctoral dissertation]. Mississippi State University; 2015. [cited 2021 Mar 05].
Available from: http://sun.library.msstate.edu/ETD-db/theses/available/etd-05182015-152915/.
Council of Science Editors:
Crumpton JJ. Use of vocal prosody to express emotions in robotic speech. [Doctoral Dissertation]. Mississippi State University; 2015. Available from: http://sun.library.msstate.edu/ETD-db/theses/available/etd-05182015-152915/

University of Cambridge
7.
Burke, Michael Glen.
Fast upper body pose estimation for human-robot interaction.
Degree: PhD, 2015, University of Cambridge
URL: https://doi.org/10.17863/CAM.203 ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.690883
This work describes an upper body pose tracker that finds a 3D pose estimate using video sequences obtained from a monocular camera, with applications in human-robot interaction in mind. A novel mixture of Ornstein-Uhlenbeck processes model, trained in a reduced dimensional subspace and designed for analytical tractability, is introduced. This model acts as a collection of mean-reverting random walks that pull towards more commonly observed poses. Pose tracking using this model can be Rao-Blackwellised, allowing for computational efficiency while still incorporating bio-mechanical properties of the upper body. The model is used within a recursive Bayesian framework to provide reliable estimates of upper body pose when only a subset of body joints can be detected. Model training data can be extended through a retargeting process, and better pose coverage obtained through the use of Poisson disk sampling in the model training stage. Results on a number of test datasets show that the proposed approach provides pose estimation accuracy comparable with the state of the art in real time (30 fps) and can be extended to the multiple user case.
As a motivating example, this work also introduces a pantomimic gesture recognition interface. Traditional approaches to gesture recognition for robot control make use of predefined codebooks of gestures, which are mapped directly to the robot behaviours they are intended to elicit. These gesture codewords are typically recognised using algorithms trained on multiple recordings of people performing the predefined gestures. Obtaining these recordings can be expensive and time consuming, and the codebook of gestures may not be particularly intuitive. This thesis presents arguments that pantomimic gestures, which mimic the intended robot behaviours directly, are potentially more intuitive, and proposes a transfer learning approach to recognition, where human hand gestures are mapped to recordings of robot behaviour by extracting temporal and spatial features that are inherently present in both pantomimed actions and robot behaviours. A Bayesian bias compensation scheme is introduced to compensate for potential classification bias in features. Results from a quadrotor behaviour selection problem show that good classification accuracy can be obtained when human hand gestures are recognised using behaviour recordings, and that classification using these behaviour recordings is more robust than using human hand recordings when users are allowed complete freedom over their choice of input gestures.
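As a worked illustration of the mean-reverting random walk underlying the mixture-of-Ornstein-Uhlenbeck-processes model, here is a generic Euler-Maruyama simulation of a single component in a reduced-dimensional pose subspace; parameter values are illustrative, not the trained model's:

```python
# Sketch: one Ornstein-Uhlenbeck component, dx = theta * (mu - x) dt + sigma dW,
# pulling a latent pose coordinate back toward a commonly observed pose mu.
import numpy as np

rng = np.random.default_rng(0)

def ou_step(x, mu, theta=2.0, sigma=0.3, dt=1.0 / 30.0):
    """One Euler-Maruyama step of the mean-reverting process."""
    noise = rng.standard_normal(x.shape)
    return x + theta * (mu - x) * dt + sigma * np.sqrt(dt) * noise

x = np.zeros(3)                  # latent pose coordinates (e.g., a low-dimensional embedding)
mu = np.array([0.5, -0.2, 0.1])  # mean pose of this mixture component
for _ in range(300):             # roughly 10 seconds at 30 fps
    x = ou_step(x, mu)
print(np.round(x, 2))            # hovers near mu with stochastic jitter
```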
Subjects/Keywords: 629.8; Technology; robot; human-robot interaction
APA (6th Edition):
Burke, M. G. (2015). Fast upper body pose estimation for human-robot interaction. (Doctoral Dissertation). University of Cambridge. Retrieved from https://doi.org/10.17863/CAM.203 ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.690883
Chicago Manual of Style (16th Edition):
Burke, Michael Glen. “Fast upper body pose estimation for human-robot interaction.” 2015. Doctoral Dissertation, University of Cambridge. Accessed March 05, 2021.
https://doi.org/10.17863/CAM.203 ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.690883.
MLA Handbook (7th Edition):
Burke, Michael Glen. “Fast upper body pose estimation for human-robot interaction.” 2015. Web. 05 Mar 2021.
Vancouver:
Burke MG. Fast upper body pose estimation for human-robot interaction. [Internet] [Doctoral dissertation]. University of Cambridge; 2015. [cited 2021 Mar 05].
Available from: https://doi.org/10.17863/CAM.203 ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.690883.
Council of Science Editors:
Burke MG. Fast upper body pose estimation for human-robot interaction. [Doctoral Dissertation]. University of Cambridge; 2015. Available from: https://doi.org/10.17863/CAM.203 ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.690883

Texas A&M University
8.
Dufek, Jan.
Best Viewpoints for External Robots or Sensors Assisting Other Robots.
Degree: PhD, Computer Science, 2020, Texas A&M University
URL: http://hdl.handle.net/1969.1/192260
This dissertation creates a model of the value of different external viewpoints of a robot performing tasks. The current state of the practice is to use a teleoperated assistant robot to provide a view of a task being performed by a primary robot. However, there is no existing model of the value of different external viewpoints, and the choice of viewpoints is ad hoc, not always leading to improved performance. This research develops the model using a psychomotor approach based on the cognitive science concept of Gibsonian affordances. The two central tenets of the approach are that the value of a viewpoint depends on the affordances for each action in a task, and that viewpoints in the space surrounding the action can be rated and adjacent viewpoints with similar ratings can be clustered into manifolds of viewpoints with equivalent value. In this approach, viewpoints for the affordances are rated based on the psychomotor behavior of human operators and clustered into manifolds of viewpoints with equivalent value.
The value of 30 viewpoints is quantified in a study with 31 expert robot operators for 4 affordances (Reachability, Passability, Manipulability, and Traversability) using a computer-based simulator of two robots (PackBot and TALON). Adjacent viewpoints with similar values are clustered into ranked manifolds using agglomerative hierarchical clustering with average linkage. The results support the two central tenets and the validity of the affordance-based approach, confirming that there are manifolds of statistically significantly different viewpoint values, that viewpoint values are statistically significantly dependent on the affordances, and that viewpoint values are independent of the robot. Furthermore, the best manifold for each affordance provides a statistically significant improvement with a large Cohen's d effect size (1.1-2.3) in performance (improving time by 14%-59% and reducing errors by 87%-100%) and an improvement in performance variation over the worst manifold. This model creates a fundamental, principled understanding of the utility of external viewpoints based on psychomotor behavior and contributes ranked manifolds of viewpoints for four common affordances. This model will enable autonomous selection of the best possible viewpoint and path planning for the assistant robot.
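A toy sketch of the clustering step named in the abstract, grouping viewpoints into manifolds with agglomerative hierarchical clustering and average linkage via SciPy; the viewpoint coordinates and ratings are invented, not the study's data:

```python
# Sketch: cluster adjacent, similarly rated viewpoints into "manifolds".
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Each row is a viewpoint: (azimuth_rad, elevation_rad, rated_value). Values are invented.
viewpoints = np.array([
    [0.0, 0.5, 0.90], [0.3, 0.5, 0.88], [0.6, 0.4, 0.86],  # a high-value neighborhood
    [2.0, 0.2, 0.40], [2.3, 0.2, 0.42],                    # a mediocre neighborhood
    [3.1, 0.1, 0.10],                                       # a poor, isolated viewpoint
])

Z = linkage(viewpoints, method="average")                 # agglomerative, average linkage
manifold_ids = fcluster(Z, t=0.8, criterion="distance")   # cut the dendrogram into manifolds
print(manifold_ids)  # adjacent, similarly rated viewpoints share a manifold id
```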
Advisors/Committee Members: Murphy, Robin R (advisor), Caverlee, James B (committee member), Peres, Camille S (committee member), Shell, Dylan A (committee member).
Subjects/Keywords: Human-Robot Interaction; Telerobotics; Multi-Robot Systems
APA (6th Edition):
Dufek, J. (2020). Best Viewpoints for External Robots or Sensors Assisting Other Robots. (Doctoral Dissertation). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/192260
Chicago Manual of Style (16th Edition):
Dufek, Jan. “Best Viewpoints for External Robots or Sensors Assisting Other Robots.” 2020. Doctoral Dissertation, Texas A&M University. Accessed March 05, 2021.
http://hdl.handle.net/1969.1/192260.
MLA Handbook (7th Edition):
Dufek, Jan. “Best Viewpoints for External Robots or Sensors Assisting Other Robots.” 2020. Web. 05 Mar 2021.
Vancouver:
Dufek J. Best Viewpoints for External Robots or Sensors Assisting Other Robots. [Internet] [Doctoral dissertation]. Texas A&M University; 2020. [cited 2021 Mar 05].
Available from: http://hdl.handle.net/1969.1/192260.
Council of Science Editors:
Dufek J. Best Viewpoints for External Robots or Sensors Assisting Other Robots. [Doctoral Dissertation]. Texas A&M University; 2020. Available from: http://hdl.handle.net/1969.1/192260

Brigham Young University
9.
Ashcraft, C Chace.
Moderating Influence as a Design Principle for Human-Swarm Interaction.
Degree: MS, 2019, Brigham Young University
URL: https://scholarsarchive.byu.edu/cgi/viewcontent.cgi?article=8406&context=etd
Robot swarms have recently become of interest in both industry and academia for their potential to perform various difficult or dangerous tasks efficiently. As real robot swarms become more of a possibility, many desire swarms to be controlled or directed by a human, which raises questions regarding how that should be done. Part of the challenge of human-swarm interaction is the difficulty of understanding swarm state and of knowing how to drive the swarm to produce emergent behaviors. Human input can inhibit desirable swarm behaviors if that input is poor yet has sufficient influence over the swarm agents, degrading overall performance. Thus, with too little influence, human input is useless, but with too much, it can be destructive. We suggest that there is some middle level, or interval, of human influence that allows the swarm to take advantage of useful human input while minimizing the effect of destructive input. Further, we propose that human-swarm interaction schemes can be designed to maintain an appropriate level of human influence over the swarm and to maintain or improve swarm performance in the presence of both useful and destructive human input.
We test this theory by implementing software that dynamically moderates influence and then evaluating it with a simulated honey bee colony performing nest site selection, simulated human input, and actual human input via a user study. The results suggest that moderating influence, as suggested, is important for maintaining high performance in the presence of both useful and destructive human input. However, while our software seems to successfully moderate influence with simulated human input, it fails to do so with actual human input.
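A minimal sketch of the moderating-influence idea, assuming a simple clamped blending weight between the human command and the swarm's own consensus; the interval bounds and blending rule are illustrative assumptions, not the thesis's mechanism:

```python
# Sketch: keep the human's weight over swarm agents inside a middle interval so
# useful input can help while poor input cannot dominate. Bounds are illustrative.
def moderate(human_command, swarm_consensus, requested_weight, w_min=0.15, w_max=0.45):
    """Blend human input with the swarm's consensus using a clamped influence weight."""
    w = min(max(requested_weight, w_min), w_max)  # clamp influence into [w_min, w_max]
    return w * human_command + (1.0 - w) * swarm_consensus

# A strongly asserted (weight 0.9) but poor human command is damped toward the swarm.
print(moderate(human_command=-1.0, swarm_consensus=0.8, requested_weight=0.9))
```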
Subjects/Keywords: Human swarm interaction; human robot interaction; swarms; robot swarms
APA (6th Edition):
Ashcraft, C. C. (2019). Moderating Influence as a Design Principle for Human-Swarm Interaction. (Masters Thesis). Brigham Young University. Retrieved from https://scholarsarchive.byu.edu/cgi/viewcontent.cgi?article=8406&context=etd
Chicago Manual of Style (16th Edition):
Ashcraft, C Chace. “Moderating Influence as a Design Principle for Human-Swarm Interaction.” 2019. Masters Thesis, Brigham Young University. Accessed March 05, 2021.
https://scholarsarchive.byu.edu/cgi/viewcontent.cgi?article=8406&context=etd.
MLA Handbook (7th Edition):
Ashcraft, C Chace. “Moderating Influence as a Design Principle for Human-Swarm Interaction.” 2019. Web. 05 Mar 2021.
Vancouver:
Ashcraft CC. Moderating Influence as a Design Principle for Human-Swarm Interaction. [Internet] [Masters thesis]. Brigham Young University; 2019. [cited 2021 Mar 05].
Available from: https://scholarsarchive.byu.edu/cgi/viewcontent.cgi?article=8406&context=etd.
Council of Science Editors:
Ashcraft CC. Moderating Influence as a Design Principle for Human-Swarm Interaction. [Masters Thesis]. Brigham Young University; 2019. Available from: https://scholarsarchive.byu.edu/cgi/viewcontent.cgi?article=8406&context=etd

Rutgers University
10.
Liu, Jingjing, 1985-.
Exploiting multispectral and contextual information to improve human detection.
Degree: PhD, Computer Science, 2017, Rutgers University
URL: https://rucore.libraries.rutgers.edu/rutgers-lib/55564/
Human detection has various applications, e.g., autonomous driving, surveillance systems, and retail. In this dissertation, we first exploit multispectral images (i.e., RGB and thermal images) for human detection. We extensively analyze Faster R-CNN for the detection task and then model multispectral human detection as a fusion problem of convolutional networks (ConvNets). We design four distinct ConvNet fusion architectures that integrate two-branch ConvNets at different stages of the network, all of which yield better performance compared with the baseline detector. In the second part of this dissertation, we leverage instance-level contextual information in crowded scenes to boost the performance of human detection. Based on a context graph that incorporates both geometric and social contextual patterns from crowds, we apply a progressive potential propagation algorithm to discover weak detections that are contextually compatible with true detections while suppressing irrelevant false alarms. The method significantly improves the performance of shallow human detectors, obtaining results comparable to deep-learning-based methods.
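To make the two-branch fusion idea concrete, here is a minimal PyTorch sketch that concatenates RGB and thermal feature maps at an intermediate stage; layer sizes are illustrative, and the full Faster R-CNN detector evaluated in the dissertation is not reproduced here:

```python
# Sketch: separate RGB and thermal branches fused by channel concatenation.
import torch
import torch.nn as nn

class MidLevelFusion(nn.Module):
    def __init__(self):
        super().__init__()
        self.rgb_branch = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.thermal_branch = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.fused = nn.Sequential(nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())

    def forward(self, rgb, thermal):
        feats = torch.cat([self.rgb_branch(rgb), self.thermal_branch(thermal)], dim=1)
        return self.fused(feats)  # a downstream detection head would consume this

out = MidLevelFusion()(torch.rand(1, 3, 64, 64), torch.rand(1, 1, 64, 64))
print(out.shape)  # torch.Size([1, 32, 64, 64])
```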
Advisors/Committee Members: Metaxas, Dimitris N. (chair), Bekris, Kostas (internal member), Yu, Jingjin (internal member), Ratha, Nalini K. (outside member), School of Graduate Studies.
Subjects/Keywords: Robotics – Human factors; Human-robot interaction
APA (6th Edition):
Liu, Jingjing, 1. (2017). Exploiting multispectral and contextual information to improve human detection. (Doctoral Dissertation). Rutgers University. Retrieved from https://rucore.libraries.rutgers.edu/rutgers-lib/55564/
Chicago Manual of Style (16th Edition):
Liu, Jingjing, 1985-. “Exploiting multispectral and contextual information to improve human detection.” 2017. Doctoral Dissertation, Rutgers University. Accessed March 05, 2021.
https://rucore.libraries.rutgers.edu/rutgers-lib/55564/.
MLA Handbook (7th Edition):
Liu, Jingjing, 1985-. “Exploiting multispectral and contextual information to improve human detection.” 2017. Web. 05 Mar 2021.
Vancouver:
Liu, Jingjing 1. Exploiting multispectral and contextual information to improve human detection. [Internet] [Doctoral dissertation]. Rutgers University; 2017. [cited 2021 Mar 05].
Available from: https://rucore.libraries.rutgers.edu/rutgers-lib/55564/.
Council of Science Editors:
Liu, Jingjing 1. Exploiting multispectral and contextual information to improve human detection. [Doctoral Dissertation]. Rutgers University; 2017. Available from: https://rucore.libraries.rutgers.edu/rutgers-lib/55564/

University of Illinois – Chicago
11.
Castagneri, Stefano.
Recognizing Collaboration Intent to Control Physical Human-Robot Interaction.
Degree: 2018, University of Illinois – Chicago
URL: http://hdl.handle.net/10027/23053
In recent years, there has been intense interest in collaborative robots, for both industrial and household applications. While significant progress has been made, physical human-robot interaction still presents a challenging problem that has not been satisfactorily solved. When a human is interacting with another human, the forces they exchange represent a communication channel, and a continuous stream of information flows between them. When a human is interacting with a robot, the forces applied by the robot are interpreted by the human, who in turn reacts to them; naturally, people expect the robot to also react to the forces they are applying.
In this research, we identify different types of collaboration during collaborative manipulation and use this information to better control human-robot interaction. We propose a new metric for identifying cooperation intent and study how to best compute, in a real-time application, the interaction force on which our metric is based. We also propose a control framework that uses a set of robot controllers, selected according to the identified collaboration intent, to control the robot during collaborative tasks. Finally, we present our preliminary experiments with the Baxter robot. The experiments were performed in order to understand the precision, repeatability, and safety of the robot using different control approaches. These experiments informed the proposed controllers and are key to their future implementation on the Baxter robot.
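For illustration, here is one common simplified notion of the interaction (internal) force in 1-D collaborative manipulation: the opposing component of the two applied forces. This is an assumption for exposition only, not the thesis's metric:

```python
# Sketch: for two collinear forces (same sign convention), the "interaction" or
# internal force is the opposing part that does no net work on the object.
def interaction_force(f_human, f_robot):
    """Return the internal (squeeze) force between two collinear applied forces."""
    if f_human * f_robot >= 0:               # same direction: no opposing component
        return 0.0
    return min(abs(f_human), abs(f_robot))   # the cancelled-out, opposing component

def net_force(f_human, f_robot):
    return f_human + f_robot

print(interaction_force(5.0, -2.0), net_force(5.0, -2.0))  # 2.0 of opposition, 3.0 net
```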
Advisors/Committee Members: Zefran, Milos (advisor), Di Eugenio, Barbara (committee member), Indri, Marina (committee member), Zefran, Milos (chair).
Subjects/Keywords: Physical Human-Robot Interaction; Robot Control; Baxter Robot
APA (6th Edition):
Castagneri, S. (2018). Recognizing Collaboration Intent to Control Physical Human-Robot Interaction. (Thesis). University of Illinois – Chicago. Retrieved from http://hdl.handle.net/10027/23053
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Castagneri, Stefano. “Recognizing Collaboration Intent to Control Physical Human-Robot Interaction.” 2018. Thesis, University of Illinois – Chicago. Accessed March 05, 2021.
http://hdl.handle.net/10027/23053.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Castagneri, Stefano. “Recognizing Collaboration Intent to Control Physical Human-Robot Interaction.” 2018. Web. 05 Mar 2021.
Vancouver:
Castagneri S. Recognizing Collaboration Intent to Control Physical Human-Robot Interaction. [Internet] [Thesis]. University of Illinois – Chicago; 2018. [cited 2021 Mar 05].
Available from: http://hdl.handle.net/10027/23053.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Castagneri S. Recognizing Collaboration Intent to Control Physical Human-Robot Interaction. [Thesis]. University of Illinois – Chicago; 2018. Available from: http://hdl.handle.net/10027/23053
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Oklahoma State University
12.
Mahi, S. M. Al.
Algorithm for distributed heterogeneous robot-human teams.
Degree: Computer Science, 2020, Oklahoma State University
URL: http://hdl.handle.net/11244/325532
This dissertation presents a set of three closely related studies conducted during my doctoral studies. The studies focus on two important aspects of robotics: control of cooperative multi-robot systems and human-robot interaction. In particular, I investigated the autonomous control of multi-robot systems that use a distributed algorithm for autonomous decision making while also facilitating human interaction with the robots, drawing insights from robotic perception and control and applying them to different heterogeneous multi-robot systems. The result of my research is an integrated control model for a human and multi-robot team system.
My work identified an important knowledge gap and introduced novel tools, with a sound theoretical foundation, to address it. The results were validated with real-world data collected during thoroughly executed experiments. The research is organized into three major studies, documented and published in recognized scientific venues, that together cover many dimensions of multi-robot control and of human interaction with multi-robot systems. These studies involve extensive research, application design, engineering, and development of heterogeneous multi-robot systems with a focus on human-robot interaction. I believe this research contributes to the field of robotics and improves our understanding of multi-robot teams and human interaction.
Advisors/Committee Members: Crick, Christopher John (advisor), Thomas, Johnson P. (committee member), Park, Nohpill (committee member), Fan, Guoliang (committee member).
Subjects/Keywords: coordination; factor graph; human-robot interaction; multi-robot; perception; robot
APA (6th Edition):
Mahi, S. M. A. (2020). Algorithm for distributed heterogeneous robot-human teams. (Thesis). Oklahoma State University. Retrieved from http://hdl.handle.net/11244/325532
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Mahi, S M Al. “Algorithm for distributed heterogeneous robot-human teams.” 2020. Thesis, Oklahoma State University. Accessed March 05, 2021.
http://hdl.handle.net/11244/325532.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Mahi, S M Al. “Algorithm for distributed heterogeneous robot-human teams.” 2020. Web. 05 Mar 2021.
Vancouver:
Mahi SMA. Algorithm for distributed heterogeneous robot-human teams. [Internet] [Thesis]. Oklahoma State University; 2020. [cited 2021 Mar 05].
Available from: http://hdl.handle.net/11244/325532.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Mahi SMA. Algorithm for distributed heterogeneous robot-human teams. [Thesis]. Oklahoma State University; 2020. Available from: http://hdl.handle.net/11244/325532
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Université Montpellier II
13.
Bussy, Antoine.
Approche cognitive pour la représentation de l’interaction proximale haptique entre un homme et un humanoïde : Cognitive approach for representing the haptic physical human-humanoid interaction.
Degree: Docteur es, SYAM - Systèmes Automatiques et Microélectroniques, 2013, Université Montpellier II
URL: http://www.theses.fr/2013MON20090
Robots are very close to arriving in our homes. But before doing so, they must master physical interaction with humans in a safe and efficient way. Such capacities are essential for them to live among us and assist us in various everyday tasks, such as carrying a piece of furniture. In this thesis, we focus on endowing the biped humanoid robot HRP-2 with the capacity to perform haptic joint actions with humans. First, we study how human dyads collaborate to transport a cumbersome object. From this study, we define a global model of motion primitives that we use to implement a proactive behavior on the HRP-2 robot, so that it can perform the same task with a human. Then, we assess the performance of our proactive control scheme through user studies. Finally, we outline several potential extensions of our work: self-stabilization of a humanoid through physical interaction, generalization of the motion-primitive model to other collaborative tasks, and the addition of vision to haptic joint actions.
Advisors/Committee Members: Kheddar, Abderrahmane (thesis director), Crosnier, André (thesis director).
Subjects/Keywords: Haptique; Interaction Physique; Interaction Homme-Robot; Robot Humanoïde; Haptics; Physical Interaction; Human-Robot Interaction; Humanoid Robot
APA (6th Edition):
Bussy, A. (2013). Approche cognitive pour la représentation de l’interaction proximale haptique entre un homme et un humanoïde : Cognitive approach for representing the haptic physical human-humanoid interaction. (Doctoral Dissertation). Université Montpellier II. Retrieved from http://www.theses.fr/2013MON20090
Chicago Manual of Style (16th Edition):
Bussy, Antoine. “Approche cognitive pour la représentation de l’interaction proximale haptique entre un homme et un humanoïde : Cognitive approach for representing the haptic physical human-humanoid interaction.” 2013. Doctoral Dissertation, Université Montpellier II. Accessed March 05, 2021.
http://www.theses.fr/2013MON20090.
MLA Handbook (7th Edition):
Bussy, Antoine. “Approche cognitive pour la représentation de l’interaction proximale haptique entre un homme et un humanoïde : Cognitive approach for representing the haptic physical human-humanoid interaction.” 2013. Web. 05 Mar 2021.
Vancouver:
Bussy A. Approche cognitive pour la représentation de l’interaction proximale haptique entre un homme et un humanoïde : Cognitive approach for representing the haptic physical human-humanoid interaction. [Internet] [Doctoral dissertation]. Université Montpellier II; 2013. [cited 2021 Mar 05].
Available from: http://www.theses.fr/2013MON20090.
Council of Science Editors:
Bussy A. Approche cognitive pour la représentation de l’interaction proximale haptique entre un homme et un humanoïde : Cognitive approach for representing the haptic physical human-humanoid interaction. [Doctoral Dissertation]. Université Montpellier II; 2013. Available from: http://www.theses.fr/2013MON20090

University of Illinois – Chicago
14.
Javaid, Maria.
Communication through Physical Interaction: Robot Assistants for the Elderly.
Degree: 2015, University of Illinois – Chicago
URL: http://hdl.handle.net/10027/19370
This research work is part of a broader project whose aim is to build an effective and user-friendly communication interface for assistive robots that can help the elderly live independently at home. Such a communication interface should incorporate multiple modalities of communication, since collaborative task-oriented human-human communication is inherently multimodal. For this purpose, data was collected from twenty collaborative task-oriented human-human communication sessions between a helper and an elderly person in a realistic setting (a fully functional studio apartment).
My research focuses mainly on collecting physical interaction data in an unobtrusive way during human-human interaction and analyzing that data to determine how it can be incorporated into a communication interface for assistive robots, particularly in the elderly-care domain. To this end, a data glove equipped with pressure sensors was developed. Based on the data collected from this glove, communication through physical interaction during collaborative manipulation of planar objects was studied. Subsequently, an algorithm was developed, based on analysis of the laboratory data, that classifies four different stages of collaborative manipulation of a planar object. This algorithm was later successfully validated in experiments performed in a realistic setting, with subjects performing elderly-care activities, and it determined human-human hand-overs of planar objects in real time.
Beyond understanding communication through physical interaction, this research also presents methods for recognizing various physical manipulation actions that take place when an elderly person is helped by a caregiver with cooking and with setting the dining table. This particular work was motivated by natural-language analysis of the data collected with the helper and the elderly person, which showed that knowledge of such physical manipulation actions helps improve communication through natural language. The physical-interaction-based classification methods were first developed through laboratory experiments and later successfully validated in experiments performed in a realistic setting.
Advisors/Committee Members: Zefran, Milos (advisor), Ben-Arie, Jazekiel (committee member), Di Eugenio, Barbara (committee member), Patton, James (committee member), Steinberg, Arnold (committee member).
Subjects/Keywords: Human-Robot Interaction; Robot Assistants; Physical Interaction; Multimodal Communication; Haptic Communication
APA (6th Edition):
Javaid, M. (2015). Communication through Physical Interaction: Robot Assistants for the Elderly. (Thesis). University of Illinois – Chicago. Retrieved from http://hdl.handle.net/10027/19370
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Javaid, Maria. “Communication through Physical Interaction: Robot Assistants for the Elderly.” 2015. Thesis, University of Illinois – Chicago. Accessed March 05, 2021.
http://hdl.handle.net/10027/19370.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Javaid, Maria. “Communication through Physical Interaction: Robot Assistants for the Elderly.” 2015. Web. 05 Mar 2021.
Vancouver:
Javaid M. Communication through Physical Interaction: Robot Assistants for the Elderly. [Internet] [Thesis]. University of Illinois – Chicago; 2015. [cited 2021 Mar 05].
Available from: http://hdl.handle.net/10027/19370.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Javaid M. Communication through Physical Interaction: Robot Assistants for the Elderly. [Thesis]. University of Illinois – Chicago; 2015. Available from: http://hdl.handle.net/10027/19370
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
15.
Ekhtiarabadi, Afshin Ameri.
Unified Incremental Multimodal Interface for Human-Robot Interaction.
Degree: Design and Engineering, 2011, Mälardalen University
URL: http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-13478
Face-to-face human communication is a multimodal and incremental process. Humans employ different information channels (modalities) for their communication. Since some of these modalities are more error-prone for specific types of data, multimodal communication can benefit from the strengths of each modality and therefore reduce ambiguities during the interaction. Such interfaces can be applied to intelligent robots that operate in close relation with humans. With this approach, robots can communicate with their human colleagues in the same way humans communicate with each other, leading to easier and more robust human-robot interaction (HRI). In this work we suggest a new method for implementing multimodal interfaces in the HRI domain and present the method deployed on an industrial robot. We show that operating the system is made easier by using this interface.
Subjects/Keywords: Multimodal Interaction; Human-Robot Interaction; Human Computer Interaction; Människa-datorinteraktion (interaktionsdesign)
APA (6th Edition):
Ekhtiarabadi, A. A. (2011). Unified Incremental Multimodal Interface for Human-Robot Interaction. (Thesis). Mälardalen University. Retrieved from http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-13478
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Ekhtiarabadi, Afshin Ameri. “Unified Incremental Multimodal Interface for Human-Robot Interaction.” 2011. Thesis, Mälardalen University. Accessed March 05, 2021.
http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-13478.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Ekhtiarabadi, Afshin Ameri. “Unified Incremental Multimodal Interface for Human-Robot Interaction.” 2011. Web. 05 Mar 2021.
Vancouver:
Ekhtiarabadi AA. Unified Incremental Multimodal Interface for Human-Robot Interaction. [Internet] [Thesis]. Mälardalen University; 2011. [cited 2021 Mar 05].
Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-13478.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Ekhtiarabadi AA. Unified Incremental Multimodal Interface for Human-Robot Interaction. [Thesis]. Mälardalen University; 2011. Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-13478
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Rochester Institute of Technology
16.
Hazbar, Tuly.
Task Planning and Execution for Human Robot Team Performing a Shared Task in a Shared Workspace.
Degree: MS, Electrical Engineering, 2019, Rochester Institute of Technology
URL: https://scholarworks.rit.edu/theses/10198
A cyber-physical system is developed to enable a human-robot team to perform a shared task in a shared workspace. The system setup is suitable for the implementation of a tabletop manipulation task, a common human-robot collaboration scenario. The system integrates elements that exist in the physical (real) and the virtual world. In this work, we report the insights we gathered throughout our exploration in understanding and implementing task planning and execution for a human-robot team.
Advisors/Committee Members: Ferat Sahin.
Subjects/Keywords: Cognitive robotics; Digital twin; Human-robot collaboration; Human robot interaction
APA (6th Edition):
Hazbar, T. (2019). Task Planning and Execution for Human Robot Team Performing a Shared Task in a Shared Workspace. (Masters Thesis). Rochester Institute of Technology. Retrieved from https://scholarworks.rit.edu/theses/10198
Chicago Manual of Style (16th Edition):
Hazbar, Tuly. “Task Planning and Execution for Human Robot Team Performing a Shared Task in a Shared Workspace.” 2019. Masters Thesis, Rochester Institute of Technology. Accessed March 05, 2021.
https://scholarworks.rit.edu/theses/10198.
MLA Handbook (7th Edition):
Hazbar, Tuly. “Task Planning and Execution for Human Robot Team Performing a Shared Task in a Shared Workspace.” 2019. Web. 05 Mar 2021.
Vancouver:
Hazbar T. Task Planning and Execution for Human Robot Team Performing a Shared Task in a Shared Workspace. [Internet] [Masters thesis]. Rochester Institute of Technology; 2019. [cited 2021 Mar 05].
Available from: https://scholarworks.rit.edu/theses/10198.
Council of Science Editors:
Hazbar T. Task Planning and Execution for Human Robot Team Performing a Shared Task in a Shared Workspace. [Masters Thesis]. Rochester Institute of Technology; 2019. Available from: https://scholarworks.rit.edu/theses/10198

Georgia Tech
17.
Ma, Mingyue (Lanssie).
Furthering human-robot teaming, interaction, and metrics through computational methods and analysis.
Degree: PhD, Aerospace Engineering, 2019, Georgia Tech
URL: http://hdl.handle.net/1853/62655
Human-robot teaming is a complex design trade space with dynamic aspects and particulars. In order to support future day human-robot teams and scenarios, we need to assist team designers and evaluators in understanding core teaming components. This work is centered around teams that complete space missions and operations. The central scope and theme of this work target the way users should design, evaluate, and think about human-robot teams. This work attempts to do so by defining a framework, conceptual methodology, and operationalized metrics for human-robot teams. We begin by scoping and distilling common components from human-only teaming and human-robot teaming research based in areas such as human factors, cognitive psychology, robotics, and human-robot interaction. Taking these constructs, we derive a framework that describes and organizes the factors, as well as relationships between them. This work also presents a theoretical methodology to support designers to understand the impact teaming components have on expected interaction. This methodology is implemented for four case studies of distinct team types and scenarios including moving furniture, a SWAT team operation, a rover recon, and an in-orbit maintenance mission. After assessing various existing methodologies and perspectives, we derive metrics operationalized from work allocation.
To test these learnings, this work modeled and simulated human-robot teams in action, specifically in an in-orbit maintenance scenario. In addition to analyzing simulation results given different team configurations, task allocations, and teamwork modes, a HITL experiment confirmed a human perspective of robotic team members. This experiment also refines the modeling of teams and validates our performance metrics. This dissertation makes the following contributions to the field of human-robot teaming and interaction: 1) Created a new comprehensive framework for human-robot teaming by combining key components of team design and interaction, 2) Developed a method to identify distinct archetypes of interaction in human-robot teams (and showed how they fit into a universal framework), 3) Derived metrics from the HRT framework to capture the teaming elements beyond performance and efficiency; operationalized the method and metrics in a computational framework for simulation and analysis, 4) Extended existing computational framework for function allocation to include the metrics, 5) Demonstrated the sensitivity of effective teams to attributes of both teamwork and taskwork.
Advisors/Committee Members: Feigh, Karen M. (advisor), Fong, Terry (committee member), Chernova, Sonia (committee member), Goel, Ashok (committee member), Fujimoto, Richard (committee member).
Subjects/Keywords: Human-robot teaming; Human-robot interaction; Work allocation; Modeling; Simulation
APA (6th Edition):
Ma, M. (. (2019). Furthering human-robot teaming, interaction, and metrics through computational methods and analysis. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/62655
Chicago Manual of Style (16th Edition):
Ma, Mingyue (Lanssie). “Furthering human-robot teaming, interaction, and metrics through computational methods and analysis.” 2019. Doctoral Dissertation, Georgia Tech. Accessed March 05, 2021.
http://hdl.handle.net/1853/62655.
MLA Handbook (7th Edition):
Ma, Mingyue (Lanssie). “Furthering human-robot teaming, interaction, and metrics through computational methods and analysis.” 2019. Web. 05 Mar 2021.
Vancouver:
Ma M(. Furthering human-robot teaming, interaction, and metrics through computational methods and analysis. [Internet] [Doctoral dissertation]. Georgia Tech; 2019. [cited 2021 Mar 05].
Available from: http://hdl.handle.net/1853/62655.
Council of Science Editors:
Ma M(. Furthering human-robot teaming, interaction, and metrics through computational methods and analysis. [Doctoral Dissertation]. Georgia Tech; 2019. Available from: http://hdl.handle.net/1853/62655

Tampere University
18.
Bejarano, Ronal.
Design and implementation of a human-robot collaborative assembly workstation in a modular robotized production line.
Degree: 2019, Tampere University
URL: https://trepo.tuni.fi/handle/10024/117392
Over the last decades, the Industrial Automation domain at factory shop floors has experienced exponential growth in the use of robots. The objective of this change is to increase efficiency at reasonable cost. However, not all the tasks formerly performed by humans in factories are fully substituted by robots nowadays, especially the ones requiring a high level of dexterity. In fact, Europe is moving towards implementing efficient workspaces where humans can work safely, aided by robots. In this context, industrial and research sectors have ambitious plans to achieve solutions that involve coexistence and simultaneity at work between humans and collaborative robots, a.k.a. “cobots” or co-robots, permitting safe interaction for the same or interrelated manufacturing processes. Many cobot producers have started to present their products, but these arrived before the industry had clear and specific needs for this particular technology. This work addresses how to demonstrate human-robot collaborative manufacturing, how to implement a dual-arm human-robot collaborative workstation, how to integrate a human-robot collaborative workstation into a modular interconnected production line, and what the advantages and challenges of current HRC technologies at the shop floor are. It does so by documenting the formulation of a human-robot collaborative assembly process, implemented by designing and building an assembly workstation that exemplifies a scenario of interaction between a dual-arm cobot and a human operator assembling a product box, as part of a large-scale modular robotized production line. The model produced by this work is part of the research facilities at the Future Automation Systems and Technologies Laboratory at Tampere University.
Subjects/Keywords: Human-Robot Interaction; Human-Robot Collaboration; Industrial applications; Cobots
APA (6th Edition):
Bejarano, R. (2019). Design and implementation of a human-robot collaborative assembly workstation in a modular robotized production line. (Masters Thesis). Tampere University. Retrieved from https://trepo.tuni.fi/handle/10024/117392
Chicago Manual of Style (16th Edition):
Bejarano, Ronal. “Design and implementation of a human-robot collaborative assembly workstation in a modular robotized production line.” 2019. Masters Thesis, Tampere University. Accessed March 05, 2021.
https://trepo.tuni.fi/handle/10024/117392.
MLA Handbook (7th Edition):
Bejarano, Ronal. “Design and implementation of a human-robot collaborative assembly workstation in a modular robotized production line.” 2019. Web. 05 Mar 2021.
Vancouver:
Bejarano R. Design and implementation of a human-robot collaborative assembly workstation in a modular robotized production line. [Internet] [Masters thesis]. Tampere University; 2019. [cited 2021 Mar 05].
Available from: https://trepo.tuni.fi/handle/10024/117392.
Council of Science Editors:
Bejarano R. Design and implementation of a human-robot collaborative assembly workstation in a modular robotized production line. [Masters Thesis]. Tampere University; 2019. Available from: https://trepo.tuni.fi/handle/10024/117392
19.
Katsila, Athanasia.
Active Peer Pressure in Human-Robot Interaction.
Degree: 2018, University of Nevada – Reno
URL: http://hdl.handle.net/11714/4885
This work investigates how conformity in human-robot groups can be manipulated by the robots’ ability to actively apply coordinated peer pressure via voicing their agreement or disagreement with the human’s selections in a visual task. Through a combination of kinesics and vocalics, we created an environment where the human subject became aware of the fact that they were not just actively being observed (by employing synchronized robot body and camera gaze motion), but were also under the active judgment and criticism of their robot peers. Our experiments consisted of two studies, a 2×2 and a 2×1 that considered a combination of possible conditions such as the sequencing of expressing opinions in the group and the number of robot peers, as well as the difference between unambiguous, ambiguous, and even duplicate selections in the used visual indicators. Using statistical analysis we show that the application of the new idea of “active peer pressure” manages to achieve increased conformity across many of the considered conditions.
Advisors/Committee Members: Feil-Seifer, David (advisor), Nicolescu, Monica (committee member), Kirn, Adam (committee member).
Subjects/Keywords: Active Peer Pressure; Asch; Conformity; Human Robot Groups; Human Robot Interaction
APA (6th Edition):
Katsila, A. (2018). Active Peer Pressure in Human-Robot Interaction. (Thesis). University of Nevada – Reno. Retrieved from http://hdl.handle.net/11714/4885
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Katsila, Athanasia. “Active Peer Pressure in Human-Robot Interaction.” 2018. Thesis, University of Nevada – Reno. Accessed March 05, 2021.
http://hdl.handle.net/11714/4885.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Katsila, Athanasia. “Active Peer Pressure in Human-Robot Interaction.” 2018. Web. 05 Mar 2021.
Vancouver:
Katsila A. Active Peer Pressure in Human-Robot Interaction. [Internet] [Thesis]. University of Nevada – Reno; 2018. [cited 2021 Mar 05].
Available from: http://hdl.handle.net/11714/4885.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Katsila A. Active Peer Pressure in Human-Robot Interaction. [Thesis]. University of Nevada – Reno; 2018. Available from: http://hdl.handle.net/11714/4885
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Colorado School of Mines
20.
Zhu, Lixiao.
Effects of proactive explanations by autonomous systems on human-robot trust.
Degree: MS(M.S.), Computer Science, 2020, Colorado School of Mines
URL: http://hdl.handle.net/11124/174193
Human-Robot Interaction (HRI) concerns the understanding, design, and evaluation of robots for human-robot teams. Previous research has indicated that the performance of human-robot teams depends on human-robot trust, which in turn depends on appropriate robot-to-human transparency. In this thesis, we explore one strategy for improving robot transparency, proactive explanations, and its effect on human-robot trust. We also introduce a resource management testbed, in which human participants engage in a resource management exercise while a robot teammate performs an assistive task. Our results suggest that there is a positive relationship between providing proactive explanations and human-robot trust.
Advisors/Committee Members: Williams, Thomas (advisor), Zhang, Hao (committee member), Mehta, Dinesh P. (committee member).
Subjects/Keywords: human-robot trust; human-robot interaction; proactive explanations
APA (6th Edition):
Zhu, L. (2020). Effects of proactive explanations by autonomous systems on human-robot trust. (Masters Thesis). Colorado School of Mines. Retrieved from http://hdl.handle.net/11124/174193
Chicago Manual of Style (16th Edition):
Zhu, Lixiao. “Effects of proactive explanations by autonomous systems on human-robot trust.” 2020. Masters Thesis, Colorado School of Mines. Accessed March 05, 2021.
http://hdl.handle.net/11124/174193.
MLA Handbook (7th Edition):
Zhu, Lixiao. “Effects of proactive explanations by autonomous systems on human-robot trust.” 2020. Web. 05 Mar 2021.
Vancouver:
Zhu L. Effects of proactive explanations by autonomous systems on human-robot trust. [Internet] [Masters thesis]. Colorado School of Mines; 2020. [cited 2021 Mar 05].
Available from: http://hdl.handle.net/11124/174193.
Council of Science Editors:
Zhu L. Effects of proactive explanations by autonomous systems on human-robot trust. [Masters Thesis]. Colorado School of Mines; 2020. Available from: http://hdl.handle.net/11124/174193

Universiteit Utrecht
21.
Wigdor, N.R.
Conversational Fillers for Response Delay Amelioration in Child-Robot Interaction.
Degree: 2014, Universiteit Utrecht
URL: http://dspace.library.uu.nl:8080/handle/1874/298433
Conversation Fillers (CFs) such as ”um”, ”hmm”, and ”ah” were tested alongside iconic pensive or acknowledging gestures for their effectiveness at mitigating the negative effects associated with unwanted anthropomorphic robot response delay. Employing CFs in interactions with nine- and ten-year-old children was found to be effective at improving perceived speediness, aliveness, humanness, and likability without decreasing perceptions of intelligence, trustworthiness, or autonomy. The results also show that an experimenter covertly crafting a robot’s vocalized response has a slower heart rate and a higher heart rate variability, an indication of a lower stress level, when the robot is filling the associated delay with CFs than when not.
Advisors/Committee Members: Marcel van Aken, John Jules Meyer.
Subjects/Keywords: Robot; conversational fillers; child-robot interaction; human-robot interaction; delay mitigation; filling
APA (6th Edition):
Wigdor, N. R. (2014). Conversational Fillers for Response Delay Amelioration in Child-Robot Interaction. (Masters Thesis). Universiteit Utrecht. Retrieved from http://dspace.library.uu.nl:8080/handle/1874/298433
Chicago Manual of Style (16th Edition):
Wigdor, N R. “Conversational Fillers for Response Delay Amelioration in Child-Robot Interaction.” 2014. Masters Thesis, Universiteit Utrecht. Accessed March 05, 2021.
http://dspace.library.uu.nl:8080/handle/1874/298433.
MLA Handbook (7th Edition):
Wigdor, N R. “Conversational Fillers for Response Delay Amelioration in Child-Robot Interaction.” 2014. Web. 05 Mar 2021.
Vancouver:
Wigdor NR. Conversational Fillers for Response Delay Amelioration in Child-Robot Interaction. [Internet] [Masters thesis]. Universiteit Utrecht; 2014. [cited 2021 Mar 05].
Available from: http://dspace.library.uu.nl:8080/handle/1874/298433.
Council of Science Editors:
Wigdor NR. Conversational Fillers for Response Delay Amelioration in Child-Robot Interaction. [Masters Thesis]. Universiteit Utrecht; 2014. Available from: http://dspace.library.uu.nl:8080/handle/1874/298433

Cranfield University
22.
Tang, Gilbert.
The development of a human-robot interface for industrial collaborative system.
Degree: PhD, 2016, Cranfield University
URL: http://dspace.lib.cranfield.ac.uk/handle/1826/10213
Industrial robots have been identified as one of the most effective solutions for optimising output and quality within many industries. However, there are a number of manufacturing applications involving complex tasks and inconstant components which prohibit the use of fully automated solutions in the foreseeable future.
A breakthrough in robotic technologies and changes in safety legislation have supported the creation of robots that coexist with and assist humans in industrial applications. It has been broadly recognised that human-robot collaborative systems would be a realistic solution as an advanced production system with a wide range of applications and high economic impact. This type of system can utilise the best of both worlds, where the robot performs simple tasks that require high repeatability while the human performs tasks that require judgement and the dexterity of the human hands. Robots in such a system operate as “intelligent assistants”.
In a collaborative working environment, robot and human share the same working area and interact with each other. This level of interface requires effective ways of communication and collaboration to avoid unwanted conflicts. This project aims to create a user interface for an industrial collaborative robot system through integration of current robotic technologies. The robotic system is designed for seamless collaboration with a human in close proximity. The system is capable of communicating with the human via the exchange of gestures, as well as visual signals which operators can observe and comprehend at a glance.
The main objective of this PhD is to develop a Human-Robot Interface (HRI) for communication with an industrial collaborative robot during collaboration in proximity. The system is developed in conjunction with a small-scale collaborative robot system which has been integrated using off-the-shelf components. The system should be capable of receiving input from the human user via an intuitive method as well as indicating its status to the user effectively. The HRI is developed using a combination of hardware integrations and software developments. The software and the control framework were developed in a way that is applicable to other industrial robots in the future. The developed gesture command system is demonstrated on a heavy-duty industrial robot.
Subjects/Keywords: Human-robot interface; gesture control; human-robot interaction; system communication; teleoperation; automation; robot assistant
APA (6th Edition):
Tang, G. (2016). The development of a human-robot interface for industrial collaborative system. (Doctoral Dissertation). Cranfield University. Retrieved from http://dspace.lib.cranfield.ac.uk/handle/1826/10213
Chicago Manual of Style (16th Edition):
Tang, Gilbert. “The development of a human-robot interface for industrial collaborative system.” 2016. Doctoral Dissertation, Cranfield University. Accessed March 05, 2021.
http://dspace.lib.cranfield.ac.uk/handle/1826/10213.
MLA Handbook (7th Edition):
Tang, Gilbert. “The development of a human-robot interface for industrial collaborative system.” 2016. Web. 05 Mar 2021.
Vancouver:
Tang G. The development of a human-robot interface for industrial collaborative system. [Internet] [Doctoral dissertation]. Cranfield University; 2016. [cited 2021 Mar 05].
Available from: http://dspace.lib.cranfield.ac.uk/handle/1826/10213.
Council of Science Editors:
Tang G. The development of a human-robot interface for industrial collaborative system. [Doctoral Dissertation]. Cranfield University; 2016. Available from: http://dspace.lib.cranfield.ac.uk/handle/1826/10213

Cranfield University
23.
Tang, Gilbert.
The development of a human-robot interface for industrial collaborative system.
Degree: PhD, 2016, Cranfield University
URL: http://dspace.lib.cranfield.ac.uk/handle/1826/10213
;
http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.691026
Industrial robots have been identified as one of the most effective solutions for optimising output and quality within many industries. However, there are a number of manufacturing applications involving complex tasks and inconstant components which prohibit the use of fully automated solutions in the foreseeable future. A breakthrough in robotic technologies and changes in safety legislation have supported the creation of robots that coexist with and assist humans in industrial applications. It has been broadly recognised that human-robot collaborative systems would be a realistic solution as an advanced production system with a wide range of applications and high economic impact. This type of system can utilise the best of both worlds, where the robot performs simple tasks that require high repeatability while the human performs tasks that require judgement and the dexterity of the human hands. Robots in such a system operate as “intelligent assistants”. In a collaborative working environment, robot and human share the same working area and interact with each other. This level of interface requires effective ways of communication and collaboration to avoid unwanted conflicts. This project aims to create a user interface for an industrial collaborative robot system through integration of current robotic technologies. The robotic system is designed for seamless collaboration with a human in close proximity. The system is capable of communicating with the human via the exchange of gestures, as well as visual signals which operators can observe and comprehend at a glance. The main objective of this PhD is to develop a Human-Robot Interface (HRI) for communication with an industrial collaborative robot during collaboration in proximity. The system is developed in conjunction with a small-scale collaborative robot system which has been integrated using off-the-shelf components. The system should be capable of receiving input from the human user via an intuitive method as well as indicating its status to the user effectively. The HRI is developed using a combination of hardware integrations and software developments. The software and the control framework were developed in a way that is applicable to other industrial robots in the future. The developed gesture command system is demonstrated on a heavy-duty industrial robot.
Subjects/Keywords: 629.8; Human-robot interface; gesture control; human-robot interaction; system communication; teleoperation; automation; robot assistant
APA (6th Edition):
Tang, G. (2016). The development of a human-robot interface for industrial collaborative system. (Doctoral Dissertation). Cranfield University. Retrieved from http://dspace.lib.cranfield.ac.uk/handle/1826/10213 ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.691026
Chicago Manual of Style (16th Edition):
Tang, Gilbert. “The development of a human-robot interface for industrial collaborative system.” 2016. Doctoral Dissertation, Cranfield University. Accessed March 05, 2021.
http://dspace.lib.cranfield.ac.uk/handle/1826/10213 ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.691026.
MLA Handbook (7th Edition):
Tang, Gilbert. “The development of a human-robot interface for industrial collaborative system.” 2016. Web. 05 Mar 2021.
Vancouver:
Tang G. The development of a human-robot interface for industrial collaborative system. [Internet] [Doctoral dissertation]. Cranfield University; 2016. [cited 2021 Mar 05].
Available from: http://dspace.lib.cranfield.ac.uk/handle/1826/10213 ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.691026.
Council of Science Editors:
Tang G. The development of a human-robot interface for industrial collaborative system. [Doctoral Dissertation]. Cranfield University; 2016. Available from: http://dspace.lib.cranfield.ac.uk/handle/1826/10213 ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.691026
24.
高橋, 達.
高齢者の発話機会の増加を目的としたソーシャルメディア仲介ロボット : Mediation Robots as Social Media for Increasing an Opportunity of Conversation for Elderly; コウレイシャ ノ ハツワ キカイ ノ ゾウカ オ モクテキ ト シタ ソーシャル メディア チュウカイ ロボット.
Degree: Nara Institute of Science and Technology / 奈良先端科学技術大学院大学
URL: http://hdl.handle.net/10061/8719
Subjects/Keywords: Human-Robot Interaction
APA (6th Edition):
高橋, . (n.d.). 高齢者の発話機会の増加を目的としたソーシャルメディア仲介ロボット : Mediation Robots as Social Media for Increasing an Opportunity of Conversation for Elderly; コウレイシャ ノ ハツワ キカイ ノ ゾウカ オ モクテキ ト シタ ソーシャル メディア チュウカイ ロボット. (Thesis). Nara Institute of Science and Technology / 奈良先端科学技術大学院大学. Retrieved from http://hdl.handle.net/10061/8719
Note: this citation may be lacking information needed for this citation format:
No year of publication.
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
高橋, 達. “高齢者の発話機会の増加を目的としたソーシャルメディア仲介ロボット : Mediation Robots as Social Media for Increasing an Opportunity of Conversation for Elderly; コウレイシャ ノ ハツワ キカイ ノ ゾウカ オ モクテキ ト シタ ソーシャル メディア チュウカイ ロボット.” Thesis, Nara Institute of Science and Technology / 奈良先端科学技術大学院大学. Accessed March 05, 2021.
http://hdl.handle.net/10061/8719.
Note: this citation may be lacking information needed for this citation format:
No year of publication.
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
高橋, 達. “高齢者の発話機会の増加を目的としたソーシャルメディア仲介ロボット : Mediation Robots as Social Media for Increasing an Opportunity of Conversation for Elderly; コウレイシャ ノ ハツワ キカイ ノ ゾウカ オ モクテキ ト シタ ソーシャル メディア チュウカイ ロボット.” Web. 05 Mar 2021.
Note: this citation may be lacking information needed for this citation format:
No year of publication.
Vancouver:
高橋 . 高齢者の発話機会の増加を目的としたソーシャルメディア仲介ロボット : Mediation Robots as Social Media for Increasing an Opportunity of Conversation for Elderly; コウレイシャ ノ ハツワ キカイ ノ ゾウカ オ モクテキ ト シタ ソーシャル メディア チュウカイ ロボット. [Internet] [Thesis]. Nara Institute of Science and Technology / 奈良先端科学技術大学院大学; [cited 2021 Mar 05].
Available from: http://hdl.handle.net/10061/8719.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
No year of publication.
Council of Science Editors:
高橋 . 高齢者の発話機会の増加を目的としたソーシャルメディア仲介ロボット : Mediation Robots as Social Media for Increasing an Opportunity of Conversation for Elderly; コウレイシャ ノ ハツワ キカイ ノ ゾウカ オ モクテキ ト シタ ソーシャル メディア チュウカイ ロボット. [Thesis]. Nara Institute of Science and Technology / 奈良先端科学技術大学院大学; Available from: http://hdl.handle.net/10061/8719
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
No year of publication.

Vanderbilt University
25.
Young, Eric Michael.
Low-Cost, Intention-Detecting Robot to Assist the Movement of an Impaired Upper Limb.
Degree: MS, Mechanical Engineering, 2015, Vanderbilt University
URL: http://hdl.handle.net/1803/13945
In recent years, robotic rehabilitation has proven to be beneficial for individuals with impaired limbs, particularly due to the potential of robotic therapists to be more accessible, consistent and cost-effective than their human counterparts. While pursuing better rehabilitation methods is a crucial endeavor, it is also important to acknowledge that many people need an alternative form of assistance for physical impairments, both while undergoing rehabilitation and in the unfortunate but common scenario of rehabilitation providing insufficient improvements. The aim of this thesis is to present a low-cost robotic assistive device which may serve as a complement to rehabilitation procedures. The proposed system determines the intended movement of a user’s upper arm via eye-gaze inputs and force inputs, and physically assists said movement. In this manner, the system may provide immediate relief for someone suffering from physical impairments in their upper limbs, either as a complement to ongoing rehabilitation therapy or as a partial solution in the case of insufficient improvements from rehabilitation.
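As a rough illustration of how gaze and force cues might be combined to infer intent, the sketch below assists only when the user's applied force points roughly toward the object they are looking at. The function, thresholds, and sensor interface are hypothetical and not taken from the thesis.

```python
import numpy as np

def infer_assist_direction(gaze_target, hand_pos, applied_force,
                           force_threshold=2.0, agreement_threshold=0.5):
    """Hypothetical intention detector: assist only when the user's force
    points roughly toward the object they are looking at.

    gaze_target, hand_pos: 3-D positions in metres; applied_force: 3-D force in N.
    Returns a unit vector to assist along, or None when intent is unclear.
    """
    force_mag = np.linalg.norm(applied_force)
    if force_mag < force_threshold:          # user is not actively pushing
        return None
    to_target = gaze_target - hand_pos
    to_target = to_target / np.linalg.norm(to_target)  # direction suggested by gaze
    force_dir = applied_force / force_mag              # direction suggested by the arm
    if np.dot(to_target, force_dir) < agreement_threshold:
        return None                          # cues disagree: do not assist
    return to_target                         # assist toward the gazed target
```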
Advisors/Committee Members: Thomas Withrow (committee member), Zachary Warren (committee member), Nilanjan Sarkar (Committee Chair).
Subjects/Keywords: human-robot interaction; intention detection; robotic assistance
APA (6th Edition):
Young, E. M. (2015). Low-Cost, Intention-Detecting Robot to Assist the Movement of an Impaired Upper Limb. (Thesis). Vanderbilt University. Retrieved from http://hdl.handle.net/1803/13945
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Young, Eric Michael. “Low-Cost, Intention-Detecting Robot to Assist the Movement of an Impaired Upper Limb.” 2015. Thesis, Vanderbilt University. Accessed March 05, 2021.
http://hdl.handle.net/1803/13945.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Young, Eric Michael. “Low-Cost, Intention-Detecting Robot to Assist the Movement of an Impaired Upper Limb.” 2015. Web. 05 Mar 2021.
Vancouver:
Young EM. Low-Cost, Intention-Detecting Robot to Assist the Movement of an Impaired Upper Limb. [Internet] [Thesis]. Vanderbilt University; 2015. [cited 2021 Mar 05].
Available from: http://hdl.handle.net/1803/13945.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Young EM. Low-Cost, Intention-Detecting Robot to Assist the Movement of an Impaired Upper Limb. [Thesis]. Vanderbilt University; 2015. Available from: http://hdl.handle.net/1803/13945
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

University of Waterloo
26.
Lin, Daiwei.
Spatially-Distributed Interactive Behaviour Generation for Architecture-Scale Systems Based on Reinforcement Learning.
Degree: 2020, University of Waterloo
URL: http://hdl.handle.net/10012/15648
► This thesis is part of the research activities of the Living Architecture System Group (LASG). LASG develops immersive, interactive art sculptures combining concepts of architecture,…
(more)
▼ This thesis is part of the research activities of the Living Architecture System Group (LASG). LASG develops immersive, interactive art sculptures combining concepts of architecture, art, and electronics which allow occupants to interact with immersively. The primary goal of this research is to investigate the design of effective human-robot interaction behaviours using reinforcement learning. In this thesis, reinforcement learning is used adapt human designed behaviours to maximize occupant engagement.
Algorithms were tested in a simulation environment created using Unity. The system developed by LASG was simulated and simplified human visitor models are designed for the tests. Three adaptive behaviour modes and two exploration methods were compared in the simulated environment. We showed that reinforcement learning algorithms can learn to increase engagement by adapting to visitors' preferences and exploring with parameter noise performed better than action noise because of wider exploration.
A field study was conducted based on the LASG's installation Aegis, Transforming Space exhibition at the Royal Ontario Museum (ROM) from June 2nd to October 8th, 2018. The experiment was conducted in a natural setting where no constraints are imposed on visitors and group interaction is accommodated. Experimental results demonstrated that learning on top of human designed pre-scripted behaviours (PLA) is better at increasing visitors engagement than only using pre-scripted behaviours (PB). Visitor responses to the GodSpeed standardized questionnaire suggested that PLA is more highly rated than PB in terms of Likeability and interactivity.
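The distinction between action-noise and parameter-noise exploration mentioned in this abstract can be sketched as follows; the linear policy, noise scales, and observation vectors are placeholders rather than the LASG implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def policy(params, observation):
    """Placeholder linear policy mapping an observation vector to one actuation level."""
    return float(np.tanh(params @ observation))

def act_with_action_noise(params, observation, sigma=0.1):
    # Explore by perturbing the chosen action: an independent perturbation each step.
    return policy(params, observation) + rng.normal(0.0, sigma)

def run_episode_with_parameter_noise(params, observations, sigma=0.1):
    # Explore by perturbing the policy weights once per episode, so the perturbed
    # behaviour stays consistent for the whole interaction and explores the
    # behaviour space more widely than independent per-step action noise.
    noisy_params = params + rng.normal(0.0, sigma, size=params.shape)
    return [policy(noisy_params, obs) for obs in observations]
```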
Subjects/Keywords: reinforcement learning; human-robot interaction; architecture-scale
APA (6th Edition):
Lin, D. (2020). Spatially-Distributed Interactive Behaviour Generation for Architecture-Scale Systems Based on Reinforcement Learning. (Thesis). University of Waterloo. Retrieved from http://hdl.handle.net/10012/15648
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Lin, Daiwei. “Spatially-Distributed Interactive Behaviour Generation for Architecture-Scale Systems Based on Reinforcement Learning.” 2020. Thesis, University of Waterloo. Accessed March 05, 2021.
http://hdl.handle.net/10012/15648.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Lin, Daiwei. “Spatially-Distributed Interactive Behaviour Generation for Architecture-Scale Systems Based on Reinforcement Learning.” 2020. Web. 05 Mar 2021.
Vancouver:
Lin D. Spatially-Distributed Interactive Behaviour Generation for Architecture-Scale Systems Based on Reinforcement Learning. [Internet] [Thesis]. University of Waterloo; 2020. [cited 2021 Mar 05].
Available from: http://hdl.handle.net/10012/15648.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Lin D. Spatially-Distributed Interactive Behaviour Generation for Architecture-Scale Systems Based on Reinforcement Learning. [Thesis]. University of Waterloo; 2020. Available from: http://hdl.handle.net/10012/15648
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

University of Illinois – Chicago
27.
Parastegari, Mohammad Sina.
Modelling and Control of Object Handover, A Study in Human-Robot Interaction.
Degree: 2018, University of Illinois – Chicago
URL: http://hdl.handle.net/10027/22716
Object handover is a common physical interaction between humans. It is thus also of significant interest for human-robot interaction. In this work, we are focused on robot-to-human object handover. To implement the task on the robot, the configuration (position and orientation) in which the object is transferred should be selected so that the handover is safe and comfortable for the human. The trajectory along which the robot moves the object to the point of transfer should also be selected so that the robot's intention is clear and the handover feels natural to the human. We propose to select the configuration for the transfer and the trajectory to reach this configuration based on what humans do in human-human handover. We describe a human study designed to investigate human-human handover and propose an ergonomic model that can predict the object transfer position observed in the study. A human-robot experiment is then conducted that shows that the proposed model generates transfer positions that match the preferred height and distance relative to the human.
Another significant challenge in robot-to-human handover is how to reduce the failure rate, i.e., ensuring that the object does not fall (object safety), while at the same time allowing the human to easily acquire the object (smoothness). To endow the robot with a failure recovery mechanism, we investigate how humans detect failure during the transfer phase of the handover. We conduct a human study that shows that a human giver primarily relies on vision rather than haptic sensing to detect the fall of the object. Motivated by this study, a robotic handover system is proposed that consists of a motion sensor attached to the robot's gripper, a force sensor at the base of the gripper, and a controller that is capable of re-grasping the object if it starts falling. The proposed system is implemented on a Baxter robot and is shown to achieve a smooth and safe handover.
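A minimal sketch of the release-and-re-grasp idea described in this abstract, assuming a hypothetical gripper interface and illustrative thresholds (neither is taken from the thesis):

```python
def handover_step(pull_force_n, object_speed_mps, gripper,
                  release_threshold=4.0, fall_speed_threshold=0.2):
    """One control step of a hypothetical robot-to-human handover controller.

    pull_force_n: force the receiver exerts on the object, estimated from the
        force sensor at the base of the gripper.
    object_speed_mps: downward speed of the released object, from the motion
        sensor on the gripper.
    gripper: object exposing is_open(), open(), and close() (hypothetical API).
    """
    if not gripper.is_open():
        if pull_force_n > release_threshold:
            gripper.open()       # the receiver is clearly holding on: release
    elif object_speed_mps > fall_speed_threshold:
        gripper.close()          # the object is dropping rather than being
                                 # carried away: re-grasp to prevent the fall
```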
Advisors/Committee Members: Zefran, Milos (advisor), Soltanalian, Mojtaba (committee member), Patton, Jim (committee member), Ziebart, Brian (committee member), Berniker, Max (committee member), Zefran, Milos (chair).
Subjects/Keywords: Physical Human-Robot Interaction; Object Handover
APA (6th Edition):
Parastegari, M. S. (2018). Modelling and Control of Object Handover, A Study in Human-Robot Interaction. (Thesis). University of Illinois – Chicago. Retrieved from http://hdl.handle.net/10027/22716
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Parastegari, Mohammad Sina. “Modelling and Control of Object Handover, A Study in Human-Robot Interaction.” 2018. Thesis, University of Illinois – Chicago. Accessed March 05, 2021.
http://hdl.handle.net/10027/22716.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Parastegari, Mohammad Sina. “Modelling and Control of Object Handover, A Study in Human-Robot Interaction.” 2018. Web. 05 Mar 2021.
Vancouver:
Parastegari MS. Modelling and Control of Object Handover, A Study in Human-Robot Interaction. [Internet] [Thesis]. University of Illinois – Chicago; 2018. [cited 2021 Mar 05].
Available from: http://hdl.handle.net/10027/22716.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Parastegari MS. Modelling and Control of Object Handover, A Study in Human-Robot Interaction. [Thesis]. University of Illinois – Chicago; 2018. Available from: http://hdl.handle.net/10027/22716
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Rice University
28.
Losey, Dylan P.
Responding to Physical Human-Robot Interaction: Theory and Approximations.
Degree: PhD, Engineering, 2018, Rice University
URL: http://hdl.handle.net/1911/105912
This thesis explores how robots should respond to physical human interactions. From surgical devices to assistive arms, robots are becoming an important aspect of our everyday lives. Unlike earlier robots – which were developed for carefully regulated factory settings – today's robots must work alongside human end-users, and even facilitate physical interactions between the robot and the human. Within the current state-of-the-art, the human's intentionally applied forces are treated as unwanted disturbances that the robot should avoid, reject, or ignore: once the human stops interacting, these robots simply return to their original behavior. By contrast, we recognize that physical interactions are really an implicit form of communication: the human is applying forces and torques to correct the robot's behavior, and teach the robot how it should complete its task. Within this work, we demonstrate that optimally responding to physical human interactions results in robots that learn from these corrections and change their underlying behavior.
We first formalize physical human-robot interaction as a partially observable dynamical system, where the human's applied forces and torques are observations about the objective function that the robot should be optimizing, and, more specifically, the human's preferences for how the robot should behave. Solving this system defines the right way for a robot to respond to physical corrections. We derive three approximate solutions for real-time implementation on robotic hardware: these different approximations assume increasing amounts of structure, and consider cases where the robot is given (a) an arbitrary initial trajectory, (b) a parameterized initial trajectory, or (c) the task-related features. We next extend our approximations to account for noisy and imperfect end-users, who may accidentally correct the robot more or less than they intended. We enable robots to reason over what aspects of the human's interaction were intentional, and which of the human's preferences are still unclear. Our overall approach to physical human-robot interaction provides a theoretical basis for robots that both realize why the human is interacting and personalize their behavior in response to that end-user. The feasibility of our theoretical contributions is demonstrated through simulations and user studies.
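One way to make the "corrections as observations of the objective" idea concrete is an online update of feature weights toward the human-corrected trajectory. The sketch below is a generic illustration under that assumption, with made-up features and numbers; it is not the specific approximations derived in the dissertation.

```python
import numpy as np

def update_objective_weights(theta, features_corrected, features_planned, alpha=0.1):
    """Nudge the robot's feature weights toward the trajectory the human
    physically corrected it to, treating the correction as evidence about
    the objective (a generic sketch, not the dissertation's method).

    theta: current weights over task features (e.g. obstacle distance, effort).
    features_*: total feature counts along the corrected and planned trajectories.
    """
    return theta + alpha * (features_corrected - features_planned)

# Toy example with two hypothetical features: [closeness to table, path length].
theta = np.array([0.0, 1.0])
theta = update_objective_weights(theta,
                                 features_corrected=np.array([0.8, 1.2]),
                                 features_planned=np.array([0.2, 1.0]))
# theta is now [0.06, 1.02]: the push raised the weight on staying near the
# table, so the next re-planned trajectory will reflect that preference.
```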
Advisors/Committee Members: O'Malley, Marcia K (advisor).
Subjects/Keywords: human-robot interaction; machine learning; optimal control
APA (6th Edition):
Losey, D. P. (2018). Responding to Physical Human-Robot Interaction: Theory and Approximations. (Doctoral Dissertation). Rice University. Retrieved from http://hdl.handle.net/1911/105912
Chicago Manual of Style (16th Edition):
Losey, Dylan P. “Responding to Physical Human-Robot Interaction: Theory and Approximations.” 2018. Doctoral Dissertation, Rice University. Accessed March 05, 2021.
http://hdl.handle.net/1911/105912.
MLA Handbook (7th Edition):
Losey, Dylan P. “Responding to Physical Human-Robot Interaction: Theory and Approximations.” 2018. Web. 05 Mar 2021.
Vancouver:
Losey DP. Responding to Physical Human-Robot Interaction: Theory and Approximations. [Internet] [Doctoral dissertation]. Rice University; 2018. [cited 2021 Mar 05].
Available from: http://hdl.handle.net/1911/105912.
Council of Science Editors:
Losey DP. Responding to Physical Human-Robot Interaction: Theory and Approximations. [Doctoral Dissertation]. Rice University; 2018. Available from: http://hdl.handle.net/1911/105912

Rice University
29.
Losey, Dylan Patrick.
Adaptive and Self-Adjusting Controllers for Safe and Meaningful Human-Robot Interaction during Rehabilitation.
Degree: MS, Engineering, 2016, Rice University
URL: http://hdl.handle.net/1911/96555
This thesis discusses the use of adaptive control within human-robot interaction, and in particular rehabilitation robots, in order to change the perceived closed-loop system dynamics and compensate for unexpected and changing subject behaviors. I first motivate the use of controllers during robotic rehabilitation through a human-subjects study, in which I juxtapose interaction controllers and a novel motor learning protocol, and find that haptic guidance and error augmentation can improve the retention of trained behavior after feedback is removed. Next, I develop an adaptive controller for rigid upper-limb rehabilitation robots, which uses sensorless force estimation to minimize the amount of robotic assistance while also bounding the subject's trajectory errors. Finally, I discuss the use of time domain adaptive control in the context of physically compliant rehabilitation robots – in particular, series elastic actuators – where I discover that adaptive techniques enable passively rendering virtual environments not achievable using existing practices. Each of these adaptive controllers is developed using the theoretical framework of Lyapunov stability analysis, and is tested on single degree-of-freedom robotic hardware. I conclude that adaptive control provides an avenue for safe robotic interaction, both through stability analysis and physical compliance, and can adjust to subjects of various impairment levels to ensure that training is meaningful, in the sense that desired trajectories, interactions, and long-term effects are achieved.
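For readers unfamiliar with the adaptive-control machinery the abstract refers to, a textbook-style sketch for a single joint with unknown inertia is shown below; the model, gains, and discrete-time form are illustrative and are not the controllers developed in the thesis.

```python
def adaptive_control_step(q, qd, q_des, qd_des, qdd_des, m_hat, dt,
                          k=20.0, lam=5.0, gamma=0.5):
    """One step of a generic adaptive controller for a 1-DOF joint modelled as
    m * qdd = tau, with unknown inertia m (illustrative gains and model only).

    Returns (torque command, updated inertia estimate).
    """
    e = q - q_des                           # position tracking error
    e_dot = qd - qd_des                     # velocity tracking error
    s = e_dot + lam * e                     # combined error variable
    qdd_r = qdd_des - lam * e_dot           # reference acceleration
    tau = m_hat * qdd_r - k * s             # certainty-equivalent feedforward + feedback
    m_hat = m_hat - dt * gamma * qdd_r * s  # gradient adaptation of the inertia estimate
    return tau, m_hat
```

The estimate m_hat is updated from the same tracking-error signal the feedback term uses, so the controller gradually supplies the correct feedforward torque without ever measuring the inertia directly.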
Advisors/Committee Members: O'Malley, Marcia K (advisor).
Subjects/Keywords: adaptive control; human-robot interaction; rehabilitation robotics
APA (6th Edition):
Losey, D. P. (2016). Adaptive and Self-Adjusting Controllers for Safe and Meaningful Human-Robot Interaction during Rehabilitation. (Masters Thesis). Rice University. Retrieved from http://hdl.handle.net/1911/96555
Chicago Manual of Style (16th Edition):
Losey, Dylan Patrick. “Adaptive and Self-Adjusting Controllers for Safe and Meaningful Human-Robot Interaction during Rehabilitation.” 2016. Masters Thesis, Rice University. Accessed March 05, 2021.
http://hdl.handle.net/1911/96555.
MLA Handbook (7th Edition):
Losey, Dylan Patrick. “Adaptive and Self-Adjusting Controllers for Safe and Meaningful Human-Robot Interaction during Rehabilitation.” 2016. Web. 05 Mar 2021.
Vancouver:
Losey DP. Adaptive and Self-Adjusting Controllers for Safe and Meaningful Human-Robot Interaction during Rehabilitation. [Internet] [Masters thesis]. Rice University; 2016. [cited 2021 Mar 05].
Available from: http://hdl.handle.net/1911/96555.
Council of Science Editors:
Losey DP. Adaptive and Self-Adjusting Controllers for Safe and Meaningful Human-Robot Interaction during Rehabilitation. [Masters Thesis]. Rice University; 2016. Available from: http://hdl.handle.net/1911/96555

Delft University of Technology
30.
de Graaf, Marnix (author).
Exploring trade-offs in on-board versus cloud-based social robots.
Degree: 2019, Delft University of Technology
URL: http://resolver.tudelft.nl/uuid:7c5a0445-cac3-470c-9e36-e9c828f15cde
Social Robotics is an emerging field in Computer Science. Most social robots currently commercially available do not have fast hardware components. As a result, the built-in software has low accuracy and performance with (amongst others) speech and facial recognition and dialogs during social interaction with users. Cloud computation offers state-of-the-art techniques, performance, and accuracy with its massive available computational power, but at extra cost and increased latency. In this work, we extend and improve a social robot's standard capabilities and performance by making use of cloud computation. This thesis covers an exploration of the trade-offs present when replacing or augmenting built-in robot software with IBM cloud services, based on the humanoid Pepper robot from SoftBank. The approach for this exploration was guided by a hospitality use case demonstrated in the offices of two companies: a Dutch health insurer and IBM Netherlands. Two products were developed for this single use case based on different development toolboxes. The first toolbox contains all development software from the robot's manufacturer (the NAOqi toolbox), while the second toolbox makes use of cloud services (the Watson toolbox). Using the product built with the NAOqi toolbox, we evaluate interactions with real users and obtain baseline data and experiences. After evaluating the second product built with the Watson toolbox, we compare differences in human-robot interaction quality, robot component quality, development methods, software engineering complexity, and Total Cost of Ownership for both products. The main findings include an overview of relevant test metrics and test methods for a social robot's components, including acquired data for some components of the Pepper robot. We show possible architectures for a (semi) cloud-based system, and their trade-offs. Evaluations show that the cloud-based system indeed performs better and has higher human-interaction quality compared to the product built with the NAOqi toolbox, yet downsides such as latency and operating costs are present. This is also reflected in the analysis of single components, where specifically Speech-to-Text from the cloud shows a significant increase in performance and capabilities. We show that a mix of toolboxes results in the best-working and cheapest social robot when considering Total Cost of Ownership. IBM Cloud pricing structures and operating costs are analyzed for this. Finally, we contribute to the currently available knowledge on this subject with a decision matrix combining all previously mentioned information in a compact form accessible to people not knowledgeable in the hospitality robot or cloud domains. With the matrix, early-development advice decisions for creating a social robot can be formulated using the data gathered in this thesis. With a broad approach, this research focuses on finding and discussing trade-offs, rather than an in-depth analysis of all components. Providing methods…
Advisors/Committee Members: Hindriks, Koen (mentor), Hung, Hayley (graduation committee), Szlávik, Zoltán (graduation committee), Timmermans, Benjamin (graduation committee), Delft University of Technology (degree granting institution).
Subjects/Keywords: Social Robotics; Cloud Computing; human robot interaction
APA (6th Edition):
de Graaf, M. (. (2019). Exploring trade-offs in on-board versus cloud-based social robots. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:7c5a0445-cac3-470c-9e36-e9c828f15cde
Chicago Manual of Style (16th Edition):
de Graaf, Marnix (author). “Exploring trade-offs in on-board versus cloud-based social robots.” 2019. Masters Thesis, Delft University of Technology. Accessed March 05, 2021.
http://resolver.tudelft.nl/uuid:7c5a0445-cac3-470c-9e36-e9c828f15cde.
MLA Handbook (7th Edition):
de Graaf, Marnix (author). “Exploring trade-offs in on-board versus cloud-based social robots.” 2019. Web. 05 Mar 2021.
Vancouver:
de Graaf M(. Exploring trade-offs in on-board versus cloud-based social robots. [Internet] [Masters thesis]. Delft University of Technology; 2019. [cited 2021 Mar 05].
Available from: http://resolver.tudelft.nl/uuid:7c5a0445-cac3-470c-9e36-e9c828f15cde.
Council of Science Editors:
de Graaf M(. Exploring trade-offs in on-board versus cloud-based social robots. [Masters Thesis]. Delft University of Technology; 2019. Available from: http://resolver.tudelft.nl/uuid:7c5a0445-cac3-470c-9e36-e9c828f15cde
◁ [1] [2] [3] [4] [5] … [16] ▶