You searched for subject: (Human Computer Interaction).
Showing records 1 – 30 of 146,392 total matches.

Purdue University
1.
Verma, Ansh.
EVALUATING ENGINEERING LEARNING AND GENDER NEUTRALITY FOR THE PRODUCT DESIGN OF A MODULAR ROBOTIC KIT.
Degree: MSME, Mechanical Engineering, 2015, Purdue University
URL: https://docs.lib.purdue.edu/open_access_theses/1165
The development of a system is informed by design factors in order to successfully support the intended usability from the perceived affordances [1]. The theory of ‘Human Centered Design’ champions that these factors be derived from the users themselves. It is by exploiting these affordances that the boundary of technology is pushed, sometimes to invent new methods and sometimes to approach a problem from newer perspectives. This thesis is an example in which we derive our design rationales from children in order to develop a gender-neutral modular robotic toy kit.
Advisors/Committee Members: Karthik Ramani, Rebecca Krammer, Tahira Reid.
Subjects/Keywords: Human Computer Interaction
APA (6th Edition):
Verma, A. (2015). EVALUATING ENGINEERING LEARNING AND GENDER NEUTRALITY FOR THE PRODUCT DESIGN OF A MODULAR ROBOTIC KIT. (Thesis). Purdue University. Retrieved from https://docs.lib.purdue.edu/open_access_theses/1165
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Verma, Ansh. “EVALUATING ENGINEERING LEARNING AND GENDER NEUTRALITY FOR THE PRODUCT DESIGN OF A MODULAR ROBOTIC KIT.” 2015. Thesis, Purdue University. Accessed January 16, 2021.
https://docs.lib.purdue.edu/open_access_theses/1165.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Verma, Ansh. “EVALUATING ENGINEERING LEARNING AND GENDER NEUTRALITY FOR THE PRODUCT DESIGN OF A MODULAR ROBOTIC KIT.” 2015. Web. 16 Jan 2021.
Vancouver:
Verma A. EVALUATING ENGINEERING LEARNING AND GENDER NEUTRALITY FOR THE PRODUCT DESIGN OF A MODULAR ROBOTIC KIT. [Internet] [Thesis]. Purdue University; 2015. [cited 2021 Jan 16].
Available from: https://docs.lib.purdue.edu/open_access_theses/1165.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Verma A. EVALUATING ENGINEERING LEARNING AND GENDER NEUTRALITY FOR THE PRODUCT DESIGN OF A MODULAR ROBOTIC KIT. [Thesis]. Purdue University; 2015. Available from: https://docs.lib.purdue.edu/open_access_theses/1165
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
2.
Papoutsaki, Alexandra.
Democratizing Eye Tracking.
Degree: Department of Computer Science, 2017, Brown University
URL: https://repository.library.brown.edu/studio/item/bdr:792601/
Eye tracking, the process of capturing the gaze location within a display, is extensively used in usability studies, psychology, human-computer interaction, and marketing. The setup and operation of modern eye trackers is time-consuming, and a specialist is needed to calibrate them and be present throughout the experiment, leading to highly controlled user studies with artificial tasks and only a small number of participants. In addition, their steep price, which rises to tens of thousands of dollars, restricts their use to only a small number of labs that can afford them. This thesis aims to democratize eye tracking by using common webcams already present in laptops and desktops. We introduce WebGazer, a webcam eye tracker that infers the gaze of web visitors in real time. WebGazer is developed as an open-source JavaScript library that can be incorporated into any website. Its eye tracking model self-calibrates by mapping eye features to positions on the display that correspond to user interactions. We investigate whether webcam eye tracking can lead to similar conclusions to in-lab eye tracking studies. We explore this question in the context of web search, by extending WebGazer so that it can predict the examined search element within a search engine result page. We use SearchGazer to replicate three seminal studies in the area of information retrieval and demonstrate that scalable and remote eye tracking studies on user behavior are possible at a fraction of the cost and time. Finally, we create a benchmark for webcam eye tracking with data collected from a lab study with more than 60 participants. This dataset allows us to investigate the relationship between user interactions and gaze, confirming past findings on the alignment of gaze with clicks and cursor movement, and introducing novel insights into the differences in gaze behavior across users based on their ability to touch type. Taking advantage of the temporal alignment of gaze and user interactions, we improve WebGazer's accuracy and functionality. These contributions make eye tracking accessible to everyday users, researchers, and developers. Traditional eye tracking studies that are confined to labs can now be performed remotely and at scale. Subjects can participate in studies in their everyday environments, which can yield more naturalistic behavior and lead to more powerful insights.
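Since the abstract describes WebGazer as an open-source JavaScript library that any website can embed, a minimal sketch of such an integration is shown below. The call names (setGazeListener, begin) follow WebGazer's published documentation as commonly described, but should be verified against the current release; the snippet only illustrates the pattern of receiving self-calibrated gaze predictions in page coordinates.

```typescript
// Minimal sketch of embedding a webcam gaze tracker in a web page.
// API names (setGazeListener, begin) follow WebGazer's documented usage,
// but check them against the current release before relying on them.
interface WebGazer {
  setGazeListener(
    cb: (data: { x: number; y: number } | null, elapsedMs: number) => void
  ): WebGazer;
  begin(): WebGazer;
}
declare const webgazer: WebGazer; // provided by the library's <script> tag

window.addEventListener("load", () => {
  webgazer
    .setGazeListener((data, elapsedMs) => {
      if (data === null) return; // no prediction yet (e.g., before self-calibration)
      // data.x / data.y are predicted gaze coordinates in page pixels.
      console.log(`gaze at (${data.x.toFixed(0)}, ${data.y.toFixed(0)}) after ${elapsedMs} ms`);
    })
    .begin(); // starts the webcam stream and the self-calibrating model
});
```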
Advisors/Committee Members: Huang, Jeff (Advisor), Tompkin, James (Reader), Laidlaw, David (Reader).
Subjects/Keywords: human-computer interaction
APA (6th Edition):
Papoutsaki, A. (2017). Democratizing Eye Tracking. (Thesis). Brown University. Retrieved from https://repository.library.brown.edu/studio/item/bdr:792601/
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Papoutsaki, Alexandra. “Democratizing Eye Tracking.” 2017. Thesis, Brown University. Accessed January 16, 2021.
https://repository.library.brown.edu/studio/item/bdr:792601/.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Papoutsaki, Alexandra. “Democratizing Eye Tracking.” 2017. Web. 16 Jan 2021.
Vancouver:
Papoutsaki A. Democratizing Eye Tracking. [Internet] [Thesis]. Brown University; 2017. [cited 2021 Jan 16].
Available from: https://repository.library.brown.edu/studio/item/bdr:792601/.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Papoutsaki A. Democratizing Eye Tracking. [Thesis]. Brown University; 2017. Available from: https://repository.library.brown.edu/studio/item/bdr:792601/
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
3.
Quay-de la Vallee, Hannah.
On a (Per)Mission: Leveraging User Ratings of App
Permissions to Help Users Manage Privacy.
Degree: Department of Computer Science, 2017, Brown University
URL: https://repository.library.brown.edu/studio/item/bdr:733483/
Apps provide valuable utility and customizability to a range of user devices, but installation of third-party apps also presents significant security risks. Many app systems use permissions to mitigate this risk. It then falls to users to decide which apps to install and how to manage their permissions, but unfortunately, many users lack the expertise to do this in a meaningful way. In this thesis, I determine that users face two distinct privacy decisions when using apps: which apps to install, and how to manage apps' permissions once they are installed. In both cases, users are not given meaningful guidance to help them make these choices. For decisions about which apps to install, users would benefit from privacy information in the app marketplace, since that is how most users choose apps. Once users install an app, they are confronted with the second type of decision: how to manage the app's permissions. In this case, users would benefit from an assistant that helps them see which permissions might present privacy concerns. I therefore present two tools: a privacy-conscious app marketplace and a permission management assistant. Both of these tools rely on privacy information, in the form of ratings of apps' permissions. I discuss gathering this rating information from both human and automated sources and how it is used in the two tools. I also explore how the brand of an app could affect how users rate its permissions. Additionally, because my goal is to convey privacy information to users, I design and evaluate several interfaces for displaying permission ratings. I discuss surprising misconceptions generated by some of these interfaces, and present an interface that effectively communicates permission ratings.
Advisors/Committee Members: Krishnamurthi, Shriram (Advisor), Littman, Michael (Reader), Huang, Jeff (Reader).
Subjects/Keywords: human-computer interaction
APA (6th Edition):
Quay-de la Vallee, H. (2017). On a (Per)Mission: Leveraging User Ratings of App
Permissions to Help Users Manage Privacy. (Thesis). Brown University. Retrieved from https://repository.library.brown.edu/studio/item/bdr:733483/
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Quay-de la Vallee, Hannah. “On a (Per)Mission: Leveraging User Ratings of App
Permissions to Help Users Manage Privacy.” 2017. Thesis, Brown University. Accessed January 16, 2021.
https://repository.library.brown.edu/studio/item/bdr:733483/.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Quay-de la Vallee, Hannah. “On a (Per)Mission: Leveraging User Ratings of App
Permissions to Help Users Manage Privacy.” 2017. Web. 16 Jan 2021.
Vancouver:
Quay-de la Vallee H. On a (Per)Mission: Leveraging User Ratings of App
Permissions to Help Users Manage Privacy. [Internet] [Thesis]. Brown University; 2017. [cited 2021 Jan 16].
Available from: https://repository.library.brown.edu/studio/item/bdr:733483/.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Quay-de la Vallee H. On a (Per)Mission: Leveraging User Ratings of App
Permissions to Help Users Manage Privacy. [Thesis]. Brown University; 2017. Available from: https://repository.library.brown.edu/studio/item/bdr:733483/
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Oregon State University
4.
Thompson, Leslie Braitsch.
An analytical methodology to support the identification and remediation of potential human fallibilities in complex human-machine systems.
Degree: MS, Industrial Engineering, 2008, Oregon State University
URL: http://hdl.handle.net/1957/9089
This research proposes a Human Fallibility Identification and Remediation Methodology (HFIRM) that supports the systematic identification and remediation of potential human errors. The objective of this research was to develop and test a prototype framework that supports the practical application of human factors knowledge to the analysis and design of complex systems. This was accomplished through the development of a methodology that guides users through a systemic fallibility analysis that draws from a database of human performance knowledge. The results of the preliminary usability study suggest that participants perceived HFIRM positively in terms of both its usability and efficacy, supporting the face validity of the framework. This methodology extends existing research in the domain of human error analysis and incorporates human factors principles in order to develop a novel human performance analysis methodology.
Advisors/Committee Members: Funk, Kenneth H (advisor), Doolen, Toni (committee member).
Subjects/Keywords: Human Factors; Human-computer interaction
APA (6th Edition):
Thompson, L. B. (2008). An analytical methodology to support the identification and remediation of potential human fallibilities in complex human-machine systems. (Masters Thesis). Oregon State University. Retrieved from http://hdl.handle.net/1957/9089
Chicago Manual of Style (16th Edition):
Thompson, Leslie Braitsch. “An analytical methodology to support the identification and remediation of potential human fallibilities in complex human-machine systems.” 2008. Masters Thesis, Oregon State University. Accessed January 16, 2021.
http://hdl.handle.net/1957/9089.
MLA Handbook (7th Edition):
Thompson, Leslie Braitsch. “An analytical methodology to support the identification and remediation of potential human fallibilities in complex human-machine systems.” 2008. Web. 16 Jan 2021.
Vancouver:
Thompson LB. An analytical methodology to support the identification and remediation of potential human fallibilities in complex human-machine systems. [Internet] [Masters thesis]. Oregon State University; 2008. [cited 2021 Jan 16].
Available from: http://hdl.handle.net/1957/9089.
Council of Science Editors:
Thompson LB. An analytical methodology to support the identification and remediation of potential human fallibilities in complex human-machine systems. [Masters Thesis]. Oregon State University; 2008. Available from: http://hdl.handle.net/1957/9089

University of the Arts London
5.
Barker, Leon.
Gestures in machine interaction.
Degree: PhD, 2011, University of the Arts London
URL: https://ualresearchonline.arts.ac.uk/id/eprint/15579/ ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.650303
Vnencumbered-gesture-interaction (VGI) describes the use of unrestricted gestures in machine interaction. The development of such technology will enable users to interact with machines and virtual environments by performing actions like grasping, pinching or waving without the need for peripherals. Advances in image processing and pattern recognition make such interaction viable and, in some applications, more practical than the current modes of keyboard, mouse and touch-screen interaction. VGI is emerging as a popular topic among Human-Computer Interaction (HCI), computer vision and gesture research, and is developing into a topic with the potential to significantly impact the future of computer interaction, robot control and gaming. This thesis investigates whether an ergonomic model of VGI can be developed and implemented on consumer devices by considering some of the barriers currently preventing such a model of VGI from being widely adopted. This research aims to address the development of freehand gesture interfaces and accompanying syntax. Without detailed consideration of the evolution of this field, the development of un-ergonomic, inefficient interfaces capable of placing undue strain on interface users becomes more likely. In the course of this thesis, some novel design and methodological assertions are made. The Gesture in Machine Interaction (GiMI) syntax model and the Gesture-Face Layer (GFL), developed in the course of this research, have been designed to facilitate ergonomic gesture interaction. The GiMI is an interface syntax model designed to enable cursor control, browser navigation commands and steering control for remote robots or vehicles. By applying state-of-the-art image processing that facilitates three-dimensional (3D) recognition of human action, this research investigates how interface syntax can incorporate the broadest range of human actions. By advancing our understanding of ergonomic gesture syntax, this research aims to help future developers evaluate the efficiency of gesture interfaces, lexicons and syntax.
Subjects/Keywords: Human-computer Interaction; Computer Vision
APA (6th Edition):
Barker, L. (2011). Gestures in machine interaction. (Doctoral Dissertation). University of the Arts London. Retrieved from https://ualresearchonline.arts.ac.uk/id/eprint/15579/ ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.650303
Chicago Manual of Style (16th Edition):
Barker, Leon. “Gestures in machine interaction.” 2011. Doctoral Dissertation, University of the Arts London. Accessed January 16, 2021.
https://ualresearchonline.arts.ac.uk/id/eprint/15579/ ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.650303.
MLA Handbook (7th Edition):
Barker, Leon. “Gestures in machine interaction.” 2011. Web. 16 Jan 2021.
Vancouver:
Barker L. Gestures in machine interaction. [Internet] [Doctoral dissertation]. University of the Arts London; 2011. [cited 2021 Jan 16].
Available from: https://ualresearchonline.arts.ac.uk/id/eprint/15579/ ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.650303.
Council of Science Editors:
Barker L. Gestures in machine interaction. [Doctoral Dissertation]. University of the Arts London; 2011. Available from: https://ualresearchonline.arts.ac.uk/id/eprint/15579/ ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.650303

University of Waterloo
6.
Greene, Eugene Dominic.
Augmenting Visual Feedback Using Sensory Substitution.
Degree: 2011, University of Waterloo
URL: http://hdl.handle.net/10012/6161
Direct interaction in virtual environments can be realized using relatively simple hardware, such as standard webcams and monitors. The result is a large gap between the stimuli existing in real-world interactions and those provided in the virtual environment. This leads to reduced efficiency and effectiveness when performing tasks. Conceivably these missing stimuli might be supplied through a visual modality, using sensory substitution. This work suggests a display technique that attempts to usefully and non-detrimentally employ sensory substitution to display proximity, tactile, and force information.
We solve three problems with existing feedback mechanisms. Attempting to add information to existing visuals, we need to balance: not occluding the existing visual output; not causing the user to look away from the existing visual output, or otherwise distracting the user; and displaying as much new information as possible. We assume the user interacts with a virtual environment consisting of a manually controlled probe and a set of surfaces.
Our solution is a pseudo-shadow: a shadow-like projection of the user's probe onto the surface being explored or manipulated. Instead of drawing the probe, we only draw the pseudo-shadow, and use it as a canvas on which to add other information. Static information is displayed by varying the parameters of a procedural texture rendered in the pseudo-shadow. The probe velocity and probe-surface distance modify this texture to convey dynamic information. Much of the computation occurs on the GPU, so the pseudo-shadow renders quickly enough for real-time interaction.
As a result, this work contains three contributions: a simple collision detection and handling mechanism that can generalize to distance-based force fields; a way to display content during probe-surface interaction that reduces occlusion and spatial distraction; and a way to visually convey small-scale tactile texture.
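The first listed contribution, a collision handling mechanism that generalizes to distance-based force fields, can be illustrated with a short sketch. This is not the thesis's code; it only shows the general idea under the assumption that the force on the probe grows linearly as it enters a surface point's influence radius.

```typescript
// Hedged sketch (not the thesis's actual code): a distance-based force field
// in the spirit described above. A probe point receives a repulsive force that
// grows as it approaches a surface point, generalizing hard-contact collision
// response to a smooth field.
type Vec3 = [number, number, number];

const sub = (a: Vec3, b: Vec3): Vec3 => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
const scale = (a: Vec3, s: number): Vec3 => [a[0] * s, a[1] * s, a[2] * s];
const len = (a: Vec3): number => Math.hypot(a[0], a[1], a[2]);

/**
 * Repulsive force on the probe from the nearest surface point.
 * influenceRadius: distance at which the field starts acting (m).
 * stiffness: force per metre of entry into the field (N/m).
 */
function distanceFieldForce(
  probe: Vec3,
  nearestSurfacePoint: Vec3,
  influenceRadius = 0.05,
  stiffness = 200
): Vec3 {
  const offset = sub(probe, nearestSurfacePoint);
  const d = len(offset);
  if (d >= influenceRadius || d === 0) return [0, 0, 0]; // outside the field
  const penetration = influenceRadius - d;               // how far into the field
  return scale(offset, (stiffness * penetration) / d);   // push along the outward direction
}

// Example: probe 2 cm above a surface point, 5 cm influence radius.
console.log(distanceFieldForce([0, 0.02, 0], [0, 0, 0]));
```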
Subjects/Keywords: Computer Graphics; Human-Computer Interaction
APA (6th Edition):
Greene, E. D. (2011). Augmenting Visual Feedback Using Sensory Substitution. (Thesis). University of Waterloo. Retrieved from http://hdl.handle.net/10012/6161
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Greene, Eugene Dominic. “Augmenting Visual Feedback Using Sensory Substitution.” 2011. Thesis, University of Waterloo. Accessed January 16, 2021.
http://hdl.handle.net/10012/6161.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Greene, Eugene Dominic. “Augmenting Visual Feedback Using Sensory Substitution.” 2011. Web. 16 Jan 2021.
Vancouver:
Greene ED. Augmenting Visual Feedback Using Sensory Substitution. [Internet] [Thesis]. University of Waterloo; 2011. [cited 2021 Jan 16].
Available from: http://hdl.handle.net/10012/6161.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Greene ED. Augmenting Visual Feedback Using Sensory Substitution. [Thesis]. University of Waterloo; 2011. Available from: http://hdl.handle.net/10012/6161
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

University of the Arts London
7.
Barker, Leon.
Gestures in Machine Interaction.
Degree: 2011, University of the Arts London
URL: https://ualresearchonline.arts.ac.uk/id/eprint/15579/
Vnencumbered-gesture-interaction (VGI) describes the use of unrestricted gestures in machine interaction. The development of such technology will enable users to interact with machines and virtual environments by performing actions like grasping, pinching or waving without the need for peripherals. Advances in image processing and pattern recognition make such interaction viable and, in some applications, more practical than the current modes of keyboard, mouse and touch-screen interaction. VGI is emerging as a popular topic among Human-Computer Interaction (HCI), computer vision and gesture research, and is developing into a topic with the potential to significantly impact the future of computer interaction, robot control and gaming. This thesis investigates whether an ergonomic model of VGI can be developed and implemented on consumer devices by considering some of the barriers currently preventing such a model of VGI from being widely adopted. This research aims to address the development of freehand gesture interfaces and accompanying syntax. Without detailed consideration of the evolution of this field, the development of un-ergonomic, inefficient interfaces capable of placing undue strain on interface users becomes more likely. In the course of this thesis, some novel design and methodological assertions are made. The Gesture in Machine Interaction (GiMI) syntax model and the Gesture-Face Layer (GFL), developed in the course of this research, have been designed to facilitate ergonomic gesture interaction. The GiMI is an interface syntax model designed to enable cursor control, browser navigation commands and steering control for remote robots or vehicles. By applying state-of-the-art image processing that facilitates three-dimensional (3D) recognition of human action, this research investigates how interface syntax can incorporate the broadest range of human actions. By advancing our understanding of ergonomic gesture syntax, this research aims to help future developers evaluate the efficiency of gesture interfaces, lexicons and syntax.
Subjects/Keywords: Human-computer Interaction; Computer Vision
APA (6th Edition):
Barker, L. (2011). Gestures in Machine Interaction. (Thesis). University of the Arts London. Retrieved from https://ualresearchonline.arts.ac.uk/id/eprint/15579/
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Barker, Leon. “Gestures in Machine Interaction.” 2011. Thesis, University of the Arts London. Accessed January 16, 2021.
https://ualresearchonline.arts.ac.uk/id/eprint/15579/.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Barker, Leon. “Gestures in Machine Interaction.” 2011. Web. 16 Jan 2021.
Vancouver:
Barker L. Gestures in Machine Interaction. [Internet] [Thesis]. University of the Arts London; 2011. [cited 2021 Jan 16].
Available from: https://ualresearchonline.arts.ac.uk/id/eprint/15579/.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Barker L. Gestures in Machine Interaction. [Thesis]. University of the Arts London; 2011. Available from: https://ualresearchonline.arts.ac.uk/id/eprint/15579/
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

University of Melbourne
8.
Webber, Sarah Ellen.
Digital technologies and encounters with zoo animals.
Degree: 2019, University of Melbourne
URL: http://hdl.handle.net/11343/227663
Zoos worldwide are beginning to deploy digital technologies for both visitors and animals. Such installations include interactive signage for visitors and touchscreen computers for animal cognition research. Zoos present animals in carefully crafted settings, with the aim of inspiring visitors’ respect and concern for wildlife. However, little is known about the effects that digital technologies can have on visitors’ encounters with zoo animals. This thesis addresses this knowledge gap by investigating the design of digital technology that might support zoos in shaping visitors’ perceptions of animals. Through four studies, different methodological approaches are brought to bear on this question.
This thesis commences by surveying the broader context of the zoo, through a first study which investigates digital technologies at a selected zoo. This case study examines the deployment and use of interactive systems against deeper themes relating to the zoo’s mission and exhibit design intentions. The outcomes of this study reveal tensions related to the introduction of digital displays within the naturalistic setting that zoos construct.
The second study focuses on a particular design project to identify the special considerations relating to design of animal interactives, digital technologies to be used by zoo animals. Research through design approaches are adopted to examine the co-design of an interactive installation for use by orangutans. From this study emerge twelve considerations for designing animal interactives in zoos. These considerations respond to zoos’ visitor engagement strategies, animal interaction aims, and constraints associated with conducting iterative design in the zoo setting.
The third study continues the trajectory of design, providing a formative evaluation of the animal interactive. This study, conducted as part of the design process, examines how the design intentions manifest in Study 2 were realised in visitors’ responses to the installation. Interviews conducted with visitors at the exhibit reveal a variety of cognitive and emotional forms of empathetic responses. Study 3 brings into focus the concept of belief in animal mind as a significant aspect of people’s responses to seeing animal interaction, motivating the subsequent evaluation of effects on perceptions of animal minds.
The fourth study comprises a systematic evaluation of the effects of the animal interactive on visitors’ perceptions of animals. Study 4 combines qualitative methods to probe deeper notions of belief in animal mind, and quantitative methods to measure the effects of the animal interactive. This final study of the thesis entails a field experiment, to compare perceptions of visitors who witnessed use of the animal interactive to those of a control group who did not.
In the final Discussion, four themes are developed which transect the studies. Addressing the social dimensions of animal-human-computer interaction, digital technology in naturalistic settings, anthropomorphism, and interactive design with…
Subjects/Keywords: animal-computer interaction; human-computer interaction; zoos
APA (6th Edition):
Webber, S. E. (2019). Digital technologies and encounters with zoo animals. (Doctoral Dissertation). University of Melbourne. Retrieved from http://hdl.handle.net/11343/227663
Chicago Manual of Style (16th Edition):
Webber, Sarah Ellen. “Digital technologies and encounters with zoo animals.” 2019. Doctoral Dissertation, University of Melbourne. Accessed January 16, 2021.
http://hdl.handle.net/11343/227663.
MLA Handbook (7th Edition):
Webber, Sarah Ellen. “Digital technologies and encounters with zoo animals.” 2019. Web. 16 Jan 2021.
Vancouver:
Webber SE. Digital technologies and encounters with zoo animals. [Internet] [Doctoral dissertation]. University of Melbourne; 2019. [cited 2021 Jan 16].
Available from: http://hdl.handle.net/11343/227663.
Council of Science Editors:
Webber SE. Digital technologies and encounters with zoo animals. [Doctoral Dissertation]. University of Melbourne; 2019. Available from: http://hdl.handle.net/11343/227663

University of Utah
9.
Frey, Matthew S.
Full-arm haptic rendering with the sarcos dextrous master.
Degree: MS, Mechanical Engineering, 2008, University of Utah
URL: http://content.lib.utah.edu/cdm/singleitem/collection/etd2/id/188/rec/515
This thesis introduces the need for full-arm haptic rendering and reports potential benefits related to the evaluation of static models in confined, or workspace-limiting, virtual environments. There are many examples where haptic rendering has been used to add the sense of touch to virtual models; however, the majority of previous research has focused on tool or hand interactions. In this research, the SARCOS DTS Master makes haptic interactions possible across the entire human arm. Users are able to naturally assess workspace and force limitations imposed by the human arm when evaluating confined virtual environments. Simple polygonal models are used to represent the virtual environment and virtual arm. Fast collision detection is implemented using spatialized normal cone hierarchies and local descent algorithms, and an admittance control scheme is used to control the torques of the DTS Master's powerful hydraulic actuators. To test the benefits of full-arm haptic rendering, experiment participants are presented with five virtual environments, once with hand-only haptic rendering and once with full-arm haptic rendering. The objective benefits are measured in terms of task completion time, maximum force application, and assessment accuracy. Participants are also asked to answer subjective questions about the ease of completing the task with and without full-arm haptic rendering. Results of the experiments show that in most confined environments completion times are reduced, applied forces at the target increase, and participants are more likely to find a path to the target with full-arm haptic rendering. Subjectively, participants prefer full-arm haptic rendering, as it reduces their dependency on visual cues while reaching for and applying a sustained force to the target.
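The abstract mentions an admittance control scheme for the DTS Master's hydraulic actuators. A generic one-degree-of-freedom admittance update is sketched below; the virtual mass and damping values are illustrative assumptions, not parameters from the thesis.

```typescript
// Hedged sketch of a 1-DOF admittance controller of the kind the abstract
// mentions: measured interaction force is mapped to commanded motion through a
// virtual mass-damper (M*a + B*v = F), and the commanded position would be sent
// to the robot's low-level servo loop. Parameter values are illustrative only.
interface AdmittanceState {
  velocity: number; // commanded velocity (m/s)
  position: number; // commanded position (m)
}

function admittanceStep(
  state: AdmittanceState,
  measuredForce: number, // from force/torque sensing (N)
  dt: number,            // control period (s)
  virtualMass = 2.0,     // kg
  virtualDamping = 15.0  // N*s/m
): AdmittanceState {
  const accel = (measuredForce - virtualDamping * state.velocity) / virtualMass;
  const velocity = state.velocity + accel * dt;
  return { velocity, position: state.position + velocity * dt };
}

// Example: a constant 5 N push for 1 s at a 1 kHz control rate.
let s: AdmittanceState = { velocity: 0, position: 0 };
for (let i = 0; i < 1000; i++) s = admittanceStep(s, 5, 0.001);
console.log(s.velocity.toFixed(3), s.position.toFixed(3));
```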
Subjects/Keywords: Virtual reality; Human-computer interaction
APA (6th Edition):
Frey, M. S. (2008). Full-arm haptic rendering with the sarcos dextrous master. (Masters Thesis). University of Utah. Retrieved from http://content.lib.utah.edu/cdm/singleitem/collection/etd2/id/188/rec/515
Chicago Manual of Style (16th Edition):
Frey, Matthew S. “Full-arm haptic rendering with the sarcos dextrous master.” 2008. Masters Thesis, University of Utah. Accessed January 16, 2021.
http://content.lib.utah.edu/cdm/singleitem/collection/etd2/id/188/rec/515.
MLA Handbook (7th Edition):
Frey, Matthew S. “Full-arm haptic rendering with the sarcos dextrous master.” 2008. Web. 16 Jan 2021.
Vancouver:
Frey MS. Full-arm haptic rendering with the sarcos dextrous master. [Internet] [Masters thesis]. University of Utah; 2008. [cited 2021 Jan 16].
Available from: http://content.lib.utah.edu/cdm/singleitem/collection/etd2/id/188/rec/515.
Council of Science Editors:
Frey MS. Full-arm haptic rendering with the sarcos dextrous master. [Masters Thesis]. University of Utah; 2008. Available from: http://content.lib.utah.edu/cdm/singleitem/collection/etd2/id/188/rec/515
10.
Cornelio-Martinez, Patricia Ivette.
Examining the sense of agency in human-computer interaction.
Degree: PhD, 2020, University of Sussex
URL: http://sro.sussex.ac.uk/id/eprint/91258/ ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.804992
Humans are agents: we feel that we control the course of events in our everyday life. This is known as the Sense of Agency (SoA). This experience is not only crucial in our daily life, but also in our interaction with technology. When we manipulate a user interface (e.g., a computer, smartphone, etc.), we expect the system to respond to our input commands with feedback, as we desire to feel that we are in charge of the interaction. If this interplay elicits a SoA, then the user will perceive an instinctive feeling of “I am controlling this”. Although research in Human-Computer Interaction (HCI) pursues the design of intuitive and responsive systems, most current studies have focussed mainly on interaction techniques (e.g., software-hardware) and User Experience (UX) (e.g., comfort, usability, etc.), and very little has been investigated in terms of the SoA, i.e., the conscious experience of being in control of the interaction. In this thesis, we present an experimental exploration of the role of the SoA in interaction paradigms typical of HCI. After two chapters of introduction and related work, we describe a series of studies that explore the implications of agency in interaction with systems through human senses such as vision, audio, touch and smell. Chapter 3 explores the SoA in mid-air haptic interaction through touchless actions. Chapter 4 then examines agency modulation through smell and its application to olfactory interfaces. Chapter 5 describes two novel timing techniques based on auditory and haptic cues that provide alternative timing methods to the traditional Libet clock. Finally, we conclude with a discussion chapter that highlights the importance of our SoA during interactions with technology, as well as the implications of the results for the design of user interfaces.
Subjects/Keywords: QA0076.9.H85 Human-computer interaction
APA (6th Edition):
Cornelio-Martinez, P. I. (2020). Examining the sense of agency in human-computer interaction. (Doctoral Dissertation). University of Sussex. Retrieved from http://sro.sussex.ac.uk/id/eprint/91258/ ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.804992
Chicago Manual of Style (16th Edition):
Cornelio-Martinez, Patricia Ivette. “Examining the sense of agency in human-computer interaction.” 2020. Doctoral Dissertation, University of Sussex. Accessed January 16, 2021.
http://sro.sussex.ac.uk/id/eprint/91258/ ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.804992.
MLA Handbook (7th Edition):
Cornelio-Martinez, Patricia Ivette. “Examining the sense of agency in human-computer interaction.” 2020. Web. 16 Jan 2021.
Vancouver:
Cornelio-Martinez PI. Examining the sense of agency in human-computer interaction. [Internet] [Doctoral dissertation]. University of Sussex; 2020. [cited 2021 Jan 16].
Available from: http://sro.sussex.ac.uk/id/eprint/91258/ ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.804992.
Council of Science Editors:
Cornelio-Martinez PI. Examining the sense of agency in human-computer interaction. [Doctoral Dissertation]. University of Sussex; 2020. Available from: http://sro.sussex.ac.uk/id/eprint/91258/ ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.804992
11.
Norasikin, Mohd Adili.
Reconfigurable mid-air displays.
Degree: PhD, 2020, University of Sussex
URL: http://sro.sussex.ac.uk/id/eprint/90379/ ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.801066
This thesis addressed the difficulties involved in reconfiguring permeable mid-air displays (e.g., fog screens) through the experimental investigation of three interactive prototypes: MistForm, SoundBender, and SonicSpray. Each of the prototypes includes its own reconfigurability techniques. The discussion begins in Chapter 1. Chapter 2 describes a straightforward technique used by MistForm to coarsely and mechanically reconfigure the permeable mid-air display. MistForm can adaptively deform its display surface to a specific condition through linear mist emitters controlled by five actuators. It is capable of turning problems into solutions; for example, a concave display can be used as a shared screen while a convex shape serves as a personal screen. However, the investigation found MistForm to be large and noisy. These challenges led to the investigation of SoundBender in Chapter 3. Chapter 3 describes a hybrid technique that reconfigures non-solid diffusers. The method can precisely manipulate any given complex sound field, encoded by a metamaterial (MM) mounted on a phased array of transducers (PAT). The force from the sound affects the surrounding particles. The technique can be used to reconfigure matter such as paper, mist, and flame in air space. However, that chapter did not focus on coordinating its use specifically for permeable mid-air displays. Therefore, this thesis carried out an investigation of SonicSpray in Chapter 4, which describes a technique to precisely reconfigure a mid-air display of permeable matter (i.e., aerosols) using a small form factor PAT. This thesis ends with a conclusion in Chapter 5. The next generation of mid-air displays needs to be small in form factor, multipurpose and controllable, as introduced and demonstrated in this thesis. The research in this thesis can facilitate the future design of displays. However, this thesis highlights the response rate of the permeable particles as the primary concern yet to be solved. The airflow speed of the particles was found to decrease in proportion to the number of transducers used. In the future, to better control the display, researchers should improve the response rate of the particles, for example, by using sources with higher sound power.
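As background for the phased array transducer (PAT) hardware mentioned above, the sketch below shows the standard phase computation used to focus ultrasound at a point in mid-air. It is a textbook illustration, not SoundBender's or SonicSpray's actual implementation, and the element layout and frequency are assumptions.

```typescript
// Hedged illustration (not the thesis's implementation): the basic phase
// computation a phased array transducer (PAT) board uses to focus sound at a
// point in mid-air. Each element is driven with a phase that compensates its
// path length to the focus, so all wavefronts arrive there in phase.
type Vec3 = [number, number, number];

const SPEED_OF_SOUND = 343; // m/s in air at ~20 °C
const FREQUENCY = 40_000;   // Hz, typical for ultrasonic PATs (assumed)

function focusingPhases(elements: Vec3[], focus: Vec3): number[] {
  const wavelength = SPEED_OF_SOUND / FREQUENCY;
  return elements.map(([x, y, z]) => {
    const d = Math.hypot(focus[0] - x, focus[1] - y, focus[2] - z);
    // Phase delay that cancels the propagation delay over distance d.
    const phase = -2 * Math.PI * (d / wavelength);
    return ((phase % (2 * Math.PI)) + 2 * Math.PI) % (2 * Math.PI); // wrap to [0, 2π)
  });
}

// Example: a 2 cm square of 4 elements focusing 10 cm above the array centre.
const elements: Vec3[] = [
  [-0.01, -0.01, 0], [0.01, -0.01, 0], [-0.01, 0.01, 0], [0.01, 0.01, 0],
];
console.log(focusingPhases(elements, [0, 0, 0.1]).map((p) => p.toFixed(2)));
```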
Subjects/Keywords: QA0076.9.H85 Human-computer interaction
APA (6th Edition):
Norasikin, M. A. (2020). Reconfigurable mid-air displays. (Doctoral Dissertation). University of Sussex. Retrieved from http://sro.sussex.ac.uk/id/eprint/90379/ ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.801066
Chicago Manual of Style (16th Edition):
Norasikin, Mohd Adili. “Reconfigurable mid-air displays.” 2020. Doctoral Dissertation, University of Sussex. Accessed January 16, 2021.
http://sro.sussex.ac.uk/id/eprint/90379/ ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.801066.
MLA Handbook (7th Edition):
Norasikin, Mohd Adili. “Reconfigurable mid-air displays.” 2020. Web. 16 Jan 2021.
Vancouver:
Norasikin MA. Reconfigurable mid-air displays. [Internet] [Doctoral dissertation]. University of Sussex; 2020. [cited 2021 Jan 16].
Available from: http://sro.sussex.ac.uk/id/eprint/90379/ ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.801066.
Council of Science Editors:
Norasikin MA. Reconfigurable mid-air displays. [Doctoral Dissertation]. University of Sussex; 2020. Available from: http://sro.sussex.ac.uk/id/eprint/90379/ ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.801066

University of Toronto
12.
Xia, Haijun.
Object-oriented Representation and Interaction: A Step Towards Cognitively Direct Interaction.
Degree: PhD, 2020, University of Toronto
URL: http://hdl.handle.net/1807/101346
Since the dawn of civilization, we have striven to invent tools to support our productivity and creativity. A key role of these tools is to maintain, display, and provide means to operate on certain representations of information. The ideal representations visualize information in forms that can be perceived effortlessly and afford interactions which enable users to reason, analyze, and make sense of the information.
Perhaps the most complex tool invented so far is the digital computer. Since its inception, research has been striving to leverage computation to augment our productivity and creativity. Increasingly fast computation has shifted the bottleneck of employing computation from hardware specifications to the representation of and interaction with digital information.
The Windows, Icons, Menus, Pointer (WIMP) user interface has been the dominant Graphical User Interface (GUI) framework of digital computers for 50 years. The many problems of WIMP have long been recognized yet remain unresolved: it relies on tedious manipulation of complex interface elements to achieve simple tasks, and with mouse and keyboard as primary input, it does not effectively leverage the rich interaction capabilities we possess.
We contribute a novel representation, the Object-Oriented Representation. By representing abstract elements in users’ workflow as concrete objects, the Object-Oriented Representation offloads abstract content and structure from users’ minds to external interfaces, which reduces cognitive load. By directly matching users’ internal representations of the information, it enables users to directly articulate their intentions, rather than translating them into tedious actions constrained by inappropriate representations.
The smallest unit of the GUI, the attribute, is first objectified as the Attribute Object. The Attribute Object affords rich interactions that were previously tedious or impossible, and allows for the composition of higher-level elements to address other complex tasks, including the Collection Object, which holistically addresses the many problems of interacting with multiple objects, and the Mapping Object, which facilitates the creation of complex data visualizations. This exploration demonstrates the potential of a new representation which can unleash and boost our creativity and productivity. A set of design principles is further distilled, describing how representation, interaction, and functionality should be combined to enable cognitively direct interaction.
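To make the objectification idea more concrete, here is a purely hypothetical sketch of an attribute treated as a first-class object that can be linked across elements. None of the names or behaviors are taken from the thesis; they only illustrate the concept described above.

```typescript
// Hypothetical sketch of the "objectification" idea: an attribute such as fill
// colour becomes a first-class object that can be inspected, reused across
// elements, and linked so that edits propagate. Names are illustrative only.
interface AttributeObject<T> {
  readonly name: string;
  get(): T;
  set(value: T): void;
  // Link another element's attribute to this one so the two stay in sync.
  linkTo(target: AttributeObject<T>): void;
}

function makeAttribute<T>(name: string, initial: T): AttributeObject<T> {
  let value = initial;
  const linked: AttributeObject<T>[] = [];
  return {
    name,
    get: () => value,
    set(v: T) {
      value = v;
      linked.forEach((a) => a.set(v)); // propagate to linked attributes
    },
    linkTo(target) {
      linked.push(target);
      target.set(value); // the target adopts the current value immediately
    },
  };
}

// Example: link two shapes' fill colours and change them together.
const fillA = makeAttribute("fill", "#ff0000");
const fillB = makeAttribute("fill", "#00ff00");
fillA.linkTo(fillB);
fillA.set("#0000ff");
console.log(fillB.get()); // "#0000ff"
```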
Advisors/Committee Members: Wigdor, Daniel, Computer Science.
Subjects/Keywords: Human-Computer Interaction; 0984
APA (6th Edition):
Xia, H. (2020). Object-oriented Representation and Interaction: A Step Towards Cognitively Direct Interaction. (Doctoral Dissertation). University of Toronto. Retrieved from http://hdl.handle.net/1807/101346
Chicago Manual of Style (16th Edition):
Xia, Haijun. “Object-oriented Representation and Interaction: A Step Towards Cognitively Direct Interaction.” 2020. Doctoral Dissertation, University of Toronto. Accessed January 16, 2021.
http://hdl.handle.net/1807/101346.
MLA Handbook (7th Edition):
Xia, Haijun. “Object-oriented Representation and Interaction: A Step Towards Cognitively Direct Interaction.” 2020. Web. 16 Jan 2021.
Vancouver:
Xia H. Object-oriented Representation and Interaction: A Step Towards Cognitively Direct Interaction. [Internet] [Doctoral dissertation]. University of Toronto; 2020. [cited 2021 Jan 16].
Available from: http://hdl.handle.net/1807/101346.
Council of Science Editors:
Xia H. Object-oriented Representation and Interaction: A Step Towards Cognitively Direct Interaction. [Doctoral Dissertation]. University of Toronto; 2020. Available from: http://hdl.handle.net/1807/101346

Hong Kong University of Science and Technology
13.
Lin, Sikun CSE.
Where's your focus : personalized attention.
Degree: 2017, Hong Kong University of Science and Technology
URL: http://repository.ust.hk/ir/Record/1783.1-91096
;
https://doi.org/10.14711/thesis-991012554569603412
;
http://repository.ust.hk/ir/bitstream/1783.1-91096/1/th_redirect.html
Human visual attention is subjective and biased according to the personal preference of the viewer; however, current work on saliency detection is general and objective, without accounting for the observer. This makes attention prediction for a particular person insufficiently accurate. In this work, we propose PANet, a convolutional network that predicts saliency in images with personal preference. The model consists of two streams which share common feature extraction layers; one stream is responsible for saliency prediction, while the other is adapted from the detection model and used to fit user preference. Experimental results on the augmented PASCAL-S and SALICON datasets confirm that PANet can predict saliency areas according to input preference vectors. Compared with other general saliency prediction models, a model with the ability to fit user preference will provide more benefits to either augmented reality (AR) or recommendation applications.
Subjects/Keywords: Human-computer interaction; Augmented reality
APA (6th Edition):
Lin, S. C. (2017). Where's your focus : personalized attention. (Thesis). Hong Kong University of Science and Technology. Retrieved from http://repository.ust.hk/ir/Record/1783.1-91096 ; https://doi.org/10.14711/thesis-991012554569603412 ; http://repository.ust.hk/ir/bitstream/1783.1-91096/1/th_redirect.html
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Lin, Sikun CSE. “Where's your focus : personalized attention.” 2017. Thesis, Hong Kong University of Science and Technology. Accessed January 16, 2021.
http://repository.ust.hk/ir/Record/1783.1-91096 ; https://doi.org/10.14711/thesis-991012554569603412 ; http://repository.ust.hk/ir/bitstream/1783.1-91096/1/th_redirect.html.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Lin, Sikun CSE. “Where's your focus : personalized attention.” 2017. Web. 16 Jan 2021.
Vancouver:
Lin SC. Where's your focus : personalized attention. [Internet] [Thesis]. Hong Kong University of Science and Technology; 2017. [cited 2021 Jan 16].
Available from: http://repository.ust.hk/ir/Record/1783.1-91096 ; https://doi.org/10.14711/thesis-991012554569603412 ; http://repository.ust.hk/ir/bitstream/1783.1-91096/1/th_redirect.html.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Lin SC. Where's your focus : personalized attention. [Thesis]. Hong Kong University of Science and Technology; 2017. Available from: http://repository.ust.hk/ir/Record/1783.1-91096 ; https://doi.org/10.14711/thesis-991012554569603412 ; http://repository.ust.hk/ir/bitstream/1783.1-91096/1/th_redirect.html
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Hong Kong University of Science and Technology
14.
Zou, Yongpan CSE.
Evolving human-device interaction in the context of ubiquitous computing.
Degree: 2017, Hong Kong University of Science and Technology
URL: http://repository.ust.hk/ir/Record/1783.1-94571 ; https://doi.org/10.14711/thesis-b1778937 ; http://repository.ust.hk/ir/bitstream/1783.1-94571/1/th_redirect.html
Sensing, computing and communicating are three basic elements of the Internet of Things (IoT). Benefiting from hardware innovation, present devices have been enhanced to such an extent that even a palm-sized device possesses powerful sensing, computing and communicating capabilities. This consequently promotes the broad application of ubiquitous computing and blurs the boundary between human and device. In such a situation, human-device interaction (HDI) is becoming increasingly pervasive around us and covers a wide range of applications, including environment sensing and human dynamics sensing. In this thesis, we explore different approaches to designing human-device interactive systems for these applications. Specifically, we propose four novel systems covering two aspects of human-device interaction from an application perspective, namely environment sensing and human dynamics sensing. In the first work, we propose a novel system to help users distinguish in-wall objects and map hidden pipeline layouts, using off-the-shelf sensors embedded in smartphones. In the second work, we design an object-distinguishing system named TagFree on commercial Wi-Fi infrastructure, which differentiates a single object at a time or up to three objects simultaneously with favorable performance. Compared with conventional methods, this system removes the need for additional devices attached to objects. Following the above work in environment sensing, we also conduct research in gesture recognition and text entry with commodity devices. In the third work, we develop GRfid, a novel device-free gesture recognition system based on the phase information output by COTS RFID devices, which can potentially be applied in smart homes, museums and art galleries where RFID technology is widely used. In the last work, we present a novel text-entry system, AcouTexts, aimed at the problem of entering text on tiny devices. With AcouTexts, users can enter text on a device using just a finger, even without touching the device. We prototype the above systems with commodity devices and infrastructure and evaluate their performance with real-world experiments.
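The abstract says GRfid works from the phase reported by COTS RFID readers. As a generic illustration (not GRfid's actual pipeline), the sketch below converts a stream of reported phases into relative tag displacement using the round-trip relation Δd = Δφ·c/(4πf); the carrier frequency is an assumption.

```typescript
// Hedged, generic illustration (not GRfid's actual pipeline): how the phase a
// COTS RFID reader reports for a tag can be turned into relative displacement.
// The backscatter round trip gives phase = (4π·d·f/c) mod 2π, so consecutive
// unwrapped phase differences map to small changes in tag-reader distance.
const C = 299_792_458;    // speed of light, m/s
const CARRIER_HZ = 920e6; // typical UHF RFID carrier (assumed)

/** Wrap a phase difference into (-π, π] before accumulating. */
function wrapToPi(delta: number): number {
  while (delta > Math.PI) delta -= 2 * Math.PI;
  while (delta <= -Math.PI) delta += 2 * Math.PI;
  return delta;
}

/** Convert a sequence of reported phases (radians) into relative displacement (metres). */
function phaseToDisplacement(phases: number[]): number[] {
  const displacement: number[] = [0];
  for (let i = 1; i < phases.length; i++) {
    const dPhi = wrapToPi(phases[i] - phases[i - 1]);
    const dDist = (dPhi * C) / (4 * Math.PI * CARRIER_HZ); // Δd = Δφ·c / (4πf)
    displacement.push(displacement[i - 1] + dDist);
  }
  return displacement;
}

// Example: a tag moving steadily away from the reader shows a rising phase ramp.
const phases = [0.0, 0.4, 0.8, 1.2, 1.6];
console.log(phaseToDisplacement(phases).map((d) => (d * 1000).toFixed(2) + " mm"));
```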
Subjects/Keywords: Ubiquitous computing; Human-computer interaction
APA (6th Edition):
Zou, Y. C. (2017). Evolving human-device interaction in the context of ubiquitous computing. (Thesis). Hong Kong University of Science and Technology. Retrieved from http://repository.ust.hk/ir/Record/1783.1-94571 ; https://doi.org/10.14711/thesis-b1778937 ; http://repository.ust.hk/ir/bitstream/1783.1-94571/1/th_redirect.html
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Zou, Yongpan CSE. “Evolving human-device interaction in the context of ubiquitous computing.” 2017. Thesis, Hong Kong University of Science and Technology. Accessed January 16, 2021.
http://repository.ust.hk/ir/Record/1783.1-94571 ; https://doi.org/10.14711/thesis-b1778937 ; http://repository.ust.hk/ir/bitstream/1783.1-94571/1/th_redirect.html.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Zou, Yongpan CSE. “Evolving human-device interaction in the context of ubiquitous computing.” 2017. Web. 16 Jan 2021.
Vancouver:
Zou YC. Evolving human-device interaction in the context of ubiquitous computing. [Internet] [Thesis]. Hong Kong University of Science and Technology; 2017. [cited 2021 Jan 16].
Available from: http://repository.ust.hk/ir/Record/1783.1-94571 ; https://doi.org/10.14711/thesis-b1778937 ; http://repository.ust.hk/ir/bitstream/1783.1-94571/1/th_redirect.html.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Zou YC. Evolving human-device interaction in the context of ubiquitous computing. [Thesis]. Hong Kong University of Science and Technology; 2017. Available from: http://repository.ust.hk/ir/Record/1783.1-94571 ; https://doi.org/10.14711/thesis-b1778937 ; http://repository.ust.hk/ir/bitstream/1783.1-94571/1/th_redirect.html
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

University of Victoria
15.
Wedlake, Martine Bruce.
The Newtonian Architecture for Virtual Landscapes : an architecture, model and implementation.
Degree: Department of Electrical and Computer Engineering, 2017, University of Victoria
URL: https://dspace.library.uvic.ca//handle/1828/8846
► There is much research in the literature regarding the construction of distributed virtual reality implementations. After evaluating some well-known virtual reality systems, it was determined…
(more)
▼ There is much research in the literature regarding the construction of distributed virtual reality implementations. After evaluating some well-known virtual reality systems, it was determined that several problems exist that need to be solved. In particular: network efficiency, object distribution and coherency, inadequate system resource management, and overall performance.
In order to properly address these issues, a holistic design approach is taken. The entire system is examined, rather than focusing on a specific problem area (such as the human-computer interface).
The major component of this work, the Newtonian Architecture for Virtual Landscapes (NAVL), is presented in response to the problem areas discovered. Highlights of the architecture include: (1) A distributed client/server network that addresses the networking issues. (2) Autonomous objects that encapsulate control and object state in a single entity; using autonomous objects avoids lengthy synchronization processes (e.g., full database locking). (3) ForceLets, a novel synchronization method that minimizes the network bandwidth required to keep an object synchronized at remote locations. In addition, ForceLets provide much improved synchronization of the object at remote locations in the presence of network lag.
Implementation details of the NAVL prototype are also presented. The implementation consists of an object simulation and execution unit, rendering and collision detection unit, and network subsystem and protocols.
An evaluation of the NAVL system architecture examines the efficiency of the key architectural components: (1) A bandwidth and latency analysis examines the efficiency of the distributed client/server network. (2) The object distribution and coherency components are tested directly from the prototype. Profiles of actual prototype execution are used to show the efficiency gains of the ForceLet approach as compared to the commonly used stream-of-data coherency mechanism. (3) The rendering and collision detection unit is tested by examining the effects on CPU utilization and frame rate with increases in the number of virtual objects.
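A minimal sketch of the general idea behind ForceLet-style synchronization, contrasted with a stream-of-data approach: instead of shipping a position sample every tick, a single parametric force description is sent and each replica integrates the motion locally. The class, field names and constant-acceleration model below are illustrative assumptions, not the NAVL protocol.

# Illustrative contrast (not the NAVL protocol): one ForceLet-style message can
# replace a whole second of per-tick position updates.
from dataclasses import dataclass

@dataclass
class ForceLet:
    t0: float      # time the force description takes effect
    pos0: float    # position at t0 (1-D for brevity)
    vel0: float    # velocity at t0
    accel: float   # constant acceleration encoded by the ForceLet

    def position(self, t):
        dt = t - self.t0
        return self.pos0 + self.vel0 * dt + 0.5 * self.accel * dt * dt

def stream_messages(duration_s, tick_hz):
    """Messages needed if every tick ships a fresh position sample."""
    return int(duration_s * tick_hz)

fl = ForceLet(t0=0.0, pos0=0.0, vel0=2.0, accel=-9.8)
print("stream-of-data messages per second:", stream_messages(1.0, 60))   # 60
print("ForceLet messages:", 1, "-> position at t=0.5:", fl.position(0.5))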
Advisors/Committee Members: Li, Kin F. (supervisor), el Guibaly, Fayez H. F. (supervisor).
Subjects/Keywords: Human-computer interaction; Virtual reality
APA (6th Edition):
Wedlake, M. B. (2017). The Newtonian Architecture for Virtual Landscapes : an architecture, model and implementation. (Thesis). University of Victoria. Retrieved from https://dspace.library.uvic.ca//handle/1828/8846
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Wedlake, Martine Bruce. “The Newtonian Architecture for Virtual Landscapes : an architecture, model and implementation.” 2017. Thesis, University of Victoria. Accessed January 16, 2021.
https://dspace.library.uvic.ca//handle/1828/8846.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Wedlake, Martine Bruce. “The Newtonian Architecture for Virtual Landscapes : an architecture, model and implementation.” 2017. Web. 16 Jan 2021.
Vancouver:
Wedlake MB. The Newtonian Architecture for Virtual Landscapes : an architecture, model and implementation. [Internet] [Thesis]. University of Victoria; 2017. [cited 2021 Jan 16].
Available from: https://dspace.library.uvic.ca//handle/1828/8846.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Wedlake MB. The Newtonian Architecture for Virtual Landscapes : an architecture, model and implementation. [Thesis]. University of Victoria; 2017. Available from: https://dspace.library.uvic.ca//handle/1828/8846
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
16.
Ablart, Damien.
Exploration of mid-air haptics experience design.
Degree: PhD, 2020, University of Sussex
URL: http://sro.sussex.ac.uk/id/eprint/94941/ ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.817544
► Ultrasonic Mid-air Haptics (UMH) is a novel technology that uses the mechanical properties of sound waves to create a pressure point in mid-air. This pressure…
(more)
▼ Ultrasonic Mid-air Haptics (UMH) is a novel technology that uses the mechanical properties of sound waves to create a pressure point in mid-air. This pressure point, called a focal point, can slightly bend the skin and be felt in mid-air without any attachment to the body. This thesis focuses both on studying how to integrate this technology with other senses (i.e. vision and audition) and on exploring the range of tactile sensations it can provide. The first two projects presented in this document concern the integration of ultrasonic mid-air haptics with audio-visual content. The first project describes the process of creating a unique haptic experience that was part of a six-week multisensory exhibition in a museum. The second project moved from the museum to a controlled environment and explored the creation of haptic experiences based on physiological measurements for six short films. Both studies showed the positive value of adding ultrasonic mid-air haptics to traditional media through higher reported arousal and participants' high enthusiasm for multisensory content. In the latter two projects of this thesis, we explored how to extend the range of tactile sensations provided by UMH. We introduced a new technique called Spatio-Temporal Modulation (STM). It enabled the creation of brand-new tactile experiences, including more salient shapes and a wider range of textures. We also provide guidelines on how to control some of the tactile properties of the sensation, including strength, roughness and regularity. The findings of these four projects contribute to the growing body of knowledge on UMH. A summary of the key contributions is provided at the end of the thesis, as well as several leads for future work.
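To make the spatio-temporal modulation idea concrete, the sketch below samples focal-point positions along a circular path that is retraced many times per second. The 40 kHz device update rate, the 2 cm radius and the 100 Hz draw rate are arbitrary example values, not parameters from the thesis.

# Illustrative sketch: focal-point coordinates for STM of a circle.
import numpy as np

def stm_circle(radius_m=0.02, draw_rate_hz=100, device_rate_hz=40_000, repeats=1):
    """Sample (x, y) focal-point positions tracing the circle draw_rate_hz times per second."""
    samples_per_rev = int(device_rate_hz / draw_rate_hz)
    theta = np.linspace(0.0, 2 * np.pi * repeats, samples_per_rev * repeats, endpoint=False)
    return radius_m * np.cos(theta), radius_m * np.sin(theta)

x, y = stm_circle()
speed = 2 * np.pi * 0.02 * 100   # path length * draw rate = tangential speed (m/s)
print(len(x), "focal-point updates per revolution; rendering speed about", round(speed, 2), "m/s")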
Subjects/Keywords: QA0076.9.H85 Human-computer interaction
APA (6th Edition):
Ablart, D. (2020). Exploration of mid-air haptics experience design. (Doctoral Dissertation). University of Sussex. Retrieved from http://sro.sussex.ac.uk/id/eprint/94941/ ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.817544
Chicago Manual of Style (16th Edition):
Ablart, Damien. “Exploration of mid-air haptics experience design.” 2020. Doctoral Dissertation, University of Sussex. Accessed January 16, 2021.
http://sro.sussex.ac.uk/id/eprint/94941/ ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.817544.
MLA Handbook (7th Edition):
Ablart, Damien. “Exploration of mid-air haptics experience design.” 2020. Web. 16 Jan 2021.
Vancouver:
Ablart D. Exploration of mid-air haptics experience design. [Internet] [Doctoral dissertation]. University of Sussex; 2020. [cited 2021 Jan 16].
Available from: http://sro.sussex.ac.uk/id/eprint/94941/ ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.817544.
Council of Science Editors:
Ablart D. Exploration of mid-air haptics experience design. [Doctoral Dissertation]. University of Sussex; 2020. Available from: http://sro.sussex.ac.uk/id/eprint/94941/ ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.817544
17.
Frier, William Thierry Alain.
Rendering spatiotemporal mid-air tactile patterns.
Degree: PhD, 2020, University of Sussex
URL: http://sro.sussex.ac.uk/id/eprint/94802/ ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.817543
► Mid-air haptics is a recent field concerned with conveying haptic feedback in midair to complement 3D interfaces which are already integrating gesture tracking or volumetric…
(more)
▼ Mid-air haptics is a recent field concerned with conveying haptic feedback in mid-air to complement 3D interfaces that already integrate gesture tracking or volumetric displays. While the community has mainly spent the last decade focusing on the technical challenges of developing a mid-air haptic display, little attention has been paid to haptic pattern-rendering techniques. The work presented here targets this last consideration and investigates the perceptual implications of varying the parameters of a recently developed rendering technique called spatiotemporal modulation. The technique aims at producing spatially distributed mid-air haptic patterns by rapidly and repeatedly moving a tactile point along a given pattern path. However, it is unclear how the rendering parameters affect skin deformation and haptic perception. In addition, especially when two parameters are interdependent, it is unclear which should be optimised. In the first study, I used vibrometry to compare the effects of pattern-rendering speed (i.e. the speed at which the tactile point moves along the pattern) and rendering rate (i.e. the rate at which a given pattern is repeated) on skin displacement. The study highlights the importance of rendering speed over rendering rate in maximising skin displacement. A later user study showed that rendering speed also maximised the pattern's perceived strength, corroborating that increased displacement leads to increased perceived strength. A second user study investigated the importance of the pattern sampling rate (i.e. the number of sampled positions along a pattern) while rendering a given mid-air haptic pattern. The results show that decreasing the sampling rate enhanced pattern strength, especially for patterns rendered at a rate under 20 Hz. These results also unlock low-rate stimuli that could not be perceived with the traditional sampling approach. In each of these studies, the discoveries are summarised in comprehensive guidelines, so designers can benefit by implementing my results in their designs of mid-air haptic patterns.
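The interdependence of the rendering parameters mentioned above can be shown with simple arithmetic: for a closed pattern of perimeter P repeated at rate f with N sampled positions, the rendering speed is P·f and the focal point dwells roughly 1/(N·f) at each sample. The pattern geometry and rates below are made-up example values, not figures from the thesis.

# Illustrative arithmetic for the interdependent STM rendering parameters.
def stm_parameters(perimeter_m, rendering_rate_hz, sample_count):
    rendering_speed = perimeter_m * rendering_rate_hz            # m/s along the pattern
    dwell_per_sample = 1.0 / (rendering_rate_hz * sample_count)  # s spent at each sampled position
    return rendering_speed, dwell_per_sample

# A 4 cm-perimeter pattern repeated 20 times per second with 8 sample points:
speed, dwell = stm_parameters(0.04, 20, 8)
print(f"rendering speed = {speed:.2f} m/s, dwell per sample = {dwell*1000:.2f} ms")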
Subjects/Keywords: QA0076.9.H85 Human-computer interaction
APA (6th Edition):
Frier, W. T. A. (2020). Rendering spatiotemporal mid-air tactile patterns. (Doctoral Dissertation). University of Sussex. Retrieved from http://sro.sussex.ac.uk/id/eprint/94802/ ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.817543
Chicago Manual of Style (16th Edition):
Frier, William Thierry Alain. “Rendering spatiotemporal mid-air tactile patterns.” 2020. Doctoral Dissertation, University of Sussex. Accessed January 16, 2021.
http://sro.sussex.ac.uk/id/eprint/94802/ ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.817543.
MLA Handbook (7th Edition):
Frier, William Thierry Alain. “Rendering spatiotemporal mid-air tactile patterns.” 2020. Web. 16 Jan 2021.
Vancouver:
Frier WTA. Rendering spatiotemporal mid-air tactile patterns. [Internet] [Doctoral dissertation]. University of Sussex; 2020. [cited 2021 Jan 16].
Available from: http://sro.sussex.ac.uk/id/eprint/94802/ ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.817543.
Council of Science Editors:
Frier WTA. Rendering spatiotemporal mid-air tactile patterns. [Doctoral Dissertation]. University of Sussex; 2020. Available from: http://sro.sussex.ac.uk/id/eprint/94802/ ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.817543

University of Minnesota
18.
Ekstrand, Michael.
Towards Recommender Engineering: tools and experiments for identifying recommender differences.
Degree: PhD, 2014, University of Minnesota
URL: http://hdl.handle.net/11299/165307
► Since the introduction of their modern form 20 years ago, recommender systems have proven a valuable tool for helping users manage information overload. Two decades of…
(more)
▼ Since the introduction of their modern form 20 years ago, recommender systems have proven a valuable tool for helping users manage information overload. Two decades of research have produced many algorithms for computing recommendations, mechanisms for evaluating their effectiveness, and user interfaces and experiences to embody them. It has also been found that the outputs of different recommendation algorithms differ in user-perceptible ways that affect their suitability to different tasks and information needs. However, there has been little work to systematically map out the space of algorithms and the characteristics they exhibit that make them more or less effective in different applications. As a result, developers of recommender systems must experiment, conducting basic science on each application and its users to determine the approach(es) that will meet their needs. This thesis presents our work towards recommender engineering: the design of recommender systems from well-understood principles of user needs, domain properties, and algorithm behaviors. This will reduce the experimentation required for each new recommender application, allowing developers to design recommender systems that are likely to be effective for their particular application. To that end, we make four contributions: the LensKit toolkit for conducting experiments on a wide variety of recommender algorithms and data sets under different experimental conditions (offline experiments with diverse metrics, online user studies, and the ability to grow to support additional methodologies), along with new developments in object-oriented software configuration to support this toolkit; experiments on the configuration options of widely-used algorithms to provide guidance on tuning and configuring them; an offline experiment on the differences in the errors made by different algorithms; and a user study on the user-perceptible differences between lists of movie recommendations produced by three common recommender algorithms. Much research is needed to fully realize the vision of recommender engineering in the coming years; it is our hope that LensKit will prove a valuable foundation for much of this work, and our experiments represent a small piece of the kinds of studies that must be carried out, replicated, and validated to enable recommender systems to be engineered.
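The following sketch illustrates the general shape of an offline recommender experiment of the kind described: hold out ratings, fit a simple bias-only baseline, and report RMSE on the held-out data. It is a generic illustration, not LensKit's API, and the toy ratings are invented.

# Generic offline-evaluation sketch: train/test split, bias-only recommender, RMSE.
import numpy as np

ratings = [  # (user, item, rating) toy data; real experiments would load MovieLens etc.
    (0, 0, 4.0), (0, 1, 3.0), (1, 0, 5.0), (1, 2, 2.0), (2, 1, 4.0), (2, 2, 3.0),
]
rng = np.random.default_rng(0)
idx = rng.permutation(len(ratings))
test_n = max(1, len(ratings) // 5)
test, train = [ratings[i] for i in idx[:test_n]], [ratings[i] for i in idx[test_n:]]

mu = np.mean([r for _, _, r in train])          # global mean rating
item_bias = {}
for _, i, r in train:
    item_bias.setdefault(i, []).append(r - mu)
item_bias = {i: float(np.mean(v)) for i, v in item_bias.items()}

def predict(user, item):
    return mu + item_bias.get(item, 0.0)        # bias-only baseline prediction

rmse = np.sqrt(np.mean([(predict(u, i) - r) ** 2 for u, i, r in test]))
print("held-out RMSE:", round(float(rmse), 3))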
Subjects/Keywords: Human-computer interaction; Recommender systems
APA (6th Edition):
Ekstrand, M. (2014). Towards Recommender Engineering: tools and experiments for identifying recommender differences. (Doctoral Dissertation). University of Minnesota. Retrieved from http://hdl.handle.net/11299/165307
Chicago Manual of Style (16th Edition):
Ekstrand, Michael. “Towards Recommender Engineering: tools and experiments for identifying recommender differences.” 2014. Doctoral Dissertation, University of Minnesota. Accessed January 16, 2021.
http://hdl.handle.net/11299/165307.
MLA Handbook (7th Edition):
Ekstrand, Michael. “Towards Recommender Engineering: tools and experiments for identifying recommender differences.” 2014. Web. 16 Jan 2021.
Vancouver:
Ekstrand M. Towards Recommender Engineering: tools and experiments for identifying recommender differences. [Internet] [Doctoral dissertation]. University of Minnesota; 2014. [cited 2021 Jan 16].
Available from: http://hdl.handle.net/11299/165307.
Council of Science Editors:
Ekstrand M. Towards Recommender Engineering: tools and experiments for identifying recommender differences. [Doctoral Dissertation]. University of Minnesota; 2014. Available from: http://hdl.handle.net/11299/165307
19.
Feng, Mi.
Quantifying, Modeling and Managing How People Interact with Visualizations on the Web.
Degree: PhD, 2019, Worcester Polytechnic Institute
URL: etd-042319-140601 ; https://digitalcommons.wpi.edu/etd-dissertations/518
► The growing number of interactive visualizations on the web has made it possible for the general public to access data and insights that were…
(more)
▼ The growing number of interactive visualizations on the web has made it possible for the general public to access data and insights that were once only available to domain experts. At the same time, this rise has yielded new challenges for visualization creators, who must now understand and engage a growing and diverse audience. To bridge this gap between creators and audiences, we explore and evaluate components of a design-feedback loop that would enable visualization creators to better accommodate their audiences as they explore the visualizations. In this dissertation, we approach this goal by quantifying, modeling and creating tools that manage people’s open-ended explorations of visualizations on the web. In particular, we: 1. Quantify the effects of design alternatives on people’s interaction patterns in visualizations. We define and evaluate two techniques: HindSight (encoding a user’s interaction history) and text-based search, where controlled experiments suggest that design details can significantly modulate the interaction patterns we observe from participants using a given visualization. 2. Develop new metrics that characterize facets of people’s exploration processes. Specifically, we derive expressive metrics describing interaction patterns such as exploration uniqueness, and use Bayesian inference to model distributional effects on interaction behavior. Our results show that these metrics capture novel patterns in people’s interactions with visualizations. 3. Create tools that manage and analyze an audience’s interaction data for a given visualization. We develop a prototype tool, ReVisIt, that visualizes an audience’s interactions with a given visualization. Through an interview study with visualization creators, we found that ReVisIt makes creators aware of individual and overall trends in their audiences’ interaction patterns. By establishing some of the core elements of a design-feedback loop for visualization creators, the results of this research may have a tangible impact on the future of publishing interactive visualizations on the web. Equipped with techniques, metrics, and tools that realize an initial feedback loop, creators are better able to understand audience behavior and user needs, and thus create visualizations that make data and insights more accessible to the diverse audiences on the web.
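As one plausible reading of an "exploration uniqueness"-style metric, the sketch below computes, for each user, the share of visited visualization states that no other user visited. The exact definition used in the dissertation may differ; the formula, names and toy logs here are assumptions for illustration.

# Illustrative "exploration uniqueness" metric over per-user sets of visited states.
from collections import Counter

def exploration_uniqueness(logs):
    """logs: dict mapping user id -> set of visited visualization states."""
    visit_counts = Counter(state for states in logs.values() for state in states)
    return {
        user: sum(1 for s in states if visit_counts[s] == 1) / max(len(states), 1)
        for user, states in logs.items()
    }

logs = {"u1": {"overview", "filter:2019", "detail:A"},
        "u2": {"overview", "detail:B"},
        "u3": {"overview", "filter:2019"}}
print(exploration_uniqueness(logs))   # states visited by only one user count as unique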
Advisors/Committee Members: Emmanuel Agu, Committee Member, Alex Endert, Committee Member, Lane Harrison, Advisor, Elke Rundensteiner, Committee Member.
Subjects/Keywords: data visualization; human-computer interaction
APA (6th Edition):
Feng, M. (2019). Quantifying, Modeling and Managing How People Interact with Visualizations on the Web. (Doctoral Dissertation). Worcester Polytechnic Institute. Retrieved from etd-042319-140601 ; https://digitalcommons.wpi.edu/etd-dissertations/518
Chicago Manual of Style (16th Edition):
Feng, Mi. “Quantifying, Modeling and Managing How People Interact with Visualizations on the Web.” 2019. Doctoral Dissertation, Worcester Polytechnic Institute. Accessed January 16, 2021.
etd-042319-140601 ; https://digitalcommons.wpi.edu/etd-dissertations/518.
MLA Handbook (7th Edition):
Feng, Mi. “Quantifying, Modeling and Managing How People Interact with Visualizations on the Web.” 2019. Web. 16 Jan 2021.
Vancouver:
Feng M. Quantifying, Modeling and Managing How People Interact with Visualizations on the Web. [Internet] [Doctoral dissertation]. Worcester Polytechnic Institute; 2019. [cited 2021 Jan 16].
Available from: etd-042319-140601 ; https://digitalcommons.wpi.edu/etd-dissertations/518.
Council of Science Editors:
Feng M. Quantifying, Modeling and Managing How People Interact with Visualizations on the Web. [Doctoral Dissertation]. Worcester Polytechnic Institute; 2019. Available from: etd-042319-140601 ; https://digitalcommons.wpi.edu/etd-dissertations/518
20.
Ravulakollu, Kiran Kumar.
Sensory integration model inspired by the superior colliculus for multimodal stimuli localization.
Degree: PhD, 2012, University of Sunderland
URL: http://sure.sunderland.ac.uk/id/eprint/3759/ ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.574246
► Sensory information processing is an important feature of robotic agents that must interact with humans or the environment. For example, numerous attempts have been made…
(more)
▼ Sensory information processing is an important feature of robotic agents that must interact with humans or the environment. For example, numerous attempts have been made to develop robots that have the capability of performing interactive communication. In most cases, individual sensory information is processed and, based on this, an output action is performed. In many robotic applications, visual and audio sensors are used to emulate human-like communication. The Superior Colliculus, located in the mid-brain region of the nervous system, carries out similar audio and visual stimuli integration in both humans and animals. In recent years numerous researchers have attempted the integration of sensory information using biological inspiration. A common focus lies in generating a single output state (i.e. a multimodal output) that can localize the source of the audio and visual stimuli. This research addresses the problem and attempts to find an effective solution by investigating various computational and biological mechanisms involved in the generation of multimodal output. A primary goal is to develop a biologically inspired computational architecture using artificial neural networks. The advantage of this approach is that it mimics the behaviour of the Superior Colliculus, which has the potential to enable more effective human-like communication with robotic agents. The thesis describes the design and development of the architecture, which is constructed from artificial neural networks using radial basis functions. The primary inspiration for the architecture came from emulating the function of the top and deep layers of the Superior Colliculus, due to their visual and audio stimuli localization mechanisms, respectively. The integration experiments have successfully demonstrated the key issues, including low-level multimodal stimuli localization, dimensionality reduction of the audio and visual input space without affecting stimuli strength, and stimuli localization with enhancement and depression phenomena. Comparisons have been made between computational and neural-network-based methods, and between unimodal and multimodal integrated outputs, in order to determine the effectiveness of the approach.
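A toy sketch of radial-basis-style multimodal localization in the spirit described above: a precise visual map and a broader auditory map over azimuth are combined multiplicatively and read out with a winner-take-all peak. The widths, stimuli positions and the multiplicative combination are illustrative assumptions, not the architecture developed in the thesis.

# Minimal illustration of RBF-style audio-visual fusion over a 1-degree azimuth grid.
import numpy as np

azimuth = np.linspace(-90, 90, 181)

def rbf_map(stimulus_deg, sigma_deg):
    return np.exp(-0.5 * ((azimuth - stimulus_deg) / sigma_deg) ** 2)

visual = rbf_map(stimulus_deg=10.0, sigma_deg=5.0)    # vision: narrow, precise tuning
audio = rbf_map(stimulus_deg=20.0, sigma_deg=15.0)    # audition: broader tuning

fused = visual * audio                                 # multiplicative combination
print("visual-only estimate:", azimuth[np.argmax(visual)])
print("multimodal estimate:", azimuth[np.argmax(fused)])   # pulled toward the auditory cue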
Subjects/Keywords: 629.892; Human-Computer Interaction
APA (6th Edition):
Ravulakollu, K. K. (2012). Sensory integration model inspired by the superior colliculus for multimodal stimuli localization. (Doctoral Dissertation). University of Sunderland. Retrieved from http://sure.sunderland.ac.uk/id/eprint/3759/ ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.574246
Chicago Manual of Style (16th Edition):
Ravulakollu, Kiran Kumar. “Sensory integration model inspired by the superior colliculus for multimodal stimuli localization.” 2012. Doctoral Dissertation, University of Sunderland. Accessed January 16, 2021.
http://sure.sunderland.ac.uk/id/eprint/3759/ ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.574246.
MLA Handbook (7th Edition):
Ravulakollu, Kiran Kumar. “Sensory integration model inspired by the superior colliculus for multimodal stimuli localization.” 2012. Web. 16 Jan 2021.
Vancouver:
Ravulakollu KK. Sensory integration model inspired by the superior colliculus for multimodal stimuli localization. [Internet] [Doctoral dissertation]. University of Sunderland; 2012. [cited 2021 Jan 16].
Available from: http://sure.sunderland.ac.uk/id/eprint/3759/ ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.574246.
Council of Science Editors:
Ravulakollu KK. Sensory integration model inspired by the superior colliculus for multimodal stimuli localization. [Doctoral Dissertation]. University of Sunderland; 2012. Available from: http://sure.sunderland.ac.uk/id/eprint/3759/ ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.574246
21.
Kirman, Ben.
Playful networks : measuring, analysing and understanding the social effects of game design.
Degree: PhD, 2011, University of Lincoln
URL: http://eprints.lincoln.ac.uk/id/eprint/15079/ ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.625105
► Games are fundamentally a social activity. The effects of this foundation can be felt at every level - from the social negotiation of rules, through…
(more)
▼ Games are fundamentally a social activity. The effects of this foundation can be felt at every level - from the social negotiation of rules, through cooperation and collaboration between players during the game, to the effects of relationships and social status on play. Social effects can change the way the game is played, but the mechanics of games can also affect the patterns of social behaviours of the players. The arrangement of game mechanics and interfaces together defines a "social architecture". This architecture is not limited to directly social mechanics such as trading and messaging - the game design itself has a holistic effect on social activity. This dissertation frames games around these social aspects, and focuses on analysis of the patterns that emerge from these playful interactions. Firstly, a model is defined to understand games based on the social effects of play, and these effects are explored based on the varying impact they have on the play experience. Mischief and deviance are also investigated as forces that challenge these social effects in and around games. Based on interaction data gathered from server logs of experimental social games, social network analysis is used as a tool to uncover the macroscopic social architectures formed by each design. This allows the use of quantitative methods to understand the nature of the relationship between game design and the social patterns that emerge around games in play. Key findings confirm that social activity follows a heavy-tailed distribution - a small number of "hardcore" players are responsible for a disproportionately large number of interactions in the community of the game. Further than this, the connections between the active hardcore and the rest of the player base show that without the hardcore users, the community of games as "small worlds" would collapse, with large numbers of players being separated from the society within a game. The emergence of grouping behaviour is investigated based on the effect of social feedback. Following findings of social psychology in non-game environments, evidence is provided that highlights the effect of socio-contextual feedback on players forming strongly bound tribal groups within games. The communities formed through the play of games can be described in terms of network graphs - webs of interactions flowing around a network of players. Social network analyses of social games show the emergence of patterns of reciprocity, clustering and tribal behaviours among the players. The evidence also shows that the collections of game mechanics, or social architectures, of games have a predictable effect on the wider social patterns of the players. As such, this suggests games can be specifically engineered for social effects based on changes in the patterns of interactions, and issues around mechanical or interface elements can be identified based on anomalies observed in the network graph of player interactions. Together, this dissertation provides a link between the theoretical ideas around social play to the…
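The kind of analysis described can be sketched with a small interaction graph: build a player graph from (sender, receiver) log rows, measure how much activity the most active player carries, and check whether the graph stays connected without that core. The toy log is invented, and the sketch assumes the networkx library is available; it is an illustration of the approach, not the thesis's analysis pipeline.

# Illustrative social-network analysis of a toy player-interaction log.
import networkx as nx
from collections import Counter

log = [("anna", "bo"), ("anna", "cai"), ("anna", "dee"), ("bo", "cai"),
       ("anna", "bo"), ("eve", "anna"), ("dee", "cai"), ("anna", "eve")]

G = nx.Graph()
activity = Counter()
for sender, receiver in log:
    G.add_edge(sender, receiver)
    activity[sender] += 1

core = {p for p, _ in activity.most_common(1)}            # the most active ("hardcore") player
core_share = sum(activity[p] for p in core) / len(log)
print(f"core player carries {core_share:.0%} of interactions")
print("average clustering:", round(nx.average_clustering(G), 2))
print("still connected without the core?", nx.is_connected(G.subgraph(set(G) - core)))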
Subjects/Keywords: 004; G440 Human-computer Interaction
APA (6th Edition):
Kirman, B. (2011). Playful networks : measuring, analysing and understanding the social effects of game design. (Doctoral Dissertation). University of Lincoln. Retrieved from http://eprints.lincoln.ac.uk/id/eprint/15079/ ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.625105
Chicago Manual of Style (16th Edition):
Kirman, Ben. “Playful networks : measuring, analysing and understanding the social effects of game design.” 2011. Doctoral Dissertation, University of Lincoln. Accessed January 16, 2021.
http://eprints.lincoln.ac.uk/id/eprint/15079/ ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.625105.
MLA Handbook (7th Edition):
Kirman, Ben. “Playful networks : measuring, analysing and understanding the social effects of game design.” 2011. Web. 16 Jan 2021.
Vancouver:
Kirman B. Playful networks : measuring, analysing and understanding the social effects of game design. [Internet] [Doctoral dissertation]. University of Lincoln; 2011. [cited 2021 Jan 16].
Available from: http://eprints.lincoln.ac.uk/id/eprint/15079/ ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.625105.
Council of Science Editors:
Kirman B. Playful networks : measuring, analysing and understanding the social effects of game design. [Doctoral Dissertation]. University of Lincoln; 2011. Available from: http://eprints.lincoln.ac.uk/id/eprint/15079/ ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.625105
22.
Ren, Yi.
Ink-based Note Taking On Mobile Devices.
Degree: 2015, University of Waterloo
URL: http://hdl.handle.net/10012/9373
► Although touchscreen mobile phones are widely used for recording informal text notes (e.g., grocery lists, reminders and directions), the lack of efficient mechanisms for combining…
(more)
▼ Although touchscreen mobile phones are widely used for recording informal text notes (e.g., grocery lists, reminders and directions), the lack of efficient mechanisms for combining informal graphical content with text is a persistent challenge.
In the first part of the thesis, we present InkAnchor, a digital ink editor that allows users to easily create ink-based notes by finger sketching on a mobile phone touchscreen. InkAnchor incorporates flexible anchoring, focus-plus-context input, content chunking, and lightweight editing mechanisms to support the capture of informal notes and annotations. We describe the design and evaluation of InkAnchor through a series of user studies, which revealed that the integrated support enabled by InkAnchor is a significant improvement over current mobile note taking applications on a range of mobile note-taking tasks.
The thesis also introduces FingerTip, a shift-targeting solution to facilitate detailed drawings. Occlusion caused by the user's finger on the screen, and the user's uncertainty about which pixel they are interacting with, are resolved in FingerTip by shifting the actual point where inking occurs beyond the end of the user's finger. However, despite a positive first impression on the part of prospective end users, FingerTip turned out to be only passable for the drawing experience with non-text content.
Combining the results of InkAnchor and FingerTip, this thesis demonstrates that a significant subset of mobile note-taking tasks can be supported using focus-plus-context input, and that tuning for hand-drawn text input has significant value in the mobile smartphone note-taking and sketch-input domain.
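The core of shift-targeting can be illustrated in a few lines: place the ink point a fixed offset beyond the finger contact and clamp it to the canvas so the stroke is never hidden under the fingertip. The 48-pixel offset and the screen dimensions are arbitrary example values, not figures from FingerTip.

# Illustrative shift-targeting: draw above the contact point to avoid finger occlusion.
def shifted_ink_point(touch_x, touch_y, canvas_w, canvas_h, offset_px=48):
    ink_x = min(max(touch_x, 0), canvas_w - 1)
    ink_y = min(max(touch_y - offset_px, 0), canvas_h - 1)   # ink lands above the fingertip
    return ink_x, ink_y

print(shifted_ink_point(200, 300, 1080, 1920))   # -> (200, 252)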
Subjects/Keywords: Note taking; Human computer interaction
APA (6th Edition):
Ren, Y. (2015). Ink-based Note Taking On Mobile Devices. (Thesis). University of Waterloo. Retrieved from http://hdl.handle.net/10012/9373
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Ren, Yi. “Ink-based Note Taking On Mobile Devices.” 2015. Thesis, University of Waterloo. Accessed January 16, 2021.
http://hdl.handle.net/10012/9373.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Ren, Yi. “Ink-based Note Taking On Mobile Devices.” 2015. Web. 16 Jan 2021.
Vancouver:
Ren Y. Ink-based Note Taking On Mobile Devices. [Internet] [Thesis]. University of Waterloo; 2015. [cited 2021 Jan 16].
Available from: http://hdl.handle.net/10012/9373.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Ren Y. Ink-based Note Taking On Mobile Devices. [Thesis]. University of Waterloo; 2015. Available from: http://hdl.handle.net/10012/9373
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Hong Kong University of Science and Technology
23.
Chen, Zhaokang ECE.
Enhancing human-computer interaction by inferring users' intent from eye gaze.
Degree: 2020, Hong Kong University of Science and Technology
URL: http://repository.ust.hk/ir/Record/1783.1-104965 ; https://doi.org/10.14711/thesis-991012818669803412 ; http://repository.ust.hk/ir/bitstream/1783.1-104965/1/th_redirect.html
► This thesis investigates the use of eye gaze in human-computer interaction. One of the biggest challenges of using gaze as an indicator of visual attention…
(more)
▼ This thesis investigates the use of eye gaze in human-computer interaction. One of the biggest challenges of using gaze as an indicator of visual attention is the "Midas Touch" problem: it is very difficult to distinguish between spontaneous eye movements for gathering visual information and intentional eye movements for selection. To avoid this problem, rather than directly using gaze positions as input, we infer users' intent from their past gaze trajectories and then provide appropriate assistance. To be specific, we propose a two-stage hidden-Markov-model-based framework to model the gaze trajectories and to infer the intended targets of the users. Results on a 2D cursor control task and a hyperlink inference task show that this model infers users' intended target with high accuracy. We then integrate this gaze model into two applications: a hybrid gaze/electroencephalography (EEG) brain-computer interface (BCI), which integrates cues from eye gaze and cues from EEG to control a robot arm, and a gaze-based web browser, which dynamically adjusts the amount of time for which the user needs to fixate on the desired hyperlink in order to select it. Our algorithm improves the overall performance of both systems in a natural way without increasing the cognitive load of the user. Moving forward, in order to support applications where the movements of the user should be relatively unconstrained, we propose a new deep neural network for appearance-based gaze estimation. We propose to use dilated convolutions, which extract high-level features at high resolution from eye images, and gaze decomposition, which decomposes the line of sight into the sum of a subject-independent term and a subject-dependent bias. We achieve state-of-the-art subject-independent gaze estimation on the MPIIGaze and EYEDIAP datasets. To further reduce the estimation error, we propose a personal calibration method that works remarkably well on calibration sets of low complexity, i.e., when the number of gaze targets used for calibration and/or the number of images per gaze target is small. Our results show that the proposed calibration outperforms other alternatives when the calibration set is of low complexity. We also collect a large-scale dataset, NISLGaze, which contains large variations in head pose and face location. We use NISLGaze to evaluate gaze estimation both with and without calibration in a more realistic setting.
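One plausible reading of the gaze-decomposition calibration idea is sketched below: the calibrated gaze is taken as the subject-independent network estimate plus a per-subject bias, with the bias estimated as the mean residual over a few calibration samples. The mean-residual estimator and the toy numbers are assumptions for illustration, not necessarily the estimator used in the thesis.

# Illustrative gaze-decomposition calibration: network output + per-subject bias.
import numpy as np

def estimate_bias(predicted_yaw_pitch, target_yaw_pitch):
    """Both arguments: (N, 2) arrays of gaze angles in degrees."""
    return np.mean(np.asarray(target_yaw_pitch) - np.asarray(predicted_yaw_pitch), axis=0)

def calibrated(predicted_yaw_pitch, bias):
    return np.asarray(predicted_yaw_pitch) + bias

preds = np.array([[3.0, -1.0], [10.5, 2.0], [-4.0, 5.5]])    # network outputs on calibration frames
targets = np.array([[4.0, 0.0], [11.0, 3.5], [-3.0, 6.0]])   # known on-screen calibration targets
bias = estimate_bias(preds, targets)
print("estimated subject bias (deg):", bias)
print("calibrated gaze for a new frame:", calibrated([6.0, 1.0], bias))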
Subjects/Keywords: Human-computer interaction; Eye tracking
APA (6th Edition):
Chen, Z. E. (2020). Enhancing human-computer interaction by inferring users' intent from eye gaze. (Thesis). Hong Kong University of Science and Technology. Retrieved from http://repository.ust.hk/ir/Record/1783.1-104965 ; https://doi.org/10.14711/thesis-991012818669803412 ; http://repository.ust.hk/ir/bitstream/1783.1-104965/1/th_redirect.html
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Chen, Zhaokang. “Enhancing human-computer interaction by inferring users' intent from eye gaze.” 2020. Thesis, Hong Kong University of Science and Technology. Accessed January 16, 2021.
http://repository.ust.hk/ir/Record/1783.1-104965 ; https://doi.org/10.14711/thesis-991012818669803412 ; http://repository.ust.hk/ir/bitstream/1783.1-104965/1/th_redirect.html.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Chen, Zhaokang. “Enhancing human-computer interaction by inferring users' intent from eye gaze.” 2020. Web. 16 Jan 2021.
Vancouver:
Chen ZE. Enhancing human-computer interaction by inferring users' intent from eye gaze. [Internet] [Thesis]. Hong Kong University of Science and Technology; 2020. [cited 2021 Jan 16].
Available from: http://repository.ust.hk/ir/Record/1783.1-104965 ; https://doi.org/10.14711/thesis-991012818669803412 ; http://repository.ust.hk/ir/bitstream/1783.1-104965/1/th_redirect.html.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Chen ZE. Enhancing human-computer interaction by inferring users' intent from eye gaze. [Thesis]. Hong Kong University of Science and Technology; 2020. Available from: http://repository.ust.hk/ir/Record/1783.1-104965 ; https://doi.org/10.14711/thesis-991012818669803412 ; http://repository.ust.hk/ir/bitstream/1783.1-104965/1/th_redirect.html
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Rutgers University
24.
Kalampratsidou, Vilelmini.
Co-adaptive multimodal interface guided by real-time multisensory stochastic feedback.
Degree: PhD, Computer Science, 2018, Rutgers University
URL: https://rucore.libraries.rutgers.edu/rutgers-lib/57627/
► In this work, we present new data-types, analytics, and human-computer interfaces as a platform to enable a new type of co-adaptive-behavioural analyses to track neuroplasticity.…
(more)
▼ In this work, we present new data types, analytics, and human-computer interfaces as a platform to enable a new type of co-adaptive behavioural analysis to track neuroplasticity. We present seven different works, all of which are steps in creating an interface that "collaborates" in a closed loop with the sensory-motor system in order to augment existing or substitute lost sensations. Such interfaces are beneficial as they enable systems to adapt and evolve based on the participants' rate of adaptation and preferences in evolution, ultimately steering the system towards favorable regimes. We started by trying to address the question: "how does our sensory-motor system learn and adapt to changes?". In a pointing task, subjects had to discover and learn the sequence of points presented on the screen (which was repetitive) and familiarise themselves with a non-predicted event (which occurred occasionally). In this first study, we examined the learnability of the motor system across seven individuals, and we investigated the learning patterns of each individual. Then, we explored how other bodily signals, such as temperature, affect movement. At this point, we conducted two studies. In the first, we looked into the impact of the temperature range on the quality of the performed movement. This study was conducted with 40 individuals: 20 schizophrenia patients, known to have temperature irregularities, and 20 controls. We identified the differences between the two populations in the range of temperature and in the stochastic signatures of their kinematic data. To take a closer look at the relation between movement and temperature, we conducted a second study utilizing data from a pre-professional ballet student recorded during her 6 h of training and her follow-up sleep. For this study, we designed a new data type that allows us to examine movement as a function of temperature and see how each degree of temperature impacts the fluctuations in movement. This new data structure could be used for the integration of any bodily signal. Next, we identified the need to build visualization tools that could picture in real time sensory information extracted from the analysis that would be informative to the participant. Such tools could be used in a vision-driven co-adaptive interface. For this reason, we designed an in-Matlab avatar that enables us to color-code sensory information onto the corresponding body parts of the participant. In our next study, we examined two college-age individuals (a control and an individual with Asperger syndrome) under different sensory modalities and preferences. We built methods to extract, from the motor stream, each individual's preferred sensory modality (selectivity) and the preferences within that modality that motivate the system to perform at its best (preferability). These two parameters were critical to finally closing the loop by letting the system decide upon the individual preferences. Therefore, we moved from the open-loop approach, to…
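The "movement as a function of temperature" idea can be sketched as a simple binning of moment-to-moment speed fluctuations by the temperature degree at which they occurred. The fluctuation definition (absolute change in speed) and the whole-degree binning are assumptions for illustration, not the data type defined in the dissertation.

# Illustrative binning of speed fluctuations by skin temperature (rounded to whole degrees).
from collections import defaultdict
import statistics

def fluctuations_by_degree(samples):
    """samples: list of (temperature_celsius, speed) pairs ordered in time."""
    bins = defaultdict(list)
    for (t_prev, v_prev), (t_curr, v_curr) in zip(samples, samples[1:]):
        bins[round(t_curr)].append(abs(v_curr - v_prev))    # fluctuation = |change in speed|
    return {deg: statistics.mean(vals) for deg, vals in sorted(bins.items())}

samples = [(34.2, 0.10), (34.4, 0.14), (34.9, 0.11), (35.3, 0.22), (35.1, 0.16)]
print(fluctuations_by_degree(samples))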
Advisors/Committee Members: Kalampratsidou, Vilelmini (author), Torres, Elizabeth (chair), Metaxas, Dimitris (internal member), Bekris, Kostas (internal member), Moustakides, Geiorge (internal member), Ihlefeld, Antje (outside member), School of Graduate Studies.
Subjects/Keywords: Afferent pathways; Human-computer interaction
APA (6th Edition):
Kalampratsidou, V. (2018). Co-adaptive multimodal interface guided by real-time multisensory stochastic feedback. (Doctoral Dissertation). Rutgers University. Retrieved from https://rucore.libraries.rutgers.edu/rutgers-lib/57627/
Chicago Manual of Style (16th Edition):
Kalampratsidou, Vilelmini. “Co-adaptive multimodal interface guided by real-time multisensory stochastic feedback.” 2018. Doctoral Dissertation, Rutgers University. Accessed January 16, 2021.
https://rucore.libraries.rutgers.edu/rutgers-lib/57627/.
MLA Handbook (7th Edition):
Kalampratsidou, Vilelmini. “Co-adaptive multimodal interface guided by real-time multisensory stochastic feedback.” 2018. Web. 16 Jan 2021.
Vancouver:
Kalampratsidou V. Co-adaptive multimodal interface guided by real-time multisensory stochastic feedback. [Internet] [Doctoral dissertation]. Rutgers University; 2018. [cited 2021 Jan 16].
Available from: https://rucore.libraries.rutgers.edu/rutgers-lib/57627/.
Council of Science Editors:
Kalampratsidou V. Co-adaptive multimodal interface guided by real-time multisensory stochastic feedback. [Doctoral Dissertation]. Rutgers University; 2018. Available from: https://rucore.libraries.rutgers.edu/rutgers-lib/57627/

Rutgers University
25.
Hendahewa, Chathra Hasini, 1982-.
Implicit search feature based approach to assist users in exploratory search tasks.
Degree: PhD, Computer Science, 2016, Rutgers University
URL: https://rucore.libraries.rutgers.edu/rutgers-lib/49207/
► Analyzing and modeling users' online search behaviors when conducting exploratory search tasks could be instrumental in discovering search behavior patterns that can then be leveraged…
(more)
▼ Analyzing and modeling users' online search behaviors when conducting exploratory search tasks could be instrumental in discovering search behavior patterns that can then be leveraged to assist users in reaching their search task goals. In this dissertation, we propose a framework for evaluating exploratory search based on implicit features and user search action sequences extracted from transactional log data, modeling different aspects of exploratory search, namely uncertainty, creativity, exploration, and knowledge discovery. We show the effectiveness of the proposed framework by demonstrating how it can be used to understand and evaluate user search performance and to identify struggling and non-struggling users. The major contributions of this dissertation are three-fold. We show that we can effectively model user search behavior using implicit features to predict the user's future performance level with high accuracy when conducting exploratory search tasks. We also provide a recommendation approach to assist struggling users by recommending better search paths in order to improve their search performance and reach the task goal. Further, using simulations we demonstrate that our search-process-based recommendations improve the search performance of struggling users over time, and we validate these findings using both qualitative and quantitative approaches. We also show that the recommended search trail order matters: it outperforms a random order of search trails and would benefit struggling users according to search trail order evaluation metrics.
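To make the implicit-feature idea concrete, the sketch below derives a few per-session features from a toy search log and flags a likely-struggling session with a simple threshold rule. The feature names, thresholds and log format are assumptions for illustration only, not the dissertation's feature set or classifier.

# Illustrative implicit features from a search session, plus a toy struggling heuristic.
def session_features(events):
    """events: list of dicts like {'type': 'query'|'click', 'dwell_s': float}."""
    queries = [e for e in events if e["type"] == "query"]
    clicks = [e for e in events if e["type"] == "click"]
    mean_dwell = sum(c.get("dwell_s", 0.0) for c in clicks) / max(len(clicks), 1)
    return {"queries": len(queries),
            "clicks_per_query": len(clicks) / max(len(queries), 1),
            "mean_dwell_s": mean_dwell}

def looks_struggling(f):
    return f["queries"] >= 5 and f["clicks_per_query"] < 1 and f["mean_dwell_s"] < 10

session = [{"type": "query"}] * 6 + [{"type": "click", "dwell_s": 4.0},
                                     {"type": "click", "dwell_s": 6.0}]
features = session_features(session)
print(features, "struggling:", looks_struggling(features))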
Advisors/Committee Members: Shah, Chirag (chair), Imielinksi, Tomasz (internal member), Gerasoulis, Apostolos (internal member), Russell, Daniel (outside member).
Subjects/Keywords: Human-computer interaction; Information behavior
APA (6th Edition):
Hendahewa, Chathra Hasini, 1. (2016). Implicit search feature based approach to assist users in exploratory search tasks. (Doctoral Dissertation). Rutgers University. Retrieved from https://rucore.libraries.rutgers.edu/rutgers-lib/49207/
Chicago Manual of Style (16th Edition):
Hendahewa, Chathra Hasini, 1982-. “Implicit search feature based approach to assist users in exploratory search tasks.” 2016. Doctoral Dissertation, Rutgers University. Accessed January 16, 2021.
https://rucore.libraries.rutgers.edu/rutgers-lib/49207/.
MLA Handbook (7th Edition):
Hendahewa, Chathra Hasini, 1982-. “Implicit search feature based approach to assist users in exploratory search tasks.” 2016. Web. 16 Jan 2021.
Vancouver:
Hendahewa, Chathra Hasini 1. Implicit search feature based approach to assist users in exploratory search tasks. [Internet] [Doctoral dissertation]. Rutgers University; 2016. [cited 2021 Jan 16].
Available from: https://rucore.libraries.rutgers.edu/rutgers-lib/49207/.
Council of Science Editors:
Hendahewa, Chathra Hasini 1. Implicit search feature based approach to assist users in exploratory search tasks. [Doctoral Dissertation]. Rutgers University; 2016. Available from: https://rucore.libraries.rutgers.edu/rutgers-lib/49207/

Texas A&M University
26.
Bhangaonkar, Sourabh Suhas.
Effect of Narrative Structures on Sensemaking.
Degree: MS, Visualization, 2015, Texas A&M University
URL: http://hdl.handle.net/1969.1/156518
► Making sense of a given situation involves an active processing of information to achieve understanding. Such situations involve a common activity of analyzing a body…
(more)
▼ Making sense of a given situation involves the active processing of information to achieve understanding. Such situations involve the common activity of analyzing a body of given or derived data. Prior literature shows that during the sensemaking process, individuals search for a knowledge representation and encode data in that representation to answer task-specific questions.
In this research we are interested in the implications of a 'narrative structure' used as a mental model during the knowledge representation phase of the sensemaking process, as proposed in Pirolli & Card's sensemaking model, and in examining how this mental model affects the overall quality of the knowledge synthesized during a given analysis task.
We chose the academic domain for this research and conducted a series of user studies involving university researchers. For the initial studies we interviewed and observed researchers to understand how individuals do literature reviews and synthesize knowledge. For the final comparative study, participants were asked to do a literature review using a visualization system called StoryTree. We designed and developed the StoryTree system by analyzing data gathered during the initial studies. This visualization system assisted participants during the literature review by helping them organize intermediate literature details visually while reviewing the given literature and by generating a literature summary at the end of the review task. We analyzed the summary reports written by these participants using measures of narrative coherence and narrative richness to generate a report quality score. Our analysis shows that reports created with the support of a visualization that implements narrative structure are more coherent and richer compared to reports generated using a visualization that does not implement narrative structure.
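A report quality score of the kind mentioned could, in the simplest case, be a weighted combination of the two component measures. The sketch below is purely illustrative: the normalisation, the equal weights and the linear form are assumptions, and the thesis defines its own scoring procedure.

# Illustrative combination of a coherence measure and a richness measure into one score.
def report_quality(coherence, richness, w_coherence=0.5, w_richness=0.5):
    """coherence, richness: values already normalised to the 0..1 range."""
    return w_coherence * coherence + w_richness * richness

print(report_quality(coherence=0.8, richness=0.6))   # -> 0.7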
Advisors/Committee Members: Quek, Francis (advisor), Chu, Sharon (advisor), Yamauchi, Takashi (committee member).
Subjects/Keywords: Narratives; sensemaking; human computer interaction; interaction design
APA (6th Edition):
Bhangaonkar, S. S. (2015). Effect of Narrative Structures on Sensemaking. (Masters Thesis). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/156518
Chicago Manual of Style (16th Edition):
Bhangaonkar, Sourabh Suhas. “Effect of Narrative Structures on Sensemaking.” 2015. Masters Thesis, Texas A&M University. Accessed January 16, 2021.
http://hdl.handle.net/1969.1/156518.
MLA Handbook (7th Edition):
Bhangaonkar, Sourabh Suhas. “Effect of Narrative Structures on Sensemaking.” 2015. Web. 16 Jan 2021.
Vancouver:
Bhangaonkar SS. Effect of Narrative Structures on Sensemaking. [Internet] [Masters thesis]. Texas A&M University; 2015. [cited 2021 Jan 16].
Available from: http://hdl.handle.net/1969.1/156518.
Council of Science Editors:
Bhangaonkar SS. Effect of Narrative Structures on Sensemaking. [Masters Thesis]. Texas A&M University; 2015. Available from: http://hdl.handle.net/1969.1/156518

Texas A&M University
27.
Cummings, Danielle.
Multimodal Interaction for Enhancing Team Coordination on the Battlefield.
Degree: PhD, Computer Science, 2013, Texas A&M University
URL: http://hdl.handle.net/1969.1/151044
► Team coordination is vital to the success of team missions. On the battlefield and in other hazardous environments, mission outcomes are often very unpredictable because…
(more)
▼ Team coordination is vital to the success of team missions. On the battlefield and in other hazardous environments, mission outcomes are often very unpredictable because of unforeseen circumstances and complications that adversely affect team coordination. In addition, the battlefield is constantly evolving as new technology, such as context-aware systems and unmanned drones, becomes available to assist teams in coordinating their efforts. As a result, we must re-evaluate the dynamics of teams that operate in high-stress, hazardous environments in order to learn how to use technology to enhance team coordination within this new context. In dangerous environments where multi-tasking is critical for the safety and success of the team operation, it is important to know what forms of interaction are most conducive to team tasks. We have explored interaction methods, including various types of user input and data feedback mediums, that can assist teams in performing unified tasks on the battlefield. We have conducted an ethnographic analysis of Soldiers and researched technologies such as sketch recognition, physiological data classification, augmented reality, and haptics to arrive at a set of core principles to be used when designing technological tools for these teams. This dissertation provides support for these principles and addresses outstanding problems of team connectivity, mobility, cognitive load, team awareness, and hands-free interaction in mobile military applications. This research has resulted in the development of a multimodal solution that enhances team coordination by allowing users to synchronize their tasks while keeping an overall awareness of team status and their environment. The set of solutions we have developed utilizes optimal interaction techniques implemented and evaluated in related projects; the ultimate goal of this research is to learn how to use technology to provide total situational awareness and team connectivity on the battlefield. This information can be used to aid the research and development of technological solutions for teams that operate in hazardous environments as more advanced resources become available.
Advisors/Committee Members: Hammond, Tracy (advisor), Amato, Nancy (committee member), Shell, Dylan (committee member), McNamara, Ann (committee member).
Subjects/Keywords: Multimodal Interaction; Human-Computer Interaction; Military Applications
APA (6th Edition):
Cummings, D. (2013). Multimodal Interaction for Enhancing Team Coordination on the Battlefield. (Doctoral Dissertation). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/151044
Chicago Manual of Style (16th Edition):
Cummings, Danielle. “Multimodal Interaction for Enhancing Team Coordination on the Battlefield.” 2013. Doctoral Dissertation, Texas A&M University. Accessed January 16, 2021.
http://hdl.handle.net/1969.1/151044.
MLA Handbook (7th Edition):
Cummings, Danielle. “Multimodal Interaction for Enhancing Team Coordination on the Battlefield.” 2013. Web. 16 Jan 2021.
Vancouver:
Cummings D. Multimodal Interaction for Enhancing Team Coordination on the Battlefield. [Internet] [Doctoral dissertation]. Texas A&M University; 2013. [cited 2021 Jan 16].
Available from: http://hdl.handle.net/1969.1/151044.
Council of Science Editors:
Cummings D. Multimodal Interaction for Enhancing Team Coordination on the Battlefield. [Doctoral Dissertation]. Texas A&M University; 2013. Available from: http://hdl.handle.net/1969.1/151044

Tampere University
28.
Carvalho, Mariana.
Designing and evaluating user interface mechanisms for affect labeling to enhance online discussion environments.
Degree: 2020, Tampere University
URL: https://trepo.tuni.fi/handle/10024/121611
► The development of technology and diversification of means of communication have increased the range of social interactions that people have in their daily lives. As…
(more)
▼ The development of technology and diversification of means of communication have in-creased the range of social interactions that people have in their daily lives. As an adverse side effect, it is not uncommon to see heated and uncivil discussions happening online, creating a hostile environment that highly affects both the users and the reputation of the services. This is a particularly troublesome problem in online news sites, which different studies have shown to suffer from different negative consequences caused by hostile interactions on their online platforms.
While several theories have tried to explain why communication decays in digital media, scholars have arrived at conflicting conclusions regarding the responsible factors. One possible approach is to better understand the processes of emotion regulation in order to address the hostile behaviors frequently seen online. Recent research suggests that affect labeling, that is, explicitly naming one’s emotional reactions, could be an option for emotion regulation online. Therefore, this thesis proposes to design and evaluate different affect labeling mechanisms to support individual emotion regulation, aiming to enhance the quality of online discussions.
This thesis studies the design factors that might lead to higher acceptability of an affect labeling mechanism. The work comprised three stages. First, news websites from around the world were benchmarked to identify and categorize the mechanisms they use to ensure user engagement and quality. These categories served as the basis for fourteen user interface designs that would enable an affect labeling process. Next, a user study was conducted in which eighteen participants were interviewed about their online engagement habits and their perceptions of affect labeling, and then evaluated six of the designs for acceptability.
The study confirms some of the negative impacts of online hostility shown by previous studies, such as a general lack of motivation among participants to take part in online discussions. It also shows that participants’ perceptions of affect labeling change depending on the approach used: obvious options, such as open text, were less preferred, while more subtle variations (e.g. reactions, emojis) were well accepted and perceived as a safer form of expression. The evaluations show that participants preferred simplicity and a small number of steps to use a tool. At the same time, they also valued a tool that conveyed a feeling of focus on the user as an individual and allowed a certain level of personalization of their inputs.
Based on the results, guidelines for future affect labeling designs were compiled. It would be worthwhile to repeat the evaluations with participants who engage more actively online, to check whether the results hold and to gather new insights.
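To make the preferred "subtle" labeling mechanism concrete, the sketch below models a one-tap emoji reaction being translated into an explicit affect label and tallied per comment. The label set and class names are illustrative assumptions, not the fourteen interface designs evaluated in the thesis.

```python
# Illustrative only: a one-tap reaction translated into an explicit affect
# label and tallied per comment. The label set and class are assumptions,
# not the interface designs evaluated in the thesis.
from collections import Counter

AFFECT_LABELS = {"😊": "joy", "😠": "anger", "😢": "sadness", "😮": "surprise"}


class CommentAffect:
    def __init__(self, comment_id: str):
        self.comment_id = comment_id
        self.labels = Counter()

    def react(self, emoji: str) -> str:
        # The subtle reaction is mapped to an explicit label, which is the
        # core of the affect-labeling idea.
        label = AFFECT_LABELS.get(emoji, "unlabeled")
        self.labels[label] += 1
        return label


comment = CommentAffect("comment-42")
print(comment.react("😠"))   # -> "anger"
print(comment.labels)        # -> Counter({'anger': 1})
```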
Subjects/Keywords: interaction designers; human-computer interaction; online discussion
APA (6th Edition):
Carvalho, M. (2020). Designing and evaluating user interface mechanisms for affect labeling to enhance online discussion environments. (Masters Thesis). Tampere University. Retrieved from https://trepo.tuni.fi/handle/10024/121611
Chicago Manual of Style (16th Edition):
Carvalho, Mariana. “Designing and evaluating user interface mechanisms for affect labeling to enhance online discussion environments.” 2020. Masters Thesis, Tampere University. Accessed January 16, 2021.
https://trepo.tuni.fi/handle/10024/121611.
MLA Handbook (7th Edition):
Carvalho, Mariana. “Designing and evaluating user interface mechanisms for affect labeling to enhance online discussion environments.” 2020. Web. 16 Jan 2021.
Vancouver:
Carvalho M. Designing and evaluating user interface mechanisms for affect labeling to enhance online discussion environments. [Internet] [Masters thesis]. Tampere University; 2020. [cited 2021 Jan 16].
Available from: https://trepo.tuni.fi/handle/10024/121611.
Council of Science Editors:
Carvalho M. Designing and evaluating user interface mechanisms for affect labeling to enhance online discussion environments. [Masters Thesis]. Tampere University; 2020. Available from: https://trepo.tuni.fi/handle/10024/121611

University of Manitoba
29.
Hasan, Mohammad Khalad.
Around-device interaction for exploring large information spaces on mobile devices.
Degree: Computer Science, 2013, University of Manitoba
URL: http://hdl.handle.net/1993/32250
► The standard approach for browsing information on mobile devices includes touchscreen gestures such as pinch and flick. These gestures often require minute operations such as…
(more)
▼ The standard approach for browsing information on mobile devices includes touchscreen gestures such as pinch and flick. These gestures often require minute operations such as repetitive panning to browse contact lists on a mobile device. Using these gestures to explore large information spaces to facilitate decision-making tasks often involves considerable effort and the user has to deal with screen occlusion and fat-finger situations. However, the void space around mobile devices is much larger than the small touch screen. Researchers have demonstrated that such in-air space can be used as an alternative to touch input for fundamental operations, such as answering and rejecting phone calls. While such prior work has laid the foundation for around-device input, a complete mobile application that deploys and benefits from such an input modality had not been investigated prior to this thesis.
In this thesis, we explored how the in-air space around a mobile device can be used to structure mobile interfaces for complex goals such as making a purchase decision with a smartphone. To achieve this goal, we began by investigating various design factors that influence the performance of accessing content that can virtually exist around the device. We then explored users’ and spectators’ perceptions of using around-device gestures to access on-device information, as their readiness to perform such gestures could lead to rapid adoption of this interaction style. Finally, we used these findings to design and structure a complete mobile commerce application using the around-device space and compared it to traditional touch interfaces. Study results revealed that using an in-air mobile interface can be more efficient than standard touchscreen interactions. Overall, this research took the first successful step in empirically showing the practical value of using the around-device space for exploring large information spaces on mobile devices.
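As a rough illustration of how the around-device space might be partitioned, the sketch below classifies an in-air hand position (in centimetres, with the phone at the origin) into named content zones. The thresholds and zone names are hypothetical and are not the design studied in the thesis.

```python
# Illustrative only: classify an in-air hand position around a phone into
# named content zones. Thresholds and zone names are hypothetical, not the
# layout studied in the thesis.
def around_device_zone(x_cm: float, y_cm: float) -> str:
    """Map a hand position (cm, phone centered at the origin) to a zone."""
    if abs(x_cm) <= 4 and abs(y_cm) <= 8:
        return "on-device"       # within the phone footprint: normal touch UI
    if x_cm < -4:
        return "left-shelf"      # e.g. recently viewed items
    if x_cm > 4:
        return "right-shelf"     # e.g. items selected for comparison
    return "above-or-below"      # e.g. categories and filters


print(around_device_zone(-10.0, 2.0))  # -> "left-shelf"
print(around_device_zone(0.0, 15.0))   # -> "above-or-below"
```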
Advisors/Committee Members: Irani, Pourang (Computer Science) (supervisor), Wang, Yang (Computer Science).
Subjects/Keywords: Human-Computer Interaction; Around-Device Interaction
APA (6th Edition):
Hasan, M. K. (2013). Around-device interaction for exploring large information spaces on mobile devices. (Thesis). University of Manitoba. Retrieved from http://hdl.handle.net/1993/32250
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Hasan, Mohammad Khalad. “Around-device interaction for exploring large information spaces on mobile devices.” 2013. Thesis, University of Manitoba. Accessed January 16, 2021.
http://hdl.handle.net/1993/32250.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Hasan, Mohammad Khalad. “Around-device interaction for exploring large information spaces on mobile devices.” 2013. Web. 16 Jan 2021.
Vancouver:
Hasan MK. Around-device interaction for exploring large information spaces on mobile devices. [Internet] [Thesis]. University of Manitoba; 2013. [cited 2021 Jan 16].
Available from: http://hdl.handle.net/1993/32250.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Hasan MK. Around-device interaction for exploring large information spaces on mobile devices. [Thesis]. University of Manitoba; 2013. Available from: http://hdl.handle.net/1993/32250
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
30.
Ekhtiarabadi, Afshin Ameri.
Unified Incremental Multimodal Interface for Human-Robot Interaction.
Degree: Design and Engineering, 2011, Mälardalen University
URL: http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-13478
► Face-to-face human communication is a multimodal and incremental process. Humans employ different information channels (modalities) for their communication. Since some of these modalities are…
(more)
▼ Face-to-face human communication is a multimodal and incremental process. Humans employ different information channels (modalities) for their communication. Since some of these modalities are more error-prone for specific types of data, multimodal communication can benefit from the strengths of each modality and thereby reduce ambiguities during the interaction. Such interfaces can be applied to intelligent robots that operate in close relation with humans. With this approach, robots can communicate with their human colleagues in the same way the colleagues communicate with each other, leading to easier and more robust human-robot interaction (HRI). In this work we suggest a new method for implementing multimodal interfaces in the HRI domain and present the method deployed on an industrial robot. We show that this interface makes the system easier to operate.
Robot Colleague
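To illustrate how multimodal fusion can reduce ambiguity, the sketch below combines a partial speech hypothesis with a pointing gesture to resolve which object a command refers to. The object names and the simple "speech first, gesture as fallback" rule are assumptions for illustration, not the thesis implementation.

```python
# Illustrative only: resolve the referent of a command by combining a partial
# speech hypothesis with a pointing gesture. Object names and the fallback
# rule are assumptions, not the thesis implementation.
from typing import Optional

KNOWN_OBJECTS = ["red box", "blue box"]


def resolve(partial_utterance: str, pointed_at: Optional[str]) -> Optional[str]:
    # Prefer an explicit verbal referent; fall back to the deictic gesture.
    for name in KNOWN_OBJECTS:
        if name in partial_utterance:
            return name
    return pointed_at


# Speech alone is ambiguous ("pick that up"); the pointing gesture resolves it.
print(resolve("pick that up", pointed_at="blue box"))   # -> "blue box"
print(resolve("pick up the red box", pointed_at=None))  # -> "red box"
```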
Subjects/Keywords: Multimodal Interaction; Human-Robot Interaction; Human Computer Interaction; Human-Computer Interaction (Interaction Design)
APA (6th Edition):
Ekhtiarabadi, A. A. (2011). Unified Incremental Multimodal Interface for Human-Robot Interaction. (Thesis). Mälardalen University. Retrieved from http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-13478
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Ekhtiarabadi, Afshin Ameri. “Unified Incremental Multimodal Interface for Human-Robot Interaction.” 2011. Thesis, Mälardalen University. Accessed January 16, 2021.
http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-13478.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Ekhtiarabadi, Afshin Ameri. “Unified Incremental Multimodal Interface for Human-Robot Interaction.” 2011. Web. 16 Jan 2021.
Vancouver:
Ekhtiarabadi AA. Unified Incremental Multimodal Interface for Human-Robot Interaction. [Internet] [Thesis]. Mälardalen University; 2011. [cited 2021 Jan 16].
Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-13478.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Ekhtiarabadi AA. Unified Incremental Multimodal Interface for Human-Robot Interaction. [Thesis]. Mälardalen University; 2011. Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-13478
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation