You searched for +publisher:"University of North Carolina" +contributor:("Xu, Yi"). One record found.

University of North Carolina

1. Xu, Yi. Toward Robust Video Event Detection and Retrieval Under Adversarial Constraints.

Degree: Computer Science, 2016, University of North Carolina

The continuous stream of videos uploaded and shared on the Internet has been leveraged by computer vision researchers for a myriad of detection and retrieval tasks, including gesture detection, copy detection, and face authentication. However, existing state-of-the-art event detection and retrieval techniques fail to cope with several real-world challenges (e.g., low resolution, low brightness, and noise) under adversarial constraints. This dissertation focuses on these challenges in realistic scenarios and demonstrates practical methods for addressing robustness and efficiency in video event detection and retrieval systems across five application settings: CAPTCHA decoding, face liveness detection, reconstructing typed input on mobile devices, video confirmation attacks, and content-based copy detection. For CAPTCHA decoding, I propose an automated approach that can decode moving-image object recognition (MIOR) CAPTCHAs faster than humans. I show not only that there are inherent weaknesses in current MIOR CAPTCHA designs, but also that several obvious countermeasures (e.g., extending the length of the codeword) are not viable. More importantly, my work highlights the fact that the underlying hard problem chosen by the designers of a leading commercial solution falls into a solvable subclass of computer vision problems. For face liveness detection, I introduce a novel approach to bypass modern face authentication systems. More specifically, by leveraging a handful of pictures of the target user taken from social media, I show how to create realistic, textured, 3D facial models that undermine the security of widely used face authentication solutions. My framework makes use of virtual reality (VR) systems and incorporates the ability to animate the facial model (e.g., raising an eyebrow or smiling) in order to trick liveness detectors into believing that the 3D model is a real human face. I demonstrate that such VR-based spoofing attacks constitute a fundamentally new class of attacks that point to serious weaknesses in camera-based authentication systems. For reconstructing typed input on mobile devices, I propose a method that transcribes the text typed on a keyboard by exploiting video of the user typing, even from significant distances and from repeated reflections. This allows typed input to be reconstructed from the image of a mobile phone’s screen on a user’s eyeball as reflected through a nearby mirror, extending the privacy threat to situations where the adversary is located around a corner from the user. To assess the viability of a video confirmation attack, I explore a technique that exploits the emanations of changes in light to reveal the programs being watched. I leverage the key insight that the observable emanations of a display (e.g., a TV or monitor) during presentation of the viewing content induce a distinctive flicker pattern that can be exploited by an adversary. My proposed approach…

Advisors/Committee Members: Xu, Yi, Frahm, Jan-Michael, Monrose, Fabian, Dunn, Enrique, Berg, Tamara, Crandall, David.

Subjects/Keywords: College of Arts and Sciences; Department of Computer Science
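The video confirmation attack summarized in the abstract hinges on matching the flicker of a display’s light emanations against the brightness profile of known content. The Python sketch below is purely illustrative and not taken from the dissertation: it assumes the observed emanations and each candidate video have already been reduced to per-frame brightness traces, and it scores candidates by normalized cross-correlation.

```python
# Illustrative sketch only (not from the dissertation): rank candidate videos by
# how well their per-frame brightness signatures explain an observed trace of
# light emanations from a display.
import numpy as np

def brightness_signature(frames):
    """Mean luminance per frame; `frames` has shape (num_frames, height, width)."""
    return frames.reshape(frames.shape[0], -1).mean(axis=1)

def peak_normalized_xcorr(observed, reference):
    """Peak of the normalized cross-correlation between two 1-D brightness traces."""
    o = (observed - observed.mean()) / (observed.std() + 1e-9)
    r = (reference - reference.mean()) / (reference.std() + 1e-9)
    # Slide the shorter trace over the longer one and keep the best alignment.
    corr = np.correlate(o, r, mode="valid") / min(len(o), len(r))
    return float(corr.max())

def best_match(observed, candidates):
    """Return the name of the candidate whose signature best matches the observation."""
    return max(candidates, key=lambda name: peak_normalized_xcorr(observed, candidates[name]))
```

In practice the observed trace would be recovered from recorded video of the display’s reflected glow rather than from the display itself, but the matching step would look much the same.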



APA (6th Edition):

Xu, Y. (2016). Toward Robust Video Event Detection and Retrieval Under Adversarial Constraints. (Thesis). University of North Carolina. Retrieved from https://cdr.lib.unc.edu/record/uuid:926a5d94-944d-4c61-963f-b863b0dc1f41

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Xu, Yi. “Toward Robust Video Event Detection and Retrieval Under Adversarial Constraints.” 2016. Thesis, University of North Carolina. Accessed January 20, 2021. https://cdr.lib.unc.edu/record/uuid:926a5d94-944d-4c61-963f-b863b0dc1f41.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Xu, Yi. “Toward Robust Video Event Detection and Retrieval Under Adversarial Constraints.” 2016. Web. 20 Jan 2021.

Vancouver:

Xu Y. Toward Robust Video Event Detection and Retrieval Under Adversarial Constraints. [Internet] [Thesis]. University of North Carolina; 2016. [cited 2021 Jan 20]. Available from: https://cdr.lib.unc.edu/record/uuid:926a5d94-944d-4c61-963f-b863b0dc1f41.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Xu Y. Toward Robust Video Event Detection and Retrieval Under Adversarial Constraints. [Thesis]. University of North Carolina; 2016. Available from: https://cdr.lib.unc.edu/record/uuid:926a5d94-944d-4c61-963f-b863b0dc1f41

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
