
You searched for +publisher:"Georgia Tech" +contributor:("Hays, James"). Showing records 1 – 14 of 14 total matches.
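The query above uses Lucene-style syntax: a leading + marks a required clause, and field:"phrase" restricts a phrase to a metadata field. A minimal sketch of assembling such a query string in Python (the build_query helper is hypothetical, not part of the search site; the grouping parentheses seen in contributor:("Hays, James") are omitted for simplicity):

```python
def build_query(required_fields):
    """Assemble a Lucene-style query string: each (field, phrase) pair
    becomes a required clause of the form +field:"phrase"."""
    return " ".join(f'+{field}:"{phrase}"' for field, phrase in required_fields)

# Reproduces the shape of the query shown above.
query = build_query([("publisher", "Georgia Tech"),
                     ("contributor", "Hays, James")])
print(query)  # +publisher:"Georgia Tech" +contributor:"Hays, James"
```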

No search limiters apply to these results.


1. Srinivasan, Natesh. An image-based approach for 3D reconstruction of urban scenes using architectural symmetries.

Degree: PhD, Computer Science, 2018, Georgia Tech

In this dissertation, I focus on an important, generalizable and freely available sub-category of semantic information in addressing modern reconstruction challenges: the notion of symmetry. …

Subjects/Keywords: Computer vision; 3D reconstruction; Scene understanding; Symmetry

APA (6th Edition):

Srinivasan, N. (2018). An image-based approach for 3D reconstruction of urban scenes using architectural symmetries. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/60276

Chicago Manual of Style (16th Edition):

Srinivasan, Natesh. “An image-based approach for 3D reconstruction of urban scenes using architectural symmetries.” 2018. Doctoral Dissertation, Georgia Tech. Accessed February 22, 2020. http://hdl.handle.net/1853/60276.

MLA Handbook (7th Edition):

Srinivasan, Natesh. “An image-based approach for 3D reconstruction of urban scenes using architectural symmetries.” 2018. Web. 22 Feb 2020.

Vancouver:

Srinivasan N. An image-based approach for 3D reconstruction of urban scenes using architectural symmetries. [Internet] [Doctoral dissertation]. Georgia Tech; 2018. [cited 2020 Feb 22]. Available from: http://hdl.handle.net/1853/60276.

Council of Science Editors:

Srinivasan N. An image-based approach for 3D reconstruction of urban scenes using architectural symmetries. [Doctoral Dissertation]. Georgia Tech; 2018. Available from: http://hdl.handle.net/1853/60276


2. Kundu, Abhijit. Urban 3D Scene Understanding from Images.

Degree: PhD, Interactive Computing, 2018, Georgia Tech

Human vision is marvelous in obtaining a structured representation of complex dynamic scenes, such as spatial scene-layout, re-organization of the scene into its constituent objects, …

Subjects/Keywords: computer vision; machine learning; inverse graphics

APA (6th Edition):

Kundu, A. (2018). Urban 3D Scene Understanding from Images. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/61114

Chicago Manual of Style (16th Edition):

Kundu, Abhijit. “Urban 3D Scene Understanding from Images.” 2018. Doctoral Dissertation, Georgia Tech. Accessed February 22, 2020. http://hdl.handle.net/1853/61114.

MLA Handbook (7th Edition):

Kundu, Abhijit. “Urban 3D Scene Understanding from Images.” 2018. Web. 22 Feb 2020.

Vancouver:

Kundu A. Urban 3D Scene Understanding from Images. [Internet] [Doctoral dissertation]. Georgia Tech; 2018. [cited 2020 Feb 22]. Available from: http://hdl.handle.net/1853/61114.

Council of Science Editors:

Kundu A. Urban 3D Scene Understanding from Images. [Doctoral Dissertation]. Georgia Tech; 2018. Available from: http://hdl.handle.net/1853/61114


3. Li, Yin. Learning embodied models of actions from first person video.

Degree: PhD, Interactive Computing, 2017, Georgia Tech

Advances in sensor miniaturization, low-power computing, and battery life have enabled the first generation of mainstream wearable cameras. Millions of hours of videos are captured …

Subjects/Keywords: First person vision; Egocentric vision; Action recognition; Gaze estimation; Computer vision

APA (6th Edition):

Li, Y. (2017). Learning embodied models of actions from first person video. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/59207

Chicago Manual of Style (16th Edition):

Li, Yin. “Learning embodied models of actions from first person video.” 2017. Doctoral Dissertation, Georgia Tech. Accessed February 22, 2020. http://hdl.handle.net/1853/59207.

MLA Handbook (7th Edition):

Li, Yin. “Learning embodied models of actions from first person video.” 2017. Web. 22 Feb 2020.

Vancouver:

Li Y. Learning embodied models of actions from first person video. [Internet] [Doctoral dissertation]. Georgia Tech; 2017. [cited 2020 Feb 22]. Available from: http://hdl.handle.net/1853/59207.

Council of Science Editors:

Li Y. Learning embodied models of actions from first person video. [Doctoral Dissertation]. Georgia Tech; 2017. Available from: http://hdl.handle.net/1853/59207


4. Wells, Joshua W. Content-adaptive cross-layer optimized video processing using real-time feature feedback.

Degree: PhD, Electrical and Computer Engineering, 2017, Georgia Tech

The objective of this research is to design a low-power video processing system capable of minimizing power consumption through graceful reduction of the quality of …

Subjects/Keywords: Video processing; Video encoding; Object tracking; Low power; Error tolerant; Content adaptive

APA (6th Edition):

Wells, J. W. (2017). Content-adaptive cross-layer optimized video processing using real-time feature feedback. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/59751

Chicago Manual of Style (16th Edition):

Wells, Joshua W. “Content-adaptive cross-layer optimized video processing using real-time feature feedback.” 2017. Doctoral Dissertation, Georgia Tech. Accessed February 22, 2020. http://hdl.handle.net/1853/59751.

MLA Handbook (7th Edition):

Wells, Joshua W. “Content-adaptive cross-layer optimized video processing using real-time feature feedback.” 2017. Web. 22 Feb 2020.

Vancouver:

Wells JW. Content-adaptive cross-layer optimized video processing using real-time feature feedback. [Internet] [Doctoral dissertation]. Georgia Tech; 2017. [cited 2020 Feb 22]. Available from: http://hdl.handle.net/1853/59751.

Council of Science Editors:

Wells JW. Content-adaptive cross-layer optimized video processing using real-time feature feedback. [Doctoral Dissertation]. Georgia Tech; 2017. Available from: http://hdl.handle.net/1853/59751


5. Vo, Nam Ngoc. Image Retrieval and Geolocalization with Deep Learning.

Degree: PhD, Interactive Computing, 2019, Georgia Tech

This work studies the image localization task and explores an image ranking/retrieval approach. Deep learning has advanced many computer vision tasks, including image retrieval; in addition, location …

Subjects/Keywords: Image geolocalization

APA (6th Edition):

Vo, N. N. (2019). Image Retrieval and Geolocalization with Deep Learning. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/61194

Chicago Manual of Style (16th Edition):

Vo, Nam Ngoc. “Image Retrieval and Geolocalization with Deep Learning.” 2019. Doctoral Dissertation, Georgia Tech. Accessed February 22, 2020. http://hdl.handle.net/1853/61194.

MLA Handbook (7th Edition):

Vo, Nam Ngoc. “Image Retrieval and Geolocalization with Deep Learning.” 2019. Web. 22 Feb 2020.

Vancouver:

Vo NN. Image Retrieval and Geolocalization with Deep Learning. [Internet] [Doctoral dissertation]. Georgia Tech; 2019. [cited 2020 Feb 22]. Available from: http://hdl.handle.net/1853/61194.

Council of Science Editors:

Vo NN. Image Retrieval and Geolocalization with Deep Learning. [Doctoral Dissertation]. Georgia Tech; 2019. Available from: http://hdl.handle.net/1853/61194


6. Castro, Daniel Alejandro. Understanding The Motion of A Human State In Video Classification.

Degree: PhD, Computer Science, 2019, Georgia Tech

For the last 50 years we have studied the correspondence between human motion and the action or goal they are attempting to accomplish. Humans themselves …

Subjects/Keywords: action recognition; dance videos; human pose; pose parameterization

APA (6th Edition):

Castro, D. A. (2019). Understanding The Motion of A Human State In Video Classification. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/61262

Chicago Manual of Style (16th Edition):

Castro, Daniel Alejandro. “Understanding The Motion of A Human State In Video Classification.” 2019. Doctoral Dissertation, Georgia Tech. Accessed February 22, 2020. http://hdl.handle.net/1853/61262.

MLA Handbook (7th Edition):

Castro, Daniel Alejandro. “Understanding The Motion of A Human State In Video Classification.” 2019. Web. 22 Feb 2020.

Vancouver:

Castro DA. Understanding The Motion of A Human State In Video Classification. [Internet] [Doctoral dissertation]. Georgia Tech; 2019. [cited 2020 Feb 22]. Available from: http://hdl.handle.net/1853/61262.

Council of Science Editors:

Castro DA. Understanding The Motion of A Human State In Video Classification. [Doctoral Dissertation]. Georgia Tech; 2019. Available from: http://hdl.handle.net/1853/61262


7. Nakamura, Takuma. Multiple-Hypothesis Vision-Based Landing Autonomy.

Degree: PhD, Aerospace Engineering, 2018, Georgia Tech

Unmanned aerial vehicles (UAVs) need humans in the mission loop for many tasks, and landing is one of the tasks that typically involves a human …

Subjects/Keywords: Sensor fusion; Kalman filter; Particle filter; SLAM; Computer vision

APA (6th Edition):

Nakamura, T. (2018). Multiple-Hypothesis Vision-Based Landing Autonomy. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/62195

Chicago Manual of Style (16th Edition):

Nakamura, Takuma. “Multiple-Hypothesis Vision-Based Landing Autonomy.” 2018. Doctoral Dissertation, Georgia Tech. Accessed February 22, 2020. http://hdl.handle.net/1853/62195.

MLA Handbook (7th Edition):

Nakamura, Takuma. “Multiple-Hypothesis Vision-Based Landing Autonomy.” 2018. Web. 22 Feb 2020.

Vancouver:

Nakamura T. Multiple-Hypothesis Vision-Based Landing Autonomy. [Internet] [Doctoral dissertation]. Georgia Tech; 2018. [cited 2020 Feb 22]. Available from: http://hdl.handle.net/1853/62195.

Council of Science Editors:

Nakamura T. Multiple-Hypothesis Vision-Based Landing Autonomy. [Doctoral Dissertation]. Georgia Tech; 2018. Available from: http://hdl.handle.net/1853/62195


8. Agrawal, Aishwarya. Visual Question Answering and Beyond.

Degree: PhD, Interactive Computing, 2019, Georgia Tech

In this dissertation, I propose and study a multi-modal Artificial Intelligence (AI) task called Visual Question Answering (VQA) – given an image and a natural …

Subjects/Keywords: Visual Question Answering; Deep Learning; Computer Vision; Natural Language Processing; Machine Learning

APA (6th Edition):

Agrawal, A. (2019). Visual Question Answering and Beyond. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/62277

Chicago Manual of Style (16th Edition):

Agrawal, Aishwarya. “Visual Question Answering and Beyond.” 2019. Doctoral Dissertation, Georgia Tech. Accessed February 22, 2020. http://hdl.handle.net/1853/62277.

MLA Handbook (7th Edition):

Agrawal, Aishwarya. “Visual Question Answering and Beyond.” 2019. Web. 22 Feb 2020.

Vancouver:

Agrawal A. Visual Question Answering and Beyond. [Internet] [Doctoral dissertation]. Georgia Tech; 2019. [cited 2020 Feb 22]. Available from: http://hdl.handle.net/1853/62277.

Council of Science Editors:

Agrawal A. Visual Question Answering and Beyond. [Doctoral Dissertation]. Georgia Tech; 2019. Available from: http://hdl.handle.net/1853/62277


9. Ahsan, Unaiza. Leveraging Mid-level Representations for Complex Activity Recognition.

Degree: PhD, Interactive Computing, 2019, Georgia Tech

Dynamic scene understanding requires learning representations of the components of the scene including objects, environments, actions and events. Complex activity recognition from images and videos …

Subjects/Keywords: activity recognition; self-supervised learning; event recognition

APA (6th Edition):

Ahsan, U. (2019). Leveraging Mid-level Representations for Complex Activity Recognition. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/61199

Chicago Manual of Style (16th Edition):

Ahsan, Unaiza. “Leveraging Mid-level Representations for Complex Activity Recognition.” 2019. Doctoral Dissertation, Georgia Tech. Accessed February 22, 2020. http://hdl.handle.net/1853/61199.

MLA Handbook (7th Edition):

Ahsan, Unaiza. “Leveraging Mid-level Representations for Complex Activity Recognition.” 2019. Web. 22 Feb 2020.

Vancouver:

Ahsan U. Leveraging Mid-level Representations for Complex Activity Recognition. [Internet] [Doctoral dissertation]. Georgia Tech; 2019. [cited 2020 Feb 22]. Available from: http://hdl.handle.net/1853/61199.

Council of Science Editors:

Ahsan U. Leveraging Mid-level Representations for Complex Activity Recognition. [Doctoral Dissertation]. Georgia Tech; 2019. Available from: http://hdl.handle.net/1853/61199

10. Agrawal, Varun. Visual attribute labeling of images.

Degree: MS, Computer Science, 2019, Georgia Tech

In this work, we analyze and apply various recent techniques in visual attribute recognition and labeling on a common benchmark dataset in order to motivate …

Subjects/Keywords: Images; Attributes; Labeling; Ranking; Computer vision; Machine learning

APA (6th Edition):

Agrawal, V. (2019). Visual attribute labeling of images. (Masters Thesis). Georgia Tech. Retrieved from http://hdl.handle.net/1853/61813

Chicago Manual of Style (16th Edition):

Agrawal, Varun. “Visual attribute labeling of images.” 2019. Masters Thesis, Georgia Tech. Accessed February 22, 2020. http://hdl.handle.net/1853/61813.

MLA Handbook (7th Edition):

Agrawal, Varun. “Visual attribute labeling of images.” 2019. Web. 22 Feb 2020.

Vancouver:

Agrawal V. Visual attribute labeling of images. [Internet] [Masters thesis]. Georgia Tech; 2019. [cited 2020 Feb 22]. Available from: http://hdl.handle.net/1853/61813.

Council of Science Editors:

Agrawal V. Visual attribute labeling of images. [Masters Thesis]. Georgia Tech; 2019. Available from: http://hdl.handle.net/1853/61813

11. Singhal, Prateek. Multimodal tracking for robust pose estimation.

Degree: MS, Computer Science, 2016, Georgia Tech

An on-line 3D visual object tracking framework for monocular cameras by incorporating spatial knowledge and uncertainty from semantic mapping along with high frequency measurements from …

Subjects/Keywords: SLAM; Tracking; Vision

APA (6th Edition):

Singhal, P. (2016). Multimodal tracking for robust pose estimation. (Masters Thesis). Georgia Tech. Retrieved from http://hdl.handle.net/1853/54970

Chicago Manual of Style (16th Edition):

Singhal, Prateek. “Multimodal tracking for robust pose estimation.” 2016. Masters Thesis, Georgia Tech. Accessed February 22, 2020. http://hdl.handle.net/1853/54970.

MLA Handbook (7th Edition):

Singhal, Prateek. “Multimodal tracking for robust pose estimation.” 2016. Web. 22 Feb 2020.

Vancouver:

Singhal P. Multimodal tracking for robust pose estimation. [Internet] [Masters thesis]. Georgia Tech; 2016. [cited 2020 Feb 22]. Available from: http://hdl.handle.net/1853/54970.

Council of Science Editors:

Singhal P. Multimodal tracking for robust pose estimation. [Masters Thesis]. Georgia Tech; 2016. Available from: http://hdl.handle.net/1853/54970

12. Asoka Kumar Shenoi, Ashwin Kumar. A CRF that combines tactile sensing and vision for haptic mapping.

Degree: MS, Electrical and Computer Engineering, 2016, Georgia Tech

We consider the problem of enabling a robot to efficiently obtain a dense haptic map of its visible surroundings. Using the complementary properties of vision …

Subjects/Keywords: Tactile; Vision; Haptic mapping; CNN; CRF

APA (6th Edition):

Asoka Kumar Shenoi, A. K. (2016). A CRF that combines tactile sensing and vision for haptic mapping. (Masters Thesis). Georgia Tech. Retrieved from http://hdl.handle.net/1853/55027

Chicago Manual of Style (16th Edition):

Asoka Kumar Shenoi, Ashwin Kumar. “A CRF that combines tactile sensing and vision for haptic mapping.” 2016. Masters Thesis, Georgia Tech. Accessed February 22, 2020. http://hdl.handle.net/1853/55027.

MLA Handbook (7th Edition):

Asoka Kumar Shenoi, Ashwin Kumar. “A CRF that combines tactile sensing and vision for haptic mapping.” 2016. Web. 22 Feb 2020.

Vancouver:

Asoka Kumar Shenoi AK. A CRF that combines tactile sensing and vision for haptic mapping. [Internet] [Masters thesis]. Georgia Tech; 2016. [cited 2020 Feb 22]. Available from: http://hdl.handle.net/1853/55027.

Council of Science Editors:

Asoka Kumar Shenoi AK. A CRF that combines tactile sensing and vision for haptic mapping. [Masters Thesis]. Georgia Tech; 2016. Available from: http://hdl.handle.net/1853/55027

13. Sridhar, Sandhya. Computer vision for driver assistance systems.

Degree: MS, Electrical and Computer Engineering, 2018, Georgia Tech

The objective of the proposed thesis is to illustrate the training, validation and evaluation of vehicle detection algorithms using computer vision and deep learning methods, …

Subjects/Keywords: Computer vision; Deep learning; Autonomous driving; Object tracking; Object detection

APA (6th Edition):

Sridhar, S. (2018). Computer vision for driver assistance systems. (Masters Thesis). Georgia Tech. Retrieved from http://hdl.handle.net/1853/59938

Chicago Manual of Style (16th Edition):

Sridhar, Sandhya. “Computer vision for driver assistance systems.” 2018. Masters Thesis, Georgia Tech. Accessed February 22, 2020. http://hdl.handle.net/1853/59938.

MLA Handbook (7th Edition):

Sridhar, Sandhya. “Computer vision for driver assistance systems.” 2018. Web. 22 Feb 2020.

Vancouver:

Sridhar S. Computer vision for driver assistance systems. [Internet] [Masters thesis]. Georgia Tech; 2018. [cited 2020 Feb 22]. Available from: http://hdl.handle.net/1853/59938.

Council of Science Editors:

Sridhar S. Computer vision for driver assistance systems. [Masters Thesis]. Georgia Tech; 2018. Available from: http://hdl.handle.net/1853/59938

14. Humayun, Ahmad. Detection and Incremental Object Learning in Videos.

Degree: PhD, Computer Science, 2018, Georgia Tech

Unlike state-of-the-art batch machine learning methods, children have a remarkable facility for learning visual representations of objects through a combination of self-directed visual exploration and …

Subjects/Keywords: Video Object Detection; Incremental Learning; Object Proposals

APA (6th Edition):

Humayun, A. (2018). Detection and Incremental Object Learning in Videos. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/61614

Chicago Manual of Style (16th Edition):

Humayun, Ahmad. “Detection and Incremental Object Learning in Videos.” 2018. Doctoral Dissertation, Georgia Tech. Accessed February 22, 2020. http://hdl.handle.net/1853/61614.

MLA Handbook (7th Edition):

Humayun, Ahmad. “Detection and Incremental Object Learning in Videos.” 2018. Web. 22 Feb 2020.

Vancouver:

Humayun A. Detection and Incremental Object Learning in Videos. [Internet] [Doctoral dissertation]. Georgia Tech; 2018. [cited 2020 Feb 22]. Available from: http://hdl.handle.net/1853/61614.

Council of Science Editors:

Humayun A. Detection and Incremental Object Learning in Videos. [Doctoral Dissertation]. Georgia Tech; 2018. Available from: http://hdl.handle.net/1853/61614
