
You searched for +publisher:"Georgia Tech" +contributor:("Batra, Dhruv"). Showing records 1 – 18 of 18 total matches.

No search limiters apply to these results.


Georgia Tech

1. Chattopadhyay, Prithvijit. Evaluating visual conversational agents via cooperative human-AI games.

Degree: MS, Computer Science, 2019, Georgia Tech

 As AI continues to advance, human-AI teams are inevitable. However, progress in AI is routinely measured in isolation, without a human in the loop. It… (more)

Subjects/Keywords: Visual conversational agents; Visual dialog; Human-AI teams; Reinforcement learning; Machine learning; Computer vision; Artificial intelligence


APA (6th Edition):

Chattopadhyay, P. (2019). Evaluating visual conversational agents via cooperative human-AI games. (Masters Thesis). Georgia Tech. Retrieved from http://hdl.handle.net/1853/61308

Chicago Manual of Style (16th Edition):

Chattopadhyay, Prithvijit. “Evaluating visual conversational agents via cooperative human-AI games.” 2019. Masters Thesis, Georgia Tech. Accessed April 16, 2021. http://hdl.handle.net/1853/61308.

MLA Handbook (7th Edition):

Chattopadhyay, Prithvijit. “Evaluating visual conversational agents via cooperative human-AI games.” 2019. Web. 16 Apr 2021.

Vancouver:

Chattopadhyay P. Evaluating visual conversational agents via cooperative human-AI games. [Internet] [Masters thesis]. Georgia Tech; 2019. [cited 2021 Apr 16]. Available from: http://hdl.handle.net/1853/61308.

Council of Science Editors:

Chattopadhyay P. Evaluating visual conversational agents via cooperative human-AI games. [Masters Thesis]. Georgia Tech; 2019. Available from: http://hdl.handle.net/1853/61308

2. Deshraj. EvalAI: Evaluating AI systems at scale.

Degree: MS, Computer Science, 2018, Georgia Tech

 Artificial Intelligence research has progressed tremendously in the last few years. There has been the introduction of several new multi-modal datasets and tasks due to… (more)

Subjects/Keywords: Machine learning; Artificial intelligence; Evalai; Deep learning; Computer vision; Reinforcement learning; Systems; Scale; Data science; Kaggle


APA (6th Edition):

Deshraj. (2018). EvalAI: Evaluating AI systems at scale. (Masters Thesis). Georgia Tech. Retrieved from http://hdl.handle.net/1853/60738

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Chicago Manual of Style (16th Edition):

Deshraj. “EvalAI: Evaluating AI systems at scale.” 2018. Masters Thesis, Georgia Tech. Accessed April 16, 2021. http://hdl.handle.net/1853/60738.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

MLA Handbook (7th Edition):

Deshraj. “EvalAI: Evaluating AI systems at scale.” 2018. Web. 16 Apr 2021.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Vancouver:

Deshraj. EvalAI: Evaluating AI systems at scale. [Internet] [Masters thesis]. Georgia Tech; 2018. [cited 2021 Apr 16]. Available from: http://hdl.handle.net/1853/60738.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Council of Science Editors:

Deshraj. EvalAI: Evaluating AI systems at scale. [Masters Thesis]. Georgia Tech; 2018. Available from: http://hdl.handle.net/1853/60738

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete



3. Prabhu, Viraj Uday. Few-shot learning for dermatological disease diagnosis.

Degree: MS, Computer Science, 2019, Georgia Tech

 In this thesis, we consider the problem of clinical image classification for the purpose of aiding doctors in dermatological disease diagnosis. Diagnosis of dermatological disease… (more)

Subjects/Keywords: Image classification; Low shot learning; Automated diagnosis


APA (6th Edition):

Prabhu, V. U. (2019). Few-shot learning for dermatological disease diagnosis. (Masters Thesis). Georgia Tech. Retrieved from http://hdl.handle.net/1853/61296

Chicago Manual of Style (16th Edition):

Prabhu, Viraj Uday. “Few-shot learning for dermatological disease diagnosis.” 2019. Masters Thesis, Georgia Tech. Accessed April 16, 2021. http://hdl.handle.net/1853/61296.

MLA Handbook (7th Edition):

Prabhu, Viraj Uday. “Few-shot learning for dermatological disease diagnosis.” 2019. Web. 16 Apr 2021.

Vancouver:

Prabhu VU. Few-shot learning for dermatological disease diagnosis. [Internet] [Masters thesis]. Georgia Tech; 2019. [cited 2021 Apr 16]. Available from: http://hdl.handle.net/1853/61296.

Council of Science Editors:

Prabhu VU. Few-shot learning for dermatological disease diagnosis. [Masters Thesis]. Georgia Tech; 2019. Available from: http://hdl.handle.net/1853/61296



4. Tendulkar, Purva Milind. Computational methods for creative inspiration in thematic typography and dance.

Degree: MS, Computer Science, 2020, Georgia Tech

 As progress in technology continues, there is a need to adapt and upscale tools used in artistic and creative processes. This can either take the… (more)

Subjects/Keywords: Creativity; Human studies; Typography; Dance; Music; AI; Computer vision


APA (6th Edition):

Tendulkar, P. M. (2020). Computational methods for creative inspiration in thematic typography and dance. (Masters Thesis). Georgia Tech. Retrieved from http://hdl.handle.net/1853/63699

Chicago Manual of Style (16th Edition):

Tendulkar, Purva Milind. “Computational methods for creative inspiration in thematic typography and dance.” 2020. Masters Thesis, Georgia Tech. Accessed April 16, 2021. http://hdl.handle.net/1853/63699.

MLA Handbook (7th Edition):

Tendulkar, Purva Milind. “Computational methods for creative inspiration in thematic typography and dance.” 2020. Web. 16 Apr 2021.

Vancouver:

Tendulkar PM. Computational methods for creative inspiration in thematic typography and dance. [Internet] [Masters thesis]. Georgia Tech; 2020. [cited 2021 Apr 16]. Available from: http://hdl.handle.net/1853/63699.

Council of Science Editors:

Tendulkar PM. Computational methods for creative inspiration in thematic typography and dance. [Masters Thesis]. Georgia Tech; 2020. Available from: http://hdl.handle.net/1853/63699



5. Raval, Ananya. Generation of Linux commands using natural language descriptions.

Degree: MS, Computer Science, 2018, Georgia Tech

 Translating natural language into source code or programs is an important problem in natural language understanding – both in terms of practical applications and in… (more)

Subjects/Keywords: Natural language processing; Program synthesis; Neural machine translation


APA (6th Edition):

Raval, A. (2018). Generation of Linux commands using natural language descriptions. (Masters Thesis). Georgia Tech. Retrieved from http://hdl.handle.net/1853/59849

Chicago Manual of Style (16th Edition):

Raval, Ananya. “Generation of Linux commands using natural language descriptions.” 2018. Masters Thesis, Georgia Tech. Accessed April 16, 2021. http://hdl.handle.net/1853/59849.

MLA Handbook (7th Edition):

Raval, Ananya. “Generation of Linux commands using natural language descriptions.” 2018. Web. 16 Apr 2021.

Vancouver:

Raval A. Generation of Linux commands using natural language descriptions. [Internet] [Masters thesis]. Georgia Tech; 2018. [cited 2021 Apr 16]. Available from: http://hdl.handle.net/1853/59849.

Council of Science Editors:

Raval A. Generation of Linux commands using natural language descriptions. [Masters Thesis]. Georgia Tech; 2018. Available from: http://hdl.handle.net/1853/59849



6. Agrawal, Aishwarya. Visual question answering and beyond.

Degree: PhD, Interactive Computing, 2019, Georgia Tech

 In this dissertation, I propose and study a multi-modal Artificial Intelligence (AI) task called Visual Question Answering (VQA) – given an image and a natural… (more)

Subjects/Keywords: Visual question answering; Deep learning; Computer vision; Natural language processing; Machine learning


APA (6th Edition):

Agrawal, A. (2019). Visual question answering and beyond. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/62277

Chicago Manual of Style (16th Edition):

Agrawal, Aishwarya. “Visual question answering and beyond.” 2019. Doctoral Dissertation, Georgia Tech. Accessed April 16, 2021. http://hdl.handle.net/1853/62277.

MLA Handbook (7th Edition):

Agrawal, Aishwarya. “Visual question answering and beyond.” 2019. Web. 16 Apr 2021.

Vancouver:

Agrawal A. Visual question answering and beyond. [Internet] [Doctoral dissertation]. Georgia Tech; 2019. [cited 2021 Apr 16]. Available from: http://hdl.handle.net/1853/62277.

Council of Science Editors:

Agrawal A. Visual question answering and beyond. [Doctoral Dissertation]. Georgia Tech; 2019. Available from: http://hdl.handle.net/1853/62277



7. Yang, Jianwei. Structured visual understanding, generation and reasoning.

Degree: PhD, Interactive Computing, 2020, Georgia Tech

 The world around us is highly structured. In the real world, a single object usually consists of multiple components organized in some structures (e.g., a… (more)

Subjects/Keywords: Scene graph; Structured visual understanding; Visual generation; Reasoning; Vision and language


APA (6th Edition):

Yang, J. (2020). Structured visual understanding, generation and reasoning. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/62744

Chicago Manual of Style (16th Edition):

Yang, Jianwei. “Structured visual understanding, generation and reasoning.” 2020. Doctoral Dissertation, Georgia Tech. Accessed April 16, 2021. http://hdl.handle.net/1853/62744.

MLA Handbook (7th Edition):

Yang, Jianwei. “Structured visual understanding, generation and reasoning.” 2020. Web. 16 Apr 2021.

Vancouver:

Yang J. Structured visual understanding, generation and reasoning. [Internet] [Doctoral dissertation]. Georgia Tech; 2020. [cited 2021 Apr 16]. Available from: http://hdl.handle.net/1853/62744.

Council of Science Editors:

Yang J. Structured visual understanding, generation and reasoning. [Doctoral Dissertation]. Georgia Tech; 2020. Available from: http://hdl.handle.net/1853/62744



8. Lu, Jiasen. Visually grounded language understanding and generation.

Degree: PhD, Computer Science, 2020, Georgia Tech

 The world around us involves multiple modalities – we see objects, feel texture, hear sounds, smell odors and so on. In order for Artificial Intelligence… (more)

Subjects/Keywords: Computer vision; Natural language processing; Visual question answering; Multi-task learning; Deep learning


APA (6th Edition):

Lu, J. (2020). Visually grounded language understanding and generation. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/62745

Chicago Manual of Style (16th Edition):

Lu, Jiasen. “Visually grounded language understanding and generation.” 2020. Doctoral Dissertation, Georgia Tech. Accessed April 16, 2021. http://hdl.handle.net/1853/62745.

MLA Handbook (7th Edition):

Lu, Jiasen. “Visually grounded language understanding and generation.” 2020. Web. 16 Apr 2021.

Vancouver:

Lu J. Visually grounded language understanding and generation. [Internet] [Doctoral dissertation]. Georgia Tech; 2020. [cited 2021 Apr 16]. Available from: http://hdl.handle.net/1853/62745.

Council of Science Editors:

Lu J. Visually grounded language understanding and generation. [Doctoral Dissertation]. Georgia Tech; 2020. Available from: http://hdl.handle.net/1853/62745



9. Das, Abhishek. Building agents that can see, talk, and act.

Degree: PhD, Interactive Computing, 2020, Georgia Tech

 A long-term goal in AI is to build general-purpose intelligent agents that simultaneously possess the ability to perceive the rich visual environment around us (through… (more)

Subjects/Keywords: Computer vision; Natural language processing; Machine learning; Embodiment


APA (6th Edition):

Das, A. (2020). Building agents that can see, talk, and act. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/62768

Chicago Manual of Style (16th Edition):

Das, Abhishek. “Building agents that can see, talk, and act.” 2020. Doctoral Dissertation, Georgia Tech. Accessed April 16, 2021. http://hdl.handle.net/1853/62768.

MLA Handbook (7th Edition):

Das, Abhishek. “Building agents that can see, talk, and act.” 2020. Web. 16 Apr 2021.

Vancouver:

Das A. Building agents that can see, talk, and act. [Internet] [Doctoral dissertation]. Georgia Tech; 2020. [cited 2021 Apr 16]. Available from: http://hdl.handle.net/1853/62768.

Council of Science Editors:

Das A. Building agents that can see, talk, and act. [Doctoral Dissertation]. Georgia Tech; 2020. Available from: http://hdl.handle.net/1853/62768



10. Cogswell, Michael Andrew. Disentangling neural network representations for improved generalization.

Degree: PhD, Interactive Computing, 2020, Georgia Tech

 Despite the increasingly broad perceptual capabilities of neural networks, applying them to new tasks requires significant engineering effort in data collection and model design. Generally,… (more)

Subjects/Keywords: Deep learning; Disentanglement; Compositionality; Representation learning; Visual dialog; Language emergence


APA (6th Edition):

Cogswell, M. A. (2020). Disentangling neural network representations for improved generalization. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/62813

Chicago Manual of Style (16th Edition):

Cogswell, Michael Andrew. “Disentangling neural network representations for improved generalization.” 2020. Doctoral Dissertation, Georgia Tech. Accessed April 16, 2021. http://hdl.handle.net/1853/62813.

MLA Handbook (7th Edition):

Cogswell, Michael Andrew. “Disentangling neural network representations for improved generalization.” 2020. Web. 16 Apr 2021.

Vancouver:

Cogswell MA. Disentangling neural network representations for improved generalization. [Internet] [Doctoral dissertation]. Georgia Tech; 2020. [cited 2021 Apr 16]. Available from: http://hdl.handle.net/1853/62813.

Council of Science Editors:

Cogswell MA. Disentangling neural network representations for improved generalization. [Doctoral Dissertation]. Georgia Tech; 2020. Available from: http://hdl.handle.net/1853/62813



11. Hsu, Yen-Chang. Learning from pairwise similarity for visual categorization.

Degree: PhD, Electrical and Computer Engineering, 2020, Georgia Tech

 Learning high-capacity machine learning models for perception, especially for high-dimensional inputs such as in computer vision, requires a large amount of human-annotated data. Many efforts… (more)

Subjects/Keywords: Transfer learning; Pairwise similarity; Clustering; Deep learning; Neural networks; Classification; Out-of-distribution detection; Instance segmentation; Lane detection


APA (6th Edition):

Hsu, Y. (2020). Learning from pairwise similarity for visual categorization. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/62814

Chicago Manual of Style (16th Edition):

Hsu, Yen-Chang. “Learning from pairwise similarity for visual categorization.” 2020. Doctoral Dissertation, Georgia Tech. Accessed April 16, 2021. http://hdl.handle.net/1853/62814.

MLA Handbook (7th Edition):

Hsu, Yen-Chang. “Learning from pairwise similarity for visual categorization.” 2020. Web. 16 Apr 2021.

Vancouver:

Hsu Y. Learning from pairwise similarity for visual categorization. [Internet] [Doctoral dissertation]. Georgia Tech; 2020. [cited 2021 Apr 16]. Available from: http://hdl.handle.net/1853/62814.

Council of Science Editors:

Hsu Y. Learning from pairwise similarity for visual categorization. [Doctoral Dissertation]. Georgia Tech; 2020. Available from: http://hdl.handle.net/1853/62814



12. Ramasamy Selvaraju, Ramprasaath. Explaining model decisions and fixing them via human feedback.

Degree: PhD, Computer Science, 2020, Georgia Tech

 Deep networks have enabled unprecedented breakthroughs in a variety of computer vision tasks. While these models enable superior performance, their increasing complexity and lack of… (more)

Subjects/Keywords: Visual explanations; Interpretability; Computer vision; Vision and language; Deep learning; Grad-CAM


APA (6th Edition):

Ramasamy Selvaraju, R. (2020). Explaining model decisions and fixing them via human feedback. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/62867

Chicago Manual of Style (16th Edition):

Ramasamy Selvaraju, Ramprasaath. “Explaining model decisions and fixing them via human feedback.” 2020. Doctoral Dissertation, Georgia Tech. Accessed April 16, 2021. http://hdl.handle.net/1853/62867.

MLA Handbook (7th Edition):

Ramasamy Selvaraju, Ramprasaath. “Explaining model decisions and fixing them via human feedback.” 2020. Web. 16 Apr 2021.

Vancouver:

Ramasamy Selvaraju R. Explaining model decisions and fixing them via human feedback. [Internet] [Doctoral dissertation]. Georgia Tech; 2020. [cited 2021 Apr 16]. Available from: http://hdl.handle.net/1853/62867.

Council of Science Editors:

Ramasamy Selvaraju R. Explaining model decisions and fixing them via human feedback. [Doctoral Dissertation]. Georgia Tech; 2020. Available from: http://hdl.handle.net/1853/62867



13. Chandrasekaran, Arjun. Towards natural human-AI interactions in vision and language.

Degree: PhD, Interactive Computing, 2019, Georgia Tech

 Inter-human interaction is a rich form of communication. Human interactions typically leverage a good theory of mind, involve pragmatics, story-telling, humor, sarcasm, empathy, sympathy, etc.… (more)

Subjects/Keywords: AI; Neural networks; Human-AI interaction; Human-AI collaboration; Humor; Narrative; Storytelling; Explainable AI; Interpretability; Predictability; GuessWhich; Human-in-the-loop evaluation


APA (6th Edition):

Chandrasekaran, A. (2019). Towards natural human-AI interactions in vision and language. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/62323

Chicago Manual of Style (16th Edition):

Chandrasekaran, Arjun. “Towards natural human-AI interactions in vision and language.” 2019. Doctoral Dissertation, Georgia Tech. Accessed April 16, 2021. http://hdl.handle.net/1853/62323.

MLA Handbook (7th Edition):

Chandrasekaran, Arjun. “Towards natural human-AI interactions in vision and language.” 2019. Web. 16 Apr 2021.

Vancouver:

Chandrasekaran A. Towards natural human-AI interactions in vision and language. [Internet] [Doctoral dissertation]. Georgia Tech; 2019. [cited 2021 Apr 16]. Available from: http://hdl.handle.net/1853/62323.

Council of Science Editors:

Chandrasekaran A. Towards natural human-AI interactions in vision and language. [Doctoral Dissertation]. Georgia Tech; 2019. Available from: http://hdl.handle.net/1853/62323



14. Shaban, Amirreza. Low-shot learning for object recognition, detection, and segmentation.

Degree: PhD, Interactive Computing, 2020, Georgia Tech

 Deep Neural Networks are powerful at solving classification problems in computer vision. However, learning classifiers with these models requires a large amount of labeled training… (more)

Subjects/Keywords: Few-shot learning; Low-shot learning; Bi-level optimization; Few-shot semantic segmentation; Video object segmentation; Weakly-supervised few-shot object detection


APA (6th Edition):

Shaban, A. (2020). Low-shot learning for object recognition, detection, and segmentation. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/63599

Chicago Manual of Style (16th Edition):

Shaban, Amirreza. “Low-shot learning for object recognition, detection, and segmentation.” 2020. Doctoral Dissertation, Georgia Tech. Accessed April 16, 2021. http://hdl.handle.net/1853/63599.

MLA Handbook (7th Edition):

Shaban, Amirreza. “Low-shot learning for object recognition, detection, and segmentation.” 2020. Web. 16 Apr 2021.

Vancouver:

Shaban A. Low-shot learning for object recognition, detection, and segmentation. [Internet] [Doctoral dissertation]. Georgia Tech; 2020. [cited 2021 Apr 16]. Available from: http://hdl.handle.net/1853/63599.

Council of Science Editors:

Shaban A. Low-shot learning for object recognition, detection, and segmentation. [Doctoral Dissertation]. Georgia Tech; 2020. Available from: http://hdl.handle.net/1853/63599



15. Castro, Daniel Alejandro. Understanding the motion of a human state in video classification.

Degree: PhD, Computer Science, 2019, Georgia Tech

 For the last 50 years we have studied the correspondence between human motion and the action or goal they are attempting to accomplish. Humans themselves… (more)

Subjects/Keywords: Action recognition; Dance videos; Human pose; Pose parameterization


APA (6th Edition):

Castro, D. A. (2019). Understanding the motion of a human state in video classification. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/61262

Chicago Manual of Style (16th Edition):

Castro, Daniel Alejandro. “Understanding the motion of a human state in video classification.” 2019. Doctoral Dissertation, Georgia Tech. Accessed April 16, 2021. http://hdl.handle.net/1853/61262.

MLA Handbook (7th Edition):

Castro, Daniel Alejandro. “Understanding the motion of a human state in video classification.” 2019. Web. 16 Apr 2021.

Vancouver:

Castro DA. Understanding the motion of a human state in video classification. [Internet] [Doctoral dissertation]. Georgia Tech; 2019. [cited 2021 Apr 16]. Available from: http://hdl.handle.net/1853/61262.

Council of Science Editors:

Castro DA. Understanding the motion of a human state in video classification. [Doctoral Dissertation]. Georgia Tech; 2019. Available from: http://hdl.handle.net/1853/61262



16. Drews, Paul Michael. Visual attention for high speed driving.

Degree: PhD, Electrical and Computer Engineering, 2018, Georgia Tech

 Coupling of control and perception is an especially difficult problem. This thesis investigates this problem in the context of aggressive off-road driving. By jointly developing… (more)

Subjects/Keywords: Robotics; Computer vision; Autonomous vehicles; Neural networks; High speed


APA (6th Edition):

Drews, P. M. (2018). Visual attention for high speed driving. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/61183

Chicago Manual of Style (16th Edition):

Drews, Paul Michael. “Visual attention for high speed driving.” 2018. Doctoral Dissertation, Georgia Tech. Accessed April 16, 2021. http://hdl.handle.net/1853/61183.

MLA Handbook (7th Edition):

Drews, Paul Michael. “Visual attention for high speed driving.” 2018. Web. 16 Apr 2021.

Vancouver:

Drews PM. Visual attention for high speed driving. [Internet] [Doctoral dissertation]. Georgia Tech; 2018. [cited 2021 Apr 16]. Available from: http://hdl.handle.net/1853/61183.

Council of Science Editors:

Drews PM. Visual attention for high speed driving. [Doctoral Dissertation]. Georgia Tech; 2018. Available from: http://hdl.handle.net/1853/61183



17. Vijayakumar, Ashwin Kalyan. Improved search techniques for structured prediction.

Degree: PhD, Interactive Computing, 2020, Georgia Tech

 Many useful AI tasks, like machine translation, captioning, or program synthesis to name a few, can be abstracted as structured prediction problems. For these… (more)

Subjects/Keywords: Sequence decoding; Program synthesis


APA (6th Edition):

Vijayakumar, A. K. (2020). Improved search techniques for structured prediction. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/63701

Chicago Manual of Style (16th Edition):

Vijayakumar, Ashwin Kalyan. “Improved search techniques for structured prediction.” 2020. Doctoral Dissertation, Georgia Tech. Accessed April 16, 2021. http://hdl.handle.net/1853/63701.

MLA Handbook (7th Edition):

Vijayakumar, Ashwin Kalyan. “Improved search techniques for structured prediction.” 2020. Web. 16 Apr 2021.

Vancouver:

Vijayakumar AK. Improved search techniques for structured prediction. [Internet] [Doctoral dissertation]. Georgia Tech; 2020. [cited 2021 Apr 16]. Available from: http://hdl.handle.net/1853/63701.

Council of Science Editors:

Vijayakumar AK. Improved search techniques for structured prediction. [Doctoral Dissertation]. Georgia Tech; 2020. Available from: http://hdl.handle.net/1853/63701

18. Vedantam, Shanmukha Ramak. Interpretation, grounding and imagination for machine intelligence.

Degree: PhD, Interactive Computing, 2018, Georgia Tech

 Understanding how to model computer vision and natural language jointly is a long-standing challenge in artificial intelligence. In this thesis, I study how modeling vision… (more)

Subjects/Keywords: Computer vision; Machine learning; Artificial intelligence


APA (6th Edition):

Vedantam, S. R. (2018). Interpretation, grounding and imagination for machine intelligence. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/60799

Chicago Manual of Style (16th Edition):

Vedantam, Shanmukha Ramak. “Interpretation, grounding and imagination for machine intelligence.” 2018. Doctoral Dissertation, Georgia Tech. Accessed April 16, 2021. http://hdl.handle.net/1853/60799.

MLA Handbook (7th Edition):

Vedantam, Shanmukha Ramak. “Interpretation, grounding and imagination for machine intelligence.” 2018. Web. 16 Apr 2021.

Vancouver:

Vedantam SR. Interpretation, grounding and imagination for machine intelligence. [Internet] [Doctoral dissertation]. Georgia Tech; 2018. [cited 2021 Apr 16]. Available from: http://hdl.handle.net/1853/60799.

Council of Science Editors:

Vedantam SR. Interpretation, grounding and imagination for machine intelligence. [Doctoral Dissertation]. Georgia Tech; 2018. Available from: http://hdl.handle.net/1853/60799
