You searched for subject:(Visual Question Answering). Showing records 1 – 13 of 13 total matches.

University of Illinois – Urbana-Champaign

1. Gulganjalli Narasimhan, Medhini. Visual question answering using external knowledge.

Degree: MS, Computer Science, 2019, University of Illinois – Urbana-Champaign

 Accurately answering a question about a given image requires combining observations with general knowledge. While this is effortless for humans, reasoning with general knowledge remains… (more)

Subjects/Keywords: Visual question answering; knowledge bases; graph convolution networks
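
The keywords above mention graph convolution networks over knowledge bases. As a point of reference only (a generic sketch, not this thesis's model), a single graph-convolution layer that propagates information between connected fact nodes can be written as follows; the toy adjacency matrix, feature sizes, and NumPy implementation are illustrative assumptions.

import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step: average each node's neighbourhood
    (including itself) and apply a learned projection followed by ReLU."""
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))     # degree normalisation
    return np.maximum(D_inv @ A_hat @ H @ W, 0)  # ReLU(D^-1 A_hat H W)

# Toy knowledge graph: 4 fact nodes with 8-dim features, 8 -> 8 projection.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
H = rng.normal(size=(4, 8))
W = rng.normal(size=(8, 8))
print(gcn_layer(A, H, W).shape)   # (4, 8): updated node representations

Stacking such layers lets evidence from related facts flow into each node's representation before it is combined with image and question features.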

APA (6th Edition):

Gulganjalli Narasimhan, M. (2019). Visual question answering using external knowledge. (Thesis). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/104918

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Gulganjalli Narasimhan, Medhini. “Visual question answering using external knowledge.” 2019. Thesis, University of Illinois – Urbana-Champaign. Accessed October 23, 2020. http://hdl.handle.net/2142/104918.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Gulganjalli Narasimhan, Medhini. “Visual question answering using external knowledge.” 2019. Web. 23 Oct 2020.

Vancouver:

Gulganjalli Narasimhan M. Visual question answering using external knowledge. [Internet] [Thesis]. University of Illinois – Urbana-Champaign; 2019. [cited 2020 Oct 23]. Available from: http://hdl.handle.net/2142/104918.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Gulganjalli Narasimhan M. Visual question answering using external knowledge. [Thesis]. University of Illinois – Urbana-Champaign; 2019. Available from: http://hdl.handle.net/2142/104918

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Texas – Austin

2. -7737-8648. Designing algorithms that assist people to ask visual questions.

Degree: MS in Information Studies, Information Studies, 2019, University of Texas – Austin

Visual question answering services can help people with visual impairments answer their visual questions by letting them submit an image and a question. However,… (more)

Subjects/Keywords: Visual question answering; People with visual impairments; Visually impaired people; Algorithms; Visual questions; Algorithm design

APA (6th Edition):

-7737-8648. (2019). Designing algorithms that assist people to ask visual questions. (Masters Thesis). University of Texas – Austin. Retrieved from http://dx.doi.org/10.26153/tsw/8226

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Chicago Manual of Style (16th Edition):

-7737-8648. “Designing algorithms that assist people to ask visual questions.” 2019. Masters Thesis, University of Texas – Austin. Accessed October 23, 2020. http://dx.doi.org/10.26153/tsw/8226.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

MLA Handbook (7th Edition):

-7737-8648. “Designing algorithms that assist people to ask visual questions.” 2019. Web. 23 Oct 2020.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Vancouver:

-7737-8648. Designing algorithms that assist people to ask visual questions. [Internet] [Masters thesis]. University of Texas – Austin; 2019. [cited 2020 Oct 23]. Available from: http://dx.doi.org/10.26153/tsw/8226.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Council of Science Editors:

-7737-8648. Designing algorithms that assist people to ask visual questions. [Masters Thesis]. University of Texas – Austin; 2019. Available from: http://dx.doi.org/10.26153/tsw/8226

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

3. Ben-Younes, Hedi. Multi-modal representation learning towards visual reasoning : Apprentissage de représentation multi-modale et raisonnement visuel.

Degree: Docteur es, Informatique, 2019, Sorbonne université

The quantity of images on the internet is growing considerably, and techniques for automatically processing this content need to be developed. While… (more)

Subjects/Keywords: Apprentissage profond; Vision artificielle; Visual question answering; Représentation multi-modale; Intelligence artificielle; Vision artificielle; Deep learning; Computer vision; Visual question answering; Multi-Modal representation; Machine learning; Visual reasoning; 006.37

APA (6th Edition):

Ben-Younes, H. (2019). Multi-modal representation learning towards visual reasoning : Apprentissage de représentation multi-modale et raisonnement visuel. (Doctoral Dissertation). Sorbonne université. Retrieved from http://www.theses.fr/2019SORUS173

Chicago Manual of Style (16th Edition):

Ben-Younes, Hedi. “Multi-modal representation learning towards visual reasoning : Apprentissage de représentation multi-modale et raisonnement visuel.” 2019. Doctoral Dissertation, Sorbonne université. Accessed October 23, 2020. http://www.theses.fr/2019SORUS173.

MLA Handbook (7th Edition):

Ben-Younes, Hedi. “Multi-modal representation learning towards visual reasoning : Apprentissage de représentation multi-modale et raisonnement visuel.” 2019. Web. 23 Oct 2020.

Vancouver:

Ben-Younes H. Multi-modal representation learning towards visual reasoning : Apprentissage de représentation multi-modale et raisonnement visuel. [Internet] [Doctoral dissertation]. Sorbonne université; 2019. [cited 2020 Oct 23]. Available from: http://www.theses.fr/2019SORUS173.

Council of Science Editors:

Ben-Younes H. Multi-modal representation learning towards visual reasoning : Apprentissage de représentation multi-modale et raisonnement visuel. [Doctoral Dissertation]. Sorbonne université; 2019. Available from: http://www.theses.fr/2019SORUS173


Rochester Institute of Technology

4. Kafle, Kushal. Advancing Multi-Modal Deep Learning: Towards Language-Grounded Visual Understanding.

Degree: PhD, Chester F. Carlson Center for Imaging Science (COS), 2020, Rochester Institute of Technology

  Using deep learning, computer vision now rivals people at object recognition and detection, opening doors to tackle new challenges in image understanding. Among these… (more)

Subjects/Keywords: Computer vision; Dataset bias; Deep learning; Natural language processing; Visual question answering

APA (6th Edition):

Kafle, K. (2020). Advancing Multi-Modal Deep Learning: Towards Language-Grounded Visual Understanding. (Doctoral Dissertation). Rochester Institute of Technology. Retrieved from https://scholarworks.rit.edu/theses/10357

Chicago Manual of Style (16th Edition):

Kafle, Kushal. “Advancing Multi-Modal Deep Learning: Towards Language-Grounded Visual Understanding.” 2020. Doctoral Dissertation, Rochester Institute of Technology. Accessed October 23, 2020. https://scholarworks.rit.edu/theses/10357.

MLA Handbook (7th Edition):

Kafle, Kushal. “Advancing Multi-Modal Deep Learning: Towards Language-Grounded Visual Understanding.” 2020. Web. 23 Oct 2020.

Vancouver:

Kafle K. Advancing Multi-Modal Deep Learning: Towards Language-Grounded Visual Understanding. [Internet] [Doctoral dissertation]. Rochester Institute of Technology; 2020. [cited 2020 Oct 23]. Available from: https://scholarworks.rit.edu/theses/10357.

Council of Science Editors:

Kafle K. Advancing Multi-Modal Deep Learning: Towards Language-Grounded Visual Understanding. [Doctoral Dissertation]. Rochester Institute of Technology; 2020. Available from: https://scholarworks.rit.edu/theses/10357


Georgia Tech

5. Agrawal, Aishwarya. Visual question answering and beyond.

Degree: PhD, Interactive Computing, 2019, Georgia Tech

 In this dissertation, I propose and study a multi-modal Artificial Intelligence (AI) task called Visual Question Answering (VQA)  – given an image and a natural… (more)
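
As the abstract states, VQA takes an image and a natural-language question and must produce an answer. For orientation only, here is a minimal sketch of the common "encode, fuse, classify" VQA baseline; it is not the dissertation's model, and the feature sizes, answer vocabulary, and elementwise-product fusion are illustrative assumptions (PyTorch).

import torch
import torch.nn as nn

class TinyVQA(nn.Module):
    """Minimal VQA baseline: encode the question with an LSTM, fuse it with a
    pooled image feature, and classify over a fixed answer vocabulary."""
    def __init__(self, vocab_size=1000, num_answers=3000, dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, 300)
        self.lstm = nn.LSTM(300, dim, batch_first=True)
        self.img_proj = nn.Linear(2048, dim)       # assumes a pooled CNN feature
        self.classifier = nn.Linear(dim, num_answers)

    def forward(self, image_feat, question_tokens):
        _, (h, _) = self.lstm(self.embed(question_tokens))
        q = h[-1]                                   # last hidden state of the question
        fused = torch.tanh(self.img_proj(image_feat)) * torch.tanh(q)
        return self.classifier(fused)               # logits over candidate answers

# One fake example: a 2048-d image feature and a 6-token question.
model = TinyVQA()
logits = model(torch.randn(1, 2048), torch.randint(0, 1000, (1, 6)))
print(logits.shape)   # torch.Size([1, 3000])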

Subjects/Keywords: Visual question answering; Deep learning; Computer vision; Natural language processing; Machine learning

APA (6th Edition):

Agrawal, A. (2019). Visual question answering and beyond. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/62277

Chicago Manual of Style (16th Edition):

Agrawal, Aishwarya. “Visual question answering and beyond.” 2019. Doctoral Dissertation, Georgia Tech. Accessed October 23, 2020. http://hdl.handle.net/1853/62277.

MLA Handbook (7th Edition):

Agrawal, Aishwarya. “Visual question answering and beyond.” 2019. Web. 23 Oct 2020.

Vancouver:

Agrawal A. Visual question answering and beyond. [Internet] [Doctoral dissertation]. Georgia Tech; 2019. [cited 2020 Oct 23]. Available from: http://hdl.handle.net/1853/62277.

Council of Science Editors:

Agrawal A. Visual question answering and beyond. [Doctoral Dissertation]. Georgia Tech; 2019. Available from: http://hdl.handle.net/1853/62277


Georgia Tech

6. Lu, Jiasen. Visually grounded language understanding and generation.

Degree: PhD, Computer Science, 2020, Georgia Tech

 The world around us involves multiple modalities  – we see objects, feel texture, hear sounds, smell odors and so on. In order for Artificial Intelligence… (more)

Subjects/Keywords: Computer vision; Natural language processing; Visual question answering; Multi-task learning; Deep learning

APA (6th Edition):

Lu, J. (2020). Visually grounded language understanding and generation. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/62745

Chicago Manual of Style (16th Edition):

Lu, Jiasen. “Visually grounded language understanding and generation.” 2020. Doctoral Dissertation, Georgia Tech. Accessed October 23, 2020. http://hdl.handle.net/1853/62745.

MLA Handbook (7th Edition):

Lu, Jiasen. “Visually grounded language understanding and generation.” 2020. Web. 23 Oct 2020.

Vancouver:

Lu J. Visually grounded language understanding and generation. [Internet] [Doctoral dissertation]. Georgia Tech; 2020. [cited 2020 Oct 23]. Available from: http://hdl.handle.net/1853/62745.

Council of Science Editors:

Lu J. Visually grounded language understanding and generation. [Doctoral Dissertation]. Georgia Tech; 2020. Available from: http://hdl.handle.net/1853/62745


Boston University

7. Xu, Huijuan. Vision and language understanding with localized evidence.

Degree: PhD, Computer Science, 2018, Boston University

 Enabling machines to solve computer vision tasks with natural language components can greatly improve human interaction with computers. In this thesis, we address vision and… (more)

Subjects/Keywords: Computer science; Dense video captioning; Temporal activity detection; Text-to-clip retrieval; Visual question answering

APA (6th Edition):

Xu, H. (2018). Vision and language understanding with localized evidence. (Doctoral Dissertation). Boston University. Retrieved from http://hdl.handle.net/2144/34790

Chicago Manual of Style (16th Edition):

Xu, Huijuan. “Vision and language understanding with localized evidence.” 2018. Doctoral Dissertation, Boston University. Accessed October 23, 2020. http://hdl.handle.net/2144/34790.

MLA Handbook (7th Edition):

Xu, Huijuan. “Vision and language understanding with localized evidence.” 2018. Web. 23 Oct 2020.

Vancouver:

Xu H. Vision and language understanding with localized evidence. [Internet] [Doctoral dissertation]. Boston University; 2018. [cited 2020 Oct 23]. Available from: http://hdl.handle.net/2144/34790.

Council of Science Editors:

Xu H. Vision and language understanding with localized evidence. [Doctoral Dissertation]. Boston University; 2018. Available from: http://hdl.handle.net/2144/34790


KTH

8. Dushi, Denis. Using Deep Learning to Answer Visual Questions from Blind People.

Degree: Electrical Engineering and Computer Science (EECS), 2019, KTH

A natural application of artificial intelligence is to help blind people overcome their daily visual challenges through AI-based assistive technologies. In this regard, one… (more)

Subjects/Keywords: Visual Question Answering; VizWiz; Deep Learning; Computer and Information Sciences; Data- och informationsvetenskap

APA (6th Edition):

Dushi, D. (2019). Using Deep Learning to Answer Visual Questions from Blind People. (Thesis). KTH. Retrieved from http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-247910

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Dushi, Denis. “Using Deep Learning to Answer Visual Questions from Blind People.” 2019. Thesis, KTH. Accessed October 23, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-247910.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Dushi, Denis. “Using Deep Learning to Answer Visual Questions from Blind People.” 2019. Web. 23 Oct 2020.

Vancouver:

Dushi D. Using Deep Learning to Answer Visual Questions from Blind People. [Internet] [Thesis]. KTH; 2019. [cited 2020 Oct 23]. Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-247910.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Dushi D. Using Deep Learning to Answer Visual Questions from Blind People. [Thesis]. KTH; 2019. Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-247910

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

9. Lin, Xiao. Leveraging Multimodal Perspectives to Learn Common Sense for Vision and Language Tasks.

Degree: PhD, Computer Engineering, 2017, Virginia Tech

 Learning and reasoning with common sense is a challenging problem in Artificial Intelligence (AI). Humans have the remarkable ability to interpret images and text from… (more)

Subjects/Keywords: Common Sense; Multimodal; Visual Question Answering; Image-Caption Ranking; Vision and Language; Active Learning

APA (6th Edition):

Lin, X. (2017). Leveraging Multimodal Perspectives to Learn Common Sense for Vision and Language Tasks. (Doctoral Dissertation). Virginia Tech. Retrieved from http://hdl.handle.net/10919/79521

Chicago Manual of Style (16th Edition):

Lin, Xiao. “Leveraging Multimodal Perspectives to Learn Common Sense for Vision and Language Tasks.” 2017. Doctoral Dissertation, Virginia Tech. Accessed October 23, 2020. http://hdl.handle.net/10919/79521.

MLA Handbook (7th Edition):

Lin, Xiao. “Leveraging Multimodal Perspectives to Learn Common Sense for Vision and Language Tasks.” 2017. Web. 23 Oct 2020.

Vancouver:

Lin X. Leveraging Multimodal Perspectives to Learn Common Sense for Vision and Language Tasks. [Internet] [Doctoral dissertation]. Virginia Tech; 2017. [cited 2020 Oct 23]. Available from: http://hdl.handle.net/10919/79521.

Council of Science Editors:

Lin X. Leveraging Multimodal Perspectives to Learn Common Sense for Vision and Language Tasks. [Doctoral Dissertation]. Virginia Tech; 2017. Available from: http://hdl.handle.net/10919/79521

10. Zeng, Xiaoyu, M.S. in Information Studies. Understanding & predicting the skills needed to answer a visual question: Understanding and predicting the skills needed to answer a visual question.

Degree: MS in Information Studies, Information Studies, 2019, University of Texas – Austin

 We proposed a method to automatically identify the relevant cognitive skills to perform a visual question answering (VQA) task. Focusing on a subset of VizWiz… (more)

Subjects/Keywords: Visual question answering; Multimodal machine learning; Accessibility

APA (6th Edition):

Zeng, Xiaoyu, M. S. i. I. S. (2019). Understanding & predicting the skills needed to answer a visual question: Understanding and predicting the skills needed to answer a visual question. (Masters Thesis). University of Texas – Austin. Retrieved from http://dx.doi.org/10.26153/tsw/2957

Chicago Manual of Style (16th Edition):

Zeng, Xiaoyu, M S in Information Studies. “Understanding & predicting the skills needed to answer a visual question: Understanding and predicting the skills needed to answer a visual question.” 2019. Masters Thesis, University of Texas – Austin. Accessed October 23, 2020. http://dx.doi.org/10.26153/tsw/2957.

MLA Handbook (7th Edition):

Zeng, Xiaoyu, M S in Information Studies. “Understanding & predicting the skills needed to answer a visual question: Understanding and predicting the skills needed to answer a visual question.” 2019. Web. 23 Oct 2020.

Vancouver:

Zeng, Xiaoyu MSiIS. Understanding & predicting the skills needed to answer a visual question: Understanding and predicting the skills needed to answer a visual question. [Internet] [Masters thesis]. University of Texas – Austin; 2019. [cited 2020 Oct 23]. Available from: http://dx.doi.org/10.26153/tsw/2957.

Council of Science Editors:

Zeng, Xiaoyu MSiIS. Understanding & predicting the skills needed to answer a visual question: Understanding and predicting the skills needed to answer a visual question. [Masters Thesis]. University of Texas – Austin; 2019. Available from: http://dx.doi.org/10.26153/tsw/2957


Australian National University

11. Anderson, Peter James. Vision and Language Learning: From Image Captioning and Visual Question Answering towards Embodied Agents .

Degree: 2018, Australian National University

 Each time we ask for an object, describe a scene, follow directions or read a document containing images or figures, we are converting information between… (more)

Subjects/Keywords: image caption generation; image captioning; automatic image description; visual question answering; VQA; COCO; COCO dataset; vision and language; language and vision; vision and language navigation; VLN; SPICE; SPICE metric; image caption evaluation; image caption evaluation metric; bottom up and top down attention; visual attention; image attention; Matterport; Matterport3D; Matterport3D Simulator; constrained beam search; embodied agents; vision and language agents

APA (6th Edition):

Anderson, P. J. (2018). Vision and Language Learning: From Image Captioning and Visual Question Answering towards Embodied Agents . (Thesis). Australian National University. Retrieved from http://hdl.handle.net/1885/164018

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Anderson, Peter James. “Vision and Language Learning: From Image Captioning and Visual Question Answering towards Embodied Agents .” 2018. Thesis, Australian National University. Accessed October 23, 2020. http://hdl.handle.net/1885/164018.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Anderson, Peter James. “Vision and Language Learning: From Image Captioning and Visual Question Answering towards Embodied Agents .” 2018. Web. 23 Oct 2020.

Vancouver:

Anderson PJ. Vision and Language Learning: From Image Captioning and Visual Question Answering towards Embodied Agents . [Internet] [Thesis]. Australian National University; 2018. [cited 2020 Oct 23]. Available from: http://hdl.handle.net/1885/164018.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Anderson PJ. Vision and Language Learning: From Image Captioning and Visual Question Answering towards Embodied Agents . [Thesis]. Australian National University; 2018. Available from: http://hdl.handle.net/1885/164018

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

12. Shih, Kevin Jonathan. Learning visual tasks with selective attention.

Degree: PhD, Computer Science, 2017, University of Illinois – Urbana-Champaign

 Knowing where to look in an image can significantly improve performance in computer vision tasks by eliminating irrelevant information from the rest of the input… (more)
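
To make "knowing where to look" concrete, the following is a small sketch of soft visual attention, not the thesis's actual method: each image-region feature is scored against a question vector, the scores are turned into weights with a softmax, and the regions are averaged under those weights. The shapes and dot-product scoring are illustrative assumptions.

import numpy as np

def soft_attention(regions, query):
    """Weight image-region features by their relevance to a query vector."""
    scores = regions @ query                       # one relevance score per region
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                       # softmax over regions
    return weights @ regions, weights              # attended feature, attention map

rng = np.random.default_rng(0)
regions = rng.normal(size=(36, 64))   # e.g. 36 region features, 64-d each
query = rng.normal(size=64)           # question representation
attended, w = soft_attention(regions, query)
print(attended.shape, w.argmax())     # (64,) and the index of the most relevant region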

Subjects/Keywords: Computer vision; Visual attention; Visual question answering (VQA); Keypoint localization; Part localization; Image recognition; Fine-grained image recognition; Deep learning; Multi-task learning; Machine learning

APA (6th Edition):

Shih, K. J. (2017). Learning visual tasks with selective attention. (Doctoral Dissertation). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/98359

Chicago Manual of Style (16th Edition):

Shih, Kevin Jonathan. “Learning visual tasks with selective attention.” 2017. Doctoral Dissertation, University of Illinois – Urbana-Champaign. Accessed October 23, 2020. http://hdl.handle.net/2142/98359.

MLA Handbook (7th Edition):

Shih, Kevin Jonathan. “Learning visual tasks with selective attention.” 2017. Web. 23 Oct 2020.

Vancouver:

Shih KJ. Learning visual tasks with selective attention. [Internet] [Doctoral dissertation]. University of Illinois – Urbana-Champaign; 2017. [cited 2020 Oct 23]. Available from: http://hdl.handle.net/2142/98359.

Council of Science Editors:

Shih KJ. Learning visual tasks with selective attention. [Doctoral Dissertation]. University of Illinois – Urbana-Champaign; 2017. Available from: http://hdl.handle.net/2142/98359


Université de Montréal

13. Pahuja, Vardaan. Visual question answering with modules and language modeling.

Degree: 2019, Université de Montréal

Subjects/Keywords: Réponse visuelle à une question; Visual Question Answering; Visual Reasoning; Modular Networks; Neural Structure Optimization; Language Modeling; Raisonnement Visuel; Réseaux Modulaires; Modélisation du Langage; Optimisation de la structure neuronale; Applied Sciences - Artificial Intelligence / Sciences appliqués et technologie - Intelligence artificielle (UMI : 0800)

APA (6th Edition):

Pahuja, V. (2019). Visual question answering with modules and language modeling. (Thesis). Université de Montréal. Retrieved from http://hdl.handle.net/1866/22534

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Pahuja, Vardaan. “Visual question answering with modules and language modeling.” 2019. Thesis, Université de Montréal. Accessed October 23, 2020. http://hdl.handle.net/1866/22534.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Pahuja, Vardaan. “Visual question answering with modules and language modeling.” 2019. Web. 23 Oct 2020.

Vancouver:

Pahuja V. Visual question answering with modules and language modeling. [Internet] [Thesis]. Université de Montréal; 2019. [cited 2020 Oct 23]. Available from: http://hdl.handle.net/1866/22534.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Pahuja V. Visual question answering with modules and language modeling. [Thesis]. Université de Montréal; 2019. Available from: http://hdl.handle.net/1866/22534

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
