You searched for subject:(Video captioning). Showing records 1 – 8 of 8 total matches.

No search limiters apply to these results.

University of Texas – Austin

1. -3729-8456. Natural-language video description with deep recurrent neural networks.

Degree: Computer Sciences, 2017, University of Texas – Austin

For most people, watching a brief video and describing what happened (in words) is an easy task. For machines, extracting meaning from video pixels and…

Subjects/Keywords: Video; Captioning; Description; LSTM; RNN; Recurrent; Neural networks; Image captioning; Video captioning; Language and vision
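
As an aside for readers skimming these results (and not drawn from the cited thesis itself): the keywords above point to the common recurrent encoder-decoder recipe for video captioning, in which pre-extracted CNN frame features are summarized by one LSTM and a second LSTM generates the caption word by word. A minimal, illustrative PyTorch sketch of that idea, with made-up dimensions, might look like:

```python
# Illustrative sketch only -- not code from the cited thesis.
# Assumes each video is already a sequence of pre-extracted CNN frame features.
import torch
import torch.nn as nn

class VideoCaptioner(nn.Module):
    def __init__(self, feat_dim=2048, hidden=512, vocab_size=10000, embed_dim=256):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True)   # reads frame features
        self.embed = nn.Embedding(vocab_size, embed_dim)              # caption word embeddings
        self.decoder = nn.LSTM(embed_dim, hidden, batch_first=True)   # generates the caption
        self.out = nn.Linear(hidden, vocab_size)                      # scores over the vocabulary

    def forward(self, frame_feats, caption_tokens):
        # frame_feats: (batch, num_frames, feat_dim); caption_tokens: (batch, seq_len)
        _, (h, c) = self.encoder(frame_feats)        # summarize the video into a hidden state
        words = self.embed(caption_tokens)
        dec_out, _ = self.decoder(words, (h, c))     # decode conditioned on the video state
        return self.out(dec_out)                     # per-step logits over the vocabulary

# Usage: 4 videos of 30 frames each, captions of 12 tokens.
model = VideoCaptioner()
logits = model(torch.randn(4, 30, 2048), torch.randint(0, 10000, (4, 12)))
print(logits.shape)  # torch.Size([4, 12, 10000])
```

The point of the sketch is only the shape of the problem these records share: a sequence of frames in, a sentence out.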

APA (6th Edition):

-3729-8456. (2017). Natural-language video description with deep recurrent neural networks. (Thesis). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/62987

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

-3729-8456. “Natural-language video description with deep recurrent neural networks.” 2017. Thesis, University of Texas – Austin. Accessed June 19, 2019. http://hdl.handle.net/2152/62987.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

-3729-8456. “Natural-language video description with deep recurrent neural networks.” 2017. Web. 19 Jun 2019.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Vancouver:

-3729-8456. Natural-language video description with deep recurrent neural networks. [Internet] [Thesis]. University of Texas – Austin; 2017. [cited 2019 Jun 19]. Available from: http://hdl.handle.net/2152/62987.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

-3729-8456. Natural-language video description with deep recurrent neural networks. [Thesis]. University of Texas – Austin; 2017. Available from: http://hdl.handle.net/2152/62987

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Not specified: Masters Thesis or Doctoral Dissertation


Rochester Institute of Technology

2. Nguyen, Thang Huy. Automatic Video Captioning using Deep Neural Network.

Degree: MS, Computer Engineering, 2017, Rochester Institute of Technology

Video understanding has become increasingly important as surveillance, social, and informational videos weave themselves into our everyday lives. Video captioning offers a simple way…

Subjects/Keywords: Convolutional neural network; Deep learning; Recurrent neural network; Video captioning

APA (6th Edition):

Nguyen, T. H. (2017). Automatic Video Captioning using Deep Neural Network. (Masters Thesis). Rochester Institute of Technology. Retrieved from https://scholarworks.rit.edu/theses/9516

Chicago Manual of Style (16th Edition):

Nguyen, Thang Huy. “Automatic Video Captioning using Deep Neural Network.” 2017. Masters Thesis, Rochester Institute of Technology. Accessed June 19, 2019. https://scholarworks.rit.edu/theses/9516.

MLA Handbook (7th Edition):

Nguyen, Thang Huy. “Automatic Video Captioning using Deep Neural Network.” 2017. Web. 19 Jun 2019.

Vancouver:

Nguyen TH. Automatic Video Captioning using Deep Neural Network. [Internet] [Masters thesis]. Rochester Institute of Technology; 2017. [cited 2019 Jun 19]. Available from: https://scholarworks.rit.edu/theses/9516.

Council of Science Editors:

Nguyen TH. Automatic Video Captioning using Deep Neural Network. [Masters Thesis]. Rochester Institute of Technology; 2017. Available from: https://scholarworks.rit.edu/theses/9516


University of Rochester

3. Li, Yuncheng. Weakly supervised learning from noisy data: from practice to theory.

Degree: PhD, 2018, University of Rochester

This is an era about users, for example, web search engine personalized reranking, digital advertisement targeting and various recommendation systems. By engaging with various online…

Subjects/Keywords: Multi-label classification; Fashion outfit mining; Video captioning; Haze level prediction; Knowledge distillation

APA (6th Edition):

Li, Y. (2018). Weakly supervised learning from noisy data: from practice to theory. (Doctoral Dissertation). University of Rochester. Retrieved from http://hdl.handle.net/1802/33330

Chicago Manual of Style (16th Edition):

Li, Yuncheng. “Weakly supervised learning from noisy data: from practice to theory.” 2018. Doctoral Dissertation, University of Rochester. Accessed June 19, 2019. http://hdl.handle.net/1802/33330.

MLA Handbook (7th Edition):

Li, Yuncheng. “Weakly supervised learning from noisy data: from practice to theory.” 2018. Web. 19 Jun 2019.

Vancouver:

Li Y. Weakly supervised learning from noisy data: from practice to theory. [Internet] [Doctoral dissertation]. University of Rochester; 2018. [cited 2019 Jun 19]. Available from: http://hdl.handle.net/1802/33330.

Council of Science Editors:

Li Y. Weakly supervised learning from noisy data: from practice to theory. [Doctoral Dissertation]. University of Rochester; 2018. Available from: http://hdl.handle.net/1802/33330


Boston University

4. Xu, Huijuan. Vision and language understanding with localized evidence.

Degree: PhD, Computer Science, 2018, Boston University

Enabling machines to solve computer vision tasks with natural language components can greatly improve human interaction with computers. In this thesis, we address vision and…

Subjects/Keywords: Computer science; Dense video captioning; Temporal activity detection; Text-to-clip retrieval; Visual question answering

APA (6th Edition):

Xu, H. (2018). Vision and language understanding with localized evidence. (Doctoral Dissertation). Boston University. Retrieved from http://hdl.handle.net/2144/34790

Chicago Manual of Style (16th Edition):

Xu, Huijuan. “Vision and language understanding with localized evidence.” 2018. Doctoral Dissertation, Boston University. Accessed June 19, 2019. http://hdl.handle.net/2144/34790.

MLA Handbook (7th Edition):

Xu, Huijuan. “Vision and language understanding with localized evidence.” 2018. Web. 19 Jun 2019.

Vancouver:

Xu H. Vision and language understanding with localized evidence. [Internet] [Doctoral dissertation]. Boston University; 2018. [cited 2019 Jun 19]. Available from: http://hdl.handle.net/2144/34790.

Council of Science Editors:

Xu H. Vision and language understanding with localized evidence. [Doctoral Dissertation]. Boston University; 2018. Available from: http://hdl.handle.net/2144/34790


The Ohio State University

5. Nina, Oliver A. A Multitask Learning Encoder-N-Decoder Framework for Movie and Video Description.

Degree: PhD, Electrical and Computer Engineering, 2018, The Ohio State University

Learning visual feature representations for video analysis is non-trivial and requires a large amount of training samples and a proper generalization framework. Many of the…

Subjects/Keywords: Computer Science; Computer Engineering; Electrical Engineering; Multitask, Video Captioning, Video Description, Improved Dropout, CLSTM, Simplified LSTMs, CRNN for Handwriting Recognition

APA (6th Edition):

Nina, O. A. (2018). A Multitask Learning Encoder-N-Decoder Framework for Movie and Video Description. (Doctoral Dissertation). The Ohio State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=osu1531996548147165

Chicago Manual of Style (16th Edition):

Nina, Oliver A. “A Multitask Learning Encoder-N-Decoder Framework for Movie and Video Description.” 2018. Doctoral Dissertation, The Ohio State University. Accessed June 19, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1531996548147165.

MLA Handbook (7th Edition):

Nina, Oliver A. “A Multitask Learning Encoder-N-Decoder Framework for Movie and Video Description.” 2018. Web. 19 Jun 2019.

Vancouver:

Nina OA. A Multitask Learning Encoder-N-Decoder Framework for Movie and Video Description. [Internet] [Doctoral dissertation]. The Ohio State University; 2018. [cited 2019 Jun 19]. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1531996548147165.

Council of Science Editors:

Nina OA. A Multitask Learning Encoder-N-Decoder Framework for Movie and Video Description. [Doctoral Dissertation]. The Ohio State University; 2018. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1531996548147165


Ryerson University

6. Irvin, Casey. Captioning Prosody: Experience As A Basis For Typographic Representations Of How Things Are Said.

Degree: 2012, Ryerson University

This project explores a potential framework for expressing prosody in typefaces used for captioning video. The work employs C. S. Peirce’s triadic form of the…

Subjects/Keywords: Video recordings for the hearing impaired; Hearing impaired  – Services for; Closed captioning; Layout (Printing); Type and type-founding; Visual communication

APA (6th Edition):

Irvin, C. (2012). Captioning Prosody: Experience As A Basis For Typographic Representations Of How Things Are Said. (Thesis). Ryerson University. Retrieved from https://digital.library.ryerson.ca/islandora/object/RULA%3A1655

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Irvin, Casey. “Captioning Prosody: Experience As A Basis For Typographic Representations Of How Things Are Said.” 2012. Thesis, Ryerson University. Accessed June 19, 2019. https://digital.library.ryerson.ca/islandora/object/RULA%3A1655.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Irvin, Casey. “Captioning Prosody: Experience As A Basis For Typographic Representations Of How Things Are Said.” 2012. Web. 19 Jun 2019.

Vancouver:

Irvin C. Captioning Prosody: Experience As A Basis For Typographic Representations Of How Things Are Said. [Internet] [Thesis]. Ryerson University; 2012. [cited 2019 Jun 19]. Available from: https://digital.library.ryerson.ca/islandora/object/RULA%3A1655.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Irvin C. Captioning Prosody: Experience As A Basis For Typographic Representations Of How Things Are Said. [Thesis]. Ryerson University; 2012. Available from: https://digital.library.ryerson.ca/islandora/object/RULA%3A1655

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


George Mason University

7. Evmenova, Anna S. Lights! Camera! Captions!: The Effects of Picture and/or Word Captioning Adaptations, Alternative Narration, and Interactive Features on Video Comprehension by Students with Intellectual Disabilities.

Degree: 2008, George Mason University

This rigorous single-subject research study investigated the effects of alternative narration, highlighted text, picture/word-based captions, and interactive video searching features for improving comprehension of non-fiction…

Subjects/Keywords: interactive video; captioning adaptations; students with disabilities; academic instruction; intellectual disabilities; comprehension

APA (6th Edition):

Evmenova, A. S. (2008). Lights! Camera! Captions!: The Effects of Picture and/or Word Captioning Adaptations, Alternative Narration, and Interactive Features on Video Comprehension by Students with Intellectual Disabilities . (Thesis). George Mason University. Retrieved from http://hdl.handle.net/1920/3071

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Evmenova, Anna S. “Lights! Camera! Captions!: The Effects of Picture and/or Word Captioning Adaptations, Alternative Narration, and Interactive Features on Video Comprehension by Students with Intellectual Disabilities .” 2008. Thesis, George Mason University. Accessed June 19, 2019. http://hdl.handle.net/1920/3071.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Evmenova, Anna S. “Lights! Camera! Captions!: The Effects of Picture and/or Word Captioning Adaptations, Alternative Narration, and Interactive Features on Video Comprehension by Students with Intellectual Disabilities .” 2008. Web. 19 Jun 2019.

Vancouver:

Evmenova AS. Lights! Camera! Captions!: The Effects of Picture and/or Word Captioning Adaptations, Alternative Narration, and Interactive Features on Video Comprehension by Students with Intellectual Disabilities . [Internet] [Thesis]. George Mason University; 2008. [cited 2019 Jun 19]. Available from: http://hdl.handle.net/1920/3071.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Evmenova AS. Lights! Camera! Captions!: The Effects of Picture and/or Word Captioning Adaptations, Alternative Narration, and Interactive Features on Video Comprehension by Students with Intellectual Disabilities . [Thesis]. George Mason University; 2008. Available from: http://hdl.handle.net/1920/3071

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Université de Montréal

8. Yao, Li. Learning visual representations with neural networks for video captioning and image generation.

Degree: 2018, Université de Montréal

Subjects/Keywords: neural networks; representation learning; video captioning; unsupervised learning; supervised learning; visual representation; réseaux de neurones; apprentissage de représentations; description naturelle de vidéos; apprentissage supervisé; apprentissage non-supervisé; représentation visuelle

APA (6th Edition):

Yao, L. (2018). Learning visual representations with neural networks for video captioning and image generation . (Thesis). Université de Montréal. Retrieved from http://hdl.handle.net/1866/20502

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Yao, Li. “Learning visual representations with neural networks for video captioning and image generation .” 2018. Thesis, Université de Montréal. Accessed June 19, 2019. http://hdl.handle.net/1866/20502.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Yao, Li. “Learning visual representations with neural networks for video captioning and image generation .” 2018. Web. 19 Jun 2019.

Vancouver:

Yao L. Learning visual representations with neural networks for video captioning and image generation . [Internet] [Thesis]. Université de Montréal; 2018. [cited 2019 Jun 19]. Available from: http://hdl.handle.net/1866/20502.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Yao L. Learning visual representations with neural networks for video captioning and image generation . [Thesis]. Université de Montréal; 2018. Available from: http://hdl.handle.net/1866/20502

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
