
You searched for +publisher:"University of Texas – Austin" +contributor:("Grauman, Kristen Lorraine, 1979-"). Showing records 1 – 11 of 11 total matches.


1. Kim, Jaechul. Region detection and matching for object recognition.

Degree: PhD, Computer Science, 2013, University of Texas – Austin

 In this thesis, I explore region detection and consider its impact on image matching for exemplar-based object recognition. Detecting regions is important to provide semantically… (more)

Subjects/Keywords: Computer vision; Object recognition; Feature detection; Segmentation; Image matching; Shape


APA (6th Edition):

Kim, J. (2013). Region detection and matching for object recognition. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/21261

Chicago Manual of Style (16th Edition):

Kim, Jaechul. “Region detection and matching for object recognition.” 2013. Doctoral Dissertation, University of Texas – Austin. Accessed August 15, 2020. http://hdl.handle.net/2152/21261.

MLA Handbook (7th Edition):

Kim, Jaechul. “Region detection and matching for object recognition.” 2013. Web. 15 Aug 2020.

Vancouver:

Kim J. Region detection and matching for object recognition. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2013. [cited 2020 Aug 15]. Available from: http://hdl.handle.net/2152/21261.

Council of Science Editors:

Kim J. Region detection and matching for object recognition. [Doctoral Dissertation]. University of Texas – Austin; 2013. Available from: http://hdl.handle.net/2152/21261

2. Hwang, Sung Ju. Discriminative object categorization with external semantic knowledge.

Degree: PhD, Computer Science, 2013, University of Texas – Austin

 Visual object category recognition is one of the most challenging problems in computer vision. Even assuming that we can obtain a near-perfect instance level representation… (more)

Subjects/Keywords: Computer vision; Machine learning; Object categorization; Object recognition; Feature learning; Metric learning; Multitask learning; Multiple kernel learning; Embedding; Manifold learning; Regularization method; Structured sparsity; Structured regularization; Hierarchical model


APA (6th Edition):

Hwang, S. J. (2013). Discriminative object categorization with external semantic knowledge. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/21320

Chicago Manual of Style (16th Edition):

Hwang, Sung Ju. “Discriminative object categorization with external semantic knowledge.” 2013. Doctoral Dissertation, University of Texas – Austin. Accessed August 15, 2020. http://hdl.handle.net/2152/21320.

MLA Handbook (7th Edition):

Hwang, Sung Ju. “Discriminative object categorization with external semantic knowledge.” 2013. Web. 15 Aug 2020.

Vancouver:

Hwang SJ. Discriminative object categorization with external semantic knowledge. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2013. [cited 2020 Aug 15]. Available from: http://hdl.handle.net/2152/21320.

Council of Science Editors:

Hwang SJ. Discriminative object categorization with external semantic knowledge. [Doctoral Dissertation]. University of Texas – Austin; 2013. Available from: http://hdl.handle.net/2152/21320

3. Bandla, Sunil. Active learning of an action detector on untrimmed videos.

Degree: MS in Computer Sciences, Computer Science, 2013, University of Texas – Austin

 Collecting and annotating videos of realistic human actions is tedious, yet critical for training action recognition systems. We propose a method to actively request the… (more)

Subjects/Keywords: Computer vision; Action detection; Active learning


APA (6th Edition):

Bandla, S. (2013). Active learning of an action detector on untrimmed videos. (Masters Thesis). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/25260

Chicago Manual of Style (16th Edition):

Bandla, Sunil. “Active learning of an action detector on untrimmed videos.” 2013. Masters Thesis, University of Texas – Austin. Accessed August 15, 2020. http://hdl.handle.net/2152/25260.

MLA Handbook (7th Edition):

Bandla, Sunil. “Active learning of an action detector on untrimmed videos.” 2013. Web. 15 Aug 2020.

Vancouver:

Bandla S. Active learning of an action detector on untrimmed videos. [Internet] [Masters thesis]. University of Texas – Austin; 2013. [cited 2020 Aug 15]. Available from: http://hdl.handle.net/2152/25260.

Council of Science Editors:

Bandla S. Active learning of an action detector on untrimmed videos. [Masters Thesis]. University of Texas – Austin; 2013. Available from: http://hdl.handle.net/2152/25260

4. Kelle, Joshua Allen. Frugal Forests: learning a dynamic and cost sensitive feature extraction policy for anytime activity classification.

Degree: MS in Computer Sciences, Computer Science, 2018, University of Texas – Austin

 Many approaches to activity classification use supervised learning and so rely on extracting some form of features from the video. This feature extraction process can… (more)

Subjects/Keywords: Frugal Forest; Feature extraction; Activity recognition; Cost; Dynamic


APA (6th Edition):

Kelle, J. A. (2018). Frugal Forests: learning a dynamic and cost sensitive feature extraction policy for anytime activity classification. (Masters Thesis). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/68857

Chicago Manual of Style (16th Edition):

Kelle, Joshua Allen. “Frugal Forests: learning a dynamic and cost sensitive feature extraction policy for anytime activity classification.” 2018. Masters Thesis, University of Texas – Austin. Accessed August 15, 2020. http://hdl.handle.net/2152/68857.

MLA Handbook (7th Edition):

Kelle, Joshua Allen. “Frugal Forests: learning a dynamic and cost sensitive feature extraction policy for anytime activity classification.” 2018. Web. 15 Aug 2020.

Vancouver:

Kelle JA. Frugal Forests: learning a dynamic and cost sensitive feature extraction policy for anytime activity classification. [Internet] [Masters thesis]. University of Texas – Austin; 2018. [cited 2020 Aug 15]. Available from: http://hdl.handle.net/2152/68857.

Council of Science Editors:

Kelle JA. Frugal Forests: learning a dynamic and cost sensitive feature extraction policy for anytime activity classification. [Masters Thesis]. University of Texas – Austin; 2018. Available from: http://hdl.handle.net/2152/68857



5. Sheshadri, Aashish. A collaborative approach to IR evaluation.

Degree: MS in Computer Sciences, Computer Science, 2014, University of Texas – Austin

 In this thesis we investigate two main problems: 1) inferring consensus from disparate inputs to improve quality of crowd contributed data; and 2) developing a… (more)

Subjects/Keywords: Crowdsourcing; Evaluation; Information retrieval; Simulation


APA (6th Edition):

Sheshadri, A. (2014). A collaborative approach to IR evaluation. (Masters Thesis). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/25910

Chicago Manual of Style (16th Edition):

Sheshadri, Aashish. “A collaborative approach to IR evaluation.” 2014. Masters Thesis, University of Texas – Austin. Accessed August 15, 2020. http://hdl.handle.net/2152/25910.

MLA Handbook (7th Edition):

Sheshadri, Aashish. “A collaborative approach to IR evaluation.” 2014. Web. 15 Aug 2020.

Vancouver:

Sheshadri A. A collaborative approach to IR evaluation. [Internet] [Masters thesis]. University of Texas – Austin; 2014. [cited 2020 Aug 15]. Available from: http://hdl.handle.net/2152/25910.

Council of Science Editors:

Sheshadri A. A collaborative approach to IR evaluation. [Masters Thesis]. University of Texas – Austin; 2014. Available from: http://hdl.handle.net/2152/25910

6. -6888-3095. Embodied learning for visual recognition.

Degree: PhD, Electrical and Computer Engineering, 2017, University of Texas – Austin

 The field of visual recognition in recent years has come to rely on large expensively curated and manually labeled "bags of disembodied images". In the… (more)

Subjects/Keywords: Computer vision; Unsupervised learning; Embodied learning


APA (6th Edition):

-6888-3095. (2017). Embodied learning for visual recognition. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/63489

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Chicago Manual of Style (16th Edition):

-6888-3095. “Embodied learning for visual recognition.” 2017. Doctoral Dissertation, University of Texas – Austin. Accessed August 15, 2020. http://hdl.handle.net/2152/63489.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

MLA Handbook (7th Edition):

-6888-3095. “Embodied learning for visual recognition.” 2017. Web. 15 Aug 2020.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Vancouver:

-6888-3095. Embodied learning for visual recognition. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2017. [cited 2020 Aug 15]. Available from: http://hdl.handle.net/2152/63489.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Council of Science Editors:

-6888-3095. Embodied learning for visual recognition. [Doctoral Dissertation]. University of Texas – Austin; 2017. Available from: http://hdl.handle.net/2152/63489

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete



7. Chen, Chao-Yeh. Learning human activities and poses with interconnected data sources.

Degree: PhD, Computer Science, 2016, University of Texas – Austin

 Understanding human actions and poses in images or videos is a challenging problem in computer vision. There are different topics related to this problem such… (more)

Subjects/Keywords: Activity recognition; Activity detection


APA (6th Edition):

Chen, C. (2016). Learning human activities and poses with interconnected data sources. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/40260

Chicago Manual of Style (16th Edition):

Chen, Chao-Yeh. “Learning human activities and poses with interconnected data sources.” 2016. Doctoral Dissertation, University of Texas – Austin. Accessed August 15, 2020. http://hdl.handle.net/2152/40260.

MLA Handbook (7th Edition):

Chen, Chao-Yeh. “Learning human activities and poses with interconnected data sources.” 2016. Web. 15 Aug 2020.

Vancouver:

Chen C. Learning human activities and poses with interconnected data sources. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2016. [cited 2020 Aug 15]. Available from: http://hdl.handle.net/2152/40260.

Council of Science Editors:

Chen C. Learning human activities and poses with interconnected data sources. [Doctoral Dissertation]. University of Texas – Austin; 2016. Available from: http://hdl.handle.net/2152/40260

8. Jain, Suyog Dutt. Human machine collaboration for foreground segmentation in images and videos.

Degree: PhD, Computer Science, 2018, University of Texas – Austin

 Foreground segmentation is defined as the problem of generating pixel level foreground masks for all the objects in a given image or video. Accurate foreground… (more)

Subjects/Keywords: Computer vision; Crowdsourcing; Human machine collaboration; Image and video segmentation; Image segmentation; Video segmentation; Foreground segmentation; Object segmentation


APA (6th Edition):

Jain, S. D. (2018). Human machine collaboration for foreground segmentation in images and videos. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/63453

Chicago Manual of Style (16th Edition):

Jain, Suyog Dutt. “Human machine collaboration for foreground segmentation in images and videos.” 2018. Doctoral Dissertation, University of Texas – Austin. Accessed August 15, 2020. http://hdl.handle.net/2152/63453.

MLA Handbook (7th Edition):

Jain, Suyog Dutt. “Human machine collaboration for foreground segmentation in images and videos.” 2018. Web. 15 Aug 2020.

Vancouver:

Jain SD. Human machine collaboration for foreground segmentation in images and videos. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2018. [cited 2020 Aug 15]. Available from: http://hdl.handle.net/2152/63453.

Council of Science Editors:

Jain SD. Human machine collaboration for foreground segmentation in images and videos. [Doctoral Dissertation]. University of Texas – Austin; 2018. Available from: http://hdl.handle.net/2152/63453



9. Yu, Aron Yingbo. Fine-grained visual comparisons.

Degree: PhD, Electrical and Computer Engineering, 2019, University of Texas – Austin

 Beyond recognizing objects, a computer vision system ought to be able to compare them. A promising way to represent visual comparisons is through attributes, which… (more)

Subjects/Keywords: Computer vision; Visual search; Fine-grained; Ranking; Comparison; Attributes


APA (6th Edition):

Yu, A. Y. (2019). Fine-grained visual comparisons. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://dx.doi.org/10.26153/tsw/5810

Chicago Manual of Style (16th Edition):

Yu, Aron Yingbo. “Fine-grained visual comparisons.” 2019. Doctoral Dissertation, University of Texas – Austin. Accessed August 15, 2020. http://dx.doi.org/10.26153/tsw/5810.

MLA Handbook (7th Edition):

Yu, Aron Yingbo. “Fine-grained visual comparisons.” 2019. Web. 15 Aug 2020.

Vancouver:

Yu AY. Fine-grained visual comparisons. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2019. [cited 2020 Aug 15]. Available from: http://dx.doi.org/10.26153/tsw/5810.

Council of Science Editors:

Yu AY. Fine-grained visual comparisons. [Doctoral Dissertation]. University of Texas – Austin; 2019. Available from: http://dx.doi.org/10.26153/tsw/5810



10. -2711-6738. Learning for 360° video compression, recognition, and display.

Degree: PhD, Computer Science, 2019, University of Texas – Austin

 360° cameras are a core building block of the Virtual Reality (VR) and Augmented Reality (AR) technology that bridges the real and digital worlds. It… (more)

Subjects/Keywords: 360° video; Omnidirectional media; Video analysis


APA (6th Edition):

-2711-6738. (2019). Learning for 360° video compression, recognition, and display. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://dx.doi.org/10.26153/tsw/5848

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Chicago Manual of Style (16th Edition):

-2711-6738. “Learning for 360° video compression, recognition, and display.” 2019. Doctoral Dissertation, University of Texas – Austin. Accessed August 15, 2020. http://dx.doi.org/10.26153/tsw/5848.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

MLA Handbook (7th Edition):

-2711-6738. “Learning for 360° video compression, recognition, and display.” 2019. Web. 15 Aug 2020.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Vancouver:

-2711-6738. Learning for 360° video compression, recognition, and display. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2019. [cited 2020 Aug 15]. Available from: http://dx.doi.org/10.26153/tsw/5848.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Council of Science Editors:

-2711-6738. Learning for 360° video compression, recognition, and display. [Doctoral Dissertation]. University of Texas – Austin; 2019. Available from: http://dx.doi.org/10.26153/tsw/5848

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete



11. Xiong, Bo. Learning to compose photos and videos from passive cameras.

Degree: PhD, Computer Science, 2019, University of Texas – Austin

 Photo and video overload is well-known to most computer users. With cameras on mobile devices, it is all too easy to snap images and videos… (more)

Subjects/Keywords: Passive cameras; Video highlight detection; Snap point detection; Image segmentation; Video segmentation; Viewing panoramas


APA (6th Edition):

Xiong, B. (2019). Learning to compose photos and videos from passive cameras. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://dx.doi.org/10.26153/tsw/5847

Chicago Manual of Style (16th Edition):

Xiong, Bo. “Learning to compose photos and videos from passive cameras.” 2019. Doctoral Dissertation, University of Texas – Austin. Accessed August 15, 2020. http://dx.doi.org/10.26153/tsw/5847.

MLA Handbook (7th Edition):

Xiong, Bo. “Learning to compose photos and videos from passive cameras.” 2019. Web. 15 Aug 2020.

Vancouver:

Xiong B. Learning to compose photos and videos from passive cameras. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2019. [cited 2020 Aug 15]. Available from: http://dx.doi.org/10.26153/tsw/5847.

Council of Science Editors:

Xiong B. Learning to compose photos and videos from passive cameras. [Doctoral Dissertation]. University of Texas – Austin; 2019. Available from: http://dx.doi.org/10.26153/tsw/5847
