
You searched for +publisher:"University of Texas – Austin" +contributor:("Niekum, Scott"). Showing records 1 – 6 of 6 total matches.

No search limiters apply to these results.


University of Texas – Austin

1. -7411-0398. Data efficient reinforcement learning with off-policy and simulated data.

Degree: PhD, Computer Science, 2019, University of Texas – Austin

Learning from interaction with the environment – trying untested actions, observing successes and failures, and tying effects back to causes – is one of the… (more)

Subjects/Keywords: Artificial intelligence; Reinforcement learning; Robotics; Off-policy; Sim-to-real


APA (6th Edition):

-7411-0398. (2019). Data efficient reinforcement learning with off-policy and simulated data. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://dx.doi.org/10.26153/tsw/7716

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Chicago Manual of Style (16th Edition):

-7411-0398. “Data efficient reinforcement learning with off-policy and simulated data.” 2019. Doctoral Dissertation, University of Texas – Austin. Accessed September 22, 2020. http://dx.doi.org/10.26153/tsw/7716.

MLA Handbook (7th Edition):

-7411-0398. “Data efficient reinforcement learning with off-policy and simulated data.” 2019. Web. 22 Sep 2020.

Vancouver:

-7411-0398. Data efficient reinforcement learning with off-policy and simulated data. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2019. [cited 2020 Sep 22]. Available from: http://dx.doi.org/10.26153/tsw/7716.

Council of Science Editors:

-7411-0398. Data efficient reinforcement learning with off-policy and simulated data. [Doctoral Dissertation]. University of Texas – Austin; 2019. Available from: http://dx.doi.org/10.26153/tsw/7716



2. Xiong, Bo. Learning to compose photos and videos from passive cameras.

Degree: PhD, Computer Science, 2019, University of Texas – Austin

 Photo and video overload is well-known to most computer users. With cameras on mobile devices, it is all too easy to snap images and videos… (more)

Subjects/Keywords: Passive cameras; Video highlight detection; Snap point detection; Image segmentation; Video segmentation; Viewing panoramas


APA (6th Edition):

Xiong, B. (2019). Learning to compose photos and videos from passive cameras. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://dx.doi.org/10.26153/tsw/5847

Chicago Manual of Style (16th Edition):

Xiong, Bo. “Learning to compose photos and videos from passive cameras.” 2019. Doctoral Dissertation, University of Texas – Austin. Accessed September 22, 2020. http://dx.doi.org/10.26153/tsw/5847.

MLA Handbook (7th Edition):

Xiong, Bo. “Learning to compose photos and videos from passive cameras.” 2019. Web. 22 Sep 2020.

Vancouver:

Xiong B. Learning to compose photos and videos from passive cameras. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2019. [cited 2020 Sep 22]. Available from: http://dx.doi.org/10.26153/tsw/5847.

Council of Science Editors:

Xiong B. Learning to compose photos and videos from passive cameras. [Doctoral Dissertation]. University of Texas – Austin; 2019. Available from: http://dx.doi.org/10.26153/tsw/5847



3. -2711-6738. Learning for 360° video compression, recognition, and display.

Degree: PhD, Computer Science, 2019, University of Texas – Austin

 360° cameras are a core building block of the Virtual Reality (VR) and Augmented Reality (AR) technology that bridges the real and digital worlds. It… (more)

Subjects/Keywords: 360° video; Omnidirectional media; Video analysis


APA (6th Edition):

-2711-6738. (2019). Learning for 360° video compression, recognition, and display. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://dx.doi.org/10.26153/tsw/5848

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Chicago Manual of Style (16th Edition):

-2711-6738. “Learning for 360° video compression, recognition, and display.” 2019. Doctoral Dissertation, University of Texas – Austin. Accessed September 22, 2020. http://dx.doi.org/10.26153/tsw/5848.

MLA Handbook (7th Edition):

-2711-6738. “Learning for 360° video compression, recognition, and display.” 2019. Web. 22 Sep 2020.

Vancouver:

-2711-6738. Learning for 360° video compression, recognition, and display. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2019. [cited 2020 Sep 22]. Available from: http://dx.doi.org/10.26153/tsw/5848.

Council of Science Editors:

-2711-6738. Learning for 360° video compression, recognition, and display. [Doctoral Dissertation]. University of Texas – Austin; 2019. Available from: http://dx.doi.org/10.26153/tsw/5848


4. Liebman, Elad. Sequential decision making in artificial musical intelligence.

Degree: PhD, Computer Science, 2019, University of Texas – Austin

 Over the past 60 years, artificial intelligence has grown from a largely academic field of research to a ubiquitous array of tools and approaches used… (more)

Subjects/Keywords: Music informatics; Reinforcement learning; Artificial intelligence; Sequential decision-making


APA (6th Edition):

Liebman, E. (2019). Sequential decision making in artificial musical intelligence. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://dx.doi.org/10.26153/tsw/2858

Chicago Manual of Style (16th Edition):

Liebman, Elad. “Sequential decision making in artificial musical intelligence.” 2019. Doctoral Dissertation, University of Texas – Austin. Accessed September 22, 2020. http://dx.doi.org/10.26153/tsw/2858.

MLA Handbook (7th Edition):

Liebman, Elad. “Sequential decision making in artificial musical intelligence.” 2019. Web. 22 Sep 2020.

Vancouver:

Liebman E. Sequential decision making in artificial musical intelligence. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2019. [cited 2020 Sep 22]. Available from: http://dx.doi.org/10.26153/tsw/2858.

Council of Science Editors:

Liebman E. Sequential decision making in artificial musical intelligence. [Doctoral Dissertation]. University of Texas – Austin; 2019. Available from: http://dx.doi.org/10.26153/tsw/2858

5. Rawal, Aditya, Ph. D. in computer science. Discovering gated recurrent neural network architectures.

Degree: PhD, Computer science, 2019, University of Texas – Austin

 Reinforcement Learning agent networks with memory are a key component in solving POMDP tasks. Gated recurrent networks such as those composed of Long Short-Term Memory… (more)

Subjects/Keywords: Recurrent neural networks; Neuroevolution; Network architecture search; Meta-learning; Reinforcement learning; Language modeling; Music modeling


APA (6th Edition):

Rawal, A. (2019). Discovering gated recurrent neural network architectures. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/72839

Chicago Manual of Style (16th Edition):

Rawal, Aditya. “Discovering gated recurrent neural network architectures.” 2019. Doctoral Dissertation, University of Texas – Austin. Accessed September 22, 2020. http://hdl.handle.net/2152/72839.

MLA Handbook (7th Edition):

Rawal, Aditya. “Discovering gated recurrent neural network architectures.” 2019. Web. 22 Sep 2020.

Vancouver:

Rawal A. Discovering gated recurrent neural network architectures. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2019. [cited 2020 Sep 22]. Available from: http://hdl.handle.net/2152/72839.

Council of Science Editors:

Rawal A. Discovering gated recurrent neural network architectures. [Doctoral Dissertation]. University of Texas – Austin; 2019. Available from: http://hdl.handle.net/2152/72839

6. Mahjourian, Reza. Hierarchical policy design for sample-efficient learning of robot table tennis through self-play.

Degree: PhD, Computer Science, 2019, University of Texas – Austin

 Training robots with physical bodies requires developing new methods and action representations that allow the learning agents to explore the space of policies efficiently. This… (more)

Subjects/Keywords: Robotics; Table tennis; Self-play; Reinforcement learning; Hierarchical policy


APA (6th Edition):

Mahjourian, R. (2019). Hierarchical policy design for sample-efficient learning of robot table tennis through self-play. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/72812

Chicago Manual of Style (16th Edition):

Mahjourian, Reza. “Hierarchical policy design for sample-efficient learning of robot table tennis through self-play.” 2019. Doctoral Dissertation, University of Texas – Austin. Accessed September 22, 2020. http://hdl.handle.net/2152/72812.

MLA Handbook (7th Edition):

Mahjourian, Reza. “Hierarchical policy design for sample-efficient learning of robot table tennis through self-play.” 2019. Web. 22 Sep 2020.

Vancouver:

Mahjourian R. Hierarchical policy design for sample-efficient learning of robot table tennis through self-play. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2019. [cited 2020 Sep 22]. Available from: http://hdl.handle.net/2152/72812.

Council of Science Editors:

Mahjourian R. Hierarchical policy design for sample-efficient learning of robot table tennis through self-play. [Doctoral Dissertation]. University of Texas – Austin; 2019. Available from: http://hdl.handle.net/2152/72812
