Advanced search options


You searched for +publisher:"Georgia Tech" +contributor:("Gordon, Geoff"). Showing records 1 – 2 of 2 total matches.



Georgia Tech

1. Cheng, Ching An. Efficient and principled robot learning: Theory and algorithms.

Degree: PhD, Interactive Computing, 2020, Georgia Tech

Roboticists have long envisioned fully automated robots that can operate reliably in unstructured environments. This is an exciting but extremely difficult problem; in order to succeed, robots must reason about sequential decisions and their consequences in the face of uncertainty. As a result, in practice, the engineering effort required to build reliable robotic systems is both demanding and expensive. This research aims to provide a set of techniques for efficient and principled robot learning. We approach this challenge from a theoretical perspective that more closely integrates analysis and practical needs. These theoretical principles are applied to design better algorithms in two important aspects of robot learning: policy optimization and development of structural policies. This research uses and extends online learning, optimization, and control theory, and is demonstrated in applications including reinforcement learning, imitation learning, and structural policy fusion. A shared feature across this research is the reciprocal interaction between the development of practical algorithms and the advancement of abstract analyses. Real-world challenges force the rethinking of proper theoretical formulations, which in turn leads to refined analyses and new algorithms that can rigorously leverage these insights to achieve better performance. Advisors/Committee Members: Boots, Byron (advisor), Gordon, Geoff (committee member), Hutchinson, Seth (committee member), Liu, Karen (committee member), Theodorou, Evangelos A. (committee member).
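The abstract frames policy optimization through the lens of online learning. As an illustrative sketch only — not the thesis's algorithm, and with all names ours — the textbook primitive such analyses build on is online gradient descent: play an iterate, observe the round's loss gradient, and step against it with a decaying step size.

```python
# Illustrative online gradient descent (OGD): the basic online-learning
# primitive; a 1/sqrt(t) step-size decay yields O(sqrt(T)) regret on
# convex losses.
import numpy as np

def online_gradient_descent(grad_fns, x0, step=0.5):
    """grad_fns: one gradient callback per round; returns all iterates played."""
    x = np.asarray(x0, dtype=float)
    iterates = [x.copy()]
    for t, grad in enumerate(grad_fns, start=1):
        x = x - (step / np.sqrt(t)) * grad(x)  # descent step against round t's gradient
        iterates.append(x.copy())
    return iterates

# Toy loss stream: the same quadratic f(x) = (x - 3)^2 every round.
grad_fns = [lambda x: 2.0 * (x - 3.0)] * 100
xs = online_gradient_descent(grad_fns, x0=[0.0])
print(xs[-1])  # iterates settle at the minimizer x = 3
```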

Subjects/Keywords: Online learning; Control theory; Robotics; Optimization; Reinforcement learning; Imitation learning



APA (6th Edition):

Cheng, C. A. (2020). Efficient and principled robot learning: Theory and algorithms. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/62733

Chicago Manual of Style (16th Edition):

Cheng, Ching An. “Efficient and principled robot learning: Theory and algorithms.” 2020. Doctoral Dissertation, Georgia Tech. Accessed March 05, 2021. http://hdl.handle.net/1853/62733.

MLA Handbook (7th Edition):

Cheng, Ching An. “Efficient and principled robot learning: Theory and algorithms.” 2020. Web. 05 Mar 2021.

Vancouver:

Cheng CA. Efficient and principled robot learning: Theory and algorithms. [Internet] [Doctoral dissertation]. Georgia Tech; 2020. [cited 2021 Mar 05]. Available from: http://hdl.handle.net/1853/62733.

Council of Science Editors:

Cheng CA. Efficient and principled robot learning: Theory and algorithms. [Doctoral Dissertation]. Georgia Tech; 2020. Available from: http://hdl.handle.net/1853/62733

2. Mac Dermed, Liam Charles. Value methods for efficiently solving stochastic games of complete and incomplete information.

Degree: PhD, Computer Science, 2013, Georgia Tech

Multi-agent reinforcement learning (MARL) poses the same planning problem as traditional reinforcement learning (RL): What actions over time should an agent take in order to maximize its rewards? MARL tackles a challenging set of problems that can be better understood by modeling them as having a relatively simple environment but with complex dynamics attributed to the presence of other agents who are also attempting to maximize their rewards. A great wealth of research has developed around specific subsets of this problem, most notably when the rewards for each agent are either the same or directly opposite each other. However, there has been relatively little progress made for the general problem. This thesis addresses this gap. Our goal is to tackle the most general, least restrictive class of MARL problems. These are general-sum, non-deterministic, infinite horizon, multi-agent sequential decision problems of complete and incomplete information. Towards this goal, we engage in two complementary endeavors: the creation of tractable models and the construction of efficient algorithms to solve these models. We tackle three well known models: stochastic games, decentralized partially observable Markov decision problems, and partially observable stochastic games. We also present a new fourth model, Markov games of incomplete information, to help solve the partially observable models. For stochastic games and decentralized partially observable Markov decision problems, we develop novel and efficient value iteration algorithms to solve for game theoretic solutions. We empirically evaluate these algorithms on a range of problems, including well known benchmarks and show that our value iteration algorithms perform better than current policy iteration algorithms. Finally, we argue that our approach is easily extendable to new models and solution concepts, thus providing a foundation for a new class of multi-agent value iteration algorithms.
Advisors/Committee Members: Isbell, Charles L. (advisor), Gordon, Geoff (committee member), Balcan, Maria-Florina (committee member), Weiss, Lora (committee member), Liu, C. Karen (committee member).
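The abstract builds on value iteration, generalized from single-agent MDPs to multi-agent stochastic games. As context only — this is the classical single-agent primitive the thesis extends, not the thesis's algorithm, and the toy MDP here is ours — a minimal value iteration sketch:

```python
# Classical value iteration on a small MDP: repeatedly apply the Bellman
# optimality backup V(s) <- max_a [ R(a,s) + gamma * sum_s' P(a,s,s') V(s') ]
# until the value function stops changing.
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """P[a, s, s'] transition probabilities, R[a, s] expected rewards.
    Returns the optimal value function V and a greedy policy (action per state)."""
    V = np.zeros(R.shape[1])
    while True:
        Q = R + gamma * (P @ V)        # Q[a, s]: value of taking action a in state s
        V_new = Q.max(axis=0)          # Bellman backup: best action per state
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

# Toy two-state, two-action MDP: action 0 stays put, action 1 switches states.
P = np.array([[[1.0, 0.0], [0.0, 1.0]],   # action 0: identity (stay)
              [[0.0, 1.0], [1.0, 0.0]]])  # action 1: swap states
R = np.array([[0.0, 1.0],                 # action 0 pays 1 only in state 1
              [0.5, 0.0]])                # action 1 pays 0.5 only in state 0
V, pi = value_iteration(P, R)
print(V, pi)  # optimal plan: switch out of state 0, then stay in state 1
```

The multi-agent generalization replaces the `max` backup with a game-theoretic operator (e.g., an equilibrium of the stage game), which is where the thesis's contributions lie.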

Subjects/Keywords: Multi-agent planning; Game theory; Reinforcement learning; Multiagent systems



APA (6th Edition):

Mac Dermed, L. C. (2013). Value methods for efficiently solving stochastic games of complete and incomplete information. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/50270

Chicago Manual of Style (16th Edition):

Mac Dermed, Liam Charles. “Value methods for efficiently solving stochastic games of complete and incomplete information.” 2013. Doctoral Dissertation, Georgia Tech. Accessed March 05, 2021. http://hdl.handle.net/1853/50270.

MLA Handbook (7th Edition):

Mac Dermed, Liam Charles. “Value methods for efficiently solving stochastic games of complete and incomplete information.” 2013. Web. 05 Mar 2021.

Vancouver:

Mac Dermed LC. Value methods for efficiently solving stochastic games of complete and incomplete information. [Internet] [Doctoral dissertation]. Georgia Tech; 2013. [cited 2021 Mar 05]. Available from: http://hdl.handle.net/1853/50270.

Council of Science Editors:

Mac Dermed LC. Value methods for efficiently solving stochastic games of complete and incomplete information. [Doctoral Dissertation]. Georgia Tech; 2013. Available from: http://hdl.handle.net/1853/50270
