
You searched for subject:(Deep RL). Showing records 1 – 4 of 4 total matches.

No search limiters apply to these results.


University of Texas – Austin

1. Hausknecht, Matthew John. Cooperation and communication in multiagent deep reinforcement learning.

Degree: PhD, Computer science, 2016, University of Texas – Austin

Reinforcement learning is the area of machine learning concerned with learning which actions to execute in an unknown environment in order to maximize cumulative reward.… (more)
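The abstract's definition — an agent choosing actions in an environment so as to maximize cumulative reward — can be sketched as a minimal agent–environment loop. Everything below (the toy environment, the two policies, `rollout`) is a hypothetical illustration, not code from the thesis:

```python
import random

def toy_environment(state, action):
    """Toy deterministic environment: reward 1.0 when the action matches
    the parity of the current state, 0.0 otherwise."""
    reward = 1.0 if action == state % 2 else 0.0
    next_state = state + 1
    return next_state, reward

def random_policy(state):
    # Baseline: act uniformly at random.
    return random.choice([0, 1])

def parity_policy(state):
    # The optimal action in this toy environment.
    return state % 2

def rollout(policy, steps=10):
    """Run the agent-environment loop and return the cumulative reward
    the agent is trying to maximize."""
    state, total_reward = 0, 0.0
    for _ in range(steps):
        action = policy(state)
        state, reward = toy_environment(state, action)
        total_reward += reward
    return total_reward
```

Under these assumptions the optimal policy earns reward 1.0 per step, while the random baseline averages half that — the gap a learning algorithm would close.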

Subjects/Keywords: Reinforcement learning; Deep learning; Multiagent learning; Cooperation; Communication; RoboCup; POMDP; Deep reinforcement learning; Deep RL

APA · Chicago · MLA · Vancouver · CSE

APA (6th Edition):

Hausknecht, M. J. (2016). Cooperation and communication in multiagent deep reinforcement learning. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/45681

Chicago Manual of Style (16th Edition):

Hausknecht, Matthew John. “Cooperation and communication in multiagent deep reinforcement learning.” 2016. Doctoral Dissertation, University of Texas – Austin. Accessed October 20, 2020. http://hdl.handle.net/2152/45681.

MLA Handbook (7th Edition):

Hausknecht, Matthew John. “Cooperation and communication in multiagent deep reinforcement learning.” 2016. Web. 20 Oct 2020.

Vancouver:

Hausknecht MJ. Cooperation and communication in multiagent deep reinforcement learning. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2016. [cited 2020 Oct 20]. Available from: http://hdl.handle.net/2152/45681.

Council of Science Editors:

Hausknecht MJ. Cooperation and communication in multiagent deep reinforcement learning. [Doctoral Dissertation]. University of Texas – Austin; 2016. Available from: http://hdl.handle.net/2152/45681


Delft University of Technology

2. Fris, Rein. The Landing of a Quadcopter on Inclined Surfaces using Reinforcement Learning.

Degree: 2020, Delft University of Technology

Deep Reinforcement Learning (DRL) enables us to design controllers for complex tasks with a deep learning approach. It allows us to design controllers that are… (more)

Subjects/Keywords: Reinforcement Learning (RL); Autonomous Control; Quadcopter; Deep Learning

APA · Chicago · MLA · Vancouver · CSE

APA (6th Edition):

Fris, R. (2020). The Landing of a Quadcopter on Inclined Surfaces using Reinforcement Learning. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:5b6fd0d1-5d18-4de7-878d-e22e4df45d3c

Chicago Manual of Style (16th Edition):

Fris, Rein. “The Landing of a Quadcopter on Inclined Surfaces using Reinforcement Learning.” 2020. Masters Thesis, Delft University of Technology. Accessed October 20, 2020. http://resolver.tudelft.nl/uuid:5b6fd0d1-5d18-4de7-878d-e22e4df45d3c.

MLA Handbook (7th Edition):

Fris, Rein. “The Landing of a Quadcopter on Inclined Surfaces using Reinforcement Learning.” 2020. Web. 20 Oct 2020.

Vancouver:

Fris R. The Landing of a Quadcopter on Inclined Surfaces using Reinforcement Learning. [Internet] [Masters thesis]. Delft University of Technology; 2020. [cited 2020 Oct 20]. Available from: http://resolver.tudelft.nl/uuid:5b6fd0d1-5d18-4de7-878d-e22e4df45d3c.

Council of Science Editors:

Fris R. The Landing of a Quadcopter on Inclined Surfaces using Reinforcement Learning. [Masters Thesis]. Delft University of Technology; 2020. Available from: http://resolver.tudelft.nl/uuid:5b6fd0d1-5d18-4de7-878d-e22e4df45d3c


Texas A&M University

3. Yoo, Jae Wook. Sensorimotor Aspects of Brain Function: Development, Internal Dynamics, and Tool Use.

Degree: PhD, Computer Science, 2018, Texas A&M University

 Learning through the sensorimotor loop is essential for intelligent agents. While the important role of sensorimotor learning has been studied, several important aspects of sensorimotor… (more)

Subjects/Keywords: Sensorimotor learning; Motor map; Internal dynamics; Tool use; Continuous action space; Neuroevolution; Deep RL

APA · Chicago · MLA · Vancouver · CSE

APA (6th Edition):

Yoo, J. W. (2018). Sensorimotor Aspects of Brain Function: Development, Internal Dynamics, and Tool Use. (Doctoral Dissertation). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/173627

Chicago Manual of Style (16th Edition):

Yoo, Jae Wook. “Sensorimotor Aspects of Brain Function: Development, Internal Dynamics, and Tool Use.” 2018. Doctoral Dissertation, Texas A&M University. Accessed October 20, 2020. http://hdl.handle.net/1969.1/173627.

MLA Handbook (7th Edition):

Yoo, Jae Wook. “Sensorimotor Aspects of Brain Function: Development, Internal Dynamics, and Tool Use.” 2018. Web. 20 Oct 2020.

Vancouver:

Yoo JW. Sensorimotor Aspects of Brain Function: Development, Internal Dynamics, and Tool Use. [Internet] [Doctoral dissertation]. Texas A&M University; 2018. [cited 2020 Oct 20]. Available from: http://hdl.handle.net/1969.1/173627.

Council of Science Editors:

Yoo JW. Sensorimotor Aspects of Brain Function: Development, Internal Dynamics, and Tool Use. [Doctoral Dissertation]. Texas A&M University; 2018. Available from: http://hdl.handle.net/1969.1/173627

4. Marcus, Elwin. Simulating market maker behaviour using Deep Reinforcement Learning to understand market microstructure.

Degree: Electrical Engineering and Computer Science (EECS), 2018, KTH

Market microstructure studies the process of exchanging assets under explicit trading rules. With algorithmic trading and high-frequency trading, modern financial markets have seen profound changes in… (more)

Subjects/Keywords: Deep Reinforcement Learning; Machine Learning; Market Microstructure; Market Maker; Financial Agent; Agent Based Modelling; Financial Artificial Markets; Complex Systems; Algorithmic Trading; Tensorforce; keras-RL; PPO; DQN; Dealer Market; Limit Order book; Computer Sciences; Datavetenskap (datalogi)


APA · Chicago · MLA · Vancouver · CSE

APA (6th Edition):

Marcus, E. (2018). Simulating market maker behaviour using Deep Reinforcement Learning to understand market microstructure. (Thesis). KTH. Retrieved from http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-240682

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Marcus, Elwin. “Simulating market maker behaviour using Deep Reinforcement Learning to understand market microstructure.” 2018. Thesis, KTH. Accessed October 20, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-240682.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Marcus, Elwin. “Simulating market maker behaviour using Deep Reinforcement Learning to understand market microstructure.” 2018. Web. 20 Oct 2020.

Vancouver:

Marcus E. Simulating market maker behaviour using Deep Reinforcement Learning to understand market microstructure. [Internet] [Thesis]. KTH; 2018. [cited 2020 Oct 20]. Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-240682.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Marcus E. Simulating market maker behaviour using Deep Reinforcement Learning to understand market microstructure. [Thesis]. KTH; 2018. Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-240682

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
