You searched for subject:(Modular reinforcement learning). Showing records 1 – 3 of 3 total matches.


1. Heikkilä, Filip. Autonomous Mapping of Unknown Environments Using a UAV.

Degree: Chalmers tekniska högskola / Institutionen för matematiska vetenskaper, 2020, Chalmers University of Technology

Automatic object search in a bounded area can be accomplished using camera-carrying autonomous aerial robots. The system requires several functionalities to solve the task in a safe and efficient way, including finding a navigation and exploration strategy, creating a representation of the surrounding environment, and detecting objects visually. Here we create a modular framework and provide solutions to the different subproblems in a simulated environment. The navigation and exploration subproblems are tackled using deep reinforcement learning (DRL). Object and obstacle detection is approached using methods based on the scale-invariant feature transform and the pinhole camera model. Information gathered by the system is used to build a 3D voxel map. We further show that the object detection system is capable of detecting certain target objects with high recall. The DRL approach is able to achieve navigation that avoids collisions to a high degree, but the performance of the exploration policy is suboptimal. Due to the modular character of the solution, further improvements to each subsystem can easily be developed independently.
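The record itself contains no code, but the detection-to-map step the abstract describes (a pinhole camera model feeding detections into a 3D voxel map) can be sketched briefly. This is an illustrative assumption of how such a pipeline might look, not code from the thesis; the names VoxelMap, back_project, insert_detection and the intrinsics fx, fy, cx, cy are invented for the example.

```python
# Illustrative sketch (not from the thesis): back-projecting a detected
# image point into a 3D voxel map using the pinhole camera model.
import numpy as np

class VoxelMap:
    """Minimal occupancy voxel map keyed by integer grid indices."""
    def __init__(self, voxel_size=0.2):
        self.voxel_size = voxel_size      # voxel edge length in metres (assumed)
        self.occupied = set()             # set of occupied (i, j, k) grid indices

    def mark(self, point_world):
        idx = tuple(np.floor(point_world / self.voxel_size).astype(int))
        self.occupied.add(idx)

def back_project(u, v, depth, fx, fy, cx, cy):
    """Pinhole model: pixel (u, v) at a known depth -> 3D point in the camera frame."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

def insert_detection(vmap, u, v, depth, R_wc, t_wc, fx, fy, cx, cy):
    """Transform a camera-frame detection into the world frame and store it in the map."""
    p_cam = back_project(u, v, depth, fx, fy, cx, cy)
    p_world = R_wc @ p_cam + t_wc         # camera-to-world pose from the UAV's state estimate
    vmap.mark(p_world)
    return p_world

# Example usage with made-up intrinsics and an identity pose.
vmap = VoxelMap(voxel_size=0.2)
insert_detection(vmap, u=320, v=240, depth=3.0,
                 R_wc=np.eye(3), t_wc=np.zeros(3),
                 fx=500.0, fy=500.0, cx=320.0, cy=240.0)
```

A real system would fuse many detections and account for pose uncertainty; the sketch only shows the geometric core of the mapping step.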

Subjects/Keywords: Deep reinforcement learning; autonomous exploration and navigation; feature extraction; object detection; voxel map; UAV; modular framework.



APA (6th Edition):

Heikkilä, F. (2020). Autonomous Mapping of Unknown Environments Using a UAV. (Thesis). Chalmers University of Technology. Retrieved from http://hdl.handle.net/20.500.12380/300894

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Heikkilä, Filip. “Autonomous Mapping of Unknown Environments Using a UAV.” 2020. Thesis, Chalmers University of Technology. Accessed November 28, 2020. http://hdl.handle.net/20.500.12380/300894.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Heikkilä, Filip. “Autonomous Mapping of Unknown Environments Using a UAV.” 2020. Web. 28 Nov 2020.

Vancouver:

Heikkilä F. Autonomous Mapping of Unknown Environments Using a UAV. [Internet] [Thesis]. Chalmers University of Technology; 2020. [cited 2020 Nov 28]. Available from: http://hdl.handle.net/20.500.12380/300894.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Heikkilä F. Autonomous Mapping of Unknown Environments Using a UAV. [Thesis]. Chalmers University of Technology; 2020. Available from: http://hdl.handle.net/20.500.12380/300894

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

2. Zhang, Ruohan. Action selection in modular reinforcement learning.

Degree: MS in Computer Sciences, Computer Sciences, 2014, University of Texas – Austin

Modular reinforcement learning is an approach to resolving the curse-of-dimensionality problem in traditional reinforcement learning. We design and implement a modular reinforcement learning algorithm based on three major components: Markov decision process decomposition, module training, and global action selection. We define and formalize the module class and module instance concepts in the decomposition step. Under our decomposition framework, we train each module efficiently using the SARSA(λ) algorithm. We then design, implement, test, and compare three action selection algorithms based on different heuristics: Module Combination, Module Selection, and Module Voting. For the last two algorithms, we propose a method to calculate module weights efficiently using the standard deviation of each module's Q-values. We show that the Module Combination and Module Voting algorithms produce satisfactory performance in our test domain. Advisors/Committee Members: Ballard, Dana H. (Dana Harry), 1946- (advisor).
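The abstract names three global action-selection heuristics and a standard-deviation-based module weighting, but gives no formulas. The sketch below is an assumed reading of those descriptions, not the thesis's definitions; the normalisation, tie-breaking, and function names are guesses for illustration.

```python
# Illustrative sketch of the three global action-selection heuristics named in the
# abstract. Each module i supplies Q_i(s, a) over a shared action set; module weights
# are taken (as the abstract suggests) from the standard deviation of each module's
# Q-values in the current state. Exact formulas are assumptions, not the thesis's.
import numpy as np

def module_weights(q_values):
    """q_values: array of shape (n_modules, n_actions) for the current state."""
    std = q_values.std(axis=1)            # spread of each module's preferences
    return std / (std.sum() + 1e-12)      # normalise to sum to 1 (assumed)

def module_combination(q_values):
    """Sum Q-values across modules and act greedily on the combined estimate."""
    return int(np.argmax(q_values.sum(axis=0)))

def module_selection(q_values):
    """Let the single highest-weighted module choose the action."""
    w = module_weights(q_values)
    return int(np.argmax(q_values[np.argmax(w)]))

def module_voting(q_values):
    """Each module casts a weighted vote for its own greedy action."""
    w = module_weights(q_values)
    votes = np.zeros(q_values.shape[1])
    for i in range(q_values.shape[0]):
        votes[np.argmax(q_values[i])] += w[i]
    return int(np.argmax(votes))

# Example: two modules, three actions.
q = np.array([[1.0, 0.2, 0.1],    # module 0 strongly prefers action 0
              [0.3, 0.4, 0.5]])   # module 1 mildly prefers action 2
print(module_combination(q), module_selection(q), module_voting(q))
```

In this toy example the module with the larger Q-value spread carries more weight, so all three heuristics agree on action 0; they diverge when modules disagree more evenly.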

Subjects/Keywords: Modular reinforcement learning; Action selection; Module weight




APA (6th Edition):

Zhang, R. (2014). Action selection in modular reinforcement learning. (Masters Thesis). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/25916

Chicago Manual of Style (16th Edition):

Zhang, Ruohan. “Action selection in modular reinforcement learning.” 2014. Masters Thesis, University of Texas – Austin. Accessed November 28, 2020. http://hdl.handle.net/2152/25916.

MLA Handbook (7th Edition):

Zhang, Ruohan. “Action selection in modular reinforcement learning.” 2014. Web. 28 Nov 2020.

Vancouver:

Zhang R. Action selection in modular reinforcement learning. [Internet] [Masters thesis]. University of Texas – Austin; 2014. [cited 2020 Nov 28]. Available from: http://hdl.handle.net/2152/25916.

Council of Science Editors:

Zhang R. Action selection in modular reinforcement learning. [Masters Thesis]. University of Texas – Austin; 2014. Available from: http://hdl.handle.net/2152/25916

3. Simpkins, Christopher Lee. Integrating reinforcement learning into a programming language.

Degree: PhD, Computer Science, 2017, Georgia Tech

Reinforcement learning is a promising solution to the intelligent agent problem: given the state of the world, which action should an agent take to maximize goal attainment? However, reinforcement learning algorithms are slow to converge for larger state spaces, and using reinforcement learning in agent programs requires detailed knowledge of reinforcement learning algorithms. One approach to solving the curse of dimensionality in reinforcement learning is decomposition. Modular reinforcement learning, as it is called in the literature, decomposes an agent into concurrently running reinforcement learning modules that each learn a "selfish" solution to a subset of the original problem. For example, a bunny agent might be decomposed into a module that avoids predators and a module that finds food. Current approaches to modular reinforcement learning support decomposition but, because the reward scales of the modules must be comparable, they are not composable: a module written for one agent cannot be reused in another agent without modifying its reward function. This dissertation makes two contributions: (1) a command arbitration algorithm for modular reinforcement learning that enables composability by decoupling the reward scales of reinforcement learning modules, and (2) a Scala-embedded domain-specific language, AFABL (A Friendly Adaptive Behavior Language), that integrates modular reinforcement learning in a way that allows programmers to use reinforcement learning without knowing much about reinforcement learning algorithms. We empirically demonstrate the reward comparability problem and show that our command arbitration algorithm solves it, and we present the results of a study in which programmers used AFABL and traditional programming to write a simple agent and adapt it to a new domain, demonstrating the promise of language-integrated reinforcement learning for practical agent software engineering. Advisors/Committee Members: Isbell, Charles L. (advisor), Bodner, Douglas (committee member), Riedl, Mark (committee member), Rugaber, Spencer (committee member), Thomaz, Andrea (committee member).
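The dissertation's command arbitration algorithm is not reproduced in this record, so the sketch below is not that algorithm. It only illustrates the underlying idea of decoupling reward scales: arbitrating over each module's preference ranking of actions rather than its raw, scale-dependent Q-values. All names and formulas here are assumptions for illustration.

```python
# Illustrative sketch only: NOT the dissertation's command arbitration algorithm.
# It shows one simple way arbitration can be made insensitive to each module's
# reward scale: compare actions by each module's preference *ranking* of its own
# Q-values instead of by the raw Q-values themselves.
import numpy as np

def rank_preferences(q_row):
    """Map a module's Q-values to ranks in [0, 1]; invariant to rescaling its rewards."""
    ranks = np.argsort(np.argsort(q_row))     # 0 = least preferred action
    return ranks / max(len(q_row) - 1, 1)

def scale_free_arbitration(q_values):
    """q_values: (n_modules, n_actions). Pick the action with the best summed rank."""
    ranks = np.vstack([rank_preferences(row) for row in q_values])
    return int(np.argmax(ranks.sum(axis=0)))

# Two modules whose rewards live on very different scales: summing raw Q-values
# ([1000.1, 900.2, 950.9]) would let module 0 dominate and pick action 0.
q = np.array([[1000.0, 900.0, 950.0],   # large-scale rewards, prefers action 0
              [0.1,    0.2,   0.9]])    # small-scale rewards, prefers action 2
print(scale_free_arbitration(q))         # rank-based arbitration picks action 2
```

Because rankings are unchanged by rescaling any one module's rewards, modules written against different reward scales can be combined without editing their reward functions, which is the composability property the abstract describes.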

Subjects/Keywords: Machine learning; Reinforcement learning; Modular reinforcement learning; Programming languages; Domain specific languages; Software engineering; Artificial intelligence; Intelligent agents




APA (6th Edition):

Simpkins, C. L. (2017). Integrating reinforcement learning into a programming language. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/58683

Chicago Manual of Style (16th Edition):

Simpkins, Christopher Lee. “Integrating reinforcement learning into a programming language.” 2017. Doctoral Dissertation, Georgia Tech. Accessed November 28, 2020. http://hdl.handle.net/1853/58683.

MLA Handbook (7th Edition):

Simpkins, Christopher Lee. “Integrating reinforcement learning into a programming language.” 2017. Web. 28 Nov 2020.

Vancouver:

Simpkins CL. Integrating reinforcement learning into a programming language. [Internet] [Doctoral dissertation]. Georgia Tech; 2017. [cited 2020 Nov 28]. Available from: http://hdl.handle.net/1853/58683.

Council of Science Editors:

Simpkins CL. Integrating reinforcement learning into a programming language. [Doctoral Dissertation]. Georgia Tech; 2017. Available from: http://hdl.handle.net/1853/58683
