You searched for subject:(Reinforcement learning). Showing records 1 – 30 of 943 total matches.

Oregon State University

1. Proper, Scott. Scaling multiagent reinforcement learning.

Degree: PhD, Computer Science, 2009, Oregon State University

Reinforcement learning in real-world domains suffers from three curses of dimensionality: explosions in state and action spaces, and high stochasticity or "outcome space" explosion. Multiagent… (more)
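
A quick back-of-the-envelope sketch of the scaling problem this abstract names (illustrative sizes, not the thesis's method): with n agents that each observe |S| local states and choose among |A| actions, a naive joint Q-table needs |S|^n × |A|^n entries.

    # Joint Q-table size for n agents, each with |S| states and |A| actions
    # (hypothetical numbers, for illustration only).
    S, A = 100, 5
    for n in (1, 2, 3, 4):
        print(f"{n} agent(s): {(S ** n) * (A ** n):.2e} joint Q-table entries")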

Subjects/Keywords: Reinforcement learning

APA (6th Edition):

Proper, S. (2009). Scaling multiagent reinforcement learning. (Doctoral Dissertation). Oregon State University. Retrieved from http://hdl.handle.net/1957/13662

Chicago Manual of Style (16th Edition):

Proper, Scott. “Scaling multiagent reinforcement learning.” 2009. Doctoral Dissertation, Oregon State University. Accessed June 17, 2019. http://hdl.handle.net/1957/13662.

MLA Handbook (7th Edition):

Proper, Scott. “Scaling multiagent reinforcement learning.” 2009. Web. 17 Jun 2019.

Vancouver:

Proper S. Scaling multiagent reinforcement learning. [Internet] [Doctoral dissertation]. Oregon State University; 2009. [cited 2019 Jun 17]. Available from: http://hdl.handle.net/1957/13662.

Council of Science Editors:

Proper S. Scaling multiagent reinforcement learning. [Doctoral Dissertation]. Oregon State University; 2009. Available from: http://hdl.handle.net/1957/13662


Oregon State University

2. Mehta, Neville. Hierarchical structure discovery and transfer in sequential decision problems.

Degree: PhD, Computer Science, 2011, Oregon State University

 Acting intelligently to efficiently solve sequential decision problems requires the ability to extract hierarchical structure from the underlying domain dynamics, exploit it for optimal or… (more)

Subjects/Keywords: hierarchical reinforcement learning; Reinforcement learning

APA (6th Edition):

Mehta, N. (2011). Hierarchical structure discovery and transfer in sequential decision problems. (Doctoral Dissertation). Oregon State University. Retrieved from http://hdl.handle.net/1957/25199

Chicago Manual of Style (16th Edition):

Mehta, Neville. “Hierarchical structure discovery and transfer in sequential decision problems.” 2011. Doctoral Dissertation, Oregon State University. Accessed June 17, 2019. http://hdl.handle.net/1957/25199.

MLA Handbook (7th Edition):

Mehta, Neville. “Hierarchical structure discovery and transfer in sequential decision problems.” 2011. Web. 17 Jun 2019.

Vancouver:

Mehta N. Hierarchical structure discovery and transfer in sequential decision problems. [Internet] [Doctoral dissertation]. Oregon State University; 2011. [cited 2019 Jun 17]. Available from: http://hdl.handle.net/1957/25199.

Council of Science Editors:

Mehta N. Hierarchical structure discovery and transfer in sequential decision problems. [Doctoral Dissertation]. Oregon State University; 2011. Available from: http://hdl.handle.net/1957/25199


Oregon State University

3. Lauer, Christopher Joseph. Determining optimal timber harvest and fuel treatment on a fire-threatened landscape using approximate dynamic programming.

Degree: PhD, 2017, Oregon State University

 Forest management in the face of fire risk is a challenging problem because fire spreads across a landscape and because its occurrence is unpredictable. Additionally,… (more)

Subjects/Keywords: reinforcement learning

APA (6th Edition):

Lauer, C. J. (2017). Determining optimal timber harvest and fuel treatment on a fire-threatened landscape using approximate dynamic programming. (Doctoral Dissertation). Oregon State University. Retrieved from http://hdl.handle.net/1957/61678

Chicago Manual of Style (16th Edition):

Lauer, Christopher Joseph. “Determining optimal timber harvest and fuel treatment on a fire-threatened landscape using approximate dynamic programming.” 2017. Doctoral Dissertation, Oregon State University. Accessed June 17, 2019. http://hdl.handle.net/1957/61678.

MLA Handbook (7th Edition):

Lauer, Christopher Joseph. “Determining optimal timber harvest and fuel treatment on a fire-threatened landscape using approximate dynamic programming.” 2017. Web. 17 Jun 2019.

Vancouver:

Lauer CJ. Determining optimal timber harvest and fuel treatment on a fire-threatened landscape using approximate dynamic programming. [Internet] [Doctoral dissertation]. Oregon State University; 2017. [cited 2019 Jun 17]. Available from: http://hdl.handle.net/1957/61678.

Council of Science Editors:

Lauer CJ. Determining optimal timber harvest and fuel treatment on a fire-threatened landscape using approximate dynamic programming. [Doctoral Dissertation]. Oregon State University; 2017. Available from: http://hdl.handle.net/1957/61678


University of Illinois – Urbana-Champaign

4. Potok, Matthew. Safe reinforcement learning: An overview, a hybrid systems perspective, and a case study.

Degree: MS, Electrical & Computer Engr, 2018, University of Illinois – Urbana-Champaign

Reinforcement learning (RL) is a general method for agents to learn optimal control policies through exploration and experience. Due to its generality, RL can generate… (more)

Subjects/Keywords: Reinforcement Learning

APA (6th Edition):

Potok, M. (2018). Safe reinforcement learning: An overview, a hybrid systems perspective, and a case study. (Thesis). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/102518

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Potok, Matthew. “Safe reinforcement learning: An overview, a hybrid systems perspective, and a case study.” 2018. Thesis, University of Illinois – Urbana-Champaign. Accessed June 17, 2019. http://hdl.handle.net/2142/102518.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Potok, Matthew. “Safe reinforcement learning: An overview, a hybrid systems perspective, and a case study.” 2018. Web. 17 Jun 2019.

Vancouver:

Potok M. Safe reinforcement learning: An overview, a hybrid systems perspective, and a case study. [Internet] [Thesis]. University of Illinois – Urbana-Champaign; 2018. [cited 2019 Jun 17]. Available from: http://hdl.handle.net/2142/102518.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Potok M. Safe reinforcement learning: An overview, a hybrid systems perspective, and a case study. [Thesis]. University of Illinois – Urbana-Champaign; 2018. Available from: http://hdl.handle.net/2142/102518

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

5. Frank, Mikhail Alexander. Learning to reach and reaching to learn: a unified approach to path planning and reactive control through reinforcement learning.

Degree: 2014, Università della Svizzera italiana

The next generation of intelligent robots will need to be able to plan reaches. Not just ballistic point-to-point reaches, but reaches around things… (more)

Subjects/Keywords: Reinforcement learning

APA (6th Edition):

Frank, M. A. (2014). Learning to reach and reaching to learn: a unified approach to path planning and reactive control through reinforcement learning. (Thesis). Università della Svizzera italiana. Retrieved from http://doc.rero.ch/record/234387

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Frank, Mikhail Alexander. “Learning to reach and reaching to learn: a unified approach to path planning and reactive control through reinforcement learning.” 2014. Thesis, Università della Svizzera italiana. Accessed June 17, 2019. http://doc.rero.ch/record/234387.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Frank, Mikhail Alexander. “Learning to reach and reaching to learn: a unified approach to path planning and reactive control through reinforcement learning.” 2014. Web. 17 Jun 2019.

Vancouver:

Frank MA. Learning to reach and reaching to learn: a unified approach to path planning and reactive control through reinforcement learning. [Internet] [Thesis]. Università della Svizzera italiana; 2014. [cited 2019 Jun 17]. Available from: http://doc.rero.ch/record/234387.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Frank MA. Learning to reach and reaching to learn: a unified approach to path planning and reactive control through reinforcement learning. [Thesis]. Università della Svizzera italiana; 2014. Available from: http://doc.rero.ch/record/234387

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Delft University of Technology

6. Van Diepen, M.D.M. Avoiding failure states during reinforcement learning.

Degree: 2011, Delft University of Technology

 The Delft Biorobotics Laboratory develops bipedal humanoid robots. One of these robots, called LEO, is designed to learn to walk using reinforcement learning. During learning,… (more)

Subjects/Keywords: reinforcement learning

APA (6th Edition):

Van Diepen, M. D. M. (2011). Avoiding failure states during reinforcement learning. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:1f03c580-9fd5-4807-87b5-d70890e05ff6

Chicago Manual of Style (16th Edition):

Van Diepen, M D M. “Avoiding failure states during reinforcement learning.” 2011. Masters Thesis, Delft University of Technology. Accessed June 17, 2019. http://resolver.tudelft.nl/uuid:1f03c580-9fd5-4807-87b5-d70890e05ff6.

MLA Handbook (7th Edition):

Van Diepen, M D M. “Avoiding failure states during reinforcement learning.” 2011. Web. 17 Jun 2019.

Vancouver:

Van Diepen MDM. Avoiding failure states during reinforcement learning. [Internet] [Masters thesis]. Delft University of Technology; 2011. [cited 2019 Jun 17]. Available from: http://resolver.tudelft.nl/uuid:1f03c580-9fd5-4807-87b5-d70890e05ff6.

Council of Science Editors:

Van Diepen MDM. Avoiding failure states during reinforcement learning. [Masters Thesis]. Delft University of Technology; 2011. Available from: http://resolver.tudelft.nl/uuid:1f03c580-9fd5-4807-87b5-d70890e05ff6


Delft University of Technology

7. Van Rooijen, J.C. Learning Parameter Selection in Continuous Reinforcement Learning: Attempting to Reduce Tuning Efforts.

Degree: 2012, Delft University of Technology

The reinforcement learning (RL) framework makes it possible to construct controllers that try to find an optimal control strategy in an unknown environment by trial and error. After… (more)

Subjects/Keywords: reinforcement learning

APA (6th Edition):

Van Rooijen, J. C. (2012). Learning Parameter Selection in Continuous Reinforcement Learning: Attempting to Reduce Tuning Efforts. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:94b81bc2-aff6-457f-9b54-be5e005def38

Chicago Manual of Style (16th Edition):

Van Rooijen, J C. “Learning Parameter Selection in Continuous Reinforcement Learning: Attempting to Reduce Tuning Efforts.” 2012. Masters Thesis, Delft University of Technology. Accessed June 17, 2019. http://resolver.tudelft.nl/uuid:94b81bc2-aff6-457f-9b54-be5e005def38.

MLA Handbook (7th Edition):

Van Rooijen, J C. “Learning Parameter Selection in Continuous Reinforcement Learning: Attempting to Reduce Tuning Efforts.” 2012. Web. 17 Jun 2019.

Vancouver:

Van Rooijen JC. Learning Parameter Selection in Continuous Reinforcement Learning: Attempting to Reduce Tuning Efforts. [Internet] [Masters thesis]. Delft University of Technology; 2012. [cited 2019 Jun 17]. Available from: http://resolver.tudelft.nl/uuid:94b81bc2-aff6-457f-9b54-be5e005def38.

Council of Science Editors:

Van Rooijen JC. Learning Parameter Selection in Continuous Reinforcement Learning: Attempting to Reduce Tuning Efforts. [Masters Thesis]. Delft University of Technology; 2012. Available from: http://resolver.tudelft.nl/uuid:94b81bc2-aff6-457f-9b54-be5e005def38


University of Aberdeen

8. Alexander, John W. Transfer in reinforcement learning.

Degree: PhD, 2015, University of Aberdeen

 The problem of developing skill repertoires autonomously in robotics and artificial intelligence is becoming ever more pressing. Currently, the issues of how to apply prior… (more)

Subjects/Keywords: Reinforcement learning; Learning

APA (6th Edition):

Alexander, J. W. (2015). Transfer in reinforcement learning. (Doctoral Dissertation). University of Aberdeen. Retrieved from http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=227908 ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.675561

Chicago Manual of Style (16th Edition):

Alexander, John W. “Transfer in reinforcement learning.” 2015. Doctoral Dissertation, University of Aberdeen. Accessed June 17, 2019. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=227908 ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.675561.

MLA Handbook (7th Edition):

Alexander, John W. “Transfer in reinforcement learning.” 2015. Web. 17 Jun 2019.

Vancouver:

Alexander JW. Transfer in reinforcement learning. [Internet] [Doctoral dissertation]. University of Aberdeen; 2015. [cited 2019 Jun 17]. Available from: http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=227908 ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.675561.

Council of Science Editors:

Alexander JW. Transfer in reinforcement learning. [Doctoral Dissertation]. University of Aberdeen; 2015. Available from: http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=227908 ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.675561


Texas A&M University

9. Ahn, Seungjai. Energy-efficient Q-learning for Collision Avoidance of Autonomous Robots.

Degree: 2017, Texas A&M University

Recently, many companies have been studying intelligent cars, and improvements in sensor technology and computing are required. These intelligent cars use GPS to know where… (more)
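
For context, the tabular Q-learning rule that collision-avoidance agents of this kind build on can be stated in a few lines (a minimal sketch of the textbook update; the thesis's energy-efficiency modifications are not reproduced, and all names below are illustrative):

    # One tabular Q-learning step:
    #   Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_b Q(s',b) - Q(s,a))
    def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
        best_next = max(Q.get((s_next, b), 0.0) for b in actions)
        old = Q.get((s, a), 0.0)
        Q[(s, a)] = old + alpha * (r + gamma * best_next - old)

    Q = {}
    q_update(Q, s=0, a="stop", r=-1.0, s_next=1, actions=["stop", "go"])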

Subjects/Keywords: Reinforcement Learning; Robot

APA (6th Edition):

Ahn, S. (2017). Energy-efficient Q-learning for Collision Avoidance of Autonomous Robots. (Thesis). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/161486

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Ahn, Seungjai. “Energy-efficient Q-learning for Collision Avoidance of Autonomous Robots.” 2017. Thesis, Texas A&M University. Accessed June 17, 2019. http://hdl.handle.net/1969.1/161486.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Ahn, Seungjai. “Energy-efficient Q-learning for Collision Avoidance of Autonomous Robots.” 2017. Web. 17 Jun 2019.

Vancouver:

Ahn S. Energy-efficient Q-learning for Collision Avoidance of Autonomous Robots. [Internet] [Thesis]. Texas A&M University; 2017. [cited 2019 Jun 17]. Available from: http://hdl.handle.net/1969.1/161486.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Ahn S. Energy-efficient Q-learning for Collision Avoidance of Autonomous Robots. [Thesis]. Texas A&M University; 2017. Available from: http://hdl.handle.net/1969.1/161486

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of St. Andrews

10. Aquili, Luca. Refinement of biologically inspired models of reinforcement learning.

Degree: 2010, University of St. Andrews

Reinforcement learning occurs when organisms adapt the propensities of given behaviours on the basis of associations with reward and punishment. Currently, reinforcement learning models have… (more)

Subjects/Keywords: Dopaminergic; Reinforcement learning

APA (6th Edition):

Aquili, L. (2010). Refinement of biologically inspired models of reinforcement learning. (Thesis). University of St. Andrews. Retrieved from http://hdl.handle.net/10023/886

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Aquili, Luca. “Refinement of biologically inspired models of reinforcement learning.” 2010. Thesis, University of St. Andrews. Accessed June 17, 2019. http://hdl.handle.net/10023/886.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Aquili, Luca. “Refinement of biologically inspired models of reinforcement learning.” 2010. Web. 17 Jun 2019.

Vancouver:

Aquili L. Refinement of biologically inspired models of reinforcement learning. [Internet] [Thesis]. University of St. Andrews; 2010. [cited 2019 Jun 17]. Available from: http://hdl.handle.net/10023/886.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Aquili L. Refinement of biologically inspired models of reinforcement learning. [Thesis]. University of St. Andrews; 2010. Available from: http://hdl.handle.net/10023/886

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of New South Wales

11. Ismail, Hafsa. A neural network framework for combining different task types and motivations in motivated reinforcement learning.

Degree: Engineering & Information Technology, 2014, University of New South Wales

 Combining different motivation models for different task types within artificial agents has the potential to produce agents capable of a greater range of behaviours in… (more)

Subjects/Keywords: Motivated Reinforcement Learning

APA (6th Edition):

Ismail, H. (2014). A neural network framework for combining different task types and motivations in motivated reinforcement learning. (Masters Thesis). University of New South Wales. Retrieved from http://handle.unsw.edu.au/1959.4/53975 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:12686/SOURCE02?view=true

Chicago Manual of Style (16th Edition):

Ismail, Hafsa. “A neural network framework for combining different task types and motivations in motivated reinforcement learning.” 2014. Masters Thesis, University of New South Wales. Accessed June 17, 2019. http://handle.unsw.edu.au/1959.4/53975 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:12686/SOURCE02?view=true.

MLA Handbook (7th Edition):

Ismail, Hafsa. “A neural network framework for combining different task types and motivations in motivated reinforcement learning.” 2014. Web. 17 Jun 2019.

Vancouver:

Ismail H. A neural network framework for combining different task types and motivations in motivated reinforcement learning. [Internet] [Masters thesis]. University of New South Wales; 2014. [cited 2019 Jun 17]. Available from: http://handle.unsw.edu.au/1959.4/53975 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:12686/SOURCE02?view=true.

Council of Science Editors:

Ismail H. A neural network framework for combining different task types and motivations in motivated reinforcement learning. [Masters Thesis]. University of New South Wales; 2014. Available from: http://handle.unsw.edu.au/1959.4/53975 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:12686/SOURCE02?view=true


Oregon State University

12. Wilson, Aaron (Aaron Creighton). Bayesian methods for knowledge transfer and policy search in reinforcement learning.

Degree: PhD, Computer Science, 2012, Oregon State University

 How can an agent generalize its knowledge to new circumstances? To learn effectively an agent acting in a sequential decision problem must make intelligent action… (more)

Subjects/Keywords: Machine Learning; Reinforcement learning

APA (6th Edition):

Wilson, A. C. (2012). Bayesian methods for knowledge transfer and policy search in reinforcement learning. (Doctoral Dissertation). Oregon State University. Retrieved from http://hdl.handle.net/1957/34550

Chicago Manual of Style (16th Edition):

Wilson, Aaron (Aaron Creighton). “Bayesian methods for knowledge transfer and policy search in reinforcement learning.” 2012. Doctoral Dissertation, Oregon State University. Accessed June 17, 2019. http://hdl.handle.net/1957/34550.

MLA Handbook (7th Edition):

Wilson, Aaron (Aaron Creighton). “Bayesian methods for knowledge transfer and policy search in reinforcement learning.” 2012. Web. 17 Jun 2019.

Vancouver:

Wilson AC. Bayesian methods for knowledge transfer and policy search in reinforcement learning. [Internet] [Doctoral dissertation]. Oregon State University; 2012. [cited 2019 Jun 17]. Available from: http://hdl.handle.net/1957/34550.

Council of Science Editors:

Wilson AC. Bayesian methods for knowledge transfer and policy search in reinforcement learning. [Doctoral Dissertation]. Oregon State University; 2012. Available from: http://hdl.handle.net/1957/34550


University of Texas – Austin

13. Jong, Nicholas K. Structured exploration for reinforcement learning.

Degree: Computer Sciences, 2010, University of Texas – Austin

Reinforcement Learning (RL) offers a promising approach towards achieving the dream of autonomous agents that can behave intelligently in the real world. Instead of requiring… (more)

Subjects/Keywords: Reinforcement learning; Machine learning

APA (6th Edition):

Jong, N. K. (2010). Structured exploration for reinforcement learning. (Thesis). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/ETD-UT-2010-12-2448

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Jong, Nicholas K. “Structured exploration for reinforcement learning.” 2010. Thesis, University of Texas – Austin. Accessed June 17, 2019. http://hdl.handle.net/2152/ETD-UT-2010-12-2448.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Jong, Nicholas K. “Structured exploration for reinforcement learning.” 2010. Web. 17 Jun 2019.

Vancouver:

Jong NK. Structured exploration for reinforcement learning. [Internet] [Thesis]. University of Texas – Austin; 2010. [cited 2019 Jun 17]. Available from: http://hdl.handle.net/2152/ETD-UT-2010-12-2448.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Jong NK. Structured exploration for reinforcement learning. [Thesis]. University of Texas – Austin; 2010. Available from: http://hdl.handle.net/2152/ETD-UT-2010-12-2448

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


NSYSU

14. Lin, Kun-da. Deep Reinforcement Learning with a Gating Network.

Degree: Master, Electrical Engineering, 2017, NSYSU

Reinforcement Learning (RL) is a good way to train a robot since it does not need an exact model of the environment. All that is needed is to… (more)
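
As a rough illustration of the gating-network idea named in the title (an assumed sketch of the general architecture, not the author's exact model), a softmax gate can weight the outputs of several expert networks:

    import numpy as np

    # Softmax gate blending k expert outputs (illustrative only).
    def gate_combine(gate_logits, expert_outputs):
        # gate_logits: shape (k,); expert_outputs: shape (k, action_dim)
        w = np.exp(gate_logits - gate_logits.max())
        w /= w.sum()
        return w @ expert_outputs

    blended = gate_combine(np.array([2.0, 0.5]),
                           np.array([[1.0, 0.0], [0.0, 1.0]]))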

Subjects/Keywords: Reinforcement Learning; Deep Reinforcement Learning; Deep Learning; Gating network; Neural network

APA (6th Edition):

Lin, K. (2017). Deep Reinforcement Learning with a Gating Network. (Thesis). NSYSU. Retrieved from http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0223117-131536

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Lin, Kun-da. “Deep Reinforcement Learning with a Gating Network.” 2017. Thesis, NSYSU. Accessed June 17, 2019. http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0223117-131536.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Lin, Kun-da. “Deep Reinforcement Learning with a Gating Network.” 2017. Web. 17 Jun 2019.

Vancouver:

Lin K. Deep Reinforcement Learning with a Gating Network. [Internet] [Thesis]. NSYSU; 2017. [cited 2019 Jun 17]. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0223117-131536.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Lin K. Deep Reinforcement Learning with a Gating Network. [Thesis]. NSYSU; 2017. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0223117-131536

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


NSYSU

15. Tseng, Yi-Chia. A Unified Approach to Inverse Reinforcement Learning by Oppositive Demonstrations.

Degree: Master, Electrical Engineering, 2015, NSYSU

Reinforcement learning (RL) techniques use a reward function to guide a learning agent in solving sequential decision-making problems through interactions with a dynamic environment,… (more)
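
In the apprenticeship-learning setting these keywords point to, the unknown reward is commonly modeled as a linear combination of state features, R(s) = w · φ(s), and inverse RL searches for weights w whose induced behavior matches the demonstrations. A minimal sketch (the feature map and weights below are made-up placeholders, not taken from the thesis):

    import numpy as np

    # Hypothetical feature map and weights; an inverse-RL algorithm would
    # estimate w from demonstrations rather than fixing it by hand.
    def phi(s):
        x, y = s
        return np.array([x, y, x * y])

    w = np.array([0.5, -1.0, 0.2])
    reward = lambda s: float(w @ phi(s))
    print(round(reward((1.0, 2.0)), 6))  # 0.5 - 2.0 + 0.4 = -1.1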

Subjects/Keywords: Apprenticeship Learning; Feature weight; Inverse Reinforcement learning; Reward function; Reinforcement learning

APA (6th Edition):

Tseng, Y. (2015). A Unified Approach to Inverse Reinforcement Learning by Oppositive Demonstrations. (Thesis). NSYSU. Retrieved from http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0727115-130716

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Tseng, Yi-Chia. “A Unified Approach to Inverse Reinforcement Learning by Oppositive Demonstrations.” 2015. Thesis, NSYSU. Accessed June 17, 2019. http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0727115-130716.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Tseng, Yi-Chia. “A Unified Approach to Inverse Reinforcement Learning by Oppositive Demonstrations.” 2015. Web. 17 Jun 2019.

Vancouver:

Tseng Y. A Unified Approach to Inverse Reinforcement Learning by Oppositive Demonstrations. [Internet] [Thesis]. NSYSU; 2015. [cited 2019 Jun 17]. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0727115-130716.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Tseng Y. A Unified Approach to Inverse Reinforcement Learning by Oppositive Demonstrations. [Thesis]. NSYSU; 2015. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0727115-130716

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


NSYSU

16. Lin, Hung-shyuan. Applying The Concept of Fuzzy Logic to Inverse Reinforcement Learning.

Degree: Master, Electrical Engineering, 2015, NSYSU

It's a study of reinforcement learning: learning from the interaction of agents with a dynamic environment to obtain a reward function R, update the policy, and converge learning and… (more)

Subjects/Keywords: Inverse reinforcement learning; Reward function; Fuzzy; Reinforcement learning; AdaBoost; Apprenticeship learning

APA (6th Edition):

Lin, H. (2015). Applying The Concept of Fuzzy Logic to Inverse Reinforcement Learning. (Thesis). NSYSU. Retrieved from http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-1025115-185021

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Lin, Hung-shyuan. “Applying The Concept of Fuzzy Logic to Inverse Reinforcement Learning.” 2015. Thesis, NSYSU. Accessed June 17, 2019. http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-1025115-185021.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Lin, Hung-shyuan. “Applying The Concept of Fuzzy Logic to Inverse Reinforcement Learning.” 2015. Web. 17 Jun 2019.

Vancouver:

Lin H. Applying The Concept of Fuzzy Logic to Inverse Reinforcement Learning. [Internet] [Thesis]. NSYSU; 2015. [cited 2019 Jun 17]. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-1025115-185021.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Lin H. Applying The Concept of Fuzzy Logic to Inverse Reinforcement Learning. [Thesis]. NSYSU; 2015. Available from: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-1025115-185021

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

17. Yunduan, Cui. Practical Model-free Reinforcement Learning in Complex Robot Systems with High Dimensional States : 高次元状態を有する複雑なロボットシステムにおける実用的なモデルフリー強化学習; コウジゲン ジョウタイ オ ユウスル フクザツナ ロボット システム ニ オケル ジツヨウテキナ モデルフリー キョウカ ガクシュウ.

Degree: Doctor of Engineering (博士(工学)), 2017, Nara Institute of Science and Technology / 奈良先端科学技術大学院大学

Subjects/Keywords: Reinforcement Learning

APA (6th Edition):

Yunduan, C. (2017). Practical Model-free Reinforcement Learning in Complex Robot Systems with High Dimensional States : 高次元状態を有する複雑なロボットシステムにおける実用的なモデルフリー強化学習; コウジゲン ジョウタイ オ ユウスル フクザツナ ロボット システム ニ オケル ジツヨウテキナ モデルフリー キョウカ ガクシュウ. (Thesis). Nara Institute of Science and Technology / 奈良先端科学技術大学院大学. Retrieved from http://hdl.handle.net/10061/12169

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Yunduan, Cui. “Practical Model-free Reinforcement Learning in Complex Robot Systems with High Dimensional States : 高次元状態を有する複雑なロボットシステムにおける実用的なモデルフリー強化学習; コウジゲン ジョウタイ オ ユウスル フクザツナ ロボット システム ニ オケル ジツヨウテキナ モデルフリー キョウカ ガクシュウ.” 2017. Thesis, Nara Institute of Science and Technology / 奈良先端科学技術大学院大学. Accessed June 17, 2019. http://hdl.handle.net/10061/12169.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Yunduan, Cui. “Practical Model-free Reinforcement Learning in Complex Robot Systems with High Dimensional States : 高次元状態を有する複雑なロボットシステムにおける実用的なモデルフリー強化学習; コウジゲン ジョウタイ オ ユウスル フクザツナ ロボット システム ニ オケル ジツヨウテキナ モデルフリー キョウカ ガクシュウ.” 2017. Web. 17 Jun 2019.

Vancouver:

Yunduan C. Practical Model-free Reinforcement Learning in Complex Robot Systems with High Dimensional States : 高次元状態を有する複雑なロボットシステムにおける実用的なモデルフリー強化学習; コウジゲン ジョウタイ オ ユウスル フクザツナ ロボット システム ニ オケル ジツヨウテキナ モデルフリー キョウカ ガクシュウ. [Internet] [Thesis]. Nara Institute of Science and Technology / 奈良先端科学技術大学院大学; 2017. [cited 2019 Jun 17]. Available from: http://hdl.handle.net/10061/12169.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Yunduan C. Practical Model-free Reinforcement Learning in Complex Robot Systems with High Dimensional States : 高次元状態を有する複雑なロボットシステムにおける実用的なモデルフリー強化学習; コウジゲン ジョウタイ オ ユウスル フクザツナ ロボット システム ニ オケル ジツヨウテキナ モデルフリー キョウカ ガクシュウ. [Thesis]. Nara Institute of Science and Technology / 奈良先端科学技術大学院大学; 2017. Available from: http://hdl.handle.net/10061/12169

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Alberta

18. Dick, Travis B. Policy Gradient Reinforcement Learning Without Regret.

Degree: MS, Department of Computing Science, 2015, University of Alberta

 This thesis consists of two independent projects, each contributing to a central goal of artificial intelligence research: to build computer systems that are capable of… (more)
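
The policy-gradient-with-baseline setting this record concerns can be summarized compactly: a REINFORCE-style estimator where subtracting a baseline from the return leaves the gradient unbiased but reduces its variance. A generic sketch under an assumed linear-softmax parameterization (not the thesis's regret analysis):

    import numpy as np

    def softmax_policy(theta, s):
        # theta: (num_actions, feat_dim); s: (feat_dim,)
        prefs = theta @ s
        p = np.exp(prefs - prefs.max())
        return p / p.sum()

    def reinforce_grad(theta, episode, baseline=0.0):
        """episode: list of (s, a, G) triples, G = return from that step."""
        grad = np.zeros_like(theta)
        for s, a, G in episode:
            p = softmax_policy(theta, s)
            glog = -np.outer(p, s)   # grad of log pi(a|s) for linear softmax
            glog[a] += s
            grad += (G - baseline) * glog
        return grad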

Subjects/Keywords: Policy Gradient; Baseline; Reinforcement Learning

APA (6th Edition):

Dick, T. B. (2015). Policy Gradient Reinforcement Learning Without Regret. (Masters Thesis). University of Alberta. Retrieved from https://era.library.ualberta.ca/files/df65vb663

Chicago Manual of Style (16th Edition):

Dick, Travis B. “Policy Gradient Reinforcement Learning Without Regret.” 2015. Masters Thesis, University of Alberta. Accessed June 17, 2019. https://era.library.ualberta.ca/files/df65vb663.

MLA Handbook (7th Edition):

Dick, Travis B. “Policy Gradient Reinforcement Learning Without Regret.” 2015. Web. 17 Jun 2019.

Vancouver:

Dick TB. Policy Gradient Reinforcement Learning Without Regret. [Internet] [Masters thesis]. University of Alberta; 2015. [cited 2019 Jun 17]. Available from: https://era.library.ualberta.ca/files/df65vb663.

Council of Science Editors:

Dick TB. Policy Gradient Reinforcement Learning Without Regret. [Masters Thesis]. University of Alberta; 2015. Available from: https://era.library.ualberta.ca/files/df65vb663


University of Alberta

19. White, Adam, M. DEVELOPING A PREDICTIVE APPROACH TO KNOWLEDGE.

Degree: PhD, Department of Computing Science, 2015, University of Alberta

 Understanding how an artificial agent may represent, acquire, update, and use large amounts of knowledge has long been an important research challenge in artificial intelligence.… (more)

Subjects/Keywords: Reinforcement learning; Robotics; Knowledge

APA (6th Edition):

White, Adam, M. (2015). DEVELOPING A PREDICTIVE APPROACH TO KNOWLEDGE. (Doctoral Dissertation). University of Alberta. Retrieved from https://era.library.ualberta.ca/files/bg257h75k

Chicago Manual of Style (16th Edition):

White, Adam, M. “DEVELOPING A PREDICTIVE APPROACH TO KNOWLEDGE.” 2015. Doctoral Dissertation, University of Alberta. Accessed June 17, 2019. https://era.library.ualberta.ca/files/bg257h75k.

MLA Handbook (7th Edition):

White, Adam, M. “DEVELOPING A PREDICTIVE APPROACH TO KNOWLEDGE.” 2015. Web. 17 Jun 2019.

Vancouver:

White, Adam M. DEVELOPING A PREDICTIVE APPROACH TO KNOWLEDGE. [Internet] [Doctoral dissertation]. University of Alberta; 2015. [cited 2019 Jun 17]. Available from: https://era.library.ualberta.ca/files/bg257h75k.

Council of Science Editors:

White, Adam M. DEVELOPING A PREDICTIVE APPROACH TO KNOWLEDGE. [Doctoral Dissertation]. University of Alberta; 2015. Available from: https://era.library.ualberta.ca/files/bg257h75k


Oregon State University

20. Zhang, Wei, 1960-. Reinforcement learning for job-shop scheduling.

Degree: PhD, Computer Science, 1996, Oregon State University

Subjects/Keywords: Reinforcement learning

APA (6th Edition):

Zhang, W. (1996). Reinforcement learning for job-shop scheduling. (Doctoral Dissertation). Oregon State University. Retrieved from http://hdl.handle.net/1957/11721

Chicago Manual of Style (16th Edition):

Zhang, Wei, 1960-. “Reinforcement learning for job-shop scheduling.” 1996. Doctoral Dissertation, Oregon State University. Accessed June 17, 2019. http://hdl.handle.net/1957/11721.

MLA Handbook (7th Edition):

Zhang, Wei, 1960-. “Reinforcement learning for job-shop scheduling.” 1996. Web. 17 Jun 2019.

Vancouver:

Zhang W. Reinforcement learning for job-shop scheduling. [Internet] [Doctoral dissertation]. Oregon State University; 1996. [cited 2019 Jun 17]. Available from: http://hdl.handle.net/1957/11721.

Council of Science Editors:

Zhang W. Reinforcement learning for job-shop scheduling. [Doctoral Dissertation]. Oregon State University; 1996. Available from: http://hdl.handle.net/1957/11721

21. Clark, Kendrick Cheng Go. A Reinforcement Learning Model of the Shepherding Task : 羊飼い課題の強化学習モデル; ヒツジ カイ カダイ ノ キョウカ ガクシュウ モデル.

Degree: Nara Institute of Science and Technology / 奈良先端科学技術大学院大学

Subjects/Keywords: reinforcement learning

APA (6th Edition):

Clark, K. C. G. (n.d.). A Reinforcement Learning Model of the Shepherding Task : 羊飼い課題の強化学習モデル; ヒツジ カイ カダイ ノ キョウカ ガクシュウ モデル. (Thesis). Nara Institute of Science and Technology / 奈良先端科学技術大学院大学. Retrieved from http://hdl.handle.net/10061/10997

Note: this citation may be lacking information needed for this citation format:
No year of publication.
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Clark, Kendrick Cheng Go. “A Reinforcement Learning Model of the Shepherding Task : 羊飼い課題の強化学習モデル; ヒツジ カイ カダイ ノ キョウカ ガクシュウ モデル.” Thesis, Nara Institute of Science and Technology / 奈良先端科学技術大学院大学. Accessed June 17, 2019. http://hdl.handle.net/10061/10997.

Note: this citation may be lacking information needed for this citation format:
No year of publication.
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Clark, Kendrick Cheng Go. “A Reinforcement Learning Model of the Shepherding Task : 羊飼い課題の強化学習モデル; ヒツジ カイ カダイ ノ キョウカ ガクシュウ モデル.” Web. 17 Jun 2019.

Note: this citation may be lacking information needed for this citation format:
No year of publication.

Vancouver:

Clark KCG. A Reinforcement Learning Model of the Shepherding Task : 羊飼い課題の強化学習モデル; ヒツジ カイ カダイ ノ キョウカ ガクシュウ モデル. [Internet] [Thesis]. Nara Institute of Science and Technology / 奈良先端科学技術大学院大学; [cited 2019 Jun 17]. Available from: http://hdl.handle.net/10061/10997.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
No year of publication.

Council of Science Editors:

Clark KCG. A Reinforcement Learning Model of the Shepherding Task : 羊飼い課題の強化学習モデル; ヒツジ カイ カダイ ノ キョウカ ガクシュウ モデル. [Thesis]. Nara Institute of Science and Technology / 奈良先端科学技術大学院大学; Available from: http://hdl.handle.net/10061/10997

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
No year of publication.

22. Mauricio Alexandre Parente Burdelis. Temporal Difference Approach in Linearly Solvable Markov Decision Processes : 線形可解マルコフ決定過程における受動的ダイナミクスのモデリングと推定; センケイ カカイ マルコフ ケッテイ カテイ ニ オケル ジュドウテキ ダイナミクス ノ モデリング ト スイテイ.

Degree: Doctor of Engineering (博士(工学)), Nara Institute of Science and Technology / 奈良先端科学技術大学院大学

Subjects/Keywords: Reinforcement learning

APA (6th Edition):

Burdelis, M. A. P. (n.d.). Temporal Difference Approach in Linearly Solvable Markov Decision Processes : 線形可解マルコフ決定過程における受動的ダイナミクスのモデリングと推定; センケイ カカイ マルコフ ケッテイ カテイ ニ オケル ジュドウテキ ダイナミクス ノ モデリング ト スイテイ. (Thesis). Nara Institute of Science and Technology / 奈良先端科学技術大学院大学. Retrieved from http://hdl.handle.net/10061/9189

Note: this citation may be lacking information needed for this citation format:
No year of publication.
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Burdelis, Mauricio Alexandre Parente. “Temporal Difference Approach in Linearly Solvable Markov Decision Processes : 線形可解マルコフ決定過程における受動的ダイナミクスのモデリングと推定; センケイ カカイ マルコフ ケッテイ カテイ ニ オケル ジュドウテキ ダイナミクス ノ モデリング ト スイテイ.” Thesis, Nara Institute of Science and Technology / 奈良先端科学技術大学院大学. Accessed June 17, 2019. http://hdl.handle.net/10061/9189.

Note: this citation may be lacking information needed for this citation format:
No year of publication.
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Burdelis, Mauricio Alexandre Parente. “Temporal Difference Approach in Linearly Solvable Markov Decision Processes : 線形可解マルコフ決定過程における受動的ダイナミクスのモデリングと推定; センケイ カカイ マルコフ ケッテイ カテイ ニ オケル ジュドウテキ ダイナミクス ノ モデリング ト スイテイ.” Web. 17 Jun 2019.

Note: this citation may be lacking information needed for this citation format:
No year of publication.

Vancouver:

Burdelis MAP. Temporal Difference Approach in Linearly Solvable Markov Decision Processes : 線形可解マルコフ決定過程における受動的ダイナミクスのモデリングと推定; センケイ カカイ マルコフ ケッテイ カテイ ニ オケル ジュドウテキ ダイナミクス ノ モデリング ト スイテイ. [Internet] [Thesis]. Nara Institute of Science and Technology / 奈良先端科学技術大学院大学; [cited 2019 Jun 17]. Available from: http://hdl.handle.net/10061/9189.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
No year of publication.

Council of Science Editors:

Burdelis MAP. Temporal Difference Approach in Linearly Solvable Markov Decision Processes : 線形可解マルコフ決定過程における受動的ダイナミクスのモデリングと推定; センケイ カカイ マルコフ ケッテイ カテイ ニ オケル ジュドウテキ ダイナミクス ノ モデリング ト スイテイ. [Thesis]. Nara Institute of Science and Technology / 奈良先端科学技術大学院大学; Available from: http://hdl.handle.net/10061/9189

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
No year of publication.

23. 森本, 淳. Hierarchical Decomposition and Min-max Strategy for Fast and Robust Reinforcement Learning in the Real Environment : 階層分割とMin-max戦略による実環境での高速かつロバストな強化学習; カイソウ ブンカツ ト Min-max センリャク ニヨル ジツカンキョウ デノ コウソク カツ ロバストナ キョウカ ガクシュウ.

Degree: Nara Institute of Science and Technology / 奈良先端科学技術大学院大学

Subjects/Keywords: reinforcement learning

APA (6th Edition):

森本, 淳. (n.d.). Hierarchical Decomposition and Min-max Strategy for Fast and Robust Reinforcement Learning in the Real Environment : 階層分割とMin-max戦略による実環境での高速かつロバストな強化学習; カイソウ ブンカツ ト Min-max センリャク ニヨル ジツカンキョウ デノ コウソク カツ ロバストナ キョウカ ガクシュウ. (Thesis). Nara Institute of Science and Technology / 奈良先端科学技術大学院大学. Retrieved from http://hdl.handle.net/10061/2966

Note: this citation may be lacking information needed for this citation format:
No year of publication.
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

森本, 淳. “Hierarchical Decomposition and Min-max Strategy for Fast and Robust Reinforcement Learning in the Real Environment : 階層分割とMin-max戦略による実環境での高速かつロバストな強化学習; カイソウ ブンカツ ト Min-max センリャク ニヨル ジツカンキョウ デノ コウソク カツ ロバストナ キョウカ ガクシュウ.” Thesis, Nara Institute of Science and Technology / 奈良先端科学技術大学院大学. Accessed June 17, 2019. http://hdl.handle.net/10061/2966.

Note: this citation may be lacking information needed for this citation format:
No year of publication.
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

森本, 淳. “Hierarchical Decomposition and Min-max Strategy for Fast and Robust Reinforcement Learning in the Real Environment : 階層分割とMin-max戦略による実環境での高速かつロバストな強化学習; カイソウ ブンカツ ト Min-max センリャク ニヨル ジツカンキョウ デノ コウソク カツ ロバストナ キョウカ ガクシュウ.” Web. 17 Jun 2019.

Note: this citation may be lacking information needed for this citation format:
No year of publication.

Vancouver:

森本 . Hierarchical Decomposition and Min-max Strategy for Fast and Robust Reinforcement Learning in the Real Environment : 階層分割とMin-max戦略による実環境での高速かつロバストな強化学習; カイソウ ブンカツ ト Min-max センリャク ニヨル ジツカンキョウ デノ コウソク カツ ロバストナ キョウカ ガクシュウ. [Internet] [Thesis]. Nara Institute of Science and Technology / 奈良先端科学技術大学院大学; [cited 2019 Jun 17]. Available from: http://hdl.handle.net/10061/2966.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
No year of publication.

Council of Science Editors:

森本 . Hierarchical Decomposition and Min-max Strategy for Fast and Robust Reinforcement Learning in the Real Environment : 階層分割とMin-max戦略による実環境での高速かつロバストな強化学習; カイソウ ブンカツ ト Min-max センリャク ニヨル ジツカンキョウ デノ コウソク カツ ロバストナ キョウカ ガクシュウ. [Thesis]. Nara Institute of Science and Technology / 奈良先端科学技術大学院大学; Available from: http://hdl.handle.net/10061/2966

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
No year of publication.

24. 南條, 信人. 動的な部分空間生成による価値の逐次推定を行う強化学習法 : An effective reinforcement learning with automatic construction of basis functions and sequential approximation; ドウテキナ ブブン クウカン セイセイ ニヨル カチ ノ チクジ スイテイ オ オコナウ キョウカ ガクシュウホウ.

Degree: Nara Institute of Science and Technology / 奈良先端科学技術大学院大学

Subjects/Keywords: Reinforcement learning

APA (6th Edition):

南條, 信人. (n.d.). 動的な部分空間生成による価値の逐次推定を行う強化学習法 : An effective reinforcement learning with automatic construction of basis functions and sequential approximation; ドウテキナ ブブン クウカン セイセイ ニヨル カチ ノ チクジ スイテイ オ オコナウ キョウカ ガクシュウホウ. (Thesis). Nara Institute of Science and Technology / 奈良先端科学技術大学院大学. Retrieved from http://hdl.handle.net/10061/4584

Note: this citation may be lacking information needed for this citation format:
No year of publication.
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

南條, 信人. “動的な部分空間生成による価値の逐次推定を行う強化学習法 : An effective reinforcement learning with automatic construction of basis functions and sequential approximation; ドウテキナ ブブン クウカン セイセイ ニヨル カチ ノ チクジ スイテイ オ オコナウ キョウカ ガクシュウホウ.” Thesis, Nara Institute of Science and Technology / 奈良先端科学技術大学院大学. Accessed June 17, 2019. http://hdl.handle.net/10061/4584.

Note: this citation may be lacking information needed for this citation format:
No year of publication.
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

南條, 信人. “動的な部分空間生成による価値の逐次推定を行う強化学習法 : An effective reinforcement learning with automatic construction of basis functions and sequential approximation; ドウテキナ ブブン クウカン セイセイ ニヨル カチ ノ チクジ スイテイ オ オコナウ キョウカ ガクシュウホウ.” Web. 17 Jun 2019.

Note: this citation may be lacking information needed for this citation format:
No year of publication.

Vancouver:

南條 . 動的な部分空間生成による価値の逐次推定を行う強化学習法 : An effective reinforcement learning with automatic construction of basis functions and sequential approximation; ドウテキナ ブブン クウカン セイセイ ニヨル カチ ノ チクジ スイテイ オ オコナウ キョウカ ガクシュウホウ. [Internet] [Thesis]. Nara Institute of Science and Technology / 奈良先端科学技術大学院大学; [cited 2019 Jun 17]. Available from: http://hdl.handle.net/10061/4584.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
No year of publication.

Council of Science Editors:

南條 . 動的な部分空間生成による価値の逐次推定を行う強化学習法 : An effective reinforcement learning with automatic construction of basis functions and sequential approximation; ドウテキナ ブブン クウカン セイセイ ニヨル カチ ノ チクジ スイテイ オ オコナウ キョウカ ガクシュウホウ. [Thesis]. Nara Institute of Science and Technology / 奈良先端科学技術大学院大学; Available from: http://hdl.handle.net/10061/4584

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
No year of publication.

25. Rodrigues, Alan de Souza. Model-Free and Model-Based Reinforcement Learning Strategies in the Acquisition of Sequential Behaviors : 系列運動の獲得におけるモデルフリーとモデルベース強化学習戦略; ケイレツ ウンドウ ノ カクトク ニオケル モデル フリー ト モデル ベース キョウカ ガクシュウ センリャク.

Degree: Nara Institute of Science and Technology / 奈良先端科学技術大学院大学

Subjects/Keywords: Reinforcement Learning

APA (6th Edition):

Rodrigues, A. d. S. (n.d.). Model-Free and Model-Based Reinforcement Learning Strategies in the Acquisition of Sequential Behaviors : 系列運動の獲得におけるモデルフリーとモデルベース強化学習戦略; ケイレツ ウンドウ ノ カクトク ニオケル モデル フリー ト モデル ベース キョウカ ガクシュウ センリャク. (Thesis). Nara Institute of Science and Technology / 奈良先端科学技術大学院大学. Retrieved from http://hdl.handle.net/10061/4681

Note: this citation may be lacking information needed for this citation format:
No year of publication.
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Rodrigues, Alan de Souza. “Model-Free and Model-Based Reinforcement Learning Strategies in the Acquisition of Sequential Behaviors : 系列運動の獲得におけるモデルフリーとモデルベース強化学習戦略; ケイレツ ウンドウ ノ カクトク ニオケル モデル フリー ト モデル ベース キョウカ ガクシュウ センリャク.” Thesis, Nara Institute of Science and Technology / 奈良先端科学技術大学院大学. Accessed June 17, 2019. http://hdl.handle.net/10061/4681.

Note: this citation may be lacking information needed for this citation format:
No year of publication.
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Rodrigues, Alan de Souza. “Model-Free and Model-Based Reinforcement Learning Strategies in the Acquisition of Sequential Behaviors : 系列運動の獲得におけるモデルフリーとモデルベース強化学習戦略; ケイレツ ウンドウ ノ カクトク ニオケル モデル フリー ト モデル ベース キョウカ ガクシュウ センリャク.” Web. 17 Jun 2019.

Note: this citation may be lacking information needed for this citation format:
No year of publication.

Vancouver:

Rodrigues AdS. Model-Free and Model-Based Reinforcement Learning Strategies in the Acquisition of Sequential Behaviors : 系列運動の獲得におけるモデルフリーとモデルベース強化学習戦略; ケイレツ ウンドウ ノ カクトク ニオケル モデル フリー ト モデル ベース キョウカ ガクシュウ センリャク. [Internet] [Thesis]. Nara Institute of Science and Technology / 奈良先端科学技術大学院大学; [cited 2019 Jun 17]. Available from: http://hdl.handle.net/10061/4681.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
No year of publication.

Council of Science Editors:

Rodrigues AdS. Model-Free and Model-Based Reinforcement Learning Strategies in the Acquisition of Sequential Behaviors : 系列運動の獲得におけるモデルフリーとモデルベース強化学習戦略; ケイレツ ウンドウ ノ カクトク ニオケル モデル フリー ト モデル ベース キョウカ ガクシュウ センリャク. [Thesis]. Nara Institute of Science and Technology / 奈良先端科学技術大学院大学; Available from: http://hdl.handle.net/10061/4681

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
No year of publication.

26. Morimura, Tetsuro. Efficient Task-independent Reinforcement Learning based on Policy Gradient : 方策勾配に基づく効率の良い課題非依存な強化学習法; ホウサク コウバイ ニ モトヅク コウリツ ノ ヨイ カダイ ヒ イゾン ナ キョウカ ガクシュウ ホウ.

Degree: Nara Institute of Science and Technology / 奈良先端科学技術大学院大学

Subjects/Keywords: Reinforcement learning

APA (6th Edition):

Morimura, T. (n.d.). Efficient Task-independent Reinforcement Learning based on Policy Gradient : 方策勾配に基づく効率の良い課題非依存な強化学習法; ホウサク コウバイ ニ モトヅク コウリツ ノ ヨイ カダイ ヒ イゾン ナ キョウカ ガクシュウ ホウ. (Thesis). Nara Institute of Science and Technology / 奈良先端科学技術大学院大学. Retrieved from http://hdl.handle.net/10061/4693

Note: this citation may be lacking information needed for this citation format:
No year of publication.
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Morimura, Tetsuro. “Efficient Task-independent Reinforcement Learning based on Policy Gradient : 方策勾配に基づく効率の良い課題非依存な強化学習法; ホウサク コウバイ ニ モトヅク コウリツ ノ ヨイ カダイ ヒ イゾン ナ キョウカ ガクシュウ ホウ.” Thesis, Nara Institute of Science and Technology / 奈良先端科学技術大学院大学. Accessed June 17, 2019. http://hdl.handle.net/10061/4693.

Note: this citation may be lacking information needed for this citation format:
No year of publication.
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Morimura, Tetsuro. “Efficient Task-independent Reinforcement Learning based on Policy Gradient : 方策勾配に基づく効率の良い課題非依存な強化学習法; ホウサク コウバイ ニ モトヅク コウリツ ノ ヨイ カダイ ヒ イゾン ナ キョウカ ガクシュウ ホウ.” Web. 17 Jun 2019.

Note: this citation may be lacking information needed for this citation format:
No year of publication.

Vancouver:

Morimura T. Efficient Task-independent Reinforcement Learning based on Policy Gradient : 方策勾配に基づく効率の良い課題非依存な強化学習法; ホウサク コウバイ ニ モトヅク コウリツ ノ ヨイ カダイ ヒ イゾン ナ キョウカ ガクシュウ ホウ. [Internet] [Thesis]. Nara Institute of Science and Technology / 奈良先端科学技術大学院大学; [cited 2019 Jun 17]. Available from: http://hdl.handle.net/10061/4693.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
No year of publication.

Council of Science Editors:

Morimura T. Efficient Task-independent Reinforcement Learning based on Policy Gradient : 方策勾配に基づく効率の良い課題非依存な強化学習法; ホウサク コウバイ ニ モトヅク コウリツ ノ ヨイ カダイ ヒ イゾン ナ キョウカ ガクシュウ ホウ. [Thesis]. Nara Institute of Science and Technology / 奈良先端科学技術大学院大学; Available from: http://hdl.handle.net/10061/4693

Note: this citation may be lacking information needed for this citation format:
Not specified: Master's Thesis or Doctoral Dissertation
No year of publication.

27. Otsuka, Makoto. Goal-Oriented Representations of the External World : A Free-Energy-Based Approach : 目的指向的な外界の表現に関する研究 : 自由エネルギーからのアプローチ; モクテキ シコウテキナ ガイカイ ノ ヒョウゲン ニ カンスル ケンキュウ : ジユウ エネルギー カラノ アプローチ.

Degree: Nara Institute of Science and Technology / 奈良先端科学技術大学院大学

Subjects/Keywords: reinforcement learning

APA (6th Edition):

Otsuka, M. (n.d.). Goal-Oriented Representations of the External World : A Free-Energy-Based Approach : 目的指向的な外界の表現に関する研究 : 自由エネルギーからのアプローチ; モクテキ シコウテキナ ガイカイ ノ ヒョウゲン ニ カンスル ケンキュウ : ジユウ エネルギー カラノ アプローチ. (Thesis). Nara Institute of Science and Technology / 奈良先端科学技術大学院大学. Retrieved from http://hdl.handle.net/10061/5548

Note: this citation may be lacking information needed for this citation format:
No year of publication.
Not specified: Master's Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Otsuka, Makoto. “Goal-Oriented Representations of the External World : A Free-Energy-Based Approach : 目的指向的な外界の表現に関する研究 : 自由エネルギーからのアプローチ; モクテキ シコウテキナ ガイカイ ノ ヒョウゲン ニ カンスル ケンキュウ : ジユウ エネルギー カラノ アプローチ.” Thesis, Nara Institute of Science and Technology / 奈良先端科学技術大学院大学. Accessed June 17, 2019. http://hdl.handle.net/10061/5548.

Note: this citation may be lacking information needed for this citation format:
No year of publication.
Not specified: Master's Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Otsuka, Makoto. “Goal-Oriented Representations of the External World : A Free-Energy-Based Approach : 目的指向的な外界の表現に関する研究 : 自由エネルギーからのアプローチ; モクテキ シコウテキナ ガイカイ ノ ヒョウゲン ニ カンスル ケンキュウ : ジユウ エネルギー カラノ アプローチ.” Web. 17 Jun 2019.

Note: this citation may be lacking information needed for this citation format:
No year of publication.

Vancouver:

Otsuka M. Goal-Oriented Representations of the External World : A Free-Energy-Based Approach : 目的指向的な外界の表現に関する研究 : 自由エネルギーからのアプローチ; モクテキ シコウテキナ ガイカイ ノ ヒョウゲン ニ カンスル ケンキュウ : ジユウ エネルギー カラノ アプローチ. [Internet] [Thesis]. Nara Institute of Science and Technology / 奈良先端科学技術大学院大学; [cited 2019 Jun 17]. Available from: http://hdl.handle.net/10061/5548.

Note: this citation may be lacking information needed for this citation format:
Not specified: Master's Thesis or Doctoral Dissertation
No year of publication.

Council of Science Editors:

Otsuka M. Goal-Oriented Representations of the External World : A Free-Energy-Based Approach : 目的指向的な外界の表現に関する研究 : 自由エネルギーからのアプローチ; モクテキ シコウテキナ ガイカイ ノ ヒョウゲン ニ カンスル ケンキュウ : ジユウ エネルギー カラノ アプローチ. [Thesis]. Nara Institute of Science and Technology / 奈良先端科学技術大学院大学; Available from: http://hdl.handle.net/10061/5548

Note: this citation may be lacking information needed for this citation format:
Not specified: Master's Thesis or Doctoral Dissertation
No year of publication.

28. Koyanagi, Izumi. Reinforcement Learning-based Lightpath Establishment in All-Optical WDM Networks : 全光WDM網における強化学習を用いた光パス設定法; ゼンコウ WDM モウ ニ オケル キョウカ ガクシュウ オ モチイタ ヒカリ パス セッテイ ホウ.

Degree: Nara Institute of Science and Technology / 奈良先端科学技術大学院大学

Subjects/Keywords: Reinforcement learning

APA (6th Edition):

Koyanagi, I. (n.d.). Reinforcement Learning-based Lightpath Establishment in All-Optical WDM Networks : 全光WDM網における強化学習を用いた光パス設定法; ゼンコウ WDM モウ ニ オケル キョウカ ガクシュウ オ モチイタ ヒカリ パス セッテイ ホウ. (Thesis). Nara Institute of Science and Technology / 奈良先端科学技術大学院大学. Retrieved from http://hdl.handle.net/10061/5630

Note: this citation may be lacking information needed for this citation format:
No year of publication.
Not specified: Master's Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Koyanagi, Izumi. “Reinforcement Learning-based Lightpath Establishment in All-Optical WDM Networks : 全光WDM網における強化学習を用いた光パス設定法; ゼンコウ WDM モウ ニ オケル キョウカ ガクシュウ オ モチイタ ヒカリ パス セッテイ ホウ.” Thesis, Nara Institute of Science and Technology / 奈良先端科学技術大学院大学. Accessed June 17, 2019. http://hdl.handle.net/10061/5630.

Note: this citation may be lacking information needed for this citation format:
No year of publication.
Not specified: Master's Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Koyanagi, Izumi. “Reinforcement Learning-based Lightpath Establishment in All-Optical WDM Networks : 全光WDM網における強化学習を用いた光パス設定法; ゼンコウ WDM モウ ニ オケル キョウカ ガクシュウ オ モチイタ ヒカリ パス セッテイ ホウ.” Web. 17 Jun 2019.

Note: this citation may be lacking information needed for this citation format:
No year of publication.

Vancouver:

Koyanagi I. Reinforcement Learning-based Lightpath Establishment in All-Optical WDM Networks : 全光WDM網における強化学習を用いた光パス設定法; ゼンコウ WDM モウ ニ オケル キョウカ ガクシュウ オ モチイタ ヒカリ パス セッテイ ホウ. [Internet] [Thesis]. Nara Institute of Science and Technology / 奈良先端科学技術大学院大学; [cited 2019 Jun 17]. Available from: http://hdl.handle.net/10061/5630.

Note: this citation may be lacking information needed for this citation format:
Not specified: Master's Thesis or Doctoral Dissertation
No year of publication.

Council of Science Editors:

Koyanagi I. Reinforcement Learning-based Lightpath Establishment in All-Optical WDM Networks : 全光WDM網における強化学習を用いた光パス設定法; ゼンコウ WDM モウ ニ オケル キョウカ ガクシュウ オ モチイタ ヒカリ パス セッテイ ホウ. [Thesis]. Nara Institute of Science and Technology / 奈良先端科学技術大学院大学; Available from: http://hdl.handle.net/10061/5630

Note: this citation may be lacking information needed for this citation format:
Not specified: Master's Thesis or Doctoral Dissertation
No year of publication.

29. Rodrigues, Alan de Souza. Multiple Reinforcement Learning Action Selection Strategies in Prefrontal-Basal Ganglia and Cerebellar Networks : 強化学習理論に基づく意思決定戦略における前頭前野-大脳基底核-小脳系神経回路の計算論的機能に関する研究; キョウカ ガクシュウ リロン ニ モトズク イシ ケッテイ センリャク ニオケル ゼントウゼンヤ ダイノウ キテイ カク ショウノウ ケイ シンケイ カイロ ノ ケイサンロンテキ キノウ ニカンスル ケンキュウ.

Degree: Nara Institute of Science and Technology / 奈良先端科学技術大学院大学

Subjects/Keywords: reinforcement learning

APA (6th Edition):

Rodrigues, A. d. S. (n.d.). Multiple Reinforcement Learning Action Selection Strategies in Prefrontal-Basal Ganglia and Cerebellar Networks : 強化学習理論に基づく意思決定戦略における前頭前野-大脳基底核-小脳系神経回路の計算論的機能に関する研究; キョウカ ガクシュウ リロン ニ モトズク イシ ケッテイ センリャク ニオケル ゼントウゼンヤ ダイノウ キテイ カク ショウノウ ケイ シンケイ カイロ ノ ケイサンロンテキ キノウ ニカンスル ケンキュウ. (Thesis). Nara Institute of Science and Technology / 奈良先端科学技術大学院大学. Retrieved from http://hdl.handle.net/10061/6637

Note: this citation may be lacking information needed for this citation format:
No year of publication.
Not specified: Master's Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Rodrigues, Alan de Souza. “Multiple Reinforcement Learning Action Selection Strategies in Prefrontal-Basal Ganglia and Cerebellar Networks : 強化学習理論に基づく意思決定戦略における前頭前野-大脳基底核-小脳系神経回路の計算論的機能に関する研究; キョウカ ガクシュウ リロン ニ モトズク イシ ケッテイ センリャク ニオケル ゼントウゼンヤ ダイノウ キテイ カク ショウノウ ケイ シンケイ カイロ ノ ケイサンロンテキ キノウ ニカンスル ケンキュウ.” Thesis, Nara Institute of Science and Technology / 奈良先端科学技術大学院大学. Accessed June 17, 2019. http://hdl.handle.net/10061/6637.

Note: this citation may be lacking information needed for this citation format:
No year of publication.
Not specified: Master's Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Rodrigues, Alan de Souza. “Multiple Reinforcement Learning Action Selection Strategies in Prefrontal-Basal Ganglia and Cerebellar Networks : 強化学習理論に基づく意思決定戦略における前頭前野-大脳基底核-小脳系神経回路の計算論的機能に関する研究; キョウカ ガクシュウ リロン ニ モトズク イシ ケッテイ センリャク ニオケル ゼントウゼンヤ ダイノウ キテイ カク ショウノウ ケイ シンケイ カイロ ノ ケイサンロンテキ キノウ ニカンスル ケンキュウ.” Web. 17 Jun 2019.

Note: this citation may be lacking information needed for this citation format:
No year of publication.

Vancouver:

Rodrigues AdS. Multiple Reinforcement Learning Action Selection Strategies in Prefrontal-Basal Ganglia and Cerebellar Networks : 強化学習理論に基づく意思決定戦略における前頭前野-大脳基底核-小脳系神経回路の計算論的機能に関する研究; キョウカ ガクシュウ リロン ニ モトズク イシ ケッテイ センリャク ニオケル ゼントウゼンヤ ダイノウ キテイ カク ショウノウ ケイ シンケイ カイロ ノ ケイサンロンテキ キノウ ニカンスル ケンキュウ. [Internet] [Thesis]. Nara Institute of Science and Technology / 奈良先端科学技術大学院大学; [cited 2019 Jun 17]. Available from: http://hdl.handle.net/10061/6637.

Note: this citation may be lacking information needed for this citation format:
Not specified: Master's Thesis or Doctoral Dissertation
No year of publication.

Council of Science Editors:

Rodrigues AdS. Multiple Reinforcement Learning Action Selection Strategies in Prefrontal-Basal Ganglia and Cerebellar Networks : 強化学習理論に基づく意思決定戦略における前頭前野-大脳基底核-小脳系神経回路の計算論的機能に関する研究; キョウカ ガクシュウ リロン ニ モトズク イシ ケッテイ センリャク ニオケル ゼントウゼンヤ ダイノウ キテイ カク ショウノウ ケイ シンケイ カイロ ノ ケイサンロンテキ キノウ ニカンスル ケンキュウ. [Thesis]. Nara Institute of Science and Technology / 奈良先端科学技術大学院大学; Available from: http://hdl.handle.net/10061/6637

Note: this citation may be lacking information needed for this citation format:
Not specified: Master's Thesis or Doctoral Dissertation
No year of publication.

30. 大下, 将宗. 強化学習を用いた多様な環境における歩容獲得手法の実機蜘蛛型ロボットにおける検証 : Reinforcement Learning Method to Acquire Walking Patterns in Varying Environment: Verification with an Actual Spider-type Robot; キョウカ ガクシュウ オ モチイタ タヨウナ カンキョウ ニ オケル ホヨウ カクトク シュホウ ノ ジッキ クモガタ ロボット ニ オケル ケンショウ.

Degree: Nara Institute of Science and Technology / 奈良先端科学技術大学院大学

Subjects/Keywords: Reinforcement learning

APA (6th Edition):

大下, 将宗. (n.d.). 強化学習を用いた多様な環境における歩容獲得手法の実機蜘蛛型ロボットにおける検証 : Reinforcement Learning Method to Acquire Walking Patterns in Varying Environment: Verification with an Actual Spider-type Robot; キョウカ ガクシュウ オ モチイタ タヨウナ カンキョウ ニ オケル ホヨウ カクトク シュホウ ノ ジッキ クモガタ ロボット ニ オケル ケンショウ. (Thesis). Nara Institute of Science and Technology / 奈良先端科学技術大学院大学. Retrieved from http://hdl.handle.net/10061/9374

Note: this citation may be lacking information needed for this citation format:
No year of publication.
Not specified: Master's Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

大下, 将宗. “強化学習を用いた多様な環境における歩容獲得手法の実機蜘蛛型ロボットにおける検証 : Reinforcement Learning Method to Acquire Walking Patterns in Varying Environment: Verification with an Actual Spider-type Robot; キョウカ ガクシュウ オ モチイタ タヨウナ カンキョウ ニ オケル ホヨウ カクトク シュホウ ノ ジッキ クモガタ ロボット ニ オケル ケンショウ.” Thesis, Nara Institute of Science and Technology / 奈良先端科学技術大学院大学. Accessed June 17, 2019. http://hdl.handle.net/10061/9374.

Note: this citation may be lacking information needed for this citation format:
No year of publication.
Not specified: Master's Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

大下, 将宗. “強化学習を用いた多様な環境における歩容獲得手法の実機蜘蛛型ロボットにおける検証 : Reinforcement Learning Method to Acquire Walking Patterns in Varying Environment: Verification with an Actual Spider-type Robot; キョウカ ガクシュウ オ モチイタ タヨウナ カンキョウ ニ オケル ホヨウ カクトク シュホウ ノ ジッキ クモガタ ロボット ニ オケル ケンショウ.” Web. 17 Jun 2019.

Note: this citation may be lacking information needed for this citation format:
No year of publication.

Vancouver:

大下 将宗. 強化学習を用いた多様な環境における歩容獲得手法の実機蜘蛛型ロボットにおける検証 : Reinforcement Learning Method to Acquire Walking Patterns in Varying Environment: Verification with an Actual Spider-type Robot; キョウカ ガクシュウ オ モチイタ タヨウナ カンキョウ ニ オケル ホヨウ カクトク シュホウ ノ ジッキ クモガタ ロボット ニ オケル ケンショウ. [Internet] [Thesis]. Nara Institute of Science and Technology / 奈良先端科学技術大学院大学; [cited 2019 Jun 17]. Available from: http://hdl.handle.net/10061/9374.

Note: this citation may be lacking information needed for this citation format:
Not specified: Master's Thesis or Doctoral Dissertation
No year of publication.

Council of Science Editors:

大下 将宗. 強化学習を用いた多様な環境における歩容獲得手法の実機蜘蛛型ロボットにおける検証 : Reinforcement Learning Method to Acquire Walking Patterns in Varying Environment: Verification with an Actual Spider-type Robot; キョウカ ガクシュウ オ モチイタ タヨウナ カンキョウ ニ オケル ホヨウ カクトク シュホウ ノ ジッキ クモガタ ロボット ニ オケル ケンショウ. [Thesis]. Nara Institute of Science and Technology / 奈良先端科学技術大学院大学; Available from: http://hdl.handle.net/10061/9374

Note: this citation may be lacking information needed for this citation format:
Not specified: Master's Thesis or Doctoral Dissertation
No year of publication.
