You searched for subject:(reinforcement learning). Showing records 91 – 120 of 745 total matches.

Kansas State University

91. Behzadan, Vahid. Security of deep reinforcement learning.

Degree: PhD, Department of Computer Science, 2019, Kansas State University

 Since the inception of Deep Reinforcement Learning (DRL) algorithms, there has been a growing interest from both the research and the industrial communities in the… (more)

Subjects/Keywords: Reinforcement learning; Machine learning; Adversarial machine learning; Policy learning; Security; Artificial Intelligence

APA (6th Edition):

Behzadan, V. (2019). Security of deep reinforcement learning. (Doctoral Dissertation). Kansas State University. Retrieved from http://hdl.handle.net/2097/39799

Chicago Manual of Style (16th Edition):

Behzadan, Vahid. “Security of deep reinforcement learning.” 2019. Doctoral Dissertation, Kansas State University. Accessed May 30, 2020. http://hdl.handle.net/2097/39799.

MLA Handbook (7th Edition):

Behzadan, Vahid. “Security of deep reinforcement learning.” 2019. Web. 30 May 2020.

Vancouver:

Behzadan V. Security of deep reinforcement learning. [Internet] [Doctoral dissertation]. Kansas State University; 2019. [cited 2020 May 30]. Available from: http://hdl.handle.net/2097/39799.

Council of Science Editors:

Behzadan V. Security of deep reinforcement learning. [Doctoral Dissertation]. Kansas State University; 2019. Available from: http://hdl.handle.net/2097/39799


Oregon State University

92. Wang, Xin. Model-based approximation methods for reinforcement learning.

Degree: PhD, Computer Science, 2006, Oregon State University

 The thesis focuses on model-based approximation methods for reinforcement learning with large scale applications such as combinatorial optimization problems. First, the thesis proposes two new… (more)

Subjects/Keywords: Reinforcement Learning; Reinforcement learning (Machine learning)  – Mathematical models

APA (6th Edition):

Wang, X. (2006). Model-based approximation methods for reinforcement learning. (Doctoral Dissertation). Oregon State University. Retrieved from http://hdl.handle.net/1957/2581

Chicago Manual of Style (16th Edition):

Wang, Xin. “Model-based approximation methods for reinforcement learning.” 2006. Doctoral Dissertation, Oregon State University. Accessed May 30, 2020. http://hdl.handle.net/1957/2581.

MLA Handbook (7th Edition):

Wang, Xin. “Model-based approximation methods for reinforcement learning.” 2006. Web. 30 May 2020.

Vancouver:

Wang X. Model-based approximation methods for reinforcement learning. [Internet] [Doctoral dissertation]. Oregon State University; 2006. [cited 2020 May 30]. Available from: http://hdl.handle.net/1957/2581.

Council of Science Editors:

Wang X. Model-based approximation methods for reinforcement learning. [Doctoral Dissertation]. Oregon State University; 2006. Available from: http://hdl.handle.net/1957/2581


The Ohio State University

93. Yang, Zhaoyuan. Adversarial Reinforcement Learning for Control System Design: A Deep Reinforcement Learning Approach.

Degree: MS, Electrical and Computer Engineering, 2018, The Ohio State University

 We adapt the idea of adversarial reinforcement learning to the numerical state inputs of controllers. We propose a method for generating adversarial noise for the inputs of controllers… (more)

Subjects/Keywords: Computer Science; Electrical Engineering; Artificial Intelligence; Engineering; deep reinforcement learning; control system; adversarial reinforcement learning; machine learning

APA (6th Edition):

Yang, Z. (2018). Adversarial Reinforcement Learning for Control System Design: A Deep Reinforcement Learning Approach. (Masters Thesis). The Ohio State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=osu152411491981452

Chicago Manual of Style (16th Edition):

Yang, Zhaoyuan. “Adversarial Reinforcement Learning for Control System Design: A Deep Reinforcement Learning Approach.” 2018. Masters Thesis, The Ohio State University. Accessed May 30, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu152411491981452.

MLA Handbook (7th Edition):

Yang, Zhaoyuan. “Adversarial Reinforcement Learning for Control System Design: A Deep Reinforcement Learning Approach.” 2018. Web. 30 May 2020.

Vancouver:

Yang Z. Adversarial Reinforcement Learning for Control System Design: A Deep Reinforcement Learning Approach. [Internet] [Masters thesis]. The Ohio State University; 2018. [cited 2020 May 30]. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu152411491981452.

Council of Science Editors:

Yang Z. Adversarial Reinforcement Learning for Control System Design: A Deep Reinforcement Learning Approach. [Masters Thesis]. The Ohio State University; 2018. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu152411491981452


Delft University of Technology

94. de Bruin, T.D. Sample Efficient Deep Reinforcement Learning for Control.

Degree: 2020, Delft University of Technology

 The arrival of intelligent, general-purpose robots that can learn to perform new tasks autonomously has been promised for a long time now. Deep reinforcement learning,… (more)

Subjects/Keywords: Reinforcement Learning; Deep Learning; Robotics; Deep reinforcement learning; Control

APA (6th Edition):

de Bruin, T. D. (2020). Sample Efficient Deep Reinforcement Learning for Control. (Doctoral Dissertation). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:f8faacb0-9a55-453d-97fd-0388a3c848ee ; urn:NBN:nl:ui:24-uuid:f8faacb0-9a55-453d-97fd-0388a3c848ee ; 10.4233/uuid:f8faacb0-9a55-453d-97fd-0388a3c848ee ; urn:isbn:978-94-6384-096-5

Chicago Manual of Style (16th Edition):

de Bruin, T D. “Sample Efficient Deep Reinforcement Learning for Control.” 2020. Doctoral Dissertation, Delft University of Technology. Accessed May 30, 2020. http://resolver.tudelft.nl/uuid:f8faacb0-9a55-453d-97fd-0388a3c848ee ; urn:NBN:nl:ui:24-uuid:f8faacb0-9a55-453d-97fd-0388a3c848ee ; 10.4233/uuid:f8faacb0-9a55-453d-97fd-0388a3c848ee ; urn:isbn:978-94-6384-096-5.

MLA Handbook (7th Edition):

de Bruin, T D. “Sample Efficient Deep Reinforcement Learning for Control.” 2020. Web. 30 May 2020.

Vancouver:

de Bruin TD. Sample Efficient Deep Reinforcement Learning for Control. [Internet] [Doctoral dissertation]. Delft University of Technology; 2020. [cited 2020 May 30]. Available from: http://resolver.tudelft.nl/uuid:f8faacb0-9a55-453d-97fd-0388a3c848ee ; urn:NBN:nl:ui:24-uuid:f8faacb0-9a55-453d-97fd-0388a3c848ee ; 10.4233/uuid:f8faacb0-9a55-453d-97fd-0388a3c848ee ; urn:isbn:978-94-6384-096-5.

Council of Science Editors:

de Bruin TD. Sample Efficient Deep Reinforcement Learning for Control. [Doctoral Dissertation]. Delft University of Technology; 2020. Available from: http://resolver.tudelft.nl/uuid:f8faacb0-9a55-453d-97fd-0388a3c848ee ; urn:NBN:nl:ui:24-uuid:f8faacb0-9a55-453d-97fd-0388a3c848ee ; 10.4233/uuid:f8faacb0-9a55-453d-97fd-0388a3c848ee ; urn:isbn:978-94-6384-096-5


University of California – San Diego

95. Lipton, Zachary Chase. Learning from Temporally-Structured Human Activities Data.

Degree: Computer Science, 2017, University of California – San Diego

 Despite the extraordinary success of deep learning on diverse problems, these triumphs are too often confined to large, clean datasets and well-defined objectives. Face recognition… (more)

Subjects/Keywords: Artificial intelligence; Deep Learning; Fairness; Interpretability; Medical informatics; Reinforcement Learning

APA (6th Edition):

Lipton, Z. C. (2017). Learning from Temporally-Structured Human Activities Data. (Thesis). University of California – San Diego. Retrieved from http://www.escholarship.org/uc/item/6mw0q3j8

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Lipton, Zachary Chase. “Learning from Temporally-Structured Human Activities Data.” 2017. Thesis, University of California – San Diego. Accessed May 30, 2020. http://www.escholarship.org/uc/item/6mw0q3j8.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Lipton, Zachary Chase. “Learning from Temporally-Structured Human Activities Data.” 2017. Web. 30 May 2020.

Vancouver:

Lipton ZC. Learning from Temporally-Structured Human Activities Data. [Internet] [Thesis]. University of California – San Diego; 2017. [cited 2020 May 30]. Available from: http://www.escholarship.org/uc/item/6mw0q3j8.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Lipton ZC. Learning from Temporally-Structured Human Activities Data. [Thesis]. University of California – San Diego; 2017. Available from: http://www.escholarship.org/uc/item/6mw0q3j8

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Ryerson University

96. Salmon, Ricardo. Reinforcement learning using associative memory networks.

Degree: 2009, Ryerson University

 It is shown that associative memory networks are capable of solving immediate and general reinforcement learning (RL) problems by combining techniques from associative neural networks… (more)

Subjects/Keywords: Neural networks (Computer science); Reinforcement learning (Machine learning); Memory

APA (6th Edition):

Salmon, R. (2009). Reinforcement learning using associative memory networks. (Thesis). Ryerson University. Retrieved from https://digital.library.ryerson.ca/islandora/object/RULA%3A1069

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Salmon, Ricardo. “Reinforcement learning using associative memory networks.” 2009. Thesis, Ryerson University. Accessed May 30, 2020. https://digital.library.ryerson.ca/islandora/object/RULA%3A1069.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Salmon, Ricardo. “Reinforcement learning using associative memory networks.” 2009. Web. 30 May 2020.

Vancouver:

Salmon R. Reinforcement learning using associative memory networks. [Internet] [Thesis]. Ryerson University; 2009. [cited 2020 May 30]. Available from: https://digital.library.ryerson.ca/islandora/object/RULA%3A1069.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Salmon R. Reinforcement learning using associative memory networks. [Thesis]. Ryerson University; 2009. Available from: https://digital.library.ryerson.ca/islandora/object/RULA%3A1069

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Alberta

97. Abbasi-Yadkori, Yasin. Online Learning for Linearly Parametrized Control Problems.

Degree: PhD, Department of Computing Science, 2012, University of Alberta

 In a discrete-time online control problem, a learner makes an effort to control the state of an initially unknown environment so as to minimize the… (more)

Subjects/Keywords: Online Learning; Reinforcement Learning; Confidence Sets; Linear Bandits

APA (6th Edition):

Abbasi-Yadkori, Y. (2012). Online Learning for Linearly Parametrized Control Problems. (Doctoral Dissertation). University of Alberta. Retrieved from https://era.library.ualberta.ca/files/6969z199c

Chicago Manual of Style (16th Edition):

Abbasi-Yadkori, Yasin. “Online Learning for Linearly Parametrized Control Problems.” 2012. Doctoral Dissertation, University of Alberta. Accessed May 30, 2020. https://era.library.ualberta.ca/files/6969z199c.

MLA Handbook (7th Edition):

Abbasi-Yadkori, Yasin. “Online Learning for Linearly Parametrized Control Problems.” 2012. Web. 30 May 2020.

Vancouver:

Abbasi-Yadkori Y. Online Learning for Linearly Parametrized Control Problems. [Internet] [Doctoral dissertation]. University of Alberta; 2012. [cited 2020 May 30]. Available from: https://era.library.ualberta.ca/files/6969z199c.

Council of Science Editors:

Abbasi-Yadkori Y. Online Learning for Linearly Parametrized Control Problems. [Doctoral Dissertation]. University of Alberta; 2012. Available from: https://era.library.ualberta.ca/files/6969z199c


University of Alberta

98. Silver, David. Reinforcement Learning and Simulation-Based Search in Computer Go.

Degree: PhD, Department of Computing Science, 2009, University of Alberta

Learning and planning are two fundamental problems in artificial intelligence. The learning problem can be tackled by reinforcement learning methods, such as temporal-difference learning, which… (more)

Subjects/Keywords: Reinforcement learning, simulation-based search, computer Go, temporal-difference learning

APA (6th Edition):

Silver, D. (2009). Reinforcement Learning and Simulation-Based Search in Computer Go. (Doctoral Dissertation). University of Alberta. Retrieved from https://era.library.ualberta.ca/files/cf95jb59d

Chicago Manual of Style (16th Edition):

Silver, David. “Reinforcement Learning and Simulation-Based Search in Computer Go.” 2009. Doctoral Dissertation, University of Alberta. Accessed May 30, 2020. https://era.library.ualberta.ca/files/cf95jb59d.

MLA Handbook (7th Edition):

Silver, David. “Reinforcement Learning and Simulation-Based Search in Computer Go.” 2009. Web. 30 May 2020.

Vancouver:

Silver D. Reinforcement Learning and Simulation-Based Search in Computer Go. [Internet] [Doctoral dissertation]. University of Alberta; 2009. [cited 2020 May 30]. Available from: https://era.library.ualberta.ca/files/cf95jb59d.

Council of Science Editors:

Silver D. Reinforcement Learning and Simulation-Based Search in Computer Go. [Doctoral Dissertation]. University of Alberta; 2009. Available from: https://era.library.ualberta.ca/files/cf95jb59d


University of Alberta

99. Das Gupta, Ujjwal. Adaptive Representation for Policy Gradient.

Degree: MS, Department of Computing Science, 2015, University of Alberta

 Much of the focus on finding good representations in reinforcement learning has been on learning complex non-linear predictors of value. Methods like policy gradient, that… (more)

Subjects/Keywords: Representation Learning; Decision Trees; Policy Gradient; Reinforcement Learning

APA (6th Edition):

Das Gupta, U. (2015). Adaptive Representation for Policy Gradient. (Masters Thesis). University of Alberta. Retrieved from https://era.library.ualberta.ca/files/zk51vk289

Chicago Manual of Style (16th Edition):

Das Gupta, Ujjwal. “Adaptive Representation for Policy Gradient.” 2015. Masters Thesis, University of Alberta. Accessed May 30, 2020. https://era.library.ualberta.ca/files/zk51vk289.

MLA Handbook (7th Edition):

Das Gupta, Ujjwal. “Adaptive Representation for Policy Gradient.” 2015. Web. 30 May 2020.

Vancouver:

Das Gupta U. Adaptive Representation for Policy Gradient. [Internet] [Masters thesis]. University of Alberta; 2015. [cited 2020 May 30]. Available from: https://era.library.ualberta.ca/files/zk51vk289.

Council of Science Editors:

Das Gupta U. Adaptive Representation for Policy Gradient. [Masters Thesis]. University of Alberta; 2015. Available from: https://era.library.ualberta.ca/files/zk51vk289


University of Alberta

100. Ávila Pires, Bernardo. Statistical analysis of L1-penalized linear estimation with applications.

Degree: MS, Department of Computing Science, 2011, University of Alberta

 We study linear estimation based on perturbed data when performance is measured by a matrix norm of the expected residual error, in particular, the case… (more)

Subjects/Keywords: linear estimation; linear regression; machine learning; Lasso; excess risk; reinforcement learning

APA (6th Edition):

Ávila Pires, B. (2011). Statistical analysis of L1-penalized linear estimation with applications. (Masters Thesis). University of Alberta. Retrieved from https://era.library.ualberta.ca/files/dr26xz283

Chicago Manual of Style (16th Edition):

Ávila Pires, Bernardo. “Statistical analysis of L1-penalized linear estimation with applications.” 2011. Masters Thesis, University of Alberta. Accessed May 30, 2020. https://era.library.ualberta.ca/files/dr26xz283.

MLA Handbook (7th Edition):

Ávila Pires, Bernardo. “Statistical analysis of L1-penalized linear estimation with applications.” 2011. Web. 30 May 2020.

Vancouver:

Ávila Pires B. Statistical analysis of L1-penalized linear estimation with applications. [Internet] [Masters thesis]. University of Alberta; 2011. [cited 2020 May 30]. Available from: https://era.library.ualberta.ca/files/dr26xz283.

Council of Science Editors:

Ávila Pires B. Statistical analysis of L1-penalized linear estimation with applications. [Masters Thesis]. University of Alberta; 2011. Available from: https://era.library.ualberta.ca/files/dr26xz283


Victoria University of Wellington

101. Bebbington, James. Learning Actions That Reduce Variation in Objects.

Degree: 2011, Victoria University of Wellington

 The variation in the data that a robot in the real world receives from its sensory inputs (i.e. its sensory data) will come from many… (more)

Subjects/Keywords: Reinforcement learning; Restricted Boltzmann Machine; Recognition; Machine learning

APA (6th Edition):

Bebbington, J. (2011). Learning Actions That Reduce Variation in Objects. (Masters Thesis). Victoria University of Wellington. Retrieved from http://hdl.handle.net/10063/2295

Chicago Manual of Style (16th Edition):

Bebbington, James. “Learning Actions That Reduce Variation in Objects.” 2011. Masters Thesis, Victoria University of Wellington. Accessed May 30, 2020. http://hdl.handle.net/10063/2295.

MLA Handbook (7th Edition):

Bebbington, James. “Learning Actions That Reduce Variation in Objects.” 2011. Web. 30 May 2020.

Vancouver:

Bebbington J. Learning Actions That Reduce Variation in Objects. [Internet] [Masters thesis]. Victoria University of Wellington; 2011. [cited 2020 May 30]. Available from: http://hdl.handle.net/10063/2295.

Council of Science Editors:

Bebbington J. Learning Actions That Reduce Variation in Objects. [Masters Thesis]. Victoria University of Wellington; 2011. Available from: http://hdl.handle.net/10063/2295


Oregon State University

102. Wynkoop, Michael S. Learning MDP action models via discrete mixture trees.

Degree: MS, Computer Science, 2008, Oregon State University

 This thesis addresses the problem of learning dynamic Bayesian network (DBN) models to support reinforcement learning. It focuses on learning regression tree models of the… (more)

Subjects/Keywords: Dynamic Bayesian Network; Reinforcement learning (Machine learning)  – Mathematical models

APA (6th Edition):

Wynkoop, M. S. (2008). Learning MDP action models via discrete mixture trees. (Masters Thesis). Oregon State University. Retrieved from http://hdl.handle.net/1957/9096

Chicago Manual of Style (16th Edition):

Wynkoop, Michael S. “Learning MDP action models via discrete mixture trees.” 2008. Masters Thesis, Oregon State University. Accessed May 30, 2020. http://hdl.handle.net/1957/9096.

MLA Handbook (7th Edition):

Wynkoop, Michael S. “Learning MDP action models via discrete mixture trees.” 2008. Web. 30 May 2020.

Vancouver:

Wynkoop MS. Learning MDP action models via discrete mixture trees. [Internet] [Masters thesis]. Oregon State University; 2008. [cited 2020 May 30]. Available from: http://hdl.handle.net/1957/9096.

Council of Science Editors:

Wynkoop MS. Learning MDP action models via discrete mixture trees. [Masters Thesis]. Oregon State University; 2008. Available from: http://hdl.handle.net/1957/9096


University of Western Australia

103. Jin, Lu. Reinforcement learning based energy efficient routing protocols for underwater acoustic wireless sensor networks.

Degree: PhD, 2012, University of Western Australia

[Truncated abstract] The unique properties of underwater acoustic communications, such as large and time-varying propagation delay, low and range dependent channel bandwidth, and adverse operating… (more)

Subjects/Keywords: Energy efficient; Reinforcement learning; Q-learning; Wireless sensor networks; Underwater acoustic

APA (6th Edition):

Jin, L. (2012). Reinforcement learning based energy efficient routing protocols for underwater acoustic wireless sensor networks. (Doctoral Dissertation). University of Western Australia. Retrieved from http://repository.uwa.edu.au:80/R/?func=dbin-jump-full&object_id=33676&local_base=GEN01-INS01

Chicago Manual of Style (16th Edition):

Jin, Lu. “Reinforcement learning based energy efficient routing protocols for underwater acoustic wireless sensor networks.” 2012. Doctoral Dissertation, University of Western Australia. Accessed May 30, 2020. http://repository.uwa.edu.au:80/R/?func=dbin-jump-full&object_id=33676&local_base=GEN01-INS01.

MLA Handbook (7th Edition):

Jin, Lu. “Reinforcement learning based energy efficient routing protocols for underwater acoustic wireless sensor networks.” 2012. Web. 30 May 2020.

Vancouver:

Jin L. Reinforcement learning based energy efficient routing protocols for underwater acoustic wireless sensor networks. [Internet] [Doctoral dissertation]. University of Western Australia; 2012. [cited 2020 May 30]. Available from: http://repository.uwa.edu.au:80/R/?func=dbin-jump-full&object_id=33676&local_base=GEN01-INS01.

Council of Science Editors:

Jin L. Reinforcement learning based energy efficient routing protocols for underwater acoustic wireless sensor networks. [Doctoral Dissertation]. University of Western Australia; 2012. Available from: http://repository.uwa.edu.au:80/R/?func=dbin-jump-full&object_id=33676&local_base=GEN01-INS01


Universiteit Utrecht

104. Denissen, N.P.M. Predicting App Launches on Mobile Devices Using Intelligent Agents and Machine Learning.

Degree: 2015, Universiteit Utrecht

 Data rich applications often have to load large amounts of data upon launch. The launch times for these applications, e.g. Facebook and NU.nl, can be… (more)

Subjects/Keywords: Intelligent; Agents; Machine; Learning; Mobile; Application; Prediction; Q-learning; reinforcement; MAS

APA (6th Edition):

Denissen, N. P. M. (2015). Predicting App Launches on Mobile Devices Using Intelligent Agents and Machine Learning. (Masters Thesis). Universiteit Utrecht. Retrieved from http://dspace.library.uu.nl:8080/handle/1874/318233

Chicago Manual of Style (16th Edition):

Denissen, N P M. “Predicting App Launches on Mobile Devices Using Intelligent Agents and Machine Learning.” 2015. Masters Thesis, Universiteit Utrecht. Accessed May 30, 2020. http://dspace.library.uu.nl:8080/handle/1874/318233.

MLA Handbook (7th Edition):

Denissen, N P M. “Predicting App Launches on Mobile Devices Using Intelligent Agents and Machine Learning.” 2015. Web. 30 May 2020.

Vancouver:

Denissen NPM. Predicting App Launches on Mobile Devices Using Intelligent Agents and Machine Learning. [Internet] [Masters thesis]. Universiteit Utrecht; 2015. [cited 2020 May 30]. Available from: http://dspace.library.uu.nl:8080/handle/1874/318233.

Council of Science Editors:

Denissen NPM. Predicting App Launches on Mobile Devices Using Intelligent Agents and Machine Learning. [Masters Thesis]. Universiteit Utrecht; 2015. Available from: http://dspace.library.uu.nl:8080/handle/1874/318233


Case Western Reserve University

105. Ewing, Gabriel. Knowledge Transfer from Expert Demonstrations in Continuous State-Action Spaces.

Degree: MSs, EECS - Computer and Information Sciences, 2018, Case Western Reserve University

 In this thesis, we address the task of reinforcement learning in continuous state and action spaces. Specifically, we consider multi-task reinforcement learning, where a sequence… (more)

Subjects/Keywords: Computer Science; Machine learning; reinforcement learning; continuous actions; knowledge transfer; prostheses

APA (6th Edition):

Ewing, G. (2018). Knowledge Transfer from Expert Demonstrations in Continuous State-Action Spaces. (Masters Thesis). Case Western Reserve University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=case1512748071082221

Chicago Manual of Style (16th Edition):

Ewing, Gabriel. “Knowledge Transfer from Expert Demonstrations in Continuous State-Action Spaces.” 2018. Masters Thesis, Case Western Reserve University. Accessed May 30, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=case1512748071082221.

MLA Handbook (7th Edition):

Ewing, Gabriel. “Knowledge Transfer from Expert Demonstrations in Continuous State-Action Spaces.” 2018. Web. 30 May 2020.

Vancouver:

Ewing G. Knowledge Transfer from Expert Demonstrations in Continuous State-Action Spaces. [Internet] [Masters thesis]. Case Western Reserve University; 2018. [cited 2020 May 30]. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=case1512748071082221.

Council of Science Editors:

Ewing G. Knowledge Transfer from Expert Demonstrations in Continuous State-Action Spaces. [Masters Thesis]. Case Western Reserve University; 2018. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=case1512748071082221


Oregon State University

106. Ok, DoKyeong. A study of model-based average reward reinforcement learning.

Degree: PhD, Computer Science, 1996, Oregon State University

Reinforcement Learning (RL) is the study of learning agents that improve their performance from rewards and punishments. Most reinforcement learning methods optimize the discounted total… (more)

Subjects/Keywords: Reinforcement learning (Machine learning)

APA (6th Edition):

Ok, D. (1996). A study of model-based average reward reinforcement learning. (Doctoral Dissertation). Oregon State University. Retrieved from http://hdl.handle.net/1957/34698

Chicago Manual of Style (16th Edition):

Ok, DoKyeong. “A study of model-based average reward reinforcement learning.” 1996. Doctoral Dissertation, Oregon State University. Accessed May 30, 2020. http://hdl.handle.net/1957/34698.

MLA Handbook (7th Edition):

Ok, DoKyeong. “A study of model-based average reward reinforcement learning.” 1996. Web. 30 May 2020.

Vancouver:

Ok D. A study of model-based average reward reinforcement learning. [Internet] [Doctoral dissertation]. Oregon State University; 1996. [cited 2020 May 30]. Available from: http://hdl.handle.net/1957/34698.

Council of Science Editors:

Ok D. A study of model-based average reward reinforcement learning. [Doctoral Dissertation]. Oregon State University; 1996. Available from: http://hdl.handle.net/1957/34698


Oregon State University

107. Natarajan, Sriraam. Multi-criteria average reward reinforcement learning.

Degree: MS, Computer Science, 2004, Oregon State University

Reinforcement learning (RL) is the study of systems that learn from interaction with their environment. The current framework of Reinforcement Learning is based on receiving… (more)

Subjects/Keywords: Reinforcement learning (Machine learning)

APA (6th Edition):

Natarajan, S. (2004). Multi-criteria average reward reinforcement learning. (Masters Thesis). Oregon State University. Retrieved from http://hdl.handle.net/1957/22859

Chicago Manual of Style (16th Edition):

Natarajan, Sriraam. “Multi-criteria average reward reinforcement learning.” 2004. Masters Thesis, Oregon State University. Accessed May 30, 2020. http://hdl.handle.net/1957/22859.

MLA Handbook (7th Edition):

Natarajan, Sriraam. “Multi-criteria average reward reinforcement learning.” 2004. Web. 30 May 2020.

Vancouver:

Natarajan S. Multi-criteria average reward reinforcement learning. [Internet] [Masters thesis]. Oregon State University; 2004. [cited 2020 May 30]. Available from: http://hdl.handle.net/1957/22859.

Council of Science Editors:

Natarajan S. Multi-criteria average reward reinforcement learning. [Masters Thesis]. Oregon State University; 2004. Available from: http://hdl.handle.net/1957/22859

108. Hyde, Gregory. Modeling user behavior to construct counter strategies.

Degree: 2019, University of Wisconsin – Whitewater

We are working on the development of an adaptive learning framework addressing covariate shift, experienced in… (more)

Subjects/Keywords: Reinforcement learning; Machine learning; Neural networks (Computer science); Computer users

APA (6th Edition):

Hyde, G. (2019). Modeling user behavior to construct counter strategies. (Thesis). University of Wisconsin – Whitewater. Retrieved from http://digital.library.wisc.edu/1793/79307

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Hyde, Gregory. “Modeling user behavior to construct counter strategies.” 2019. Thesis, University of Wisconsin – Whitewater. Accessed May 30, 2020. http://digital.library.wisc.edu/1793/79307.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Hyde, Gregory. “Modeling user behavior to construct counter strategies.” 2019. Web. 30 May 2020.

Vancouver:

Hyde G. Modeling user behavior to construct counter strategies. [Internet] [Thesis]. University of Wisconsin – Whitewater; 2019. [cited 2020 May 30]. Available from: http://digital.library.wisc.edu/1793/79307.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Hyde G. Modeling user behavior to construct counter strategies. [Thesis]. University of Wisconsin – Whitewater; 2019. Available from: http://digital.library.wisc.edu/1793/79307

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Otago

109. Lee-Hand, Jeremy Sein Ong. A Neural Network Model of Causative Actions.

Degree: 2014, University of Otago

 Many of the actions we perform are defined by the effects they bring about, rather than as stereotypical sequences of motor movements. For instance, to… (more)

Subjects/Keywords: Motor Learning; Neural Networks; Language; Reinforcement Learning Rewards

APA (6th Edition):

Lee-Hand, J. S. O. (2014). A Neural Network Model of Causative Actions. (Masters Thesis). University of Otago. Retrieved from http://hdl.handle.net/10523/4549

Chicago Manual of Style (16th Edition):

Lee-Hand, Jeremy Sein Ong. “A Neural Network Model of Causative Actions.” 2014. Masters Thesis, University of Otago. Accessed May 30, 2020. http://hdl.handle.net/10523/4549.

MLA Handbook (7th Edition):

Lee-Hand, Jeremy Sein Ong. “A Neural Network Model of Causative Actions.” 2014. Web. 30 May 2020.

Vancouver:

Lee-Hand JSO. A Neural Network Model of Causative Actions. [Internet] [Masters thesis]. University of Otago; 2014. [cited 2020 May 30]. Available from: http://hdl.handle.net/10523/4549.

Council of Science Editors:

Lee-Hand JSO. A Neural Network Model of Causative Actions. [Masters Thesis]. University of Otago; 2014. Available from: http://hdl.handle.net/10523/4549


University of Oklahoma

110. Palmer, Thomas J. Learning Action-State Representation Forests for Implicitly Relational Worlds.

Degree: PhD, 2015, University of Oklahoma

 Real world tasks, in homes or other unstructured environments, require interacting with objects (including people) and understanding the variety of physical relationships between them. For… (more)

Subjects/Keywords: Relational Reinforcement Learning; Multiple Instance Learning; Symbol Grounding

APA (6th Edition):

Palmer, T. J. (2015). Learning Action-State Representation Forests for Implicitly Relational Worlds. (Doctoral Dissertation). University of Oklahoma. Retrieved from http://hdl.handle.net/11244/14592

Chicago Manual of Style (16th Edition):

Palmer, Thomas J. “Learning Action-State Representation Forests for Implicitly Relational Worlds.” 2015. Doctoral Dissertation, University of Oklahoma. Accessed May 30, 2020. http://hdl.handle.net/11244/14592.

MLA Handbook (7th Edition):

Palmer, Thomas J. “Learning Action-State Representation Forests for Implicitly Relational Worlds.” 2015. Web. 30 May 2020.

Vancouver:

Palmer TJ. Learning Action-State Representation Forests for Implicitly Relational Worlds. [Internet] [Doctoral dissertation]. University of Oklahoma; 2015. [cited 2020 May 30]. Available from: http://hdl.handle.net/11244/14592.

Council of Science Editors:

Palmer TJ. Learning Action-State Representation Forests for Implicitly Relational Worlds. [Doctoral Dissertation]. University of Oklahoma; 2015. Available from: http://hdl.handle.net/11244/14592


Delft University of Technology

111. Van der Laan, T.A. Consolidated Deep Actor Critic Networks:.

Degree: 2015, Delft University of Technology

 The works [Volodymyr et al. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.] and [Volodymyr et al. Human-level control through deep reinforcement learning.… (more)

Subjects/Keywords: reinforcement learning; deep learning; actor critic model; experience replay

APA (6th Edition):

Van der Laan, T. A. (2015). Consolidated Deep Actor Critic Networks:. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:682a56ed-8e21-4b70-af11-0e8e9e298fa2

Chicago Manual of Style (16th Edition):

Van der Laan, T A. “Consolidated Deep Actor Critic Networks:.” 2015. Masters Thesis, Delft University of Technology. Accessed May 30, 2020. http://resolver.tudelft.nl/uuid:682a56ed-8e21-4b70-af11-0e8e9e298fa2.

MLA Handbook (7th Edition):

Van der Laan, T A. “Consolidated Deep Actor Critic Networks:.” 2015. Web. 30 May 2020.

Vancouver:

Van der Laan TA. Consolidated Deep Actor Critic Networks:. [Internet] [Masters thesis]. Delft University of Technology; 2015. [cited 2020 May 30]. Available from: http://resolver.tudelft.nl/uuid:682a56ed-8e21-4b70-af11-0e8e9e298fa2.

Council of Science Editors:

Van der Laan TA. Consolidated Deep Actor Critic Networks:. [Masters Thesis]. Delft University of Technology; 2015. Available from: http://resolver.tudelft.nl/uuid:682a56ed-8e21-4b70-af11-0e8e9e298fa2


University of Waterloo

112. Liang, Jia. Machine Learning for SAT Solvers.

Degree: 2018, University of Waterloo

 Boolean SAT solvers are indispensable tools in a variety of domains in computer science and engineering where efficient search is required. Not only does this… (more)

Subjects/Keywords: Branching heuristic; Restart; Reinforcement learning; Sat solver; Machine learning; Optimization

APA (6th Edition):

Liang, J. (2018). Machine Learning for SAT Solvers. (Thesis). University of Waterloo. Retrieved from http://hdl.handle.net/10012/14207

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Liang, Jia. “Machine Learning for SAT Solvers.” 2018. Thesis, University of Waterloo. Accessed May 30, 2020. http://hdl.handle.net/10012/14207.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Liang, Jia. “Machine Learning for SAT Solvers.” 2018. Web. 30 May 2020.

Vancouver:

Liang J. Machine Learning for SAT Solvers. [Internet] [Thesis]. University of Waterloo; 2018. [cited 2020 May 30]. Available from: http://hdl.handle.net/10012/14207.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Liang J. Machine Learning for SAT Solvers. [Thesis]. University of Waterloo; 2018. Available from: http://hdl.handle.net/10012/14207

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of California – San Francisco

113. Charlesworth, Jonathan David. Principles of trial-and-error learning in adult birdsong.

Degree: Neuroscience, 2012, University of California – San Francisco

 Trial-and-error skill learning involves generating variation in behavioral performance ('exploratory variation') and modifying the motor program to produce the behavioral variants associated with better reinforcement.… (more)

Subjects/Keywords: Neurosciences; basal ganglia; behavior; learning; motor; reinforcement learning; systems neuroscience

APA (6th Edition):

Charlesworth, J. D. (2012). Principles of trial-and-error learning in adult birdsong. (Thesis). University of California – San Francisco. Retrieved from http://www.escholarship.org/uc/item/1nr6r58b

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Charlesworth, Jonathan David. “Principles of trial-and-error learning in adult birdsong.” 2012. Thesis, University of California – San Francisco. Accessed May 30, 2020. http://www.escholarship.org/uc/item/1nr6r58b.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Charlesworth, Jonathan David. “Principles of trial-and-error learning in adult birdsong.” 2012. Web. 30 May 2020.

Vancouver:

Charlesworth JD. Principles of trial-and-error learning in adult birdsong. [Internet] [Thesis]. University of California – San Francisco; 2012. [cited 2020 May 30]. Available from: http://www.escholarship.org/uc/item/1nr6r58b.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Charlesworth JD. Principles of trial-and-error learning in adult birdsong. [Thesis]. University of California – San Francisco; 2012. Available from: http://www.escholarship.org/uc/item/1nr6r58b

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of North Texas

114. Silguero, Russell V. Do contingency-conflicting elements drop out of equivalence classes? Re-testing Sidman's (2000) theory.

Degree: 2015, University of North Texas

 Sidman's (2000) theory of stimulus equivalence states that all positive elements in a reinforcement contingency enter an equivalence class. The theory also states that if… (more)

Subjects/Keywords: stimulus equivalence; relational responding; partition; Reinforcement learning; Discrimination learning; Categorization (Psychology)

APA (6th Edition):

Silguero, R. V. (2015). Do contingency-conflicting elements drop out of equivalence classes? Re-testing Sidman's (2000) theory. (Thesis). University of North Texas. Retrieved from https://digital.library.unt.edu/ark:/67531/metadc848078/

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Silguero, Russell V. “Do contingency-conflicting elements drop out of equivalence classes? Re-testing Sidman's (2000) theory.” 2015. Thesis, University of North Texas. Accessed May 30, 2020. https://digital.library.unt.edu/ark:/67531/metadc848078/.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Silguero, Russell V. “Do contingency-conflicting elements drop out of equivalence classes? Re-testing Sidman's (2000) theory.” 2015. Web. 30 May 2020.

Vancouver:

Silguero RV. Do contingency-conflicting elements drop out of equivalence classes? Re-testing Sidman's (2000) theory. [Internet] [Thesis]. University of North Texas; 2015. [cited 2020 May 30]. Available from: https://digital.library.unt.edu/ark:/67531/metadc848078/.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Silguero RV. Do contingency-conflicting elements drop out of equivalence classes? Re-testing Sidman's (2000) theory. [Thesis]. University of North Texas; 2015. Available from: https://digital.library.unt.edu/ark:/67531/metadc848078/

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of New South Wales

115. Hengst, Bernhard. Discovering hierarchy in reinforcement learning.

Degree: Computer Science & Engineering, 2003, University of New South Wales

 This thesis addresses the open problem of automatically discovering hierarchical structure in reinforcement learning. Current algorithms for reinforcement learning fail to scale as problems become… (more)

Subjects/Keywords: Reinforcement learning (Machine learning)

APA (6th Edition):

Hengst, B. (2003). Discovering hierarchy in reinforcement learning. (Doctoral Dissertation). University of New South Wales. Retrieved from http://handle.unsw.edu.au/1959.4/20497 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:620/SOURCE01?view=true

Chicago Manual of Style (16th Edition):

Hengst, Bernhard. “Discovering hierarchy in reinforcement learning.” 2003. Doctoral Dissertation, University of New South Wales. Accessed May 30, 2020. http://handle.unsw.edu.au/1959.4/20497 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:620/SOURCE01?view=true.

MLA Handbook (7th Edition):

Hengst, Bernhard. “Discovering hierarchy in reinforcement learning.” 2003. Web. 30 May 2020.

Vancouver:

Hengst B. Discovering hierarchy in reinforcement learning. [Internet] [Doctoral dissertation]. University of New South Wales; 2003. [cited 2020 May 30]. Available from: http://handle.unsw.edu.au/1959.4/20497 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:620/SOURCE01?view=true.

Council of Science Editors:

Hengst B. Discovering hierarchy in reinforcement learning. [Doctoral Dissertation]. University of New South Wales; 2003. Available from: http://handle.unsw.edu.au/1959.4/20497 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:620/SOURCE01?view=true


Princeton University

116. Mcdougle, Samuel David. Action Selection and Action Execution in Human Learning.

Degree: PhD, 2018, Princeton University

 Intelligent behavior requires knowing both what to do in a given situation, and how to do it. Knowledge of the requisite whats and hows in… (more)

Subjects/Keywords: cognitive; computational; motor control; motor learning; neuroscience; reinforcement learning

APA (6th Edition):

Mcdougle, S. D. (2018). Action Selection and Action Execution in Human Learning. (Doctoral Dissertation). Princeton University. Retrieved from http://arks.princeton.edu/ark:/88435/dsp01gt54kq72q

Chicago Manual of Style (16th Edition):

Mcdougle, Samuel David. “Action Selection and Action Execution in Human Learning.” 2018. Doctoral Dissertation, Princeton University. Accessed May 30, 2020. http://arks.princeton.edu/ark:/88435/dsp01gt54kq72q.

MLA Handbook (7th Edition):

Mcdougle, Samuel David. “Action Selection and Action Execution in Human Learning.” 2018. Web. 30 May 2020.

Vancouver:

Mcdougle SD. Action Selection and Action Execution in Human Learning. [Internet] [Doctoral dissertation]. Princeton University; 2018. [cited 2020 May 30]. Available from: http://arks.princeton.edu/ark:/88435/dsp01gt54kq72q.

Council of Science Editors:

Mcdougle SD. Action Selection and Action Execution in Human Learning. [Doctoral Dissertation]. Princeton University; 2018. Available from: http://arks.princeton.edu/ark:/88435/dsp01gt54kq72q


Princeton University

117. Stachenfeld, Kimberly Lauren. Learning Neural Representations That Support Efficient Reinforcement Learning.

Degree: PhD, 2018, Princeton University

 RL has been transformative for neuroscience by providing a normative anchor for interpreting neural and behavioral data. End-to-end RL methods have scored impressive victories with… (more)

Subjects/Keywords: Grid Cell; Hippocampus; Place Cell; Reinforcement Learning; Representation Learning

APA (6th Edition):

Stachenfeld, K. L. (2018). Learning Neural Representations That Support Efficient Reinforcement Learning. (Doctoral Dissertation). Princeton University. Retrieved from http://arks.princeton.edu/ark:/88435/dsp01qb98mj16v

Chicago Manual of Style (16th Edition):

Stachenfeld, Kimberly Lauren. “Learning Neural Representations That Support Efficient Reinforcement Learning.” 2018. Doctoral Dissertation, Princeton University. Accessed May 30, 2020. http://arks.princeton.edu/ark:/88435/dsp01qb98mj16v.

MLA Handbook (7th Edition):

Stachenfeld, Kimberly Lauren. “Learning Neural Representations That Support Efficient Reinforcement Learning.” 2018. Web. 30 May 2020.

Vancouver:

Stachenfeld KL. Learning Neural Representations That Support Efficient Reinforcement Learning. [Internet] [Doctoral dissertation]. Princeton University; 2018. [cited 2020 May 30]. Available from: http://arks.princeton.edu/ark:/88435/dsp01qb98mj16v.

Council of Science Editors:

Stachenfeld KL. Learning Neural Representations That Support Efficient Reinforcement Learning. [Doctoral Dissertation]. Princeton University; 2018. Available from: http://arks.princeton.edu/ark:/88435/dsp01qb98mj16v


University of Adelaide

118. Gibbons, Daniel Steve. Deep Learning for Bipartite Assignment Problems.

Degree: 2019, University of Adelaide

 A recurring problem in autonomy is the optimal assignment of agents to tasks. Often, such assignments cannot be computed efficiently. Therefore, the existing literature tends… (more)

Subjects/Keywords: Deep learning; combinatorial optimisation; neural networks; reinforcement learning

APA (6th Edition):

Gibbons, D. S. (2019). Deep Learning for Bipartite Assignment Problems. (Thesis). University of Adelaide. Retrieved from http://hdl.handle.net/2440/121335

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Gibbons, Daniel Steve. “Deep Learning for Bipartite Assignment Problems.” 2019. Thesis, University of Adelaide. Accessed May 30, 2020. http://hdl.handle.net/2440/121335.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Gibbons, Daniel Steve. “Deep Learning for Bipartite Assignment Problems.” 2019. Web. 30 May 2020.

Vancouver:

Gibbons DS. Deep Learning for Bipartite Assignment Problems. [Internet] [Thesis]. University of Adelaide; 2019. [cited 2020 May 30]. Available from: http://hdl.handle.net/2440/121335.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Gibbons DS. Deep Learning for Bipartite Assignment Problems. [Thesis]. University of Adelaide; 2019. Available from: http://hdl.handle.net/2440/121335

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Texas – Austin

119. -6763-2625. Multilayered skill learning and movement coordination for autonomous robotic agents.

Degree: PhD, Computer Science, 2017, University of Texas – Austin

 With advances in technology expanding the capabilities of robots, while at the same time making robots cheaper to manufacture, robots are rapidly becoming more prevalent… (more)

Subjects/Keywords: Overlapping layered learning; Role assignment; Reinforcement learning; Robotics; Robot soccer

APA (6th Edition):

-6763-2625. (2017). Multilayered skill learning and movement coordination for autonomous robotic agents. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/62889

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Chicago Manual of Style (16th Edition):

-6763-2625. “Multilayered skill learning and movement coordination for autonomous robotic agents.” 2017. Doctoral Dissertation, University of Texas – Austin. Accessed May 30, 2020. http://hdl.handle.net/2152/62889.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

MLA Handbook (7th Edition):

-6763-2625. “Multilayered skill learning and movement coordination for autonomous robotic agents.” 2017. Web. 30 May 2020.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Vancouver:

-6763-2625. Multilayered skill learning and movement coordination for autonomous robotic agents. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2017. [cited 2020 May 30]. Available from: http://hdl.handle.net/2152/62889.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Council of Science Editors:

-6763-2625. Multilayered skill learning and movement coordination for autonomous robotic agents. [Doctoral Dissertation]. University of Texas – Austin; 2017. Available from: http://hdl.handle.net/2152/62889

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete


University of Ghana

120. Turkson, J.A. The Effect of Omnibank’s SME Clinic on Business Management Capacity and Knowledge: The Case of the Accra Metropolis.

Degree: 2019, University of Ghana

 The purpose of this research was to determine the extent to which OmniBank’s SME Clinic has had a positive effect on the business management capacity… (more)

Subjects/Keywords: Omnibank; Reinforcement; Experiential Learning; Social Learning; Accra Metropolis

APA (6th Edition):

Turkson, J. A. (2019). The Effect of Omnibank’s SME Clinic on Business Management Capacity and Knowledge: The Case of the Accra Metropolis. (Masters Thesis). University of Ghana. Retrieved from http://ugspace.ug.edu.gh/handle/123456789/33497

Chicago Manual of Style (16th Edition):

Turkson, J A. “The Effect of Omnibank’s SME Clinic on Business Management Capacity and Knowledge: The Case of the Accra Metropolis.” 2019. Masters Thesis, University of Ghana. Accessed May 30, 2020. http://ugspace.ug.edu.gh/handle/123456789/33497.

MLA Handbook (7th Edition):

Turkson, J A. “The Effect of Omnibank’s SME Clinic on Business Management Capacity and Knowledge: The Case of the Accra Metropolis.” 2019. Web. 30 May 2020.

Vancouver:

Turkson JA. The Effect of Omnibank’s SME Clinic on Business Management Capacity and Knowledge: The Case of the Accra Metropolis. [Internet] [Masters thesis]. University of Ghana; 2019. [cited 2020 May 30]. Available from: http://ugspace.ug.edu.gh/handle/123456789/33497.

Council of Science Editors:

Turkson JA. The Effect of Omnibank’s SME Clinic on Business Management Capacity and Knowledge: The Case of the Accra Metropolis. [Masters Thesis]. University of Ghana; 2019. Available from: http://ugspace.ug.edu.gh/handle/123456789/33497
