
You searched for subject:(Reinforcement learning algorithms). Showing records 1 – 30 of 37 total matches.



Rutgers University

1. Marivate, Vukosi N. Improved empirical methods in reinforcement-learning evaluation.

Degree: PhD, Computer Science, 2015, Rutgers University

The central question addressed in this research is "can we define evaluation methodologies that encourage reinforcement-learning (RL) algorithms to work effectively with real-life data?" First,… (more)

Subjects/Keywords: Reinforcement learning; Machine learning; Algorithms

APA (6th Edition):

Marivate, V. N. (2015). Improved empirical methods in reinforcement-learning evaluation. (Doctoral Dissertation). Rutgers University. Retrieved from https://rucore.libraries.rutgers.edu/rutgers-lib/46389/


Delft University of Technology

2. Cornelissen, Arjan. Quantum gradient estimation and its application to quantum reinforcement learning.

Degree: 2018, Delft University of Technology

 In 2005, Jordan showed how to estimate the gradient of a real-valued function with a high-dimensional domain on a quantum computer. Subsequently, in 2017, it… (more)

Subjects/Keywords: quantum computing; quantum algorithms; gradient; reinforcement learning

APA (6th Edition):

Cornelissen, A. (2018). Quantum gradient estimation and its application to quantum reinforcement learning. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:26fe945f-f02e-4ef7-bdcb-0a2369eb867e


University of Texas – Austin

3. -8073-3276. Parameterized modular inverse reinforcement learning.

Degree: MS in Computer Sciences, Computer Science, 2015, University of Texas – Austin

Reinforcement learning and inverse reinforcement learning can be used to model and understand human behaviors. However, due to the curse of dimensionality, their use as… (more)

Subjects/Keywords: Reinforcement learning; Artificial intelligence; Inverse reinforcement learning; Modular inverse reinforcement learning; Reinforcement learning algorithms; Human navigation behaviors

APA (6th Edition):

-8073-3276. (2015). Parameterized modular inverse reinforcement learning. (Masters Thesis). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/46987

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

4. Kaisers, Michael. Learning against learning: evolutionary dynamics of reinforcement learning algorithms in strategic interactions.

Degree: 2012, Maastricht University

 Imagine computer programs (agents) that learn to coordinate or to compete. This study investigates how their learning processes influence each other. Such adaptive agents already… (more)

Subjects/Keywords: reinforcement learning; algorithms; strategic interactions

APA (6th Edition):

Kaisers, M. (2012). Learning against learning: evolutionary dynamics of reinforcement learning algorithms in strategic interactions. (Doctoral Dissertation). Maastricht University. Retrieved from https://cris.maastrichtuniversity.nl/en/publications/e1aefcd2-32fc-4f4f-bc00-78e63543db72


University of Vermont

5. Felag, Jack. Co-optimization of a Robot's Body and Brain via Evolution and Reinforcement Learning.

Degree: MS, Complex Systems, 2020, University of Vermont

Agents are often trained to perform a task via optimization algorithms. One class of algorithms used is evolution, which is "survival of the fittest"… (more)

Subjects/Keywords: Evolutionary Algorithms; Evolutionary Computation; Optimization; Reinforcement Learning; RL; Robotics; Computer Sciences

APA (6th Edition):

Felag, J. (2020). Co-optimization of a Robot's Body and Brain via Evolution and Reinforcement Learning. (Thesis). University of Vermont. Retrieved from https://scholarworks.uvm.edu/graddis/1224

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Georgia Tech

6. Saha, Nirvik. The space allocation problem.

Degree: PhD, Architecture, 2020, Georgia Tech

 In the domain of architecture and planning, the space allocation problem (SAP) is a general class of computable problems which is employed by numerous design… (more)

Subjects/Keywords: Architecture; Urban design; Space allocation; Reinforcement learning; Computer algorithms; Computer graphics

APA (6th Edition):

Saha, N. (2020). The space allocation problem. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/63641


Dalhousie University

7. Kelly, Stephen. ON DEVELOPMENTAL VARIATION IN HIERARCHICAL SYMBIOTIC POLICY SEARCH.

Degree: Master of Computer Science, Faculty of Computer Science, 2012, Dalhousie University

 A hierarchical symbiotic framework for policy search with genetic programming (GP) is evaluated in two control-style temporal sequence learning domains. The symbiotic formulation assumes each… (more)

Subjects/Keywords: Genetic programming; Symbiosis; Coevolution; Reinforcement Learning; Temporal Sequence Learning; Policy Search; Genetic Algorithms

APA (6th Edition):

Kelly, S. (2012). ON DEVELOPMENTAL VARIATION IN HIERARCHICAL SYMBIOTIC POLICY SEARCH. (Masters Thesis). Dalhousie University. Retrieved from http://hdl.handle.net/10222/15376

8. Maresso, Brian. Emergent behavior in neuroevolved agents.

Degree: 2018, University of Wisconsin – Whitewater

Neural networks have been widely used for their ability to create generalized rulesets for a given set… (more)

Subjects/Keywords: Neural networks (Computer science); Genetic algorithms; Artificial intelligence; Machine learning; Video games; Reinforcement learning

APA (6th Edition):

Maresso, B. (2018). Emergent behavior in neuroevolved agents. (Thesis). University of Wisconsin – Whitewater. Retrieved from http://digital.library.wisc.edu/1793/78967

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Delft University of Technology

9. Meulman, Erik. Towards Self-Learning Model-Based Evolutionary Algorithms.

Degree: 2019, Delft University of Technology

Model-based evolutionary algorithms (MBEAs) are praised for their broad applicability to black-box optimization problems. In practical applications however, they are mostly used to repeatedly optimize… (more)

Subjects/Keywords: Estimation of distribution algorithms; Machine Learning; Reinforcement Learning (RL); Black-box optimization

APA (6th Edition):

Meulman, E. (2019). Towards Self-Learning Model-Based Evolutionary Algorithms. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:d7e9fdb1-7ced-43bb-b3ab-c4888fcc2482


Florida Atlantic University

10. Vashishtha, Sumit. SUSTAINING CHAOS USING DEEP REINFORCEMENT LEARNING.

Degree: MS, 2020, Florida Atlantic University

Numerous examples arise in fields ranging from mechanics to biology where disappearance of Chaos can be detrimental. Preventing such transient nature of chaos has been… (more)

Subjects/Keywords: Machine learning – Technique; Reinforcement learning; Algorithms; Chaotic behavior in systems; Nonlinear systems

APA (6th Edition):

Vashishtha, S. (2020). SUSTAINING CHAOS USING DEEP REINFORCEMENT LEARNING. (Masters Thesis). Florida Atlantic University. Retrieved from http://fau.digital.flvc.org/islandora/object/fau:42657

11. Jackson, Ethan C. Algebraic Neural Architecture Representation, Evolutionary Neural Architecture Search, and Novelty Search in Deep Reinforcement Learning.

Degree: 2019, University of Western Ontario

 Evolutionary algorithms have recently re-emerged as powerful tools for machine learning and artificial intelligence, especially when combined with advances in deep learning developed over the… (more)

Subjects/Keywords: Artificial neural networks; reinforcement learning; algebraic methods; genetic algorithms; novelty search; neural architecture search; Artificial Intelligence and Robotics; Theory and Algorithms

APA (6th Edition):

Jackson, E. C. (2019). Algebraic Neural Architecture Representation, Evolutionary Neural Architecture Search, and Novelty Search in Deep Reinforcement Learning. (Thesis). University of Western Ontario. Retrieved from https://ir.lib.uwo.ca/etd/6510

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Colorado School of Mines

12. Weinstein, Alejandro J. Inference and learning in high-dimensional spaces.

Degree: PhD, Electrical Engineering and Computer Sciences, 2007, Colorado School of Mines

 High-dimensional problems have received a considerable amount of attention in the last decade by numerous scientific communities. This thesis considers three research thrusts that fall… (more)

Subjects/Keywords: sparse models; signal restoration; signal estimation; reinforcement learning; Reinforcement learning; Signal processing; Algorithms

APA (6th Edition):

Weinstein, A. J. (2007). Inference and learning in high-dimensional spaces. (Doctoral Dissertation). Colorado School of Mines. Retrieved from http://hdl.handle.net/11124/78298


University of Waterloo

13. Alhussein, Omar. On the Orchestration and Provisioning of NFV-enabled Multicast Services.

Degree: 2020, University of Waterloo

 The paradigm of network function virtualization (NFV) with the support of software-defined networking has emerged as a prominent approach to foster innovation in the networking… (more)

Subjects/Keywords: NFV; 5G networks; multicast services; NF chain embedding; online algorithms; primal-dual scheme; competitive analysis; reinforcement learning

APA (6th Edition):

Alhussein, O. (2020). On the Orchestration and Provisioning of NFV-enabled Multicast Services. (Thesis). University of Waterloo. Retrieved from http://hdl.handle.net/10012/15850

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

14. Fernandez, Yasser Gonzalez. Efficient Calculation of Optimal Configuration Processes.

Degree: MA, Information Systems and Technology, 2015, York University

 Customers are getting increasingly involved in the design of the products and services they choose by specifying their desired characteristics. As a result, configuration systems… (more)

Subjects/Keywords: Artificial intelligence; Computer science; Information technology; Knowledge-based configuration; Markov decision processes; Reinforcement learning; Genetic algorithms

APA (6th Edition):

Fernandez, Y. G. (2015). Efficient Calculation of Optimal Configuration Processes. (Masters Thesis). York University. Retrieved from http://hdl.handle.net/10315/30739


University of Manchester

15. Allmendinger, Richard. Tuning evolutionary search for closed-loop optimization.

Degree: PhD, 2012, University of Manchester

 Closed-loop optimization deals with problems in which candidate solutions are evaluated by conducting experiments, e.g. physical or biochemical experiments. Although this form of optimization is… (more)

Subjects/Keywords: 519.6; Optimization; Closed-loop optimization; Evolutionary computation; Constrained optimization; Dynamic optimization; Reinforcement learning; Adaptation; Bandit algorithms

APA (6th Edition):

Allmendinger, R. (2012). Tuning evolutionary search for closed-loop optimization. (Doctoral Dissertation). University of Manchester. Retrieved from https://www.research.manchester.ac.uk/portal/en/theses/tuning-evolutionary-search-for-closedloop-optimization(d54e63e2-7927-42aa-b974-c41e717298cb).html ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.553386


Virginia Tech

16. Thirunavukkarasu, Muthukumar. Reinforcing Reachable Routes.

Degree: MS, Computer Science, 2004, Virginia Tech

 Reachability routing is a newly emerging paradigm in networking, where the goal is to determine all paths between a sender and a receiver. It is… (more)

Subjects/Keywords: Multipath routing; Probabilistic algorithms; Reachability; Reinforcement learning

Record DetailsSimilar RecordsGoogle PlusoneFacebookTwitterCiteULikeMendeleyreddit

APA · Chicago · MLA · Vancouver · CSE | Export to Zotero / EndNote / Reference Manager

APA (6th Edition):

Thirunavukkarasu, M. (2004). Reinforcing Reachable Routes. (Masters Thesis). Virginia Tech. Retrieved from http://hdl.handle.net/10919/9904

Chicago Manual of Style (16th Edition):

Thirunavukkarasu, Muthukumar. “Reinforcing Reachable Routes.” 2004. Masters Thesis, Virginia Tech. Accessed October 20, 2020. http://hdl.handle.net/10919/9904.

MLA Handbook (7th Edition):

Thirunavukkarasu, Muthukumar. “Reinforcing Reachable Routes.” 2004. Web. 20 Oct 2020.

Vancouver:

Thirunavukkarasu M. Reinforcing Reachable Routes. [Internet] [Masters thesis]. Virginia Tech; 2004. [cited 2020 Oct 20]. Available from: http://hdl.handle.net/10919/9904.

Council of Science Editors:

Thirunavukkarasu M. Reinforcing Reachable Routes. [Masters Thesis]. Virginia Tech; 2004. Available from: http://hdl.handle.net/10919/9904
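The keywords above pair reinforcement learning with probabilistic multipath routing. As a rough illustration of that idea (not code from the thesis), a node can keep a probability distribution over candidate next hops and nudge it toward hops that lead to the receiver, in the style of a linear reward-inaction learning automaton; the hop names and success model below are invented for the sketch:

```python
import random

random.seed(0)

def reinforce(probs, chosen, success, lr=0.1):
    """Linear reward-inaction update: on success, shift probability
    mass toward the chosen next hop; on failure, leave probs alone."""
    if success:
        for hop in probs:
            if hop == chosen:
                probs[hop] += lr * (1.0 - probs[hop])
            else:
                probs[hop] -= lr * probs[hop]

# Hypothetical node with three candidate next hops; pretend only "B"
# currently reaches the receiver.
probs = {"A": 1 / 3, "B": 1 / 3, "C": 1 / 3}
for _ in range(100):
    hop = random.choices(list(probs), weights=list(probs.values()))[0]
    reinforce(probs, hop, success=(hop == "B"))
```

The reward-inaction update preserves a valid distribution (the mass added to the chosen hop equals the mass removed from the others), so the node can keep sampling from it indefinitely.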


Northeastern University

17. Andra, Mitha. Reinforcement learning approach to product allocation.

Degree: MS, Department of Mechanical and Industrial Engineering, 2010, Northeastern University

 In this thesis I investigated a reinforcement learning (RL) approach to address effective space utilization for warehouse management. RL in the domain of machine intelligence,… (more)

Subjects/Keywords: Artificial Intelligence; Learning Algorithms; Optimization; Product Allocation and Storage; Q Learning; Reinforcement Learning; Industrial Engineering; Operations Research, Systems Engineering and Industrial Engineering

APA (6th Edition):

Andra, M. (2010). Reinforcement learning approach to product allocation. (Masters Thesis). Northeastern University. Retrieved from http://hdl.handle.net/2047/d20003370

Chicago Manual of Style (16th Edition):

Andra, Mitha. “Reinforcement learning approach to product allocation.” 2010. Masters Thesis, Northeastern University. Accessed October 20, 2020. http://hdl.handle.net/2047/d20003370.

MLA Handbook (7th Edition):

Andra, Mitha. “Reinforcement learning approach to product allocation.” 2010. Web. 20 Oct 2020.

Vancouver:

Andra M. Reinforcement learning approach to product allocation. [Internet] [Masters thesis]. Northeastern University; 2010. [cited 2020 Oct 20]. Available from: http://hdl.handle.net/2047/d20003370.

Council of Science Editors:

Andra M. Reinforcement learning approach to product allocation. [Masters Thesis]. Northeastern University; 2010. Available from: http://hdl.handle.net/2047/d20003370
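The record's keywords name Q-learning for product allocation. For readers new to the term, a minimal tabular Q-learning loop on a toy shelf-placement problem looks like the sketch below; the four-zone "warehouse", the reward, and the hyperparameters are invented for illustration and are not taken from the thesis:

```python
import random
from collections import defaultdict

random.seed(1)
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2

def step(state, action):
    """Toy warehouse: states are shelf zones 0..3; an action shifts a
    pallet one zone left (-1) or right (+1); zone 3 (nearest the dock
    in this toy) pays reward 1."""
    nxt = max(0, min(3, state + action))
    return nxt, (1.0 if nxt == 3 else 0.0)

Q = defaultdict(float)
for _ in range(500):                       # short episodes from random zones
    s = random.randrange(4)
    for _ in range(10):
        if random.random() < EPS:          # epsilon-greedy exploration
            a = random.choice([-1, 1])
        else:
            a = max([-1, 1], key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        # Q-learning target: r + gamma * max_a' Q(s', a')
        best_next = max(Q[(s2, -1)], Q[(s2, 1)])
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2
```

After training, the learned values prefer moving toward the rewarding zone, which is the behaviour a warehouse allocation policy would be read off from.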

18. Φαλάς, Αναστάσιος. Προηγμένοι αλγόριθμοι μάθησης σε υβριδικά ευφυή συστήματα [Advanced learning algorithms in hybrid intelligent systems].

Degree: 2005, National Technical University of Athens (NTUA); Εθνικό Μετσόβιο Πολυτεχνείο (ΕΜΠ)

Subjects/Keywords: Υβριδικά συστήματα; Αλγόριθμοι μάθησης; Ενισχυτική μάθηση; Hybrid systems; Learning algorithms; Reinforcement learning

APA (6th Edition):

Φαλάς, Α. (2005). Προηγμένοι αλγόριθμοι μάθησης σε υβριδικά ευφυή συστήματα. (Thesis). National Technical University of Athens (NTUA); Εθνικό Μετσόβιο Πολυτεχνείο (ΕΜΠ). Retrieved from http://hdl.handle.net/10442/hedi/16267

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Φαλάς, Αναστάσιος. “Προηγμένοι αλγόριθμοι μάθησης σε υβριδικά ευφυή συστήματα.” 2005. Thesis, National Technical University of Athens (NTUA); Εθνικό Μετσόβιο Πολυτεχνείο (ΕΜΠ). Accessed October 20, 2020. http://hdl.handle.net/10442/hedi/16267.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Φαλάς, Αναστάσιος. “Προηγμένοι αλγόριθμοι μάθησης σε υβριδικά ευφυή συστήματα.” 2005. Web. 20 Oct 2020.

Vancouver:

Φαλάς Α. Προηγμένοι αλγόριθμοι μάθησης σε υβριδικά ευφυή συστήματα. [Internet] [Thesis]. National Technical University of Athens (NTUA); Εθνικό Μετσόβιο Πολυτεχνείο (ΕΜΠ); 2005. [cited 2020 Oct 20]. Available from: http://hdl.handle.net/10442/hedi/16267.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Φαλάς Α. Προηγμένοι αλγόριθμοι μάθησης σε υβριδικά ευφυή συστήματα. [Thesis]. National Technical University of Athens (NTUA); Εθνικό Μετσόβιο Πολυτεχνείο (ΕΜΠ); 2005. Available from: http://hdl.handle.net/10442/hedi/16267

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Indian Institute of Science

19. Joseph, Ajin George. Optimization Algorithms for Deterministic, Stochastic and Reinforcement Learning Settings.

Degree: PhD, Faculty of Engineering, 2018, Indian Institute of Science

 Optimization is a very important field with diverse applications in physical, social and biological sciences and in various areas of engineering. It appears widely in… (more)

Subjects/Keywords: Optimization Algorithms; Reinforcement Learning; Machine Learning; Markov Decision Process; Stochastic Approximation Algorithm; Stochastic Optimization; Cross Entropy Method; Stochastic Global Optimization; Cross Entropy Optimization Method; Quantile Estimation; Continuous Optimization; Computer Science

APA (6th Edition):

Joseph, A. G. (2018). Optimization Algorithms for Deterministic, Stochastic and Reinforcement Learning Settings. (Doctoral Dissertation). Indian Institute of Science. Retrieved from http://etd.iisc.ac.in/handle/2005/3645

Chicago Manual of Style (16th Edition):

Joseph, Ajin George. “Optimization Algorithms for Deterministic, Stochastic and Reinforcement Learning Settings.” 2018. Doctoral Dissertation, Indian Institute of Science. Accessed October 20, 2020. http://etd.iisc.ac.in/handle/2005/3645.

MLA Handbook (7th Edition):

Joseph, Ajin George. “Optimization Algorithms for Deterministic, Stochastic and Reinforcement Learning Settings.” 2018. Web. 20 Oct 2020.

Vancouver:

Joseph AG. Optimization Algorithms for Deterministic, Stochastic and Reinforcement Learning Settings. [Internet] [Doctoral dissertation]. Indian Institute of Science; 2018. [cited 2020 Oct 20]. Available from: http://etd.iisc.ac.in/handle/2005/3645.

Council of Science Editors:

Joseph AG. Optimization Algorithms for Deterministic, Stochastic and Reinforcement Learning Settings. [Doctoral Dissertation]. Indian Institute of Science; 2018. Available from: http://etd.iisc.ac.in/handle/2005/3645
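Among the keywords above is the cross-entropy (CE) method for stochastic global optimization. In its simplest Gaussian form, the method repeatedly samples candidates, keeps an elite fraction, and refits the sampling distribution to the elites. The one-dimensional objective below is a made-up example, not one studied in the thesis:

```python
import random
import statistics

random.seed(0)

def f(x):
    """Toy objective to maximize; its global peak is at x = 3."""
    return -(x - 3) ** 2

mu, sigma = 0.0, 5.0                      # initial sampling Gaussian
for _ in range(30):
    samples = [random.gauss(mu, sigma) for _ in range(200)]
    elite = sorted(samples, key=f, reverse=True)[:20]     # top 10%
    mu = statistics.mean(elite)           # refit the Gaussian to the elites
    sigma = statistics.stdev(elite) + 1e-3  # small floor avoids collapse
```

As the Gaussian tightens around the elite region, the sample mean homes in on the maximizer; the variance floor is a common practical guard against premature convergence.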


University of Hong Kong

20. 李少強. Reinforcement learning for intelligent assembly automation.

Degree: 2002, University of Hong Kong

Subjects/Keywords: Reinforcement learning (Machine learning); Vector processing (Computer science); Algorithms.; Assembling machines - Automatic control.

APA (6th Edition):

李少強. (2002). Reinforcement learning for intelligent assembly automation. (Thesis). University of Hong Kong. Retrieved from http://hdl.handle.net/10722/35606

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

李少強. “Reinforcement learning for intelligent assembly automation.” 2002. Thesis, University of Hong Kong. Accessed October 20, 2020. http://hdl.handle.net/10722/35606.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

李少強. “Reinforcement learning for intelligent assembly automation.” 2002. Web. 20 Oct 2020.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Vancouver:

李少強. Reinforcement learning for intelligent assembly automation. [Internet] [Thesis]. University of Hong Kong; 2002. [cited 2020 Oct 20]. Available from: http://hdl.handle.net/10722/35606.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

李少強. Reinforcement learning for intelligent assembly automation. [Thesis]. University of Hong Kong; 2002. Available from: http://hdl.handle.net/10722/35606

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Not specified: Masters Thesis or Doctoral Dissertation


Indian Institute of Science

21. Lakshmanan, K. Online Learning and Simulation Based Algorithms for Stochastic Optimization.

Degree: PhD, Faculty of Engineering, 2018, Indian Institute of Science

 In many optimization problems, the relationship between the objective and parameters is not known. The objective function itself may be stochastic such as a long-run… (more)

Subjects/Keywords: Stochastic Approximation Algorithms; Stochastic Optimization; Markov Decision Process; Reinforcement Learning Algorithm; Queueing Networks; Queuing Theory; Quasi-Newton Stochastic Approximation Algorithm; Online Q-Learning Algorithm; Online Actor-Critic Algorithm; Markov Decision Processes; Q-learning Algorithm; Linear Function Approximation; Quasi-Newton Smoothed Functional Algorithms; Computer Science

APA (6th Edition):

Lakshmanan, K. (2018). Online Learning and Simulation Based Algorithms for Stochastic Optimization. (Doctoral Dissertation). Indian Institute of Science. Retrieved from http://etd.iisc.ac.in/handle/2005/3245

Chicago Manual of Style (16th Edition):

Lakshmanan, K. “Online Learning and Simulation Based Algorithms for Stochastic Optimization.” 2018. Doctoral Dissertation, Indian Institute of Science. Accessed October 20, 2020. http://etd.iisc.ac.in/handle/2005/3245.

MLA Handbook (7th Edition):

Lakshmanan, K. “Online Learning and Simulation Based Algorithms for Stochastic Optimization.” 2018. Web. 20 Oct 2020.

Vancouver:

Lakshmanan K. Online Learning and Simulation Based Algorithms for Stochastic Optimization. [Internet] [Doctoral dissertation]. Indian Institute of Science; 2018. [cited 2020 Oct 20]. Available from: http://etd.iisc.ac.in/handle/2005/3245.

Council of Science Editors:

Lakshmanan K. Online Learning and Simulation Based Algorithms for Stochastic Optimization. [Doctoral Dissertation]. Indian Institute of Science; 2018. Available from: http://etd.iisc.ac.in/handle/2005/3245
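Stochastic approximation, which appears throughout the keywords above, is at heart the Robbins-Monro iterate x_{n+1} = x_n - a_n g(x_n) with noisy observations g and step sizes satisfying sum a_n = infinity, sum a_n^2 < infinity. A minimal sketch on an invented quadratic objective (not an example from the thesis):

```python
import random

random.seed(0)

def noisy_grad(x):
    """Noisy observation of f'(x) for f(x) = (x - 2)^2."""
    return 2 * (x - 2) + random.gauss(0, 0.1)

x = 0.0
for n in range(1, 5001):
    # Robbins-Monro step sizes a_n = 1/n: the sum diverges (so the
    # iterate can travel anywhere) while the sum of squares converges
    # (so the noise averages out).
    x -= (1.0 / n) * noisy_grad(x)
```

Despite never seeing an exact gradient, the iterate settles at the root x = 2; the same schedule underlies Q-learning and actor-critic convergence arguments.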


Indian Institute of Science

22. Prabuchandran, K J. Feature Adaptation Algorithms for Reinforcement Learning with Applications to Wireless Sensor Networks And Road Traffic Control.

Degree: PhD, Faculty of Engineering, 2017, Indian Institute of Science

 Many sequential decision making problems under uncertainty arising in engineering, science and economics are often modelled as Markov Decision Processes (MDPs). In the setting of… (more)

Subjects/Keywords: Wireless Sensor Networks; Road Traffic Control; Reinforcement Learning Algorithms; Markov Decision Processes (MDPs); Sensor Networks; Traffic Signal Control (TSC); Reinforcement Learning; Energy Harvesting Sensor Nodes; Stochastic Approximation; Grassmannian Search; Computer Science

APA (6th Edition):

Prabuchandran, K. J. (2017). Feature Adaptation Algorithms for Reinforcement Learning with Applications to Wireless Sensor Networks And Road Traffic Control. (Doctoral Dissertation). Indian Institute of Science. Retrieved from http://etd.iisc.ac.in/handle/2005/2664

Chicago Manual of Style (16th Edition):

Prabuchandran, K J. “Feature Adaptation Algorithms for Reinforcement Learning with Applications to Wireless Sensor Networks And Road Traffic Control.” 2017. Doctoral Dissertation, Indian Institute of Science. Accessed October 20, 2020. http://etd.iisc.ac.in/handle/2005/2664.

MLA Handbook (7th Edition):

Prabuchandran, K J. “Feature Adaptation Algorithms for Reinforcement Learning with Applications to Wireless Sensor Networks And Road Traffic Control.” 2017. Web. 20 Oct 2020.

Vancouver:

Prabuchandran KJ. Feature Adaptation Algorithms for Reinforcement Learning with Applications to Wireless Sensor Networks And Road Traffic Control. [Internet] [Doctoral dissertation]. Indian Institute of Science; 2017. [cited 2020 Oct 20]. Available from: http://etd.iisc.ac.in/handle/2005/2664.

Council of Science Editors:

Prabuchandran KJ. Feature Adaptation Algorithms for Reinforcement Learning with Applications to Wireless Sensor Networks And Road Traffic Control. [Doctoral Dissertation]. Indian Institute of Science; 2017. Available from: http://etd.iisc.ac.in/handle/2005/2664

23. Chatzikokolakis, Konstantinos. Spectrum sharing and management techniques in mobile networks.

Degree: 2016, National and Kapodistrian University of Athens; Εθνικό και Καποδιστριακό Πανεπιστήμιο Αθηνών (ΕΚΠΑ)

 Radio spectrum has emerged as a scarce resource that must be carefully managed when designing 5G communication systems, and Mobile Network Operators… (more)

Subjects/Keywords: Μερισμός φάσματος; Διαχείριση φάσματος; Ασαφής λογική; Ενισχυμένη μάθηση; Δίκαιη χρήση πόρων; Γενετικοί αλγόριθμοι; Spectrum sharing; Spectrum management; Fuzzy logic; Reinforcement learning; Fair resource usage; Genetic algorithms

APA (6th Edition):

Chatzikokolakis, K. (2016). Spectrum sharing and management techniques in mobile networks. (Thesis). National and Kapodistrian University of Athens; Εθνικό και Καποδιστριακό Πανεπιστήμιο Αθηνών (ΕΚΠΑ). Retrieved from http://hdl.handle.net/10442/hedi/38223

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Chatzikokolakis, Konstantinos. “Spectrum sharing and management techniques in mobile networks.” 2016. Thesis, National and Kapodistrian University of Athens; Εθνικό και Καποδιστριακό Πανεπιστήμιο Αθηνών (ΕΚΠΑ). Accessed October 20, 2020. http://hdl.handle.net/10442/hedi/38223.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Chatzikokolakis, Konstantinos. “Spectrum sharing and management techniques in mobile networks.” 2016. Web. 20 Oct 2020.

Vancouver:

Chatzikokolakis K. Spectrum sharing and management techniques in mobile networks. [Internet] [Thesis]. National and Kapodistrian University of Athens; Εθνικό και Καποδιστριακό Πανεπιστήμιο Αθηνών (ΕΚΠΑ); 2016. [cited 2020 Oct 20]. Available from: http://hdl.handle.net/10442/hedi/38223.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Chatzikokolakis K. Spectrum sharing and management techniques in mobile networks. [Thesis]. National and Kapodistrian University of Athens; Εθνικό και Καποδιστριακό Πανεπιστήμιο Αθηνών (ΕΚΠΑ); 2016. Available from: http://hdl.handle.net/10442/hedi/38223

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Indian Institute of Science

24. Prashanth, L A. Resource Allocation for Sequential Decision Making Under Uncertainaty : Studies in Vehicular Traffic Control, Service Systems, Sensor Networks and Mechanism Design.

Degree: PhD, Faculty of Engineering, 2017, Indian Institute of Science

 A fundamental question in a sequential decision making setting under uncertainty is “how to allocate resources amongst competing entities so as to maximize the rewards… (more)

Subjects/Keywords: Vehicular Traffic Control; Service Systems; Sensor Networks; Mechanism Design; Traffic Signal Control - Q-Learning; Traffic Signal Control; Signal Control - Threshold Tuning; Traffic Light Control Algorithm; Adaptive Labor Staffing; Sleep-Wake Scheduling Algorithms; Reinforcement Learning; Vehicular Control; Graded Signal Control; Adaptive Sleep–wake Control; Computer Science

APA (6th Edition):

Prashanth, L. A. (2017). Resource Allocation for Sequential Decision Making Under Uncertainaty : Studies in Vehicular Traffic Control, Service Systems, Sensor Networks and Mechanism Design. (Doctoral Dissertation). Indian Institute of Science. Retrieved from http://etd.iisc.ac.in/handle/2005/2810

Chicago Manual of Style (16th Edition):

Prashanth, L A. “Resource Allocation for Sequential Decision Making Under Uncertainaty : Studies in Vehicular Traffic Control, Service Systems, Sensor Networks and Mechanism Design.” 2017. Doctoral Dissertation, Indian Institute of Science. Accessed October 20, 2020. http://etd.iisc.ac.in/handle/2005/2810.

MLA Handbook (7th Edition):

Prashanth, L A. “Resource Allocation for Sequential Decision Making Under Uncertainaty : Studies in Vehicular Traffic Control, Service Systems, Sensor Networks and Mechanism Design.” 2017. Web. 20 Oct 2020.

Vancouver:

Prashanth LA. Resource Allocation for Sequential Decision Making Under Uncertainaty : Studies in Vehicular Traffic Control, Service Systems, Sensor Networks and Mechanism Design. [Internet] [Doctoral dissertation]. Indian Institute of Science; 2017. [cited 2020 Oct 20]. Available from: http://etd.iisc.ac.in/handle/2005/2810.

Council of Science Editors:

Prashanth LA. Resource Allocation for Sequential Decision Making Under Uncertainaty : Studies in Vehicular Traffic Control, Service Systems, Sensor Networks and Mechanism Design. [Doctoral Dissertation]. Indian Institute of Science; 2017. Available from: http://etd.iisc.ac.in/handle/2005/2810


University of Oxford

25. McInerney, Robert E. Decision making under uncertainty.

Degree: PhD, 2014, University of Oxford

 Operating and interacting in an environment requires the ability to manage uncertainty and to choose definite courses of action. In this thesis we look to… (more)

Subjects/Keywords: 006.3; Probability theory and stochastic processes; Artificial Intelligence; Probability; Stochastic processes; Computing; Applications and algorithms; Information engineering; Robotics; Engineering & allied sciences; machine learning; probability theory; Bayesian; decision making; Reinforcement Learning; Gaussian Process; inference; approximate inference; Multi-armed Bandit; optimal decision making; uncertainty; managing uncertainty

APA (6th Edition):

McInerney, R. E. (2014). Decision making under uncertainty. (Doctoral Dissertation). University of Oxford. Retrieved from http://ora.ox.ac.uk/objects/uuid:a34e87ad-8330-42df-8ba6-d55f10529331 ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.692860

Chicago Manual of Style (16th Edition):

McInerney, Robert E. “Decision making under uncertainty.” 2014. Doctoral Dissertation, University of Oxford. Accessed October 20, 2020. http://ora.ox.ac.uk/objects/uuid:a34e87ad-8330-42df-8ba6-d55f10529331 ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.692860.

MLA Handbook (7th Edition):

McInerney, Robert E. “Decision making under uncertainty.” 2014. Web. 20 Oct 2020.

Vancouver:

McInerney RE. Decision making under uncertainty. [Internet] [Doctoral dissertation]. University of Oxford; 2014. [cited 2020 Oct 20]. Available from: http://ora.ox.ac.uk/objects/uuid:a34e87ad-8330-42df-8ba6-d55f10529331 ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.692860.

Council of Science Editors:

McInerney RE. Decision making under uncertainty. [Doctoral Dissertation]. University of Oxford; 2014. Available from: http://ora.ox.ac.uk/objects/uuid:a34e87ad-8330-42df-8ba6-d55f10529331 ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.692860
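The keyword list above includes the multi-armed bandit, a standard testbed for decision making under uncertainty. The classic UCB1 index rule plays each arm once, then picks the arm maximizing its empirical mean plus an optimism bonus sqrt(2 ln t / n_i); the Bernoulli arm means below are invented for the sketch and unrelated to the thesis:

```python
import math
import random

random.seed(0)
MEANS = [0.2, 0.5, 0.8]        # hypothetical Bernoulli arm means
counts = [0, 0, 0]             # pulls per arm
values = [0.0, 0.0, 0.0]       # running empirical means

def ucb_pick(t):
    for i in range(3):
        if counts[i] == 0:     # initialization: play every arm once
            return i
    return max(range(3),
               key=lambda i: values[i] + math.sqrt(2 * math.log(t) / counts[i]))

for t in range(1, 2001):
    i = ucb_pick(t)
    reward = 1.0 if random.random() < MEANS[i] else 0.0
    counts[i] += 1
    values[i] += (reward - values[i]) / counts[i]   # incremental mean update
```

The bonus shrinks as an arm accumulates pulls, so exploration concentrates on arms that are either promising or under-sampled, and most pulls end up on the best arm.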


Georgia Tech

26. Bountourelis, Theologos. Efficient pac-learning for episodic tasks with acyclic state spaces and the optimal node visitation problem in acyclic stochastic digaphs.

Degree: PhD, Industrial and Systems Engineering, 2008, Georgia Tech

 The first part of this research program concerns the development of customized and easily implementable Probably Approximately Correct (PAC)-learning algorithms for episodic tasks over acyclic… (more)

Subjects/Keywords: Computational complexity; Stochastic control; Approximate dynamic programming; Dynamic programming; PAC learning; Scheduling; Fluid relaxation; Reinforcement learning; Machine learning; Stochastic control theory; Algorithms

APA (6th Edition):

Bountourelis, T. (2008). Efficient pac-learning for episodic tasks with acyclic state spaces and the optimal node visitation problem in acyclic stochastic digaphs. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/28144

Chicago Manual of Style (16th Edition):

Bountourelis, Theologos. “Efficient pac-learning for episodic tasks with acyclic state spaces and the optimal node visitation problem in acyclic stochastic digaphs.” 2008. Doctoral Dissertation, Georgia Tech. Accessed October 20, 2020. http://hdl.handle.net/1853/28144.

MLA Handbook (7th Edition):

Bountourelis, Theologos. “Efficient pac-learning for episodic tasks with acyclic state spaces and the optimal node visitation problem in acyclic stochastic digaphs.” 2008. Web. 20 Oct 2020.

Vancouver:

Bountourelis T. Efficient pac-learning for episodic tasks with acyclic state spaces and the optimal node visitation problem in acyclic stochastic digaphs. [Internet] [Doctoral dissertation]. Georgia Tech; 2008. [cited 2020 Oct 20]. Available from: http://hdl.handle.net/1853/28144.

Council of Science Editors:

Bountourelis T. Efficient pac-learning for episodic tasks with acyclic state spaces and the optimal node visitation problem in acyclic stochastic digaphs. [Doctoral Dissertation]. Georgia Tech; 2008. Available from: http://hdl.handle.net/1853/28144


Delft University of Technology

27. Delipetrev, B. Nested algorithms for optimal reservoir operation and their embedding in a decision support platform.

Degree: 2016, Delft University of Technology

 Reservoir operation is a multi-objective optimization problem traditionally solved with dynamic programming (DP) and stochastic dynamic programming (SDP) algorithms. The thesis presents novel algorithms for… (more)

Subjects/Keywords: novel optimization algorithms; nested dynamic programming; nested stochastic dynamic programming; nested reinforcement learning; cloud application

APA (6th Edition):

Delipetrev, B. (2016). Nested algorithms for optimal reservoir operation and their embedding in a decision support platform. (Doctoral Dissertation). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:2fdcbbc4-0905-4500-b0a5-3424880dfa16 ; urn:NBN:nl:ui:24-uuid:2fdcbbc4-0905-4500-b0a5-3424880dfa16 ; urn:isbn:9781138029828 ; urn:NBN:nl:ui:24-uuid:2fdcbbc4-0905-4500-b0a5-3424880dfa16 ; http://resolver.tudelft.nl/uuid:2fdcbbc4-0905-4500-b0a5-3424880dfa16

Chicago Manual of Style (16th Edition):

Delipetrev, B. “Nested algorithms for optimal reservoir operation and their embedding in a decision support platform.” 2016. Doctoral Dissertation, Delft University of Technology. Accessed October 20, 2020. http://resolver.tudelft.nl/uuid:2fdcbbc4-0905-4500-b0a5-3424880dfa16 ; urn:NBN:nl:ui:24-uuid:2fdcbbc4-0905-4500-b0a5-3424880dfa16 ; urn:isbn:9781138029828 ; urn:NBN:nl:ui:24-uuid:2fdcbbc4-0905-4500-b0a5-3424880dfa16 ; http://resolver.tudelft.nl/uuid:2fdcbbc4-0905-4500-b0a5-3424880dfa16.

MLA Handbook (7th Edition):

Delipetrev, B. “Nested algorithms for optimal reservoir operation and their embedding in a decision support platform.” 2016. Web. 20 Oct 2020.

Vancouver:

Delipetrev B. Nested algorithms for optimal reservoir operation and their embedding in a decision support platform. [Internet] [Doctoral dissertation]. Delft University of Technology; 2016. [cited 2020 Oct 20]. Available from: http://resolver.tudelft.nl/uuid:2fdcbbc4-0905-4500-b0a5-3424880dfa16 ; urn:NBN:nl:ui:24-uuid:2fdcbbc4-0905-4500-b0a5-3424880dfa16 ; urn:isbn:9781138029828 ; urn:NBN:nl:ui:24-uuid:2fdcbbc4-0905-4500-b0a5-3424880dfa16 ; http://resolver.tudelft.nl/uuid:2fdcbbc4-0905-4500-b0a5-3424880dfa16.

Council of Science Editors:

Delipetrev B. Nested algorithms for optimal reservoir operation and their embedding in a decision support platform. [Doctoral Dissertation]. Delft University of Technology; 2016. Available from: http://resolver.tudelft.nl/uuid:2fdcbbc4-0905-4500-b0a5-3424880dfa16 ; urn:NBN:nl:ui:24-uuid:2fdcbbc4-0905-4500-b0a5-3424880dfa16 ; urn:isbn:9781138029828 ; urn:NBN:nl:ui:24-uuid:2fdcbbc4-0905-4500-b0a5-3424880dfa16 ; http://resolver.tudelft.nl/uuid:2fdcbbc4-0905-4500-b0a5-3424880dfa16

28. Allmendinger, Richard. Tuning Evolutionary Search for Closed-Loop Optimization.

Degree: 2012, University of Manchester

Closed-loop optimization deals with problems in which candidate solutions are evaluated by conducting experiments, e.g. physical or biochemical experiments. Although this form of optimization is… (more)

Subjects/Keywords: Optimization; Closed-loop optimization; Evolutionary computation; Constrained optimization; Dynamic optimization; Reinforcement learning; Adaptation; Bandit algorithms

APA (6th Edition):

Allmendinger, R. (2012). Tuning Evolutionary Search for Closed-Loop Optimization. (Doctoral Dissertation). University of Manchester. Retrieved from http://www.manchester.ac.uk/escholar/uk-ac-man-scw:156551

Chicago Manual of Style (16th Edition):

Allmendinger, Richard. “Tuning Evolutionary Search for Closed-Loop Optimization.” 2012. Doctoral Dissertation, University of Manchester. Accessed October 20, 2020. http://www.manchester.ac.uk/escholar/uk-ac-man-scw:156551.

MLA Handbook (7th Edition):

Allmendinger, Richard. “Tuning Evolutionary Search for Closed-Loop Optimization.” 2012. Web. 20 Oct 2020.

Vancouver:

Allmendinger R. Tuning Evolutionary Search for Closed-Loop Optimization. [Internet] [Doctoral dissertation]. University of Manchester; 2012. [cited 2020 Oct 20]. Available from: http://www.manchester.ac.uk/escholar/uk-ac-man-scw:156551.

Council of Science Editors:

Allmendinger R. Tuning Evolutionary Search for Closed-Loop Optimization. [Doctoral Dissertation]. University of Manchester; 2012. Available from: http://www.manchester.ac.uk/escholar/uk-ac-man-scw:156551

29. Lídia Bononi Paiva Tomaz. D-MA-Draughts: um sistema multiagente jogador de damas automático que atua em um ambiente de alto desempenho [D-MA-Draughts: an automatic multi-agent draughts-playing system operating in a high-performance environment].

Degree: 2013, Federal University of Uberlândia

 The aim of this work is to propose a draughts-learning system, D-MA-Draughts (Distributed Multiagent Draughts): a multi-agent draughts-playing system that operates in… (more)

Subjects/Keywords: Sistema jogador multiagente distribuído; Damas; Aprendizagem por reforço de máquina; Redes neurais artificiais; Busca distribuída; Representação de estados por características; Algoritmos de clusterização; Tabelas de transposição; CIENCIA DA COMPUTACAO; Jogo de damas por computador; Redes neurais (Computação); Distributed multi-agent player system; Draughts; Machine reinforcement learning; Artificial neural networks; Distributed search; Representation of states through features; Cluster algorithms; Transposition tables

APA (6th Edition):

Tomaz, L. B. P. (2013). D-MA-Draughts: um sistema multiagente jogador de damas automático que atua em um ambiente de alto desempenho. (Thesis). Federal University of Uberlândia. Retrieved from http://www.bdtd.ufu.br//tde_busca/arquivo.php?codArquivo=5199

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Tomaz, Lídia Bononi Paiva. “D-MA-Draughts: um sistema multiagente jogador de damas automático que atua em um ambiente de alto desempenho.” 2013. Thesis, Federal University of Uberlândia. Accessed October 20, 2020. http://www.bdtd.ufu.br//tde_busca/arquivo.php?codArquivo=5199.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Tomaz, Lídia Bononi Paiva. “D-MA-Draughts: um sistema multiagente jogador de damas automático que atua em um ambiente de alto desempenho.” 2013. Web. 20 Oct 2020.

Vancouver:

Tomaz LBP. D-MA-Draughts: um sistema multiagente jogador de damas automático que atua em um ambiente de alto desempenho. [Internet] [Thesis]. Federal University of Uberlândia; 2013. [cited 2020 Oct 20]. Available from: http://www.bdtd.ufu.br//tde_busca/arquivo.php?codArquivo=5199.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Tomaz LBP. D-MA-Draughts: um sistema multiagente jogador de damas automático que atua em um ambiente de alto desempenho. [Thesis]. Federal University of Uberlândia; 2013. Available from: http://www.bdtd.ufu.br//tde_busca/arquivo.php?codArquivo=5199

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Pontifical Catholic University of Rio de Janeiro

30. [No author]. [en] MACHINE LEARNING-BASED MAC PROTOCOLS FOR LORA IOT NETWORKS.

Degree: 2020, Pontifical Catholic University of Rio de Janeiro

[pt] With the rapid growth of the Internet of Things (IoT), new wireless communication technologies have emerged to meet the requirements of long range, low… (more)

Subjects/Keywords: [pt] REDES DE LONGA DISTANCIA DE BAIXA POTENCIA; [en] LOW POWER WIDE AREA NETWORKS; [pt] PROTOCOLOS DE CONTROLE DE ACESSO AO MEDIO; [en] MEDIUM ACCESS CONTROL PROTOCOLS; [pt] MODULACAO LORA; [en] LORA MODULATION; [pt] LORAWAN; [en] LORAWAN; [pt] PARAMETROS DE TRANSMISSAO; [en] TRANSMISSION PARAMETERS; [pt] ALGORITMOS DE APRENDIZAGEM POR REFORCO; [en] REINFORCEMENT LEARNING ALGORITHMS

APA (6th Edition):

author], [. (2020). [en] MACHINE LEARNING-BASED MAC PROTOCOLS FOR LORA IOT NETWORKS. (Thesis). Pontifical Catholic University of Rio de Janeiro. Retrieved from http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=48753

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

author], [No. “[en] MACHINE LEARNING-BASED MAC PROTOCOLS FOR LORA IOT NETWORKS.” 2020. Thesis, Pontifical Catholic University of Rio de Janeiro. Accessed October 20, 2020. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=48753.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

author], [No. “[en] MACHINE LEARNING-BASED MAC PROTOCOLS FOR LORA IOT NETWORKS.” 2020. Web. 20 Oct 2020.

Vancouver:

author] [. [en] MACHINE LEARNING-BASED MAC PROTOCOLS FOR LORA IOT NETWORKS. [Internet] [Thesis]. Pontifical Catholic University of Rio de Janeiro; 2020. [cited 2020 Oct 20]. Available from: http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=48753.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

author] [. [en] MACHINE LEARNING-BASED MAC PROTOCOLS FOR LORA IOT NETWORKS. [Thesis]. Pontifical Catholic University of Rio de Janeiro; 2020. Available from: http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=48753

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
