
You searched for subject:(Markov Decision Process). Showing records 1 – 30 of 271 total matches.


University of Manitoba

1. Gunasekara, Charith. Optimal threshold policy for opportunistic network coding under phase type arrivals.

Degree: Electrical and Computer Engineering, 2016, University of Manitoba

 Network coding allows each node in a network to perform some coding operations on the data packets and improve the overall throughput of communication. However,… (more)

Subjects/Keywords: Queueing Theory; Network Coding; Markov Decision Process


APA (6th Edition):

Gunasekara, C. (2016). Optimal threshold policy for opportunistic network coding under phase type arrivals. (Thesis). University of Manitoba. Retrieved from http://hdl.handle.net/1993/31615

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Gunasekara, Charith. “Optimal threshold policy for opportunistic network coding under phase type arrivals.” 2016. Thesis, University of Manitoba. Accessed January 16, 2021. http://hdl.handle.net/1993/31615.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Gunasekara, Charith. “Optimal threshold policy for opportunistic network coding under phase type arrivals.” 2016. Web. 16 Jan 2021.

Vancouver:

Gunasekara C. Optimal threshold policy for opportunistic network coding under phase type arrivals. [Internet] [Thesis]. University of Manitoba; 2016. [cited 2021 Jan 16]. Available from: http://hdl.handle.net/1993/31615.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Gunasekara C. Optimal threshold policy for opportunistic network coding under phase type arrivals. [Thesis]. University of Manitoba; 2016. Available from: http://hdl.handle.net/1993/31615

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Minnesota

2. Liu, Yuhang. Optimal serving schedules for multiple queues with size-independent service times.

Degree: MS, Industrial and Systems Engineering, 2013, University of Minnesota

University of Minnesota M.S. thesis. May 2013. Major: Industrial and Systems Engineering. Advisor: Zizhuo Wang. 1 computer file (PDF); v, 37 pages.

We consider a… (more)

Subjects/Keywords: Markov decision process; Queuing theory; Traffic schedule


APA (6th Edition):

Liu, Y. (2013). Optimal serving schedules for multiple queues with size-independent service times. (Masters Thesis). University of Minnesota. Retrieved from http://purl.umn.edu/160186

Chicago Manual of Style (16th Edition):

Liu, Yuhang. “Optimal serving schedules for multiple queues with size-independent service times.” 2013. Masters Thesis, University of Minnesota. Accessed January 16, 2021. http://purl.umn.edu/160186.

MLA Handbook (7th Edition):

Liu, Yuhang. “Optimal serving schedules for multiple queues with size-independent service times.” 2013. Web. 16 Jan 2021.

Vancouver:

Liu Y. Optimal serving schedules for multiple queues with size-independent service times. [Internet] [Masters thesis]. University of Minnesota; 2013. [cited 2021 Jan 16]. Available from: http://purl.umn.edu/160186.

Council of Science Editors:

Liu Y. Optimal serving schedules for multiple queues with size-independent service times. [Masters Thesis]. University of Minnesota; 2013. Available from: http://purl.umn.edu/160186


Iowa State University

3. Bertram, Joshua R. A new solution for Markov Decision Processes and its aerospace applications.

Degree: 2020, Iowa State University

Markov Decision Processes (MDPs) are a powerful technique for modelling sequential decision-making problems which have been used over many decades to solve problems including robotics, finance,… (more)
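
For readers unfamiliar with the formalism named in this record and throughout these results, the following minimal Python sketch shows value iteration on a toy two-state MDP. The states, actions, rewards, transition probabilities and discount factor are invented purely for illustration and are not taken from the thesis above.

# Toy MDP: states 0 and 1, actions 0 ("stay") and 1 ("switch").
# P[s][a] lists (next_state, probability); R[s][a] is the immediate reward.
P = {
    0: {0: [(0, 0.9), (1, 0.1)], 1: [(1, 0.8), (0, 0.2)]},
    1: {0: [(1, 0.9), (0, 0.1)], 1: [(0, 0.8), (1, 0.2)]},
}
R = {0: {0: 1.0, 1: 0.0}, 1: {0: 2.0, 1: 0.5}}
gamma = 0.95  # discount factor

V = {s: 0.0 for s in P}  # value function, initialised to zero
for _ in range(1000):    # repeat the Bellman optimality update until it converges
    V_new = {}
    for s in P:
        V_new[s] = max(
            R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
            for a in P[s]
        )
    if max(abs(V_new[s] - V[s]) for s in P) < 1e-8:
        V = V_new
        break
    V = V_new

# Greedy policy with respect to the converged values.
policy = {
    s: max(P[s], key=lambda a: R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a]))
    for s in P
}
print(V, policy)

Running the sketch prints the converged state values and the greedy policy; this is the basic computation that exact MDP solution methods are built around.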

Subjects/Keywords: FastMDP; Markov Decision Process; MDP; MDPs


APA (6th Edition):

Bertram, J. R. (2020). A new solution for Markov Decision Processes and its aerospace applications. (Thesis). Iowa State University. Retrieved from https://lib.dr.iastate.edu/etd/17832

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Bertram, Joshua R. “A new solution for Markov Decision Processes and its aerospace applications.” 2020. Thesis, Iowa State University. Accessed January 16, 2021. https://lib.dr.iastate.edu/etd/17832.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Bertram, Joshua R. “A new solution for Markov Decision Processes and its aerospace applications.” 2020. Web. 16 Jan 2021.

Vancouver:

Bertram JR. A new solution for Markov Decision Processes and its aerospace applications. [Internet] [Thesis]. Iowa State University; 2020. [cited 2021 Jan 16]. Available from: https://lib.dr.iastate.edu/etd/17832.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Bertram JR. A new solution for Markov Decision Processes and its aerospace applications. [Thesis]. Iowa State University; 2020. Available from: https://lib.dr.iastate.edu/etd/17832

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Minnesota

4. Liu, Yuhang. Optimal serving schedules for multiple queues with size-independent service times.

Degree: MS, Industrial and Systems Engineering, 2013, University of Minnesota

 We consider a service system with two Poisson arrival queues. There is a single server that chooses which queue to serve at each moment. Once… (more)

Subjects/Keywords: Markov decision process; Queuing theory; Traffic schedule


APA (6th Edition):

Liu, Y. (2013). Optimal serving schedules for multiple queues with size-independent service times. (Masters Thesis). University of Minnesota. Retrieved from http://purl.umn.edu/160186

Chicago Manual of Style (16th Edition):

Liu, Yuhang. “Optimal serving schedules for multiple queues with size-independent service times.” 2013. Masters Thesis, University of Minnesota. Accessed January 16, 2021. http://purl.umn.edu/160186.

MLA Handbook (7th Edition):

Liu, Yuhang. “Optimal serving schedules for multiple queues with size-independent service times.” 2013. Web. 16 Jan 2021.

Vancouver:

Liu Y. Optimal serving schedules for multiple queues with size-independent service times. [Internet] [Masters thesis]. University of Minnesota; 2013. [cited 2021 Jan 16]. Available from: http://purl.umn.edu/160186.

Council of Science Editors:

Liu Y. Optimal serving schedules for multiple queues with size-independent service times. [Masters Thesis]. University of Minnesota; 2013. Available from: http://purl.umn.edu/160186

5. Shirmohammadi, Mahsa. Qualitative analysis of synchronizing probabilistic systems : Analyse qualitative des systèmes probabilistes synchronisants.

Degree: Docteur es, Informatique, 2014, Cachan, Ecole normale supérieure; Université libre de Bruxelles (1970-....)

 Markov Decision Processes (MDPs) are finite probabilistic systems with both random choices and strategies, and are thus recognized as… (more)

Subjects/Keywords: Markov decision process; Automates probabilistes; Mots synchronisants; Markov decision process; Probabilistic automata; Synchronising words


APA (6th Edition):

Shirmohammadi, M. (2014). Qualitative analysis of synchronizing probabilistic systems : Analyse qualitative des systèmes probabilistes synchronisants. (Doctoral Dissertation). Cachan, Ecole normale supérieure; Université libre de Bruxelles (1970-....). Retrieved from http://www.theses.fr/2014DENS0054

Chicago Manual of Style (16th Edition):

Shirmohammadi, Mahsa. “Qualitative analysis of synchronizing probabilistic systems : Analyse qualitative des systèmes probabilistes synchronisants.” 2014. Doctoral Dissertation, Cachan, Ecole normale supérieure; Université libre de Bruxelles (1970-....). Accessed January 16, 2021. http://www.theses.fr/2014DENS0054.

MLA Handbook (7th Edition):

Shirmohammadi, Mahsa. “Qualitative analysis of synchronizing probabilistic systems : Analyse qualitative des systèmes probabilistes synchronisants.” 2014. Web. 16 Jan 2021.

Vancouver:

Shirmohammadi M. Qualitative analysis of synchronizing probabilistic systems : Analyse qualitative des systèmes probabilistes synchronisants. [Internet] [Doctoral dissertation]. Cachan, Ecole normale supérieure; Université libre de Bruxelles (1970-....); 2014. [cited 2021 Jan 16]. Available from: http://www.theses.fr/2014DENS0054.

Council of Science Editors:

Shirmohammadi M. Qualitative analysis of synchronizing probabilistic systems : Analyse qualitative des systèmes probabilistes synchronisants. [Doctoral Dissertation]. Cachan, Ecole normale supérieure; Université libre de Bruxelles (1970-....); 2014. Available from: http://www.theses.fr/2014DENS0054


University of Akron

6. Chippa, Mukesh K. Goal-seeking Decision Support System to Empower Personal Wellness Management.

Degree: PhD, Computer Engineering, 2016, University of Akron

 Obesity has reached epidemic proportions globally, with more than one billion adults overweight with at least three hundred million of them clinically obese; this is a major… (more)

Subjects/Keywords: Computer Engineering; decision support system; personalized wellness management; goal-seeking paradigm; Markov decision process; partially observable Markov decision process


APA (6th Edition):

Chippa, M. K. (2016). Goal-seeking Decision Support System to Empower Personal Wellness Management. (Doctoral Dissertation). University of Akron. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=akron1480413936639467

Chicago Manual of Style (16th Edition):

Chippa, Mukesh K. “Goal-seeking Decision Support System to Empower Personal Wellness Management.” 2016. Doctoral Dissertation, University of Akron. Accessed January 16, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=akron1480413936639467.

MLA Handbook (7th Edition):

Chippa, Mukesh K. “Goal-seeking Decision Support System to Empower Personal Wellness Management.” 2016. Web. 16 Jan 2021.

Vancouver:

Chippa MK. Goal-seeking Decision Support System to Empower Personal Wellness Management. [Internet] [Doctoral dissertation]. University of Akron; 2016. [cited 2021 Jan 16]. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=akron1480413936639467.

Council of Science Editors:

Chippa MK. Goal-seeking Decision Support System to Empower Personal Wellness Management. [Doctoral Dissertation]. University of Akron; 2016. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=akron1480413936639467


University of Georgia

7. Perez Barrenechea, Dennis David. Anytime point based approximations for interactive POMDPs.

Degree: 2014, University of Georgia

 Partially observable Markov decision processes (POMDPs) have been largely accepted as a rich framework for planning and control problems. In settings where multiple agents interact, POMDPs… (more)

Subjects/Keywords: Markov Decision Process; Multiagent systems; Decision making; POMDP


APA (6th Edition):

Perez Barrenechea, D. D. (2014). Anytime point based approximations for interactive POMDPs. (Thesis). University of Georgia. Retrieved from http://hdl.handle.net/10724/24464

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Perez Barrenechea, Dennis David. “Anytime point based approximations for interactive POMDPs.” 2014. Thesis, University of Georgia. Accessed January 16, 2021. http://hdl.handle.net/10724/24464.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Perez Barrenechea, Dennis David. “Anytime point based approximations for interactive POMDPs.” 2014. Web. 16 Jan 2021.

Vancouver:

Perez Barrenechea DD. Anytime point based approximations for interactive POMDPs. [Internet] [Thesis]. University of Georgia; 2014. [cited 2021 Jan 16]. Available from: http://hdl.handle.net/10724/24464.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Perez Barrenechea DD. Anytime point based approximations for interactive POMDPs. [Thesis]. University of Georgia; 2014. Available from: http://hdl.handle.net/10724/24464

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Texas – Austin

8. -7840-2726. Decision analysis perspectives on sequential testing.

Degree: MS in Engineering, Operations Research and Industrial Engineering, 2020, University of Texas – Austin

 Expanding from proof-load testing, this paper utilizes a decision analysis framework to determine adequate bounds for how much one should be willing to pay to… (more)

Subjects/Keywords: Sequential testing; Decision analysis; Bayesian updating; Markov decision process


APA (6th Edition):

-7840-2726. (2020). Decision analysis perspectives on sequential testing. (Masters Thesis). University of Texas – Austin. Retrieved from http://dx.doi.org/10.26153/tsw/7495

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Chicago Manual of Style (16th Edition):

-7840-2726. “Decision analysis perspectives on sequential testing.” 2020. Masters Thesis, University of Texas – Austin. Accessed January 16, 2021. http://dx.doi.org/10.26153/tsw/7495.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

MLA Handbook (7th Edition):

-7840-2726. “Decision analysis perspectives on sequential testing.” 2020. Web. 16 Jan 2021.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Vancouver:

-7840-2726. Decision analysis perspectives on sequential testing. [Internet] [Masters thesis]. University of Texas – Austin; 2020. [cited 2021 Jan 16]. Available from: http://dx.doi.org/10.26153/tsw/7495.

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Council of Science Editors:

-7840-2726. Decision analysis perspectives on sequential testing. [Masters Thesis]. University of Texas – Austin; 2020. Available from: http://dx.doi.org/10.26153/tsw/7495

Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete


University of Newcastle

9. Abed-Alguni, Bilal Hashem Kalil. Cooperative reinforcement learning for independent learners.

Degree: PhD, 2014, University of Newcastle

Research Doctorate - Doctor of Philosophy (PhD)

Machine learning in multi-agent domains poses several research challenges. One challenge is how to model cooperation between reinforcement… (more)

Subjects/Keywords: reinforcement learning; Q-learning; multi-agent system; distributed system; Markov decision process; factored Markov decision process; cooperation


APA (6th Edition):

Abed-Alguni, B. H. K. (2014). Cooperative reinforcement learning for independent learners. (Doctoral Dissertation). University of Newcastle. Retrieved from http://hdl.handle.net/1959.13/1052917

Chicago Manual of Style (16th Edition):

Abed-Alguni, Bilal Hashem Kalil. “Cooperative reinforcement learning for independent learners.” 2014. Doctoral Dissertation, University of Newcastle. Accessed January 16, 2021. http://hdl.handle.net/1959.13/1052917.

MLA Handbook (7th Edition):

Abed-Alguni, Bilal Hashem Kalil. “Cooperative reinforcement learning for independent learners.” 2014. Web. 16 Jan 2021.

Vancouver:

Abed-Alguni BHK. Cooperative reinforcement learning for independent learners. [Internet] [Doctoral dissertation]. University of Newcastle; 2014. [cited 2021 Jan 16]. Available from: http://hdl.handle.net/1959.13/1052917.

Council of Science Editors:

Abed-Alguni BHK. Cooperative reinforcement learning for independent learners. [Doctoral Dissertation]. University of Newcastle; 2014. Available from: http://hdl.handle.net/1959.13/1052917


Queensland University of Technology

10. Al Sabban, Wesam H. Autonomous vehicle path planning for persistence monitoring under uncertainty using Gaussian based Markov decision process.

Degree: 2015, Queensland University of Technology

 One of the main challenges facing online and offline path planners is the uncertainty in the magnitude and direction of the environmental energy because it… (more)

Subjects/Keywords: Autonomous Vehicle Path Planning; Path Planning Under Uncertainty; Markov Decision Process; Gaussian Based Markov Decision Process; GMDP; UAV; UAS; AUV


APA (6th Edition):

Al Sabban, W. H. (2015). Autonomous vehicle path planning for persistence monitoring under uncertainty using Gaussian based Markov decision process. (Thesis). Queensland University of Technology. Retrieved from https://eprints.qut.edu.au/82297/

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Al Sabban, Wesam H. “Autonomous vehicle path planning for persistence monitoring under uncertainty using Gaussian based Markov decision process.” 2015. Thesis, Queensland University of Technology. Accessed January 16, 2021. https://eprints.qut.edu.au/82297/.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Al Sabban, Wesam H. “Autonomous vehicle path planning for persistence monitoring under uncertainty using Gaussian based Markov decision process.” 2015. Web. 16 Jan 2021.

Vancouver:

Al Sabban WH. Autonomous vehicle path planning for persistence monitoring under uncertainty using Gaussian based Markov decision process. [Internet] [Thesis]. Queensland University of Technology; 2015. [cited 2021 Jan 16]. Available from: https://eprints.qut.edu.au/82297/.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Al Sabban WH. Autonomous vehicle path planning for persistence monitoring under uncertainty using Gaussian based Markov decision process. [Thesis]. Queensland University of Technology; 2015. Available from: https://eprints.qut.edu.au/82297/

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Georgia

11. Trivedi, Maulesh. Inverse learning of robot behavior for ad-hoc teamwork.

Degree: 2017, University of Georgia

 Machine Learning and Robotics present a very intriguing combination of research in Artificial Intelligence. Inverse Reinforcement Learning (IRL) algorithms have generated a great deal of… (more)

Subjects/Keywords: Inverse Reinforcement Learning; Markov Decision Process; Bayes Adaptive Markov Decision Process; Best Response Model; Dec MDP; Optimal Policy; Reward Function


APA (6th Edition):

Trivedi, M. (2017). Inverse learning of robot behavior for ad-hoc teamwork. (Thesis). University of Georgia. Retrieved from http://hdl.handle.net/10724/36912

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Trivedi, Maulesh. “Inverse learning of robot behavior for ad-hoc teamwork.” 2017. Thesis, University of Georgia. Accessed January 16, 2021. http://hdl.handle.net/10724/36912.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Trivedi, Maulesh. “Inverse learning of robot behavior for ad-hoc teamwork.” 2017. Web. 16 Jan 2021.

Vancouver:

Trivedi M. Inverse learning of robot behavior for ad-hoc teamwork. [Internet] [Thesis]. University of Georgia; 2017. [cited 2021 Jan 16]. Available from: http://hdl.handle.net/10724/36912.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Trivedi M. Inverse learning of robot behavior for ad-hoc teamwork. [Thesis]. University of Georgia; 2017. Available from: http://hdl.handle.net/10724/36912

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Illinois – Urbana-Champaign

12. Baharian Khoshkhou, Golshid. Stochastic sequential assignment problem.

Degree: PhD, 0127, 2014, University of Illinois – Urbana-Champaign

 The stochastic sequential assignment problem (SSAP) studies the allocation of available distinct workers with deterministic values to sequentially-arriving tasks with stochastic parameters so as to… (more)

Subjects/Keywords: sequential assignment; Markov decision process; stationary policy; hidden Markov model; threshold criteria; risk measure


APA (6th Edition):

Baharian Khoshkhou, G. (2014). Stochastic sequential assignment problem. (Doctoral Dissertation). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/50503

Chicago Manual of Style (16th Edition):

Baharian Khoshkhou, Golshid. “Stochastic sequential assignment problem.” 2014. Doctoral Dissertation, University of Illinois – Urbana-Champaign. Accessed January 16, 2021. http://hdl.handle.net/2142/50503.

MLA Handbook (7th Edition):

Baharian Khoshkhou, Golshid. “Stochastic sequential assignment problem.” 2014. Web. 16 Jan 2021.

Vancouver:

Baharian Khoshkhou G. Stochastic sequential assignment problem. [Internet] [Doctoral dissertation]. University of Illinois – Urbana-Champaign; 2014. [cited 2021 Jan 16]. Available from: http://hdl.handle.net/2142/50503.

Council of Science Editors:

Baharian Khoshkhou G. Stochastic sequential assignment problem. [Doctoral Dissertation]. University of Illinois – Urbana-Champaign; 2014. Available from: http://hdl.handle.net/2142/50503


Cornell University

13. Kumar, Ravi. Dynamic Resource Management For Systems With Controllable Service Capacity.

Degree: PhD, Operations Research, 2015, Cornell University

 The rise in the Internet traffic volumes has led to a growing interest in reducing energy costs of IT infrastructure. Resource management policies for such… (more)

Subjects/Keywords: Markov Decision Process; Service Rate Control; Dynamic Power Management


APA (6th Edition):

Kumar, R. (2015). Dynamic Resource Management For Systems With Controllable Service Capacity. (Doctoral Dissertation). Cornell University. Retrieved from http://hdl.handle.net/1813/41011

Chicago Manual of Style (16th Edition):

Kumar, Ravi. “Dynamic Resource Management For Systems With Controllable Service Capacity.” 2015. Doctoral Dissertation, Cornell University. Accessed January 16, 2021. http://hdl.handle.net/1813/41011.

MLA Handbook (7th Edition):

Kumar, Ravi. “Dynamic Resource Management For Systems With Controllable Service Capacity.” 2015. Web. 16 Jan 2021.

Vancouver:

Kumar R. Dynamic Resource Management For Systems With Controllable Service Capacity. [Internet] [Doctoral dissertation]. Cornell University; 2015. [cited 2021 Jan 16]. Available from: http://hdl.handle.net/1813/41011.

Council of Science Editors:

Kumar R. Dynamic Resource Management For Systems With Controllable Service Capacity. [Doctoral Dissertation]. Cornell University; 2015. Available from: http://hdl.handle.net/1813/41011


Penn State University

14. Hu, Nan. Stochastic Resource Allocation Strategies With Uncertain Information In Sensor Networks.

Degree: 2016, Penn State University

 Support for intelligent and autonomous resource management is one of the key factors to the success of modern sensor network systems. The limited resources, such… (more)

Subjects/Keywords: stochastic resource allocation; markov decision process; uncertainty; sensor network


APA (6th Edition):

Hu, N. (2016). Stochastic Resource Allocation Strategies With Uncertain Information In Sensor Networks. (Thesis). Penn State University. Retrieved from https://submit-etda.libraries.psu.edu/catalog/13593nqh5045

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Hu, Nan. “Stochastic Resource Allocation Strategies With Uncertain Information In Sensor Networks.” 2016. Thesis, Penn State University. Accessed January 16, 2021. https://submit-etda.libraries.psu.edu/catalog/13593nqh5045.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Hu, Nan. “Stochastic Resource Allocation Strategies With Uncertain Information In Sensor Networks.” 2016. Web. 16 Jan 2021.

Vancouver:

Hu N. Stochastic Resource Allocation Strategies With Uncertain Information In Sensor Networks. [Internet] [Thesis]. Penn State University; 2016. [cited 2021 Jan 16]. Available from: https://submit-etda.libraries.psu.edu/catalog/13593nqh5045.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Hu N. Stochastic Resource Allocation Strategies With Uncertain Information In Sensor Networks. [Thesis]. Penn State University; 2016. Available from: https://submit-etda.libraries.psu.edu/catalog/13593nqh5045

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Queensland University of Technology

15. Glover, Arren John. Developing grounded representations for robots through the principles of sensorimotor coordination.

Degree: 2014, Queensland University of Technology

 Robots currently recognise and use objects through algorithms that are hand-coded or specifically trained. Such robots can operate in known, structured environments but cannot learn… (more)

Subjects/Keywords: Robotics; Affordance; Visual Object Recognition; Symbol Grounding; Markov Decision Process


APA (6th Edition):

Glover, A. J. (2014). Developing grounded representations for robots through the principles of sensorimotor coordination. (Thesis). Queensland University of Technology. Retrieved from https://eprints.qut.edu.au/71763/

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Glover, Arren John. “Developing grounded representations for robots through the principles of sensorimotor coordination.” 2014. Thesis, Queensland University of Technology. Accessed January 16, 2021. https://eprints.qut.edu.au/71763/.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Glover, Arren John. “Developing grounded representations for robots through the principles of sensorimotor coordination.” 2014. Web. 16 Jan 2021.

Vancouver:

Glover AJ. Developing grounded representations for robots through the principles of sensorimotor coordination. [Internet] [Thesis]. Queensland University of Technology; 2014. [cited 2021 Jan 16]. Available from: https://eprints.qut.edu.au/71763/.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Glover AJ. Developing grounded representations for robots through the principles of sensorimotor coordination. [Thesis]. Queensland University of Technology; 2014. Available from: https://eprints.qut.edu.au/71763/

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Australian National University

16. Karim, Mohammad Shahedul. Instantly Decodable Network Coding: From Point to Multi-Point to Device-to-Device Communications .

Degree: 2016, Australian National University

 The network coding paradigm enhances transmission efficiency by combining information flows and has drawn significant attention in information theory, networking, communications and data storage. Instantly… (more)

Subjects/Keywords: Network Coding; Wireless Communications; Video Streaming; Markov Decision Process; Graph Theory


APA (6th Edition):

Karim, M. S. (2016). Instantly Decodable Network Coding: From Point to Multi-Point to Device-to-Device Communications . (Thesis). Australian National University. Retrieved from http://hdl.handle.net/1885/118239

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Karim, Mohammad Shahedul. “Instantly Decodable Network Coding: From Point to Multi-Point to Device-to-Device Communications .” 2016. Thesis, Australian National University. Accessed January 16, 2021. http://hdl.handle.net/1885/118239.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Karim, Mohammad Shahedul. “Instantly Decodable Network Coding: From Point to Multi-Point to Device-to-Device Communications .” 2016. Web. 16 Jan 2021.

Vancouver:

Karim MS. Instantly Decodable Network Coding: From Point to Multi-Point to Device-to-Device Communications . [Internet] [Thesis]. Australian National University; 2016. [cited 2021 Jan 16]. Available from: http://hdl.handle.net/1885/118239.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Karim MS. Instantly Decodable Network Coding: From Point to Multi-Point to Device-to-Device Communications . [Thesis]. Australian National University; 2016. Available from: http://hdl.handle.net/1885/118239

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

17. Venkatraman, Pavithra. Opportunistic bandwidth sharing through reinforcement learning.

Degree: MS, Electrical and Computer Engineering, 2010, Oregon State University

 The enormous success of wireless technology has recently led to an explosive demand for, and hence a shortage of, bandwidth resources. This expected shortage problem… (more)

Subjects/Keywords: Markov decision process


APA (6th Edition):

Venkatraman, P. (2010). Opportunistic bandwidth sharing through reinforcement learning. (Masters Thesis). Oregon State University. Retrieved from http://hdl.handle.net/1957/19126

Chicago Manual of Style (16th Edition):

Venkatraman, Pavithra. “Opportunistic bandwidth sharing through reinforcement learning.” 2010. Masters Thesis, Oregon State University. Accessed January 16, 2021. http://hdl.handle.net/1957/19126.

MLA Handbook (7th Edition):

Venkatraman, Pavithra. “Opportunistic bandwidth sharing through reinforcement learning.” 2010. Web. 16 Jan 2021.

Vancouver:

Venkatraman P. Opportunistic bandwidth sharing through reinforcement learning. [Internet] [Masters thesis]. Oregon State University; 2010. [cited 2021 Jan 16]. Available from: http://hdl.handle.net/1957/19126.

Council of Science Editors:

Venkatraman P. Opportunistic bandwidth sharing through reinforcement learning. [Masters Thesis]. Oregon State University; 2010. Available from: http://hdl.handle.net/1957/19126


University of Ottawa

18. Astaraky, Davood. A Simulation Based Approximate Dynamic Programming Approach to Multi-class, Multi-resource Surgical Scheduling .

Degree: 2013, University of Ottawa

 The thesis focuses on a model that seeks to address the patient scheduling step of the surgical scheduling process to determine the number of surgeries to… (more)

Subjects/Keywords: Approximate Dynamic Programming; Surgical Scheduling; Markov Decision Process


APA (6th Edition):

Astaraky, D. (2013). A Simulation Based Approximate Dynamic Programming Approach to Multi-class, Multi-resource Surgical Scheduling . (Thesis). University of Ottawa. Retrieved from http://hdl.handle.net/10393/23622

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Astaraky, Davood. “A Simulation Based Approximate Dynamic Programming Approach to Multi-class, Multi-resource Surgical Scheduling .” 2013. Thesis, University of Ottawa. Accessed January 16, 2021. http://hdl.handle.net/10393/23622.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Astaraky, Davood. “A Simulation Based Approximate Dynamic Programming Approach to Multi-class, Multi-resource Surgical Scheduling .” 2013. Web. 16 Jan 2021.

Vancouver:

Astaraky D. A Simulation Based Approximate Dynamic Programming Approach to Multi-class, Multi-resource Surgical Scheduling . [Internet] [Thesis]. University of Ottawa; 2013. [cited 2021 Jan 16]. Available from: http://hdl.handle.net/10393/23622.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Astaraky D. A Simulation Based Approximate Dynamic Programming Approach to Multi-class, Multi-resource Surgical Scheduling . [Thesis]. University of Ottawa; 2013. Available from: http://hdl.handle.net/10393/23622

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

19. Denis, Nicholas. On Hierarchical Goal Based Reinforcement Learning .

Degree: 2019, University of Ottawa

 Discrete time sequential decision processes require that an agent select an action at each time step. As humans, we plan over long time horizons and… (more)

Subjects/Keywords: Markov decision process; Reinforcement learning; Options framework; Temporal abstraction; Macro actions


APA (6th Edition):

Denis, N. (2019). On Hierarchical Goal Based Reinforcement Learning . (Thesis). University of Ottawa. Retrieved from http://hdl.handle.net/10393/39552

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Denis, Nicholas. “On Hierarchical Goal Based Reinforcement Learning .” 2019. Thesis, University of Ottawa. Accessed January 16, 2021. http://hdl.handle.net/10393/39552.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Denis, Nicholas. “On Hierarchical Goal Based Reinforcement Learning .” 2019. Web. 16 Jan 2021.

Vancouver:

Denis N. On Hierarchical Goal Based Reinforcement Learning . [Internet] [Thesis]. University of Ottawa; 2019. [cited 2021 Jan 16]. Available from: http://hdl.handle.net/10393/39552.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Denis N. On Hierarchical Goal Based Reinforcement Learning . [Thesis]. University of Ottawa; 2019. Available from: http://hdl.handle.net/10393/39552

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Delft University of Technology

20. Walraven, E.M.P. (author). Traffic Flow Optimization using Reinforcement Learning.

Degree: 2014, Delft University of Technology

Traffic congestion causes unnecessary delay, pollution and increased fuel consumption. In this thesis we address this problem by proposing new algorithmic techniques to reduce traffic… (more)

Subjects/Keywords: reinforcement learning; markov decision process; traffic flow optimization; speed limits


APA (6th Edition):

Walraven, E. M. P. (2014). Traffic Flow Optimization using Reinforcement Learning. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:67d499a4-4398-416f-bb51-372bcaa25ac1

Chicago Manual of Style (16th Edition):

Walraven, E M P (author). “Traffic Flow Optimization using Reinforcement Learning.” 2014. Masters Thesis, Delft University of Technology. Accessed January 16, 2021. http://resolver.tudelft.nl/uuid:67d499a4-4398-416f-bb51-372bcaa25ac1.

MLA Handbook (7th Edition):

Walraven, E M P (author). “Traffic Flow Optimization using Reinforcement Learning.” 2014. Web. 16 Jan 2021.

Vancouver:

Walraven EMP. Traffic Flow Optimization using Reinforcement Learning. [Internet] [Masters thesis]. Delft University of Technology; 2014. [cited 2021 Jan 16]. Available from: http://resolver.tudelft.nl/uuid:67d499a4-4398-416f-bb51-372bcaa25ac1.

Council of Science Editors:

Walraven EMP. Traffic Flow Optimization using Reinforcement Learning. [Masters Thesis]. Delft University of Technology; 2014. Available from: http://resolver.tudelft.nl/uuid:67d499a4-4398-416f-bb51-372bcaa25ac1


University of Toronto

21. Delesalle, Samuel. Maintenance and Reliability Models for a Public Transit Bus.

Degree: 2020, University of Toronto

Public transit buses are an essential part of modern society and are prone to frequent breakdowns. The goal of this research is to develop an… (more)

Subjects/Keywords: Maintenance; Opportunistic; Reliability; Semi-Markov Decision Process; Simulation; Transit Bus; 0546


APA (6th Edition):

Delesalle, S. (2020). Maintenance and Reliability Models for a Public Transit Bus. (Masters Thesis). University of Toronto. Retrieved from http://hdl.handle.net/1807/103534

Chicago Manual of Style (16th Edition):

Delesalle, Samuel. “Maintenance and Reliability Models for a Public Transit Bus.” 2020. Masters Thesis, University of Toronto. Accessed January 16, 2021. http://hdl.handle.net/1807/103534.

MLA Handbook (7th Edition):

Delesalle, Samuel. “Maintenance and Reliability Models for a Public Transit Bus.” 2020. Web. 16 Jan 2021.

Vancouver:

Delesalle S. Maintenance and Reliability Models for a Public Transit Bus. [Internet] [Masters thesis]. University of Toronto; 2020. [cited 2021 Jan 16]. Available from: http://hdl.handle.net/1807/103534.

Council of Science Editors:

Delesalle S. Maintenance and Reliability Models for a Public Transit Bus. [Masters Thesis]. University of Toronto; 2020. Available from: http://hdl.handle.net/1807/103534


University of Windsor

22. Islam, Kingshuk Jubaer. Outsourcing Evaluation in RL Network.

Degree: MA, Industrial and Manufacturing Systems Engineering, 2012, University of Windsor

  This thesis addresses the qualitative investigation of the reverse logistics and outsourcing and a quantitative analysis of reverse logistic networks that covenant with the… (more)

Subjects/Keywords: Markov Decision Process; Outsourcing; Reverse Supply Chain; RL Network


APA (6th Edition):

Islam, K. J. (2012). Outsourcing Evaluation in RL Network. (Masters Thesis). University of Windsor. Retrieved from https://scholar.uwindsor.ca/etd/5347

Chicago Manual of Style (16th Edition):

Islam, Kingshuk Jubaer. “Outsourcing Evaluation in RL Network.” 2012. Masters Thesis, University of Windsor. Accessed January 16, 2021. https://scholar.uwindsor.ca/etd/5347.

MLA Handbook (7th Edition):

Islam, Kingshuk Jubaer. “Outsourcing Evaluation in RL Network.” 2012. Web. 16 Jan 2021.

Vancouver:

Islam KJ. Outsourcing Evaluation in RL Network. [Internet] [Masters thesis]. University of Windsor; 2012. [cited 2021 Jan 16]. Available from: https://scholar.uwindsor.ca/etd/5347.

Council of Science Editors:

Islam KJ. Outsourcing Evaluation in RL Network. [Masters Thesis]. University of Windsor; 2012. Available from: https://scholar.uwindsor.ca/etd/5347

23. Poulin, Nolan. Proactive Planning through Active Policy Inference in Stochastic Environments.

Degree: MS, 2018, Worcester Polytechnic Institute

  In multi-agent Markov Decision Processes, a controllable agent must perform optimal planning in a dynamic and uncertain environment that includes another unknown and uncontrollable… (more)

Subjects/Keywords: active learning; Markov decision process; softmax; Boltzmann; policy gradient


APA (6th Edition):

Poulin, N. (2018). Proactive Planning through Active Policy Inference in Stochastic Environments. (Thesis). Worcester Polytechnic Institute. Retrieved from etd-052818-100711 ; https://digitalcommons.wpi.edu/etd-theses/1267

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Poulin, Nolan. “Proactive Planning through Active Policy Inference in Stochastic Environments.” 2018. Thesis, Worcester Polytechnic Institute. Accessed January 16, 2021. etd-052818-100711 ; https://digitalcommons.wpi.edu/etd-theses/1267.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Poulin, Nolan. “Proactive Planning through Active Policy Inference in Stochastic Environments.” 2018. Web. 16 Jan 2021.

Vancouver:

Poulin N. Proactive Planning through Active Policy Inference in Stochastic Environments. [Internet] [Thesis]. Worcester Polytechnic Institute; 2018. [cited 2021 Jan 16]. Available from: etd-052818-100711 ; https://digitalcommons.wpi.edu/etd-theses/1267.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Poulin N. Proactive Planning through Active Policy Inference in Stochastic Environments. [Thesis]. Worcester Polytechnic Institute; 2018. Available from: etd-052818-100711 ; https://digitalcommons.wpi.edu/etd-theses/1267

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Edinburgh

24. Ahmad Mustaffa, Nurakmal. Global dual-sourcing strategy : is it effective in mitigating supply disruption?.

Degree: PhD, 2015, University of Edinburgh

 Most firms are still failing to think strategically and systematically about managing supply disruption risk and most of the supply chain management efforts are focused… (more)

Subjects/Keywords: 658.7; supply disruption; discrete time Markov decision process; DTMDP


APA (6th Edition):

Ahmad Mustaffa, N. (2015). Global dual-sourcing strategy : is it effective in mitigating supply disruption?. (Doctoral Dissertation). University of Edinburgh. Retrieved from http://hdl.handle.net/1842/21046

Chicago Manual of Style (16th Edition):

Ahmad Mustaffa, Nurakmal. “Global dual-sourcing strategy : is it effective in mitigating supply disruption?.” 2015. Doctoral Dissertation, University of Edinburgh. Accessed January 16, 2021. http://hdl.handle.net/1842/21046.

MLA Handbook (7th Edition):

Ahmad Mustaffa, Nurakmal. “Global dual-sourcing strategy : is it effective in mitigating supply disruption?.” 2015. Web. 16 Jan 2021.

Vancouver:

Ahmad Mustaffa N. Global dual-sourcing strategy : is it effective in mitigating supply disruption?. [Internet] [Doctoral dissertation]. University of Edinburgh; 2015. [cited 2021 Jan 16]. Available from: http://hdl.handle.net/1842/21046.

Council of Science Editors:

Ahmad Mustaffa N. Global dual-sourcing strategy : is it effective in mitigating supply disruption?. [Doctoral Dissertation]. University of Edinburgh; 2015. Available from: http://hdl.handle.net/1842/21046


Queens University

25. Cownden, Daniel. Evolutionarily Stable Learning and Foraging Strategies .

Degree: Mathematics and Statistics, 2012, Queens University

 This thesis examines a series of problems with the goal of better understanding the fundamental dilemma of whether to invest effort in obtaining information that… (more)

Subjects/Keywords: Evolutionary Game Theory ; Partially Observable Markov Decision Process


APA (6th Edition):

Cownden, D. (2012). Evolutionarily Stable Learning and Foraging Strategies . (Thesis). Queens University. Retrieved from http://hdl.handle.net/1974/6999

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Cownden, Daniel. “Evolutionarily Stable Learning and Foraging Strategies .” 2012. Thesis, Queens University. Accessed January 16, 2021. http://hdl.handle.net/1974/6999.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Cownden, Daniel. “Evolutionarily Stable Learning and Foraging Strategies .” 2012. Web. 16 Jan 2021.

Vancouver:

Cownden D. Evolutionarily Stable Learning and Foraging Strategies . [Internet] [Thesis]. Queens University; 2012. [cited 2021 Jan 16]. Available from: http://hdl.handle.net/1974/6999.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Cownden D. Evolutionarily Stable Learning and Foraging Strategies . [Thesis]. Queens University; 2012. Available from: http://hdl.handle.net/1974/6999

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Louisiana State University

26. Irshad, Ahmed Syed. Fuzzifying [sic] Markov decision process.

Degree: MSEE, Electrical and Computer Engineering, 2005, Louisiana State University

Markov decision processes have become an indispensable tool in applications as diverse as equipment maintenance, manufacturing systems, inventory control, queuing networks and investment analysis. Typically… (more)

Subjects/Keywords: markov decision process; fuzzy membership


APA (6th Edition):

Irshad, A. S. (2005). Fuzzifying [sic] Markov decision process. (Masters Thesis). Louisiana State University. Retrieved from etd-04112005-224801 ; https://digitalcommons.lsu.edu/gradschool_theses/1373

Chicago Manual of Style (16th Edition):

Irshad, Ahmed Syed. “Fuzzifying [sic] Markov decision process.” 2005. Masters Thesis, Louisiana State University. Accessed January 16, 2021. etd-04112005-224801 ; https://digitalcommons.lsu.edu/gradschool_theses/1373.

MLA Handbook (7th Edition):

Irshad, Ahmed Syed. “Fuzzifying [sic] Markov decision process.” 2005. Web. 16 Jan 2021.

Vancouver:

Irshad AS. Fuzzifying [sic] Markov decision process. [Internet] [Masters thesis]. Louisiana State University; 2005. [cited 2021 Jan 16]. Available from: etd-04112005-224801 ; https://digitalcommons.lsu.edu/gradschool_theses/1373.

Council of Science Editors:

Irshad AS. Fuzzifying [sic] Markov decision process. [Masters Thesis]. Louisiana State University; 2005. Available from: etd-04112005-224801 ; https://digitalcommons.lsu.edu/gradschool_theses/1373


University of Georgia

27. Bogert, Kenneth Daniel. Inverse reinforcement learning for robotic applications.

Degree: 2017, University of Georgia

 Robots deployed into many real-world scenarios are expected to face situations that their designers could not anticipate. Machine learning is an effective tool for extending… (more)

Subjects/Keywords: robotics; inverse reinforcement learning; machine learning; Markov decision process


APA (6th Edition):

Bogert, K. D. (2017). Inverse reinforcement learning for robotic applications. (Thesis). University of Georgia. Retrieved from http://hdl.handle.net/10724/36625

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Bogert, Kenneth Daniel. “Inverse reinforcement learning for robotic applications.” 2017. Thesis, University of Georgia. Accessed January 16, 2021. http://hdl.handle.net/10724/36625.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Bogert, Kenneth Daniel. “Inverse reinforcement learning for robotic applications.” 2017. Web. 16 Jan 2021.

Vancouver:

Bogert KD. Inverse reinforcement learning for robotic applications. [Internet] [Thesis]. University of Georgia; 2017. [cited 2021 Jan 16]. Available from: http://hdl.handle.net/10724/36625.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Bogert KD. Inverse reinforcement learning for robotic applications. [Thesis]. University of Georgia; 2017. Available from: http://hdl.handle.net/10724/36625

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of New South Wales

28. Bokani, Ayub. Dynamic adaptation of HTTP-based video streaming using Markov decision process.

Degree: Computer Science & Engineering, 2015, University of New South Wales

 Hypertext transfer protocol (HTTP) is the fundamental mechanics supporting web browsing on the Internet. An HTTP server stores large volumes of contents and delivers specific… (more)

Subjects/Keywords: Dynamic Adaptive Streaming over HTTP; Video Streaming; Markov Decision Process; DASH


APA (6th Edition):

Bokani, A. (2015). Dynamic adaptation of HTTP-based video streaming using Markov decision process. (Doctoral Dissertation). University of New South Wales. Retrieved from http://handle.unsw.edu.au/1959.4/55827 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:39485/SOURCE02?view=true

Chicago Manual of Style (16th Edition):

Bokani, Ayub. “Dynamic adaptation of HTTP-based video streaming using Markov decision process.” 2015. Doctoral Dissertation, University of New South Wales. Accessed January 16, 2021. http://handle.unsw.edu.au/1959.4/55827 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:39485/SOURCE02?view=true.

MLA Handbook (7th Edition):

Bokani, Ayub. “Dynamic adaptation of HTTP-based video streaming using Markov decision process.” 2015. Web. 16 Jan 2021.

Vancouver:

Bokani A. Dynamic adaptation of HTTP-based video streaming using Markov decision process. [Internet] [Doctoral dissertation]. University of New South Wales; 2015. [cited 2021 Jan 16]. Available from: http://handle.unsw.edu.au/1959.4/55827 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:39485/SOURCE02?view=true.

Council of Science Editors:

Bokani A. Dynamic adaptation of HTTP-based video streaming using Markov decision process. [Doctoral Dissertation]. University of New South Wales; 2015. Available from: http://handle.unsw.edu.au/1959.4/55827 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:39485/SOURCE02?view=true


The Ohio State University

29. Swang, Theodore W, II. A Mathematical Model for the Energy Allocation Function of Sleep.

Degree: PhD, Mathematics, 2017, The Ohio State University

 The function of sleep remains one of the greatest unsolved questions in biology. Schmidt has proposed the unifying Energy Allocation Function of sleep, which posits… (more)

Subjects/Keywords: Mathematics; Sleep; mathematical biology; differential equations; Markov decision process


APA (6th Edition):

Swang, Theodore W, I. (2017). A Mathematical Model for the Energy Allocation Function of Sleep. (Doctoral Dissertation). The Ohio State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=osu1483392711778623

Chicago Manual of Style (16th Edition):

Swang, Theodore W, II. “A Mathematical Model for the Energy Allocation Function of Sleep.” 2017. Doctoral Dissertation, The Ohio State University. Accessed January 16, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=osu1483392711778623.

MLA Handbook (7th Edition):

Swang, Theodore W, II. “A Mathematical Model for the Energy Allocation Function of Sleep.” 2017. Web. 16 Jan 2021.

Vancouver:

Swang, Theodore W I. A Mathematical Model for the Energy Allocation Function of Sleep. [Internet] [Doctoral dissertation]. The Ohio State University; 2017. [cited 2021 Jan 16]. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1483392711778623.

Council of Science Editors:

Swang, Theodore W I. A Mathematical Model for the Energy Allocation Function of Sleep. [Doctoral Dissertation]. The Ohio State University; 2017. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1483392711778623


Texas State University – San Marcos

30. Kelly, Janiece. The Effect of Decoy Attacks on Dynamic Channel Assignment.

Degree: MS, Computer Science, 2014, Texas State University – San Marcos

 As networks grow rapidly denser with the introduction of wireless-enabled cars, wearables and appliances, signal interference coupled with limited radio spectrum availability emerges as a… (more)

Subjects/Keywords: Computer Science; Security; Dynamic Channel Assignment; Markov Decision Process


APA (6th Edition):

Kelly, J. (2014). The Effect of Decoy Attacks on Dynamic Channel Assignment. (Masters Thesis). Texas State University – San Marcos. Retrieved from https://digital.library.txstate.edu/handle/10877/6370

Chicago Manual of Style (16th Edition):

Kelly, Janiece. “The Effect of Decoy Attacks on Dynamic Channel Assignment.” 2014. Masters Thesis, Texas State University – San Marcos. Accessed January 16, 2021. https://digital.library.txstate.edu/handle/10877/6370.

MLA Handbook (7th Edition):

Kelly, Janiece. “The Effect of Decoy Attacks on Dynamic Channel Assignment.” 2014. Web. 16 Jan 2021.

Vancouver:

Kelly J. The Effect of Decoy Attacks on Dynamic Channel Assignment. [Internet] [Masters thesis]. Texas State University – San Marcos; 2014. [cited 2021 Jan 16]. Available from: https://digital.library.txstate.edu/handle/10877/6370.

Council of Science Editors:

Kelly J. The Effect of Decoy Attacks on Dynamic Channel Assignment. [Masters Thesis]. Texas State University – San Marcos; 2014. Available from: https://digital.library.txstate.edu/handle/10877/6370
