
You searched for subject:(Markov decision process). Showing records 1 – 30 of 214 total matches.




University of Georgia

1. Perez Barrenechea, Dennis David. Anytime point based approximations for interactive POMDPs.

Degree: MS, Artificial Intelligence, 2007, University of Georgia

Partially observable Markov decision processes (POMDPs) have been largely accepted as a rich framework for planning and control problems. In settings where multiple agents interact, POMDPs… (more)

Subjects/Keywords: Markov Decision Process


APA (6th Edition):

Perez Barrenechea, D. D. (2007). Anytime point based approximations for interactive POMDPs. (Masters Thesis). University of Georgia. Retrieved from http://purl.galileo.usg.edu/uga_etd/perez-barrenechea_dennis_d_200712_ms

Chicago Manual of Style (16th Edition):

Perez Barrenechea, Dennis David. “Anytime point based approximations for interactive POMDPs.” 2007. Masters Thesis, University of Georgia. Accessed October 21, 2019. http://purl.galileo.usg.edu/uga_etd/perez-barrenechea_dennis_d_200712_ms.

MLA Handbook (7th Edition):

Perez Barrenechea, Dennis David. “Anytime point based approximations for interactive POMDPs.” 2007. Web. 21 Oct 2019.

Vancouver:

Perez Barrenechea DD. Anytime point based approximations for interactive POMDPs. [Internet] [Masters thesis]. University of Georgia; 2007. [cited 2019 Oct 21]. Available from: http://purl.galileo.usg.edu/uga_etd/perez-barrenechea_dennis_d_200712_ms.

Council of Science Editors:

Perez Barrenechea DD. Anytime point based approximations for interactive POMDPs. [Masters Thesis]. University of Georgia; 2007. Available from: http://purl.galileo.usg.edu/uga_etd/perez-barrenechea_dennis_d_200712_ms


University of Minnesota

2. Liu, Yuhang. Optimal serving schedules for multiple queues with size-independent service times.

Degree: MS, Industrial and Systems Engineering, 2013, University of Minnesota

University of Minnesota M.S. thesis. May 2013. Major: Industrial and Systems Engineering. Advisor: Zizhuo Wang. 1 computer file (PDF); v, 37 pages.

We consider a service system with two Poisson arrival queues. There is a single server that chooses which queue to serve at each moment. Once… (more)

Subjects/Keywords: Markov decision process; Queuing theory; Traffic schedule


APA (6th Edition):

Liu, Y. (2013). Optimal serving schedules for multiple queues with size-independent service times. (Masters Thesis). University of Minnesota. Retrieved from http://purl.umn.edu/160186

Chicago Manual of Style (16th Edition):

Liu, Yuhang. “Optimal serving schedules for multiple queues with size-independent service times.” 2013. Masters Thesis, University of Minnesota. Accessed October 21, 2019. http://purl.umn.edu/160186.

MLA Handbook (7th Edition):

Liu, Yuhang. “Optimal serving schedules for multiple queues with size-independent service times.” 2013. Web. 21 Oct 2019.

Vancouver:

Liu Y. Optimal serving schedules for multiple queues with size-independent service times. [Internet] [Masters thesis]. University of Minnesota; 2013. [cited 2019 Oct 21]. Available from: http://purl.umn.edu/160186.

Council of Science Editors:

Liu Y. Optimal serving schedules for multiple queues with size-independent service times. [Masters Thesis]. University of Minnesota; 2013. Available from: http://purl.umn.edu/160186




University of Manitoba

4. Gunasekara, Charith. Optimal threshold policy for opportunistic network coding under phase type arrivals.

Degree: Electrical and Computer Engineering, 2016, University of Manitoba

 Network coding allows each node in a network to perform some coding operations on the data packets and improve the overall throughput of communication. However,… (more)

Subjects/Keywords: Queueing Theory; Network Coding; Markov Decision Process


APA (6th Edition):

Gunasekara, C. (2016). Optimal threshold policy for opportunistic network coding under phase type arrivals. (Thesis). University of Manitoba. Retrieved from http://hdl.handle.net/1993/31615

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Gunasekara, Charith. “Optimal threshold policy for opportunistic network coding under phase type arrivals.” 2016. Thesis, University of Manitoba. Accessed October 21, 2019. http://hdl.handle.net/1993/31615.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Gunasekara, Charith. “Optimal threshold policy for opportunistic network coding under phase type arrivals.” 2016. Web. 21 Oct 2019.

Vancouver:

Gunasekara C. Optimal threshold policy for opportunistic network coding under phase type arrivals. [Internet] [Thesis]. University of Manitoba; 2016. [cited 2019 Oct 21]. Available from: http://hdl.handle.net/1993/31615.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Gunasekara C. Optimal threshold policy for opportunistic network coding under phase type arrivals. [Thesis]. University of Manitoba; 2016. Available from: http://hdl.handle.net/1993/31615

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

5. Shirmohammadi, Mahsa. Qualitative analysis of synchronizing probabilistic systems : Analyse qualitative des systèmes probabilistes synchronisants.

Degree: Docteur es, Informatique, 2014, Cachan, Ecole normale supérieure; Université libre de Bruxelles (1970-....)

Markov decision processes (MDPs) are finite probabilistic systems with both random choices and strategies, and are thus recognized as… (more)

Subjects/Keywords: Markov decision process; Probabilistic automata; Synchronising words


APA (6th Edition):

Shirmohammadi, M. (2014). Qualitative analysis of synchronizing probabilistic systems : Analyse qualitative des systèmes probabilistes synchronisants. (Doctoral Dissertation). Cachan, Ecole normale supérieure; Université libre de Bruxelles (1970-....). Retrieved from http://www.theses.fr/2014DENS0054

Chicago Manual of Style (16th Edition):

Shirmohammadi, Mahsa. “Qualitative analysis of synchronizing probabilistic systems : Analyse qualitative des systèmes probabilistes synchronisants.” 2014. Doctoral Dissertation, Cachan, Ecole normale supérieure; Université libre de Bruxelles (1970-....). Accessed October 21, 2019. http://www.theses.fr/2014DENS0054.

MLA Handbook (7th Edition):

Shirmohammadi, Mahsa. “Qualitative analysis of synchronizing probabilistic systems : Analyse qualitative des systèmes probabilistes synchronisants.” 2014. Web. 21 Oct 2019.

Vancouver:

Shirmohammadi M. Qualitative analysis of synchronizing probabilistic systems : Analyse qualitative des systèmes probabilistes synchronisants. [Internet] [Doctoral dissertation]. Cachan, Ecole normale supérieure; Université libre de Bruxelles (1970-....); 2014. [cited 2019 Oct 21]. Available from: http://www.theses.fr/2014DENS0054.

Council of Science Editors:

Shirmohammadi M. Qualitative analysis of synchronizing probabilistic systems : Analyse qualitative des systèmes probabilistes synchronisants. [Doctoral Dissertation]. Cachan, Ecole normale supérieure; Université libre de Bruxelles (1970-....); 2014. Available from: http://www.theses.fr/2014DENS0054


University of Akron

6. Chippa, Mukesh K. Goal-seeking Decision Support System to Empower Personal Wellness Management.

Degree: PhD, Computer Engineering, 2016, University of Akron

Obesity has reached epidemic proportions globally, with more than one billion adults overweight and at least three hundred million of them clinically obese; this is a major… (more)

Subjects/Keywords: Computer Engineering; decision support system; personalized wellness management; goal-seeking paradigm; Markov decision process; partially observable Markov decision process


APA (6th Edition):

Chippa, M. K. (2016). Goal-seeking Decision Support System to Empower Personal Wellness Management. (Doctoral Dissertation). University of Akron. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=akron1480413936639467

Chicago Manual of Style (16th Edition):

Chippa, Mukesh K. “Goal-seeking Decision Support System to Empower Personal Wellness Management.” 2016. Doctoral Dissertation, University of Akron. Accessed October 21, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=akron1480413936639467.

MLA Handbook (7th Edition):

Chippa, Mukesh K. “Goal-seeking Decision Support System to Empower Personal Wellness Management.” 2016. Web. 21 Oct 2019.

Vancouver:

Chippa MK. Goal-seeking Decision Support System to Empower Personal Wellness Management. [Internet] [Doctoral dissertation]. University of Akron; 2016. [cited 2019 Oct 21]. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=akron1480413936639467.

Council of Science Editors:

Chippa MK. Goal-seeking Decision Support System to Empower Personal Wellness Management. [Doctoral Dissertation]. University of Akron; 2016. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=akron1480413936639467


Queensland University of Technology

7. Al Sabban, Wesam H. Autonomous vehicle path planning for persistence monitoring under uncertainty using Gaussian based Markov decision process.

Degree: 2015, Queensland University of Technology

 One of the main challenges facing online and offline path planners is the uncertainty in the magnitude and direction of the environmental energy because it… (more)

Subjects/Keywords: Autonomous Vehicle Path Planning; Path Planning Under Uncertainty; Markov Decision Process; Gaussian Based Markov Decision Process; GMDP; UAV; UAS; AUV


APA (6th Edition):

Al Sabban, W. H. (2015). Autonomous vehicle path planning for persistence monitoring under uncertainty using Gaussian based Markov decision process. (Thesis). Queensland University of Technology. Retrieved from https://eprints.qut.edu.au/82297/

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Al Sabban, Wesam H. “Autonomous vehicle path planning for persistence monitoring under uncertainty using Gaussian based Markov decision process.” 2015. Thesis, Queensland University of Technology. Accessed October 21, 2019. https://eprints.qut.edu.au/82297/.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Al Sabban, Wesam H. “Autonomous vehicle path planning for persistence monitoring under uncertainty using Gaussian based Markov decision process.” 2015. Web. 21 Oct 2019.

Vancouver:

Al Sabban WH. Autonomous vehicle path planning for persistence monitoring under uncertainty using Gaussian based Markov decision process. [Internet] [Thesis]. Queensland University of Technology; 2015. [cited 2019 Oct 21]. Available from: https://eprints.qut.edu.au/82297/.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Al Sabban WH. Autonomous vehicle path planning for persistence monitoring under uncertainty using Gaussian based Markov decision process. [Thesis]. Queensland University of Technology; 2015. Available from: https://eprints.qut.edu.au/82297/

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Newcastle

8. Abed-Alguni, Bilal Hashem Kalil. Cooperative reinforcement learning for independent learners.

Degree: PhD, 2014, University of Newcastle

Research Doctorate - Doctor of Philosophy (PhD)

Machine learning in multi-agent domains poses several research challenges. One challenge is how to model cooperation between reinforcement… (more)

Subjects/Keywords: reinforcement learning; Q-learning; multi-agent system; distributed system; Markov decision process; factored Markov decision process; cooperation


APA (6th Edition):

Abed-Alguni, B. H. K. (2014). Cooperative reinforcement learning for independent learners. (Doctoral Dissertation). University of Newcastle. Retrieved from http://hdl.handle.net/1959.13/1052917

Chicago Manual of Style (16th Edition):

Abed-Alguni, Bilal Hashem Kalil. “Cooperative reinforcement learning for independent learners.” 2014. Doctoral Dissertation, University of Newcastle. Accessed October 21, 2019. http://hdl.handle.net/1959.13/1052917.

MLA Handbook (7th Edition):

Abed-Alguni, Bilal Hashem Kalil. “Cooperative reinforcement learning for independent learners.” 2014. Web. 21 Oct 2019.

Vancouver:

Abed-Alguni BHK. Cooperative reinforcement learning for independent learners. [Internet] [Doctoral dissertation]. University of Newcastle; 2014. [cited 2019 Oct 21]. Available from: http://hdl.handle.net/1959.13/1052917.

Council of Science Editors:

Abed-Alguni BHK. Cooperative reinforcement learning for independent learners. [Doctoral Dissertation]. University of Newcastle; 2014. Available from: http://hdl.handle.net/1959.13/1052917


University of Illinois – Urbana-Champaign

9. Baharian Khoshkhou, Golshid. Stochastic sequential assignment problem.

Degree: PhD, 0127, 2014, University of Illinois – Urbana-Champaign

 The stochastic sequential assignment problem (SSAP) studies the allocation of available distinct workers with deterministic values to sequentially-arriving tasks with stochastic parameters so as to… (more)

Subjects/Keywords: sequential assignment; Markov decision process; stationary policy; hidden Markov model; threshold criteria; risk measure


APA (6th Edition):

Baharian Khoshkhou, G. (2014). Stochastic sequential assignment problem. (Doctoral Dissertation). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/50503

Chicago Manual of Style (16th Edition):

Baharian Khoshkhou, Golshid. “Stochastic sequential assignment problem.” 2014. Doctoral Dissertation, University of Illinois – Urbana-Champaign. Accessed October 21, 2019. http://hdl.handle.net/2142/50503.

MLA Handbook (7th Edition):

Baharian Khoshkhou, Golshid. “Stochastic sequential assignment problem.” 2014. Web. 21 Oct 2019.

Vancouver:

Baharian Khoshkhou G. Stochastic sequential assignment problem. [Internet] [Doctoral dissertation]. University of Illinois – Urbana-Champaign; 2014. [cited 2019 Oct 21]. Available from: http://hdl.handle.net/2142/50503.

Council of Science Editors:

Baharian Khoshkhou G. Stochastic sequential assignment problem. [Doctoral Dissertation]. University of Illinois – Urbana-Champaign; 2014. Available from: http://hdl.handle.net/2142/50503


University of Ottawa

10. Denis, Nicholas. On Hierarchical Goal Based Reinforcement Learning.

Degree: 2019, University of Ottawa

 Discrete time sequential decision processes require that an agent select an action at each time step. As humans, we plan over long time horizons and… (more)

Subjects/Keywords: Markov decision process; Reinforcement learning; Options framework; Temporal abstraction; Macro actions


APA (6th Edition):

Denis, N. (2019). On Hierarchical Goal Based Reinforcement Learning. (Thesis). University of Ottawa. Retrieved from http://hdl.handle.net/10393/39552

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Denis, Nicholas. “On Hierarchical Goal Based Reinforcement Learning.” 2019. Thesis, University of Ottawa. Accessed October 21, 2019. http://hdl.handle.net/10393/39552.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Denis, Nicholas. “On Hierarchical Goal Based Reinforcement Learning.” 2019. Web. 21 Oct 2019.

Vancouver:

Denis N. On Hierarchical Goal Based Reinforcement Learning. [Internet] [Thesis]. University of Ottawa; 2019. [cited 2019 Oct 21]. Available from: http://hdl.handle.net/10393/39552.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Denis N. On Hierarchical Goal Based Reinforcement Learning. [Thesis]. University of Ottawa; 2019. Available from: http://hdl.handle.net/10393/39552

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Penn State University

11. Hu, Nan. Stochastic Resource Allocation Strategies With Uncertain Information In Sensor Networks.

Degree: PhD, Computer Science and Engineering, 2016, Penn State University

 Support for intelligent and autonomous resource management is one of the key factors to the success of modern sensor network systems. The limited resources, such… (more)

Subjects/Keywords: stochastic resource allocation; markov decision process; uncertainty; sensor network


APA (6th Edition):

Hu, N. (2016). Stochastic Resource Allocation Strategies With Uncertain Information In Sensor Networks. (Doctoral Dissertation). Penn State University. Retrieved from https://etda.libraries.psu.edu/catalog/13593nqh5045

Chicago Manual of Style (16th Edition):

Hu, Nan. “Stochastic Resource Allocation Strategies With Uncertain Information In Sensor Networks.” 2016. Doctoral Dissertation, Penn State University. Accessed October 21, 2019. https://etda.libraries.psu.edu/catalog/13593nqh5045.

MLA Handbook (7th Edition):

Hu, Nan. “Stochastic Resource Allocation Strategies With Uncertain Information In Sensor Networks.” 2016. Web. 21 Oct 2019.

Vancouver:

Hu N. Stochastic Resource Allocation Strategies With Uncertain Information In Sensor Networks. [Internet] [Doctoral dissertation]. Penn State University; 2016. [cited 2019 Oct 21]. Available from: https://etda.libraries.psu.edu/catalog/13593nqh5045.

Council of Science Editors:

Hu N. Stochastic Resource Allocation Strategies With Uncertain Information In Sensor Networks. [Doctoral Dissertation]. Penn State University; 2016. Available from: https://etda.libraries.psu.edu/catalog/13593nqh5045


Louisiana State University

12. Irshad, Ahmed Syed. Fuzzifying [sic] Markov decision process.

Degree: MSEE, Electrical and Computer Engineering, 2005, Louisiana State University

Markov decision processes have become an indispensable tool in applications as diverse as equipment maintenance, manufacturing systems, inventory control, queuing networks and investment analysis. Typically… (more)

Subjects/Keywords: markov decision process; fuzzy membership


APA (6th Edition):

Irshad, A. S. (2005). Fuzzifying [sic] Markov decision process. (Masters Thesis). Louisiana State University. Retrieved from etd-04112005-224801 ; https://digitalcommons.lsu.edu/gradschool_theses/1373

Chicago Manual of Style (16th Edition):

Irshad, Ahmed Syed. “Fuzzifying [sic] Markov decision process.” 2005. Masters Thesis, Louisiana State University. Accessed October 21, 2019. etd-04112005-224801 ; https://digitalcommons.lsu.edu/gradschool_theses/1373.

MLA Handbook (7th Edition):

Irshad, Ahmed Syed. “Fuzzifying [sic] Markov decision process.” 2005. Web. 21 Oct 2019.

Vancouver:

Irshad AS. Fuzzifying [sic] Markov decision process. [Internet] [Masters thesis]. Louisiana State University; 2005. [cited 2019 Oct 21]. Available from: etd-04112005-224801 ; https://digitalcommons.lsu.edu/gradschool_theses/1373.

Council of Science Editors:

Irshad AS. Fuzzifying [sic] Markov decision process. [Masters Thesis]. Louisiana State University; 2005. Available from: etd-04112005-224801 ; https://digitalcommons.lsu.edu/gradschool_theses/1373


University of California – Berkeley

13. Malek, Alan. Efficient Sequential Decision Making.

Degree: Electrical Engineering & Computer Sciences, 2017, University of California – Berkeley

 This thesis studies three problems in sequential decision making across two different frameworks. The first framework we consider is online learning: for each round of… (more)

Subjects/Keywords: Computer science; markov decision process; Minimax algorithm; Online Learning


APA (6th Edition):

Malek, A. (2017). Efficient Sequential Decision Making. (Thesis). University of California – Berkeley. Retrieved from http://www.escholarship.org/uc/item/0qm524f5

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Malek, Alan. “Efficient Sequential Decision Making.” 2017. Thesis, University of California – Berkeley. Accessed October 21, 2019. http://www.escholarship.org/uc/item/0qm524f5.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Malek, Alan. “Efficient Sequential Decision Making.” 2017. Web. 21 Oct 2019.

Vancouver:

Malek A. Efficient Sequential Decision Making. [Internet] [Thesis]. University of California – Berkeley; 2017. [cited 2019 Oct 21]. Available from: http://www.escholarship.org/uc/item/0qm524f5.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Malek A. Efficient Sequential Decision Making. [Thesis]. University of California – Berkeley; 2017. Available from: http://www.escholarship.org/uc/item/0qm524f5

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Queensland University of Technology

14. Glover, Arren John. Developing grounded representations for robots through the principles of sensorimotor coordination.

Degree: 2014, Queensland University of Technology

 Robots currently recognise and use objects through algorithms that are hand-coded or specifically trained. Such robots can operate in known, structured environments but cannot learn… (more)

Subjects/Keywords: Robotics; Affordance; Visual Object Recognition; Symbol Grounding; Markov Decision Process


APA (6th Edition):

Glover, A. J. (2014). Developing grounded representations for robots through the principles of sensorimotor coordination. (Thesis). Queensland University of Technology. Retrieved from https://eprints.qut.edu.au/71763/

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Glover, Arren John. “Developing grounded representations for robots through the principles of sensorimotor coordination.” 2014. Thesis, Queensland University of Technology. Accessed October 21, 2019. https://eprints.qut.edu.au/71763/.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Glover, Arren John. “Developing grounded representations for robots through the principles of sensorimotor coordination.” 2014. Web. 21 Oct 2019.

Vancouver:

Glover AJ. Developing grounded representations for robots through the principles of sensorimotor coordination. [Internet] [Thesis]. Queensland University of Technology; 2014. [cited 2019 Oct 21]. Available from: https://eprints.qut.edu.au/71763/.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Glover AJ. Developing grounded representations for robots through the principles of sensorimotor coordination. [Thesis]. Queensland University of Technology; 2014. Available from: https://eprints.qut.edu.au/71763/

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


The Ohio State University

15. Swang, Theodore W, II. A Mathematical Model for the Energy Allocation Function of Sleep.

Degree: PhD, Mathematics, 2017, The Ohio State University

 The function of sleep remains one of the greatest unsolved questions in biology. Schmidt has proposed the unifying Energy Allocation Function of sleep, which posits… (more)

Subjects/Keywords: Mathematics; Sleep; mathematical biology; differential equations; Markov decision process


APA (6th Edition):

Swang, T. W., II. (2017). A Mathematical Model for the Energy Allocation Function of Sleep. (Doctoral Dissertation). The Ohio State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=osu1483392711778623

Chicago Manual of Style (16th Edition):

Swang, Theodore W, II. “A Mathematical Model for the Energy Allocation Function of Sleep.” 2017. Doctoral Dissertation, The Ohio State University. Accessed October 21, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1483392711778623.

MLA Handbook (7th Edition):

Swang, Theodore W, II. “A Mathematical Model for the Energy Allocation Function of Sleep.” 2017. Web. 21 Oct 2019.

Vancouver:

Swang TW II. A Mathematical Model for the Energy Allocation Function of Sleep. [Internet] [Doctoral dissertation]. The Ohio State University; 2017. [cited 2019 Oct 21]. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1483392711778623.

Council of Science Editors:

Swang TW II. A Mathematical Model for the Energy Allocation Function of Sleep. [Doctoral Dissertation]. The Ohio State University; 2017. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1483392711778623


Virginia Tech

16. Selvi, Ersin Suleyman. Cognitive Radar Applied To Target Tracking Using Markov Decision Processes.

Degree: MS, Electrical Engineering, 2018, Virginia Tech

 The radio-frequency spectrum is a precious resource, with many applications and users, especially with the recent spectrum auction in the United States. Future platforms and… (more)

Subjects/Keywords: Cognitive radar; target tracking; Markov decision process; interference mitigation; spectrum coexistence


APA (6th Edition):

Selvi, E. S. (2018). Cognitive Radar Applied To Target Tracking Using Markov Decision Processes. (Masters Thesis). Virginia Tech. Retrieved from http://hdl.handle.net/10919/81968

Chicago Manual of Style (16th Edition):

Selvi, Ersin Suleyman. “Cognitive Radar Applied To Target Tracking Using Markov Decision Processes.” 2018. Masters Thesis, Virginia Tech. Accessed October 21, 2019. http://hdl.handle.net/10919/81968.

MLA Handbook (7th Edition):

Selvi, Ersin Suleyman. “Cognitive Radar Applied To Target Tracking Using Markov Decision Processes.” 2018. Web. 21 Oct 2019.

Vancouver:

Selvi ES. Cognitive Radar Applied To Target Tracking Using Markov Decision Processes. [Internet] [Masters thesis]. Virginia Tech; 2018. [cited 2019 Oct 21]. Available from: http://hdl.handle.net/10919/81968.

Council of Science Editors:

Selvi ES. Cognitive Radar Applied To Target Tracking Using Markov Decision Processes. [Masters Thesis]. Virginia Tech; 2018. Available from: http://hdl.handle.net/10919/81968

17. Venkatraman, Pavithra. Opportunistic bandwidth sharing through reinforcement learning.

Degree: MS, Electrical and Computer Engineering, 2010, Oregon State University

 The enormous success of wireless technology has recently led to an explosive demand for, and hence a shortage of, bandwidth resources. This expected shortage problem… (more)

Subjects/Keywords: Markov decision process

APA (6th Edition):

Venkatraman, P. (2010). Opportunistic bandwidth sharing through reinforcement learning. (Masters Thesis). Oregon State University. Retrieved from http://hdl.handle.net/1957/19126

Chicago Manual of Style (16th Edition):

Venkatraman, Pavithra. “Opportunistic bandwidth sharing through reinforcement learning.” 2010. Masters Thesis, Oregon State University. Accessed October 21, 2019. http://hdl.handle.net/1957/19126.

MLA Handbook (7th Edition):

Venkatraman, Pavithra. “Opportunistic bandwidth sharing through reinforcement learning.” 2010. Web. 21 Oct 2019.

Vancouver:

Venkatraman P. Opportunistic bandwidth sharing through reinforcement learning. [Internet] [Masters thesis]. Oregon State University; 2010. [cited 2019 Oct 21]. Available from: http://hdl.handle.net/1957/19126.

Council of Science Editors:

Venkatraman P. Opportunistic bandwidth sharing through reinforcement learning. [Masters Thesis]. Oregon State University; 2010. Available from: http://hdl.handle.net/1957/19126


Cornell University

18. Kumar, Ravi. Dynamic Resource Management For Systems With Controllable Service Capacity .

Degree: 2015, Cornell University

 The rise in Internet traffic volumes has led to growing interest in reducing the energy costs of IT infrastructure. Resource management policies for such… (more)

Subjects/Keywords: Markov Decision Process; Service Rate Control; Dynamic Power Management

APA (6th Edition):

Kumar, R. (2015). Dynamic Resource Management For Systems With Controllable Service Capacity . (Thesis). Cornell University. Retrieved from http://hdl.handle.net/1813/41011

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Kumar, Ravi. “Dynamic Resource Management For Systems With Controllable Service Capacity .” 2015. Thesis, Cornell University. Accessed October 21, 2019. http://hdl.handle.net/1813/41011.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Kumar, Ravi. “Dynamic Resource Management For Systems With Controllable Service Capacity .” 2015. Web. 21 Oct 2019.

Vancouver:

Kumar R. Dynamic Resource Management For Systems With Controllable Service Capacity . [Internet] [Thesis]. Cornell University; 2015. [cited 2019 Oct 21]. Available from: http://hdl.handle.net/1813/41011.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Kumar R. Dynamic Resource Management For Systems With Controllable Service Capacity . [Thesis]. Cornell University; 2015. Available from: http://hdl.handle.net/1813/41011

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Delft University of Technology

19. Walraven, E.M.P. Traffic Flow Optimization using Reinforcement Learning:.

Degree: 2014, Delft University of Technology

 Traffic congestion causes unnecessary delay, pollution and increased fuel consumption. In this thesis we address this problem by proposing new algorithmic techniques to reduce traffic… (more)

Subjects/Keywords: reinforcement learning; markov decision process; traffic flow optimization; speed limits

APA (6th Edition):

Walraven, E. M. P. (2014). Traffic Flow Optimization using Reinforcement Learning:. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:67d499a4-4398-416f-bb51-372bcaa25ac1

Chicago Manual of Style (16th Edition):

Walraven, E M P. “Traffic Flow Optimization using Reinforcement Learning:.” 2014. Masters Thesis, Delft University of Technology. Accessed October 21, 2019. http://resolver.tudelft.nl/uuid:67d499a4-4398-416f-bb51-372bcaa25ac1.

MLA Handbook (7th Edition):

Walraven, E M P. “Traffic Flow Optimization using Reinforcement Learning:.” 2014. Web. 21 Oct 2019.

Vancouver:

Walraven EMP. Traffic Flow Optimization using Reinforcement Learning:. [Internet] [Masters thesis]. Delft University of Technology; 2014. [cited 2019 Oct 21]. Available from: http://resolver.tudelft.nl/uuid:67d499a4-4398-416f-bb51-372bcaa25ac1.

Council of Science Editors:

Walraven EMP. Traffic Flow Optimization using Reinforcement Learning:. [Masters Thesis]. Delft University of Technology; 2014. Available from: http://resolver.tudelft.nl/uuid:67d499a4-4398-416f-bb51-372bcaa25ac1


University of Ottawa

20. Astaraky, Davood. A Simulation Based Approximate Dynamic Programming Approach to Multi-class, Multi-resource Surgical Scheduling .

Degree: 2013, University of Ottawa

 The thesis focuses on a model that addresses the patient-scheduling step of the surgical scheduling process to determine the number of surgeries to… (more)

Subjects/Keywords: Approximate Dynamic Programming; Surgical Scheduling; Markov Decision Process

APA (6th Edition):

Astaraky, D. (2013). A Simulation Based Approximate Dynamic Programming Approach to Multi-class, Multi-resource Surgical Scheduling . (Thesis). University of Ottawa. Retrieved from http://hdl.handle.net/10393/23622

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Astaraky, Davood. “A Simulation Based Approximate Dynamic Programming Approach to Multi-class, Multi-resource Surgical Scheduling .” 2013. Thesis, University of Ottawa. Accessed October 21, 2019. http://hdl.handle.net/10393/23622.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Astaraky, Davood. “A Simulation Based Approximate Dynamic Programming Approach to Multi-class, Multi-resource Surgical Scheduling .” 2013. Web. 21 Oct 2019.

Vancouver:

Astaraky D. A Simulation Based Approximate Dynamic Programming Approach to Multi-class, Multi-resource Surgical Scheduling . [Internet] [Thesis]. University of Ottawa; 2013. [cited 2019 Oct 21]. Available from: http://hdl.handle.net/10393/23622.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Astaraky D. A Simulation Based Approximate Dynamic Programming Approach to Multi-class, Multi-resource Surgical Scheduling . [Thesis]. University of Ottawa; 2013. Available from: http://hdl.handle.net/10393/23622

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Edinburgh

21. Ahmad Mustaffa, Nurakmal. Global dual-sourcing strategy : is it effective in mitigating supply disruption?.

Degree: PhD, 2015, University of Edinburgh

 Most firms are still failing to think strategically and systematically about managing supply disruption risk, and most supply chain management efforts are focused… (more)

Subjects/Keywords: 658.7; supply disruption; discrete time Markov decision process; DTMDP

APA (6th Edition):

Ahmad Mustaffa, N. (2015). Global dual-sourcing strategy : is it effective in mitigating supply disruption?. (Doctoral Dissertation). University of Edinburgh. Retrieved from http://hdl.handle.net/1842/21046

Chicago Manual of Style (16th Edition):

Ahmad Mustaffa, Nurakmal. “Global dual-sourcing strategy : is it effective in mitigating supply disruption?.” 2015. Doctoral Dissertation, University of Edinburgh. Accessed October 21, 2019. http://hdl.handle.net/1842/21046.

MLA Handbook (7th Edition):

Ahmad Mustaffa, Nurakmal. “Global dual-sourcing strategy : is it effective in mitigating supply disruption?.” 2015. Web. 21 Oct 2019.

Vancouver:

Ahmad Mustaffa N. Global dual-sourcing strategy : is it effective in mitigating supply disruption?. [Internet] [Doctoral dissertation]. University of Edinburgh; 2015. [cited 2019 Oct 21]. Available from: http://hdl.handle.net/1842/21046.

Council of Science Editors:

Ahmad Mustaffa N. Global dual-sourcing strategy : is it effective in mitigating supply disruption?. [Doctoral Dissertation]. University of Edinburgh; 2015. Available from: http://hdl.handle.net/1842/21046


IUPUI

22. Nguyen, Thanh Minh. Selectively decentralized reinforcement learning.

Degree: 2018, IUPUI

Indiana University-Purdue University Indianapolis (IUPUI)

The main contributions in this thesis include the selectively decentralized method in solving multi-agent reinforcement learning problems and the discretized… (more)

Subjects/Keywords: Selective Decentralization; Reinforcement Learning; Markov Decision Process; Multidisciplinary Optimization; Adaptive Control

APA (6th Edition):

Nguyen, T. M. (2018). Selectively decentralized reinforcement learning. (Thesis). IUPUI. Retrieved from http://hdl.handle.net/1805/17103

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Nguyen, Thanh Minh. “Selectively decentralized reinforcement learning.” 2018. Thesis, IUPUI. Accessed October 21, 2019. http://hdl.handle.net/1805/17103.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Nguyen, Thanh Minh. “Selectively decentralized reinforcement learning.” 2018. Web. 21 Oct 2019.

Vancouver:

Nguyen TM. Selectively decentralized reinforcement learning. [Internet] [Thesis]. IUPUI; 2018. [cited 2019 Oct 21]. Available from: http://hdl.handle.net/1805/17103.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Nguyen TM. Selectively decentralized reinforcement learning. [Thesis]. IUPUI; 2018. Available from: http://hdl.handle.net/1805/17103

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Waterloo

23. Li, Changjian. Autonomous Driving: A Multi-Objective Deep Reinforcement Learning Approach.

Degree: 2019, University of Waterloo

 Autonomous driving is a challenging domain that entails multiple aspects: a vehicle should be able to drive to its destination as fast as possible while… (more)

Subjects/Keywords: autonomous driving; reinforcement learning; Markov decision process; deep learning

APA (6th Edition):

Li, C. (2019). Autonomous Driving: A Multi-Objective Deep Reinforcement Learning Approach. (Thesis). University of Waterloo. Retrieved from http://hdl.handle.net/10012/14697

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Li, Changjian. “Autonomous Driving: A Multi-Objective Deep Reinforcement Learning Approach.” 2019. Thesis, University of Waterloo. Accessed October 21, 2019. http://hdl.handle.net/10012/14697.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Li, Changjian. “Autonomous Driving: A Multi-Objective Deep Reinforcement Learning Approach.” 2019. Web. 21 Oct 2019.

Vancouver:

Li C. Autonomous Driving: A Multi-Objective Deep Reinforcement Learning Approach. [Internet] [Thesis]. University of Waterloo; 2019. [cited 2019 Oct 21]. Available from: http://hdl.handle.net/10012/14697.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Li C. Autonomous Driving: A Multi-Objective Deep Reinforcement Learning Approach. [Thesis]. University of Waterloo; 2019. Available from: http://hdl.handle.net/10012/14697

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

24. Johns, Jeffrey Thomas. Basis Construction and Utilization for Markov Decision Processes Using Graphs.

Degree: PhD, Computer Science, 2010, U of Massachusetts : PhD

 The ease or difficulty in solving a problem strongly depends on the way it is represented. For example, consider the task of multiplying the numbers 12… (more)

Subjects/Keywords: Markov decision process; Reinforcement learning; Representation discovery; Computer Sciences

APA (6th Edition):

Johns, J. T. (2010). Basis Construction and Utilization for Markov Decision Processes Using Graphs. (Doctoral Dissertation). U of Massachusetts : PhD. Retrieved from https://scholarworks.umass.edu/open_access_dissertations/177

Chicago Manual of Style (16th Edition):

Johns, Jeffrey Thomas. “Basis Construction and Utilization for Markov Decision Processes Using Graphs.” 2010. Doctoral Dissertation, U of Massachusetts : PhD. Accessed October 21, 2019. https://scholarworks.umass.edu/open_access_dissertations/177.

MLA Handbook (7th Edition):

Johns, Jeffrey Thomas. “Basis Construction and Utilization for Markov Decision Processes Using Graphs.” 2010. Web. 21 Oct 2019.

Vancouver:

Johns JT. Basis Construction and Utilization for Markov Decision Processes Using Graphs. [Internet] [Doctoral dissertation]. U of Massachusetts : PhD; 2010. [cited 2019 Oct 21]. Available from: https://scholarworks.umass.edu/open_access_dissertations/177.

Council of Science Editors:

Johns JT. Basis Construction and Utilization for Markov Decision Processes Using Graphs. [Doctoral Dissertation]. U of Massachusetts : PhD; 2010. Available from: https://scholarworks.umass.edu/open_access_dissertations/177


University of New South Wales

25. Bokani, Ayub. Dynamic adaptation of HTTP-based video streaming using Markov decision process.

Degree: Computer Science & Engineering, 2015, University of New South Wales

 Hypertext transfer protocol (HTTP) is the fundamental mechanism supporting web browsing on the Internet. An HTTP server stores large volumes of content and delivers specific… (more)

Subjects/Keywords: Dynamic Adaptive Streaming over HTTP; Video Streaming; Markov Decision Process; DASH

APA (6th Edition):

Bokani, A. (2015). Dynamic adaptation of HTTP-based video streaming using Markov decision process. (Doctoral Dissertation). University of New South Wales. Retrieved from http://handle.unsw.edu.au/1959.4/55827 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:39485/SOURCE02?view=true

Chicago Manual of Style (16th Edition):

Bokani, Ayub. “Dynamic adaptation of HTTP-based video streaming using Markov decision process.” 2015. Doctoral Dissertation, University of New South Wales. Accessed October 21, 2019. http://handle.unsw.edu.au/1959.4/55827 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:39485/SOURCE02?view=true.

MLA Handbook (7th Edition):

Bokani, Ayub. “Dynamic adaptation of HTTP-based video streaming using Markov decision process.” 2015. Web. 21 Oct 2019.

Vancouver:

Bokani A. Dynamic adaptation of HTTP-based video streaming using Markov decision process. [Internet] [Doctoral dissertation]. University of New South Wales; 2015. [cited 2019 Oct 21]. Available from: http://handle.unsw.edu.au/1959.4/55827 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:39485/SOURCE02?view=true.

Council of Science Editors:

Bokani A. Dynamic adaptation of HTTP-based video streaming using Markov decision process. [Doctoral Dissertation]. University of New South Wales; 2015. Available from: http://handle.unsw.edu.au/1959.4/55827 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:39485/SOURCE02?view=true


University of Windsor

26. Islam, Kingshuk Jubaer. Outsourcing Evaluation in RL Network.

Degree: MA, Industrial and Manufacturing Systems Engineering, 2012, University of Windsor

  This thesis presents a qualitative investigation of reverse logistics and outsourcing, and a quantitative analysis of reverse-logistics networks that deal with the… (more)

Subjects/Keywords: Markov Decision Process; Outsourcing; Reverse Supply Chain; RL Network

APA (6th Edition):

Islam, K. J. (2012). Outsourcing Evaluation in RL Network. (Masters Thesis). University of Windsor. Retrieved from https://scholar.uwindsor.ca/etd/5347

Chicago Manual of Style (16th Edition):

Islam, Kingshuk Jubaer. “Outsourcing Evaluation in RL Network.” 2012. Masters Thesis, University of Windsor. Accessed October 21, 2019. https://scholar.uwindsor.ca/etd/5347.

MLA Handbook (7th Edition):

Islam, Kingshuk Jubaer. “Outsourcing Evaluation in RL Network.” 2012. Web. 21 Oct 2019.

Vancouver:

Islam KJ. Outsourcing Evaluation in RL Network. [Internet] [Masters thesis]. University of Windsor; 2012. [cited 2019 Oct 21]. Available from: https://scholar.uwindsor.ca/etd/5347.

Council of Science Editors:

Islam KJ. Outsourcing Evaluation in RL Network. [Masters Thesis]. University of Windsor; 2012. Available from: https://scholar.uwindsor.ca/etd/5347


Queens University

27. Cownden, Daniel. Evolutionarily Stable Learning and Foraging Strategies .

Degree: Mathematics and Statistics, 2012, Queens University

 This thesis examines a series of problems with the goal of better understanding the fundamental dilemma of whether to invest effort in obtaining information that… (more)

Subjects/Keywords: Evolutionary Game Theory; Partially Observable Markov Decision Process

APA (6th Edition):

Cownden, D. (2012). Evolutionarily Stable Learning and Foraging Strategies . (Thesis). Queens University. Retrieved from http://hdl.handle.net/1974/6999

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Cownden, Daniel. “Evolutionarily Stable Learning and Foraging Strategies .” 2012. Thesis, Queens University. Accessed October 21, 2019. http://hdl.handle.net/1974/6999.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Cownden, Daniel. “Evolutionarily Stable Learning and Foraging Strategies .” 2012. Web. 21 Oct 2019.

Vancouver:

Cownden D. Evolutionarily Stable Learning and Foraging Strategies . [Internet] [Thesis]. Queens University; 2012. [cited 2019 Oct 21]. Available from: http://hdl.handle.net/1974/6999.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Cownden D. Evolutionarily Stable Learning and Foraging Strategies . [Thesis]. Queens University; 2012. Available from: http://hdl.handle.net/1974/6999

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


Texas State University – San Marcos

28. Kelly, Janiece. The Effect of Decoy Attacks on Dynamic Channel Assignment.

Degree: MS, Computer Science, 2014, Texas State University – San Marcos

 As networks grow rapidly denser with the introduction of wireless-enabled cars, wearables and appliances, signal interference coupled with limited radio spectrum availability emerges as a… (more)

Subjects/Keywords: Computer Science; Security; Dynamic Channel Assignment; Markov Decision Process

APA (6th Edition):

Kelly, J. (2014). The Effect of Decoy Attacks on Dynamic Channel Assignment. (Masters Thesis). Texas State University – San Marcos. Retrieved from https://digital.library.txstate.edu/handle/10877/6370

Chicago Manual of Style (16th Edition):

Kelly, Janiece. “The Effect of Decoy Attacks on Dynamic Channel Assignment.” 2014. Masters Thesis, Texas State University – San Marcos. Accessed October 21, 2019. https://digital.library.txstate.edu/handle/10877/6370.

MLA Handbook (7th Edition):

Kelly, Janiece. “The Effect of Decoy Attacks on Dynamic Channel Assignment.” 2014. Web. 21 Oct 2019.

Vancouver:

Kelly J. The Effect of Decoy Attacks on Dynamic Channel Assignment. [Internet] [Masters thesis]. Texas State University – San Marcos; 2014. [cited 2019 Oct 21]. Available from: https://digital.library.txstate.edu/handle/10877/6370.

Council of Science Editors:

Kelly J. The Effect of Decoy Attacks on Dynamic Channel Assignment. [Masters Thesis]. Texas State University – San Marcos; 2014. Available from: https://digital.library.txstate.edu/handle/10877/6370


Australian National University

29. Karim, Mohammad Shahedul. Instantly Decodable Network Coding: From Point to Multi-Point to Device-to-Device Communications .

Degree: 2016, Australian National University

 The network coding paradigm enhances transmission efficiency by combining information flows and has drawn significant attention in information theory, networking, communications and data storage. Instantly… (more)

Subjects/Keywords: Network Coding; Wireless Communications; Video Streaming; Markov Decision Process; Graph Theory

APA (6th Edition):

Karim, M. S. (2016). Instantly Decodable Network Coding: From Point to Multi-Point to Device-to-Device Communications . (Thesis). Australian National University. Retrieved from http://hdl.handle.net/1885/118239

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Karim, Mohammad Shahedul. “Instantly Decodable Network Coding: From Point to Multi-Point to Device-to-Device Communications .” 2016. Thesis, Australian National University. Accessed October 21, 2019. http://hdl.handle.net/1885/118239.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Karim, Mohammad Shahedul. “Instantly Decodable Network Coding: From Point to Multi-Point to Device-to-Device Communications .” 2016. Web. 21 Oct 2019.

Vancouver:

Karim MS. Instantly Decodable Network Coding: From Point to Multi-Point to Device-to-Device Communications . [Internet] [Thesis]. Australian National University; 2016. [cited 2019 Oct 21]. Available from: http://hdl.handle.net/1885/118239.

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Karim MS. Instantly Decodable Network Coding: From Point to Multi-Point to Device-to-Device Communications . [Thesis]. Australian National University; 2016. Available from: http://hdl.handle.net/1885/118239

Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation


University of Colorado

30. Sukumar, Shruthi Sukumar. Analysis and Solution of Markov Decision Problems with a Continuous, Stochastic State Component.

Degree: MS, 2017, University of Colorado

Markov Decision Processes (MDPs) are discrete-time random processes that provide a framework to model sequential decision problems in stochastic environments. However, the use of… (more)
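As a hedged illustration of the MDP framework this abstract describes (a toy two-state model invented here for exposition, not drawn from the thesis; states, actions, rewards, and the 0.9 discount are all assumptions), value iteration — one of the keywords of this record — can be sketched as:

```python
# Toy value-iteration sketch for a two-state, two-action MDP.
# P[s][a] = list of (probability, next_state, reward) transitions.
P = {
    0: {0: [(1.0, 0, 0.0)], 1: [(0.8, 1, 5.0), (0.2, 0, 0.0)]},
    1: {0: [(1.0, 0, 1.0)], 1: [(1.0, 1, 2.0)]},
}
gamma = 0.9  # discount factor (assumed)

# Repeatedly apply the Bellman optimality update until (effective) convergence.
V = {s: 0.0 for s in P}
for _ in range(1000):
    V = {
        s: max(
            sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
            for outcomes in P[s].values()
        )
        for s in P
    }

# Greedy policy with respect to the converged value function.
policy = {
    s: max(P[s], key=lambda a: sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a]))
    for s in P
}
```

The update builds a fresh dictionary each sweep, so every state's value is computed from the previous iterate (a synchronous Bellman backup); with a discount below 1 this is a contraction, which is why a fixed iteration count suffices for the sketch.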

Subjects/Keywords: behaviour; decision making; markov decision process; optimal control; value iteration; Electrical and Computer Engineering

APA (6th Edition):

Sukumar, S. S. (2017). Analysis and Solution of Markov Decision Problems with a Continuous, Stochastic State Component. (Masters Thesis). University of Colorado. Retrieved from https://scholar.colorado.edu/eeng_gradetds/24

Chicago Manual of Style (16th Edition):

Sukumar, Shruthi Sukumar. “Analysis and Solution of Markov Decision Problems with a Continuous, Stochastic State Component.” 2017. Masters Thesis, University of Colorado. Accessed October 21, 2019. https://scholar.colorado.edu/eeng_gradetds/24.

MLA Handbook (7th Edition):

Sukumar, Shruthi Sukumar. “Analysis and Solution of Markov Decision Problems with a Continuous, Stochastic State Component.” 2017. Web. 21 Oct 2019.

Vancouver:

Sukumar SS. Analysis and Solution of Markov Decision Problems with a Continuous, Stochastic State Component. [Internet] [Masters thesis]. University of Colorado; 2017. [cited 2019 Oct 21]. Available from: https://scholar.colorado.edu/eeng_gradetds/24.

Council of Science Editors:

Sukumar SS. Analysis and Solution of Markov Decision Problems with a Continuous, Stochastic State Component. [Masters Thesis]. University of Colorado; 2017. Available from: https://scholar.colorado.edu/eeng_gradetds/24

[1] [2] [3] [4] [5] [6] [7] [8]
