
You searched for `subject:(Markov decision processes)`

Showing records 1 – 30 of 181 total matches.


Search Limiters

Dates

- 2015 – 2019 (60)
- 2010 – 2014 (73)
- 2005 – 2009 (35)
- 2000 – 2004 (16)

Universities

- Georgia Tech (18)
- Indian Institute of Science (11)

Languages

- English (123)
- Portuguese (10)


Oregon State University

1. Alkaee Taleghan, Majid. Simulator-Defined MDP Planning with Applications in Natural Resource Management.

Degree: PhD, Computer Science, 2017, Oregon State University

URL: http://hdl.handle.net/1957/60125

► This work is inspired by problems in natural resource management centered on the challenge of invasive species. Computing optimal management policies for maintaining ecosystem sustainable… (more)

Subjects/Keywords: Markov Decision Processes; Markov processes

APA (6th Edition):

Alkaee Taleghan, M. (2017). Simulator-Defined MDP Planning with Applications in Natural Resource Management. (Doctoral Dissertation). Oregon State University. Retrieved from http://hdl.handle.net/1957/60125

Chicago Manual of Style (16th Edition):

Alkaee Taleghan, Majid. “Simulator-Defined MDP Planning with Applications in Natural Resource Management.” 2017. Doctoral Dissertation, Oregon State University. Accessed September 16, 2019. http://hdl.handle.net/1957/60125.

MLA Handbook (7th Edition):

Alkaee Taleghan, Majid. “Simulator-Defined MDP Planning with Applications in Natural Resource Management.” 2017. Web. 16 Sep 2019.

Vancouver:

Alkaee Taleghan M. Simulator-Defined MDP Planning with Applications in Natural Resource Management. [Internet] [Doctoral dissertation]. Oregon State University; 2017. [cited 2019 Sep 16]. Available from: http://hdl.handle.net/1957/60125.

Council of Science Editors:

Alkaee Taleghan M. Simulator-Defined MDP Planning with Applications in Natural Resource Management. [Doctoral Dissertation]. Oregon State University; 2017. Available from: http://hdl.handle.net/1957/60125

Texas A&M University

2. Faryabi, Babak. Systems Medicine: An Integrated Approach with Decision Making Perspective.

Degree: 2010, Texas A&M University

URL: http://hdl.handle.net/1969.1/ETD-TAMU-2009-08-2940

► Two models are proposed to describe interactions among genes, transcription factors, and signaling cascades involved in regulating a cellular sub-system. These models fall within the… (more)

Subjects/Keywords: Regulatory Networks; Markov Decision Processes; Computational Biology

APA (6th Edition):

Faryabi, B. (2010). Systems Medicine: An Integrated Approach with Decision Making Perspective. (Thesis). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/ETD-TAMU-2009-08-2940

Note: this citation may be lacking information needed for this citation format:

Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Faryabi, Babak. “Systems Medicine: An Integrated Approach with Decision Making Perspective.” 2010. Thesis, Texas A&M University. Accessed September 16, 2019. http://hdl.handle.net/1969.1/ETD-TAMU-2009-08-2940.

Note: this citation may be lacking information needed for this citation format:

Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Faryabi, Babak. “Systems Medicine: An Integrated Approach with Decision Making Perspective.” 2010. Web. 16 Sep 2019.

Vancouver:

Faryabi B. Systems Medicine: An Integrated Approach with Decision Making Perspective. [Internet] [Thesis]. Texas A&M University; 2010. [cited 2019 Sep 16]. Available from: http://hdl.handle.net/1969.1/ETD-TAMU-2009-08-2940.

Note: this citation may be lacking information needed for this citation format:

Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Faryabi B. Systems Medicine: An Integrated Approach with Decision Making Perspective. [Thesis]. Texas A&M University; 2010. Available from: http://hdl.handle.net/1969.1/ETD-TAMU-2009-08-2940

Not specified: Masters Thesis or Doctoral Dissertation

University of Hong Kong

3. 辜有明.; Koh, You Beng. Bayesian analysis in Markov regime-switching models.

Degree: PhD, 2012, University of Hong Kong

URL: http://dx.doi.org/10.5353/th_b4852164 ; http://hdl.handle.net/10722/179973

► van Norden and Schaller (1996) develop a standard regime-switching model to study stock market crashes. In their seminal paper, they use the maximum likelihood estimation… (more)

Subjects/Keywords: Markov processes.; Bayesian statistical decision theory.

APA (6th Edition):

辜有明.; Koh, Y. B. (2012). Bayesian analysis in Markov regime-switching models. (Doctoral Dissertation). University of Hong Kong. Retrieved from http://dx.doi.org/10.5353/th_b4852164 ; http://hdl.handle.net/10722/179973

Chicago Manual of Style (16th Edition):

辜有明.; Koh, You Beng. “Bayesian analysis in Markov regime-switching models.” 2012. Doctoral Dissertation, University of Hong Kong. Accessed September 16, 2019. http://dx.doi.org/10.5353/th_b4852164 ; http://hdl.handle.net/10722/179973.

MLA Handbook (7th Edition):

辜有明.; Koh, You Beng. “Bayesian analysis in Markov regime-switching models.” 2012. Web. 16 Sep 2019.

Vancouver:

辜有明.; Koh YB. Bayesian analysis in Markov regime-switching models. [Internet] [Doctoral dissertation]. University of Hong Kong; 2012. [cited 2019 Sep 16]. Available from: http://dx.doi.org/10.5353/th_b4852164 ; http://hdl.handle.net/10722/179973.

Council of Science Editors:

辜有明.; Koh YB. Bayesian analysis in Markov regime-switching models. [Doctoral Dissertation]. University of Hong Kong; 2012. Available from: http://dx.doi.org/10.5353/th_b4852164 ; http://hdl.handle.net/10722/179973

Central Connecticut State University

4. Bitiukov, Alex. Implementation of Customer Lifetime Value model in the context of Financial Services.

Degree: Department of Mathematical Sciences, 2014, Central Connecticut State University

URL: http://content.library.ccsu.edu/u?/ccsutheses,2047

► The challenge of justifying long-term investments using traditional business case evaluation methodologies became increasingly transparent due to increased awareness, shareholder activism, traditional and social media… (more)

Subjects/Keywords: Financial services industry.; Markov processes.; Decision trees.

APA (6th Edition):

Bitiukov, A. (2014). Implementation of Customer Lifetime Value model in the context of Financial Services. (Thesis). Central Connecticut State University. Retrieved from http://content.library.ccsu.edu/u?/ccsutheses,2047

Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Bitiukov, Alex. “Implementation of Customer Lifetime Value model in the context of Financial Services.” 2014. Thesis, Central Connecticut State University. Accessed September 16, 2019. http://content.library.ccsu.edu/u?/ccsutheses,2047.

Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Bitiukov, Alex. “Implementation of Customer Lifetime Value model in the context of Financial Services.” 2014. Web. 16 Sep 2019.

Vancouver:

Bitiukov A. Implementation of Customer Lifetime Value model in the context of Financial Services. [Internet] [Thesis]. Central Connecticut State University; 2014. [cited 2019 Sep 16]. Available from: http://content.library.ccsu.edu/u?/ccsutheses,2047.

Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Bitiukov A. Implementation of Customer Lifetime Value model in the context of Financial Services. [Thesis]. Central Connecticut State University; 2014. Available from: http://content.library.ccsu.edu/u?/ccsutheses,2047

Not specified: Masters Thesis or Doctoral Dissertation

University of Manitoba

5. Liang, You. Risk management by Markov decision processes.

Degree: Statistics, 2015, University of Manitoba

URL: http://hdl.handle.net/1993/30829

► A very important and powerful tool in the study of mathematical finance including risk management is the model of Markov decision processes. My PhD research… (more)

Subjects/Keywords: Markov decision processes; Risk measures; Deviation measures

APA (6th Edition):

Liang, Y. (2015). Risk management by Markov decision processes. (Thesis). University of Manitoba. Retrieved from http://hdl.handle.net/1993/30829

Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Liang, You. “Risk management by Markov decision processes.” 2015. Thesis, University of Manitoba. Accessed September 16, 2019. http://hdl.handle.net/1993/30829.

Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Liang, You. “Risk management by Markov decision processes.” 2015. Web. 16 Sep 2019.

Vancouver:

Liang Y. Risk management by Markov decision processes. [Internet] [Thesis]. University of Manitoba; 2015. [cited 2019 Sep 16]. Available from: http://hdl.handle.net/1993/30829.

Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Liang Y. Risk management by Markov decision processes. [Thesis]. University of Manitoba; 2015. Available from: http://hdl.handle.net/1993/30829

Not specified: Masters Thesis or Doctoral Dissertation

Rutgers University

6. Diuk Wasser, Carlos Gregorio, 1974-. An object-oriented representation for efficient reinforcement learning.

Degree: PhD, Computer Science, 2010, Rutgers University

URL: http://hdl.rutgers.edu/1782.1/rucore10001600001.ETD.000056289

► Agents (humans, mice, computers) need to constantly make decisions to survive and thrive in their environment. In the reinforcement-learning problem, an agent needs to learn… (more)

Subjects/Keywords: Reinforcement learning; Decision making – Testing; Markov processes

APA (6th Edition):

Diuk Wasser, Carlos Gregorio, 1974-. (2010). An object-oriented representation for efficient reinforcement learning. (Doctoral Dissertation). Rutgers University. Retrieved from http://hdl.rutgers.edu/1782.1/rucore10001600001.ETD.000056289

Chicago Manual of Style (16th Edition):

Diuk Wasser, Carlos Gregorio, 1974-. “An object-oriented representation for efficient reinforcement learning.” 2010. Doctoral Dissertation, Rutgers University. Accessed September 16, 2019. http://hdl.rutgers.edu/1782.1/rucore10001600001.ETD.000056289.

MLA Handbook (7th Edition):

Diuk Wasser, Carlos Gregorio, 1974-. “An object-oriented representation for efficient reinforcement learning.” 2010. Web. 16 Sep 2019.

Vancouver:

Diuk Wasser, Carlos Gregorio, 1974-. An object-oriented representation for efficient reinforcement learning. [Internet] [Doctoral dissertation]. Rutgers University; 2010. [cited 2019 Sep 16]. Available from: http://hdl.rutgers.edu/1782.1/rucore10001600001.ETD.000056289.

Council of Science Editors:

Diuk Wasser, Carlos Gregorio, 1974-. An object-oriented representation for efficient reinforcement learning. [Doctoral Dissertation]. Rutgers University; 2010. Available from: http://hdl.rutgers.edu/1782.1/rucore10001600001.ETD.000056289

Indian Institute of Science

7. Saha, Subhamay. Single and Multi-player Stochastic Dynamic Optimization.

Degree: 2013, Indian Institute of Science

URL: http://etd.iisc.ernet.in/2005/3357 ; http://etd.iisc.ernet.in/abstracts/4224/G25755-Abs.pdf

► In this thesis we investigate single and multi-player stochastic dynamic optimization problems. We consider both discrete and continuous time processes. In the multi-player setup we… (more)

Subjects/Keywords: Stochastic Dynamic Optimization; Stochastic Control Theory; Stochastic Processes; Markov Processes; Continuous-Time Markov Chains; Stochastic Games; Semi-Markov Decision Processes; Markov Processes - Optimal Control; Continuous Time Stochastic Processes; Discrete Time Stochastic Processes; Continuous Time Markov Chains; Semi-Markov Decision Processes (SMDP); Optimal Markov Control; Mathematics

APA (6th Edition):

Saha, S. (2013). Single and Multi-player Stochastic Dynamic Optimization. (Thesis). Indian Institute of Science. Retrieved from http://etd.iisc.ernet.in/2005/3357 ; http://etd.iisc.ernet.in/abstracts/4224/G25755-Abs.pdf

Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Saha, Subhamay. “Single and Multi-player Stochastic Dynamic Optimization.” 2013. Thesis, Indian Institute of Science. Accessed September 16, 2019. http://etd.iisc.ernet.in/2005/3357 ; http://etd.iisc.ernet.in/abstracts/4224/G25755-Abs.pdf.

Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Saha, Subhamay. “Single and Multi-player Stochastic Dynamic Optimization.” 2013. Web. 16 Sep 2019.

Vancouver:

Saha S. Single and Multi-player Stochastic Dynamic Optimization. [Internet] [Thesis]. Indian Institute of Science; 2013. [cited 2019 Sep 16]. Available from: http://etd.iisc.ernet.in/2005/3357 ; http://etd.iisc.ernet.in/abstracts/4224/G25755-Abs.pdf.

Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Saha S. Single and Multi-player Stochastic Dynamic Optimization. [Thesis]. Indian Institute of Science; 2013. Available from: http://etd.iisc.ernet.in/2005/3357 ; http://etd.iisc.ernet.in/abstracts/4224/G25755-Abs.pdf

Not specified: Masters Thesis or Doctoral Dissertation

Georgia Tech

8. Silva Izquierdo, Daniel F. Optimal admission control in tandem and parallel queueing systems with applications to computer networks.

Degree: PhD, Industrial and Systems Engineering, 2016, Georgia Tech

URL: http://hdl.handle.net/1853/55661

► Modern computer networks require advanced, efficient algorithms to control several aspects of their operations, including routing data packets, access to secure systems and data, capacity… (more)

Subjects/Keywords: Queueing systems; Markov Decision processes; Tandem queues; Stochastic systems; Stochastic dynamic programming; Authentication systems; Constrained Markov decision processes; Linear programming

APA (6th Edition):

Silva Izquierdo, D. F. (2016). Optimal admission control in tandem and parallel queueing systems with applications to computer networks. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/55661

Chicago Manual of Style (16th Edition):

Silva Izquierdo, Daniel F. “Optimal admission control in tandem and parallel queueing systems with applications to computer networks.” 2016. Doctoral Dissertation, Georgia Tech. Accessed September 16, 2019. http://hdl.handle.net/1853/55661.

MLA Handbook (7th Edition):

Silva Izquierdo, Daniel F. “Optimal admission control in tandem and parallel queueing systems with applications to computer networks.” 2016. Web. 16 Sep 2019.

Vancouver:

Silva Izquierdo DF. Optimal admission control in tandem and parallel queueing systems with applications to computer networks. [Internet] [Doctoral dissertation]. Georgia Tech; 2016. [cited 2019 Sep 16]. Available from: http://hdl.handle.net/1853/55661.

Council of Science Editors:

Silva Izquierdo DF. Optimal admission control in tandem and parallel queueing systems with applications to computer networks. [Doctoral Dissertation]. Georgia Tech; 2016. Available from: http://hdl.handle.net/1853/55661

University of Edinburgh

9. Crook, Paul A. Learning in a state of confusion : employing active perception and reinforcement learning in partially observable worlds.

Degree: 2007, University of Edinburgh

URL: http://hdl.handle.net/1842/1471

► In applying reinforcement learning to agents acting in the real world we are often faced with tasks that are non-Markovian in nature. Much work has… (more)

Subjects/Keywords: 006.3; Markov model; Active Perception; Reinforcement Learning; Partially Observable Markov Decision Processes

APA (6th Edition):

Crook, P. A. (2007). Learning in a state of confusion : employing active perception and reinforcement learning in partially observable worlds. (Doctoral Dissertation). University of Edinburgh. Retrieved from http://hdl.handle.net/1842/1471

Chicago Manual of Style (16th Edition):

Crook, Paul A. “Learning in a state of confusion : employing active perception and reinforcement learning in partially observable worlds.” 2007. Doctoral Dissertation, University of Edinburgh. Accessed September 16, 2019. http://hdl.handle.net/1842/1471.

MLA Handbook (7th Edition):

Crook, Paul A. “Learning in a state of confusion : employing active perception and reinforcement learning in partially observable worlds.” 2007. Web. 16 Sep 2019.

Vancouver:

Crook PA. Learning in a state of confusion : employing active perception and reinforcement learning in partially observable worlds. [Internet] [Doctoral dissertation]. University of Edinburgh; 2007. [cited 2019 Sep 16]. Available from: http://hdl.handle.net/1842/1471.

Council of Science Editors:

Crook PA. Learning in a state of confusion : employing active perception and reinforcement learning in partially observable worlds. [Doctoral Dissertation]. University of Edinburgh; 2007. Available from: http://hdl.handle.net/1842/1471

10. McGregor, Sean. Machine learning methods for public policy: simulation, optimization, and visualization.

Degree: PhD, 2017, Oregon State University

URL: http://hdl.handle.net/1957/61702

► Society faces many complex management problems, particularly in the area of shared public resources such as ecosystems. Existing decision making processes are often guided by… (more)

Subjects/Keywords: Markov Decision Processes

Full-text matches: “An Interactive Visualization for Testing Markov Decision Processes: MDPvis” (Ch. 2); “…Policies for Markov Decision Processes (MDPs) are…”; “…the Markov Decision Process (MDP). In an MDP, the state of the world evolves…”

APA (6th Edition):

McGregor, S. (2017). Machine learning methods for public policy: simulation, optimization, and visualization. (Doctoral Dissertation). Oregon State University. Retrieved from http://hdl.handle.net/1957/61702

Chicago Manual of Style (16th Edition):

McGregor, Sean. “Machine learning methods for public policy: simulation, optimization, and visualization.” 2017. Doctoral Dissertation, Oregon State University. Accessed September 16, 2019. http://hdl.handle.net/1957/61702.

MLA Handbook (7th Edition):

McGregor, Sean. “Machine learning methods for public policy: simulation, optimization, and visualization.” 2017. Web. 16 Sep 2019.

Vancouver:

McGregor S. Machine learning methods for public policy: simulation, optimization, and visualization. [Internet] [Doctoral dissertation]. Oregon State University; 2017. [cited 2019 Sep 16]. Available from: http://hdl.handle.net/1957/61702.

Council of Science Editors:

McGregor S. Machine learning methods for public policy: simulation, optimization, and visualization. [Doctoral Dissertation]. Oregon State University; 2017. Available from: http://hdl.handle.net/1957/61702

The Ohio State University

11. Afful-Dadzi, Anthony. Robust Optimal Maintenance Policies and Charts for Cyber Vulnerability Management.

Degree: PhD, Industrial and Systems Engineering, 2012, The Ohio State University

URL: http://rave.ohiolink.edu/etdc/view?acc_num=osu1354126687

► Cyber-attacks are considered the greatest domestic security threat in the United States and among the greatest international security threats. Hypothetically, every personal computer connected… (more)

Subjects/Keywords: Industrial Engineering; Cyber Attack; Value function; Markov Decision Processes; Control Charts

APA (6th Edition):

Afful-Dadzi, A. (2012). Robust Optimal Maintenance Policies and Charts for Cyber Vulnerability Management. (Doctoral Dissertation). The Ohio State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=osu1354126687

Chicago Manual of Style (16th Edition):

Afful-Dadzi, Anthony. “Robust Optimal Maintenance Policies and Charts for Cyber Vulnerability Management.” 2012. Doctoral Dissertation, The Ohio State University. Accessed September 16, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1354126687.

MLA Handbook (7th Edition):

Afful-Dadzi, Anthony. “Robust Optimal Maintenance Policies and Charts for Cyber Vulnerability Management.” 2012. Web. 16 Sep 2019.

Vancouver:

Afful-Dadzi A. Robust Optimal Maintenance Policies and Charts for Cyber Vulnerability Management. [Internet] [Doctoral dissertation]. The Ohio State University; 2012. [cited 2019 Sep 16]. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1354126687.

Council of Science Editors:

Afful-Dadzi A. Robust Optimal Maintenance Policies and Charts for Cyber Vulnerability Management. [Doctoral Dissertation]. The Ohio State University; 2012. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1354126687

Baylor University

12. Yuan, Jiang, 1984-. Normal approximation for Bayesian models with non-sampling bias.

Degree: Statistical Sciences., 2014, Baylor University

URL: http://hdl.handle.net/2104/8926

► Bayesian sample size determination can be computationally intensive for models where Markov chain Monte Carlo (MCMC) methods are commonly used for inference. It… (more)

Subjects/Keywords: Bayesian statistical decision theory.; Monte Carlo method.; Markov processes.

APA (6th Edition):

Yuan, Jiang, 1984-. (2014). Normal approximation for Bayesian models with non-sampling bias. (Thesis). Baylor University. Retrieved from http://hdl.handle.net/2104/8926

Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16th Edition):

Yuan, Jiang, 1984-. “Normal approximation for Bayesian models with non-sampling bias.” 2014. Thesis, Baylor University. Accessed September 16, 2019. http://hdl.handle.net/2104/8926.

Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7th Edition):

Yuan, Jiang, 1984-. “Normal approximation for Bayesian models with non-sampling bias.” 2014. Web. 16 Sep 2019.

Vancouver:

Yuan, Jiang, 1984-. Normal approximation for Bayesian models with non-sampling bias. [Internet] [Thesis]. Baylor University; 2014. [cited 2019 Sep 16]. Available from: http://hdl.handle.net/2104/8926.

Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Yuan, Jiang, 1984-. Normal approximation for Bayesian models with non-sampling bias. [Thesis]. Baylor University; 2014. Available from: http://hdl.handle.net/2104/8926

Not specified: Masters Thesis or Doctoral Dissertation

Virginia Tech

13. Singh, Meghendra. Human Behavior Modeling and Calibration in Epidemic Simulations.

Degree: MS, Computer Science, 2019, Virginia Tech

URL: http://hdl.handle.net/10919/87050

► Human behavior plays an important role in infectious disease epidemics. The choice of preventive actions taken by individuals can completely change the epidemic outcome. Computational… (more)

Subjects/Keywords: Human behavior modeling; Agent based simulation; Markov decision processes

APA (6th Edition):

Singh, M. (2019). Human Behavior Modeling and Calibration in Epidemic Simulations. (Masters Thesis). Virginia Tech. Retrieved from http://hdl.handle.net/10919/87050

Chicago Manual of Style (16th Edition):

Singh, Meghendra. “Human Behavior Modeling and Calibration in Epidemic Simulations.” 2019. Masters Thesis, Virginia Tech. Accessed September 16, 2019. http://hdl.handle.net/10919/87050.

MLA Handbook (7th Edition):

Singh, Meghendra. “Human Behavior Modeling and Calibration in Epidemic Simulations.” 2019. Web. 16 Sep 2019.

Vancouver:

Singh M. Human Behavior Modeling and Calibration in Epidemic Simulations. [Internet] [Masters thesis]. Virginia Tech; 2019. [cited 2019 Sep 16]. Available from: http://hdl.handle.net/10919/87050.

Council of Science Editors:

Singh M. Human Behavior Modeling and Calibration in Epidemic Simulations. [Masters Thesis]. Virginia Tech; 2019. Available from: http://hdl.handle.net/10919/87050

Texas A&M University

14. Coskun, Serdar. A Human Driver Model for Autonomous Lane Changing in Highways: Predictive Fuzzy Markov Game Driving Strategy.

Degree: PhD, Mechanical Engineering, 2018, Texas A&M University

URL: http://hdl.handle.net/1969.1/174312

► This study presents an integrated hybrid solution to mandatory lane changing problem to deal with accident avoidance by choosing a safe gap in highway driving… (more)

Subjects/Keywords: Markov decision processes; game theory; autonomous driving; H∞ control

APA (6th Edition):

Coskun, S. (2018). A Human Driver Model for Autonomous Lane Changing in Highways: Predictive Fuzzy Markov Game Driving Strategy. (Doctoral Dissertation). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/174312

Chicago Manual of Style (16th Edition):

Coskun, Serdar. “A Human Driver Model for Autonomous Lane Changing in Highways: Predictive Fuzzy Markov Game Driving Strategy.” 2018. Doctoral Dissertation, Texas A&M University. Accessed September 16, 2019. http://hdl.handle.net/1969.1/174312.

MLA Handbook (7th Edition):

Coskun, Serdar. “A Human Driver Model for Autonomous Lane Changing in Highways: Predictive Fuzzy Markov Game Driving Strategy.” 2018. Web. 16 Sep 2019.

Vancouver:

Coskun S. A Human Driver Model for Autonomous Lane Changing in Highways: Predictive Fuzzy Markov Game Driving Strategy. [Internet] [Doctoral dissertation]. Texas A&M University; 2018. [cited 2019 Sep 16]. Available from: http://hdl.handle.net/1969.1/174312.

Council of Science Editors:

Coskun S. A Human Driver Model for Autonomous Lane Changing in Highways: Predictive Fuzzy Markov Game Driving Strategy. [Doctoral Dissertation]. Texas A&M University; 2018. Available from: http://hdl.handle.net/1969.1/174312

University of Illinois – Urbana-Champaign

15. Kini, Dileep Raghunath. Verification of linear-time properties for finite probabilistic systems.

Degree: PhD, Computer Science, 2017, University of Illinois – Urbana-Champaign

URL: http://hdl.handle.net/2142/99357

► With computers becoming ubiquitous there is an ever growing necessity to ensure that they are programmed to behave correctly. Formal verification is a discipline within… (more)

Subjects/Keywords: Probabilistic model checking; Markov decision processes; Linear temporal logic; Automata theory

APA (6th Edition):

Kini, D. R. (2017). Verification of linear-time properties for finite probabilistic systems. (Doctoral Dissertation). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/99357

Chicago Manual of Style (16th Edition):

Kini, Dileep Raghunath. “Verification of linear-time properties for finite probabilistic systems.” 2017. Doctoral Dissertation, University of Illinois – Urbana-Champaign. Accessed September 16, 2019. http://hdl.handle.net/2142/99357.

MLA Handbook (7th Edition):

Kini, Dileep Raghunath. “Verification of linear-time properties for finite probabilistic systems.” 2017. Web. 16 Sep 2019.

Vancouver:

Kini DR. Verification of linear-time properties for finite probabilistic systems. [Internet] [Doctoral dissertation]. University of Illinois – Urbana-Champaign; 2017. [cited 2019 Sep 16]. Available from: http://hdl.handle.net/2142/99357.

Council of Science Editors:

Kini DR. Verification of linear-time properties for finite probabilistic systems. [Doctoral Dissertation]. University of Illinois – Urbana-Champaign; 2017. Available from: http://hdl.handle.net/2142/99357

University of Rochester

16. Sainathan, Arvind (1983 - ); Dobson, Gregory. Essays on optimization and system design in service and supply chain management.

Degree: PhD, 2009, University of Rochester

URL: http://hdl.handle.net/1802/7813

► This dissertation is comprised of three essays that model, analyze and optimize service and supply chain systems in the context of some specific issues. In… (more)

Subjects/Keywords: Optimization; Service system design; Supply chain management; Markov decision processes

APA (6th Edition):

Sainathan, Arvind (1983 - ); Dobson, G. (2009). Essays on optimization and system design in service and supply chain management. (Doctoral Dissertation). University of Rochester. Retrieved from http://hdl.handle.net/1802/7813

Chicago Manual of Style (16th Edition):

Sainathan, Arvind (1983 - ); Dobson, Gregory. “Essays on optimization and system design in service and supply chain management.” 2009. Doctoral Dissertation, University of Rochester. Accessed September 16, 2019. http://hdl.handle.net/1802/7813.

MLA Handbook (7th Edition):

Sainathan, Arvind (1983 - ); Dobson, Gregory. “Essays on optimization and system design in service and supply chain management.” 2009. Web. 16 Sep 2019.

Vancouver:

Sainathan, Arvind (1983 - ); Dobson G. Essays on optimization and system design in service and supply chain management. [Internet] [Doctoral dissertation]. University of Rochester; 2009. [cited 2019 Sep 16]. Available from: http://hdl.handle.net/1802/7813.

Council of Science Editors:

Sainathan A, Dobson G. Essays on optimization and system design in service and supply chain management. [Doctoral Dissertation]. University of Rochester; 2009. Available from: http://hdl.handle.net/1802/7813

Utah State University

17. Olsen, Alan. Pond-Hindsight: Applying Hindsight Optimization to Partially-Observable Markov Decision Processes.

Degree: MS, Computer Science, 2011, Utah State University

URL: https://digitalcommons.usu.edu/etd/1035

► Partially-observable Markov decision processes (POMDPs) are especially good at modeling real-world problems because they allow for sensor and effector uncertainty. Unfortunately, such uncertainty makes…
(more)

Subjects/Keywords: Markov Decision Processes; Pond-Hindsight; Hindsight Optimization; Computer Sciences

APA (6^{th} Edition):

Olsen, A. (2011). Pond-Hindsight: Applying Hindsight Optimization to Partially-Observable Markov Decision Processes. (Masters Thesis). Utah State University. Retrieved from https://digitalcommons.usu.edu/etd/1035

Chicago Manual of Style (16^{th} Edition):

Olsen, Alan. “Pond-Hindsight: Applying Hindsight Optimization to Partially-Observable Markov Decision Processes.” 2011. Masters Thesis, Utah State University. Accessed September 16, 2019. https://digitalcommons.usu.edu/etd/1035.

MLA Handbook (7^{th} Edition):

Olsen, Alan. “Pond-Hindsight: Applying Hindsight Optimization to Partially-Observable Markov Decision Processes.” 2011. Web. 16 Sep 2019.

Vancouver:

Olsen A. Pond-Hindsight: Applying Hindsight Optimization to Partially-Observable Markov Decision Processes. [Internet] [Masters thesis]. Utah State University; 2011. [cited 2019 Sep 16]. Available from: https://digitalcommons.usu.edu/etd/1035.

Council of Science Editors:

Olsen A. Pond-Hindsight: Applying Hindsight Optimization to Partially-Observable Markov Decision Processes. [Masters Thesis]. Utah State University; 2011. Available from: https://digitalcommons.usu.edu/etd/1035

Rutgers University

18. Mahmud, Md Pavel, 1981-. Reduced representations for efficient analysis of genomic data: from microarray to high-throughput sequencing.

Degree: PhD, Computer Science, 2014, Rutgers University

URL: https://rucore.libraries.rutgers.edu/rutgers-lib/45336/

► Since the genomics era started in the ’70s, microarray technologies have been extensively used for biological applications such as gene expression profiling, copy number… (more)

Subjects/Keywords: Genomes – Analysis; Markov processes – Mathematical models; Bayesian statistical decision theory

APA (6^{th} Edition):

Mahmud, M. P. (2014). Reduced representations for efficient analysis of genomic data: from microarray to high-throughput sequencing. (Doctoral Dissertation). Rutgers University. Retrieved from https://rucore.libraries.rutgers.edu/rutgers-lib/45336/

Chicago Manual of Style (16^{th} Edition):

Mahmud, Md Pavel, 1981-. “Reduced representations for efficient analysis of genomic data: from microarray to high-throughput sequencing.” 2014. Doctoral Dissertation, Rutgers University. Accessed September 16, 2019. https://rucore.libraries.rutgers.edu/rutgers-lib/45336/.

MLA Handbook (7^{th} Edition):

Mahmud, Md Pavel, 1981-. “Reduced representations for efficient analysis of genomic data: from microarray to high-throughput sequencing.” 2014. Web. 16 Sep 2019.

Vancouver:

Mahmud MP. Reduced representations for efficient analysis of genomic data: from microarray to high-throughput sequencing. [Internet] [Doctoral dissertation]. Rutgers University; 2014. [cited 2019 Sep 16]. Available from: https://rucore.libraries.rutgers.edu/rutgers-lib/45336/.

Council of Science Editors:

Mahmud MP. Reduced representations for efficient analysis of genomic data: from microarray to high-throughput sequencing. [Doctoral Dissertation]. Rutgers University; 2014. Available from: https://rucore.libraries.rutgers.edu/rutgers-lib/45336/

Queens University

19. Saldi, Naci. Optimal Quantization and Approximation in Source Coding and Stochastic Control.

Degree: Mathematics and Statistics, 2015, Queens University

URL: http://hdl.handle.net/1974/13147

► This thesis deals with non-standard optimal quantization and approximation problems in source coding and stochastic control. The first part of the thesis considers randomized quantization.…
(more)

Subjects/Keywords: Quantization; randomized quantization; Markov decision processes; approximation in stochastic control

APA (6^{th} Edition):

Saldi, N. (2015). Optimal Quantization and Approximation in Source Coding and Stochastic Control. (Thesis). Queens University. Retrieved from http://hdl.handle.net/1974/13147

Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16^{th} Edition):

Saldi, Naci. “Optimal Quantization and Approximation in Source Coding and Stochastic Control.” 2015. Thesis, Queens University. Accessed September 16, 2019. http://hdl.handle.net/1974/13147.

Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7^{th} Edition):

Saldi, Naci. “Optimal Quantization and Approximation in Source Coding and Stochastic Control.” 2015. Web. 16 Sep 2019.

Vancouver:

Saldi N. Optimal Quantization and Approximation in Source Coding and Stochastic Control. [Internet] [Thesis]. Queens University; 2015. [cited 2019 Sep 16]. Available from: http://hdl.handle.net/1974/13147.

Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Saldi N. Optimal Quantization and Approximation in Source Coding and Stochastic Control. [Thesis]. Queens University; 2015. Available from: http://hdl.handle.net/1974/13147

Not specified: Masters Thesis or Doctoral Dissertation

Indian Institute of Science

20. Abdulla, Mohammed Shahid. Simulation Based Algorithms For Markov Decision Process And Stochastic Optimization.

Degree: 2008, Indian Institute of Science

URL: http://hdl.handle.net/2005/812

► In Chapter 2, we propose several two-timescale simulation-based actor-critic algorithms for solution of infinite horizon Markov Decision Processes (MDPs) with finite state-space under the average…
(more)

Subjects/Keywords: Markov Processes - Data Processing; Algorithms; Simulation; Markov Decision Processes (MDPs); Infinite Horizon Markov Decision Processes; Finite Horizon Markov Decision Processes; Stochastic Approximation - Algorithms; Simultaneous Perturbation Stochastic Approximation (SPSA); Network Flow-Control; FH-MDP Algorithms; Stochastic Optimization; Reinforcement Learning Algorithms; Computational Mathematics

APA (6^{th} Edition):

Abdulla, M. S. (2008). Simulation Based Algorithms For Markov Decision Process And Stochastic Optimization. (Thesis). Indian Institute of Science. Retrieved from http://hdl.handle.net/2005/812

Not specified: Masters Thesis or Doctoral Dissertation

Chicago Manual of Style (16^{th} Edition):

Abdulla, Mohammed Shahid. “Simulation Based Algorithms For Markov Decision Process And Stochastic Optimization.” 2008. Thesis, Indian Institute of Science. Accessed September 16, 2019. http://hdl.handle.net/2005/812.

Not specified: Masters Thesis or Doctoral Dissertation

MLA Handbook (7^{th} Edition):

Abdulla, Mohammed Shahid. “Simulation Based Algorithms For Markov Decision Process And Stochastic Optimization.” 2008. Web. 16 Sep 2019.

Vancouver:

Abdulla MS. Simulation Based Algorithms For Markov Decision Process And Stochastic Optimization. [Internet] [Thesis]. Indian Institute of Science; 2008. [cited 2019 Sep 16]. Available from: http://hdl.handle.net/2005/812.

Not specified: Masters Thesis or Doctoral Dissertation

Council of Science Editors:

Abdulla MS. Simulation Based Algorithms For Markov Decision Process And Stochastic Optimization. [Thesis]. Indian Institute of Science; 2008. Available from: http://hdl.handle.net/2005/812

Not specified: Masters Thesis or Doctoral Dissertation

University of Texas – Austin

21. -9365-5003. Dynamic decision making under uncertainty for semiconductor manufacturing and healthcare.

Degree: PhD, Operations Research and Industrial Engineering, 2019, University of Texas – Austin

URL: http://dx.doi.org/10.26153/tsw/2085

► This dissertation proposes multiple methods to improve processes and make better decisions in manufacturing and healthcare. First, it investigates algorithms for controlling the automated material…
(more)

Subjects/Keywords: Semiconductor; Healthcare; Markov decision processes; Average cost Markov decision process; Statistics; Simulation; Count regression; Ranking; Routing; Robust optimization; Robust Markov decision processes; Robust average cost Markov decision process; Medical decision making; Epilepsy

APA (6^{th} Edition):

-9365-5003. (2019). Dynamic decision making under uncertainty for semiconductor manufacturing and healthcare. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://dx.doi.org/10.26153/tsw/2085

Note: this citation may be lacking information needed for this citation format:

Author name may be incomplete

Chicago Manual of Style (16^{th} Edition):

-9365-5003. “Dynamic decision making under uncertainty for semiconductor manufacturing and healthcare.” 2019. Doctoral Dissertation, University of Texas – Austin. Accessed September 16, 2019. http://dx.doi.org/10.26153/tsw/2085.

Note: this citation may be lacking information needed for this citation format:

Author name may be incomplete

MLA Handbook (7^{th} Edition):

-9365-5003. “Dynamic decision making under uncertainty for semiconductor manufacturing and healthcare.” 2019. Web. 16 Sep 2019.

Note: this citation may be lacking information needed for this citation format:

Author name may be incomplete

Vancouver:

-9365-5003. Dynamic decision making under uncertainty for semiconductor manufacturing and healthcare. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2019. [cited 2019 Sep 16]. Available from: http://dx.doi.org/10.26153/tsw/2085.

Author name may be incomplete

Council of Science Editors:

-9365-5003. Dynamic decision making under uncertainty for semiconductor manufacturing and healthcare. [Doctoral Dissertation]. University of Texas – Austin; 2019. Available from: http://dx.doi.org/10.26153/tsw/2085

Author name may be incomplete

University of Manchester

22. Liu, Chong. Reinforcement learning with time perception.

Degree: PhD, 2012, University of Manchester

URL: https://www.research.manchester.ac.uk/portal/en/theses/reinforcement-learning-with-time-perception(a03580bd-2dd6-4172-a061-90e8ac3022b8).html ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.554159

► Classical value estimation reinforcement learning algorithms do not perform very well in dynamic environments. On the other hand, the reinforcement learning of animals is quite…
(more)

Subjects/Keywords: 153.15; Reinforcement learning; Spiking neuron models; Markov decision processes (MDPs); Semi-Markov decision processes (SMDPs); Options theory; Time perception; Dynamic environments; Chebyshev's inequality

APA (6^{th} Edition):

Liu, C. (2012). Reinforcement learning with time perception. (Doctoral Dissertation). University of Manchester. Retrieved from https://www.research.manchester.ac.uk/portal/en/theses/reinforcement-learning-with-time-perception(a03580bd-2dd6-4172-a061-90e8ac3022b8).html ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.554159

Chicago Manual of Style (16^{th} Edition):

Liu, Chong. “Reinforcement learning with time perception.” 2012. Doctoral Dissertation, University of Manchester. Accessed September 16, 2019. https://www.research.manchester.ac.uk/portal/en/theses/reinforcement-learning-with-time-perception(a03580bd-2dd6-4172-a061-90e8ac3022b8).html ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.554159.

MLA Handbook (7^{th} Edition):

Liu, Chong. “Reinforcement learning with time perception.” 2012. Web. 16 Sep 2019.

Vancouver:

Liu C. Reinforcement learning with time perception. [Internet] [Doctoral dissertation]. University of Manchester; 2012. [cited 2019 Sep 16]. Available from: https://www.research.manchester.ac.uk/portal/en/theses/reinforcement-learning-with-time-perception(a03580bd-2dd6-4172-a061-90e8ac3022b8).html ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.554159.

Council of Science Editors:

Liu C. Reinforcement learning with time perception. [Doctoral Dissertation]. University of Manchester; 2012. Available from: https://www.research.manchester.ac.uk/portal/en/theses/reinforcement-learning-with-time-perception(a03580bd-2dd6-4172-a061-90e8ac3022b8).html ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.554159

23. Liu, Chong. REINFORCEMENT LEARNING WITH TIME PERCEPTION.

Degree: 2012, University of Manchester

URL: http://www.manchester.ac.uk/escholar/uk-ac-man-scw:161470

► Classical value estimation reinforcement learning algorithms do not perform very well in dynamic environments. On the other hand, the reinforcement learning of animals is quite…
(more)

Subjects/Keywords: Reinforcement learning; Spiking neuron models; Markov decision processes (MDPs); Semi-Markov decision processes (SMDPs); Options theory; Time perception; Dynamic environments; Chebyshev's inequality

APA (6^{th} Edition):

Liu, C. (2012). REINFORCEMENT LEARNING WITH TIME PERCEPTION. (Doctoral Dissertation). University of Manchester. Retrieved from http://www.manchester.ac.uk/escholar/uk-ac-man-scw:161470

Chicago Manual of Style (16^{th} Edition):

Liu, Chong. “REINFORCEMENT LEARNING WITH TIME PERCEPTION.” 2012. Doctoral Dissertation, University of Manchester. Accessed September 16, 2019. http://www.manchester.ac.uk/escholar/uk-ac-man-scw:161470.

MLA Handbook (7^{th} Edition):

Liu, Chong. “REINFORCEMENT LEARNING WITH TIME PERCEPTION.” 2012. Web. 16 Sep 2019.

Vancouver:

Liu C. REINFORCEMENT LEARNING WITH TIME PERCEPTION. [Internet] [Doctoral dissertation]. University of Manchester; 2012. [cited 2019 Sep 16]. Available from: http://www.manchester.ac.uk/escholar/uk-ac-man-scw:161470.

Council of Science Editors:

Liu C. REINFORCEMENT LEARNING WITH TIME PERCEPTION. [Doctoral Dissertation]. University of Manchester; 2012. Available from: http://www.manchester.ac.uk/escholar/uk-ac-man-scw:161470

Georgia Tech

24. Burgain, Pierrick Antoine. On the control of airport departure operations.

Degree: PhD, Electrical and Computer Engineering, 2010, Georgia Tech

URL: http://hdl.handle.net/1853/37261

► This thesis is focused on airport departure operations; its objective is to assign a value to surface surveillance information within a collaborative framework. The research…
(more)

Subjects/Keywords: Taxiway; Airport; Operations; Markov; Markov decision process; Aviation; Air transportation; Optimization; Departure; Queue; Collaborative decision making; Virtual queuing; Pushback; Markov processes; Stochastic models; Mathematical models

APA (6^{th} Edition):

Burgain, P. A. (2010). On the control of airport departure operations. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/37261

Chicago Manual of Style (16^{th} Edition):

Burgain, Pierrick Antoine. “On the control of airport departure operations.” 2010. Doctoral Dissertation, Georgia Tech. Accessed September 16, 2019. http://hdl.handle.net/1853/37261.

MLA Handbook (7^{th} Edition):

Burgain, Pierrick Antoine. “On the control of airport departure operations.” 2010. Web. 16 Sep 2019.

Vancouver:

Burgain PA. On the control of airport departure operations. [Internet] [Doctoral dissertation]. Georgia Tech; 2010. [cited 2019 Sep 16]. Available from: http://hdl.handle.net/1853/37261.

Council of Science Editors:

Burgain PA. On the control of airport departure operations. [Doctoral Dissertation]. Georgia Tech; 2010. Available from: http://hdl.handle.net/1853/37261

Georgia Tech

25. Li, Ran. Performance optimization of complex resource allocation systems.

Degree: PhD, Industrial and Systems Engineering, 2016, Georgia Tech

URL: http://hdl.handle.net/1853/56273

► The typical control objective for a sequential resource allocation system (RAS) is the optimization of some (time-based) performance index, while ensuring the logical/behavioral correctness of…
(more)

Subjects/Keywords: Resource allocation systems; Deadlock avoidance; Petri nets; Markov decision processes; Stochastic approximation; Disaggregation

APA (6^{th} Edition):

Li, R. (2016). Performance optimization of complex resource allocation systems. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/56273

Chicago Manual of Style (16^{th} Edition):

Li, Ran. “Performance optimization of complex resource allocation systems.” 2016. Doctoral Dissertation, Georgia Tech. Accessed September 16, 2019. http://hdl.handle.net/1853/56273.

MLA Handbook (7^{th} Edition):

Li, Ran. “Performance optimization of complex resource allocation systems.” 2016. Web. 16 Sep 2019.

Vancouver:

Li R. Performance optimization of complex resource allocation systems. [Internet] [Doctoral dissertation]. Georgia Tech; 2016. [cited 2019 Sep 16]. Available from: http://hdl.handle.net/1853/56273.

Council of Science Editors:

Li R. Performance optimization of complex resource allocation systems. [Doctoral Dissertation]. Georgia Tech; 2016. Available from: http://hdl.handle.net/1853/56273

University of Cincinnati

26. Ramirez, Jose A. Optimal and Simulation-Based Approximate Dynamic Programming Approaches for the Control of Re-Entrant Line Manufacturing Models.

Degree: PhD, Engineering and Applied Science: Electrical Engineering, 2010, University of Cincinnati

URL: http://rave.ohiolink.edu/etdc/view?acc_num=ucin1282329260

► This dissertation considers the application of simulation-based Approximate Dynamic Programming Approaches (ADP) for near-optimal control of Re-entrant Line Manufacturing (RLM) models. This study departs from…
(more)

Subjects/Keywords: Electrical Engineering; approximate dynamic programming; re-entrant lines; queueing networks; optimal control; Markov decision processes

APA (6^{th} Edition):

Ramirez, J. A. (2010). Optimal and Simulation-Based Approximate Dynamic Programming Approaches for the Control of Re-Entrant Line Manufacturing Models. (Doctoral Dissertation). University of Cincinnati. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=ucin1282329260

Chicago Manual of Style (16^{th} Edition):

Ramirez, Jose A. “Optimal and Simulation-Based Approximate Dynamic Programming Approaches for the Control of Re-Entrant Line Manufacturing Models.” 2010. Doctoral Dissertation, University of Cincinnati. Accessed September 16, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1282329260.

MLA Handbook (7^{th} Edition):

Ramirez, Jose A. “Optimal and Simulation-Based Approximate Dynamic Programming Approaches for the Control of Re-Entrant Line Manufacturing Models.” 2010. Web. 16 Sep 2019.

Vancouver:

Ramirez JA. Optimal and Simulation-Based Approximate Dynamic Programming Approaches for the Control of Re-Entrant Line Manufacturing Models. [Internet] [Doctoral dissertation]. University of Cincinnati; 2010. [cited 2019 Sep 16]. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=ucin1282329260.

Council of Science Editors:

Ramirez JA. Optimal and Simulation-Based Approximate Dynamic Programming Approaches for the Control of Re-Entrant Line Manufacturing Models. [Doctoral Dissertation]. University of Cincinnati; 2010. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=ucin1282329260

Georgia Tech

27. Roberts, David L. Computational techniques for reasoning about and shaping player experiences in interactive narratives.

Degree: PhD, Interactive Computing, 2010, Georgia Tech

URL: http://hdl.handle.net/1853/33910

► Interactive narratives are marked by two characteristics: 1) a space of player interactions, some subset of which are specified as aesthetic goals for the system;…
(more)

Subjects/Keywords: Interactive storytelling; Drama management; Influence; Persuasion; Markov decision processes; Interactive multimedia; Shared virtual environments; Storytelling

APA (6^{th} Edition):

Roberts, D. L. (2010). Computational techniques for reasoning about and shaping player experiences in interactive narratives. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/33910

Chicago Manual of Style (16^{th} Edition):

Roberts, David L. “Computational techniques for reasoning about and shaping player experiences in interactive narratives.” 2010. Doctoral Dissertation, Georgia Tech. Accessed September 16, 2019. http://hdl.handle.net/1853/33910.

MLA Handbook (7^{th} Edition):

Roberts, David L. “Computational techniques for reasoning about and shaping player experiences in interactive narratives.” 2010. Web. 16 Sep 2019.

Vancouver:

Roberts DL. Computational techniques for reasoning about and shaping player experiences in interactive narratives. [Internet] [Doctoral dissertation]. Georgia Tech; 2010. [cited 2019 Sep 16]. Available from: http://hdl.handle.net/1853/33910.

Council of Science Editors:

Roberts DL. Computational techniques for reasoning about and shaping player experiences in interactive narratives. [Doctoral Dissertation]. Georgia Tech; 2010. Available from: http://hdl.handle.net/1853/33910

University of Washington

28. Sinha, Saumya. Robust dynamic optimization: theory and applications.

Degree: PhD, 2018, University of Washington

URL: http://hdl.handle.net/1773/42949

► Many applications in decision-making use a dynamic optimization framework to model a system evolving uncertainly in discrete time, and an agent who chooses actions/controls from…
(more)

Subjects/Keywords: dynamic programming; Markov decision processes; robust optimization; Operations research; Applied mathematics

APA (6^{th} Edition):

Sinha, S. (2018). Robust dynamic optimization: theory and applications. (Doctoral Dissertation). University of Washington. Retrieved from http://hdl.handle.net/1773/42949

Chicago Manual of Style (16^{th} Edition):

Sinha, Saumya. “Robust dynamic optimization: theory and applications.” 2018. Doctoral Dissertation, University of Washington. Accessed September 16, 2019. http://hdl.handle.net/1773/42949.

MLA Handbook (7^{th} Edition):

Sinha, Saumya. “Robust dynamic optimization: theory and applications.” 2018. Web. 16 Sep 2019.

Vancouver:

Sinha S. Robust dynamic optimization: theory and applications. [Internet] [Doctoral dissertation]. University of Washington; 2018. [cited 2019 Sep 16]. Available from: http://hdl.handle.net/1773/42949.

Council of Science Editors:

Sinha S. Robust dynamic optimization: theory and applications. [Doctoral Dissertation]. University of Washington; 2018. Available from: http://hdl.handle.net/1773/42949

Virginia Tech

29. Abdallah AbouSheaisha, Abdallah Sabry. Cross-layer Control for Adaptive Video Streaming over Wireless Access Networks.

Degree: PhD, Electrical and Computer Engineering, 2016, Virginia Tech

URL: http://hdl.handle.net/10919/78844

► Over the last decade, the wide deployment of wireless access technologies (e.g. WiFi, 3G, and LTE) and the remarkable growth in the volume of streaming…
(more)

Subjects/Keywords: Wireless Networks; Cross-layer Optimization; HTTP Adaptive Video Streaming; Markov Decision Processes; Reinforcement Learning

APA (6^{th} Edition):

Abdallah AbouSheaisha, A. S. (2016). Cross-layer Control for Adaptive Video Streaming over Wireless Access Networks. (Doctoral Dissertation). Virginia Tech. Retrieved from http://hdl.handle.net/10919/78844

Chicago Manual of Style (16^{th} Edition):

Abdallah AbouSheaisha, Abdallah Sabry. “Cross-layer Control for Adaptive Video Streaming over Wireless Access Networks.” 2016. Doctoral Dissertation, Virginia Tech. Accessed September 16, 2019. http://hdl.handle.net/10919/78844.

MLA Handbook (7^{th} Edition):

Abdallah AbouSheaisha, Abdallah Sabry. “Cross-layer Control for Adaptive Video Streaming over Wireless Access Networks.” 2016. Web. 16 Sep 2019.

Vancouver:

Abdallah AbouSheaisha AS. Cross-layer Control for Adaptive Video Streaming over Wireless Access Networks. [Internet] [Doctoral dissertation]. Virginia Tech; 2016. [cited 2019 Sep 16]. Available from: http://hdl.handle.net/10919/78844.

Council of Science Editors:

Abdallah AbouSheaisha AS. Cross-layer Control for Adaptive Video Streaming over Wireless Access Networks. [Doctoral Dissertation]. Virginia Tech; 2016. Available from: http://hdl.handle.net/10919/78844

The Ohio State University

30. Jiang, Tianyu. Data-Driven Cyber Vulnerability Maintenance of Network Vulnerabilities with Markov Decision Processes.

Degree: MS, Industrial and Systems Engineering, 2017, The Ohio State University

URL: http://rave.ohiolink.edu/etdc/view?acc_num=osu1494203777781845

► Cyber vulnerability can be exploited by cyber-attackers to achieve valuable information, alter or destroy a cyber-target. Finding a way to generate appropriate cyber vulnerability maintenance…
(more)

Subjects/Keywords: Operations Research; Cyber Attackers; Data-Driven Cyber Vulnerability Maintenance of Network Vulnerabilities; Markov Decision Processes

APA (6^{th} Edition):

Jiang, T. (2017). Data-Driven Cyber Vulnerability Maintenance of Network Vulnerabilities with Markov Decision Processes. (Masters Thesis). The Ohio State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=osu1494203777781845

Chicago Manual of Style (16^{th} Edition):

Jiang, Tianyu. “Data-Driven Cyber Vulnerability Maintenance of Network Vulnerabilities with Markov Decision Processes.” 2017. Masters Thesis, The Ohio State University. Accessed September 16, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1494203777781845.

MLA Handbook (7^{th} Edition):

Jiang, Tianyu. “Data-Driven Cyber Vulnerability Maintenance of Network Vulnerabilities with Markov Decision Processes.” 2017. Web. 16 Sep 2019.

Vancouver:

Jiang T. Data-Driven Cyber Vulnerability Maintenance of Network Vulnerabilities with Markov Decision Processes. [Internet] [Masters thesis]. The Ohio State University; 2017. [cited 2019 Sep 16]. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1494203777781845.

Council of Science Editors:

Jiang T. Data-Driven Cyber Vulnerability Maintenance of Network Vulnerabilities with Markov Decision Processes. [Masters Thesis]. The Ohio State University; 2017. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1494203777781845