You searched for subject: (Stochastic Control). Showing records 1 – 30 of 563 total matches.

University of New South Wales
1.
Wu, Wei.
Limitations of dynamic programming approach: singularity and time inconsistency.
Degree: Mathematics & Statistics, 2016, University of New South Wales
URL: http://handle.unsw.edu.au/1959.4/56208 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:40264/SOURCE02?view=true
Two failures of the dynamic programming (DP) approach to the stochastic optimal control problem are investigated. The first failure arises when we wish to solve a class of certain singular stochastic control problems in continuous time. It has been shown by Lasry and Lions (2000) that this difficulty can be overcome by introducing equivalent standard stochastic control problems. To solve this class of singular stochastic control problems, it remains to solve the equivalent standard stochastic control problems. Since standard stochastic control problems can be solved by applying the DP approach, this resolves the first failure. In the first part of the thesis, we clarify the idea of Lasry and Lions and extend their work to the case of controlled processes with jumps. This is particularly important in financial modelling, where such processes are widely applied. As an application, we apply our result to an optimal trade execution problem studied by Lasry and Lions (2007b). The second failure of the DP approach arises when we wish to solve a multiperiod portfolio selection problem in which a mean-standard-deviation type criterion (a non-separable criterion) is used. We formulate such a problem as a discrete-time stochastic control problem. By adapting a pseudo dynamic programming principle, we obtain a closed-form optimal strategy for investors whose risk tolerances are larger than a lower bound. As a consequence, we develop a multiperiod portfolio selection scheme. The analysis is performed in a market of risky assets only; however, we allow both market transitions and intermediate cash injections and offtakes. This work provides a good basis for future studies of portfolio selection problems with selection criteria chosen from the class of translation-invariant and positive-homogeneous risk measures.
Advisors/Committee Members: Goldys, Beniamin, The University of Sydney, Penev, Spiridon, Mathematics & Statistics, Faculty of Science, UNSW.
Subjects/Keywords: Stochastic optimal control; Dynamic programming; Stochastic control
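For context, the standard DP approach whose limitations this thesis studies solves separable finite-horizon problems by backward induction on the value function. A minimal sketch on an invented finite-state, finite-horizon problem (illustrative only; the states, actions, and costs are made up and are not from the thesis):

```python
import numpy as np

# Toy finite-horizon stochastic control problem solved by backward
# induction (the standard DP recursion for a separable cost criterion).
T, nS, nA = 5, 3, 2
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a] = next-state law
cost = rng.uniform(0.0, 1.0, size=(nS, nA))     # per-stage cost c(s, a)

V = np.zeros((T + 1, nS))             # V[T] = terminal cost, here 0
policy = np.zeros((T, nS), dtype=int)
for t in range(T - 1, -1, -1):
    # DP recursion: Q[s, a] = c(s, a) + E[V_{t+1}(next state)]
    Q = cost + P @ V[t + 1]
    V[t] = Q.min(axis=1)
    policy[t] = Q.argmin(axis=1)
```

The recursion relies on the criterion being additive across stages; for the non-separable mean-standard-deviation criterion treated in the thesis, this backward step is exactly what breaks down.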
APA (6th Edition):
Wu, W. (2016). Limitations of dynamic programming approach: singularity and time inconsistency. (Doctoral Dissertation). University of New South Wales. Retrieved from http://handle.unsw.edu.au/1959.4/56208 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:40264/SOURCE02?view=true
Chicago Manual of Style (16th Edition):
Wu, Wei. “Limitations of dynamic programming approach: singularity and time inconsistency.” 2016. Doctoral Dissertation, University of New South Wales. Accessed January 20, 2021.
http://handle.unsw.edu.au/1959.4/56208 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:40264/SOURCE02?view=true.
MLA Handbook (7th Edition):
Wu, Wei. “Limitations of dynamic programming approach: singularity and time inconsistency.” 2016. Web. 20 Jan 2021.
Vancouver:
Wu W. Limitations of dynamic programming approach: singularity and time inconsistency. [Internet] [Doctoral dissertation]. University of New South Wales; 2016. [cited 2021 Jan 20].
Available from: http://handle.unsw.edu.au/1959.4/56208 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:40264/SOURCE02?view=true.
Council of Science Editors:
Wu W. Limitations of dynamic programming approach: singularity and time inconsistency. [Doctoral Dissertation]. University of New South Wales; 2016. Available from: http://handle.unsw.edu.au/1959.4/56208 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:40264/SOURCE02?view=true

Georgia Tech
2.
Williams, Grady Robert.
Model predictive path integral control: Theoretical foundations and applications to autonomous driving.
Degree: PhD, Computer Science, 2019, Georgia Tech
URL: http://hdl.handle.net/1853/62666
This thesis presents a new approach for stochastic model predictive (optimal) control: model predictive path integral control, which is based on massive parallel sampling of control trajectories. We first show the theoretical foundations of model predictive path integral control, which are based on a combination of path integral control theory and an information-theoretic interpretation of stochastic optimal control. We then apply the method to high-speed autonomous driving on a 1/5-scale vehicle and analyze the performance and robustness of the method. Extensive experimental results are used to identify and solve key problems relating to the robustness of the approach, which leads to a robust stochastic model predictive control algorithm capable of consistently pushing the limits of performance on the 1/5-scale vehicle.
Advisors/Committee Members: Theodorou, Evangelos A. (advisor), Rehg, James M. (committee member), Egerstedt, Magnus (committee member), Boots, Byron (committee member), Todorov, Emanuel (committee member).
Subjects/Keywords: Stochastic optimal control; Autonomous driving
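The parallel-sampling idea behind model predictive path integral (MPPI) control can be illustrated with a minimal sketch of one update step on a toy 1-D double integrator (my own illustrative construction; the dynamics, cost, and parameters are invented and are not the thesis code): sample K perturbed control sequences, roll them out, then reweight them exponentially by trajectory cost.

```python
import numpy as np

# One MPPI-style update on a toy 1-D double integrator driven to a target.
rng = np.random.default_rng(1)
K, H, dt, lam, sigma = 512, 20, 0.05, 1.0, 0.5   # samples, horizon, step, temperature, noise
target = 1.0

def rollout_cost(u_seq):
    """u_seq: (K, H) control samples -> (K,) trajectory costs."""
    pos = np.zeros(K); vel = np.zeros(K); cost = np.zeros(K)
    for t in range(H):
        vel = vel + u_seq[:, t] * dt
        pos = pos + vel * dt
        cost += (pos - target) ** 2 * dt        # running state cost
    cost += 10.0 * (pos - target) ** 2          # terminal cost
    return cost

u_nom = np.zeros(H)                             # nominal control sequence
eps = rng.normal(0.0, sigma, size=(K, H))       # sampled perturbations
costs = rollout_cost(u_nom + eps)
w = np.exp(-(costs - costs.min()) / lam)        # exponential cost weights
w /= w.sum()
u_new = u_nom + w @ eps                         # MPPI weighted update
```

In a full controller this update runs in receding horizon: apply `u_new[0]`, shift the sequence, and re-sample at the next step; the massive sampling is what makes GPU parallelization natural.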
APA (6th Edition):
Williams, G. R. (2019). Model predictive path integral control: Theoretical foundations and applications to autonomous driving. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/62666
Chicago Manual of Style (16th Edition):
Williams, Grady Robert. “Model predictive path integral control: Theoretical foundations and applications to autonomous driving.” 2019. Doctoral Dissertation, Georgia Tech. Accessed January 20, 2021.
http://hdl.handle.net/1853/62666.
MLA Handbook (7th Edition):
Williams, Grady Robert. “Model predictive path integral control: Theoretical foundations and applications to autonomous driving.” 2019. Web. 20 Jan 2021.
Vancouver:
Williams GR. Model predictive path integral control: Theoretical foundations and applications to autonomous driving. [Internet] [Doctoral dissertation]. Georgia Tech; 2019. [cited 2021 Jan 20].
Available from: http://hdl.handle.net/1853/62666.
Council of Science Editors:
Williams GR. Model predictive path integral control: Theoretical foundations and applications to autonomous driving. [Doctoral Dissertation]. Georgia Tech; 2019. Available from: http://hdl.handle.net/1853/62666

University of Toronto
3.
Miklyukh, Volodymyr.
Optimal Inventory Policy Through Dual Sourcing.
Degree: 2017, University of Toronto
URL: http://hdl.handle.net/1807/79155
We consider a risk-averse firm that uses dual sourcing for perishable or seasonal goods with uncertain customer demand. Using real options theory, we provide two models aimed at determining optimal order quantities to maximize the firm's expected profit. Furthermore, we can treat demand as an observable process correlated with a traded asset, which can be hedged to reduce profit uncertainty. A single offshore, single local order period (SOSLOP) model provides a pseudo-analytical solution which can be easily solved to determine optimal offshore and local order quantities based on the manufacturers' lead times, and a more realistic single offshore, multiple local order period (SOMLOP) model uses numerical methods to determine optimal order quantities. Finally, a method for matching distributions of expected demand based on managerial estimates can be applied to any of the aforementioned models and easily adopted in industry.
M.A.S.
Advisors/Committee Members: Lawryshyn, Yuri, Chemical Engineering Applied Chemistry.
Subjects/Keywords: Inventory Control; Optimization; Stochastic; 0546
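The dual-sourcing trade-off can be illustrated with a generic two-channel newsvendor toy (an invented stand-in, not the thesis's SOSLOP or SOMLOP models): a cheap offshore order is placed before demand is known, and any shortfall is later covered by a more expensive local order. Marginal analysis gives the optimal offshore quantity as a critical fractile.

```python
import numpy as np

# Toy dual-sourcing newsvendor: offshore order q at unit cost c_off is
# committed before demand D is observed; shortfall (D - q)^+ is covered
# by a local channel at cost c_loc > c_off.  Marginal analysis:
# raising q by one unit costs c_off and saves c_loc w.p. P(D > q),
# so the optimum satisfies P(D > q*) = c_off / c_loc.
rng = np.random.default_rng(4)
p, c_off, c_loc = 4.0, 1.0, 2.0                 # price, offshore cost, local cost
D = rng.normal(100.0, 20.0, size=100_000)       # common demand scenarios

def expected_profit(q):
    shortfall = np.maximum(D - q, 0.0)          # units bought locally
    return np.mean(p * D - c_off * q - c_loc * shortfall)

grid = np.arange(50.0, 151.0)
best_q = grid[np.argmax([expected_profit(q) for q in grid])]
# Here c_off / c_loc = 0.5, so q* is the demand median, 100.
```

Using common demand draws for every candidate `q` (common random numbers) keeps the Monte Carlo comparison across the grid stable.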
APA (6th Edition):
Miklyukh, V. (2017). Optimal Inventory Policy Through Dual Sourcing. (Masters Thesis). University of Toronto. Retrieved from http://hdl.handle.net/1807/79155
Chicago Manual of Style (16th Edition):
Miklyukh, Volodymyr. “Optimal Inventory Policy Through Dual Sourcing.” 2017. Masters Thesis, University of Toronto. Accessed January 20, 2021.
http://hdl.handle.net/1807/79155.
MLA Handbook (7th Edition):
Miklyukh, Volodymyr. “Optimal Inventory Policy Through Dual Sourcing.” 2017. Web. 20 Jan 2021.
Vancouver:
Miklyukh V. Optimal Inventory Policy Through Dual Sourcing. [Internet] [Masters thesis]. University of Toronto; 2017. [cited 2021 Jan 20].
Available from: http://hdl.handle.net/1807/79155.
Council of Science Editors:
Miklyukh V. Optimal Inventory Policy Through Dual Sourcing. [Masters Thesis]. University of Toronto; 2017. Available from: http://hdl.handle.net/1807/79155
4.
Gaïgi, M'hamed.
Problème de contrôle stochastique sous contraintes de risque de liquidité : Stochastic control problems with liquidity risk constraints.
Degree: Docteur es, Mathématiques appliquées, 2015, Evry-Val d'Essonne; École nationale d'ingénieurs de Tunis (Tunisie)
URL: http://www.theses.fr/2015EVRY0001
This thesis studies some stochastic control problems in a context of liquidity risk and price impact. It consists of four chapters. The second chapter is devoted to the modeling aspects of a market-making problem in a liquidity risk framework under inventory constraints and regime switching. This formulation can be seen as an extension of previous studies on the subject. The main result is the characterization of the value functions as the unique viscosity solutions of the associated Hamilton-Jacobi-Bellman system. We further enrich our study with some numerical results. In the third chapter, we introduce a numerical scheme to solve an impulse control problem under state constraints arising from optimal portfolio selection under liquidity risk and price impact. We show that the value function can be obtained as the limit of an iterative procedure in which each step is an optimal stopping problem, and we use a numerical approximation algorithm based on optimal quantization to compute the value function and the control policy. Convergence of the numerical scheme is obtained via monotonicity, stability, and consistency criteria. In the fourth chapter, we study a coupled singular and impulse control problem in an illiquid market. We propose a mathematical formulation to model the dividend distribution and investment policy of a firm subject to liquidity constraints. We show that, under transaction costs and an impact on the price of the illiquid assets, the firm's value function is the unique viscosity solution of a Hamilton-Jacobi-Bellman equation. We also propose an iterative numerical method to compute the optimal buying, selling, and dividend distribution strategy.
Advisors/Committee Members: Crépey, Stéphane (thesis director), Mnif, Mohamed (thesis director), Ly Vath, Vathana (thesis director).
Subjects/Keywords: Contrôle stochastique; Stochastic control problem
APA (6th Edition):
Gaïgi, M. (2015). Problème de contrôle stochastique sous contraintes de risque de liquidité : Stochastic control problems with liquidity risk constraints. (Doctoral Dissertation). Evry-Val d'Essonne; École nationale d'ingénieurs de Tunis (Tunisie). Retrieved from http://www.theses.fr/2015EVRY0001
Chicago Manual of Style (16th Edition):
Gaïgi, M'hamed. “Problème de contrôle stochastique sous contraintes de risque de liquidité : Stochastic control problems with liquidity risk constraints.” 2015. Doctoral Dissertation, Evry-Val d'Essonne; École nationale d'ingénieurs de Tunis (Tunisie). Accessed January 20, 2021.
http://www.theses.fr/2015EVRY0001.
MLA Handbook (7th Edition):
Gaïgi, M'hamed. “Problème de contrôle stochastique sous contraintes de risque de liquidité : Stochastic control problems with liquidity risk constraints.” 2015. Web. 20 Jan 2021.
Vancouver:
Gaïgi M. Problème de contrôle stochastique sous contraintes de risque de liquidité : Stochastic control problems with liquidity risk constraints. [Internet] [Doctoral dissertation]. Evry-Val d'Essonne; École nationale d'ingénieurs de Tunis (Tunisie); 2015. [cited 2021 Jan 20].
Available from: http://www.theses.fr/2015EVRY0001.
Council of Science Editors:
Gaïgi M. Problème de contrôle stochastique sous contraintes de risque de liquidité : Stochastic control problems with liquidity risk constraints. [Doctoral Dissertation]. Evry-Val d'Essonne; École nationale d'ingénieurs de Tunis (Tunisie); 2015. Available from: http://www.theses.fr/2015EVRY0001

Clemson University
5.
Saine, Mary Elizabeth.
Scheduling Control for Many-Server Queues When Customers Change Class.
Degree: MS, Mathematical Sciences, 2020, Clemson University
URL: https://tigerprints.clemson.edu/all_theses/3270
We consider a two-class, many-server queueing system which allows for customer abandonment and class changes. With the objective of minimizing the long-run average holding cost, we formulate a stochastic queueing control problem. Instead of solving this directly, we apply a fluid scaling to obtain a deterministic counterpart to the problem. By considering the equilibrium of the deterministic solution, we can solve the resulting control problem, referred to as the equilibrium control problem (ECP), and use its solution to propose a priority policy for the original stochastic queueing system. We prove that in an overloaded system, under a fluid scaling, our policy is asymptotically optimal, as it attains the lower bound formed by the solution of the ECP.
Advisors/Committee Members: Xin Liu, Peter Kiessler, Brian Fralix.
Subjects/Keywords: Control; Queue; Scheduling; Stochastic
APA (6th Edition):
Saine, M. E. (2020). Scheduling Control for Many-Server Queues When Customers Change Class. (Masters Thesis). Clemson University. Retrieved from https://tigerprints.clemson.edu/all_theses/3270
Chicago Manual of Style (16th Edition):
Saine, Mary Elizabeth. “Scheduling Control for Many-Server Queues When Customers Change Class.” 2020. Masters Thesis, Clemson University. Accessed January 20, 2021.
https://tigerprints.clemson.edu/all_theses/3270.
MLA Handbook (7th Edition):
Saine, Mary Elizabeth. “Scheduling Control for Many-Server Queues When Customers Change Class.” 2020. Web. 20 Jan 2021.
Vancouver:
Saine ME. Scheduling Control for Many-Server Queues When Customers Change Class. [Internet] [Masters thesis]. Clemson University; 2020. [cited 2021 Jan 20].
Available from: https://tigerprints.clemson.edu/all_theses/3270.
Council of Science Editors:
Saine ME. Scheduling Control for Many-Server Queues When Customers Change Class. [Masters Thesis]. Clemson University; 2020. Available from: https://tigerprints.clemson.edu/all_theses/3270

University of Texas – Austin
6.
Kontaxis, Andrew.
Asymptotics for optimal investment with high-water mark fee.
Degree: PhD, Mathematics, 2015, University of Texas – Austin
URL: http://hdl.handle.net/2152/31516
This dissertation studies the problem of optimal investment in a fund charging high-water mark fees. We consider a market consisting of a riskless money-market account and a fund charging high-water mark fees at rate λ, with share price given exogenously as a geometric Brownian motion. A small investor invests in this market on an infinite time horizon and seeks to maximize expected utility from the consumption rate. Utility is taken to be constant relative risk aversion (CRRA). In this setting, we study the asymptotic behavior of the value function for small values of the fee rate λ. In particular, we determine the first and second derivatives of the value function with respect to λ. We then exhibit for each λ explicit sub-optimal feedback investment and consumption strategies with payoffs that match the value function up to second order in λ.
Advisors/Committee Members: Sîrbu, Mihai (advisor), Gamba, Irene M (committee member), Mendoza-Arriaga, Rafael (committee member), Zariphopoulou, Thaleia (committee member), Zitkovic, Gordan (committee member).
Subjects/Keywords: Stochastic control; Mathematical finance
APA (6th Edition):
Kontaxis, A. (2015). Asymptotics for optimal investment with high-water mark fee. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/31516
Chicago Manual of Style (16th Edition):
Kontaxis, Andrew. “Asymptotics for optimal investment with high-water mark fee.” 2015. Doctoral Dissertation, University of Texas – Austin. Accessed January 20, 2021.
http://hdl.handle.net/2152/31516.
MLA Handbook (7th Edition):
Kontaxis, Andrew. “Asymptotics for optimal investment with high-water mark fee.” 2015. Web. 20 Jan 2021.
Vancouver:
Kontaxis A. Asymptotics for optimal investment with high-water mark fee. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2015. [cited 2021 Jan 20].
Available from: http://hdl.handle.net/2152/31516.
Council of Science Editors:
Kontaxis A. Asymptotics for optimal investment with high-water mark fee. [Doctoral Dissertation]. University of Texas – Austin; 2015. Available from: http://hdl.handle.net/2152/31516

University of Texas – Austin
7.
Fayvisovich, Roman.
Martingale-generated control structures and a framework for the dynamic programming principle.
Degree: PhD, Mathematics, 2017, University of Texas – Austin
URL: http://hdl.handle.net/2152/62105
This thesis constructs an abstract framework in which the dynamic programming principle (DPP) can be proven for a broad range of stochastic control problems. Using a distributional formulation of stochastic control, we prove the DPP for problems that optimize over sets of martingale measures. As an application, we use the classical martingale problem to prove the DPP for weak solutions of controlled diffusions, and use it to show that the value function is a viscosity solution of the associated Hamilton-Jacobi-Bellman equation.
Advisors/Committee Members: Žitković, Gordan (advisor), Sirbu, Mihai (committee member), Zariphopoulou, Thaleia (committee member), Larsen, Kasper (committee member).
Subjects/Keywords: Stochastic control; Dynamic programming
APA (6th Edition):
Fayvisovich, R. (2017). Martingale-generated control structures and a framework for the dynamic programming principle. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/62105
Chicago Manual of Style (16th Edition):
Fayvisovich, Roman. “Martingale-generated control structures and a framework for the dynamic programming principle.” 2017. Doctoral Dissertation, University of Texas – Austin. Accessed January 20, 2021.
http://hdl.handle.net/2152/62105.
MLA Handbook (7th Edition):
Fayvisovich, Roman. “Martingale-generated control structures and a framework for the dynamic programming principle.” 2017. Web. 20 Jan 2021.
Vancouver:
Fayvisovich R. Martingale-generated control structures and a framework for the dynamic programming principle. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2017. [cited 2021 Jan 20].
Available from: http://hdl.handle.net/2152/62105.
Council of Science Editors:
Fayvisovich R. Martingale-generated control structures and a framework for the dynamic programming principle. [Doctoral Dissertation]. University of Texas – Austin; 2017. Available from: http://hdl.handle.net/2152/62105

University of Texas – Austin
8.
-4652-8789.
Optimal investment with high-watermark fee in a multi-dimensional jump diffusion model.
Degree: PhD, Mathematics, 2017, University of Texas – Austin
URL: http://hdl.handle.net/2152/63034
This dissertation studies the problem of optimal investment and consumption in a market in which there are multiple risky assets. Among those risky assets, there is a fund charging high-watermark fees and many other stocks, with share prices given exogenously as a multi-dimensional geometric Lévy process. Additionally, there is a riskless money market account in this market. A small investor invests and consumes simultaneously on an infinite time horizon, and seeks to maximize expected utility from consumption. Utility is taken to be constant relative risk aversion (CRRA). In this setting, we first employ the dynamic programming principle to write down the Hamilton-Jacobi-Bellman (HJB) integro-differential equation associated with this stochastic control problem. Then, we proceed to show that a classical solution of the HJB equation corresponds to the value function of the stochastic control problem, and hence the optimal strategies are given in feedback form in terms of the value function. Moreover, we provide numerical results to investigate the impact of various parameters on the investor's strategies.
Advisors/Committee Members: Sîrbu, Mihai (advisor), Mueller, Peter (committee member), Tompaidis, Efstathios (committee member), Zariphopoulou, Thaleia (committee member), Zitkovic, Gordan (committee member).
Subjects/Keywords: Mathematical finance; Stochastic control
APA (6th Edition):
-4652-8789. (2017). Optimal investment with high-watermark fee in a multi-dimensional jump diffusion model. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/63034
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Chicago Manual of Style (16th Edition):
-4652-8789. “Optimal investment with high-watermark fee in a multi-dimensional jump diffusion model.” 2017. Doctoral Dissertation, University of Texas – Austin. Accessed January 20, 2021.
http://hdl.handle.net/2152/63034.
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
MLA Handbook (7th Edition):
-4652-8789. “Optimal investment with high-watermark fee in a multi-dimensional jump diffusion model.” 2017. Web. 20 Jan 2021.
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Vancouver:
-4652-8789. Optimal investment with high-watermark fee in a multi-dimensional jump diffusion model. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2017. [cited 2021 Jan 20].
Available from: http://hdl.handle.net/2152/63034.
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete
Council of Science Editors:
-4652-8789. Optimal investment with high-watermark fee in a multi-dimensional jump diffusion model. [Doctoral Dissertation]. University of Texas – Austin; 2017. Available from: http://hdl.handle.net/2152/63034
Note: this citation may be lacking information needed for this citation format:
Author name may be incomplete

Queens University
9.
Johnston, Andrew.
Networked Control Systems with Unbounded Noise under Information Constraints.
Degree: Mathematics and Statistics, 2012, Queens University
URL: http://hdl.handle.net/1974/7684
We investigate the stabilization of unstable multidimensional partially observed single-station, multi-sensor (single-controller) and multi-controller (single-sensor) linear systems controlled over discrete noiseless channels under fixed-rate information constraints. Stability is achieved under communication requirements that are asymptotically tight in the limit of large sampling periods. Through the use of similarity transforms, sampling, and random-time drift conditions we obtain a coding and control policy leading to the existence of a unique invariant distribution and a finite second moment for the sampled state. We use a vector stabilization scheme in which all modes of the linear system visit a compact set together infinitely often.
Subjects/Keywords: stochastic control; quantizers; communication constraints; decentralized control
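The rate-constrained stabilization setting can be illustrated by a toy scalar example (an independent sketch, not from the thesis): an unstable plant x⁺ = a·x + u + w whose controller sees the state only through an R-bit uniform quantizer. With enough bits (more than log₂ a, per the classical data-rate theorem), quantized feedback keeps the state in a bounded set. For simplicity this sketch uses bounded noise and a static quantizer, unlike the unbounded-noise, adaptive setting the thesis treats.

```python
import numpy as np

# Rate-limited control of an unstable scalar plant: the controller only
# sees q(x), an R-bit uniform quantization of x on [-M, M].  With a = 2
# the data-rate theorem needs more than log2(a) = 1 bit; R = 4 suffices.
a, M, R = 2.0, 1.0, 4
delta = 2 * M / 2 ** R                      # quantizer cell width

def quantize(x):
    """Map x in [-M, M] to the centre of its quantizer cell."""
    idx = np.clip(np.floor((x + M) / delta), 0, 2 ** R - 1)
    return -M + (idx + 0.5) * delta

rng = np.random.default_rng(2)
x = 0.5
trace = []
for _ in range(200):
    u = -a * quantize(x)                    # feedback through the channel
    w = rng.uniform(-0.1, 0.1)              # bounded disturbance
    x = a * x + u + w                       # = a * (x - q(x)) + w
    trace.append(abs(x))
```

Since |x − q(x)| ≤ delta/2 inside [−M, M], the closed loop satisfies |x⁺| ≤ a·delta/2 + 0.1 ≈ 0.225, so [−M, M] is invariant; handling unbounded noise is precisely where the zooming/drift techniques of the thesis come in.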
APA (6th Edition):
Johnston, A. (2012). Networked Control Systems with Unbounded Noise under Information Constraints. (Thesis). Queens University. Retrieved from http://hdl.handle.net/1974/7684
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Johnston, Andrew. “Networked Control Systems with Unbounded Noise under Information Constraints.” 2012. Thesis, Queens University. Accessed January 20, 2021.
http://hdl.handle.net/1974/7684.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Johnston, Andrew. “Networked Control Systems with Unbounded Noise under Information Constraints.” 2012. Web. 20 Jan 2021.
Vancouver:
Johnston A. Networked Control Systems with Unbounded Noise under Information Constraints. [Internet] [Thesis]. Queens University; 2012. [cited 2021 Jan 20].
Available from: http://hdl.handle.net/1974/7684.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Johnston A. Networked Control Systems with Unbounded Noise under Information Constraints. [Thesis]. Queens University; 2012. Available from: http://hdl.handle.net/1974/7684
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Stellenbosch University
10.
Ndounkeu, Ludovic Tangpi.
Optimal cross hedging of Insurance derivatives using quadratic BSDEs.
Degree: MSc, Mathematical Sciences, 2011, Stellenbosch University
URL: http://hdl.handle.net/10019.1/17950
ENGLISH ABSTRACT: We consider the utility portfolio optimization problem of an investor whose activities are influenced by an exogenous financial risk (such as bad weather or an energy shortage) in an incomplete financial market. We work with a fairly general non-Markovian model that allows stochastic correlations between the underlying assets. This important problem in finance and insurance is tackled by means of backward stochastic differential equations (BSDEs), which have been shown to be powerful tools in stochastic control. To stress the importance and omnipresence of BSDEs in stochastic control, we present three methods to transform the control problem into a BSDE: the martingale optimality principle introduced by Davis, the martingale representation theorem, and a method based on the Itô-Ventzell formula. These approaches enable us to work with portfolio constraints described by closed, not necessarily convex sets and to get around the classical duality theory of convex analysis. The solution of the optimization problem can then simply be read off from the solution of the BSDE. An interesting feature of each of these approaches is that the generator of the BSDE characterizing the control problem has quadratic growth and depends on the form of the constraint set. We review some recent advances in the theory of quadratic BSDEs and its applications. There is no general existence result for multidimensional quadratic BSDEs; in the one-dimensional case, existence and uniqueness strongly depend on the form of the terminal condition. Other topics of investigation are measure solutions of BSDEs, notably measure solutions of BSDEs with jumps, and numerical approximations. We extend the equivalence result of Ankirchner et al. (2009) between existence of classical solutions and existence of measure solutions to the case of BSDEs driven by a Poisson process with a bounded terminal condition. We obtain a numerical scheme to approximate measure solutions; in fact, the existing self-contained construction of measure solutions gives rise to a numerical scheme for some classes of Lipschitz BSDEs. Two numerical schemes for quadratic BSDEs introduced in Imkeller et al. (2010), based respectively on the Cole-Hopf transformation and a truncation procedure, are implemented and the results compared.
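As background for the Cole-Hopf transformation mentioned in the abstract: for a quadratic BSDE with driver f(z) = (γ/2)|z|², the transformation P = exp(γY) linearizes the equation, so the initial value reduces to Y₀ = (1/γ) log E[exp(γξ)]. The sketch below (not from the thesis; the bounded terminal condition ξ = cos(W_T) and all parameter values are illustrative) checks this reduction with plain Monte Carlo.

```python
import numpy as np

# Quadratic BSDE dY = -(gamma/2)|Z|^2 dt + Z dW, Y_T = xi = cos(W_T).
# Cole-Hopf: P_t = exp(gamma * Y_t) solves a linear BSDE, hence
# Y_0 = (1/gamma) * log E[exp(gamma * xi)], a plain expectation.
rng = np.random.default_rng(0)
gamma, T, n_paths = 2.0, 1.0, 200_000

w_T = rng.normal(0.0, np.sqrt(T), n_paths)   # Brownian motion sampled at T
xi = np.cos(w_T)                             # bounded terminal condition
y0 = np.log(np.mean(np.exp(gamma * xi))) / gamma
```

By Jensen's inequality, y0 must exceed E[ξ] = e^(−T/2) ≈ 0.61 while staying below the bound max ξ = 1, which gives a quick sanity check on the estimate.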
Keywords: BSDE, quadratic growth, measure solutions, martingale theory,
numerical scheme, indifference pricing and hedging, non-tradable underlying,
defaultable claim, utility maximization.
Advisors/Committee Members: Ghomrasni, Raouf, Stellenbosch University. Faculty of Science. Dept. of Mathematical Sciences.
Subjects/Keywords: Mathematics; Backward stochastic differential equations; Stochastic control; Insurance derivatives; Cross hedging
APA (6th Edition):
Ndounkeu, L. T. (2011). Optimal cross hedging of Insurance derivatives using quadratic BSDEs. (Masters Thesis). Stellenbosch University. Retrieved from http://hdl.handle.net/10019.1/17950

Columbia University
11.
Bhat, Nikhil.
Tractable approximation algorithms for high dimensional sequential optimization problems.
Degree: 2016, Columbia University
URL: https://doi.org/10.7916/D8JQ10W8
Sequential decision-making problems are ubiquitous in a number of research areas such as operations research, finance, engineering and computer science. The main challenge with these problems comes from the fact that, first, there is uncertainty about the future and, second, decisions have to be made sequentially over a period of time. These problems are in many cases modeled as Markov Decision Processes (MDPs). Most real-life MDPs are 'high dimensional' in nature, making them challenging from a numerical point of view. We consider a number of such high-dimensional MDPs. In some cases such problems can be approximately solved using approximate dynamic programming; in other cases problem-specific analysis can be used to devise tractable policies that are near-optimal. In Chapter 2, we present a novel and practical non-parametric approximate dynamic programming (ADP) algorithm that enjoys graceful, dimension-independent approximation and sample complexity guarantees. In particular, we establish both theoretically and computationally that our proposal can serve as a viable replacement for state-of-the-art parametric ADP algorithms, freeing the designer from carefully specifying an approximation architecture. We accomplish this by 'kernelizing' a recent mathematical program for ADP (the 'smoothed' approximate LP) proposed by [Desai et al., 2011]. In Chapter 3, we consider a class of stochastic control problems where the action space at each time can be described by a class of matching or, more generally, network flow polytopes. Special cases of this class of dynamic matching problems include many problems that are well studied in the literature, such as: (i) online keyword matching in Internet advertising (the adwords problem); (ii) the bipartite matching of donated kidneys from cadavers to recipients; and (iii) the allocation of donated kidneys through exchanges over cycles of live donor-patient pairs.
We provide an approximate dynamic programming (ADP) algorithm for dynamic matching with stochastic arrivals and departures. Our framework is more general than the methods prevalent in the literature in that it is applicable to a broad range of problems characterized by a variety of action polytopes and generic arrival and departure processes. In Chapter 4, we consider the problem of A-B testing when the impact of the treatment is marred by a large number of covariates. Randomization can be highly inefficient in such settings, and thus we consider the problem of optimally allocating test subjects to either treatment with a view to maximizing the efficiency of our estimate of the treatment effect. Our main contribution is a tractable algorithm for this problem in the online setting, where subjects arrive, and must be assigned, sequentially. We characterize the value of optimized allocations relative to randomized allocations and show that this value grows large as the number of covariates grows. In particular, we show that there is a lot to be gained from 'optimizing' the process of A-B testing relative to the simple…
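For context on what ADP methods approximate: exact dynamic programming computes the value function of an MDP by iterating the Bellman operator to its fixed point. A minimal sketch on a made-up 2-state, 2-action MDP (the transition matrix, rewards, and discount factor are illustrative, not from the dissertation):

```python
import numpy as np

# Toy MDP: 2 states, 2 actions. P[a, s, s'] = transition probability,
# R[a, s] = immediate reward for taking action a in state s.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],      # action 0
              [[0.5, 0.5], [0.6, 0.4]]])     # action 1
R = np.array([[1.0, 0.0],
              [0.5, 0.8]])
gamma = 0.9                                  # discount factor

V = np.zeros(2)
for _ in range(1000):                        # value iteration to the fixed point
    Q = R + gamma * (P @ V)                  # Q[a, s] = R[a,s] + gamma * E[V(s')]
    V_new = Q.max(axis=0)                    # Bellman optimality backup
    if np.max(np.abs(V_new - V)) < 1e-12:
        break
    V = V_new
policy = Q.argmax(axis=0)                    # greedy policy w.r.t. converged V
```

The curse of dimensionality the abstract refers to is visible here: the table V has one entry per state, so this exact recursion is infeasible once the state space is large — which is what the kernelized smoothed-LP approach is designed to sidestep.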
Subjects/Keywords: Dynamic programming; Stochastic approximation; Stochastic control theory; Markov processes; Operations research
APA (6th Edition):
Bhat, N. (2016). Tractable approximation algorithms for high dimensional sequential optimization problems. (Doctoral Dissertation). Columbia University. Retrieved from https://doi.org/10.7916/D8JQ10W8

University of Notre Dame
12.
Fernando Antonio Garcia.
Valuation of Chemical Operations Under Uncertainty.
Degree: Chemical Engineering, 2015, University of Notre Dame
URL: https://curate.nd.edu/show/s7526972c0m
This dissertation uses process synthesis formulations to develop models that explain how to value chemical processes and minimize financial risk. One of the main goals of this work is to transition from traditional deterministic models for price and demand to ones that capture the presence of uncertainty, in order to provide plant owners and investors with financially sound managing/investing strategies. Three methodologies are proposed as part of the approach to value chemical processes. The first is the construction of a one-period replicating portfolio using the binomial asset pricing model as a basis to describe chemical price dynamics. Additionally, it will be shown that when constructing a portfolio where the number of linearly independent asset prices exceeds the number of future states that these prices can attain, the value of the process will not be unique. The second procedure introduces the second-order stochastic dominance (SSD) criterion, which provides a less conservative bound on the process cost. The criterion incorporates information about the investor's attitude towards risk; SSD also generalizes a widely used measure of risk, Value at Risk (VaR). Finally, this analysis will be extended to multiple time periods, which will provide the investor with the ability to make decisions at different stages of the investment horizon. This will be achieved primarily with the development of a real-time optimization framework known as Model Predictive Control (MPC). The formulation will be stochastic and subject to Conditional Value at Risk (CVaR) as a risk management technique.
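The one-period replicating portfolio in the binomial model amounts to solving a 2×2 linear system: choose share and bond holdings that match the payoff in both future states; the portfolio's cost is then the no-arbitrage value. A minimal sketch with illustrative numbers (S0, u, d, r, and the call strike are assumptions, not taken from the dissertation):

```python
# One-period binomial model: stock S0 -> u*S0 (up) or d*S0 (down);
# the riskless bond grows by factor (1 + r).
S0, u, d, r = 100.0, 1.2, 0.8, 0.05

def replicate(payoff_up, payoff_dn):
    """Solve delta*u*S0 + b*(1+r) = payoff_up,
             delta*d*S0 + b*(1+r) = payoff_dn
    for stock holding delta and bond holding b."""
    delta = (payoff_up - payoff_dn) / ((u - d) * S0)
    b = (payoff_up - delta * u * S0) / (1 + r)
    return delta, b, delta * S0 + b          # holdings and replication cost

K = 100.0                                     # strike of a call-style payoff
delta, b, price = replicate(max(u * S0 - K, 0.0), max(d * S0 - K, 0.0))

# Cross-check: the same value via the risk-neutral probability q.
q = ((1 + r) - d) / (u - d)
rn_price = (q * max(u * S0 - K, 0.0) + (1 - q) * max(d * S0 - K, 0.0)) / (1 + r)
```

With two independent assets and two future states the system is square and the value is unique; the abstract's non-uniqueness observation corresponds to the case where the number of independent asset prices exceeds the number of attainable states.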
Advisors/Committee Members: Hsueh Chia Chang, Committee Member, Mark Stadtherr, Committee Member, Jeffrey Kantor, Committee Chair, Eduardo Wolf, Committee Member.
Subjects/Keywords: Simple Process; Valuation; Second Order Stochastic Dominance; Stochastic Model Predictive Control
APA (6th Edition):
Garcia, F. A. (2015). Valuation of Chemical Operations Under Uncertainty. (Thesis). University of Notre Dame. Retrieved from https://curate.nd.edu/show/s7526972c0m
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

University College Cork
13.
Visentin, Andrea.
Computing policy parameters for stochastic inventory control using stochastic dynamic programming approaches.
Degree: 2020, University College Cork
URL: http://hdl.handle.net/10468/10601
The objective of this work is to introduce techniques for the computation of optimal and near-optimal inventory control policy parameters for the stochastic inventory control problem under Scarf's setting. A common aspect of the solutions presented herein is the use of stochastic dynamic programming, a mathematical programming technique introduced by Bellman. Stochastic dynamic programming is hybridised with branch-and-bound, binary search, constraint programming and other computational techniques to develop innovative and competitive solutions.
In this work, the classic single-item, single-location inventory control problem with penalty cost under independent stochastic demand is extended to model a fixed review cost. This cost is charged when the inventory level is assessed at the beginning of a period; this operation is costly in practice, and including it can lead to significant savings. It also makes it possible to model an order cancellation penalty charge.
The first contribution presented here is the first stochastic dynamic programming formulation that captures Bookbinder and Tan's static-dynamic uncertainty control policy with penalty cost. Numerous techniques are available in the literature to compute such parameters; however, they all make assumptions on the demand probability distribution. This technique has many similarities to Scarf's stochastic dynamic programming formulation, and it does not require any external solver to be deployed. Memoisation and binary search techniques are deployed to improve computational performance. Extensive computational studies show that this new model has a tighter optimality gap compared to the state of the art.
The second contribution is the first procedure to compute cost-optimal parameters for the well-known (R, s, S) policy. Practitioners widely use such a policy; however, the determination of its parameters is considered computationally prohibitive. A technique that hybridises stochastic dynamic programming and branch-and-bound is presented, alongside computational enhancements. Computing the optimal policy allows the determination of optimality gaps for future heuristics. This approach can solve instances of considerable size, making it usable by practitioners. The computational study shows the cost reduction that such a system can provide.
Thirdly, this work presents the first heuristics for determining near-optimal parameters for the (R, s, S) policy. The first is an algorithm that formally models the (R, s, S) policy computation in the form of a functional equation. The second is a heuristic formed by a hybridisation of (R, S) and (s, S) policy parameter solvers. These heuristics can compute near-optimal parameters in a fraction of the time required by the exact methods, and they can be used to speed up the optimal branch-and-bound technique.
The last contribution is the introduction of a technique to encode dynamic programming in constraint programming. Constraint programming provides the user with an expressive modelling language…
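The backward functional-equation style used throughout the thesis can be illustrated on a deliberately tiny inventory model. The sketch below is not the thesis's model (it uses lost sales rather than Scarf's backlogging, and all costs, the demand distribution, and the horizon are made-up values); it only shows the shape of the stochastic dynamic programming recursion, with memoisation as mentioned in the abstract.

```python
from functools import lru_cache

# Toy finite-horizon stochastic inventory DP (illustrative parameters):
# fixed ordering cost K, unit holding cost h, unit shortage penalty p.
K, h, p = 5.0, 1.0, 10.0
demand = {0: 0.25, 1: 0.5, 2: 0.25}          # one-period demand pmf
T, MAX_INV = 3, 6                            # horizon length, inventory cap

@lru_cache(maxsize=None)                     # memoisation of (t, inv) states
def solve(t, inv):
    """Return (optimal expected cost-to-go, optimal order qty) at period t."""
    if t == T:
        return 0.0, 0
    best = (float("inf"), 0)
    for q in range(MAX_INV - inv + 1):       # candidate order quantities
        c = K if q > 0 else 0.0
        for d, prob in demand.items():
            level = inv + q - d              # net inventory after demand
            stage = h * max(level, 0) + p * max(-level, 0)
            c += prob * (stage + solve(t + 1, max(level, 0))[0])  # lost sales
        if c < best[0]:
            best = (c, q)
    return best

cost0, q0 = solve(0, 0)                      # optimal cost and first-period order
```

Never ordering costs p·E[d] = 10 per period here (30 in total), so the optimized policy must do strictly better; the thesis's contribution is computing such policies, restricted to structured forms like (R, s, S), at practical scale.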
Advisors/Committee Members: Prestwich, Steven David, Brown, Kenneth N., SFI.
Subjects/Keywords: Inventory control; Dynamic programming; Stochastic programming; Stochastic lot sizing
APA (6th Edition):
Visentin, A. (2020). Computing policy parameters for stochastic inventory control using stochastic dynamic programming approaches. (Thesis). University College Cork. Retrieved from http://hdl.handle.net/10468/10601
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Georgia Tech
14.
Exarchos, Ioannis.
Stochastic optimal control - a forward and backward sampling approach.
Degree: PhD, Aerospace Engineering, 2017, Georgia Tech
URL: http://hdl.handle.net/1853/59263
Stochastic optimal control has seen significant recent development, motivated by its success in a plethora of engineering applications, such as autonomous systems, robotics, neuroscience, and financial engineering. Despite the many theoretical and algorithmic advancements that made such success possible, several obstacles remain; most notable are (i) the mitigation of the curse of dimensionality inherent in optimal control problems, (ii) the design of efficient algorithms that allow for fast, online computation, and (iii) the expansion of the class of optimal control problems that can be addressed by algorithms in engineering practice. The aim of this dissertation is the development of a learning stochastic control framework which capitalizes on the innate relationship between certain nonlinear partial differential equations (PDEs) and forward and backward stochastic differential equations (FBSDEs), demonstrated by a nonlinear version of the Feynman-Kac lemma. By means of this lemma, we are able to obtain a probabilistic representation of the solution to the nonlinear Hamilton-Jacobi-Bellman PDE, expressed in the form of a system of decoupled FBSDEs. This system of FBSDEs can then be simulated by employing linear regression techniques. We present a novel discretization scheme for FBSDEs, and enhance the resulting algorithm with importance sampling, thereby constructing an iterative scheme that is capable of learning the optimal control without an initial guess, even in systems with highly nonlinear, underactuated dynamics. The framework we develop within this dissertation addresses several classes of stochastic optimal control, such as L2, L1, and risk-sensitive control, as well as some classes of differential games, in both fixed-final-time and first-exit settings.
Advisors/Committee Members: Tsiotras, Panagiotis (advisor), Theodorou, Evangelos A. (advisor), Haddad, Wassim M. (committee member), Zhou, Haomin (committee member), Popescu, Ionel (committee member).
Subjects/Keywords: Stochastic optimal control; Forward and backward stochastic differential equations
APA (6th Edition):
Exarchos, I. (2017). Stochastic optimal control - a forward and backward sampling approach. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/59263

University of Manchester
15.
Zhou, Yuyang.
Performance Improvement for Stochastic Systems Using State Estimation.
Degree: 2018, University of Manchester
URL: http://www.manchester.ac.uk/escholar/uk-ac-man-scw:314396
Recent developments in the practical control field have heightened the need for performance enhancement. The designed controller should not only guarantee that the variables follow their set-point values, but also ought to focus on the performance of the system, such as quality and efficiency. Hence, given that unavoidable noise is widespread in industrial processes, the randomness of the tracking errors can be considered a critical performance measure to improve further. In addition, because some controllers for industrial processes cannot be changed once their parameters are designed, it is crucial to design a control algorithm that minimises the randomness of the tracking error without changing the existing closed-loop control. In order to achieve the above objectives, a class of novel algorithms is proposed in this thesis for different types of systems with unmeasurable states. Without changing the existing closed-loop proportional-integral (PI) controller, a compensative controller is added to reduce the randomness of the tracking error. That means the PI controller can always guarantee the basic tracking property, while the designed compensative signal can be removed at any time without affecting normal operation. Instead of using only the output information, as the PI controller does, the compensative controller is designed to minimise the randomness of the tracking error using estimated state information. Since most system states are unmeasurable, proper filters are employed to estimate the system states. Based on stochastic system control theory, the criteria used to characterise system randomness vary across different systems; therefore a brief review of the basic concepts of stochastic system control is contained in this thesis. More specifically, the thesis covers overshoot minimisation for linear deterministic systems, minimum variance control for linear Gaussian stochastic systems, and minimum entropy control for non-linear and non-Gaussian stochastic systems. Furthermore, the stability analysis of each system is discussed in the mean-square sense. To illustrate the effectiveness of the presented control methods, simulation results are given. Finally, the work of this thesis is summarised and future work addressing the limitations of the proposed algorithms is listed.
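The minimum-variance idea mentioned in the abstract can be shown on the simplest possible plant. For y_{k+1} = a·y_k + b·u_k + e_{k+1} with white noise e, the feedback u_k = −(a/b)·y_k cancels the predictable part of the output, leaving Var(y) = σ² instead of the open-loop σ²/(1 − a²). A minimal sketch (the scalar model and all parameter values are illustrative, not the thesis's multi-loop PI-plus-compensator setup):

```python
import numpy as np

# Minimum-variance regulation of y_{k+1} = a*y_k + b*u_k + e_{k+1}.
# u_k = -(a/b)*y_k cancels the predictable part, so y_{k+1} = e_{k+1}
# and Var(y) = sigma^2; open loop gives sigma^2 / (1 - a^2).
rng = np.random.default_rng(2)
a, b, sigma, n = 0.9, 1.0, 1.0, 100_000

y_open, y_mv = 0.0, 0.0
open_hist, mv_hist = [], []
for _ in range(n):
    e = rng.normal(0.0, sigma)
    y_open = a * y_open + e                  # uncontrolled plant
    u = -(a / b) * y_mv                      # minimum-variance feedback law
    y_mv = a * y_mv + b * u + e              # reduces to y_mv = e
    open_hist.append(y_open)
    mv_hist.append(y_mv)

var_open, var_mv = np.var(open_hist), np.var(mv_hist)
```

With a = 0.9 the open-loop variance is about 5.3σ², so the gap the controller closes is substantial; minimum entropy control generalizes this objective to non-Gaussian noise, where variance no longer summarizes randomness.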
Advisors/Committee Members: GRAY, JOHN J, Wang, Hong, Gray, John.
Subjects/Keywords: stochastic system control; performance enhancement; minimum entropy control; pdf control
APA (6th Edition):
Zhou, Y. (2018). Performance Improvement for Stochastic Systems Using
State Estimation. (Doctoral Dissertation). University of Manchester. Retrieved from http://www.manchester.ac.uk/escholar/uk-ac-man-scw:314396

Georgia Tech
16.
Okamoto, Kazuhide.
Optimal covariance steering: Theory and its application to autonomous driving.
Degree: PhD, Aerospace Engineering, 2019, Georgia Tech
URL: http://hdl.handle.net/1853/62260
Optimal control under uncertainty has been one of the central research topics in the control community for decades. While a number of theories have been developed to control a single state from an initial state to a target state, in some situations it is preferable to simultaneously compute control commands for multiple states that start from an initial distribution and converge to a target distribution. This dissertation aims to develop a stochastic optimal control theory that, in addition to the mean, explicitly steers the state covariance. Specifically, we focus on the control of linear time-varying (LTV) systems with additive Gaussian noise. The task is to steer a Gaussian-distributed initial system state distribution to a target Gaussian distribution, while minimizing a state- and control-expectation-dependent quadratic cost under probabilistic state constraints. Notice that, in such systems, the system state remains Gaussian distributed. Because Gaussian distributions can be fully described by the first two moments, the proposed optimal covariance steering (OCS) theory allows us to control the whole distribution of the state and quantify the effect of uncertainty without conducting Monte Carlo simulations. We propose to use a control policy that is an affine function of filtered disturbances, which utilizes the results of convex optimization theory and efficiently finds the solution. After the OCS theory for LTV systems is introduced, we extend the theory to vehicle path-planning problems. While several path-planning algorithms have been proposed, many of them have dealt with deterministic dynamics or stochastic dynamics with open-loop uncertainty, i.e., the uncertainty of the system state is not controlled and typically increases with time due to exogenous disturbances, which may lead to the design of potentially conservative nominal paths. A typical approach to deal with disturbances is to use a lower-level local feedback controller after the nominal path is computed. This unidirectional dependence of the feedback controller on the path planner makes the nominal path unnecessarily conservative. The path-planning approach we develop based on the OCS theory computes the nominal path based on the closed-loop evolution of the system uncertainty by simultaneously optimizing the feedforward and feedback control commands. We validate the performance using numerical simulations with single- and multiple-vehicle path-planning problems. Furthermore, we introduce an optimal covariance steering controller for linear systems with hard input constraints. As many real-world systems have input constraints (e.g., aircraft and spacecraft have minimum/maximum thrust), this problem formulation allows us to deal with realistic scenarios. In order to incorporate hard input constraints in the OCS theory framework, we use element-wise saturation functions and limit the effect of disturbances on the control commands. We prove that this problem formulation leads to a convex programming problem and demonstrate the…
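The core mechanism of covariance steering is visible in a one-step scalar example: with x₁ = a·x₀ + b·u + w and linear feedback u = k·x₀, the next-step variance is (a + bk)²·P₀ + W, so a target variance P_target ≥ W can be hit exactly by choosing k. A minimal sketch (scalar, one step, made-up numbers; the dissertation treats the full LTV, multi-step, constrained problem):

```python
import math

# One-step scalar covariance steering: x1 = a*x0 + b*u + w, with
# x0 ~ N(0, P0), w ~ N(0, W), and feedback u = k*x0. Then
#   Var(x1) = (a + b*k)^2 * P0 + W,
# so k = (sqrt((P_target - W)/P0) - a)/b achieves Var(x1) = P_target,
# feasible when P_target >= W (feedback cannot remove the fresh noise).
a, b = 1.2, 0.5
P0, W, P_target = 4.0, 0.3, 1.0

k = (math.sqrt((P_target - W) / P0) - a) / b
P1 = (a + b * k) ** 2 * P0 + W               # resulting one-step covariance
```

In the matrix-valued, multi-step case the same matching condition becomes a semidefinite constraint on the feedback gains, which is why the affine-disturbance-feedback parameterization mentioned above yields a convex program.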
Advisors/Committee Members: Tsiotras, Panagiotis (advisor), Clarke, Jahn-Paul (committee member), Chernova, Sonia (committee member), Rogers, Jonathan (committee member), Chen, Yongxin (committee member).
Subjects/Keywords: Stochastic control; Optimal control; Model predictive control; Vehicle path planning
APA (6th Edition):
Okamoto, K. (2019). Optimal covariance steering: Theory and its application to autonomous driving. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/62260

University of Manchester
17.
Zhou, Yuyang.
Performance improvement for stochastic systems using state estimation.
Degree: PhD, 2018, University of Manchester
URL: https://www.research.manchester.ac.uk/portal/en/theses/performance-improvement-for-stochastic-systems-using-state-estimation(ab663282-47dc-450a-9e00-135aacb33e25).html
;
https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.748057
▼ Recent developments in the field of practical control have heightened the need for performance enhancement. The designed controller should not only ensure that the variables follow their set-point values, but should also address system performance measures such as quality and efficiency. Since noise is unavoidable and pervasive in industrial processes, the randomness of the tracking errors is a critical aspect of performance to improve. In addition, because the parameters of some industrial controllers cannot be changed once designed, it is crucial to design a control algorithm that minimises the randomness of the tracking error without changing the existing closed-loop control. To achieve these objectives, a class of novel algorithms is proposed in this thesis for different types of systems with unmeasurable states. Without changing the existing closed-loop proportional-integral (PI) controller, an additional compensative controller is introduced to reduce the randomness of the tracking error. The PI controller thus always guarantees the basic tracking property, while the designed compensative signal can be removed at any time without affecting normal operation. Rather than using only the output information, as the PI controller does, the compensative controller is designed to minimise the randomness of the tracking error using estimated state information. Since most system states are unmeasurable, appropriate filters are employed to estimate them. Based on stochastic system control theory, different criteria are used to characterise the randomness of different systems, so a brief review of the basic concepts of stochastic system control is included in this thesis.
More specifically, these are overshoot minimisation for linear deterministic systems, minimum variance control for linear Gaussian stochastic systems, and minimum entropy control for nonlinear, non-Gaussian stochastic systems. Furthermore, the stability of each system is analysed in the mean-square sense. Simulation results are given to illustrate the effectiveness of the presented control methods. Finally, the work of this thesis is summarised and future work addressing the limitations of the proposed algorithms is outlined.
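The abstract's progression from minimum variance to minimum entropy control can be motivated with a small sketch (not from the thesis; all numbers below are illustrative): for a Gaussian tracking error, differential entropy is a monotone function of the variance, so minimum-entropy control subsumes minimum-variance control in the Gaussian case and extends the idea to non-Gaussian errors.

```python
import math

def gaussian_entropy(sigma):
    """Differential entropy 0.5*ln(2*pi*e*sigma^2) of a Gaussian tracking error."""
    return 0.5 * math.log(2 * math.pi * math.e * sigma ** 2)

def sample_entropy_estimate(errors):
    """Entropy of tracking errors under a Gaussian assumption,
    estimated from the sample standard deviation."""
    n = len(errors)
    mean = sum(errors) / n
    var = sum((e - mean) ** 2 for e in errors) / (n - 1)
    return gaussian_entropy(math.sqrt(var))

# Halving the spread of the tracking error lowers its entropy by ln 2,
# so reducing randomness shows up directly in this criterion.
```

The Gaussian assumption is the simplification here; the thesis's minimum entropy control targets the general non-Gaussian case, where entropy and variance are no longer interchangeable.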
Subjects/Keywords: 629.8; pdf control; minimum entropy control; stochastic system control; performance enhancement
APA (6th Edition):
Zhou, Y. (2018). Performance improvement for stochastic systems using state estimation. (Doctoral Dissertation). University of Manchester. Retrieved from https://www.research.manchester.ac.uk/portal/en/theses/performance-improvement-for-stochastic-systems-using-state-estimation(ab663282-47dc-450a-9e00-135aacb33e25).html ; https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.748057

University of Illinois – Urbana-Champaign
18.
Li, Dapeng.
Closed-loop analysis and feedback design in the presence of limited information.
Degree: PhD, 0133, 2011, University of Illinois – Urbana-Champaign
URL: http://hdl.handle.net/2142/24370
► Recent progress in communication technologies and their use in feedback control systems motivate to look deeper into the interplay of control and communication in the…
(more)
▼ Recent progress in communication technologies and their use in feedback control systems motivates a deeper look into the interplay of control and communication in the closed-loop feedback architecture. Among several research directions on this topic, a great deal of attention has been given to fundamental limitations in the presence of communication constraints. Entropy-rate inequalities corresponding to the information flux in a typical causal closed loop have been derived towards obtaining a Bode-like integral formula.
This work extends the discrete-time result to continuous-time systems. The main challenge in this extension is that Kolmogorov's entropy-rate equality, which is fundamental to the derivation of the result in the discrete-time case, does not hold for continuous-time systems. Mutual information rate, rather than entropy rate, is used to represent the information flow in the closed loop, and a limiting relationship due to Pinsker, which recovers the mutual information rate between two continuous-time processes from their discretized sequences, is used to derive the Bode-like formula. The results are further extended to switched systems, and a Bode integral formula is obtained under the assumption that the switching sequence is an ergodic Markov chain. To simplify calculation of the resulting lower bound, Lie-algebraic conditions are developed.
Besides these analysis results, this dissertation also includes joint control/communication design for closed-loop stability and performance. We consider the stabilization problem within the Linear Quadratic Regulator framework, where a control gain is chosen to minimize a linear quadratic cost functional subject to the input power constraint imposed by an additive Gaussian channel that closes the loop. Also for the Gaussian channel, the channel-noise attenuation problem is addressed using H-infinity/H2 methodology, and a similar feedback optimal estimation problem is solved using Kalman filtering theory.
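The Bode-like integral mentioned in this abstract has a classical discrete-time form that is easy to check numerically. A sketch under assumed plant numbers (the first-order loop below is illustrative, not from the dissertation): for open loop L(z) = k/(z - a) with an unstable pole a, the average of ln|S| around the unit circle equals ln|a|, independent of the stabilizing gain.

```python
import cmath
import math

def bode_sensitivity_average(a=1.5, k=1.0, n=8192):
    """Average of ln|S(e^{jw})| over the unit circle for L(z) = k/(z - a),
    so S(z) = (z - a)/(z - a + k).  Closed-loop pole: a - k (here 0.5, stable)."""
    total = 0.0
    for i in range(n):
        z = cmath.exp(2j * math.pi * i / n)
        total += math.log(abs((z - a) / (z - a + k)))
    return total / n

# Discrete-time Bode sensitivity integral: the average equals the sum of
# ln|p| over unstable open-loop poles, here ln(1.5), for any k that keeps
# |a - k| < 1.
```

The dissertation's contribution is the continuous-time and switched-system generalization of this conservation law via mutual information rates; the snippet only illustrates the baseline discrete-time identity it extends.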
Advisors/Committee Members: Hovakimyan, Naira (advisor), Hovakimyan, Naira (Committee Chair), Kumar, P. R. (committee member), Dullerud, Geir E. (committee member), Mehta, Prashant G. (committee member).
Subjects/Keywords: Control Theory; Information Theory; Robust Control; Sensitivity Function; Stochastic Control
APA (6th Edition):
Li, D. (2011). Closed-loop analysis and feedback design in the presence of limited information. (Doctoral Dissertation). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/24370

Texas A&M University
19.
Fisher, James Robert.
Stability analysis and control of stochastic dynamic systems using polynomial chaos.
Degree: PhD, Aerospace Engineering, 2009, Texas A&M University
URL: http://hdl.handle.net/1969.1/ETD-TAMU-2853
► Recently, there has been a growing interest in analyzing stability and developing controls for stochastic dynamic systems. This interest arises out of a need to…
(more)
▼ Recently, there has been a growing interest in analyzing stability and developing controls for stochastic dynamic systems. This interest arises out of a need to develop robust control strategies for systems with uncertain dynamics. While traditional robust control techniques ensure robustness, they can be conservative because they do not exploit the probability distribution of the uncertainty. To improve controller performance, it is possible to include the probability of each parameter value in the control design. In this manner, risk can be taken for parameter values with low probability, and performance can be improved for those of higher probability. To accomplish this, one must solve the resulting stability and control problems for the associated stochastic system. In general, this is done with sampling-based methods, by creating a grid of parameter values and solving the problem for each associated parameter. This can lead to problems that are difficult to solve and may possess no analytical solution.
The novelty of this dissertation is the use of non-sampling-based methods to solve stochastic stability and optimal control problems. The polynomial chaos expansion can approximate, with arbitrary accuracy, the evolution of the uncertainty in state trajectories induced by stochastic system uncertainty. This approximation is used to transform the stochastic dynamic system into a deterministic system that can be analyzed in an analytical framework. In this dissertation, we describe the generalized polynomial chaos expansion and present a framework for transforming stochastic systems into deterministic systems. We present conditions for analyzing the stability of the resulting systems. In addition, a framework for solving L2 optimal control problems is presented. For linear systems, feedback laws for the infinite-horizon L2 optimal control problem are given. A framework for solving finite-horizon optimal control problems with time-correlated stochastic forcing is also presented, and the stochastic receding horizon control problem is solved using the new deterministic framework. Results are presented that demonstrate the links between stability of the original stochastic system and that of the approximate system determined from the polynomial chaos approximation. The solutions of these stochastic stability and control problems are illustrated throughout with examples.
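The non-sampling idea can be previewed on a toy system (all values below are illustrative assumptions, not from the dissertation): for dx/dt = -a x with an uncertain rate a = ā + σξ, ξ ~ N(0,1), a non-intrusive polynomial-chaos evaluation using Gauss-Hermite quadrature in the stochastic dimension reproduces statistics of x(t) without Monte Carlo sampling, e.g. the mean E[x(t)] = x0 exp(-ā t + σ²t²/2).

```python
import numpy as np

def pce_mean(x0=2.0, abar=1.0, sigma=0.3, t=1.0, order=30):
    """Mean of x(t) for dx/dt = -(abar + sigma*xi)*x, xi ~ N(0,1),
    via probabilists' Gauss-Hermite quadrature over xi
    (a non-intrusive polynomial-chaos evaluation)."""
    # Nodes/weights for the weight exp(-xi^2/2); weights sum to sqrt(2*pi).
    nodes, weights = np.polynomial.hermite_e.hermegauss(order)
    weights = weights / np.sqrt(2 * np.pi)              # normalize to a pdf
    samples = x0 * np.exp(-(abar + sigma * nodes) * t)  # exact per-node solution
    return float(weights @ samples)

def exact_mean(x0=2.0, abar=1.0, sigma=0.3, t=1.0):
    """Closed form E[x(t)] = x0 * exp(-abar*t + 0.5*sigma^2*t^2)."""
    return float(x0 * np.exp(-abar * t + 0.5 * sigma ** 2 * t ** 2))
```

The dissertation's Galerkin-projected (intrusive) expansion goes further, producing a single deterministic ODE system for the chaos coefficients; the quadrature version above is just the simplest way to see the spectral convergence in the random dimension.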
Advisors/Committee Members: Bhattacharya, Raktim (advisor), Chakravorty, Suman (committee member), Hurtado, John E. (committee member), Junkins, John L. (committee member), Swaroop, D.V.A.H.G. (committee member), Vadali, Srinivas R. (committee member).
Subjects/Keywords: Control; Stochastic Systems
APA (6th Edition):
Fisher, J. R. (2009). Stability analysis and control of stochastic dynamic systems using polynomial chaos. (Doctoral Dissertation). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/ETD-TAMU-2853

University of Edinburgh
20.
Alrasheedi, Adel Fahad.
Stochastic joint replenishment problems : periodic review policies.
Degree: PhD, 2015, University of Edinburgh
URL: http://hdl.handle.net/1842/10480
► Operations Managers of manufacturing systems, distribution systems, and supply chains address lot sizing and scheduling problems as part of their duties. These problems are concerned…
(more)
▼ Operations managers of manufacturing systems, distribution systems, and supply chains address lot sizing and scheduling problems as part of their duties. These problems concern decisions about the size of orders and their schedule. In general, products share or compete for common resources and thus require coordination of their replenishment decisions, whether or not replenishment involves manufacturing operations. This research is concerned with joint replenishment problems (JRPs), which are part of multi-item lot sizing and scheduling problems in manufacturing and distribution systems in single-echelon/stage systems. The principal purpose of this research is to develop three new periodic review policies for the stochastic joint replenishment problem. It also highlights the lack of research on joint replenishment problems with different demand classes (DSJRP); accordingly, a periodic review policy is developed for an inventory system facing two demand classes, one deterministic and one stochastic. Heuristic algorithms have been developed to obtain (near-)optimal parameters for the three policies, as well as for the DSJRP. Numerical tests against literature benchmarks are presented.
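A minimal sketch of the kind of periodic review policy the abstract describes (cost parameters and the demand model are illustrative assumptions, not the thesis's policies): every R periods, each item's inventory is raised to an order-up-to level S_i, and a major ordering cost is shared across all items ordered together, which is the coordination benefit of joint replenishment.

```python
import random

def simulate_rs_policy(R=4, S=(20, 25), horizon=400,
                       major_K=100.0, minor_k=10.0, hold=1.0, back=5.0,
                       seed=0):
    """Average cost per period of a joint periodic review (R, S) policy
    for two items with backordering and random integer demand."""
    rng = random.Random(seed)
    inv = list(S)                      # start at the order-up-to levels
    total = 0.0
    for t in range(horizon):
        if t % R == 0 and any(inv[i] < S[i] for i in range(len(S))):
            total += major_K           # one shared setup for the joint order
            for i in range(len(S)):
                if inv[i] < S[i]:
                    total += minor_k   # minor setup per item ordered
                    inv[i] = S[i]      # instantaneous order-up-to
        for i in range(len(S)):
            inv[i] -= rng.randint(0, 4)                    # demand this period
            total += hold * max(inv[i], 0) + back * max(-inv[i], 0)
    return total / horizon
```

Choosing R and the S_i to minimize this average cost is what the thesis's heuristic algorithms do, under stochastic demand rather than the toy uniform demand used here.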
Subjects/Keywords: 519.7; inventory control; stochastic demand; periodic review
APA (6th Edition):
Alrasheedi, A. F. (2015). Stochastic joint replenishment problems : periodic review policies. (Doctoral Dissertation). University of Edinburgh. Retrieved from http://hdl.handle.net/1842/10480

Hong Kong University of Science and Technology
21.
Yu, Peiwen.
Three essays on stochastic dynamic inventory management.
Degree: 2014, Hong Kong University of Science and Technology
URL: http://repository.ust.hk/ir/Record/1783.1-86167
;
https://doi.org/10.14711/thesis-b1288887
;
http://repository.ust.hk/ir/bitstream/1783.1-86167/1/th_redirect.html
► My dissertation focuses on stochastic dynamic inventory theory and its applications in inventory management, especially in perishable inventory systems. Essay 1: We apply the concept…
(more)
▼ My dissertation focuses on stochastic dynamic inventory theory and its applications in inventory management, especially in perishable inventory systems. Essay 1: We apply the concept of multimodularity to three stochastic dynamic inventory problems in which state and decision variables are economic substitutes. The first is clearance sales of perishable goods. The second is sourcing from multiple suppliers with different lead times. The third is transshipment under capacity constraints. In all three problems, we establish monotone optimal policies with bounded sensitivity. Multimodularity proves to be an effective tool for these problems because it implies substitutability, it is preserved under minimization, and it leads directly to monotone optimal policies with bounded sensitivity. Essay 2: We study joint replenishment and clearance sales of perishable goods under a general finite lifetime and a last-in-first-out (LIFO) issuing rule, a problem common in retailing. We show that the optimal policies can be characterized by two thresholds for each age group of inventory: a lower one and a higher one. For an age group of inventory with a remaining lifetime of two periods or longer, clearance sales may take place when its inventory level is above its higher threshold. There is no clearance sale if its inventory level is below its lower threshold and the inventory levels in all the fresher age groups are also below their corresponding lower thresholds. The optimal policy for the age group of inventory with a one-period remaining lifetime is different: clearance sales may occur if its inventory level is above its higher threshold or below its lower threshold. The phenomenon that a clearance sale happens when inventory is low is driven by the need to segregate the newest inventory from the oldest and is unique to the LIFO issuing rule. The optimal policy requires a full inventory record of every age group, and its computation is challenging.
We consider two myopic heuristics that require only partial information. The first requires only information about the total inventory; the second additionally requires information about the inventory with a one-period remaining lifetime. Our numerical studies show that the second significantly outperforms the first, and its performance is consistently very close to that of the optimal policy. Essay 3: Retailers of perishable goods often face a choice between more expensive packaging that can extend the shelf life of their products and less expensive packaging that cannot. Different choices lead to different sales, costs, and waste, and require different inventory control policies. In this essay, we study the coordination of inventory and packaging decisions in a retailing environment. Items in an active package have a longer lifetime than those in a regular package and cost more. We consider two types of customers: the selective customers only buy items with a sufficiently long remaining…
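A sketch of the LIFO bookkeeping underlying Essay 2 (the constant ordering rule and uniform demand are placeholder assumptions, not the optimal policy): inventory is tracked by remaining lifetime, demand is served from the freshest units first, and units reaching a remaining lifetime of zero are outdated. Under LIFO, fresh arrivals shield older stock from demand, which is why old inventory accumulates and clearance sales become attractive.

```python
import random

def simulate_lifo(lifetime=3, order_qty=5, periods=200, seed=1):
    """Track perishable inventory by remaining lifetime under LIFO issuing.
    Returns (sold, outdated, on_hand); by conservation,
    total ordered = sold + outdated + on_hand."""
    rng = random.Random(seed)
    age = [0] * lifetime        # age[r] = units with remaining lifetime r + 1
    sold = outdated = 0
    for _ in range(periods):
        age[lifetime - 1] += order_qty          # fresh arrivals
        demand = rng.randint(0, 7)
        for r in range(lifetime - 1, -1, -1):   # LIFO: freshest first
            take = min(age[r], demand)
            age[r] -= take
            demand -= take
            sold += take
        outdated += age[0]                      # lifetime expired
        age = age[1:] + [0]                     # everything ages one period
    return sold, outdated, sum(age)
```

The thesis's policies would add age-dependent clearance-sale and replenishment decisions on top of this state; the snippet only fixes the state representation they operate on.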
Subjects/Keywords: Inventory control; Mathematical models; Stochastic processes
APA (6th Edition):
Yu, P. (2014). Three essays on stochastic dynamic inventory management. (Thesis). Hong Kong University of Science and Technology. Retrieved from http://repository.ust.hk/ir/Record/1783.1-86167 ; https://doi.org/10.14711/thesis-b1288887 ; http://repository.ust.hk/ir/bitstream/1783.1-86167/1/th_redirect.html
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

University of New Mexico
22.
Lesser, Kendra Anne.
Computational Techniques for Stochastic Reachability.
Degree: Electrical and Computer Engineering, 2015, University of New Mexico
URL: http://hdl.handle.net/1928/25795
► As automated control systems grow in prevalence and complexity, there is an increasing demand for verification and controller synthesis methods to ensure these systems perform…
(more)
▼ As automated control systems grow in prevalence and complexity, there is an increasing demand for verification and controller synthesis methods to ensure that these systems perform safely and to desired specifications. In addition, uncertain or stochastic behaviors are often exhibited (such as wind affecting the motion of an aircraft), making probabilistic verification desirable. Stochastic reachability analysis provides a formal means of generating the set of initial states that meets a given objective (such as safety or reachability) with a desired level of probability, known as the reachable (or safe) set, depending on the objective. However, the applicability of reachability analysis is limited in the scope and size of systems it can address. First, generating stochastic reachable or viable sets is computationally intensive: most existing methods rely on an optimal control formulation that requires solving a dynamic program, which scales exponentially in the dimension of the state space. Second, almost no results exist for extending stochastic reachability analysis to systems with incomplete information, in which the controller does not have access to the full state of the system. This thesis addresses both of these limitations and introduces novel computational methods for generating stochastic reachable sets for both perfectly and partially observable systems. We initially consider a linear system with additive Gaussian noise and introduce two methods for computing stochastic reachable sets that do not require dynamic programming. The first method uses a particle approximation to formulate a deterministic mixed integer linear program that produces an estimate of the reachability probabilities. The second method uses a convex chance-constrained optimization problem to generate an under-approximation of the reachable set. Using these methods, we are able to generate stochastic reachable sets for a four-dimensional spacecraft docking example in far less time than a dynamic program would require. We then focus on discrete-time stochastic hybrid systems, which provide a flexible modeling framework for systems that exhibit mode-dependent behavior and whose state space has both discrete and continuous components. We incorporate a stochastic observation process into the hybrid system model and derive both theoretical and computational results for generating stochastic reachable sets subject to an observation process. The derivation of an information state allows us to recast the problem as one of perfect information, and we prove that solving a dynamic program over the information state is equivalent to solving the original problem. We then demonstrate that the dynamic program for the reachability problem of a partially observable stochastic hybrid system shares the same properties as a partially observable Markov decision process (POMDP) with an additive cost function, so we can exploit approximation strategies designed for POMDPs to solve the reachability problem. To do…
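The particle approximation behind the first method can be previewed with plain Monte Carlo (the scalar system and safe set below are toy assumptions, not the thesis's spacecraft example): estimate the probability that a linear Gaussian system stays inside a safe set over a finite horizon by counting how many sampled trajectories never leave it.

```python
import random

def safety_probability(x0=0.0, a=0.8, sigma=0.1, bound=1.0,
                       horizon=10, particles=5000, seed=0):
    """Particle (Monte Carlo) estimate of P(|x_k| <= bound for k = 1..horizon)
    for x_{k+1} = a*x_k + w_k, w_k ~ N(0, sigma^2), starting at x0."""
    rng = random.Random(seed)
    safe = 0
    for _ in range(particles):
        x, ok = x0, True
        for _ in range(horizon):
            x = a * x + rng.gauss(0.0, sigma)
            if abs(x) > bound:
                ok = False
                break
        safe += ok
    return safe / particles

# With sigma = 0, the contraction a = 0.8 keeps every trajectory inside
# the set, so the estimate is exactly 1.0.
```

The thesis goes beyond this open-loop estimate by embedding such particle counts in a mixed integer linear program to optimize the control input, and by handling partial observability via an information state.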
Advisors/Committee Members: Oishi, Meeko, Fierro, Rafael, Hayat, Majeed, Tapia, Lydia, Erwin, Richard Scott.
Subjects/Keywords: Control Theory; Reachability; Stochastic Hybrid Systems
APA (6th Edition):
Lesser, K. A. (2015). Computational Techniques for Stochastic Reachability. (Doctoral Dissertation). University of New Mexico. Retrieved from http://hdl.handle.net/1928/25795

University of Southern California
23.
Theodorou, Evangelos A.
Iterative path integral stochastic optimal control: theory
and applications to motor control.
Degree: PhD, Computer Science, 2011, University of Southern California
URL: http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/468575/rec/3680
► Motivated by the limitations of current optimal control and reinforcement learning methods in terms of their efficiency and scalability, this thesis proposes an iterative stochastic…
(more)
▼ Motivated by the limitations of current optimal control and reinforcement learning methods in terms of their efficiency and scalability, this thesis proposes an iterative stochastic optimal control approach based on the generalized path integral formalism. More precisely, we suggest the use of the framework of stochastic optimal control with path integrals to derive a novel approach to RL with parameterized policies. While solidly grounded in value function estimation and optimal control based on the stochastic Hamilton-Jacobi-Bellman (HJB) equation, policy improvements can be transformed into an approximation problem of a path integral which has no open algorithmic parameters other than the exploration noise. The resulting algorithm can be conceived of as model-based, semi-model-based, or even model-free, depending on how the learning problem is structured. The new algorithm, Policy Improvement with Path Integrals (PI2), demonstrates interesting similarities with previous RL research in the framework of probability matching and provides intuition as to why the slightly heuristically motivated probability-matching approach can actually perform well. Applications to high-dimensional robotic systems are presented for a variety of tasks that require optimal planning and gain scheduling.
In addition to the work on generalized path integral stochastic optimal control, this thesis extends model-based iterative optimal control algorithms to the stochastic setting. More precisely, we derive the Differential Dynamic Programming algorithm for stochastic systems with state- and control-multiplicative noise. Finally, in the last part of this thesis, model-based iterative optimal control methods are applied to biomechanical models of the index finger, with the goal of finding the underlying tendon forces applied during tapping and flexing movements.
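The probability-weighted averaging at the heart of PI2 can be sketched on a one-dimensional parameter (the cost function, temperature, and noise scale below are illustrative assumptions, not the thesis's robotic tasks): sample exploration noise, weight each rollout by exp(-cost/λ), and update the parameter with the softmax-weighted noise, so low-cost rollouts dominate the update.

```python
import math
import random

def pi2_minimize(cost, theta=0.0, noise_std=0.5, lam=1.0,
                 rollouts=16, iterations=60, seed=0):
    """PI2-style parameter update for a scalar parameter:
    the update direction is the exploration noise averaged under
    softmax weights exp(-cost / lam)."""
    rng = random.Random(seed)
    for _ in range(iterations):
        eps = [rng.gauss(0.0, noise_std) for _ in range(rollouts)]
        costs = [cost(theta + e) for e in eps]
        base = min(costs)                       # shift for numerical stability
        w = [math.exp(-(c - base) / lam) for c in costs]
        z = sum(w)
        theta += sum(wi * ei for wi, ei in zip(w, eps)) / z
    return theta

# Example: steer the parameter toward the minimum of (theta - 3)^2.
```

Note the update uses only rollout costs, never gradients, which is what lets the full algorithm be model-based or model-free depending on how rollouts are generated.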
Advisors/Committee Members: Schaal, Stefan (Committee Chair), Valero-Cuevas, Francisco (Committee Member), Sukhatme, Gaurav S. (Committee Member), Todorov, Emo (Committee Member), Schweighofer, Nicolas (Committee Member).
Subjects/Keywords: stochastic optimal control; reinforcement learning,; robotics
APA (6th Edition):
Theodorou, E. A. (2011). Iterative path integral stochastic optimal control: theory
and applications to motor control. (Doctoral Dissertation). University of Southern California. Retrieved from http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll127/id/468575/rec/3680

Georgia Tech
24.
Ye, Fan.
Information relaxation in stochastic optimal control.
Degree: PhD, Industrial and Systems Engineering, 2015, Georgia Tech
URL: http://hdl.handle.net/1853/55511
► Dynamic programming is a principal method for analyzing stochastic optimal control problems. However, the exact computation of dynamic programming can be intractable in large-scale problems…
(more)
▼ Dynamic programming is a principal method for analyzing stochastic optimal control problems. However, exact dynamic programming can be intractable in large-scale problems due to the "curse of dimensionality". Various approximate dynamic programming methods have been proposed to address this issue; they can often generate suboptimal policies, though it is generally difficult to tell how far these suboptimal policies are from optimal. To this end, this thesis studies stochastic control problems from a duality perspective and generates upper bounds on maximal rewards, complementing the lower bounds on maximal rewards that can be derived by simulation under heuristic policies. If the gap between the lower and upper bounds is small, the heuristic policy must be close to optimal. The approach of "information relaxation" considered in this thesis, proposed by Brown et al. (2013), relaxes the non-anticipativity constraint that requires decisions to depend only on the information available to the decision maker, and imposes a penalty that punishes such violations. This thesis further explores the theory of information relaxation and computational methods in several stochastic optimal control problems. We first study the interaction of Lagrangian relaxation and information relaxation in weakly coupled dynamic programs. A commonly studied approach builds on the property that this high-dimensional problem can be decoupled by dualizing the resource constraints via Lagrangian relaxation. We generalize the information relaxation approach to improve upon the Lagrangian bound and develop a computational method to tackle large-scale problems. Second, we formulate the information relaxation-based duality in an important class of continuous-time decision-making models, controlled Markov diffusions. We find that this continuous-time model admits an optimal penalty with a compact expression, an Ito stochastic integral, which enables us to construct approximate penalties in simple forms, achieve tight dual bounds, and facilitate the computation of dual bounds significantly compared with the discrete-time model. Finally, we consider the problem of optimal stopping of discrete-time continuous-state partially observable Markov processes. We develop a filtering-based dual approach, which relies on the martingale duality formulation of the optimal stopping problem and the particle filtering technique.
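The duality can be illustrated on a toy optimal stopping problem (the random-walk reward and threshold rule are assumptions for illustration): with zero penalty, the perfect-information relaxation lets the decision maker see the whole path, so the relaxed value is the pathwise maximum reward, and its average upper-bounds the value of any non-anticipative stopping rule, such as a simple threshold heuristic evaluated on the same sample paths.

```python
import random

def stopping_bounds(paths=2000, horizon=20, threshold=1.5, seed=0):
    """Lower bound: stop the random walk at the first time it exceeds
    `threshold` (a non-anticipative heuristic).  Upper bound: pathwise
    maximum (perfect-information relaxation with zero penalty).  The same
    sample paths are used for both, so lower <= upper holds path by path."""
    rng = random.Random(seed)
    lo = hi = 0.0
    for _ in range(paths):
        x, path = 0.0, []
        for _ in range(horizon):
            x += rng.gauss(0.0, 1.0)
            path.append(x)
        stopped = next((v for v in path if v >= threshold), path[-1])
        lo += stopped
        hi += max(path)
    return lo / paths, hi / paths
```

The zero-penalty bound is typically loose; the thesis's contribution is constructing good penalties (e.g. via an Ito stochastic integral in continuous time, or via filtering for partial observability) that shrink the gap between these two bounds.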
Advisors/Committee Members: Zhou, Enlu (advisor), Ahmed, Shabbir (committee member), Shapiro, Alexander (committee member), White, Chelsea (committee member), Zhang, Fumin (committee member).
Subjects/Keywords: Dynamic programming; Stochastic control; Information relaxation; Duality
Ye, F. (2015). Information relaxation in stochastic optimal control. (Doctoral dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/55511

University of Arizona
25.
Lei, Henry.
Strategies for Two Alternative Forced Choice Navigation Tasks.
Degree: 2020, University of Arizona
URL: http://hdl.handle.net/10150/648650
In the engineering community, there has been a growing interest in emulating robust continuous decision mechanisms, often found implicitly in biological systems, in engineered autonomous systems. The Two Alternative Forced Choice navigation problem, where an agent is required to decide between two possible navigation tasks based on a noisy signal, is a natural model problem for studying such mechanisms. In this thesis, we aim to generate and systematically study a variety of decision-making strategies in terms of expected time, distance, and error rates. We consider five in particular: the first four are preliminary, based on various heuristics, while the fifth follows a model predictive control framework with a novel adaptive cost function that penalizes the control magnitude and deviation from a dynamic "artificial" goal. The strategies are studied using a variety of computational and analytic methods; for the model predictive control strategy in particular, closed-form results for the expected trajectory in both deterministic and stochastic environments are presented.
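Two Alternative Forced Choice decisions are commonly modeled with a drift-diffusion process, in which noisy evidence accumulates until it crosses a decision threshold. A minimal sketch (the drift, noise, and threshold values are illustrative assumptions, not taken from the thesis):

```python
import random

def ddm_trial(drift=0.3, noise=1.0, threshold=2.0, dt=0.01, rng=random):
    """One drift-diffusion trial for a two-alternative forced choice.

    Evidence x accumulates with drift toward the correct (upper)
    boundary plus Gaussian noise; the trial ends when either boundary
    is crossed. Returns (correct_choice, decision_time)."""
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return x >= threshold, t

def error_rate(n=2000, seed=1):
    rng = random.Random(seed)
    return sum(not ddm_trial(rng=rng)[0] for _ in range(n)) / n

# Raising the threshold lowers the error rate at the cost of longer
# decision times -- the speed/accuracy trade-off such strategies face.
```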
Advisors/Committee Members: Reverdy, Paul B (advisor), Thanga, Jekan (committee member), Butcher, Eric (committee member).
Subjects/Keywords: Model Predictive Control; Signal Processing; Stochastic Systems

University of Michigan
26.
Berning Jr, Andrew.
Control and Optimization for Aerospace Systems with Stochastic Disturbances, Uncertainties, and Constraints.
Degree: PhD, Aerospace Engineering, 2020, University of Michigan
URL: http://hdl.handle.net/2027.42/162992
The topic of this dissertation is the control and optimization of aerospace systems under the influence of stochastic disturbances and uncertainties, and subject to chance constraints. This problem is motivated by the uncertain operating environments of many aerospace systems, and the ever-present push to extract greater performance from these systems while maintaining safety. Explicitly accounting for the stochastic disturbances and uncertainties in the constrained control design confers the ability to assign the probability of constraint satisfaction depending on the level of risk that is deemed acceptable, and allows for the possibility of theoretical constraint-satisfaction guarantees. Along these lines, this dissertation presents novel contributions addressing four different problems: 1) chance-constrained path planning for small unmanned aerial vehicles in urban environments, 2) chance-constrained spacecraft relative motion planning in low-Earth orbit, 3) stochastic optimization of suborbital launch operations, and 4) nonlinear model predictive control for tracking near rectilinear halo orbits and a proposed stochastic extension. For the first problem, existing dynamic and informed rapidly-exploring random tree algorithms are combined with a novel quadratic programming-based collision detection algorithm to enable computationally efficient, chance-constrained path planning. For the second problem, a previously proposed constrained relative motion approach based on chained positively invariant sets is extended to the case where the spacecraft dynamics are controlled using output feedback on noisy measurements and are subject to stochastic disturbances. Connectivity between nodes is determined through the use of chance-constrained admissible sets, guaranteeing that constraints are met with a specified probability. For the third problem, a novel approach to suborbital launch operations is presented. It utilizes linear covariance propagation and stochastic clustering optimization to create an effective software-only method for decreasing the probability of a dangerous landing, with no physical changes to the vehicle and only minimal changes to its flight control software. For the fourth problem, the use of suboptimal nonlinear model predictive control (NMPC) coupled with low-thrust actuators is considered for station-keeping on near rectilinear halo orbits. The nonlinear optimization problems in NMPC are solved with time-distributed sequential quadratic programming techniques utilizing the FBstab algorithm. A stochastic extension for this problem is also proposed. The results are illustrated using detailed numerical simulations.
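A chance constraint of the kind described above requires a constraint to hold with at least a prescribed probability, and a common empirical check is Monte Carlo sampling of the disturbance. A minimal scalar sketch (the dynamics and all numbers are invented for illustration, not from the dissertation):

```python
import random

def satisfaction_prob(x0, u, n_samples=5000, x_max=1.0, seed=0):
    """Estimate P(x_next <= x_max) for the toy scalar system
    x_next = a*x0 + b*u + w,  w ~ N(0, sigma^2).
    A stand-in for the chance constraints above; all parameters
    are illustrative."""
    a, b, sigma = 0.9, 0.5, 0.1
    rng = random.Random(seed)
    hits = sum(a * x0 + b * u + rng.gauss(0.0, sigma) <= x_max
               for _ in range(n_samples))
    return hits / n_samples

def admissible(x0, u, delta=0.05):
    # u satisfies the chance constraint at risk level delta if the
    # estimated satisfaction probability is at least 1 - delta.
    return satisfaction_prob(x0, u) >= 1.0 - delta
```

Choosing the risk level delta is exactly the trade-off the abstract describes: a smaller delta gives stronger safety guarantees but shrinks the set of admissible controls.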
Advisors/Committee Members: Girard, Anouck Renee (committee member), Kolmanovsky, Ilya Vladimir (committee member), Ridley, Aaron James (committee member), Bieniawski, Stefan (committee member).
Subjects/Keywords: Model Predictive Control; Stochastic Control; Stochastic Optimization; Quadratic Programming; Spacecraft Applications; Aerospace Engineering; Engineering

Western Kentucky University
27.
Cheng, Gang.
Analyzing and Solving Non-Linear Stochastic Dynamic Models on Non-Periodic Discrete Time Domains.
Degree: MS, Department of Mathematics, 2013, Western Kentucky University
URL: https://digitalcommons.wku.edu/theses/1236
Stochastic dynamic programming is a recursive method for solving sequential or multistage decision problems. It helps economists and mathematicians construct and solve a huge variety of sequential decision-making problems in stochastic settings. Research on stochastic dynamic programming is important and meaningful because it reflects the behavior of a decision maker without risk aversion, i.e., decision making under uncertainty. In the solution process, it is extremely difficult to represent the existing or future state precisely, since uncertainty is a state of having limited knowledge. Indeed, compared to the deterministic case, which is decision making under certainty, the stochastic case is more realistic and gives more accurate results, because the majority of problems in reality inevitably have many unknown parameters. In addition, time scale calculus theory is applicable to any field in which a dynamic process can be described with discrete or continuous models. Many stochastic dynamic models are discrete or continuous, so the results of time scale calculus are directly applicable to them as well. The aim of this thesis is to introduce a general form of a stochastic dynamic sequence problem on complex discrete time domains and to find the optimal sequence that maximizes it.
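The recursive method this abstract describes is, at its core, Bellman backward induction: the value function is computed from the final stage backward, taking an expectation over the stochastic transition at each step. A minimal sketch on a toy three-state, finite-horizon problem (the dynamics, costs, and terminal reward are invented for illustration):

```python
def solve_dp(T=5, states=(0, 1, 2), actions=(0, 1)):
    """Backward induction for a toy finite-horizon stochastic DP.

    Action 1 pays a small cost but tends to move the state up; action 0
    is free but lets the state decay. The terminal reward is the final
    state. All numbers are illustrative."""
    def transition(s, a):            # list of (probability, next state)
        if a == 1:
            return [(0.8, min(s + 1, 2)), (0.2, s)]
        return [(0.7, max(s - 1, 0)), (0.3, s)]

    V = {s: float(s) for s in states}           # terminal value V_T
    policy = {}
    for t in reversed(range(T)):
        V_next, V = V, {}
        for s in states:
            q_best, a_best = None, None
            for a in actions:
                cost = 0.1 if a == 1 else 0.0
                q = -cost + sum(p * V_next[s2] for p, s2 in transition(s, a))
                if q_best is None or q > q_best:
                    q_best, a_best = q, a
            V[s], policy[(t, s)] = q_best, a_best
        # V now holds V_t; the loop continues backward to t = 0.
    return V, policy
```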
Advisors/Committee Members: Ferhan Atici (Director), Tom Richmond, Di Wu.
Subjects/Keywords: Dynamic Programming; Stochastic Programming; Stochastic Control Theory; Stochastic Differential Equations; Stochastic Analysis; Martingales (Mathematics); Analysis; Applied Mathematics; Mathematics; Statistics and Probability

University of California – Santa Cruz
28.
Anderson, Ross.
Uncertainty-Anticipating Stochastic Optimal Feedback Control of Autonomous Vehicle Models.
Degree: Applied Mathematics and Statistics, 2014, University of California – Santa Cruz
URL: http://www.escholarship.org/uc/item/3400q1w1
Control of autonomous vehicle teams has emerged as a key topic in the control and robotics communities, owing to a growing range of applications that can benefit from the increased functionality provided by multiple vehicles. However, the mathematical analysis of vehicle control problems is complicated by their nonholonomic and kinodynamic constraints, and, due to environmental uncertainties and information flow constraints, the vehicles operate with heightened uncertainty about the team's future motion. In this dissertation, we are motivated by autonomous vehicle control problems that highlight these uncertainties, with particular attention paid to the uncertainty in the future motion of a secondary agent. Focusing on the Dubins vehicle and unicycle model, we propose a stochastic modeling and optimal feedback control approach that anticipates the uncertainty inherent to the systems. We first consider the application of a Dubins vehicle that should maintain a nominal distance from a target with an unknown future trajectory, such as a tagged animal or vehicle. Stochasticity is introduced in the problem by assuming that the target's motion can be modeled as a Wiener process, and the possibility for the loss of target observations is modeled using stochastic transitions between discrete states. An optimal control policy that is consistent with the stochastic kinematics is computed and is shown to perform well both in the case of a Brownian target and for natural, smooth target motion. We also characterize the resulting optimal feedback control laws in comparison to their deterministic counterparts for the case of a Dubins vehicle in a stochastically varying wind. Turning to the case of multiple vehicles, we develop a method using a Kalman smoothing algorithm for multiple vehicles to enhance an underlying analytic feedback control. The vehicles achieve a formation optimally and in a manner that is robust to uncertainty.
To deal with a key implementation issue of these controllers on autonomous vehicle systems, we propose a self-triggering scheme for stochastic control systems, whereby the time points at which the control loop should be closed are computed from predictions of the process in a way that ensures stability.
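The self-triggering idea in the last paragraph, closing the control loop only when the predicted process uncertainty grows too large, can be sketched for the simplest case of Wiener-process uncertainty growth (the rule and constants below are an illustrative stand-in, not the dissertation's scheme):

```python
def self_triggered_times(horizon, sigma=0.5, tol=0.2):
    """Toy self-triggering rule: between control updates the state
    uncertainty grows like a Wiener process, std = sigma * sqrt(dt),
    so the loop is closed whenever the predicted std reaches tol.
    Returns the resulting update times over [0, horizon]."""
    dt = (tol / sigma) ** 2      # largest dt with sigma*sqrt(dt) <= tol
    times, t = [], 0.0
    while t + dt <= horizon:
        t += dt
        times.append(t)
    return times
```

Tightening the tolerance produces more frequent loop closures, the basic trade-off between communication/computation load and stability that self-triggered control negotiates.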
Subjects/Keywords: Applied mathematics; Robotics; dubins vehicle; nonlinear control; path-integral control; self-triggered control; stochastic optimal control; stochastic processes

University of Oxford
29.
Evans, Martin A.
Multiplicative robust and stochastic MPC with application to wind turbine control.
Degree: PhD, 2014, University of Oxford
URL: http://ora.ox.ac.uk/objects/uuid:0ad9b878-00f3-4cfa-a683-148765e3ae39
;
https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.635233
A robust model predictive control algorithm is presented that explicitly handles multiplicative, or parametric, uncertainty in linear discrete models over a finite horizon. The uncertainty in the predicted future states and inputs is bounded by polytopes. The computational cost of running the controller is reduced by calculating matrices offline that provide a means to construct outer approximations to robust constraints to be applied online. The robust algorithm is extended to problems of uncertain models with an allowed probability of violation of constraints. The probabilistic degrees of satisfaction are approximated by one-step-ahead sampling, with a greedy solution to the resulting mixed integer problem. An algorithm is given to enlarge a robustly invariant terminal set to exploit the probabilistic constraints. Exponential basis functions are used to create a robust MPC algorithm for which the predictions are defined over the infinite horizon. The control degrees of freedom are weights that define the bounds on the state and input uncertainty when multiplied by the basis functions. The controller handles multiplicative and additive uncertainty. Robust MPC is applied to the problem of wind turbine control. Rotor speed and tower oscillations are controlled by a low sample rate robust predictive controller. The prediction model has multiplicative and additive uncertainty due to the uncertainty in short-term future wind speeds and in model linearisation. Robust MPC is compared to nominal MPC by means of a high-fidelity numerical simulation of a wind turbine under the two controllers in a wide range of simulated wind conditions.
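The offline constraint tightening this abstract describes can be illustrated in one dimension with interval (rather than polytopic) uncertainty: each prediction step must reserve margin for the worst-case accumulated disturbance. A hedged sketch with invented numbers:

```python
def tightened_bounds(x_max, w_max, a, horizon):
    """Scalar stand-in for robust constraint tightening: with dynamics
    x+ = a*x + u + w and |w| <= w_max, the state constraint x <= x_max
    imposed k steps ahead is tightened by the worst-case accumulated
    disturbance sum_{i<k} a**i * w_max. Illustrative only; the thesis
    bounds the uncertainty with polytopes rather than intervals."""
    bounds, margin = [], 0.0
    for k in range(horizon):
        bounds.append(x_max - margin)
        margin += (a ** k) * w_max
    return bounds
```

These margins can be computed offline, so the online optimizer only sees ordinary linear constraints, which is the computational saving the abstract mentions.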
Subjects/Keywords: 629.8; Control engineering; predictive control; model predictive control; robust control; stochastic MPC; probabilistic control; control theory; wind power; wind turbine control

University of Oxford
30.
Ng, Desmond Han Tien.
Stochastic model predictive control.
Degree: PhD, 2011, University of Oxford
URL: http://ora.ox.ac.uk/objects/uuid:b56df5ea-10ee-428f-aeb9-1479ce9a7b5f
;
https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.543533
The work in this thesis focuses on the development of a Stochastic Model Predictive Control (SMPC) algorithm for linear systems with additive and multiplicative stochastic uncertainty, subject to linear input/state constraints. Constraints can be in the form of hard constraints, which must be satisfied at all times, or soft constraints, which can be violated up to a pre-defined limit on the frequency of violation or the expected number of violations in a given period. When constraints are included in the SMPC algorithm, the difficulty arising from stochastic model parameters manifests itself in the online optimization in two ways: in predicting the probability distribution of future states, and in imposing constraints on closed-loop responses through constraints on predictions. This problem is overcome through the introduction of layered tubes around a centre trajectory. These tubes are optimized online in order to produce a systematic and less conservative approach to handling constraints. The layered tubes, centered around a nominal trajectory, achieve soft constraint satisfaction through the imposition of constraints on the probabilities of one-step-ahead transition of the predicted state between the layered tubes, and constraints on the probability of one-step-ahead constraint violations. An application in the field of Sustainable Development policy is used as an example. With some adaptation, the algorithm is extended to the case where the uncertainty is not independently and identically distributed. Also, by including linearization errors, it is extended to non-linear systems with additive uncertainty.
Subjects/Keywords: 003.5; Control engineering; Tube MPC; Stochastic MPC; Model Predictive Control