You searched for subject:(Convex Optimization)
Showing records 1 – 30 of 520 total matches.
1.
Uthayakumar, R.
Study on convergence of optimization problems.
Degree: 2014, INFLIBNET
URL: http://shodhganga.inflibnet.ac.in/handle/10603/17964
In this thesis, various notions of convergence of sequences of sets and functions are presented, together with their applications to the convergence of optimal values under perturbations of the constraint set and/or the objective function. We also present conditions under which a sequence of constrained convex or non-convex optimization problems converges, in a suitable sense, to a given constrained convex or non-convex optimization problem. Further, Hadamard Well-Posedness with respect to SP convergence is defined and compared with Strong Well-Posedness, Levitin-Polyak Well-Posedness and Tykhonov Well-Posedness. Some Well-Posedness theories are also developed for B-vex optimization problems.
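For orientation, one classical notion behind such results (our illustration; the thesis may also treat other convergence concepts) is Painlevé-Kuratowski convergence of a sequence of sets A_n:

    \liminf_{n\to\infty} A_n = \{ x : x = \lim_{n\to\infty} x_n,\ x_n \in A_n \text{ for all large } n \}
    \limsup_{n\to\infty} A_n = \{ x : x = \lim_{k\to\infty} x_{n_k},\ x_{n_k} \in A_{n_k} \text{ for some subsequence } (n_k) \}

with A_n said to converge to A when both limits equal A; applying such notions to constraint sets and to epigraphs of objective functions is what links set convergence to the convergence of optimal values.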
References included
Advisors/Committee Members: Kanniappan, P.
Subjects/Keywords: Convergence; Convex; Functions; Non-convex; Optimization; Sets

NSYSU
2.
Zhang, Shu-Bin.
Study on Digital Filter Design and Coefficient Quantization.
Degree: Master, Communications Engineering, 2011, NSYSU
URL: http://etd.lib.nsysu.edu.tw/ETD-db/ETD-search/view_etd?URN=etd-0727111-135237
This thesis builds on convex optimization theory [1]. We study how to transform the filter design problem into a convex optimization problem, so that the solution is guaranteed to be the global optimum. After obtaining the filter coefficients, we quantize them and then reduce the number of quantization bits of the coefficients using the algorithm of [2]. Finally, we vary the order of quantization and compare the result with that of the method of [2].
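To make the step from filter design to convex optimization concrete, the following minimal sketch (our own toy example, not the thesis's formulation; the tap count, frequency grid, and lowpass specification are assumptions) poses minimax linear-phase FIR design as a convex problem using the cvxpy modeling package:

    # Minimax (Chebyshev) FIR amplitude-response design as a convex program.
    # The amplitude H(w) = sum_k h[k] cos(k w) is linear in h, so the
    # worst-case deviation from the ideal response is convex in h.
    import numpy as np
    import cvxpy as cp

    n = 20                                 # number of taps (assumed)
    w = np.linspace(0, np.pi, 200)         # frequency grid (assumed)
    A = np.cos(np.outer(w, np.arange(n)))  # cosine basis of the response
    d = (w <= 0.3 * np.pi).astype(float)   # ideal lowpass spec (assumed)

    h = cp.Variable(n)
    prob = cp.Problem(cp.Minimize(cp.norm(A @ h - d, 'inf')))
    prob.solve()
    print("worst-case ripple:", prob.value)

Because the problem is convex, the solved h is globally optimal; the coefficient quantization studied in the thesis would then operate on this h.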
Advisors/Committee Members: Wan-Jen Huang (chair), Chih-Wen Chang (chair), Chih-Peng Li (chair), Chao-Kai Wen (committee member), Pang-An Ting (chair).
Subjects/Keywords: Filter; Optimization; Convex; Bits; Quantization

Victoria University of Wellington
3.
Jellyman, Dayle Raymond.
Convex Optimization for Distributed Acoustic Beamforming.
Degree: Masters, 2017, Victoria University of Wellington
URL: http://hdl.handle.net/10063/6650
Beamforming filter optimization can be performed over a distributed wireless sensor network, but the output calculation remains either centralized or linked in time to the weights optimization. We propose a distributed method for calculating the beamformer output which is independent of the filter optimization. The new method trades a small decrease in signal-to-noise performance for a large decrease in transmission power. Background is given on distributed convex optimization and acoustic beamforming. The new model is described with analysis of its behaviour under independent noise. Simulation results demonstrate the desirable properties of the new model in comparison with centralized output computation.
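The structure being exploited can be written in one line (our notation, as background): a filter-and-sum beamformer output is a sum of per-node terms,

    y(t) = \sum_{k=1}^{K} \mathbf{w}_k^{\mathsf H} \mathbf{x}_k(t),

where node k holds its local weights w_k and microphone signals x_k(t). Each term can be computed locally, so only partial sums need to be communicated, which is what makes a distributed output calculation possible in principle.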
Advisors/Committee Members: Kleijn, Bastiaan.
Subjects/Keywords: Distributed; Beamforming; Convex optimization

Université Catholique de Louvain
4.
Martin, Benoît.
Autonomous microgrids for rural electrification : joint investment planning of power generation and distribution through convex optimization.
Degree: 2018, Université Catholique de Louvain
URL: http://hdl.handle.net/2078.1/214246
Autonomous microgrid planning requires considering investments in both network and generation assets, as there is no connection to another power system. The goal is to find the least-cost investment solution able to supply power to a group of loads. The cost of the system includes investment and operational costs, and the operational feasibility of an investment solution has to be ensured. Hence, AC power flow feasibility is integrated in the model for a selected subset of operational conditions. The integrality of investment decisions and the non-convexity of the AC power flow equations make the model a mixed-integer nonlinear program (MINLP), which is intractable for real-world cases and offers no guarantee that a solution is the global optimum. The first part of this thesis is devoted to the development of a hierarchy of convex relaxations of the problem of growing complexity, with the goal of identifying the trade-off between the accuracy of the model and the computational burden. In the second part, the uncertainties related to power injections, i.e. load consumption and production from renewable energy sources (RES), are taken into account. A robust formulation of the joint planning problem is developed for this purpose, and the results are compared with those obtained beforehand with deterministic formulations.
(FSA - Engineering Sciences) – UCL, 2018
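One standard ingredient of such relaxation hierarchies (a minimal illustration in our notation; the thesis builds its own sequence of relaxations) is the second-order cone relaxation of the branch flow model, which replaces the nonconvex equality defining the squared branch current by an inequality:

    \ell_{ij} = \frac{P_{ij}^2 + Q_{ij}^2}{v_i} \quad\longrightarrow\quad \ell_{ij} \ge \frac{P_{ij}^2 + Q_{ij}^2}{v_i},

where P_ij and Q_ij are the active and reactive flows on line (i, j), v_i the squared voltage magnitude and ℓ_ij the squared current magnitude. The relaxed inequality is a rotated second-order cone constraint, so the resulting problem is convex.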
Advisors/Committee Members: UCL - SST/IMMC/MEED - Mechatronic, Electrical Energy, and Dynamics Systems, UCL - SST/ICTM - Institute of Information and Communication Technologies, Electronics and Applied Mathematics, UCL - Ecole Polytechnique de Louvain, Glineur, François, Pardoen, Thomas, Vallée, François, Haesen, Edwin, Leemput, Niels, Pilo, Fabrizio, Papavasiliou, Anthony, De Jaeger, Emmanuel.
Subjects/Keywords: Planning; Microgrid; Convex optimization
5.
Cosentino, Alessandro.
Quantum State Local Distinguishability via Convex Optimization.
Degree: 2015, University of Waterloo
URL: http://hdl.handle.net/10012/9572
Entanglement and nonlocality play a fundamental role in quantum computing. To understand the interplay between these phenomena, researchers have considered the model of local operations and classical communication, or LOCC for short, which is a restricted subset of all possible operations that can be performed on a multipartite quantum system. The task of distinguishing states from a set that is known a priori to all parties is one of the most basic problems among those used to test the power of LOCC protocols, and it has direct applications to quantum data hiding, secret sharing and quantum channel capacity. The focus of this thesis is on state distinguishability problems for classes of quantum operations that are more powerful than LOCC, yet more restricted than global operations, namely the classes of separable and positive-partial-transpose (PPT) measurements. We build a framework based on convex optimization (on cone programming, in particular) to study such problems. Compared to previous approaches, the method described in this thesis provides precise numerical bounds and quantitative analytic results. By combining the duality theory of cone programming with the channel-state duality, we also establish a novel connection between the state distinguishability problem and the study of positive linear maps, a topic of independent interest in quantum information theory.
We apply our framework to several questions that were left open in previous works regarding the distinguishability of maximally entangled states and unextendable product sets. First, we exhibit small sets of orthogonal maximally entangled states that are not perfectly distinguishable by LOCC. As a consequence, we show a gap between the power of PPT and separable measurements for the task of distinguishing sets consisting only of maximally entangled states. Furthermore, we prove tight bounds on the entanglement cost necessary to distinguish any set of Bell states, thus showing that quantum teleportation is optimal for this task. Finally, we provide an easily checkable characterization of when an unextendable product set is perfectly discriminated by separable measurements, along with the first known example of an unextendable product set that cannot be perfectly discriminated by separable measurements.
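For concreteness, the basic semidefinite program in this line of work (stated in our notation): the optimal probability of distinguishing states ρ_1, …, ρ_N, given with priors p_1, …, p_N, by a PPT measurement is

    \begin{aligned}
    \text{maximize}   \quad & \sum_{i=1}^{N} p_i \langle P_i, \rho_i \rangle \\
    \text{subject to} \quad & \sum_{i=1}^{N} P_i = \mathbb{1}, \qquad P_i \succeq 0, \qquad P_i^{T_B} \succeq 0 \quad (i = 1, \dots, N),
    \end{aligned}

where T_B denotes the partial transpose on the second subsystem; dropping the partial-transpose constraints recovers the program for unrestricted global measurements.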
Subjects/Keywords: quantum information; convex optimization

University of Texas – Austin
6.
Berning, Andrew Walter, Jr.
Verification of successive convexification algorithm.
Degree: MS in Engineering, Aerospace Engineering, 2016, University of Texas – Austin
URL: http://hdl.handle.net/2152/41579
In this report, I describe a technique which allows a non-convex optimal control problem to be expressed and solved in a convex manner. I then verify the resulting solution to ensure its physical feasibility and its optimality. The original, non-convex problem is the fuel-optimal powered landing problem with aerodynamic drag. The non-convexities present in this problem include mass depletion dynamics, aerodynamic drag, and free final time. Through the use of lossless convexification and successive convexification, this problem can be formulated as a series of iteratively solved convex problems that requires only a guess of a final time of flight. The solution’s physical feasibility is verified through a nonlinear simulation built in Simulink, while its optimality is verified through the general nonlinear optimal control software GPOPS-II.
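As a minimal illustration of the successive-convexification loop (a toy keep-out constraint of our own invention, not the powered-landing problem; cvxpy is assumed), each iteration linearizes the nonconvex constraint about the previous solution and solves the resulting convex subproblem inside a trust region:

    # Toy successive convexification: the nonconvex constraint ||x||^2 >= 1
    # is replaced by its linearization about the previous iterate, which is
    # a conservative (convex) inner approximation of the true constraint.
    import numpy as np
    import cvxpy as cp

    target = np.array([0.1, 0.2])   # point inside the keep-out ball (assumed)
    xk = np.array([1.5, 0.0])       # initial feasible guess
    for it in range(10):
        x = cp.Variable(2)
        lin = xk @ xk + 2 * xk @ (x - xk)      # linearization of ||x||^2 at xk
        prob = cp.Problem(cp.Minimize(cp.sum_squares(x - target)),
                          [lin >= 1, cp.norm(x - xk) <= 0.5])  # trust region
        prob.solve()
        xk = x.value
    print("converged point:", xk, "norm:", np.linalg.norm(xk))

The real algorithm adds vehicle dynamics, drag, mass depletion, and free final time; its output is what the report then verifies in Simulink and against GPOPS-II.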
Advisors/Committee Members: Akella, Maruthi Ram, 1972- (advisor), Acikmese, Behcet (committee member).
Subjects/Keywords: Convex; Convexification; Optimization; Verification

University of Texas – Austin
7.
Park, Dohyung.
Efficient non-convex algorithms for large-scale learning problems.
Degree: PhD, Electrical and Computer engineering, 2016, University of Texas – Austin
URL: http://hdl.handle.net/2152/46581
The emergence of modern large-scale datasets has led to a huge interest in the problem of learning hidden complex structures. Not only can models from such structures fit the datasets, they also generalize well in the regime where the number of samples is limited compared to the dimensionality. However, one of the main issues is finding computationally efficient algorithms to learn the models. While convex relaxation provides polynomial-time algorithms with strong theoretical guarantees, the large volume of practical datasets creates demand for even faster algorithms with competitive performance. In this dissertation, we consider three types of algorithms, greedy methods, alternating minimization, and non-convex gradient descent, that have been key non-convex approaches to large-scale learning problems. For each theme, we focus on a specific problem and design an algorithm based on these ideas. We begin with the problem of subspace clustering, where one needs to learn underlying unions of subspaces from a set of data points around the subspaces. We develop two greedy algorithms that can perfectly cluster the points and recover the subspaces. The next problem of interest is collaborative ranking, where underlying low-rank preference matrices are to be learned from pairwise comparisons of the entries. We present an alternating-minimization-based algorithm. Finally, we develop a non-convex gradient descent algorithm for general low-rank matrix optimization problems. All of these algorithms exhibit low computational complexity as well as competitive statistical performance, which makes them scalable and suitable for a variety of practical applications. Analysis of the algorithms provides theoretical guarantees of their performance.
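A minimal sketch of the third theme, non-convex (factored) gradient descent for low-rank problems; synthetic data and a fixed step size are our assumptions, and the dissertation's algorithms are considerably more refined:

    # Factored gradient descent: minimize ||M - U V^T||_F^2 over the factors
    # (U, V) directly. The objective is non-convex, yet gradient steps on
    # the factors recover the low-rank matrix.
    import numpy as np

    rng = np.random.default_rng(0)
    n, m, r = 100, 80, 5
    M = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))  # rank-r target

    U = 0.1 * rng.standard_normal((n, r))
    V = 0.1 * rng.standard_normal((m, r))
    eta = 1e-3                     # step size (assumed; tuned in practice)
    for t in range(2000):
        R = U @ V.T - M            # residual
        U, V = U - eta * (R @ V), V - eta * (R.T @ U)  # simultaneous steps
    print("relative error:", np.linalg.norm(U @ V.T - M) / np.linalg.norm(M))

Each iteration costs only a few products with the thin factors, which is what makes the approach attractive at scale.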
Advisors/Committee Members: Sanghavi, Sujay Rajendra, 1979- (advisor), Caramanis, Constantine (advisor), Ghosh, Joydeep (committee member), Dimakis, Alexandros G (committee member), Price, Eric (committee member).
Subjects/Keywords: Machine learning; Non-convex optimization

University of Texas – Austin
8.
Wang, Ye, Ph. D.
Novel convex optimization techniques for circuit analysis and synthesis.
Degree: PhD, Electrical and Computer Engineering, 2018, University of Texas – Austin
URL: http://hdl.handle.net/2152/67661
Technology scaling brings about the need for computationally efficient methods for circuit analysis, optimization, and synthesis. Convex optimization is a special class of mathematical optimization problems which minimize convex functions over convex sets. For general optimization problems, finding a local minimum is often easier than finding a global optimum. The inherent convexity ensures that a local minimum must be a global minimum, so convex optimization problems can be solved efficiently and reliably. These techniques are widely used in the EDA community not only because many problems can inherently be modeled as convex optimization problems, but also because convex approximations work well even for non-convex problems. In particular, this dissertation explores several problems in VLSI design by applying recent convex optimization techniques.
This dissertation first addresses equation-based analog synthesis, which finds design parameters that achieve optimal performance with a given circuit topology. The problem of analog synthesis in deep submicron technology is that convex modeling of circuit performance functions is not accurate enough, while non-convex models are generally hard to solve. This dissertation proposes two techniques to solve this problem by leveraging recent advances in semidefinite programming (SDP) relaxation of polynomial optimization.
The first technique addresses the computational intractability of SDP relaxation in the polynomial optimization formulation. Modeling analog performance functions via general polynomials allows for highly accurate fitting of SPICE data due to their inherent non-convexity and non-linearity. However, the computational complexity of solving the corresponding SDP relaxation scales exponentially with the degree of a polynomial. This dissertation studies fitting high-degree but sparse polynomials with special structure to enable an efficient subsequent optimization phase. Results demonstrate that by handling higher-degree polynomials, the fitting error can be drastically reduced, which translates to a significant increase in the rate of constraint satisfaction.
The second technique addresses the main challenge of using geometric programming (GP): the mismatch between what GP can accurately fit and the behavior of true analog performance functions. This approach proposes fitting device models as high-order monomials, defined as exponential functions of polynomials in the logarithmic variables. The introduction of high-order monomials allows significant improvements in model accuracy. Using SDP relaxations, the proposed strategy is able to obtain efficient near-optimal global solutions.
Next, this dissertation focuses on computational improvements in power grid reduction. Analysis and simulation of power grids for large integrated circuits is computationally expensive due to the tremendous number of elements in power grids. The goal of power grid reduction is to produce a sparse approximation of the original grid with…
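The "high-order monomial" idea has a simple one-variable analogue, a polynomial fit in log-log coordinates; the toy curve and degree below are assumptions of ours:

    # High-order monomial fit: f(x) ~ exp(p(log x)) for a polynomial p,
    # i.e., an ordinary polynomial least-squares fit in log-log coordinates.
    import numpy as np

    x = np.logspace(-1, 1, 50)
    f = 2.0 * x**1.5 / (1.0 + 0.3 * x)   # made-up positive "device" curve

    p = np.polyfit(np.log(x), np.log(f), deg=4)  # polynomial in log x
    f_hat = np.exp(np.polyval(p, np.log(x)))
    print("max relative fit error:", np.max(np.abs(f_hat - f) / f))

With degree 1 this reduces to an ordinary monomial c·x^a, the GP-compatible case; higher degrees buy the accuracy discussed above at the cost of needing the SDP-relaxation machinery to optimize over the resulting models.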
Advisors/Committee Members: Orshansky, Michael (advisor), Caramanis, Constantine (advisor), Sun, Nan (committee member), Swartzlander, Earl (committee member), Zeng, Zhiyu (committee member).
Subjects/Keywords: Convex optimization; EDA problems

Rutgers University
9.
Yao, Wang, 1985-.
Approximate versions of the alternating direction method of multipliers.
Degree: PhD, Operations Research, 2016, Rutgers University
URL: https://rucore.libraries.rutgers.edu/rutgers-lib/51517/
Convex optimization is at the core of many of today's analysis tools for large datasets, and in particular machine learning methods. This thesis develops approximate versions of the alternating direction method of multipliers (ADMM) for the general setting of minimizing the sum of two convex functions. The ADMM is a form of augmented Lagrangian algorithm that has experienced a renaissance in recent years due to its applicability to optimization problems arising from "big data" and image processing applications, and the relative ease with which it may be implemented in parallel and distributed computational environments. There are two fundamental approaches to proving the convergence of the ADMM, each based on a different form of two-way splitting, that is, expressing a mapping as the sum of two simpler mappings. The first approach is based on Douglas-Rachford operator splitting theory and yields considerable insight into the convergence of the ADMM. The second convergence proof approach is at its core based on Lagrangian splitting analysis. We present three new approximate versions of the ADMM based on both convergence analyses, all of which require only knowledge of subgradients of the subproblem objectives, rather than bounds on the distance to the exact subproblem solution. One version, which applies only to certain common special cases, is based on combining the operator splitting analysis of the ADMM with a relative-error proximal point algorithm of Solodov and Svaiter. A byproduct of this analysis is a new, relative-error version of the Douglas-Rachford splitting algorithm for monotone operators. The other two approximate versions of the ADMM are more general and based on the Lagrangian splitting analysis of the ADMM: one uses a summable absolute error criterion, and the other uses a relative error criterion and an auxiliary iterate sequence. We experimentally compare our new algorithms to an essentially exact form of the ADMM and to an inexact form that can be easily derived from prior theory (but again applies only to certain common special cases). These experiments show that our methods can significantly reduce total computational effort when iterative methods are used to solve ADMM subproblems.
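For reference, the exact two-block ADMM that these approximate versions relax looks as follows on the lasso (a standard textbook instance in notation of our choosing, not the thesis's experiments; both subproblems are solved exactly here):

    # Exact ADMM for: minimize 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x = z,
    # i.e., the sum of two convex functions coupled by a linear constraint.
    import numpy as np

    rng = np.random.default_rng(1)
    m, n = 60, 100
    A = rng.standard_normal((m, n))
    b = rng.standard_normal(m)
    lam, rho = 0.5, 1.0

    x = z = u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    Minv = np.linalg.inv(AtA + rho * np.eye(n))   # cached solve (small n)
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    for k in range(200):
        x = Minv @ (Atb + rho * (z - u))   # f-subproblem: ridge solve
        z = soft(x + u, lam / rho)         # g-subproblem: l1 proximal step
        u = u + (x - z)                    # scaled dual update
    print("nonzeros in z:", np.count_nonzero(np.abs(z) > 1e-6))

The approximate variants developed in the thesis replace the x-update (here an exact linear solve) with an iterative solver stopped by the subgradient-based absolute or relative error criteria described above.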
Advisors/Committee Members: Ruszczynski, Andrzej (chair).
Subjects/Keywords: Mathematical optimization; Convex functions

Princeton University
10.
Bullins, Brian Anderson.
Efficient Higher-Order Optimization for Machine Learning.
Degree: PhD, 2019, Princeton University
URL: http://arks.princeton.edu/ark:/88435/dsp01zg64tp85c
In recent years, stochastic gradient descent (SGD) has taken center stage for training large-scale models in machine learning. Although some higher-order methods have improved iteration complexity in theory, the per-iteration costs render them unusable when faced with millions of parameters and training examples.
In this thesis, I will present several works which enable higher-order optimization to be as scalable as first-order methods. The first method is a stochastic second-order algorithm for convex optimization, called LiSSA, which uses Hessian information to construct an unbiased Newton step in time linear in the problem dimension. To bypass the typical efficiency barriers for second-order methods, we harness the ERM structure in standard machine learning tasks.
While convex problems allow for global convergence, recent state-of-the-art models, such as deep neural networks, highlight the importance of developing a better understanding of non-convex guarantees. In order to handle this challenging setting, I will present FastCubic, a Hessian-based method which provably converges to first-order critical points faster than gradient descent, while additionally converging to second-order critical points. Finally, I will establish how to leverage even higher-order derivative information by means of the FastQuartic algorithm, which achieves faster convergence for both smooth and non-smooth convex optimization problems by combining efficient tensor methods with near-optimal higher-order acceleration.
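A minimal sketch of the LiSSA idea on a toy quadratic (dimensions, regularization and scaling are our assumptions): the Newton direction H⁻¹g is estimated through the truncated Neumann series H⁻¹ = Σ_j (I − H)^j, with each term using one sampled Hessian, and several independent estimates are averaged:

    # LiSSA-style Newton-step estimation: v <- g + (I - H_i) v with a fresh
    # single-sample Hessian H_i each step drives v toward H^{-1} g, provided
    # the data are scaled so that ||H_i|| <= 1.
    import numpy as np

    rng = np.random.default_rng(2)
    d, S, reg = 20, 400, 0.1
    a = rng.standard_normal((S, d)) / np.sqrt(2 * d)  # scaling keeps ||H_i|| < 1
    H = a.T @ a / S + reg * np.eye(d)                 # full (average) Hessian
    g = rng.standard_normal(d)
    newton = np.linalg.solve(H, g)                    # exact step, for reference

    K, T = 10, 200
    est = np.zeros(d)
    for k in range(K):                                # average independent runs
        v = g.copy()
        for j in range(T):
            i = rng.integers(S)
            Hi_v = a[i] * (a[i] @ v) + reg * v        # Hessian-vector product, O(d)
            v = g + v - Hi_v
        est += v / K
    print("relative error:", np.linalg.norm(est - newton) / np.linalg.norm(newton))

Each inner step costs O(d), which is the source of the "time linear in the problem dimension" claim for ERM-structured problems.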
Advisors/Committee Members: Hazan, Elad (advisor).
Subjects/Keywords: convex optimization; higher-order; machine learning; non-convex optimization; second-order

Penn State University
11.
Wang, Zi.
First-Order Methods for Large Scale Convex Optimization.
Degree: 2016, Penn State University
URL: https://submit-etda.libraries.psu.edu/catalog/13485zxw121
The revolution of storage technology in the past few decades has made it possible to gather tremendous amounts of data, from demand and sales records to web user behavior, customer ratings, software logs and patient data in healthcare. Recognizing patterns and discovering knowledge from large amounts of data has become more and more important and has attracted significant attention in operations research (OR), statistics and computer science. Mathematical programming is an essential tool within these fields, especially for data mining and machine learning, and it plays a significant role in data-driven predictions/decisions and pattern recognition.
The major challenge in solving these large-scale optimization problems is to process large data sets within practically tolerable run-times. This is where the advantages of first-order algorithms become clearly apparent. These methods only use gradient information and are particularly good at computing medium-accuracy solutions. In contrast, interior point method computations that exploit second-order information quickly become intractable, even for moderate-size problems, since the complexity of each factorization of an n × n matrix in interior point methods is O(n³). The memory required for second-order methods can also be an issue in practice for problems with dense data matrices due to limited RAM. Another benefit of first-order methods is that one can exploit additional structural information of the problem to further improve the efficiency of these algorithms.
In this dissertation, we studied convex regression and multi-agent consensus optimization problems, and developed new fast first-order iterative algorithms to efficiently compute ε-optimal and ε-feasible solutions to these large-scale optimization problems in parallel, distributed, or asynchronous computation settings while carefully managing memory usage. The proposed algorithms are able to take advantage of the structural information of the specific problems considered in this dissertation and have a strong capability to deal with large-scale problems. Our numerical results showed the advantages of the proposed methods over traditional methods in terms of speed, memory usage, and especially communication requirements for distributed methods.
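The per-iteration contrast with interior point methods is easy to see in code: the sketch below (our toy least-squares instance, fixed step 1/L) runs Nesterov-accelerated gradient descent, where an iteration costs two matrix-vector products instead of an O(n³) factorization:

    # Nesterov-accelerated gradient descent for 0.5*||Ax - b||^2: first-order
    # information only; the data matrix is touched through matvecs alone.
    import numpy as np

    rng = np.random.default_rng(3)
    m, n = 500, 200
    A = rng.standard_normal((m, n))
    b = rng.standard_normal(m)

    L = np.linalg.norm(A, 2) ** 2      # gradient Lipschitz constant
    x = y = np.zeros(n)
    t = 1.0
    for k in range(300):
        grad = A.T @ (A @ y - b)       # two matvecs per iteration
        x_new = y - grad / L
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)   # momentum step
        x, t = x_new, t_new
    print("gradient norm:", np.linalg.norm(A.T @ (A @ x - b)))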
Advisors/Committee Members: Necdet S Aybat, Dissertation Advisor/Co-Advisor, Necdet S Aybat, Committee Chair/Co-Chair, Vinayak V Shanbhag, Committee Member, Enrique del Castillo, Committee Member, Constantino Manuel Lagoa, Outside Member.
Subjects/Keywords: first-order methods; convex optimization; distributed optimization; convex regression; multi-agent consensus optimization

University of Iowa
12.
Xu, Yi.
Accelerating convex optimization in machine learning by leveraging functional growth conditions.
Degree: PhD, Computer Science, 2019, University of Iowa
URL: https://ir.uiowa.edu/etd/7048
In recent years, unprecedented growth in the scale and dimensionality of data has raised big computational challenges for traditional optimization algorithms; it has thus become very important to develop efficient and effective optimization algorithms for solving numerous machine learning problems. Many traditional algorithms (e.g., the gradient descent method) are black-box algorithms, which are simple to implement but ignore the underlying geometrical properties of the objective function. A recent trend in accelerating these traditional black-box algorithms is to leverage geometrical properties of the objective function such as strong convexity. However, most existing methods rely too much on knowledge of strong convexity, which makes them inapplicable to problems without strong convexity or without knowledge of it. To bridge the gap between traditional black-box algorithms, which do not know the problem's geometrical properties, and accelerated algorithms under strong convexity, how can we develop algorithms that adapt to the objective function's underlying geometrical properties? To answer this question, in this dissertation we focus on convex optimization problems and propose to explore an error bound condition that characterizes the growth of the objective function around a global minimum. Under this error bound condition, we develop algorithms that (1) can adapt to the problem's geometrical properties to enjoy faster convergence in stochastic optimization; (2) can leverage the problem's structural regularizer to further improve the convergence speed; (3) can address both deterministic and stochastic optimization problems with explicit max-structural loss; (4) can leverage the objective function's smoothness property to improve the convergence rate for stochastic optimization. We first considered stochastic optimization problems with general stochastic loss. We proposed two accelerated stochastic subgradient (ASSG) methods with theoretical guarantees, which iteratively solve the original problem approximately in a local region around a historical solution, with the size of the local region gradually decreasing as the solution approaches the optimal set. Second, we developed a new theory of the alternating direction method of multipliers (ADMM) with a new adaptive scheme for the penalty parameter, for both deterministic and stochastic optimization problems with structured regularizers. Under the LEB condition, the proposed deterministic and stochastic ADMM enjoy improved iteration complexities of \widetilde O(1/ε^{1-θ}) and \widetilde O(1/ε^{2(1-θ)}), respectively. Then, we considered a family of optimization problems with an explicit max-structural loss. We developed a novel homotopy smoothing (HOPS) algorithm that employs Nesterov's smoothing technique and the accelerated gradient method and runs in stages. Under a generic condition, the so-called local error bound (LEB) condition, it can improve the iteration complexity of O(1/ε) to \widetilde…
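The growth condition at the center of this work can be stated compactly (our paraphrase): f satisfies a local error bound on the ε-sublevel set if

    dist(x, X_*) ≤ c (f(x) − f_*)^θ    for all x with f(x) − f_* ≤ ε,

where X_* is the optimal set, f_* the optimal value, c > 0 and θ ∈ (0, 1]. Strong convexity gives θ = 1/2, but the condition holds much more broadly, and θ is exactly the exponent appearing in the \widetilde O(1/ε^{2(1-θ)}) complexity quoted above.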
Advisors/Committee Members: Yang, Tianbao, 1971- (supervisor).
Subjects/Keywords: Convex Optimization; Local Error Bound; Computer Sciences

Universidade Nova
13.
Soares, Diogo Lopes.
Design of multidimensional compact constellations with high power efficiency.
Degree: 2013, Universidade Nova
URL: http://www.rcaap.pt/detail.jsp?id=oai:run.unl.pt:10362/11111
Dissertation presented in fulfilment of the requirements for the degree of Master in Electrical and Computer Engineering, Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia
Advisors/Committee Members: Dinis, Rui, Beko, Marko.
Subjects/Keywords: Multidimensional constellations; Power efficiency; Convex optimization

Université Catholique de Louvain
14.
Orban de Xivry, François-Xavier.
Nearest stable system.
Degree: 2013, Université Catholique de Louvain
URL: http://hdl.handle.net/2078.1/132586
Stability is a universal concept which we experience in our everyday lives. It plays a central role in the study of dynamical systems and is still the subject of intensive research by the control community.
The question of finding the nearest stable system appears in system identification where one sometimes faces unstable models created from the perturbed data of a stable system. In many cases, solving the problem in the spectral space does not make sense due to the sensitivity of the poles of the system to perturbations. We thus investigate algorithms for finding a nearest stable system in the space of coefficients.
Very few existing methods are available to solve the problem. This is due to the nonsmooth, nonconvex nature of the set of stable systems. Moreover, the methods that can solve the problem are not always able to actually guarantee that the solution will be stable.
Our work contains the original description of two methods for matrices and two methods for polynomials, which substantially expands the set of optimization methods available for the resolution of the nearest stable system problem. These methods cover all possible variants of the problem: for polynomials or for matrices, in continuous time or in discrete time, for real or complex data. We prove the convergence of the methods to a stationary point of the formulation.
The various approaches proposed in the thesis will provide new tools and inspiration for the resolution of closely related problems in a field in constant evolution and full of challenges.
(FSA 3) – UCL, 2013
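In the matrix, continuous-time case the problem reads compactly (our restatement):

    \min_{X \in \mathcal{S}} \|A - X\|_F^2, \qquad \mathcal{S} = \{ X \in \mathbb{R}^{n \times n} : \alpha(X) \le 0 \},

where α(X) = max_i Re λ_i(X) is the spectral abscissa; the discrete-time variant constrains the spectral radius ρ(X) ≤ 1 instead. The set S is neither convex nor smooth, which is precisely the difficulty described above.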
Advisors/Committee Members: UCL - SST/ICTM/INMA - Pôle en ingénierie mathématique, Van Dooren, Paul, Nesterov, Yurii, Lefèvre, Philippe, Absil, Pierre-Antoine, Overton, Michael, Van Barel, Marc.
Subjects/Keywords: Stability; Dynamical system; Convex optimization; Nonconvex; Nonsmooth

University of Ghana
15.
Katsekpor, T.
Iterative Methods for Large Scale Convex Optimization.
Degree: Doctoral, 2017, University of Ghana
URL: http://ugspace.ug.edu.gh/handle/123456789/23393
This thesis presents a detailed description and analysis of Bregman's iterative method for convex programming with linear constraints. Row- and block-action methods for large scale problems are adopted for convex feasibility problems. This motivates Bregman-type methods for optimization.
A new simultaneous version of Bregman's method for the optimization of a Bregman function subject to linear constraints is presented, and an extension of the method and its application to solving convex optimization problems is also made.
Closed-form formulae are known for Bregman's method in the particular cases of entropy maximization, such as the Shannon and Burg entropies. Algorithms such as the Multiplicative Algebraic Reconstruction Technique (MART) and related methods use closed-form formulae in their iterations. We present a generalization of these closed-form formulae of Bregman's method when the objective function variables are separated, and analyze its convergence.
We also analyze the algorithm MART when the problem is inconsistent and give some convergence results.
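The closed-form character of MART is easy to exhibit; below is a minimal sketch for a consistent nonnegative system (synthetic data and a unit relaxation parameter are our assumptions):

    # MART row-action sweep for b = A x with nonnegative data: the update
    # x_j <- x_j * (b_i / (Ax)_i)^(a_ij) is in closed form, and the matrix
    # is scaled so that 0 <= a_ij <= 1, as MART requires.
    import numpy as np

    rng = np.random.default_rng(4)
    m, n = 30, 50
    A = rng.random((m, n))
    A = A / A.max()                  # ensure entries lie in [0, 1]
    x_true = rng.random(n) + 0.1
    b = A @ x_true                   # consistent right-hand side

    x = np.ones(n)                   # positive starting point
    for sweep in range(200):
        for i in range(m):
            x = x * (b[i] / (A[i] @ x)) ** A[i]   # closed-form row update
    print("residual:", np.linalg.norm(A @ x - b))

For consistent systems this converges to a maximum-entropy nonnegative solution; the inconsistent case is the one analyzed in the thesis.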
Subjects/Keywords: Iterative Methods; Large Scale Convex Optimization

Iowa State University
16.
Li, Chong.
Fundamental limitations on communication channels with noisy feedback: information flow, capacity and bounds.
Degree: 2013, Iowa State University
URL: https://lib.dr.iastate.edu/etd/13421
Since C. Shannon's success (in 1948) in obtaining the capacity (i.e. the maximal achievable transmission rate under which the message can be recovered with arbitrarily small probability of error) of non-feedback point-to-point communication channels, information theory has proved to be a powerful tool for deriving fundamental limitations of communication systems. During the last decade, motivated by the emergence of networked systems, information theorists have turned much of their attention to communication channels with feedback (through another channel from receiver to transmitter). Under the assumption that the feedback channel is noiseless, a large body of notable results has been derived, although much work still needs to be done. However, when this ideal assumption is removed, i.e., the feedback channel is noisy, only a few valuable results can be found in the literature and many challenging problems are still open.
This thesis aims to address some of these long-standing noisy feedback problems, with a concentration on channel capacity. First of all, we analyze the fundamental information flow in noisy feedback channels. We introduce a new notion, the residual directed information, in order to characterize the noisy feedback channel capacity, for which the standard directed information cannot be used. As an illustration, finite-alphabet noisy feedback channels are studied in detail. Next, we provide an information flow decomposition equality which serves as a foundation for the other novel results in this thesis.
With the result of information flow decomposition in hand, we next investigate time-varying Gaussian channels with additive Gaussian noise feedback. Following the notable Cover-Pombra results in 1989, we define the n-block noisy feedback capacity and derive a pair of n-block upper and lower bounds on the n-block noisy feedback capacity. These bounds can be obtained by efficiently solving convex optimization problems. Under the assumption of stationarity on the additive Gaussian noises, we show that the limits of these n-block bounds can be characterized in a power spectral optimization form. In addition, two computable lower bounds are derived for the Shannon capacity.
Next, we consider a class of channels where feedback could not increase the capacity and thus the noisy feedback capacity equals to the non-feedback capacity. We derive a necessary condition (characterized by the directed information) for the capacity-achieving channel codes. The condition implies that using noisy feedback is detrimental to achievable rate, i.e, the capacity can not be achieved by using noisy feedback.
Finally, we introduce a new framework of communication channels with noisy feedback where the feedback information received by the transmitter is also available to the decoder with some finite delays. We investigate the capacity and linear coding schemes for this extended noisy feedback channels.
To summarize, this thesis firstly provides a foundation (i.e. information flow analysis) for analyzing communications…
Subjects/Keywords: Capacity; Convex Optimization; Feedback; Information Theory; Engineering
APA (6th Edition):
Li, C. (2013). Fundamental limitations on communication channels with noisy feedback: information flow, capacity and bounds. (Thesis). Iowa State University. Retrieved from https://lib.dr.iastate.edu/etd/13421
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Li, Chong. “Fundamental limitations on communication channels with noisy feedback: information flow, capacity and bounds.” 2013. Thesis, Iowa State University. Accessed March 04, 2021.
https://lib.dr.iastate.edu/etd/13421.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Li, Chong. “Fundamental limitations on communication channels with noisy feedback: information flow, capacity and bounds.” 2013. Web. 04 Mar 2021.
Vancouver:
Li C. Fundamental limitations on communication channels with noisy feedback: information flow, capacity and bounds. [Internet] [Thesis]. Iowa State University; 2013. [cited 2021 Mar 04].
Available from: https://lib.dr.iastate.edu/etd/13421.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Li C. Fundamental limitations on communication channels with noisy feedback: information flow, capacity and bounds. [Thesis]. Iowa State University; 2013. Available from: https://lib.dr.iastate.edu/etd/13421
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

University of Technology, Sydney
17.
Ho, Huu Minh Tam.
Interference management in 5G cellular networks.
Degree: 2016, University of Technology, Sydney
URL: http://hdl.handle.net/10453/102703
▼ This dissertation is concerned with nonconvex optimization problems of interference management under new disruptive technologies in fifth-generation cellular networks. These problems are key to the successful roll-out of these technologies but have remained unsolved owing to their mathematical difficulty. This dissertation therefore provides novel minorants/majorants of the nonconvex functions involved, which are then used within a successive convex approximation framework.
The first technology considered is heterogeneous networks (HetNet), in which base stations (BSs) of various sizes and types are densely deployed in the same area. Although a HetNet provides a significant improvement in spectral efficiency and offloading, designing an optimal power-transmission and association-control policy is challenging, especially when both quality-of-service (QoS) and backhaul capacity are considered. Maximizing the total network throughput or the fairness among users in a HetNet is a challenging mixed-integer nonconvex optimization problem. Iterative algorithms based on alternating descent and successive convex programming are proposed to address such problems.
Next, we consider a full-duplex multi-user multiple-input multiple-output (FD MU-MIMO) multicell network in which base stations simultaneously serve both downlink (DL) and uplink (UL) users on the same frequency band via multiple antennas, potentially doubling the spectral efficiency. Since the use of FD radios introduces additional self-interference (SI) and cross interference between UL and DL transmissions, the minimum cell throughput maximization and the sum network throughput maximization with QoS guarantees are challenging nonconvex problems. To solve them, we develop path-following algorithms based on a successive convex quadratic programming framework. As a byproduct, the proposed algorithms extend to optimal precoding-matrix design in a half-duplex MU-MIMO multicell network with the Han-Kobayashi transmission strategy.
Finally, the last research direction stems from the need to prolong user equipment (UE) battery life in power-limited networks. To this end, we consider the optimal design of precoding matrices in the emerging energy-harvesting-enabled (EH-enabled) MU-MIMO networks, in which BSs can transfer information and energy to UEs on the same channel using either power-splitting (PS) or time-switching (TS) mechanisms. The total network throughput maximization problem under QoS and EH constraints, with either PS or TS, in FD networks is computationally difficult owing to its nonconcave objective function and nonconvex constraints. We propose new inner approximations of these problems, to which the successive convex programming framework is then applied.
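As a concrete illustration of the successive convex approximation idea the abstract relies on, the sketch below runs SCA on a toy two-link power-control problem (gains, noise power, and power budget are assumed values; the dissertation's actual minorants/majorants and network models are more elaborate):

```python
import cvxpy as cp
import numpy as np

# SCA for sum-rate power control: each rate is a difference of concave logs;
# linearize the subtracted log at the current iterate to get a convex step.
G = np.array([[1.0, 0.1],
              [0.2, 1.0]])                 # toy channel gains g_ij
sigma, pmax = 0.1, 1.0
offdiag = G - np.diag(np.diag(G))
p_k = np.full(2, 0.5)                      # feasible starting powers

for _ in range(20):
    p = cp.Variable(2, nonneg=True)
    total = G @ p + sigma                  # signal + interference + noise
    interf = offdiag @ p + sigma           # interference + noise only
    interf_k = offdiag @ p_k + sigma
    lin = np.log(interf_k) + cp.multiply(interf - interf_k, 1.0 / interf_k)
    cp.Problem(cp.Maximize(cp.sum(cp.log(total) - lin)), [p <= pmax]).solve()
    p_k = p.value
print("SCA powers:", p_k)
```

Each iteration maximizes a concave lower bound that is tight at the current iterate, so the achieved sum rate is monotonically nondecreasing.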
Subjects/Keywords: Convex programming.; Mathematical optimization.; Algorithms.; Nonconvex programming.
APA (6th Edition):
Ho, H. M. T. (2016). Interference management in 5G cellular networks. (Thesis). University of Technology, Sydney. Retrieved from http://hdl.handle.net/10453/102703
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Ho, Huu Minh Tam. “Interference management in 5G cellular networks.” 2016. Thesis, University of Technology, Sydney. Accessed March 04, 2021.
http://hdl.handle.net/10453/102703.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Ho, Huu Minh Tam. “Interference management in 5G cellular networks.” 2016. Web. 04 Mar 2021.
Vancouver:
Ho HMT. Interference management in 5G cellular networks. [Internet] [Thesis]. University of Technology, Sydney; 2016. [cited 2021 Mar 04].
Available from: http://hdl.handle.net/10453/102703.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Ho HMT. Interference management in 5G cellular networks. [Thesis]. University of Technology, Sydney; 2016. Available from: http://hdl.handle.net/10453/102703
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Delft University of Technology
18.
Zhang, H.M. (author).
Distributed Convex Optimization: A Study on the Primal-Dual Method of Multipliers.
Degree: 2015, Delft University of Technology
URL: http://resolver.tudelft.nl/uuid:932db0bb-da4c-4ffe-892a-036d01a8071b
▼ The Primal-Dual Method of Multipliers (PDMM) is a new algorithm that solves convex optimization problems in a distributed manner. This study focuses on the convergence behavior of the PDMM. For a deeper understanding, the PDMM algorithm was applied to distributed averaging and distributed dictionary-learning problems, and the results were compared with those of other state-of-the-art algorithms. The experiments show that the PDMM algorithm not only has a fast convergence rate but is also robust against transmission failures in the network. Furthermore, on the basis of these experiments, the convergence rate of the PDMM was analyzed. Several approaches to proving a linear convergence rate were pursued, and as a result, linear convergence has been proven under certain conditions.
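PDMM's node-to-node update equations are not reproduced here; as a reference point, the sketch below implements the standard consensus-ADMM distributed-averaging baseline that such experiments are typically compared against (network size, data, and penalty are assumed toy values):

```python
import numpy as np

# Consensus ADMM for distributed averaging: node i holds a_i, all nodes
# agree on mean(a).  min sum_i 0.5*(x_i - a_i)^2  s.t.  x_i = z.
rng = np.random.default_rng(0)
a = rng.standard_normal(10)               # local measurements
rho = 1.0
u = np.zeros_like(a)                      # scaled dual variables
z = 0.0                                   # consensus value
for _ in range(50):
    x = (a + rho * (z - u)) / (1 + rho)   # local proximal steps
    z = np.mean(x + u)                    # consensus (averaging) step
    u = u + x - z                         # dual ascent
print(z, a.mean())                        # z converges to the true average
```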
MSc
Electrical Engineering
Electrical Engineering, Mathematics and Computer Science
Advisors/Committee Members: Zhang, G. (mentor), Heusdens, R. (mentor).
Subjects/Keywords: convex optimization; distributed signal processing; ADMM; PDMM
APA (6th Edition):
Zhang, H. M. (2015). Distributed Convex Optimization: A Study on the Primal-Dual Method of Multipliers. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:932db0bb-da4c-4ffe-892a-036d01a8071b
Chicago Manual of Style (16th Edition):
Zhang, H M (author). “Distributed Convex Optimization: A Study on the Primal-Dual Method of Multipliers.” 2015. Masters Thesis, Delft University of Technology. Accessed March 04, 2021.
http://resolver.tudelft.nl/uuid:932db0bb-da4c-4ffe-892a-036d01a8071b.
MLA Handbook (7th Edition):
Zhang, H M (author). “Distributed Convex Optimization: A Study on the Primal-Dual Method of Multipliers.” 2015. Web. 04 Mar 2021.
Vancouver:
Zhang HM. Distributed Convex Optimization: A Study on the Primal-Dual Method of Multipliers. [Internet] [Masters thesis]. Delft University of Technology; 2015. [cited 2021 Mar 04].
Available from: http://resolver.tudelft.nl/uuid:932db0bb-da4c-4ffe-892a-036d01a8071b.
Council of Science Editors:
Zhang HM. Distributed Convex Optimization: A Study on the Primal-Dual Method of Multipliers. [Masters Thesis]. Delft University of Technology; 2015. Available from: http://resolver.tudelft.nl/uuid:932db0bb-da4c-4ffe-892a-036d01a8071b

University of Minnesota
19.
Choi, Hyungjin.
Quantification of the Impact of Uncertainty in Power Systems using Convex Optimization.
Degree: PhD, Electrical Engineering, 2017, University of Minnesota
URL: http://hdl.handle.net/11299/190457
▼ Rampant integration of renewable resources (e.g., photovoltaic and wind-energy conversion systems) and of uncontrollable and elastic loads (e.g., plug-in hybrid electric vehicles) is rapidly transforming power systems. In this environment, an analytic method to quantify the impact of parametric and input uncertainty will be critical to ensuring the reliable operation of next-generation power systems. This task is analytically and computationally challenging since power-system dynamics are nonlinear in nature. In this thesis, we present analytic methods to quantify the impact of parametric and input uncertainties for two important applications in power systems: i) uncertainty propagation in the power-system differential-algebraic equation model and power flow, and ii) robust stability assessment of power-system dynamics. For the first topic, an optimization-based method is presented to estimate maximum and minimum bounds on state variables while acknowledging worst-case parametric and input uncertainties in the model. The approach leverages a second-order Taylor-series expansion of the states around a nominal (known) solution. Maximum and minimum bounds are then estimated via either a semidefinite relaxation of the resulting quadratically constrained quadratic program (QCQP) or the alternating direction method of multipliers (ADMM). For the second topic, an analytical method to quantify power-system stability margins while acknowledging uncertainty is presented within the framework of Lyapunov's direct method. It focuses on the algorithmic construction of Lyapunov functions and the estimation of the robust region of attraction with sum-of-squares optimization problems, which can be translated into semidefinite problems. For both topics, numerical case studies are presented for different test systems to demonstrate and validate the proposed methods.
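The first of the two estimation routes the abstract names (a semidefinite relaxation of a QCQP) looks, in its most stripped-down Shor form, like the sketch below (toy quadratic and box constraints assumed, not the thesis's power-system model):

```python
import cvxpy as cp
import numpy as np

# Shor SDP relaxation: bound max x'Qx over x in [-1,1]^n by lifting to
# Z = [[X, x], [x', 1]] >= 0 with diag(X) <= 1 (X stands in for xx').
n = 3
rng = np.random.default_rng(1)
Q = rng.standard_normal((n, n))
Q = (Q + Q.T) / 2
Z = cp.Variable((n + 1, n + 1), PSD=True)   # lifted variable
X = Z[:n, :n]
cons = [Z[n, n] == 1, cp.diag(X) <= 1]
prob = cp.Problem(cp.Maximize(cp.trace(Q @ X)), cons)
prob.solve()
print("upper bound on max x'Qx:", prob.value)
```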
Subjects/Keywords: Convex Optimization; Power Systems; Sensitivity; Stability; Uncertainty
APA (6th Edition):
Choi, H. (2017). Quantification of the Impact of Uncertainty in Power Systems using Convex Optimization. (Doctoral Dissertation). University of Minnesota. Retrieved from http://hdl.handle.net/11299/190457
Chicago Manual of Style (16th Edition):
Choi, Hyungjin. “Quantification of the Impact of Uncertainty in Power Systems using Convex Optimization.” 2017. Doctoral Dissertation, University of Minnesota. Accessed March 04, 2021.
http://hdl.handle.net/11299/190457.
MLA Handbook (7th Edition):
Choi, Hyungjin. “Quantification of the Impact of Uncertainty in Power Systems using Convex Optimization.” 2017. Web. 04 Mar 2021.
Vancouver:
Choi H. Quantification of the Impact of Uncertainty in Power Systems using Convex Optimization. [Internet] [Doctoral dissertation]. University of Minnesota; 2017. [cited 2021 Mar 04].
Available from: http://hdl.handle.net/11299/190457.
Council of Science Editors:
Choi H. Quantification of the Impact of Uncertainty in Power Systems using Convex Optimization. [Doctoral Dissertation]. University of Minnesota; 2017. Available from: http://hdl.handle.net/11299/190457

University of Waterloo
20.
Karimi, Mehdi.
Convex Optimization via Domain-Driven Barriers and Primal-Dual Interior-Point Methods.
Degree: 2017, University of Waterloo
URL: http://hdl.handle.net/10012/12209
▼ This thesis studies the theory and implementation of infeasible-start primal-dual interior-point methods for convex optimization problems. Convex optimization has applications in many fields of engineering and science such as data analysis, control theory, signal processing, relaxation and randomization, and robust optimization. In addition to strong and elegant theories, the potential for creating efficient and robust software has made convex optimization very popular. Primal-dual algorithms have yielded efficient solvers for convex optimization problems in conic form over symmetric cones (linear programming (LP), second-order cone programming (SOCP), and semidefinite programming (SDP)). However, many other highly demanded convex optimization problems lack comparable solvers. To close this gap, we have introduced a general optimization setup, called Domain-Driven, that covers many interesting classes of optimization. Domain-Driven means our techniques are applied directly to the given “good” formulation, without a forced reformulation into conic form. Moreover, this approach also naturally handles cone constraints and hence the conic form.
A problem is in the Domain-Driven setup if it can be formulated as minimizing a linear function over a convex set, where the convex set is equipped with an efficient self-concordant barrier whose Legendre-Fenchel conjugate is easy to evaluate. We show how general this setup is by providing several interesting classes of examples. LP, SOCP, and SDP are covered by the Domain-Driven setup. More generally, consider all convex cones with the property that both the cone and its dual admit efficiently computable self-concordant barriers. Then, our Domain-Driven setup can handle any conic optimization problem formulated using direct sums of these cones and their duals. We then show how to construct interesting convex sets as direct sums of the epigraphs of univariate convex functions. This construction, as special cases, covers problems such as geometric programming, p-norm optimization, and entropy programming, the solutions of which are in great demand in engineering and science. Another interesting class of convex sets for which optimization over it is contained in the Domain-Driven setup is the generalized epigraph of a matrix norm. This, as a special case, allows us to minimize the nuclear norm over a linear subspace, which has applications in machine learning and big data. The Domain-Driven setup handles combinations of all the above problems; for example, we can have a problem with LP and SDP constraints combined with constraints defined by univariate convex functions or by the epigraph of a matrix norm.
We review the literature on infeasible-start algorithms and discuss the pros and cons of different methods to show where our algorithms stand among them. This thesis also contains a chapter on several properties of self-concordant functions. Since we are dealing with general convex sets, many of these properties are used frequently in the design and analysis of our…
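One textbook ingredient behind any such interior-point method is the damped Newton step on a self-concordant barrier subproblem. The sketch below shows it for the simplest barrier, -sum(log x_i), with assumed toy data; the thesis's Domain-Driven machinery generalizes far beyond this:

```python
import numpy as np

# Damped Newton on  min_x  t*c'x - sum(log x_i),  x > 0 elementwise,
# with the classical self-concordant step length 1/(1 + Newton decrement).
def damped_newton_step(x, c, t):
    g = t * c - 1.0 / x                  # gradient
    h = 1.0 / x ** 2                     # diagonal Hessian of the barrier
    dx = -g / h                          # Newton direction
    lam = np.sqrt(np.sum(g * g / h))     # Newton decrement
    return x + dx / (1.0 + lam), lam

x, c, t = np.ones(3), np.array([1.0, 2.0, 0.5]), 10.0
for _ in range(30):
    x, lam = damped_newton_step(x, c, t)
print(x)                                 # approaches x_i = 1/(t*c_i)
```

The damped step length guarantees the iterate stays in the barrier's domain, which is exactly the property self-concordance buys.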
Subjects/Keywords: convex optimization; primal-dual interior-point methods
APA (6th Edition):
Karimi, M. (2017). Convex Optimization via Domain-Driven Barriers and Primal-Dual Interior-Point Methods. (Thesis). University of Waterloo. Retrieved from http://hdl.handle.net/10012/12209
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Karimi, Mehdi. “Convex Optimization via Domain-Driven Barriers and Primal-Dual Interior-Point Methods.” 2017. Thesis, University of Waterloo. Accessed March 04, 2021.
http://hdl.handle.net/10012/12209.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Karimi, Mehdi. “Convex Optimization via Domain-Driven Barriers and Primal-Dual Interior-Point Methods.” 2017. Web. 04 Mar 2021.
Vancouver:
Karimi M. Convex Optimization via Domain-Driven Barriers and Primal-Dual Interior-Point Methods. [Internet] [Thesis]. University of Waterloo; 2017. [cited 2021 Mar 04].
Available from: http://hdl.handle.net/10012/12209.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Karimi M. Convex Optimization via Domain-Driven Barriers and Primal-Dual Interior-Point Methods. [Thesis]. University of Waterloo; 2017. Available from: http://hdl.handle.net/10012/12209
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
21.
Umenberger, Jack.
Convex Identification of Stable Dynamical Systems.
Degree: 2017, University of Sydney
URL: http://hdl.handle.net/2123/17321
▼ This thesis concerns the scalable application of convex optimization to data-driven modeling of dynamical systems, termed system identification in the control community. Two problems commonly arising in system identification are model instability (e.g. unreliability of long-term, open-loop predictions), and nonconvexity of quality-of-fit criteria, such as simulation error (a.k.a. output error). To address these problems, this thesis presents convex parametrizations of stable dynamical systems, convex quality-of-fit criteria, and efficient algorithms to optimize the latter over the former. In particular, this thesis makes extensive use of Lagrangian relaxation, a technique for generating convex approximations to nonconvex optimization problems. Recently, Lagrangian relaxation has been used to approximate simulation error and guarantee nonlinear model stability via semidefinite programming (SDP); however, the resulting SDPs have large dimension, limiting their practical utility. The first contribution of this thesis is a custom interior point algorithm that exploits structure in the problem to significantly reduce computational complexity. The new algorithm enables empirical comparisons to established methods including Nonlinear ARX, in which superior generalization to new data is demonstrated. Equipped with this algorithmic machinery, the second contribution of this thesis is the incorporation of model stability constraints into the maximum likelihood framework. Specifically, Lagrangian relaxation is combined with the expectation maximization (EM) algorithm to derive tight bounds on the likelihood function that can be optimized over a convex parametrization of all stable linear dynamical systems. Two different formulations are presented, one of which gives higher fidelity bounds when disturbances (a.k.a. process noise) dominate measurement noise, and vice versa. Finally, identification of positive systems is considered. Such systems enjoy substantially simpler stability and performance analysis compared to the general linear time-invariant (LTI) case, and appear frequently in applications where physical constraints imply nonnegativity of the quantities of interest. Lagrangian relaxation is used to derive new convex parametrizations of stable positive systems and quality-of-fit criteria, and substantial improvements in accuracy of the identified models, compared to existing approaches based on weighted equation error, are demonstrated. Furthermore, the convex parametrizations of stable systems based on linear Lyapunov functions are shown to be amenable to distributed optimization, which is useful for identification of large-scale networked dynamical systems.
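The basic LMI underlying convex parametrizations of stable linear systems can be checked directly; the sketch below verifies Schur stability of an assumed toy system via the discrete-time Lyapunov inequality (the thesis's parametrizations and Lagrangian relaxations build well beyond this feasibility test):

```python
import cvxpy as cp
import numpy as np

# A is Schur stable iff there exists P > 0 with P - A'PA > 0.
A = np.array([[0.9, 0.5],
              [0.0, 0.8]])                    # toy system matrix
P = cp.Variable((2, 2), PSD=True)
eps = 1e-6
cons = [P - A.T @ P @ A >> eps * np.eye(2),
        P >> eps * np.eye(2)]
prob = cp.Problem(cp.Minimize(0), cons)
prob.solve()
print("certified Schur stable:", prob.status == "optimal")
```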
Subjects/Keywords: system identification; convex optimization; positive systems
APA (6th Edition):
Umenberger, J. (2017). Convex Identification of Stable Dynamical Systems. (Thesis). University of Sydney. Retrieved from http://hdl.handle.net/2123/17321
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Umenberger, Jack. “Convex Identification of Stable Dynamical Systems.” 2017. Thesis, University of Sydney. Accessed March 04, 2021.
http://hdl.handle.net/2123/17321.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Umenberger, Jack. “Convex Identification of Stable Dynamical Systems.” 2017. Web. 04 Mar 2021.
Vancouver:
Umenberger J. Convex Identification of Stable Dynamical Systems. [Internet] [Thesis]. University of Sydney; 2017. [cited 2021 Mar 04].
Available from: http://hdl.handle.net/2123/17321.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Umenberger J. Convex Identification of Stable Dynamical Systems. [Thesis]. University of Sydney; 2017. Available from: http://hdl.handle.net/2123/17321
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Delft University of Technology
22.
Zattoni Scroccaro, Pedro (author).
Online Convex Optimization with Predictions: Static and Dynamic Environments.
Degree: 2020, Delft University of Technology
URL: http://resolver.tudelft.nl/uuid:ce13b0da-fb0a-4e9f-b5a4-ef9b0dadf29b
▼ In this thesis, we study Online Convex Optimization algorithms that exploit predictive and/or dynamical information about a problem instance. These features are inspired by recent developments in the Online Mirror Descent literature. When the Player's performance is compared with the best fixed decision in hindsight, we show that it is possible to achieve constant regret bounds under perfect gradient predictions and optimal minimax bounds in the worst case, generalizing previous results from the literature. For dynamic environments, we propose a new algorithm and show that it achieves dynamic regret bounds that exploit both gradient predictions and knowledge about the dynamics of the action sequence against which the Player's performance is compared. We present results for both convex and strongly convex costs. Finally, we provide numerical experiments that corroborate our theoretical results.
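A minimal form of the predictive idea is optimistic online gradient descent: play using a gradient prediction, then correct with the observed gradient. The sketch below uses the previous gradient as the prediction and a toy quadratic loss (all values assumed; this is not the thesis's algorithm):

```python
import numpy as np

# Optimistic OGD over the unit ball:
#   x_t = Proj(y_t - eta*m_t)      (play with prediction m_t)
#   y_{t+1} = Proj(y_t - eta*g_t)  (update with observed gradient g_t)
def proj_ball(v, r=1.0):
    nrm = np.linalg.norm(v)
    return v if nrm <= r else v * (r / nrm)

d, eta, T = 5, 0.1, 200
target = np.ones(d) / np.sqrt(d)
y, g_prev = np.zeros(d), np.zeros(d)
for t in range(T):
    m = g_prev                           # predict this round's gradient
    x = proj_ball(y - eta * m)           # optimistic play
    g = 2 * (x - target)                 # gradient of f_t(x) = ||x - target||^2
    y = proj_ball(y - eta * g)           # bookkeeping update
    g_prev = g
print(np.linalg.norm(x - target))        # small when predictions are accurate
```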
Mechanical Engineering | Systems and Control
Advisors/Committee Members: Mohajerin Esfahani, P. (mentor), Grammatico, S. (graduation committee), Atasoy, B. (graduation committee), Sharifi Kolarijani, A. (graduation committee), Delft University of Technology (degree granting institution).
Subjects/Keywords: Online Convex Optimization; Prediction; Online Learning
APA (6th Edition):
Zattoni Scroccaro, P. (2020). Online Convex Optimization with Predictions: Static and Dynamic Environments. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:ce13b0da-fb0a-4e9f-b5a4-ef9b0dadf29b
Chicago Manual of Style (16th Edition):
Zattoni Scroccaro, Pedro (author). “Online Convex Optimization with Predictions: Static and Dynamic Environments.” 2020. Masters Thesis, Delft University of Technology. Accessed March 04, 2021.
http://resolver.tudelft.nl/uuid:ce13b0da-fb0a-4e9f-b5a4-ef9b0dadf29b.
MLA Handbook (7th Edition):
Zattoni Scroccaro, Pedro (author). “Online Convex Optimization with Predictions: Static and Dynamic Environments.” 2020. Web. 04 Mar 2021.
Vancouver:
Zattoni Scroccaro P. Online Convex Optimization with Predictions: Static and Dynamic Environments. [Internet] [Masters thesis]. Delft University of Technology; 2020. [cited 2021 Mar 04].
Available from: http://resolver.tudelft.nl/uuid:ce13b0da-fb0a-4e9f-b5a4-ef9b0dadf29b.
Council of Science Editors:
Zattoni Scroccaro P. Online Convex Optimization with Predictions: Static and Dynamic Environments. [Masters Thesis]. Delft University of Technology; 2020. Available from: http://resolver.tudelft.nl/uuid:ce13b0da-fb0a-4e9f-b5a4-ef9b0dadf29b

University of Waterloo
23.
Wang, Houze.
A Convergent Hierarchy of Certificates for Constrained Signomial Positivity.
Degree: 2020, University of Waterloo
URL: http://hdl.handle.net/10012/16361
▼ Optimization is at the heart of many engineering problems. Many optimization problems, however, are computationally intractable. One approach to tackling such intractability is to find a tractable problem whose solution, when found, approximates that of the original problem. Specifically, convex optimization problems are often efficiently solvable, and finding a convex formulation that approximates a nonconvex problem, known as convex relaxation, is an effective approach.
This work concerns a particular class of optimization problem, namely constrained signomial optimization. Based on the idea that optimization of a function is equivalent to verifying its positivity, we first study a certificate of signomial positivity over a constrained set, which finds a decomposition of the signomial into a sum of parts that are verifiably positive via convex constraints. However, the certificate only provides a sufficient condition for positivity. The main contribution of this work is to show that, by multiplying in additional, more complex functions, a larger subset of the signomials that are positive over a compact convex set, and eventually all of them, may be certified by the above method. The result is analogous to classic Positivstellensatz results from algebraic geometry, which certify polynomial positivity by finding a representation in terms of sum-of-squares polynomials.
The result provides a convergent hierarchy of certificates for signomial positivity over a constrained set that is increasingly complete. The hierarchy of certificates in turn gives a convex relaxation algorithm that computes lower bounds for constrained signomial optimization problems that grow increasingly tight at the cost of additional computational complexity. At some finite level of the hierarchy, we obtain the optimal solution.
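For a single negative term, certificates of this kind reduce to a relative-entropy feasibility problem (the AM/GM-based condition from the SAGE literature). The sketch below certifies a toy signomial of my own choosing, not one from the thesis:

```python
import cvxpy as cp
import numpy as np

# Certify f(x) = e^{0x} + e^{2x} - 1.9*e^{x} > 0 via the AGE condition:
# exists nu >= 0 with  sum_i nu_i*(a_i - a0) = 0  and
#                      sum_i nu_i*log(nu_i/(e*c_i)) <= c0.
a = np.array([0.0, 2.0])      # exponents of the positive terms
a0 = 1.0                      # exponent of the negative term
c = np.array([1.0, 1.0])      # positive coefficients
c0 = -1.9                     # coefficient of the negative term
nu = cp.Variable(2, nonneg=True)
obj = cp.Minimize(cp.sum(cp.rel_entr(nu, np.e * c)))  # rel_entr(x,y)=x*log(x/y)
prob = cp.Problem(obj, [(a - a0) @ nu == 0])
prob.solve()
print("certified positive:", prob.value <= c0)        # True for this toy f
```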
Subjects/Keywords: Convex Optimization; Signomial Programming; Algebraic Geometry
APA (6th Edition):
Wang, H. (2020). A Convergent Hierarchy of Certificates for Constrained Signomial Positivity. (Thesis). University of Waterloo. Retrieved from http://hdl.handle.net/10012/16361
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Wang, Houze. “A Convergent Hierarchy of Certificates for Constrained Signomial Positivity.” 2020. Thesis, University of Waterloo. Accessed March 04, 2021.
http://hdl.handle.net/10012/16361.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Wang, Houze. “A Convergent Hierarchy of Certificates for Constrained Signomial Positivity.” 2020. Web. 04 Mar 2021.
Vancouver:
Wang H. A Convergent Hierarchy of Certificates for Constrained Signomial Positivity. [Internet] [Thesis]. University of Waterloo; 2020. [cited 2021 Mar 04].
Available from: http://hdl.handle.net/10012/16361.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Wang H. A Convergent Hierarchy of Certificates for Constrained Signomial Positivity. [Thesis]. University of Waterloo; 2020. Available from: http://hdl.handle.net/10012/16361
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Portland State University
24.
Tran, Tuyen Dang Thanh.
Convex and Nonconvex Optimization Techniques for Multifacility Location and Clustering.
Degree: PhD, Mathematics and Statistics, 2020, Portland State University
URL: https://pdxscholar.library.pdx.edu/open_access_etds/5482
▼ This thesis contains contributions in two main areas: calculus rules for generalized differentiation and optimization methods for solving nonsmooth nonconvex problems, with applications to multifacility location and clustering. A variational geometric approach is used for developing calculus rules for subgradients and Fenchel conjugates of convex functions that are not necessarily differentiable in locally convex topological and Banach spaces. These calculus rules are useful for further applications to nonsmooth optimization from both theoretical and numerical aspects. Next, we consider optimization methods for solving nonsmooth optimization problems in which the objective functions are not necessarily convex. We particularly focus on the class of functions representable as differences of convex functions. This class of functions is broad enough to cover many problems in facility location and clustering, while the generalized differentiation tools from convex analysis can still be applied. We develop algorithms for solving a number of multifacility location and clustering problems and implement these algorithms in MATLAB. The methods used throughout this thesis involve DC programming, Nesterov's smoothing technique, and the DCA, a numerical algorithm for minimizing differences of convex functions, to cope with the nonsmoothness and nonconvexity.
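The DCA template the abstract mentions is short enough to state in full: at each step, linearize the subtracted convex function and minimize the remaining convex model. The toy instance below (my assumption, not one of the thesis's location or clustering models) has a closed-form subproblem:

```python
import numpy as np

# DCA for f(x) = g(x) - h(x) with g(x) = ||x - a||^2 and h(x) = lam*||x||_1:
#   y_k in subdifferential of h at x_k;  x_{k+1} = argmin_x g(x) - y_k'x.
a = np.array([0.3, -2.0])
lam = 1.0
x = np.zeros(2)
for _ in range(50):
    y = lam * np.sign(x)      # a subgradient of h at x
    x = a + y / 2.0           # closed-form argmin of ||x - a||^2 - y'x
print(x)                      # DCA iterates decrease f monotonically
```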
Advisors/Committee Members: Mau Nam Nguyen.
Subjects/Keywords: Convex domains; Mathematical optimization; Calculus; Mathematics
APA (6th Edition):
Tran, T. D. T. (2020). Convex and Nonconvex Optimization Techniques for Multifacility Location and Clustering. (Doctoral Dissertation). Portland State University. Retrieved from https://pdxscholar.library.pdx.edu/open_access_etds/5482
Chicago Manual of Style (16th Edition):
Tran, Tuyen Dang Thanh. “Convex and Nonconvex Optimization Techniques for Multifacility Location and Clustering.” 2020. Doctoral Dissertation, Portland State University. Accessed March 04, 2021.
https://pdxscholar.library.pdx.edu/open_access_etds/5482.
MLA Handbook (7th Edition):
Tran, Tuyen Dang Thanh. “Convex and Nonconvex Optimization Techniques for Multifacility Location and Clustering.” 2020. Web. 04 Mar 2021.
Vancouver:
Tran TDT. Convex and Nonconvex Optimization Techniques for Multifacility Location and Clustering. [Internet] [Doctoral dissertation]. Portland State University; 2020. [cited 2021 Mar 04].
Available from: https://pdxscholar.library.pdx.edu/open_access_etds/5482.
Council of Science Editors:
Tran TDT. Convex and Nonconvex Optimization Techniques for Multifacility Location and Clustering. [Doctoral Dissertation]. Portland State University; 2020. Available from: https://pdxscholar.library.pdx.edu/open_access_etds/5482
25.
Linhares Rodrigues, Andre.
Approximation Algorithms for Distributionally Robust Stochastic Optimization.
Degree: 2019, University of Waterloo
URL: http://hdl.handle.net/10012/14639
▼ Two-stage stochastic optimization is a widely used framework for modeling uncertainty, where we have a probability distribution over possible realizations of the data, called scenarios, and decisions are taken in two stages: we take first-stage actions knowing only the underlying distribution and before a scenario is realized, and may take additional second-stage recourse actions after a scenario is realized. The goal is typically to minimize the total expected cost. A common criticism levied at this model is that the underlying probability distribution is itself often imprecise. To address this, an approach that is quite versatile and has gained popularity in the stochastic-optimization literature is the two-stage distributionally robust stochastic model: given a collection D of probability distributions, our goal now is to minimize the maximum expected total cost with respect to a distribution in D.
However, there has been almost no prior work on developing approximation algorithms for distributionally robust problems where the underlying scenario collection is discrete, as is the case with discrete-optimization problems. We provide frameworks for designing approximation algorithms in such settings when the collection D is a ball around a central distribution, defined relative to two notions of distance between probability distributions: Wasserstein metrics (which include the L_1 metric) and the L_infinity metric. Our frameworks yield efficient algorithms even in settings with an exponential number of scenarios, where the central distribution may only be accessed via a sampling oracle.
For distributionally robust optimization under a Wasserstein ball, we first show that one can utilize the sample average approximation (SAA) method (solve the distributionally robust problem with an empirical estimate of the central distribution) to reduce the problem to the case where the central distribution has a polynomial-size support, and is represented explicitly. This follows because we argue that a distributionally robust problem can be reduced in a novel way to a standard two-stage stochastic problem with bounded inflation factor, which enables one to use the SAA machinery developed for two-stage stochastic problems. Complementing this, we show how to approximately solve a fractional relaxation of the SAA problem (i.e., the distributionally robust problem obtained by replacing the original central distribution with its empirical estimate). Unlike in two-stage {stochastic, robust} optimization with polynomially many scenarios, this turns out to be quite challenging. We utilize a variant of the ellipsoid method for convex optimization in conjunction with several new ideas to show that the SAA problem can be approximately solved provided that we have an (approximation) algorithm for a certain max-min problem that is akin to, and generalizes, the k-max-min problem (find the worst-case scenario consisting of at most k elements) encountered in two-stage robust optimization. We obtain such an algorithm for various…
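The SAA step itself is simple to state: replace the expectation with an empirical average over sampled scenarios and solve the resulting convex program. The sketch below does this for a toy newsvendor-style cost (costs and demand distribution are my assumptions); the distributionally robust variant would take a worst case over distributions in a ball around the empirical one:

```python
import cvxpy as cp
import numpy as np

# Sample average approximation of  min_x E[h(x, xi)].
rng = np.random.default_rng(0)
xi = rng.gamma(2.0, 1.0, size=200)          # sampled demand scenarios
x = cp.Variable(nonneg=True)                # first-stage decision
scenario_cost = cp.maximum(3.0 * (x - xi),  # overage cost per scenario
                           5.0 * (xi - x))  # underage cost per scenario
prob = cp.Problem(cp.Minimize(cp.sum(scenario_cost) / xi.size))
prob.solve()
print("SAA decision:", x.value)
```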
Subjects/Keywords: approximation algorithms; stochastic optimization; discrete optimization; convex optimization
APA (6th Edition):
Linhares Rodrigues, A. (2019). Approximation Algorithms for Distributionally Robust Stochastic Optimization. (Thesis). University of Waterloo. Retrieved from http://hdl.handle.net/10012/14639
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Linhares Rodrigues, Andre. “Approximation Algorithms for Distributionally Robust Stochastic Optimization.” 2019. Thesis, University of Waterloo. Accessed March 04, 2021.
http://hdl.handle.net/10012/14639.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Linhares Rodrigues, Andre. “Approximation Algorithms for Distributionally Robust Stochastic Optimization.” 2019. Web. 04 Mar 2021.
Vancouver:
Linhares Rodrigues A. Approximation Algorithms for Distributionally Robust Stochastic Optimization. [Internet] [Thesis]. University of Waterloo; 2019. [cited 2021 Mar 04].
Available from: http://hdl.handle.net/10012/14639.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Linhares Rodrigues A. Approximation Algorithms for Distributionally Robust Stochastic Optimization. [Thesis]. University of Waterloo; 2019. Available from: http://hdl.handle.net/10012/14639
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Loughborough University
26.
Rossetti, Gaia.
Mathematical optimization techniques for cognitive radar networks.
Degree: PhD, 2018, Loughborough University
URL: http://hdl.handle.net/2134/33419
▼ This thesis discusses mathematical optimization techniques for waveform design in cognitive radars. These techniques have been designed with an increasing level of sophistication, starting from a bistatic model (i.e. two transmitters and a single receiver) and ending with a cognitive network (i.e. multiple transmitting and multiple receiving radars). The environment under investigation always features strong signal-dependent clutter and noise. All algorithms are based on an iterative waveform-filter optimization. The waveform optimization is based on convex optimization techniques and the exploitation of initial radar waveforms characterized by desired auto- and cross-correlation properties. Finally, robust optimization techniques are introduced to account for the assumptions made by cognitive radars on certain second-order statistics, such as the covariance matrix of the clutter. More specifically, initial optimization techniques were proposed for the case of bistatic radars. By maximizing the signal to interference and noise ratio (SINR) under certain constraints on the transmitted signals, it was possible to iteratively optimize both the orthogonal transmission waveforms and the receiver filter. Subsequently, the above work was extended to a convex optimization framework for waveform design in bistatic radars where both radars transmit and receive to detect targets. The method exploited prior knowledge of the environment to maximize the accumulated target return signal power while keeping the disturbance power at unity at both radar receivers. The thesis further proposes convex-optimization-based waveform designs for multiple-input multiple-output (MIMO) cognitive radars. All radars within the system are able to both transmit and receive signals for detecting targets. The proposed model investigates two complementary optimization techniques. The first aims at optimizing the SINR of a specific radar while keeping the SINR of the remaining radars at desired levels. The second optimizes the SINR of all radars using a max-min optimization criterion. To account for possible mismatches between actual parameters and estimated ones, this thesis includes robust optimization techniques. Initially, the multistatic, signal-dependent model was tested against existing worst-case and probabilistic methods. These methods appeared to be overly conservative and too generic for the considered signal-dependent clutter scenario. Therefore a new approach was derived in which uncertainty was assumed directly on the radar cross-section and Doppler parameters of the clutter. Approximations based on Taylor series were invoked to make the optimization problem convex and subsequently determine robust waveforms with specific SINR outage constraints. Finally, this thesis introduces robust optimization techniques for through-the-wall radars. These are also cognitive but rely on different optimization techniques than the ones previously discussed. By noticing the similarities…
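The receiver-filter half of one waveform-filter iteration has a classical closed form. The sketch below computes a minimum-variance (MVDR-style) filter for a fixed waveform under an assumed disturbance covariance; the thesis's iterative designs alternate steps of this flavor with convex waveform updates:

```python
import numpy as np

# Minimize w^H R w subject to w^H s = 1:  w = R^{-1}s / (s^H R^{-1} s).
rng = np.random.default_rng(0)
N = 8
s = np.exp(1j * np.pi * 0.3 * np.arange(N))    # fixed waveform/steering vector
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
R = A @ A.conj().T + np.eye(N)                 # stand-in disturbance covariance
w = np.linalg.solve(R, s)
w = w / (s.conj() @ w)
sinr = np.abs(w.conj() @ s) ** 2 / np.real(w.conj() @ R @ w)
print("output SINR:", sinr)
```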
Subjects/Keywords: 621.3848; Waveform optimization; Convex optimization; Robust optimization; Cognitive radars
APA (6th Edition):
Rossetti, G. (2018). Mathematical optimization techniques for cognitive radar networks. (Doctoral Dissertation). Loughborough University. Retrieved from http://hdl.handle.net/2134/33419
Chicago Manual of Style (16th Edition):
Rossetti, Gaia. “Mathematical optimization techniques for cognitive radar networks.” 2018. Doctoral Dissertation, Loughborough University. Accessed March 04, 2021.
http://hdl.handle.net/2134/33419.
MLA Handbook (7th Edition):
Rossetti, Gaia. “Mathematical optimization techniques for cognitive radar networks.” 2018. Web. 04 Mar 2021.
Vancouver:
Rossetti G. Mathematical optimization techniques for cognitive radar networks. [Internet] [Doctoral dissertation]. Loughborough University; 2018. [cited 2021 Mar 04].
Available from: http://hdl.handle.net/2134/33419.
Council of Science Editors:
Rossetti G. Mathematical optimization techniques for cognitive radar networks. [Doctoral Dissertation]. Loughborough University; 2018. Available from: http://hdl.handle.net/2134/33419

Penn State University
27.
Jalilzadeh, Afrooz.
Variance-reduced First-Order Methods for Convex Stochastic Optimization and Monotone Stochastic Variational Inequality Problems.
Degree: 2020, Penn State University
URL: https://submit-etda.libraries.psu.edu/catalog/17968azj5286
▼ Optimization problems with expectation-valued objectives are afflicted by a difficulty in that the expectation of neither the objective nor the gradient can be evaluated in closed form. A popular Monte-Carlo sampling approach to resolve this issue is stochastic approximation (SA). Traditional versions of such schemes are generally characterized by a relatively poor convergence rate compared to their deterministic counterparts, a consequence of using noise-afflicted samples of the gradient and diminishing step lengths. This significantly affects practical behavior and limits the ability to resolve stochastic optimization and variational problems. Motivated by such shortcomings, we focus on the development of variance-reduced methods for stochastic optimization and variational inequality problems that achieve deterministic rates of convergence together with near-optimal sample-complexity bounds. The dissertation is partitioned into three parts.
In the first part of the dissertation, we consider two distinct avenues that utilize smoothing, acceleration, and variance-reduction techniques. Both avenues display deterministic rates of convergence with near-optimal sample-complexities. When the problem is nonsmooth and strongly convex, traditional stochastic subgradient (SSG) schemes often display poor behavior, arising in part from noisy subgradients and diminishing steplengths. Instead, we apply a variable sample-size accelerated proximal scheme (VS-APM) to the Moreau envelope of the objective function; we term such a scheme (mVS-APM). The method displays linear convergence in inexact gradient steps, each of which requires utilizing an inner (SSG) scheme. Specifically, (mVS-APM) achieves an optimal oracle complexity in (SSG) steps. Under a weaker assumption of suitable state-dependent bounds on subgradients, an unaccelerated variant (mVS-PM) displays linear convergence and optimal oracle complexity. For merely convex problems, smooth VS-APM (sVS-APM) produces sequences for which the expected sub-optimality diminishes at the rate of O(1/k) with an optimal oracle complexity. Our findings allow for solving stochastic optimization problems in orders of magnitude less time when sampling is cheap.
In the second part of this dissertation, we develop a seamless framework that combines smoothing, regularization, and variance reduction within a stochastic quasi-Newton (SQN) framework, supported by a regularized smooth limited-memory BFGS (rsL-BFGS) scheme. This leads to both new and faster rates of convergence, and the resulting scheme is endowed with the ability to handle nonsmooth and constrained problems, which has traditionally been a shortcoming of quasi-Newton schemes. In strongly convex regimes with state-dependent noise, the proposed variable sample-size stochastic quasi-Newton (VS-SQN) scheme admits a non-asymptotic linear rate of convergence. In nonsmooth (but smoothable) regimes, using Moreau smoothing retains the linear convergence rate, while using more general smoothing leads to a deterioration of…
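The variable sample-size idea is easy to illustrate: grow the minibatch geometrically so that the gradient noise decays fast enough for a constant step size, recovering a deterministic-style rate. The sketch below uses a toy strongly convex objective and assumed constants, not the thesis's (mVS-APM)/(VS-SQN) schemes:

```python
import numpy as np

# Variable sample-size gradient method on f(x) = 0.5*||x - mu||^2 with
# noisy gradients; the batch size N grows geometrically each iteration.
rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0])                  # unknown minimizer
x, eta, N = np.zeros(2), 0.5, 1
for k in range(25):
    noise = rng.standard_normal((N, 2)).mean(axis=0)  # averaged sample noise
    g = (x - mu) + noise                              # minibatch gradient
    x = x - eta * g
    N = int(np.ceil(1.5 * N))                         # geometric batch growth
print(np.linalg.norm(x - mu))               # error decays roughly linearly
```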
Advisors/Committee Members: Vinayak V Shanbhag, Dissertation Advisor/Co-Advisor, Vinayak V Shanbhag, Committee Chair/Co-Chair, Necdet S Aybat, Committee Member, Eunhye Song, Committee Member, Xingyuan Fang, Outside Member, Ling Rothrock, Program Head/Chair.
Subjects/Keywords: Stochastic Optimization; Convex Optimization; Stochastic Approximation; Nonsmooth Optimization; Stochastic Variational Inequality
APA (6th Edition):
Jalilzadeh, A. (2020). Variance-reduced First-Order Methods for Convex Stochastic Optimization and Monotone Stochastic Variational Inequality Problems. (Thesis). Penn State University. Retrieved from https://submit-etda.libraries.psu.edu/catalog/17968azj5286
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Jalilzadeh, Afrooz. “Variance-reduced First-Order Methods for Convex Stochastic Optimization and Monotone Stochastic Variational Inequality Problems.” 2020. Thesis, Penn State University. Accessed March 04, 2021.
https://submit-etda.libraries.psu.edu/catalog/17968azj5286.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Jalilzadeh, Afrooz. “Variance-reduced First-Order Methods for Convex Stochastic Optimization and Monotone Stochastic Variational Inequality Problems.” 2020. Web. 04 Mar 2021.
Vancouver:
Jalilzadeh A. Variance-reduced First-Order Methods for Convex Stochastic Optimization and Monotone Stochastic Variational Inequality Problems. [Internet] [Thesis]. Penn State University; 2020. [cited 2021 Mar 04].
Available from: https://submit-etda.libraries.psu.edu/catalog/17968azj5286.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Jalilzadeh A. Variance-reduced First-Order Methods for Convex Stochastic Optimization and Monotone Stochastic Variational Inequality Problems. [Thesis]. Penn State University; 2020. Available from: https://submit-etda.libraries.psu.edu/catalog/17968azj5286
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Texas A&M University
28.
Fan, Siqi.
Learning Gaussian Latent Graphical Models Via Partial Convex Optimization.
Degree: MS, Electrical Engineering, 2019, Texas A&M University
URL: http://hdl.handle.net/1969.1/188729
▼ Latent Gaussian graphical models are very useful in probabilistic modeling for measuring the statistical relationships between different variables and presenting them in the form of a graph. These models are applied in a variety of domains, such as economics, social network modeling, and the natural sciences. However, traditional ways of learning latent Gaussian models have weaknesses. For example, algorithms for latent tree graphical models usually cannot handle a complex model, because they make strict presumptions about graph structure, and those presumptions fit much real data poorly. Other approaches solve the model via convex optimization of the maximum likelihood estimator (MLE), but the computation of that convex optimization grows exponentially as the number of variables increases. Thus, we propose a fast, local algorithm that combines convex optimization with latent tree graphical models to save substantial computation while also handling complex models.
This algorithm first applies Chow-Liu Recursive Grouping to find the skeleton of the final graph and learn the hidden variables in the tree-like part of the graph. Then, for the part of the graph where variables have complex relationships with each other, we propose a local convex optimization to compute the structure. After combining the structures of the two parts, the algorithm generates a result that performs better than latent tree graphical models as measured by log-likelihood, with much less computation than traditional convex optimization.
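The Chow-Liu half of the procedure is compact enough to sketch: compute pairwise mutual information and take a maximum-weight spanning tree. The code below does this for Gaussian data (toy samples assumed; the thesis's CL Recursive Grouping additionally introduces hidden variables, and the local convex optimization step is not shown):

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

# Chow-Liu skeleton for Gaussian data: pairwise MI from correlations,
# then a maximum-weight spanning tree (negate weights for the min-tree call).
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 4)) @ rng.standard_normal((4, 4))  # samples
C = np.corrcoef(X, rowvar=False)
MI = -0.5 * np.log(1.0 - np.clip(C, -0.999, 0.999) ** 2)  # Gaussian pairwise MI
np.fill_diagonal(MI, 0.0)
tree = minimum_spanning_tree(-MI)
print((tree.toarray() != 0).astype(int))    # adjacency of the learned skeleton
```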
Advisors/Committee Members: Tian, Chao (advisor), Chaspari, Theodora (committee member), Liu, Tie (committee member), Yoon, Byung-Jun (committee member).
Subjects/Keywords: Gaussian latent graphical models; Convex optimization; Chow-Liu algorithm; CL Recursive Grouping; Partial convex optimization.
APA (6th Edition):
Fan, S. (2019). Learning Gaussian Latent Graphical Models Via Partial Convex Optimization. (Masters Thesis). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/188729
Chicago Manual of Style (16th Edition):
Fan, Siqi. “Learning Gaussian Latent Graphical Models Via Partial Convex Optimization.” 2019. Masters Thesis, Texas A&M University. Accessed March 04, 2021.
http://hdl.handle.net/1969.1/188729.
MLA Handbook (7th Edition):
Fan, Siqi. “Learning Gaussian Latent Graphical Models Via Partial Convex Optimization.” 2019. Web. 04 Mar 2021.
Vancouver:
Fan S. Learning Gaussian Latent Graphical Models Via Partial Convex Optimization. [Internet] [Masters thesis]. Texas A&M University; 2019. [cited 2021 Mar 04].
Available from: http://hdl.handle.net/1969.1/188729.
Council of Science Editors:
Fan S. Learning Gaussian Latent Graphical Models Via Partial Convex Optimization. [Masters Thesis]. Texas A&M University; 2019. Available from: http://hdl.handle.net/1969.1/188729

University of Waterloo
29.
Sremac, Stefan.
Error Bounds and Singularity Degree in Semidefinite Programming.
Degree: 2020, University of Waterloo
URL: http://hdl.handle.net/10012/15583
▼ An important process in optimization is to determine the quality of a proposed solution. This usually entails calculation of the distance of a proposed solution to the optimal set and is referred to as forward error. Since the optimal set is not known, we generally view forward error as intractable. An alternative to forward error is to measure the violation in the constraints or optimality conditions. This is referred to as backward error and it is generally easy to compute. A major issue in optimization occurs when a proposed solution has small backward error, i.e., looks good to the user, but has large forward error, i.e., is far from the optimal set.
In 2001, Jos Sturm developed a remarkable upper bound on forward error for spectrahedra (optimal sets of semidefinite programs) in terms of backward error. His bound creates a hierarchy among spectrahedra that is based on singularity degree, an integer between 0 and n-1, derived from facial reduction. For problems with small singularity degree, forward error is similar to backward error, but this may not be true for problems with large singularity degree.
In this thesis we provide a method to obtain numerical lower bounds on forward error, thereby complementing the bounds of Sturm. While the bounds of Sturm identify good convergence, our bounds allow us to detect poor convergence. Our approach may also be used to provide lower bounds on singularity degree, a measure that is difficult to compute in some instances. We show that large singularity degree leads to some undesirable convergence properties for a specific family of central paths.
We apply our results in a theoretical sense to some Toeplitz matrix completion problems and in a numerical sense to several test spectrahedra.
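A two-by-two toy example (mine, not from the thesis) shows the forward/backward gap that singularity degree governs: on the spectrahedron F = {X psd : X[0,0] = 0}, feasibility also forces X[0,1] = 0, and a point with residual eps can sit about sqrt(eps) away from F, the Holder exponent 1/2 matching singularity degree 1:

```python
import numpy as np

# Backward error (constraint residual) vs forward error (distance to F).
X_feas = np.diag([0.0, 1.0])                 # nearest feasible point
for eps in [1e-2, 1e-4, 1e-6]:
    X = np.array([[eps, np.sqrt(eps)],
                  [np.sqrt(eps), 1.0]])      # psd (det = 0), residual eps
    backward = X[0, 0]
    forward = np.linalg.norm(X - X_feas)
    print(f"eps={eps:.0e}  backward={backward:.1e}  forward={forward:.1e}")
```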
Subjects/Keywords: semidefinite programming; optimization; error bounds; singularity degree; mathematical programming; convex optimization; conic optimization; Semidefinite programming; Combinatorial optimization; Programming (Mathematics); Convex programming
APA (6th Edition):
Sremac, S. (2020). Error Bounds and Singularity Degree in Semidefinite Programming. (Thesis). University of Waterloo. Retrieved from http://hdl.handle.net/10012/15583

Penn State University
30.
Kang, Bosung.
Robust Covariance Matrix Estimation for Radar Space-Time Adaptive Processing (STAP).
Degree: 2015, Penn State University
URL: https://submit-etda.libraries.psu.edu/catalog/26539
Estimating the disturbance or clutter covariance is a centrally important problem in radar space-time adaptive processing (STAP), since the disturbance or interference covariance matrix plays a central role in radar target detection in the presence of clutter, noise, and jammers. In practice, the disturbance covariance matrix must be inferred from training sample observations. Traditional maximum likelihood (ML) estimators are effective when homogeneous (target-free) training data are abundant, but lead to poor estimates, degraded false alarm rates, and detection loss in the regime of limited training. However, a large number of homogeneous training samples is generally not available, because of the difficulty of guaranteeing target-free disturbance observations, practical limitations imposed by spatio-temporal nonstationarity, and system considerations. The problem has been exacerbated by recent advances that have led to more antenna elements (J) and higher temporal resolution (P time epochs), resulting in a large dimension N = JP.
In this dissertation, we address the aforementioned challenges by incorporating physically inspired constraints into ML estimation. While adding constraints is beneficial for achieving satisfactory performance in the practical regime of limited training, it leads to a challenging problem. Unlike unconstrained estimators, the vast majority of constrained radar STAP estimators are iterative and numerically expensive, which prohibits practical deployment. We focus on breaking this classical trade-off between computational tractability and desirable performance measures, particularly in training-starved regimes. In particular, we exploit both the structure of the disturbance covariance and, importantly, knowledge of the clutter rank to yield a new rank-constrained maximum likelihood (RCML) estimator of the clutter/disturbance covariance. We demonstrate that the rank-constrained estimation problem can in fact be cast as a tractable convex optimization problem, and we derive closed-form expressions for the estimated covariance matrix. In addition, we derive a new covariance estimator for STAP that jointly enforces a Toeplitz structure and a rank constraint on the clutter component. Past work has shown that in the regime of low training, even handling each constraint individually is hard, and techniques often resort to slow numerical solutions. Our proposed solution leverages the rank-constrained ML estimator (RCML) of structured covariances to build a computationally friendly approximation that involves a cascade of two closed-form solutions. Performance analysis using the KASSPER data set (where the ground-truth covariance is made available) shows that the proposed RCML estimator vastly outperforms state-of-the-art alternatives even for low training, including the notoriously difficult regime of K <= N training samples, and in experiments reflecting real-world scenarios such as target detection performance and the case where some of the training samples are corrupted…
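To make the closed form concrete, here is a minimal sketch of the eigenvalue-shrinkage structure that the published RCML estimator takes. This is a hedged reconstruction from the abstract and the RCML literature, not code from the dissertation: the assumed model R = sigma2*I + R_c with rank(R_c) <= r, the known noise floor sigma2, and the toy sizes in the usage snippet are our assumptions, and the dissertation's exact normalization may differ.

```python
import numpy as np

def rcml_estimate(S, r, sigma2):
    """Sketch of a rank-constrained ML (RCML) covariance estimate.

    S      : N x N Hermitian sample covariance from K training snapshots
    r      : known clutter rank (e.g., from Brennan's rule) -- assumption
    sigma2 : known noise floor -- assumption

    Assumed model: R = sigma2 * I + R_c with R_c >= 0 and rank(R_c) <= r.
    The estimate keeps the sample eigenvectors and shrinks the eigenvalues
    in closed form: the top-r eigenvalues are floored at sigma2 and the
    remaining N - r eigenvalues are set to sigma2.
    """
    d, Phi = np.linalg.eigh(S)            # eigenvalues in ascending order
    d, Phi = d[::-1], Phi[:, ::-1]        # re-sort descending
    lam = np.full(d.shape, float(sigma2))
    lam[:r] = np.maximum(d[:r], sigma2)   # closed-form eigenvalue shrinkage
    return (Phi * lam) @ Phi.conj().T

# Toy usage: N = 16, clutter rank r = 4, K = 8 snapshots (K < N regime).
rng = np.random.default_rng(0)
N, r, K, sigma2 = 16, 4, 8, 1.0
U = rng.standard_normal((N, r)) + 1j * rng.standard_normal((N, r))
R_true = sigma2 * np.eye(N) + U @ U.conj().T          # rank-r clutter + noise
L = np.linalg.cholesky(R_true)
Y = L @ (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)
S = Y @ Y.conj().T / K                                # sample covariance (rank-deficient)
R_hat = rcml_estimate(S, r, sigma2)
print(np.linalg.norm(R_hat - R_true) / np.linalg.norm(R_true))
```

The usage snippet deliberately draws K = 8 < N = 16 snapshots, so the sample covariance S is rank-deficient; the shrinkage step repairs its zero eigenvalues by flooring them at the noise level, which is the training-starved regime the abstract targets.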
Advisors/Committee Members: Vishal Monga, Dissertation Advisor/Co-Advisor, David Miller, Committee Member, Constantino Manuel Lagoa, Committee Member, Jesse Louis Barlow, Committee Member.
Subjects/Keywords: convex optimization; STAP; radar signal processing; constrained optimization; detection and estimation
APA (6th Edition):
Kang, B. (2015). Robust Covariance Matrix Estimation for Radar Space-Time Adaptive Processing (STAP). (Thesis). Penn State University. Retrieved from https://submit-etda.libraries.psu.edu/catalog/26539