You searched for subject:(Low rank matrix correction).
Showing records 1 – 30 of 23375 total matches.

Delft University of Technology
1.
Swart, Wouter (author).
Methods for improving the computational performance of sequentially linear analysis.
Degree: 2018, Delft University of Technology
URL: http://resolver.tudelft.nl/uuid:dc35a7e3-beb7-4d46-88c6-36e6f980a597
The numerical simulation of brittle failure with nonlinear finite element analysis (NLFEA) remains a challenge due to robustness issues. These problems are attributed to the softening material behaviour and the iterative nature of the Newton-Raphson type methods used in NLFEA. However, robust numerical simulations become increasingly important, for example due to recent developments in Groningen. To address these issues, sequentially linear analysis (SLA) was developed, which exploits the fact that a linear analysis is inherently stable. By assuming a stepwise material degradation, the nonlinear response of a structure can be approximated with a sequence of linear analyses. Although this approach has been proven to be effective for several case studies, the numerical performance is still a problem that has to be solved. After every linear analysis, a single element is damaged, resulting in incremental damage. As a result, the system of equations only changes locally between these linear analyses. Traditional solution techniques do not exploit this property and calculate a matrix factorisation every linear analysis, resulting in high computational times per analysis step. Since SLA typically requires many linear analyses to obtain the desired structural response, this leads to unacceptable analysis times. The aim of this thesis is to improve the computational performance of SLA by developing numerical solution techniques which exploit the incremental approach of SLA. To this end, the following methods have been developed. A direct solution technique has been developed which is based on the Woodbury matrix identity. This identity allows for the numerically cheap computation of the inverse of a low-rank corrected matrix. In this approach, the expensive matrix factorisation does not have to be calculated every linear analysis step. Instead, the old factorisation can be reused along with some additional matrix and vector multiplications and solving a significantly smaller linear system of equations. An optimal strategy is derived to determine at which point a new factorisation should be calculated. An improved preconditioner for the conjugate gradient (CG) method has been developed. Instead of an incomplete factorisation, the complete factorisation is used as a preconditioner, which reduces the number of required CG iterations significantly. The point at which too many CG iterations are required and a new factorisation is necessary is determined using the same strategy as the first method. From numerical experiments it follows that both methods perform significantly better than the direct solution method, especially for large 3-dimensional problems. The best performance is achieved using the Woodbury matrix identity, resulting in the solver no longer being the dominant factor in SLA. Furthermore, significantly larger problems are now solvable in time frames in which previously only smaller problems could be solved.
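The Woodbury identity this abstract builds on is easy to prototype. The NumPy/SciPy sketch below (sizes, names and data are hypothetical, not taken from the thesis) solves (A + UVᵀ)x = b by reusing an LU factorisation of A, so each low-rank update costs only a few extra triangular solves plus an r × r system:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n, r = 500, 4                          # system size, rank of the correction
A = np.diag(np.arange(1.0, n + 1)) + 0.01 * rng.standard_normal((n, n))
U = rng.standard_normal((n, r))
V = rng.standard_normal((n, r))
b = rng.standard_normal(n)

factor = lu_factor(A)                  # the expensive factorisation, done once

def woodbury_solve(factor, U, V, b):
    """Solve (A + U V^T) x = b, reusing an existing LU factorisation of A."""
    Ainv_b = lu_solve(factor, b)
    Ainv_U = lu_solve(factor, U)                  # r extra triangular solves
    S = np.eye(U.shape[1]) + V.T @ Ainv_U         # small r-by-r capacitance matrix
    return Ainv_b - Ainv_U @ np.linalg.solve(S, V.T @ Ainv_b)

x = woodbury_solve(factor, U, V, b)
print(np.linalg.norm((A + U @ V.T) @ x - b))      # residual near machine precision
```

In the SLA setting described above, A would play the role of the last factorised stiffness matrix and UVᵀ the accumulated local damage updates; once r grows too large, a fresh factorisation pays off again.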
Applied Mathematics
Advisors/Committee Members: van Gijzen, Martin (mentor), Schreppers, G.M.A (graduation committee), Rots, Jan (graduation committee), van Horssen, Wim (graduation committee), Delft University of Technology (degree granting institution).
Subjects/Keywords: Finite Element Analysis; Preconditioning; Structural analysis; Direct method; Iterative method; Low-rank matrix correction
Record Details
Similar Records
Cite
Share »
Record Details
Similar Records
Cite
« Share





❌
APA ·
Chicago ·
MLA ·
Vancouver ·
CSE |
Export
to Zotero / EndNote / Reference
Manager
APA (6th Edition):
Swart, W. (2018). Methods for improving the computational performance of sequentially linear analysis. (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:dc35a7e3-beb7-4d46-88c6-36e6f980a597
Chicago Manual of Style (16th Edition):
Swart, Wouter. “Methods for improving the computational performance of sequentially linear analysis.” 2018. Masters Thesis, Delft University of Technology. Accessed January 20, 2021.
http://resolver.tudelft.nl/uuid:dc35a7e3-beb7-4d46-88c6-36e6f980a597.
MLA Handbook (7th Edition):
Swart, Wouter. “Methods for improving the computational performance of sequentially linear analysis.” 2018. Web. 20 Jan 2021.
Vancouver:
Swart W. Methods for improving the computational performance of sequentially linear analysis. [Internet] [Masters thesis]. Delft University of Technology; 2018. [cited 2021 Jan 20].
Available from: http://resolver.tudelft.nl/uuid:dc35a7e3-beb7-4d46-88c6-36e6f980a597.
Council of Science Editors:
Swart W. Methods for improving the computational performance of sequentially linear analysis. [Masters Thesis]. Delft University of Technology; 2018. Available from: http://resolver.tudelft.nl/uuid:dc35a7e3-beb7-4d46-88c6-36e6f980a597
2.
Biradar, Rakesh.
Analysis and Prediction of Community Structure Using Unsupervised Learning.
Degree: MS, 2016, Worcester Polytechnic Institute
URL: etd-012616-134431 ; https://digitalcommons.wpi.edu/etd-theses/138
In this thesis, we perform analysis and prediction for community structures in graphs using unsupervised learning. The methods we use require the data matrices to be of low rank, and such matrices appear quite often in real world problems across a broad range of domains. Such a modelling assumption is widely considered by classical algorithms such as principal component analysis (PCA), and the same assumption is often used to achieve dimensionality reduction. Dimension reduction, which is a classic method in unsupervised learning, can be leveraged in a wide array of problems, including prediction of the strength of connection between communities from unlabeled or partially labeled data. Accordingly, a low rank assumption addresses many real world problems, and it has been used in this thesis to predict the strength of connection between communities in Amazon product data. In particular, we have analyzed real world data across retail and cyber domains, with the focus being on the retail domain. Herein, our focus is on analyzing the strength of connection between the communities in Amazon product data, where each community represents a group of products, and we are given the strength of connection between the individual products but not between the product communities. We call the strength of connection between individual products first order data and the strength of connection between communities second order data. This usage is inspired by [1], where first order time series are used to compute second order covariance matrices, and such covariance matrices encode the strength of connection between the time series. In order to find the strength of connection between the communities, we define various metrics to measure this strength, and one of the goals of this thesis is to choose a good metric, which supports effective predictions. However, the main objective is to predict the strength of connection between most of the communities, given measurements of the strength of connection between only a few communities. To address this challenge, we use modern extensions of PCA such as eRPCA that can provide better predictions and can be computationally efficient for large problems. However, the current theory of eRPCA algorithms is not designed to treat problems where the initial data (such as the second order matrix of community strengths) is both low rank and sparse. Therefore, we analyze the performance of the eRPCA algorithm on such data and modify our approaches for the particular structure of Amazon product communities to perform the necessary predictions.
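For readers who want the low-rank modelling assumption in concrete form: by the Eckart-Young theorem, the best rank-r approximation in the Frobenius norm is the truncated SVD. A minimal NumPy sketch on a synthetic "community strength" style matrix (purely illustrative, not the thesis data):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic low-rank-plus-noise matrix standing in for community strengths.
B = rng.standard_normal((200, 5))
M = B @ B.T + 0.1 * rng.standard_normal((200, 200))

U, s, Vt = np.linalg.svd(M)
r = 5
M_r = U[:, :r] * s[:r] @ Vt[:r]        # best rank-r approximation (Eckart-Young)
print(np.linalg.norm(M - M_r) / np.linalg.norm(M))   # small relative error
```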
Advisors/Committee Members: Randy C. Paffenroth, Advisor.
Subjects/Keywords: eRPCA; Community Prediction; Low Rank; Sparse Matrix
APA (6th Edition):
Biradar, R. (2016). Analysis and Prediction of Community Structure Using Unsupervised Learning. (Thesis). Worcester Polytechnic Institute. Retrieved from etd-012616-134431 ; https://digitalcommons.wpi.edu/etd-theses/138
Chicago Manual of Style (16th Edition):
Biradar, Rakesh. “Analysis and Prediction of Community Structure Using Unsupervised Learning.” 2016. Thesis, Worcester Polytechnic Institute. Accessed January 20, 2021.
etd-012616-134431 ; https://digitalcommons.wpi.edu/etd-theses/138.
MLA Handbook (7th Edition):
Biradar, Rakesh. “Analysis and Prediction of Community Structure Using Unsupervised Learning.” 2016. Web. 20 Jan 2021.
Vancouver:
Biradar R. Analysis and Prediction of Community Structure Using Unsupervised Learning. [Internet] [Thesis]. Worcester Polytechnic Institute; 2016. [cited 2021 Jan 20].
Available from: etd-012616-134431 ; https://digitalcommons.wpi.edu/etd-theses/138.
Council of Science Editors:
Biradar R. Analysis and Prediction of Community Structure Using Unsupervised Learning. [Thesis]. Worcester Polytechnic Institute; 2016. Available from: etd-012616-134431 ; https://digitalcommons.wpi.edu/etd-theses/138

Temple University
3.
Shank, Stephen David.
Low-rank solution methods for large-scale linear matrix equations.
Degree: PhD, 2014, Temple University
URL: http://digital.library.temple.edu/u?/p245801coll10,273331
Mathematics
We consider low-rank solution methods for certain classes of large-scale linear matrix equations. Our aim is to adapt existing low-rank solution methods based on standard, extended and rational Krylov subspaces to solve equations which may be viewed as extensions of the classical Lyapunov and Sylvester equations. The first class of matrix equations that we consider are constrained Sylvester equations, which essentially consist of Sylvester's equation along with a constraint on the solution matrix. These therefore constitute a system of matrix equations. The second are generalized Lyapunov equations, which are Lyapunov equations with additional terms. Such equations arise as computational bottlenecks in model order reduction.
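As context for why low-rank methods work here: a Lyapunov equation A X + X Aᵀ + b bᵀ = 0 with a low-rank right-hand side typically has a solution whose singular values decay rapidly, so X is numerically low rank. A small dense SciPy check (sizes and data hypothetical; the thesis targets problems far too large for dense solves, using Krylov subspace methods instead):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(2)
n = 100
A = -np.eye(n) + 0.05 * rng.standard_normal((n, n))   # a stable coefficient matrix
b = rng.standard_normal((n, 1))
Q = b @ b.T                                           # rank-one right-hand side

# solve_continuous_lyapunov(a, q) solves A X + X A^H = Q,
# so pass -Q to solve A X + X A^T + Q = 0.
X = solve_continuous_lyapunov(A, -Q)
tol = 1e-8 * np.linalg.norm(X, 2)
print(np.linalg.matrix_rank(X, tol=tol))              # numerical rank far below n
```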
Temple University – Theses
Advisors/Committee Members: Szyld, Daniel; Simoncini, Valeria; Seibold, Benjamin; Yang, Wei-Shih.
Subjects/Keywords: Applied mathematics;
APA (6th Edition):
Shank, S. D. (2014). Low-rank solution methods for large-scale linear matrix equations. (Doctoral Dissertation). Temple University. Retrieved from http://digital.library.temple.edu/u?/p245801coll10,273331
Chicago Manual of Style (16th Edition):
Shank, Stephen David. “Low-rank solution methods for large-scale linear matrix equations.” 2014. Doctoral Dissertation, Temple University. Accessed January 20, 2021.
http://digital.library.temple.edu/u?/p245801coll10,273331.
MLA Handbook (7th Edition):
Shank, Stephen David. “Low-rank solution methods for large-scale linear matrix equations.” 2014. Web. 20 Jan 2021.
Vancouver:
Shank SD. Low-rank solution methods for large-scale linear matrix equations. [Internet] [Doctoral dissertation]. Temple University; 2014. [cited 2021 Jan 20].
Available from: http://digital.library.temple.edu/u?/p245801coll10,273331.
Council of Science Editors:
Shank SD. Low-rank solution methods for large-scale linear matrix equations. [Doctoral Dissertation]. Temple University; 2014. Available from: http://digital.library.temple.edu/u?/p245801coll10,273331

Colorado School of Mines
4.
Yang, Dehui.
Structured low-rank matrix recovery via optimization methods.
Degree: PhD, Electrical Engineering, 2018, Colorado School of Mines
URL: http://hdl.handle.net/11124/172154
From single-molecule microscopy in biology, to collaborative filtering in recommendation systems, to quantum state tomography in physics, many scientific discoveries involve solving ill-posed inverse problems, where the number of parameters to be estimated far exceeds the number of available measurements. To make these daunting problems solvable, low-dimensional geometric structures are often exploited, and regularizations that promote underlying structures are used for various inference tasks. To date, one of the most effective and plausible low-dimensional models for matrix data is the low-rank structure, which assumes that columns of the data matrix are correlated and lie in a low-dimensional subspace. This helps make certain matrix inverse problems well-posed. However, in some cases, standard low-rank structure is not powerful enough for modeling the underlying data generating process, and additional modeling efforts are desired. This is the main focus of this research. Motivated by applications from different disciplines in engineering and science, in this dissertation, we consider the recovery of three instances of structured matrices from limited measurement data, where additional structures naturally occur in the data matrices beyond simple low-rankness. The structured matrices that we consider include i) low-rank and spectrally sparse matrices in super-resolution imaging; ii) low-rank skew-symmetric matrices in pairwise comparisons; and iii) low-rank positive semidefinite matrices in physical and data sciences. Using optimization as a tool, we develop new regularizers and computationally efficient algorithmic frameworks to account for structured low-rankness in solving these ill-posed inverse problems. For some of the problems considered in this dissertation, theoretical analysis is also carried out for the proposed optimization programs. We show that, under mild conditions, the structured low-rank matrices can be recovered reliably from a minimal number of random measurements.
Advisors/Committee Members: Wakin, Michael B. (advisor), Vincent, Tyrone (committee member), Yang, Dejun (committee member), Tang, Gongguo (committee member).
Subjects/Keywords: matrix completion; models; super-resolution; modal analysis; low-rank; optimization
APA (6th Edition):
Yang, D. (2018). Structured low-rank matrix recovery via optimization methods. (Doctoral Dissertation). Colorado School of Mines. Retrieved from http://hdl.handle.net/11124/172154
Chicago Manual of Style (16th Edition):
Yang, Dehui. “Structured low-rank matrix recovery via optimization methods.” 2018. Doctoral Dissertation, Colorado School of Mines. Accessed January 20, 2021.
http://hdl.handle.net/11124/172154.
MLA Handbook (7th Edition):
Yang, Dehui. “Structured low-rank matrix recovery via optimization methods.” 2018. Web. 20 Jan 2021.
Vancouver:
Yang D. Structured low-rank matrix recovery via optimization methods. [Internet] [Doctoral dissertation]. Colorado School of Mines; 2018. [cited 2021 Jan 20].
Available from: http://hdl.handle.net/11124/172154.
Council of Science Editors:
Yang D. Structured low-rank matrix recovery via optimization methods. [Doctoral Dissertation]. Colorado School of Mines; 2018. Available from: http://hdl.handle.net/11124/172154

Georgia Tech
5.
Rangel Walteros, Pedro Andres.
A non-asymptotic study of low-rank estimation of smooth kernels on graphs.
Degree: PhD, Mathematics, 2014, Georgia Tech
URL: http://hdl.handle.net/1853/52988
This dissertation investigates the problem of estimating a kernel over a large graph based on a sample of noisy observations of linear measurements of the kernel. We are interested in solving this estimation problem in the case when the sample size is much smaller than the ambient dimension of the kernel. As is typical in high-dimensional statistics, we are able to design a suitable estimator based on a small number of samples only when the target kernel belongs to a subset of restricted complexity. In our study, we restrict the complexity by considering scenarios where the target kernel is both low-rank and smooth over a graph. Using standard tools of non-parametric estimation, we derive a minimax lower bound on the least squares error in terms of the rank and the degree of smoothness of the target kernel. To prove the optimality of our lower-bound, we proceed to develop upper bounds on the error for a least-square estimator based on a non-convex penalty. The proof of these upper bounds depends on bounds for estimators over uniformly bounded function classes in terms of Rademacher complexities. We also propose a computationally tractable estimator based on least-squares with convex penalty. We derive an upper bound for the computationally tractable estimator in terms of a coherence function introduced in this work. Finally, we present some scenarios wherein this upper bound achieves a near-optimal rate. The motivations for studying such problems come from various real-world applications like recommender systems and social network analysis.
Advisors/Committee Members: Koltchinskii, Vladimir I. (advisor), Lounici, Karim (committee member), Blekherman, Greg (committee member), Leykin, Anton (committee member), Song, Le (committee member).
Subjects/Keywords: Low-rank matrix completion; Kernels on graphs; High dimensional probability
APA (6th Edition):
Rangel Walteros, P. A. (2014). A non-asymptotic study of low-rank estimation of smooth kernels on graphs. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/52988
Chicago Manual of Style (16th Edition):
Rangel Walteros, Pedro Andres. “A non-asymptotic study of low-rank estimation of smooth kernels on graphs.” 2014. Doctoral Dissertation, Georgia Tech. Accessed January 20, 2021.
http://hdl.handle.net/1853/52988.
MLA Handbook (7th Edition):
Rangel Walteros, Pedro Andres. “A non-asymptotic study of low-rank estimation of smooth kernels on graphs.” 2014. Web. 20 Jan 2021.
Vancouver:
Rangel Walteros PA. A non-asymptotic study of low-rank estimation of smooth kernels on graphs. [Internet] [Doctoral dissertation]. Georgia Tech; 2014. [cited 2021 Jan 20].
Available from: http://hdl.handle.net/1853/52988.
Council of Science Editors:
Rangel Walteros PA. A non-asymptotic study of low-rank estimation of smooth kernels on graphs. [Doctoral Dissertation]. Georgia Tech; 2014. Available from: http://hdl.handle.net/1853/52988

Georgia Tech
6.
Xia, Dong.
Statistical inference for large matrices.
Degree: PhD, Mathematics, 2016, Georgia Tech
URL: http://hdl.handle.net/1853/55632
This thesis covers two topics on matrix analysis and estimation in machine learning and statistics. The first topic is about density matrix estimation with application in quantum state tomography. The density matrices are positively semi-definite Hermitian matrices of unit trace that describe the state of a quantum system. We develop minimax lower bounds on error rates of estimation of low rank density matrices in trace regression models used in quantum state tomography (in particular, in the case of Pauli measurements) with explicit dependence of the bounds on the rank and other complexity parameters. Such bounds are established for several statistically relevant distances, including quantum versions of Kullback-Leibler divergence (relative entropy distance) and of Hellinger distance (so called Bures distance), and Schatten p-norm distances. Sharp upper bounds and oracle inequalities for the least squares estimator with von Neumann entropy penalization are obtained, showing that the minimax lower bounds are attained (up to logarithmic factors) for these distances.

Another topic is about the analysis of the spectral perturbations of matrices under Gaussian noise. Given a matrix contaminated with Gaussian noise, we develop sharp upper bounds on the perturbation of linear forms of singular vectors. In particular, sharp upper bounds are proved for the component-wise perturbation of singular vectors. These results can be applied to sub-matrix localization and spectral clustering algorithms.
Advisors/Committee Members: Koltchinskii, Vladimir (advisor), Lounici, Karim (committee member), Romberg, Justin (committee member), Tetali, Prasad (committee member), Song, Le (committee member).
Subjects/Keywords: Low rank; Matrix estimation; Singular vectors; Random perturbation
APA (6th Edition):
Xia, D. (2016). Statistical inference for large matrices. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/55632
Chicago Manual of Style (16th Edition):
Xia, Dong. “Statistical inference for large matrices.” 2016. Doctoral Dissertation, Georgia Tech. Accessed January 20, 2021.
http://hdl.handle.net/1853/55632.
MLA Handbook (7th Edition):
Xia, Dong. “Statistical inference for large matrices.” 2016. Web. 20 Jan 2021.
Vancouver:
Xia D. Statistical inference for large matrices. [Internet] [Doctoral dissertation]. Georgia Tech; 2016. [cited 2021 Jan 20].
Available from: http://hdl.handle.net/1853/55632.
Council of Science Editors:
Xia D. Statistical inference for large matrices. [Doctoral Dissertation]. Georgia Tech; 2016. Available from: http://hdl.handle.net/1853/55632

Princeton University
7.
Zhong, Yiqiao.
Spectral methods and MLE: a modern statistical perspective.
Degree: PhD, 2019, Princeton University
URL: http://arks.princeton.edu/ark:/88435/dsp01gb19f8728
Modern statistical analysis often requires the integration of statistical thinking and algorithmic thinking. There are new challenges posed for classical estimation principles. Indeed, in high-dimensional problems, statistically sound estimation procedures such as maximum-likelihood estimation (MLE) may be difficult to compute, at least in the naive form. Also, spectral methods such as principal component analysis, which enjoy low computational costs, have unclear statistical guarantees in general.

This thesis addresses both spectral methods and MLE in a wide range of estimation problems, including high-dimensional factor models, community detection, matrix completion, synchronization problems, etc. The fundamental structure that underlies these problems is low rank, which is a core structure in modern statistics and machine learning. The low rank structure enables the use of spectral methods, and it allows efficient algorithms for solving nonconvex optimization problems with certain structural assumptions.

The contribution of this thesis includes the following. It reveals interesting phenomena about the entrywise behavior of eigenvectors, leading to sharp ℓ∞ perturbation bounds. These bounds are provided in both the deterministic regime and the random regime. Besides, a stability-based strategy, namely leave-one-out, is proposed to analyze nonconvex optimization problems. Finally, a moments-based spectral aggregation method is proposed to handle practical issues such as data heterogeneity.
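The entrywise eigenvector phenomenon is easy to observe numerically: for a delocalized signal eigenvector plus symmetric noise, the ℓ∞ error is roughly the ℓ2 error divided by √n, i.e. the error spreads evenly over the entries. A small experiment (all parameters arbitrary, a sketch rather than the thesis's setting):

```python
import numpy as np

rng = np.random.default_rng(3)
n, lam = 1000, 50.0
u = np.ones(n) / np.sqrt(n)                 # delocalized signal eigenvector
W = rng.standard_normal((n, n))
W = (W + W.T) / np.sqrt(2 * n)              # symmetric noise, spectral norm about 2
M = lam * np.outer(u, u) + W

vals, vecs = np.linalg.eigh(M)
v = vecs[:, -1]                             # top empirical eigenvector
v *= np.sign(v @ u)                         # resolve the sign ambiguity
print("l2   error:", np.linalg.norm(v - u))
print("linf error:", np.abs(v - u).max())   # roughly the l2 error over sqrt(n)
```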
Advisors/Committee Members: Fan, Jianqing (advisor).
Subjects/Keywords: Eigenvectors; High dimensional statistics; Low rank matrices; Matrix perturbation; Nonconvex optimization; Random matrix theory
APA (6th Edition):
Zhong, Y. (2019). Spectral methods and MLE: a modern statistical perspective. (Doctoral Dissertation). Princeton University. Retrieved from http://arks.princeton.edu/ark:/88435/dsp01gb19f8728
Chicago Manual of Style (16th Edition):
Zhong, Yiqiao. “Spectral methods and MLE: a modern statistical perspective.” 2019. Doctoral Dissertation, Princeton University. Accessed January 20, 2021.
http://arks.princeton.edu/ark:/88435/dsp01gb19f8728.
MLA Handbook (7th Edition):
Zhong, Yiqiao. “Spectral methods and MLE: a modern statistical perspective.” 2019. Web. 20 Jan 2021.
Vancouver:
Zhong Y. Spectral methods and MLE: a modern statistical perspective. [Internet] [Doctoral dissertation]. Princeton University; 2019. [cited 2021 Jan 20].
Available from: http://arks.princeton.edu/ark:/88435/dsp01gb19f8728.
Council of Science Editors:
Zhong Y. Spectral methods and MLE: a modern statistical perspective. [Doctoral Dissertation]. Princeton University; 2019. Available from: http://arks.princeton.edu/ark:/88435/dsp01gb19f8728

University of Manchester
8.
Borsdorf, Ruediger.
Structured Matrix Nearness Problems: Theory and Algorithms.
Degree: 2012, University of Manchester
URL: http://www.manchester.ac.uk/escholar/uk-ac-man-scw:162521
In many areas of science one often has a given matrix, representing for example a measured data set, and is required to find a matrix that is closest in a suitable norm to the matrix and possesses additionally a structure, inherited from the model used or coming from the application. We call these problems structured matrix nearness problems. We look at three different groups of these problems that come from real applications, analyze the properties of the corresponding matrix structure, and propose algorithms to solve them efficiently.

The first part of this thesis concerns the nearness problem of finding the nearest k-factor correlation matrix C(X) = diag(I_n − XX^T) + XX^T to a given symmetric matrix, subject to natural nonlinear constraints on the elements of the n × k matrix X, where distance is measured in the Frobenius norm. Such problems arise, for example, when one is investigating factor models of collateralized debt obligations (CDOs) or multivariate time series. We examine several algorithms for solving the nearness problem that differ in whether or not they can take account of the nonlinear constraints and in their convergence properties. Our numerical experiments show that the performance of the methods depends strongly on the problem, but that, among our tested methods, the spectral projected gradient method is the clear winner.

In the second part we look at two two-sided optimization problems where the matrix of unknowns Y ∈ R^{n×p} lies in the Stiefel manifold. These two problems come from an application in atomic chemistry where one is looking for atomic orbitals with prescribed occupation numbers. We analyze these two problems, propose an analytic optimal solution of the first, and show that an optimal solution of the second problem can be found by solving a convex quadratic programming problem with box constraints and p unknowns. We prove that the latter problem can be solved by the active-set method in at most 2p iterations. Subsequently, we analyze the set of optimal solutions 𝓒 = {Y ∈ R^{n×p} : Y^T Y = I_p, Y^T N Y = D} of the first problem for N symmetric and D diagonal, and find that a slight modification of it is a Riemannian manifold. We derive the geometric objects required to make an optimization over this manifold possible. We propose an augmented Lagrangian-based algorithm that uses these geometric tools and allows us to optimize an arbitrary smooth function over 𝓒. This algorithm can be used to select a particular solution out of the latter set 𝓒 by posing a new optimization problem. We compare it numerically with a similar algorithm that, however, does not apply these geometric tools, and find that our algorithm yields better performance.

The third part is devoted to low rank nearness problems in the Q-norm, where the matrix of interest is additionally of linear structure, meaning it lies in the set spanned by s predefined matrices U_1, …, U_s ∈ {0,1}^{n×p}. These problems are often associated with model reduction, for example in…
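The first problem above admits a compact prototype: minimise ||C(X) − A||_F with C(X) = diag(I − XXᵀ) + XXᵀ, keeping each row of X in the unit ball so that C(X) remains a valid correlation matrix. The thesis finds the spectral projected gradient method to be the clear winner; the plain projected gradient below is only an illustrative sketch with hand-picked step size:

```python
import numpy as np

def nearest_kfactor_corr(A, k, steps=1000, lr=1e-3, seed=4):
    """Projected gradient for min_X || C(X) - A ||_F^2 with
    C(X) = diag(I - X X^T) + X X^T and row norms ||x_i|| <= 1.
    Plain (not spectral) projected gradient; an illustrative sketch only."""
    n = A.shape[0]
    X = 0.1 * np.random.default_rng(seed).standard_normal((n, k))
    for _ in range(steps):
        E = X @ X.T - A
        np.fill_diagonal(E, 0.0)             # C(X) and A agree on the unit diagonal
        X -= lr * 4.0 * E @ X                # gradient of sum_{i != j} E_ij^2
        norms = np.linalg.norm(X, axis=1, keepdims=True)
        X /= np.maximum(norms, 1.0)          # project each row onto the unit ball
    return X

# Tiny demo on a random correlation matrix (hypothetical data).
rng = np.random.default_rng(5)
G = rng.standard_normal((30, 30))
S = G @ G.T
d = np.sqrt(np.diag(S))
A = S / np.outer(d, d)                       # a valid correlation matrix
X = nearest_kfactor_corr(A, k=2)
C = X @ X.T
np.fill_diagonal(C, 1.0)
print(np.linalg.norm(C - A, "fro"))          # misfit of the k-factor fit
```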
Advisors/Committee Members: Higham, Nicholas, Shardlow, Tony.
Subjects/Keywords: correlation matrix; factor structure; matrix embedding; Stiefel manifold; linearly structured matrix; Grassmannian manifold; low rank; optimization over manifolds; matrix nearness problems
APA (6th Edition):
Borsdorf, R. (2012). Structured Matrix Nearness Problems: Theory and Algorithms. (Doctoral Dissertation). University of Manchester. Retrieved from http://www.manchester.ac.uk/escholar/uk-ac-man-scw:162521
Chicago Manual of Style (16th Edition):
Borsdorf, Ruediger. “Structured Matrix Nearness Problems: Theory and Algorithms.” 2012. Doctoral Dissertation, University of Manchester. Accessed January 20, 2021.
http://www.manchester.ac.uk/escholar/uk-ac-man-scw:162521.
MLA Handbook (7th Edition):
Borsdorf, Ruediger. “Structured Matrix Nearness Problems: Theory and Algorithms.” 2012. Web. 20 Jan 2021.
Vancouver:
Borsdorf R. Structured Matrix Nearness Problems: Theory and Algorithms. [Internet] [Doctoral dissertation]. University of Manchester; 2012. [cited 2021 Jan 20].
Available from: http://www.manchester.ac.uk/escholar/uk-ac-man-scw:162521.
Council of Science Editors:
Borsdorf R. Structured Matrix Nearness Problems: Theory and Algorithms. [Doctoral Dissertation]. University of Manchester; 2012. Available from: http://www.manchester.ac.uk/escholar/uk-ac-man-scw:162521
9.
Haraldson, Joseph.
Matrix Polynomials and their Lower Rank Approximations.
Degree: 2019, University of Waterloo
URL: http://hdl.handle.net/10012/14847
This thesis is a wide-ranging work on computing a “lower-rank” approximation of a matrix polynomial using second-order non-linear optimization techniques. Two notions of rank are investigated. The first is the rank as the number of linearly independent rows or columns, which is the classical definition. The other notion considered is the lowest rank of a matrix polynomial when evaluated at a complex number, or the McCoy rank. Together, these two notions of rank allow one to compute a nearby matrix polynomial where the structure of both the left and right kernels is prescribed, along with the structure of both the infinite and finite eigenvalues. The computational theory of the calculus of matrix polynomial valued functions is developed and used in optimization algorithms based on second-order approximations. Special functions studied with a detailed error analysis are the determinant and adjoint of matrix polynomials.
The unstructured and structured variants of matrix polynomials are studied in a very general setting in the context of an equality constrained optimization problem. The most general instances of these optimization problems are NP hard to approximate solutions to in a global setting. In most instances we are able to prove that solutions to our optimization problems exist (possibly at infinity) and discuss techniques in conjunction with an implementation to compute local minimizers to the problem.
Most of the analysis of these problems is local and done through the Karush-Kuhn-Tucker optimality conditions for constrained optimization problems. We show that most formulations of the problems studied satisfy regularity conditions and admit Lagrange multipliers. Furthermore, we show that under some formulations that the second-order sufficient condition holds for instances of interest of the optimization problems in question. When Lagrange multipliers do not exist, we discuss why, and if it is reasonable to do so, how to regularize the problem. In several instances closed form expressions for the derivatives of matrix polynomial valued functions are derived to assist in analysis of the optimality conditions around a solution. From this analysis it is shown that variants of Newton’s method will have a local rate of convergence that is quadratic with a suitable initial guess for many problems.
The implementations are demonstrated on some examples from the literature, and several examples are cross-validated with different optimization formulations of the same mathematical problem. We conclude with a special application of the theory developed in this thesis: computing a nearby pair of differential polynomials with a non-trivial greatest common divisor, a non-commutative symbolic-numeric computation problem. We formulate this problem as finding a nearby structured matrix polynomial that is rank deficient in the classical sense.
Subjects/Keywords: numerical linear algebra; optimization; matrix polynomial; eigenvalue; gcd; low rank; low rank approximation; polynomial eigenvalue; matrix pencil; smith form; kronecker form; kernel; matrix pencil; approximate gcd
APA (6th Edition):
Haraldson, J. (2019). Matrix Polynomials and their Lower Rank Approximations. (Thesis). University of Waterloo. Retrieved from http://hdl.handle.net/10012/14847
Chicago Manual of Style (16th Edition):
Haraldson, Joseph. “Matrix Polynomials and their Lower Rank Approximations.” 2019. Thesis, University of Waterloo. Accessed January 20, 2021.
http://hdl.handle.net/10012/14847.
MLA Handbook (7th Edition):
Haraldson, Joseph. “Matrix Polynomials and their Lower Rank Approximations.” 2019. Web. 20 Jan 2021.
Vancouver:
Haraldson J. Matrix Polynomials and their Lower Rank Approximations. [Internet] [Thesis]. University of Waterloo; 2019. [cited 2021 Jan 20].
Available from: http://hdl.handle.net/10012/14847.
Council of Science Editors:
Haraldson J. Matrix Polynomials and their Lower Rank Approximations. [Thesis]. University of Waterloo; 2019. Available from: http://hdl.handle.net/10012/14847

Texas A&M University
10.
Sun, Ranye.
Thresholding Multivariate Regression and Generalized Principal Components.
Degree: PhD, Statistics, 2014, Texas A&M University
URL: http://hdl.handle.net/1969.1/152564
As high-dimensional data arises from various fields in science and technology, traditional multivariate methods need to be updated. Principal component analysis and reduced rank regression are two of the most important multivariate statistical techniques that have seen major changes in recent years. To improve the statistical performance and achieve fast computational efficiency, recent approaches aim at regularizing both the row and column factors of the low-rank matrix approximation by adopting Lasso-type penalties. Thresholding is another powerful technique for regularizing the row and column factors without solving an optimization problem. This dissertation research covers two novel applications of the idea of thresholding: the thresholding reduced rank multivariate regression and the generalized principal component analysis/singular value decomposition (SVD). The following two paragraphs give brief introductions to each of the two topics, respectively.

Uncovering a meaningful relationship between the responses and the predictors is a fundamental goal in multivariate regression problems, which can be very challenging when data are high-dimensional. Dimension reduction and regularization techniques are applied extensively to alleviate the curse of dimensionality. It is desirable to estimate the regression coefficient matrix by low-rank matrices constructed from its SVD. We reduce such regression problems to sparse SVD problems for correlated data matrices and generalize the fast iterative thresholding for sparse SVDs algorithm to this situation. This generalization inherits the computational and statistical advantages of the original algorithm, including its sparse initialization, novel ways of estimating the thresholding levels and the thresholded subspace iterations. It guarantees the orthogonality of the singular vectors and computes them simultaneously, not sequentially as in the existing methods. We also place this algorithm in an optimization framework by introducing a specific bi-convex objective function. An iterative algorithm that minimizes the objective function, via closed form iterates, is proposed and its convergence is established. This enables us to study the large sample properties of the solution of the multivariate regression problem and establishes consistency of the estimators as the sample size tends to infinity. The methodology and the potential adverse impact of dependence on the earlier algorithms are illustrated using simulation and real data.

The second part of this dissertation considers transposable data matrices where both their rows and columns are correlated. Such datasets are routinely encountered in fields such as econometrics, bio-informatics, chemometrics, network data and so on. While methods to approximate the high-dimensional data matrices have been extensively researched for uncorrelated and independent situations, they are much less so for the transposable data matrices. A generalization of principal component analysis and the related weighted least…
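The "iterative thresholding for sparse SVDs" idea can be sketched in the rank-one case as power iteration with soft-thresholding applied to each singular vector. The threshold levels below are fixed by hand, whereas the algorithm described above estimates them; this is a sketch of the principle, not the dissertation's algorithm:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_rank1_svd(Y, t_u=0.1, t_v=0.1, iters=200, seed=6):
    """Leading sparse singular triplet via thresholded power iterations."""
    v = np.random.default_rng(seed).standard_normal(Y.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        u = soft(Y @ v, t_u)
        u /= max(np.linalg.norm(u), 1e-12)
        v = soft(Y.T @ u, t_v)
        v /= max(np.linalg.norm(v), 1e-12)
    return u, float(u @ Y @ v), v

# Demo: planted sparse rank-one signal plus noise (synthetic).
rng = np.random.default_rng(7)
u0 = np.zeros(100); u0[:5] = 1 / np.sqrt(5)
v0 = np.zeros(80);  v0[:4] = 1 / np.sqrt(4)
Y = 5.0 * np.outer(u0, v0) + 0.1 * rng.standard_normal((100, 80))
u, s, v = sparse_rank1_svd(Y)
print((u != 0).sum(), (v != 0).sum())        # only a few nonzero loadings survive
```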
Advisors/Committee Members: Pourahmadi, Mohsen (advisor), Carroll, Raymond J (advisor), Longnecker, Michael T (committee member), Singh, Vijay P (committee member).
Subjects/Keywords: cross-validation; iterative subspace projections; low-rank matrix approximation; regularization; transposable data.
APA (6th Edition):
Sun, R. (2014). Thresholding Multivariate Regression and Generalized Principal Components. (Doctoral Dissertation). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/152564
Chicago Manual of Style (16th Edition):
Sun, Ranye. “Thresholding Multivariate Regression and Generalized Principal Components.” 2014. Doctoral Dissertation, Texas A&M University. Accessed January 20, 2021.
http://hdl.handle.net/1969.1/152564.
MLA Handbook (7th Edition):
Sun, Ranye. “Thresholding Multivariate Regression and Generalized Principal Components.” 2014. Web. 20 Jan 2021.
Vancouver:
Sun R. Thresholding Multivariate Regression and Generalized Principal Components. [Internet] [Doctoral dissertation]. Texas A&M University; 2014. [cited 2021 Jan 20].
Available from: http://hdl.handle.net/1969.1/152564.
Council of Science Editors:
Sun R. Thresholding Multivariate Regression and Generalized Principal Components. [Doctoral Dissertation]. Texas A&M University; 2014. Available from: http://hdl.handle.net/1969.1/152564

Université Catholique de Louvain
11.
Gillis, Nicolas.
Nonnegative matrix factorization : complexity, algorithms and applications.
Degree: 2011, Université Catholique de Louvain
URL: http://hdl.handle.net/2078.1/70744
Linear dimensionality reduction techniques such as principal component analysis are powerful tools for the analysis of high-dimensional data. In this thesis, we explore a closely related problem, namely nonnegative matrix factorization (NMF), a low-rank matrix approximation problem with nonnegativity constraints. More precisely, we seek to approximate a given nonnegative matrix with the product of two low-rank nonnegative matrices. These nonnegative factors can be interpreted in the same way as the data, e.g., as images (described by pixel intensities) or texts (represented by vectors of word counts), and lead to an additive and sparse representation. However, they render the problem much more difficult to solve (i.e., NP-hard).
A first goal of this thesis is to study theoretical issues related to NMF. In particular, we make connections with well-known problems in graph theory, combinatorial optimization and computational geometry. We also study computational complexity issues and show, using reductions from the maximum-edge biclique problem, NP-hardness of several low-rank matrix approximation problems, including the rank-one subproblems arising in NMF, a problem involving underapproximation constraints (NMU) and the unconstrained version of the factorization problem where some data is missing or unknown.
Our second goal is to improve existing techniques and develop new tools for the analysis of nonnegative data. We propose accelerated variants of several NMF algorithms based on a careful analysis of their computational cost. We also introduce a multilevel approach to speed up their initial convergence. Finally, a new greedy heuristic based on NMU is presented and used for the analysis of hyperspectral images, in which each pixel is measured along hundreds of wavelengths, which allows for example spectroscopy of satellite images.
(FSA 3) – UCL, 2011
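As a concrete baseline for NMF itself, the classical Lee-Seung multiplicative updates minimise ||M − WH||_F with W, H ≥ 0; the accelerated variants and multilevel schemes of the thesis start from iterations of this kind. A minimal sketch (random data, purely illustrative):

```python
import numpy as np

def nmf_multiplicative(M, r, iters=500, eps=1e-9, seed=8):
    """Lee-Seung multiplicative updates for min ||M - W H||_F^2 with W, H >= 0."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        H *= (W.T @ M) / (W.T @ W @ H + eps)   # updates keep the factors nonnegative
        W *= (M @ H.T) / (W @ H @ H.T + eps)
    return W, H

rng = np.random.default_rng(9)
M = rng.random((60, 40))                        # nonnegative data, e.g. pixel intensities
W, H = nmf_multiplicative(M, r=5)
print(np.linalg.norm(M - W @ H) / np.linalg.norm(M))
```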
Advisors/Committee Members: UCL - SST/ICTM/INMA - Pôle en ingénierie mathématique, Glineur, François, Lefèvre, Philippe, Van Dooren, Paul, Absil, Pierre-Antoine, Henrion, Didier, Van Huffel, Sabine, Vavasis, Stephen, Fiorini, Samuel.
Subjects/Keywords: Low-rank matrix approximation; Nonnegative matrices; Computational complexity; Optimization; Underapproximation; Data mining; Hyperspectral image analysis
APA (6th Edition):
Gillis, N. (2011). Nonnegative matrix factorization : complexity, algorithms and applications. (Thesis). Université Catholique de Louvain. Retrieved from http://hdl.handle.net/2078.1/70744
Chicago Manual of Style (16th Edition):
Gillis, Nicolas. “Nonnegative matrix factorization : complexity, algorithms and applications.” 2011. Thesis, Université Catholique de Louvain. Accessed January 20, 2021.
http://hdl.handle.net/2078.1/70744.
MLA Handbook (7th Edition):
Gillis, Nicolas. “Nonnegative matrix factorization : complexity, algorithms and applications.” 2011. Web. 20 Jan 2021.
Vancouver:
Gillis N. Nonnegative matrix factorization : complexity, algorithms and applications. [Internet] [Thesis]. Université Catholique de Louvain; 2011. [cited 2021 Jan 20].
Available from: http://hdl.handle.net/2078.1/70744.
Council of Science Editors:
Gillis N. Nonnegative matrix factorization : complexity, algorithms and applications. [Thesis]. Université Catholique de Louvain; 2011. Available from: http://hdl.handle.net/2078.1/70744
12.
Zhu, Ziwei.
Distributed and Robust Statistical Learning.
Degree: PhD, 2018, Princeton University
URL: http://arks.princeton.edu/ark:/88435/dsp01d217qs22x
Decentralized and corrupted data are nowadays ubiquitous, and they impose fundamental challenges for modern statistical analysis. Illustrative examples are massive and decentralized data produced by the distributed data collection systems of giant IT companies, corrupted measurements in genetic micro-array analysis, heavy-tailed returns of stocks, etc. These notorious features of modern data often contradict conventional theoretical assumptions in statistics research and invalidate standard statistical procedures. My dissertation addresses these problems by proposing new methodologies with strong statistical guarantees.

When data are distributed over different places with a limited communication budget, we propose to do local statistical analysis first and aggregate the local results rather than the data themselves to generate a final result. We applied this approach to low-dimensional regression, high-dimensional sparse regression and principal component analysis. When data are not over-scattered, our distributed approach is proved to achieve the same statistical performance as the full sample oracle, i.e., the standard procedure based on all the data.

To handle heavy-tailed corruption, we propose a generic principle of data shrinkage for robust estimation and inference. To illustrate this principle, we apply it to estimate regression coefficients in the trace regression model and the generalized linear model with heavy-tailed noise and design. The proposed method achieves nearly the same statistical error rate as the standard procedure while requiring only bounded moment conditions on the data. This widens the scope of high-dimensional techniques, reducing the moment conditions from sub-exponential or sub-Gaussian distributions to merely bounded second or fourth moments.
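One common way to make the "aggregate local results, not data" idea concrete for PCA is to let each machine compute its top-d eigenspace and have the center average the projection matrices before re-extracting d directions. The sketch below follows that generic scheme and is not necessarily the exact estimator of the dissertation:

```python
import numpy as np

def distributed_pca(machines, d):
    """Average local projection matrices V V^T, then take the top-d
    eigenvectors of the average (a generic divide-and-aggregate scheme)."""
    p = machines[0].shape[1]
    P = np.zeros((p, p))
    for X in machines:                          # X: local data, rows are samples
        Xc = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        V = Vt[:d].T                            # local top-d eigenspace
        P += V @ V.T
    P /= len(machines)
    vals, vecs = np.linalg.eigh(P)
    return vecs[:, ::-1][:, :d]                 # aggregated top-d directions

# Demo with a planted 2-dimensional principal subspace (synthetic).
rng = np.random.default_rng(10)
B = rng.standard_normal((50, 2))
machines = [rng.standard_normal((500, 2)) @ B.T
            + 0.3 * rng.standard_normal((500, 50)) for _ in range(10)]
V_hat = distributed_pca(machines, d=2)
print(V_hat.shape)                              # (50, 2)
```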
Advisors/Committee Members: Fan, Jianqing (advisor).
Subjects/Keywords: distributed learning; high-dimensional statistics; low-rank matrix recovery; principal component analysis; regression; robust statistics
APA (6th Edition):
Zhu, Z. (2018). Distributed and Robust Statistical Learning. (Doctoral Dissertation). Princeton University. Retrieved from http://arks.princeton.edu/ark:/88435/dsp01d217qs22x
Chicago Manual of Style (16th Edition):
Zhu, Ziwei. “Distributed and Robust Statistical Learning.” 2018. Doctoral Dissertation, Princeton University. Accessed January 20, 2021.
http://arks.princeton.edu/ark:/88435/dsp01d217qs22x.
MLA Handbook (7th Edition):
Zhu, Ziwei. “Distributed and Robust Statistical Learning.” 2018. Web. 20 Jan 2021.
Vancouver:
Zhu Z. Distributed and Robust Statistical Learning. [Internet] [Doctoral dissertation]. Princeton University; 2018. [cited 2021 Jan 20].
Available from: http://arks.princeton.edu/ark:/88435/dsp01d217qs22x.
Council of Science Editors:
Zhu Z. Distributed and Robust Statistical Learning. [Doctoral Dissertation]. Princeton University; 2018. Available from: http://arks.princeton.edu/ark:/88435/dsp01d217qs22x

University of Illinois – Urbana-Champaign
13.
Balasubramanian, Arvind.
Applications of low-rank matrix recovery methods in computer vision.
Degree: PhD, 1200, 2012, University of Illinois – Urbana-Champaign
URL: http://hdl.handle.net/2142/31929
The ubiquitous availability of high-dimensional data such as images and videos has generated a lot of interest in high-dimensional data analysis. One of the key issues that needs to be addressed in real applications is the presence of large-magnitude non-Gaussian errors. For image data, the problem of deformations or domain transformations also poses interesting challenges. In this thesis, we harness recent advances in low-rank matrix recovery via convex optimization techniques to solve real problems in computer vision. This thesis also provides some theoretical analysis that extends existing results to new observation models.

Low-rank matrix approximations are a popular tool in data analysis. The well-known Principal Component Analysis (PCA) algorithm is a good example. Recently, it was shown that low-rank matrices can be recovered exactly from grossly corrupted measurements via convex optimization. This framework, called Principal Component Pursuit (PCP), constitutes a powerful tool that allows us to handle corrupted measurements and even missing entries in a principled way. In this thesis, we extend existing theoretical results to the case when a large majority of the entries of the matrix are badly corrupted.

On the application side, we first briefly look at the image formation model that naturally gives rise to a low-rank matrix structure, and see how PCP can be used effectively in the photometric stereo problem. We then extend the existing PCP framework in a non-trivial fashion to effectively handle domain transformations in images. The proposed ideas are used to align multiple images simultaneously with one another, as well as to represent structured and symmetric textures in a novel way that is invariant to deformations. In addition to achieving excellent performance on real data, these methods are potentially very useful for other vision tasks like 3D structure recovery for urban images, automatic camera calibration, etc.

Finally, we provide some theoretical guarantees for the new observation model encountered in the aforementioned applications. In particular, we show that under some conditions it is possible to recover most low-rank matrices even when a small linear fraction of their entries has been badly corrupted and, furthermore, when only linear measurements of the corrupted matrix are available. Besides being one of the first theoretical results for this case, this dissertation opens up many exciting avenues for future research in this direction.
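Principal Component Pursuit itself, min ||L||_* + λ||S||_1 subject to L + S = M with the standard choice λ = 1/√max(m, n), can be prototyped with a basic augmented Lagrangian loop. A sketch on synthetic data (fixed penalty parameter, none of the thesis's extended observation models):

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal map of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U * np.maximum(s - tau, 0.0) @ Vt

def pcp(M, iters=300):
    """min ||L||_* + lam ||S||_1  s.t.  L + S = M, via a basic
    augmented Lagrangian loop with a fixed penalty parameter."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))                    # standard PCP choice
    mu = 0.25 * m * n / (np.abs(M).sum() + 1e-12)
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(iters):
        L = svt(M - S + Y / mu, 1.0 / mu)
        T = M - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        Y += mu * (M - L - S)                         # dual ascent step
    return L, S

# Demo: low-rank background plus sparse gross corruptions (synthetic).
rng = np.random.default_rng(11)
L0 = rng.standard_normal((100, 10)) @ rng.standard_normal((10, 100))
S0 = np.zeros((100, 100))
idx = rng.random((100, 100)) < 0.05
S0[idx] = 10.0 * rng.standard_normal(idx.sum())
L, S = pcp(L0 + S0)
print(np.linalg.norm(L - L0) / np.linalg.norm(L0))    # small recovery error
```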
Advisors/Committee Members: Ma, Yi (advisor), Ma, Yi (Committee Chair), Huang, Thomas S. (committee member), Milenkovic, Olgica (committee member), Meyn, Sean P. (committee member).
Subjects/Keywords: Image Alignment; Texture Rectification; Low-Rank Matrix Recovery; Convex Optimization; Photometric Stereo; Principal Component Pursuit
APA (6th Edition):
Balasubramanian, A. (2012). Applications of low-rank matrix recovery methods in computer vision. (Doctoral Dissertation). University of Illinois – Urbana-Champaign. Retrieved from http://hdl.handle.net/2142/31929
Chicago Manual of Style (16th Edition):
Balasubramanian, Arvind. “Applications of low-rank matrix recovery methods in computer vision.” 2012. Doctoral Dissertation, University of Illinois – Urbana-Champaign. Accessed January 20, 2021.
http://hdl.handle.net/2142/31929.
MLA Handbook (7th Edition):
Balasubramanian, Arvind. “Applications of low-rank matrix recovery methods in computer vision.” 2012. Web. 20 Jan 2021.
Vancouver:
Balasubramanian A. Applications of low-rank matrix recovery methods in computer vision. [Internet] [Doctoral dissertation]. University of Illinois – Urbana-Champaign; 2012. [cited 2021 Jan 20].
Available from: http://hdl.handle.net/2142/31929.
Council of Science Editors:
Balasubramanian A. Applications of low-rank matrix recovery methods in computer vision. [Doctoral Dissertation]. University of Illinois – Urbana-Champaign; 2012. Available from: http://hdl.handle.net/2142/31929

Georgia Tech
14.
Zhou, Fan.
Statistical inference for high dimensional data with low rank structure.
Degree: PhD, Mathematics, 2018, Georgia Tech
URL: http://hdl.handle.net/1853/60750
We study two major topics on statistical inference for high dimensional data with low rank structure, which occurs in many machine learning and statistics applications. The first topic is about nonparametric estimation of a low rank matrix valued function, with applications in building dynamic recommender systems and recovering Euclidean distance matrices in molecular biology. We propose an innovative nuclear norm penalized local polynomial estimator and establish an upper bound on its point-wise risk measured by the Frobenius norm. Then we extend this estimator globally and prove an upper bound on its integrated risk measured by the L2-norm. We also propose another new estimator based on bias-reducing kernels to study the case when the matrix valued function is not necessarily low rank, and establish an upper bound on its risk measured by the L∞-norm. We show that the obtained rates are all optimal up to some logarithmic factor in the minimax sense. Finally, we propose an adaptive estimation procedure for practitioners based on Lepski's method and the penalized data splitting technique, which is computationally efficient and can be easily implemented and parallelized.

The other topic is about spectral perturbation analysis of the higher order singular value decomposition (HOSVD) of a tensor under Gaussian noise. Given a tensor contaminated with Gaussian noise, we establish sharp upper bounds on the perturbation of linear forms of singular vectors of the HOSVD. In particular, sharp upper bounds are proved for the component-wise perturbation of singular vectors. These results can be applied to sub-tensor localization and low rank tensor denoising.
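The nuclear norm penalty driving such estimators has a closed-form proximal map: soft-thresholding of the singular values. The sketch below plugs it into plain proximal gradient for generic matrix completion; the thesis's local polynomial and kernel constructions are not reproduced here, and all tuning values are hypothetical:

```python
import numpy as np

def prox_nuclear(X, tau):
    """Soft-threshold the singular values: prox of tau * ||.||_*."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U * np.maximum(s - tau, 0.0) @ Vt

def complete(M_obs, mask, tau=2.0, iters=300):
    """Proximal gradient for min_X 0.5 ||mask * (X - M_obs)||_F^2 + tau ||X||_*.
    Step size 1 is valid since the data-fit gradient is 1-Lipschitz."""
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        G = mask * (X - M_obs)              # gradient of the data-fit term
        X = prox_nuclear(X - G, tau)
    return X

rng = np.random.default_rng(12)
M = rng.standard_normal((80, 8)) @ rng.standard_normal((8, 80))   # rank 8
mask = rng.random((80, 80)) < 0.4            # observe 40% of the entries
X = complete(mask * M, mask)
print(np.linalg.norm(X - M) / np.linalg.norm(M))
```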
Advisors/Committee Members: Koltchinskii, Vladimir (advisor), Chow, Edmond (committee member), Zha, Hongyuan (committee member), Zhilova, Mayya (committee member), Davenport, Mark (committee member), Kang, Sung Ha (committee member).
Subjects/Keywords: Nonparametric statistics; Matrix completion; Low rank; Nuclear norm; Tensor; Singular vector perturbation
APA (6th Edition):
Zhou, F. (2018). Statistical inference for high dimensional data with low rank structure. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/60750
Chicago Manual of Style (16th Edition):
Zhou, Fan. “Statistical inference for high dimensional data with low rank structure.” 2018. Doctoral Dissertation, Georgia Tech. Accessed January 20, 2021.
http://hdl.handle.net/1853/60750.
MLA Handbook (7th Edition):
Zhou, Fan. “Statistical inference for high dimensional data with low rank structure.” 2018. Web. 20 Jan 2021.
Vancouver:
Zhou F. Statistical inference for high dimensional data with low rank structure. [Internet] [Doctoral dissertation]. Georgia Tech; 2018. [cited 2021 Jan 20].
Available from: http://hdl.handle.net/1853/60750.
Council of Science Editors:
Zhou F. Statistical inference for high dimensional data with low rank structure. [Doctoral Dissertation]. Georgia Tech; 2018. Available from: http://hdl.handle.net/1853/60750

University of Texas – Austin
15.
Bhojanapalli, Venkata Sesha Pavana Srinadh.
Large scale matrix factorization with guarantees: sampling and bi-linearity.
Degree: PhD, Electrical and Computer Engineering, 2015, University of Texas – Austin
URL: http://hdl.handle.net/2152/32832
Low rank matrix factorization is an important step in many high dimensional machine learning algorithms. Traditional algorithms for factorization do not scale well with growing data sizes, and there is a need for faster, more scalable algorithms. In this dissertation we explore the following two major themes to design scalable factorization algorithms for matrix completion, low rank approximation (PCA), and semi-definite optimization. (a) Sampling: We develop the optimal way to sample entries of any matrix while preserving its spectral properties. Using this sparse sketch (the set of sampled entries) instead of the entire matrix gives rise to scalable algorithms with good approximation guarantees. (b) Bi-linear factorization structure: We design algorithms that operate explicitly on the factor space instead of on the matrix. While the bi-linear structure of the factorization leads, in general, to a non-convex optimization problem, we show that under appropriate conditions these algorithms indeed recover the solution for the above problems. Both techniques (individually or in combination) lead to algorithms with lower computational complexity and memory usage. Finally, we extend these ideas of sampling and explicit factorization to design algorithms for higher order tensors.
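The sampling theme can be illustrated with a generic element-wise sparsification sketch: keep each entry independently with some probability and rescale, so the sketch is an unbiased estimator of the original matrix. The probabilities below (proportional to squared magnitude, a common heuristic) are an illustrative assumption, not the optimal scheme derived in the dissertation:

```python
import numpy as np

def sparse_sketch(A, budget, rng=None):
    """Unbiased element-wise sparsification of A.

    Entry (i, j) is kept with probability p_ij proportional to A_ij^2
    (capped at 1) and rescaled by 1 / p_ij, so E[sketch] = A.
    `budget` is the expected number of retained entries.
    """
    rng = np.random.default_rng(rng)
    p = A**2 / (A**2).sum() * budget
    p = np.minimum(p, 1.0)
    mask = rng.random(A.shape) < p
    S = np.zeros_like(A)
    S[mask] = A[mask] / p[mask]
    return S  # sparse, and spectrally close to A for a large enough budget
```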
Advisors/Committee Members: Sanghavi, Sujay Rajendra, 1979- (advisor), Caramanis, Constantine (committee member), Dhillon, Inderjit (committee member), Dimakis, Alexandros (committee member), Ravikumar, Pradeep (committee member), Ward, Rachel (committee member).
Subjects/Keywords: Matrix completion; Non-convex optimization; Low rank approximation; Semi-definite optimization; Tensor factorization; Scalable algorithms
APA (6th Edition):
Bhojanapalli, V. S. P. S. (2015). Large scale matrix factorization with guarantees: sampling and bi-linearity. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/32832
Chicago Manual of Style (16th Edition):
Bhojanapalli, Venkata Sesha Pavana Srinadh. “Large scale matrix factorization with guarantees: sampling and bi-linearity.” 2015. Doctoral Dissertation, University of Texas – Austin. Accessed January 20, 2021.
http://hdl.handle.net/2152/32832.
MLA Handbook (7th Edition):
Bhojanapalli, Venkata Sesha Pavana Srinadh. “Large scale matrix factorization with guarantees: sampling and bi-linearity.” 2015. Web. 20 Jan 2021.
Vancouver:
Bhojanapalli VSPS. Large scale matrix factorization with guarantees: sampling and bi-linearity. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2015. [cited 2021 Jan 20].
Available from: http://hdl.handle.net/2152/32832.
Council of Science Editors:
Bhojanapalli VSPS. Large scale matrix factorization with guarantees: sampling and bi-linearity. [Doctoral Dissertation]. University of Texas – Austin; 2015. Available from: http://hdl.handle.net/2152/32832

University of Pennsylvania
16.
Yang, Dan.
Singular Value Decomposition for High Dimensional Data.
Degree: 2012, University of Pennsylvania
URL: https://repository.upenn.edu/edissertations/595
Singular value decomposition is a widely used tool for dimension reduction in multivariate analysis. However, when used for statistical estimation in high-dimensional low rank matrix models, the singular vectors of the noise-corrupted matrix are inconsistent for their counterparts of the true mean matrix. We suppose the true singular vectors have sparse representations in a certain basis. We propose an iterative thresholding algorithm that can estimate the subspaces spanned by the leading left and right singular vectors, and also the true mean matrix, optimally under a Gaussian assumption. We further turn the algorithm into a practical methodology that is fast, data-driven and robust to heavy-tailed noise. Simulations and a real data example further show its competitive performance. The dissertation contains two chapters. For ease of exposition, Chapter 1 is dedicated to the description and study of the practical methodology, and Chapter 2 states and proves the theoretical properties of the algorithm under Gaussian noise.
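The flavor of such an iterative thresholding scheme can be sketched as alternating power iterations with hard thresholding of the singular vectors in the sparsity basis. The identity basis and a fixed threshold below are illustrative assumptions, not the data-driven choices developed in the dissertation:

```python
import numpy as np

def sparse_svd_rank1(Y, thresh, iters=50):
    """Alternating power iterations with hard thresholding.

    Estimates sparse leading left/right singular vectors of the mean
    of a noisy matrix Y. `thresh` zeroes small coordinates each pass.
    """
    def ht(x, t):
        x = x.copy()
        x[np.abs(x) < t] = 0.0          # hard-threshold small coordinates
        n = np.linalg.norm(x)
        return x / n if n > 0 else x

    v = ht(np.linalg.svd(Y)[2][0], thresh)   # warm start from plain SVD
    for _ in range(iters):
        u = ht(Y @ v, thresh)
        v = ht(Y.T @ u, thresh)
    sigma = u @ Y @ v
    return u, sigma, v
```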
Subjects/Keywords: Cross validation; Denoise; Low rank matrix approximation; PCA; Penalization; Thresholding; Statistics and Probability
APA (6th Edition):
Yang, D. (2012). Singular Value Decomposition for High Dimensional Data. (Thesis). University of Pennsylvania. Retrieved from https://repository.upenn.edu/edissertations/595
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Yang, Dan. “Singular Value Decomposition for High Dimensional Data.” 2012. Thesis, University of Pennsylvania. Accessed January 20, 2021.
https://repository.upenn.edu/edissertations/595.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Yang, Dan. “Singular Value Decomposition for High Dimensional Data.” 2012. Web. 20 Jan 2021.
Vancouver:
Yang D. Singular Value Decomposition for High Dimensional Data. [Internet] [Thesis]. University of Pennsylvania; 2012. [cited 2021 Jan 20].
Available from: https://repository.upenn.edu/edissertations/595.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Yang D. Singular Value Decomposition for High Dimensional Data. [Thesis]. University of Pennsylvania; 2012. Available from: https://repository.upenn.edu/edissertations/595
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Michigan Technological University
17.
Azzam, Joy.
Sub-Sampled Matrix Approximations.
Degree: PhD, Department of Mathematical Sciences, 2020, Michigan Technological University
URL: https://digitalcommons.mtu.edu/etdr/1002
Matrix approximations are widely used to accelerate many numerical algorithms. Current methods sample row (or column) spaces to reduce their computational footprint and approximate a matrix A with an appropriate embedding of the sampled data. This work introduces a novel family of randomized iterative algorithms which use significantly less data per iteration than current methods by sampling input and output spaces simultaneously. The data footprint of the algorithms can be tuned (independently of the underlying matrix dimension) to the available hardware. Convergence of the algorithms, referred to as sub-sampled, is proved in terms of numerically tested error bounds. A heuristic accelerated scheme is developed and compared to current algorithms on a substantial test-suite of matrices. The sub-sampled algorithms provide a lightweight framework for constructing more useful inverse and low rank matrix approximations. Modifying the sub-sampled algorithms gives families of methods which iteratively approximate the inverse of a matrix, whose accelerated variants are comparable to current state-of-the-art methods. Inserting a compression step in the algorithms gives low rank approximations whose accelerated variants have fixed computational as well as storage footprints.
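Sampling input and output spaces simultaneously can be illustrated with a generalized Nyström-type two-sided sketch, which touches the matrix only through thin products with random test matrices. This is a standard construction used here for illustration, not the iterative scheme of the thesis:

```python
import numpy as np

def two_sided_sketch(A, k, rng=None):
    """Low rank approximation from simultaneous row/column sampling.

    Uses random test matrices S (output space) and T (input space):
        A  ~  (A T) (S^T A T)^+ (S^T A)
    Only sketched products of A are needed, never A itself in full.
    """
    rng = np.random.default_rng(rng)
    m, n = A.shape
    S = rng.standard_normal((m, 2 * k))   # oversampling the left sketch aids stability
    T = rng.standard_normal((n, k))
    AT = A @ T
    StA = S.T @ A
    core = np.linalg.pinv(S.T @ AT)       # small (2k)-by-k core to invert
    return AT @ core @ StA
```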
Advisors/Committee Members: Allan Struthers, Benjamin Ong.
Subjects/Keywords: Matrix Approximation; Inverse Matrix Approximation; Low Rank Approximation; Quasi-Newton; Randomized Numerical Linear Algebra; Preconditioner; Sub-Sampled; Other Applied Mathematics
APA (6th Edition):
Azzam, J. (2020). Sub-Sampled Matrix Approximations. (Doctoral Dissertation). Michigan Technological University. Retrieved from https://digitalcommons.mtu.edu/etdr/1002
Chicago Manual of Style (16th Edition):
Azzam, Joy. “Sub-Sampled Matrix Approximations.” 2020. Doctoral Dissertation, Michigan Technological University. Accessed January 20, 2021.
https://digitalcommons.mtu.edu/etdr/1002.
MLA Handbook (7th Edition):
Azzam, Joy. “Sub-Sampled Matrix Approximations.” 2020. Web. 20 Jan 2021.
Vancouver:
Azzam J. Sub-Sampled Matrix Approximations. [Internet] [Doctoral dissertation]. Michigan Technological University; 2020. [cited 2021 Jan 20].
Available from: https://digitalcommons.mtu.edu/etdr/1002.
Council of Science Editors:
Azzam J. Sub-Sampled Matrix Approximations. [Doctoral Dissertation]. Michigan Technological University; 2020. Available from: https://digitalcommons.mtu.edu/etdr/1002
18.
Amadeo, Lily.
Large Scale Matrix Completion and Recommender Systems.
Degree: MS, 2015, Worcester Polytechnic Institute
URL: etd-090415-162439 ; https://digitalcommons.wpi.edu/etd-theses/1021
► "The goal of this thesis is to extend the theory and practice of matrix completion algorithms, and how they can be utilized, improved, and scaled…
(more)
▼ "The goal of this thesis is to extend the theory and practice of
matrix completion algorithms, and how they can be utilized, improved, and scaled up to handle large data sets.
Matrix completion involves predicting missing entries in real-world data matrices using the modeling assumption that the fully observed
matrix is
low-
rank.
Low-
rank matrices appear across a broad selection of domains, and such a modeling assumption is similar in spirit to Principal Component Analysis. Our focus is on large scale problems, where the matrices have millions of rows and columns. In this thesis we provide new analysis for the convergence rates of
matrix completion techniques using convex nuclear norm relaxation. In addition, we validate these results on both synthetic data and data from two real-world domains (recommender systems and Internet tomography). The results we obtain show that with an empirical, data-inspired understanding of various parameters in the algorithm, this
matrix completion problem can be solved more efficiently than some previous theory suggests, and therefore can be extended to much larger problems with greater ease. "
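Nuclear norm relaxation for matrix completion is commonly solved with proximal gradient iterations that alternate a gradient step on the observed entries with singular value soft-thresholding. A minimal sketch under these assumptions (fixed step size and regularization weight, dense SVD, all names illustrative):

```python
import numpy as np

def complete(M_obs, mask, lam=1.0, step=1.0, iters=200):
    """Proximal gradient for min 0.5*||P(X - M)||_F^2 + lam*||X||_*.

    `mask` is a boolean array marking the observed entries of M_obs.
    """
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        grad = np.where(mask, X - M_obs, 0.0)   # gradient of the data term
        Y = X - step * grad
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - step * lam, 0.0)) @ Vt  # prox of nuclear norm
    return X
```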
Advisors/Committee Members: Andrew C. Trapp, Reader, Randy C. Paffenroth, Advisor;.
Subjects/Keywords: low rank matrix; robust principal component analysis; convex relaxation; principal component analysis; recommender systems; matrix completion
APA (6th Edition):
Amadeo, L. (2015). Large Scale Matrix Completion and Recommender Systems. (Thesis). Worcester Polytechnic Institute. Retrieved from etd-090415-162439 ; https://digitalcommons.wpi.edu/etd-theses/1021
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Amadeo, Lily. “Large Scale Matrix Completion and Recommender Systems.” 2015. Thesis, Worcester Polytechnic Institute. Accessed January 20, 2021.
etd-090415-162439 ; https://digitalcommons.wpi.edu/etd-theses/1021.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Amadeo, Lily. “Large Scale Matrix Completion and Recommender Systems.” 2015. Web. 20 Jan 2021.
Vancouver:
Amadeo L. Large Scale Matrix Completion and Recommender Systems. [Internet] [Thesis]. Worcester Polytechnic Institute; 2015. [cited 2021 Jan 20].
Available from: etd-090415-162439 ; https://digitalcommons.wpi.edu/etd-theses/1021.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Amadeo L. Large Scale Matrix Completion and Recommender Systems. [Thesis]. Worcester Polytechnic Institute; 2015. Available from: etd-090415-162439 ; https://digitalcommons.wpi.edu/etd-theses/1021
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

Georgia Tech
19.
Lee, Joonseok.
Local approaches for collaborative filtering.
Degree: PhD, Computer Science, 2015, Georgia Tech
URL: http://hdl.handle.net/1853/53846
Recommendation systems are emerging as an important business application as the demand for personalized services in E-commerce increases. Collaborative filtering techniques are widely used for predicting a user's preference or generating a list of items to be recommended. In this thesis, we develop several new approaches for collaborative filtering based on model combination and kernel smoothing. Specifically, we start with an experimental study that compares a wide variety of CF methods under different conditions. Based on this study, we formulate a combination model similar to boosting, but where the combination coefficients are functions rather than constants. In another contribution we formulate and analyze a local variation of matrix factorization. This formulation constructs multiple local matrix factorization models and then combines them into a global model. It is based on the local low-rank assumption, a slightly different but more plausible assumption about the rating matrix. We apply this assumption to both rating prediction and ranking problems, with both empirical validation and theoretical analysis.
This thesis contributes in four aspects. First, the local approaches we present significantly improve the accuracy of recommendations in both rating prediction and ranking problems. Second, with the more realistic local low-rank assumption, we fundamentally change the underlying assumption for matrix factorization-based recommendation systems. Third, we present highly efficient and scalable algorithms which take advantage of parallelism, suited to recent large scale datasets. Lastly, we provide open source software implementing the local approaches in this thesis as well as many other recent recommendation algorithms, which can be used both in research and production.
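The local low-rank idea can be sketched as fitting several low-rank models around anchor points and blending their predictions with kernel weights. Everything below (pre-fit local factors, a caller-supplied weight function) is an illustrative assumption rather than the thesis' exact algorithm:

```python
import numpy as np

def blend_local_models(locals_, weights, u, i):
    """Kernel-weighted combination of local factorization models.

    locals_: list of (U_t, V_t) factor pairs, one per anchor point.
    weights: function (t, u, i) -> smoothing weight of anchor t at (u, i).
    Returns the blended rating prediction for user u and item i.
    """
    num, den = 0.0, 0.0
    for t, (U, V) in enumerate(locals_):
        w = weights(t, u, i)
        num += w * (U[u] @ V[i])   # local model's rating prediction
        den += w
    return num / den if den > 0 else 0.0
```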
Advisors/Committee Members: Chau, Duen Horng (Polo) (advisor), Lebanon, Guy (committee member), Zha, Hongyuan (committee member), Song, Le (committee member), Singer, Yoram (committee member).
Subjects/Keywords: Recommendation systems; Collaborative filtering; Machine learning; Local low-rank assumption; Matrix factorization; Matrix approximation; Ensemble collaborative ranking
APA (6th Edition):
Lee, J. (2015). Local approaches for collaborative filtering. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/53846
Chicago Manual of Style (16th Edition):
Lee, Joonseok. “Local approaches for collaborative filtering.” 2015. Doctoral Dissertation, Georgia Tech. Accessed January 20, 2021.
http://hdl.handle.net/1853/53846.
MLA Handbook (7th Edition):
Lee, Joonseok. “Local approaches for collaborative filtering.” 2015. Web. 20 Jan 2021.
Vancouver:
Lee J. Local approaches for collaborative filtering. [Internet] [Doctoral dissertation]. Georgia Tech; 2015. [cited 2021 Jan 20].
Available from: http://hdl.handle.net/1853/53846.
Council of Science Editors:
Lee J. Local approaches for collaborative filtering. [Doctoral Dissertation]. Georgia Tech; 2015. Available from: http://hdl.handle.net/1853/53846
20.
MIAO WEIMIN.
Matrix Completion Models with Fixed Basis Coefficients and Rank Regularized Problems with Hard Constraints.
Degree: 2013, National University of Singapore
URL: http://scholarbank.nus.edu.sg/handle/10635/37889
Subjects/Keywords: matrix completion; rank minimization; low rank; error bound; rank consistency; semi-nuclear norm
APA (6th Edition):
WEIMIN, M. (2013). Matrix Completion Models with Fixed Basis Coefficients and Rank Regularized Problems with Hard Constraints. (Thesis). National University of Singapore. Retrieved from http://scholarbank.nus.edu.sg/handle/10635/37889
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
WEIMIN, MIAO. “Matrix Completion Models with Fixed Basis Coefficients and Rank Regularized Problems with Hard Constraints.” 2013. Thesis, National University of Singapore. Accessed January 20, 2021.
http://scholarbank.nus.edu.sg/handle/10635/37889.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
WEIMIN, MIAO. “Matrix Completion Models with Fixed Basis Coefficients and Rank Regularized Problems with Hard Constraints.” 2013. Web. 20 Jan 2021.
Vancouver:
WEIMIN M. Matrix Completion Models with Fixed Basis Coefficients and Rank Regularized Problems with Hard Constraints. [Internet] [Thesis]. National University of Singapore; 2013. [cited 2021 Jan 20].
Available from: http://scholarbank.nus.edu.sg/handle/10635/37889.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
WEIMIN M. Matrix Completion Models with Fixed Basis Coefficients and Rank Regularized Problems with Hard Constraints. [Thesis]. National University of Singapore; 2013. Available from: http://scholarbank.nus.edu.sg/handle/10635/37889
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

University of Manchester
21.
Borsdorf, Ruediger.
Structured matrix nearness problems : theory and algorithms.
Degree: PhD, 2012, University of Manchester
URL: https://www.research.manchester.ac.uk/portal/en/theses/structured-matrix-nearness-problemstheory-and-algorithms(554f944d-9a78-4b54-90c2-1ef06866c402).html ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.554165
In many areas of science one often has a given matrix, representing for example a measured data set, and is required to find a matrix that is closest to it in a suitable norm and additionally possesses a structure, inherited from the model used or coming from the application. We call these problems structured matrix nearness problems. We look at three different groups of these problems that come from real applications, analyze the properties of the corresponding matrix structure, and propose algorithms to solve them efficiently.
The first part of this thesis concerns the nearness problem of finding the nearest k-factor correlation matrix C(X) = diag(I_n − XX^T) + XX^T to a given symmetric matrix, subject to natural nonlinear constraints on the elements of the n × k matrix X, where distance is measured in the Frobenius norm. Such problems arise, for example, when one is investigating factor models of collateralized debt obligations (CDOs) or multivariate time series. We examine several algorithms for solving the nearness problem that differ in whether or not they can take account of the nonlinear constraints and in their convergence properties. Our numerical experiments show that the performance of the methods depends strongly on the problem, but that, among our tested methods, the spectral projected gradient method is the clear winner.
In the second part we look at two two-sided optimization problems where the matrix of unknowns Y ∈ R^{n×p} lies in the Stiefel manifold. These two problems come from an application in atomic chemistry where one is looking for atomic orbitals with prescribed occupation numbers. We analyze these two problems, propose an analytic optimal solution of the first, and show that an optimal solution of the second problem can be found by solving a convex quadratic programming problem with box constraints and p unknowns. We prove that the latter problem can be solved by the active-set method in at most 2p iterations. Subsequently, we analyze the set of optimal solutions C = {Y ∈ R^{n×p} : Y^T Y = I_p, Y^T N Y = D} of the first problem for N symmetric and D diagonal, and find that a slight modification of it is a Riemannian manifold. We derive the geometric objects required to make optimization over this manifold possible. We propose an augmented Lagrangian-based algorithm that uses these geometric tools and allows us to optimize an arbitrary smooth function over C. This algorithm can be used to select a particular solution out of the set C by posing a new optimization problem. We compare it numerically with a similar algorithm that, however, does not apply these geometric tools, and find that our algorithm yields better performance.
The third part is devoted to low rank nearness problems in the Q-norm, where the matrix of interest is additionally of linear structure, meaning it lies in the set spanned by s predefined matrices U₁, ..., U_s ∈ {0,1}^{n×p}. These problems are often associated with model reduction, for example in speech encoding, filter design, or latent semantic indexing. We…
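The first nearness problem has a simple computational core: minimize ||C(X) − A||_F^2 with C(X) = diag(I − XX^T) + XX^T, where feasibility requires the rows of X to have norm at most one. A hedged projected gradient sketch (the thesis finds the spectral projected gradient method best; the plain step below is only illustrative, and the step size is an assumption):

```python
import numpy as np

def nearest_kfactor_corr(A, k, step=1e-2, iters=500, rng=None):
    """Projected gradient for min ||C(X) - A||_F^2,
    C(X) = diag(I - X X^T) + X X^T, subject to ||x_i||_2 <= 1 per row.

    A: symmetric target matrix; X: n-by-k factor loading matrix.
    Only the off-diagonal misfit matters, since diag(C(X)) = 1 always.
    """
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    X = 0.1 * rng.standard_normal((n, k))
    for _ in range(iters):
        R = X @ X.T - A
        np.fill_diagonal(R, 0.0)          # diagonal of C(X) is fixed at 1
        X -= step * 4.0 * R @ X           # gradient of the off-diagonal misfit
        norms = np.linalg.norm(X, axis=1)
        big = norms > 1
        X[big] = X[big] / norms[big][:, None]  # project rows into the unit ball
    C = np.diag(1.0 - np.sum(X**2, axis=1)) + X @ X.T
    return C, X
```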
Subjects/Keywords: 025.04; correlation matrix; factor structure; matrix embedding; Stiefel manifold; linearly structured matrix; Grassmannian manifold; low rank; optimization over manifolds; matrix nearness problems
APA (6th Edition):
Borsdorf, R. (2012). Structured matrix nearness problems : theory and algorithms. (Doctoral Dissertation). University of Manchester. Retrieved from https://www.research.manchester.ac.uk/portal/en/theses/structured-matrix-nearness-problemstheory-and-algorithms(554f944d-9a78-4b54-90c2-1ef06866c402).html ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.554165
Chicago Manual of Style (16th Edition):
Borsdorf, Ruediger. “Structured matrix nearness problems : theory and algorithms.” 2012. Doctoral Dissertation, University of Manchester. Accessed January 20, 2021.
https://www.research.manchester.ac.uk/portal/en/theses/structured-matrix-nearness-problemstheory-and-algorithms(554f944d-9a78-4b54-90c2-1ef06866c402).html ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.554165.
MLA Handbook (7th Edition):
Borsdorf, Ruediger. “Structured matrix nearness problems : theory and algorithms.” 2012. Web. 20 Jan 2021.
Vancouver:
Borsdorf R. Structured matrix nearness problems : theory and algorithms. [Internet] [Doctoral dissertation]. University of Manchester; 2012. [cited 2021 Jan 20].
Available from: https://www.research.manchester.ac.uk/portal/en/theses/structured-matrix-nearness-problemstheory-and-algorithms(554f944d-9a78-4b54-90c2-1ef06866c402).html ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.554165.
Council of Science Editors:
Borsdorf R. Structured matrix nearness problems : theory and algorithms. [Doctoral Dissertation]. University of Manchester; 2012. Available from: https://www.research.manchester.ac.uk/portal/en/theses/structured-matrix-nearness-problemstheory-and-algorithms(554f944d-9a78-4b54-90c2-1ef06866c402).html ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.554165

University of Michigan
22.
Nayar, Himanshu.
Application of Random Matrix Theory to Multimodal Fusion.
Degree: PhD, Electrical Engineering: Systems, 2017, University of Michigan
URL: http://hdl.handle.net/2027.42/140962
Multimodal data fusion is an interesting problem and its applications can be seen in image processing, signal processing and machine learning. In applications where we are given matrix data samples, for instance adjacency matrices of networks or preference matrices from recommender systems, it is desirable to extract trends from the data by using low rank representations of the matrices and finding low dimensional representations of the underlying entities.
In this thesis, we focus our attention on the problem of multimodal data fusion, with an interest in eigenvalue decomposition based algorithms. The contribution of this thesis is to introduce algorithms that, in a principled sense, combine the samples to generate inferences about the underlying signal components by leveraging recent results from Random Matrix Theory. The focus of our study is to understand these algorithms in terms of phase transition boundaries and to give sharp asymptotic bounds on their performance. To that end, we discuss data driven algorithms for data level fusion, OptFuse, and feature level fusion, OptEigenFuse, that give optimal inference about the latent eigenspace.
We then focus on the planted quasi-clique recovery problem and discuss how multiple independent network samples can be used to generate an optimal inference about the clique structure in large networks. We develop an optimal linear fusion rule in such a setting by using ideas from OptFuse and demonstrate the performance of this scheme. We also revisit a popular algorithm for tensor decomposition, HOSVD, to understand its performance limitations in the missing data setting.
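Linear fusion of independent matrix samples can be sketched in its simplest form: form a weighted average of the samples and read the latent structure off the leading eigenvector. The inverse-noise-variance weights below are a textbook choice used for illustration; the optimal weights of OptFuse come from random matrix theory and are not reproduced here:

```python
import numpy as np

def fuse_and_embed(samples, noise_vars):
    """Weighted linear fusion of noisy symmetric matrix samples.

    samples: list of n-by-n symmetric matrices, each = signal + noise.
    noise_vars: per-sample noise variances; weights ~ 1/variance.
    Returns the leading eigenvector of the fused matrix.
    """
    w = 1.0 / np.asarray(noise_vars)
    w /= w.sum()
    fused = sum(wi * Si for wi, Si in zip(w, samples))
    vals, vecs = np.linalg.eigh(fused)
    return vecs[:, -1]   # eigenvector of the largest eigenvalue
```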
Advisors/Committee Members: Nadakuditi, Raj Rao (committee member), Baik, Jinho (committee member), Hero III, Alfred O (committee member), Swami, Ananthram (committee member).
Subjects/Keywords: Data driven fusion; Random Matrix Theory; Factor analysis; Low rank decomposition; Clique recovery; Electrical Engineering; Engineering
APA (6th Edition):
Nayar, H. (2017). Application of Random Matrix Theory to Multimodal Fusion. (Doctoral Dissertation). University of Michigan. Retrieved from http://hdl.handle.net/2027.42/140962
Chicago Manual of Style (16th Edition):
Nayar, Himanshu. “Application of Random Matrix Theory to Multimodal Fusion.” 2017. Doctoral Dissertation, University of Michigan. Accessed January 20, 2021.
http://hdl.handle.net/2027.42/140962.
MLA Handbook (7th Edition):
Nayar, Himanshu. “Application of Random Matrix Theory to Multimodal Fusion.” 2017. Web. 20 Jan 2021.
Vancouver:
Nayar H. Application of Random Matrix Theory to Multimodal Fusion. [Internet] [Doctoral dissertation]. University of Michigan; 2017. [cited 2021 Jan 20].
Available from: http://hdl.handle.net/2027.42/140962.
Council of Science Editors:
Nayar H. Application of Random Matrix Theory to Multimodal Fusion. [Doctoral Dissertation]. University of Michigan; 2017. Available from: http://hdl.handle.net/2027.42/140962

Wayne State University
23.
Wang, Lijun.
Complex data analytics via sparse, low-rank matrix approximation.
Degree: PhD, Computer Science, 2012, Wayne State University
URL: https://digitalcommons.wayne.edu/oa_dissertations/583
Today, digital data is accumulated at a faster-than-ever speed in science, engineering, biomedicine, and real-world sensing. Data mining provides us an effective way to explore and analyze hidden patterns in these data for a broad spectrum of applications. Usually, these datasets share one prominent characteristic: they are tremendous in size, with tens of thousands of objects and features. In addition, data is not only collected over a period of time, but the relationships between data points can change over that period too. Moreover, knowledge is very sparsely encoded, because the patterns are usually active only in a local area. The ubiquitous phenomenon of massive, dynamic, and sparse data imposes considerable challenges on data mining research. Recently, techniques that can expand the human ability to comprehend large-scale data have attracted significant attention in the research community.
In this dissertation, we present approaches to solve the problems of complex data analysis in various applications. Specifically, we have achieved the following: 1) we develop Exemplar-based low-rank sparse Matrix Decomposition (EMD), a novel method for fast clustering of large-scale data which incorporates low-rank approximations into matrix decomposition-based clustering; 2) we propose ECKF, a general model for large-scale Evolutionary Clustering based on low-rank Kernel matrix Factorization; by monitoring the low-rank approximation errors at every time step, ECKF can analyze whether the underlying structure of the data or the nature of the relationships between data points has changed over different time steps, and based on this, a decision is made to either carry over the previous clustering or perform a new clustering; 3) we propose a Multi-level Low-rank Approximation (MLA) framework for fast spectral clustering, which is empirically shown to cluster large-scale data very efficiently; 4) we extend the MLA framework with a non-linear kernel and apply it to HD image segmentation; with sufficient data samples selected by a fast sampling strategy, our method shows superior performance compared with other leading approximate spectral clusterings; 5) we develop a fast algorithm to detect abnormal crowd behavior in surveillance videos by employing low-rank matrix approximations to model crowd behavior patterns; through experiments performed on simulated crowd videos, we demonstrate the effectiveness of our method.
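Low-rank approximation is the common accelerator in these clustering pipelines: replace the full affinity matrix with a cheap low-rank surrogate before the eigen-step. A hedged sketch using randomized range finding, a standard generic technique rather than the EMD/MLA constructions themselves:

```python
import numpy as np

def randomized_eig(K, k, oversample=10, rng=None):
    """Approximate top-k eigenpairs of a symmetric affinity matrix K.

    Randomized range finding: sketch K, orthonormalize, then solve the
    small projected problem. Cost is dominated by two passes over K.
    """
    rng = np.random.default_rng(rng)
    n = K.shape[0]
    G = rng.standard_normal((n, k + oversample))
    Q, _ = np.linalg.qr(K @ G)            # approximate range of K
    B = Q.T @ K @ Q                       # small projected eigenproblem
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:k]
    return vals[idx], Q @ vecs[:, idx]    # lift back to n dimensions
```

Spectral clustering then runs k-means on the rows of the returned eigenvector matrix, never forming a full eigendecomposition of K.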
Advisors/Committee Members: Ming Dong.
Subjects/Keywords: abnormal event detection, Big data analytics, clustering, evolutionary clustering, large-scale data analysis, low-rank matrix approximation; Computer Sciences
APA (6th Edition):
Wang, L. (2012). Complex data analytics via sparse, low-rank matrix approximation. (Doctoral Dissertation). Wayne State University. Retrieved from https://digitalcommons.wayne.edu/oa_dissertations/583
Chicago Manual of Style (16th Edition):
Wang, Lijun. “Complex data analytics via sparse, low-rank matrix approximation.” 2012. Doctoral Dissertation, Wayne State University. Accessed January 20, 2021.
https://digitalcommons.wayne.edu/oa_dissertations/583.
MLA Handbook (7th Edition):
Wang, Lijun. “Complex data analytics via sparse, low-rank matrix approximation.” 2012. Web. 20 Jan 2021.
Vancouver:
Wang L. Complex data analytics via sparse, low-rank matrix approximation. [Internet] [Doctoral dissertation]. Wayne State University; 2012. [cited 2021 Jan 20].
Available from: https://digitalcommons.wayne.edu/oa_dissertations/583.
Council of Science Editors:
Wang L. Complex data analytics via sparse, low-rank matrix approximation. [Doctoral Dissertation]. Wayne State University; 2012. Available from: https://digitalcommons.wayne.edu/oa_dissertations/583

Northeastern University
24.
Shao, Ming.
Efficient transfer feature learning and its applications on social media.
Degree: PhD, Department of Electrical and Computer Engineering, 2016, Northeastern University
URL: http://hdl.handle.net/2047/D20213068
In the era of social media, more and more social characteristics are conveyed by multimedia, i.e., images, videos, audio, and webpages with rich media information. With the emergence and popularity of the Internet, collecting multimedia data could not be easier than it is today. However, much of the data available online lacks organization, and its tags or labels are sparse or loosely organized for the particular problems we are interested in. The basic question is how to learn discriminant features or representations for social media analytics given very limited or poor training data.
In this thesis, we focus on popular social media data such as face, object, and digit images, and study the problems of social media analytics along two lines: (1) developing efficient and effective machine learning tools given limited or poor training data by considering the structure of data from different domains, and (2) applying existing or newly developed machine learning tools to novel social media problems, e.g., kinship verification and family photo understanding. These two lines are detailed as follows.
For knowledge-based machine learning algorithms, labels or tags are critical in training the discriminative model. However, labeling data is not an easy task, because the data are either too costly to obtain or too expensive to hand-label. For that reason, researchers use labeled, yet relevant, data from different databases to facilitate the learning process. This is exactly transfer learning, which studies how to transfer the knowledge gained from existing and well-established data (the source) to a new problem (the target). To this end, we propose two novel methods that align the structure of the source and target data in the learned subspace to mitigate the domain divergence, providing robust, scalable, and discriminant features for learning tasks. The first method utilizes a low-rank regularizer to guide the discriminant subspace learning. The second method manages to take advantage of unlabeled source data for efficient large-scale transfer learning.
The transfer learning approaches above are appropriate for social media analytics problems such as kinship verification. A critical observation is that faces of parents captured while they were young are more like their children's than images captured when they are old. Therefore, we can readily apply the proposed transfer learning methods to kinship verification, where the kin relation between young parent and child is the source problem, while that between old parent and child is the target. Promising research outcomes can be extended to real-world applications: family album management, image retrieval and annotation, missing children search, etc.
Subjects/Keywords: domain adaptation; kinship verification; low-rank matrix analysis; transfer learning; Computer vision; Machine learning; Social media; Biometric identification; Mathematical models; Matrices
APA (6th Edition):
Shao, M. (2016). Efficient transfer feature learning and its applications on social media. (Doctoral Dissertation). Northeastern University. Retrieved from http://hdl.handle.net/2047/D20213068
Chicago Manual of Style (16th Edition):
Shao, Ming. “Efficient transfer feature learning and its applications on social media.” 2016. Doctoral Dissertation, Northeastern University. Accessed January 20, 2021.
http://hdl.handle.net/2047/D20213068.
MLA Handbook (7th Edition):
Shao, Ming. “Efficient transfer feature learning and its applications on social media.” 2016. Web. 20 Jan 2021.
Vancouver:
Shao M. Efficient transfer feature learning and its applications on social media. [Internet] [Doctoral dissertation]. Northeastern University; 2016. [cited 2021 Jan 20].
Available from: http://hdl.handle.net/2047/D20213068.
Council of Science Editors:
Shao M. Efficient transfer feature learning and its applications on social media. [Doctoral Dissertation]. Northeastern University; 2016. Available from: http://hdl.handle.net/2047/D20213068

University of Illinois – Chicago
25.
Bhoi, Amlaan.
Invariant Kernels for Few-shot Learning.
Degree: 2019, University of Illinois – Chicago
URL: http://hdl.handle.net/10027/23714
Recent advances in few-shot learning algorithms focus on the development of meta-learning or on improvements in distance-based algorithms. However, the majority of these approaches do not take into consideration the robustness of the model to data augmented by various transforms. Transformations such as rotation, scaling, shifting, shearing and various other affine transforms exist which might negatively impact the performance of a trained model. While robustness to such variations can be instilled by random augmentation at training time, the computational cost incurred in training such models is significant.
In this thesis, we propose a general framework to induce a variety of invariances, such as rotation, shifting, shearing and more, in few-shot learning by extending orbit embeddings to patch-wise image representations, preserving the spatially invariant structure required by convolutional neural networks. However, these kernels can be computationally expensive when evaluated on millions of patches/samples in a dataset. Thus, we use Nyström's method to generate a low-rank approximation of the kernel matrix by sampling a smaller number of data points. Finally, the transformed features are passed as input to a modified Relation Network for few-shot learning. When analyzing the performance of these models under heavy test-time data augmentation, we show results that empirically prove the effectiveness of our approach compared to baseline methods.
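Nyström's method itself is standard: sample m landmark points, evaluate the kernel only against the landmarks, and invert the small landmark block. A minimal sketch (the RBF kernel and uniform landmark sampling are illustrative choices, not the orbit kernels of the thesis):

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    """Gaussian RBF kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_features(X, m, gamma=1.0, rng=None):
    """Rank-m Nyström feature map: K ~ Z Z^T with Z = K_nm W^{-1/2}."""
    rng = np.random.default_rng(rng)
    idx = rng.choice(len(X), size=m, replace=False)   # landmark points
    K_nm = rbf(X, X[idx], gamma)                      # n-by-m cross kernel
    W = rbf(X[idx], X[idx], gamma)                    # m-by-m landmark block
    vals, vecs = np.linalg.eigh(W)
    vals = np.maximum(vals, 1e-12)                    # guard tiny eigenvalues
    W_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return K_nm @ W_inv_sqrt                          # n-by-m features
```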
Advisors/Committee Members: Zhang, Xinhua (advisor), Berger-Wolf, Tanya (committee member), Sun, Xiaorui (committee member), Zhang, Xinhua (chair).
Subjects/Keywords: few-shot learning; invariance learning; adversarial learning; low-rank matrix approximation; image classification; deep learning; kernel learning
APA (6th Edition):
Bhoi, A. (2019). Invariant Kernels for Few-shot Learning. (Thesis). University of Illinois – Chicago. Retrieved from http://hdl.handle.net/10027/23714
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Chicago Manual of Style (16th Edition):
Bhoi, Amlaan. “Invariant Kernels for Few-shot Learning.” 2019. Thesis, University of Illinois – Chicago. Accessed January 20, 2021.
http://hdl.handle.net/10027/23714.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
MLA Handbook (7th Edition):
Bhoi, Amlaan. “Invariant Kernels for Few-shot Learning.” 2019. Web. 20 Jan 2021.
Vancouver:
Bhoi A. Invariant Kernels for Few-shot Learning. [Internet] [Thesis]. University of Illinois – Chicago; 2019. [cited 2021 Jan 20].
Available from: http://hdl.handle.net/10027/23714.
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation
Council of Science Editors:
Bhoi A. Invariant Kernels for Few-shot Learning. [Thesis]. University of Illinois – Chicago; 2019. Available from: http://hdl.handle.net/10027/23714
Note: this citation may be lacking information needed for this citation format:
Not specified: Masters Thesis or Doctoral Dissertation

University of Iowa
26.
Wang, Tianming.
Non-convex methods for spectrally sparse signal reconstruction via low-rank Hankel matrix completion.
Degree: PhD, Applied Mathematical and Computational Sciences, 2018, University of Iowa
URL: https://ir.uiowa.edu/etd/6331
Spectrally sparse signals arise in many applications of signal processing. A spectrally sparse signal is a mixture of a few undamped or damped complex sinusoids. An important problem in practice is to reconstruct such a signal from partial time domain samples. Previous convex methods have the drawback that their computation and storage costs do not scale well with the signal length, which restricts their applicability to large and high-dimensional signals. The reconstruction of a spectrally sparse signal from partial samples can be formulated as a low-rank Hankel matrix completion problem. We develop two fast and provable non-convex solvers, FIHT and PGD. FIHT is based on Riemannian optimization, while PGD is based on Burer-Monteiro factorization with projected gradient descent. Suppose the underlying spectrally sparse signal is of model order r and length n. We prove that O(r^2 log^2(n)) and O(r^2 log(n)) random samples are sufficient for FIHT and PGD, respectively, to achieve exact recovery with overwhelming probability. At every iteration, the computation and storage costs of both methods are linear in the signal length n; they are therefore suitable for handling spectrally sparse signals of large size, which may be prohibitive for previous convex methods. Extensive numerical experiments verify their recovery ability as well as their computational efficiency, and also show that the algorithms are robust to noise and to mis-specification of the model order. Comparing the two solvers, FIHT is faster on easier problems, while PGD has better recovery ability.
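The Hankel structure can be exploited with a simple alternating projection loop in the spirit of Cadzow denoising: lift the signal to a Hankel matrix, truncate to rank r, average anti-diagonals back to a signal, and re-impose the observed samples. This is a hedged illustration of the problem structure, not the FIHT or PGD algorithms of the thesis:

```python
import numpy as np
from scipy.linalg import hankel

def hankel_complete(x_obs, mask, r, iters=100):
    """Alternating projections for low-rank Hankel completion.

    x_obs: length-n complex signal with unobserved entries arbitrary.
    mask:  boolean array of observed positions.
    r:     model order (number of complex sinusoids).
    """
    n = len(x_obs)
    p = n // 2 + 1
    x = np.where(mask, x_obs, 0.0).astype(complex)
    for _ in range(iters):
        H = hankel(x[:p], x[p - 1:])              # lift to a Hankel matrix
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        H = (U[:, :r] * s[:r]) @ Vt[:r]           # rank-r truncation
        # Project back to Hankel structure: average each anti-diagonal.
        x = np.array([np.mean(H[::-1].diagonal(k))
                      for k in range(-H.shape[0] + 1, H.shape[1])])
        x[mask] = x_obs[mask]                     # re-impose observed samples
    return x
```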
Advisors/Committee Members: Cai, Jianfeng (supervisor).
Subjects/Keywords: low-rank Hankel matrix completion; NMR spectroscopy; projected gradient descent; Riemannian optimization; spectrally sparse signals; Applied Mathematics
APA (6th Edition):
Wang, T. (2018). Non-convex methods for spectrally sparse signal reconstruction via low-rank Hankel matrix completion. (Doctoral Dissertation). University of Iowa. Retrieved from https://ir.uiowa.edu/etd/6331
Chicago Manual of Style (16th Edition):
Wang, Tianming. “Non-convex methods for spectrally sparse signal reconstruction via low-rank Hankel matrix completion.” 2018. Doctoral Dissertation, University of Iowa. Accessed January 20, 2021.
https://ir.uiowa.edu/etd/6331.
MLA Handbook (7th Edition):
Wang, Tianming. “Non-convex methods for spectrally sparse signal reconstruction via low-rank Hankel matrix completion.” 2018. Web. 20 Jan 2021.
Vancouver:
Wang T. Non-convex methods for spectrally sparse signal reconstruction via low-rank Hankel matrix completion. [Internet] [Doctoral dissertation]. University of Iowa; 2018. [cited 2021 Jan 20].
Available from: https://ir.uiowa.edu/etd/6331.
Council of Science Editors:
Wang T. Non-convex methods for spectrally sparse signal reconstruction via low-rank Hankel matrix completion. [Doctoral Dissertation]. University of Iowa; 2018. Available from: https://ir.uiowa.edu/etd/6331

Rice University
27.
Darvish Rouhani, Bita.
A Resource-Aware Streaming-based Framework for Big Data Analysis.
Degree: MS, Engineering, 2015, Rice University
URL: http://hdl.handle.net/1911/87764
The ever growing body of digital data is challenging conventional analytical techniques in machine learning, computer vision, and signal processing. Traditional analytical methods have been developed mainly under the assumption that designers can work with data within the confines of their own computing environment. The growth of big data, however, is changing that paradigm, especially in scenarios where severe memory and computational resource constraints exist. This thesis aims at addressing major challenges in the big data learning problem by devising a new customizable computing framework that holistically takes into account the data structure and the underlying platform constraints. It targets a widely used class of analytical algorithms that model data dependencies by iteratively updating a set of matrix parameters, including but not limited to most regression methods, expectation maximization, and stochastic optimization, as well as emerging deep learning techniques. The key to our approach is a customizable, streaming-based data projection methodology that adaptively transforms data into a new lower-dimensional embedding by simultaneously considering both data and hardware characteristics. It enables scalable data analysis and rapid prototyping of an arbitrary matrix-based learning task using a sparse approximation of the collection that is constantly updated in line with data arrival. Our work is supported by a set of user-friendly Application Programming Interfaces (APIs) that ensure automated adaptation of the proposed framework to various datasets and System on Chip (SoC) platforms, including CPUs, GPUs, and FPGAs. Proof-of-concept evaluations using a variety of large contemporary datasets corroborate the practicability and scalability of our approach in resource-limited settings. For instance, our results demonstrate a 50-fold improvement over the best known prior art in terms of memory, energy, power, and runtime for training and executing deep learning models when deploying different sensing applications, including indoor localization and speech recognition, on constrained embedded platforms used in today's IoT-enabled devices such as autonomous vehicles, robots, and smartphones.
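Streaming-based projection can be sketched with a fixed random projection applied chunk by chunk, so the embedding is computed without ever holding the full matrix in memory. This is a generic sketch; the thesis' framework adapts the projection to data and hardware, which is not modeled here:

```python
import numpy as np

def streaming_project(chunks, d_in, d_out, rng=None):
    """Project a stream of row-chunks into a d_out-dimensional embedding.

    chunks: iterable of arrays with shape (batch, d_in), arriving online.
    A single Gaussian projection R is reused for every chunk, so memory
    stays O(d_in * d_out) regardless of how many rows stream through.
    """
    rng = np.random.default_rng(rng)
    R = rng.standard_normal((d_in, d_out)) / np.sqrt(d_out)
    for X in chunks:
        yield X @ R   # embedded rows, emitted as they arrive
```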
Advisors/Committee Members: Koushanfar, Farinaz (advisor), Aazhang, Behnaam (committee member), Baraniuk, Richard (committee member).
Subjects/Keywords: Streaming model; Big data; Dense matrix; Low-rank approximation; HW/SW co-design; Deep Learning; Scalable machine learning
APA (6th Edition):
Darvish Rouhani, B. (2015). A Resource-Aware Streaming-based Framework for Big Data Analysis. (Masters Thesis). Rice University. Retrieved from http://hdl.handle.net/1911/87764
Chicago Manual of Style (16th Edition):
Darvish Rouhani, Bita. “A Resource-Aware Streaming-based Framework for Big Data Analysis.” 2015. Masters Thesis, Rice University. Accessed January 20, 2021.
http://hdl.handle.net/1911/87764.
MLA Handbook (7th Edition):
Darvish Rouhani, Bita. “A Resource-Aware Streaming-based Framework for Big Data Analysis.” 2015. Web. 20 Jan 2021.
Vancouver:
Darvish Rouhani B. A Resource-Aware Streaming-based Framework for Big Data Analysis. [Internet] [Masters thesis]. Rice University; 2015. [cited 2021 Jan 20].
Available from: http://hdl.handle.net/1911/87764.
Council of Science Editors:
Darvish Rouhani B. A Resource-Aware Streaming-based Framework for Big Data Analysis. [Masters Thesis]. Rice University; 2015. Available from: http://hdl.handle.net/1911/87764
28.
Vinyes, Marina.
Convex matrix sparsity for demixing with an application to graphical model structure estimation : Parcimonie matricielle convexe pour les problèmes de démixage avec une application à l'apprentissage de structure de modèles graphiques.
Degree: Docteur es, Signal, Image, Automatique, 2018, Université Paris-Est
URL: http://www.theses.fr/2018PESC1130
In machine learning, the goal is to learn from data a model capable of making predictions on new, previously unseen data. To obtain a model that generalizes to new data and avoids overfitting, we must restrict the model. These restrictions usually encode a priori knowledge of the model's structure. The first approaches considered in the literature were Tikhonov regularization and, later, the Lasso, which induces sparsity in the solution. Sparsity, also known as parsimony, has emerged as a fundamental concept in machine learning. Sparse models are appealing because they provide more interpretability and better generalization (avoiding overfitting) by inducing a reduced number of parameters in the model. Beyond general sparsity, in many cases models are structurally constrained and have a simple representation in terms of certain fundamental elements, such as a collection of specific vectors, matrices or tensors. These fundamental elements are called atoms. In this context, atomic norms provide a general framework for estimating such models. The goal of this thesis is to use the convex sparsity framework provided by atomic norms to study a form of matrix sparsity. First, we develop an efficient algorithm based on Frank-Wolfe methods that is particularly suited to solving convex problems regularized by an atomic norm. We then focus on estimating the structure of Gaussian graphical models, where the model structure is encoded in the precision matrix, and we study the case with missing variables. We propose a convex formulation with an algorithmic approach and provide a theoretical result stating the conditions required to recover the desired structure. Finally, we consider the problem of demixing a signal into two or more components via minimization of a sum of norms or gauges, each encoding the a priori structure of the components to be recovered. In particular, we provide an exact recovery guarantee in the noise-free setting, based on incoherence measures.
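Frank-Wolfe is attractive for atomic norm constrained problems because its linear minimization oracle returns a single atom; for the nuclear norm ball, that atom is the top singular vector pair. A minimal sketch for a smooth objective over a nuclear norm ball (the step-size rule and radius are illustrative assumptions):

```python
import numpy as np

def frank_wolfe_nuclear(grad_f, X0, radius, iters=100):
    """Frank-Wolfe over the nuclear norm ball {X : ||X||_* <= radius}.

    grad_f: function X -> gradient of the smooth objective at X.
    The linear oracle is rank-one: -radius * u1 v1^T from the gradient's
    top singular pair, so iterates are sums of at most `iters` atoms.
    """
    X = X0.copy()
    for t in range(iters):
        G = grad_f(X)
        U, _, Vt = np.linalg.svd(G, full_matrices=False)
        S = -radius * np.outer(U[:, 0], Vt[0])   # best atom in the ball
        gamma = 2.0 / (t + 2.0)                  # standard FW step size
        X = (1 - gamma) * X + gamma * S
    return X

# Illustrative usage: for matrix completion with observed-entry mask,
# grad_f could be lambda X: np.where(mask, X - M, 0.0).
```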
Advisors/Committee Members: Komodakis, Nikos (thesis director).
Subjects/Keywords: Normes atomiques; Optimisation convexe; Parcimonie matricielle; Rang faible; Parcimonie; Atomic norms; Convex optimisation; Matrix sparsity; Low rank; Sparsity
APA (6th Edition):
Vinyes, M. (2018). Convex matrix sparsity for demixing with an application to graphical model structure estimation : Parcimonie matricielle convexe pour les problèmes de démixage avec une application à l'apprentissage de structure de modèles graphiques. (Doctoral Dissertation). Université Paris-Est. Retrieved from http://www.theses.fr/2018PESC1130
Chicago Manual of Style (16th Edition):
Vinyes, Marina. “Convex matrix sparsity for demixing with an application to graphical model structure estimation : Parcimonie matricielle convexe pour les problèmes de démixage avec une application à l'apprentissage de structure de modèles graphiques.” 2018. Doctoral Dissertation, Université Paris-Est. Accessed January 20, 2021.
http://www.theses.fr/2018PESC1130.
MLA Handbook (7th Edition):
Vinyes, Marina. “Convex matrix sparsity for demixing with an application to graphical model structure estimation : Parcimonie matricielle convexe pour les problèmes de démixage avec une application à l'apprentissage de structure de modèles graphiques.” 2018. Web. 20 Jan 2021.
Vancouver:
Vinyes M. Convex matrix sparsity for demixing with an application to graphical model structure estimation : Parcimonie matricielle convexe pour les problèmes de démixage avec une application à l'apprentissage de structure de modèles graphiques. [Internet] [Doctoral dissertation]. Université Paris-Est; 2018. [cited 2021 Jan 20].
Available from: http://www.theses.fr/2018PESC1130.
Council of Science Editors:
Vinyes M. Convex matrix sparsity for demixing with an application to graphical model structure estimation : Parcimonie matricielle convexe pour les problèmes de démixage avec une application à l'apprentissage de structure de modèles graphiques. [Doctoral Dissertation]. Université Paris-Est; 2018. Available from: http://www.theses.fr/2018PESC1130

University of Lund
29.
Grussler, Christian.
Rank Reduction with Convex Constraints.
Degree: 2017, University of Lund
URL: https://lup.lub.lu.se/record/54cb814f-59fe-4bc9-a7ef-773cbcf06889 ; https://portal.research.lu.se/ws/files/19595129/Thesis.pdf
▼ This thesis addresses problems which require low-rank solutions under convex constraints. In particular, the focus lies on model reduction of positive systems, as well as finite-dimensional optimization problems that are convex apart from a low-rank constraint. Traditional model reduction techniques try to minimize the error between the original and the reduced system; however, the resulting reduced models typically no longer fulfill physically meaningful constraints. This thesis considers the problem of model reduction with internal and external positivity constraints. Both problems are solved by means of balanced truncation. While internal positivity is shown to be preserved by a symmetry property, external positivity preservation is accomplished by deriving a modified balancing approach based on ellipsoidal cone invariance. In essence, positivity-preserving model reduction attempts to find an infinite-dimensional low-rank approximation that preserves nonnegativity as well as Hankel structure. Due to the non-convexity of the low-rank constraint, this problem is challenging even in a finite-dimensional setting. In addition to model reduction, the present work also considers such finite-dimensional low-rank optimization problems with convex constraints. These problems frequently appear in applications such as image compression, multivariate linear regression, matrix completion and many more. The main idea of this thesis is to derive the largest convex minorizers of rank-constrained unitarily invariant norms. These minorizers can be used to construct optimal convex relaxations for the original non-convex problem. Unlike other methods such as nuclear norm regularization, this approach benefits from having verifiable a posteriori conditions under which a solution to the convex relaxation and the corresponding non-convex problem coincide. It is shown that this applies to various numerical examples of well-known low-rank optimization problems; in particular, the proposed convex relaxation performs significantly better than nuclear norm regularization. Moreover, a careful choice among the proposed convex relaxations can have a tremendous positive impact on matrix completion. Computational tractability of the proposed approach is accomplished in two ways. First, the considered relaxations are shown to be representable by semidefinite programs. Second, it is shown how to compute the proximal mappings for both the convex relaxations and the non-convex problem. This makes it possible to apply first-order methods such as the so-called Douglas-Rachford splitting. In addition to the convex case, where global convergence of this algorithm is guaranteed, conditions for local convergence in the non-convex setting are presented. Finally, it is shown that the findings of this thesis also extend to the general class of so-called atomic norms, which allows other non-convex constraints to be covered.
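As background for the proximal mappings and Douglas-Rachford splitting mentioned above, the following sketch applies Douglas-Rachford iterations to the standard nuclear-norm relaxation of matrix completion (the baseline the thesis compares against, not the thesis's own minorizer-based relaxations); matrix sizes, penalty weight, and iteration count are assumptions for the example:

# Sketch under stated assumptions: Douglas-Rachford splitting for
#   minimize  lam*||X||_*  subject to  X agreeing with M on observed entries.
import numpy as np

def prox_nuclear(Z, lam):
    # Proximal mapping of lam*||.||_*: soft-threshold the singular values.
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

def project_observed(Z, M, mask):
    # Prox of the indicator of {X : X[mask] = M[mask]}: reset known entries.
    X = Z.copy()
    X[mask] = M[mask]
    return X

def dr_complete(M, mask, lam=0.5, iters=300):
    # Douglas-Rachford iteration on the splitting f + g, with f the
    # data-consistency indicator and g the nuclear norm.
    Z = np.zeros_like(M)
    for _ in range(iters):
        X = project_observed(Z, M, mask)      # prox_f
        Y = prox_nuclear(2.0 * X - Z, lam)    # prox_g at the reflection
        Z = Z + Y - X                         # averaged update
    return project_observed(Z, M, mask)

# Toy run: a rank-2 matrix observed on roughly 60% of its entries.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
mask = rng.random(A.shape) < 0.6
A_hat = dr_complete(A, mask)
print("relative error:", np.linalg.norm(A_hat - A) / np.linalg.norm(A))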
Subjects/Keywords: Control Engineering; low-rank approximation; model reduction; non-convex optimization; Douglas-Rachford; matrix completion; overlapping norm; k-support norm; atomic norm

University of Lund
30.
Jiang, Fangyuan.
Low Rank Matrix Factorization and Relative Pose Problems in Computer Vision.
Degree: 2015, University of Lund
URL: https://lup.lub.lu.se/record/5368358 ; https://portal.research.lu.se/ws/files/5379996/5368400.pdf
► This thesis is focused on geometric computer vision problems. The first part of the thesis aims at solving one fundamental problem, namely low-rank matrix factorization.…
(more)
▼ This thesis is focused on geometric computer vision problems. The first part of the thesis aims at solving one fundamental problem, namely low-rank matrix factorization. We provide several novel insights into the problem: in brief, we characterize, generate, parametrize and solve the minimal problems associated with low-rank matrix factorization. Beyond that, we give several new algorithms based on the minimal solvers for the cases where the measurement matrix is sparse, noisy, or contaminated with outliers. The cost function and the algorithm can easily be adapted to several robust norms, for example the L1-norm and the truncated L1-norm. We demonstrate our approach on several geometric computer vision problems; another application, sensor network calibration, is also explored. The second part of the thesis deals with the relative pose problem. We solve the minimal problem of estimating the relative pose with unknown focal length and radial distortion. Beyond that, we also propose a brute-force approach which does not suffer from common algorithmic degeneracies; further, this algorithm achieves a globally optimal solution up to a discretization error and is easily parallelizable. Finally, we look into the problem of object detection with unknown pose.
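To make the factorization setting concrete, here is a minimal alternating-least-squares sketch for the weighted L2 formulation of low-rank factorization; this is an editor's illustration under that assumption, and the thesis's minimal solvers and robust L1 / truncated-L1 variants would replace the inner least-squares step:

# Sketch under stated assumptions: plain alternating least squares for the
# weighted L2 low-rank factorization  min ||W .* (M - U @ V.T)||_F.
import numpy as np

def als_factorize(M, W, r, iters=100, seed=0):
    # Alternate closed-form least-squares updates of U and V, using only
    # the entries marked observed in the 0/1 weight matrix W.
    m, n = M.shape
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((m, r))
    V = rng.standard_normal((n, r))
    for _ in range(iters):
        for i in range(m):                 # row-wise update of U
            w = W[i] == 1
            if w.any():
                U[i], *_ = np.linalg.lstsq(V[w], M[i, w], rcond=None)
        for j in range(n):                 # column-wise update of V
            w = W[:, j] == 1
            if w.any():
                V[j], *_ = np.linalg.lstsq(U[w], M[w, j], rcond=None)
    return U, V

# Toy run: a rank-2 matrix with roughly 30% of its entries missing.
M = np.outer(np.arange(1.0, 7.0), np.arange(1.0, 6.0)) + 1.0   # rank 2
W = (np.random.default_rng(1).random(M.shape) < 0.7).astype(int)
U, V = als_factorize(M, W, r=2)
print("max error on observed entries:", np.abs(M - U @ V.T)[W == 1].max())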
Subjects/Keywords: Mathematics; Computer Vision and Robotics (Autonomous Systems); Geometric Computer Vision; Low-rank Matrix Factorization; Relative Pose