You searched for +publisher:"Georgia Tech" +contributor:("Balcan, Maria-Florina").
Showing records 1 – 16 of 16 total matches. No search limiters apply to these results.
1. Balasubramanian, Krishnakumar. Learning matrix and functional models in high-dimensions.
Degree: PhD, Computer Science, 2014, Georgia Tech
URL: http://hdl.handle.net/1853/52284
Statistical machine learning methods provide us with a principled framework for extracting meaningful information from noisy high-dimensional data sets. A significant feature of such procedures is that the inferences made are statistically significant, computationally efficient and scientifically meaningful. In this thesis we make several contributions to such statistical procedures. Our contributions are two-fold.
We first address prediction and estimation problems in non-standard situations. We show that even when given no access to labeled samples, one can still consistently estimate the error rate of predictors and train predictors with respect to a given (convex) loss function. We next propose an efficient procedure for predicting with large output spaces that scales logarithmically in the dimensionality of the output space. We further propose an asymptotically optimal procedure for sparse multi-task learning when the tasks share a joint support. We show consistency of the proposed method and derive rates of convergence.
We next address the problem of learning meaningful representations of data. We propose a method for learning sparse representations that takes into account the structure of the data space and demonstrate how it enables one to obtain meaningful features. We establish sample complexity results for the proposed approach. We then propose a model-free feature selection procedure and establish its sure-screening property in the high-dimensional regime. Furthermore, we show that with a slight modification, the approach previously proposed for sparse multi-task learning enables one to obtain sparse representations for multiple related tasks simultaneously.
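The "joint support" multi-task setting mentioned above is commonly handled with a group-Lasso-style penalty. The sketch below is a generic proximal-gradient illustration of that idea, not the thesis's own procedure; all data, parameter values, and the `row_shrink` helper are invented for the example.

```python
import numpy as np

# Illustrative only: multi-task least squares with a row-wise (group-Lasso)
# penalty, so all tasks share one feature support. Data are synthetic.
rng = np.random.default_rng(0)
n, d, T = 100, 20, 3                      # samples, features, tasks
W_true = np.zeros((d, T))
W_true[:4] = rng.normal(size=(4, T))      # all tasks use the same 4 features
X = rng.normal(size=(n, d))
Y = X @ W_true + 0.01 * rng.normal(size=(n, T))

def row_shrink(W, t):
    """Row-wise soft-thresholding: zeroes whole feature rows, which is what
    enforces a support shared across tasks."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return W * np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)

lam = 0.1
step = n / np.linalg.norm(X, 2) ** 2      # 1 / Lipschitz constant of the gradient
W = np.zeros((d, T))
for _ in range(500):
    grad = X.T @ (X @ W - Y) / n          # gradient of the averaged squared loss
    W = row_shrink(W - step * grad, step * lam)

support = np.flatnonzero(np.linalg.norm(W, axis=1) > 1e-6)
```

On this synthetic problem, the rows of `W` outside the shared support are driven exactly to zero, so `support` identifies the common features.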
Advisors/Committee Members: Lebanon, Guy (advisor), Balcan, Maria-Florina (committee member), Song, Le (committee member), Lafferty, John (committee member), Yuan, Ming (committee member).
Subjects/Keywords: Statistics; Machine learning; Matrix; Kernel; Consistency
2. Ganti Mahapatruni, Ravi Sastry. New formulations for active learning.
Degree: PhD, Computer Science, 2014, Georgia Tech
URL: http://hdl.handle.net/1853/51801
In this thesis, we provide computationally efficient algorithms with provable statistical guarantees for the problem of active learning, using ideas from sequential analysis. We provide a generic algorithmic framework for active learning in the pool setting, and instantiate this framework using ideas from learning with experts, stochastic optimization, and multi-armed bandits. For the problem of learning a convex combination of a given set of hypotheses, we provide a stochastic mirror descent based active learning algorithm in the stream setting.
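For intuition on the "stochastic mirror descent over convex combinations" ingredient: with the entropy mirror map, the mirror step is an exponentiated-gradient update followed by renormalization, which keeps the weights on the probability simplex. The sketch below is a generic illustration with invented data and hypotheses, not the thesis algorithm (which is an active learner).

```python
import numpy as np

# Hedged toy: stochastic mirror descent (entropy mirror map) to learn a
# convex combination of fixed hypotheses h_c(x) = c * x from a stream.
rng = np.random.default_rng(0)
n = 2000
x_stream = rng.normal(size=n)
y_stream = 0.7 * x_stream + 0.05 * rng.normal(size=n)
slopes = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])   # the hypothesis set

w = np.full(len(slopes), 1.0 / len(slopes))      # uniform start on the simplex
for t in range(n):
    h = slopes * x_stream[t]                     # each hypothesis's prediction
    err = h @ w - y_stream[t]
    grad = 2.0 * err * h                         # gradient of squared loss in w
    w = w * np.exp(-(0.5 / np.sqrt(t + 1)) * grad)  # exponentiated-gradient step
    w /= w.sum()                                 # Bregman projection = renormalize

effective_slope = float(slopes @ w)              # should approach 0.7
```

The multiplicative update never leaves the simplex, so no explicit projection step is needed beyond renormalization.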
Advisors/Committee Members: Gray, Alexander (advisor), Balcan, Maria-Florina (committee member), Song, Le (committee member), Rakhlin, Alexander (committee member), Zhang, Tong (committee member).
Subjects/Keywords: Active learning; Sequential analysis; Stochastic optimization; Algorithms; Mathematical optimization; Machine learning
3. He, Niao. Saddle point techniques in convex composite and error-in-measurement optimization.
Degree: PhD, Industrial and Systems Engineering, 2015, Georgia Tech
URL: http://hdl.handle.net/1853/54400
This dissertation aims to develop efficient algorithms with improved scalability and stability properties for large-scale optimization and optimization under uncertainty, and to bridge some of the gaps between modern optimization theories and recent applications emerging in the Big Data environment. To this end, the dissertation is dedicated to two important subjects – i) Large-scale Convex Composite Optimization and ii) Error-in-Measurement Optimization. In spite of the different natures of these two topics, the common denominator, to be presented, lies in their accommodation for systematic use of saddle point techniques for mathematical modeling and numerical processing. The main body can be split into three parts.
In the first part, we consider a broad class of variational inequalities with composite structure, covering the saddle point/variational analogues of classical convex composite minimization (i.e., the sum of a smooth convex function and a simple nonsmooth convex function). We develop novel composite versions of the state-of-the-art Mirror Descent and Mirror Prox algorithms aimed at solving this type of problem. We demonstrate that the algorithms inherit the favorable efficiency estimates of their prototypes when solving structured variational inequalities. Moreover, we develop several variants of the composite Mirror Prox algorithm along with their corresponding complexity bounds, allowing the algorithm to handle the case of imprecise prox mappings as well as the case when the operator is represented by an unbiased stochastic oracle.
In the second part, we investigate four general types of large-scale convex composite optimization problems: (a) multi-term composite minimization, (b) linearly constrained composite minimization, (c) norm-regularized nonsmooth minimization, and (d) maximum likelihood Poisson imaging. We demonstrate that the composite Mirror Prox, when integrated with saddle point techniques and other algorithmic tools, can solve all of these problems with the best rates of convergence known so far. Our main contributions are as follows. First, for problems of type (a), we develop an optimal algorithm by integrating the composite Mirror Prox with a saddle point reformulation based on exact penalty. Second, for problems of type (b), we develop a novel algorithm that reduces the problem to solving a ``small series'' of saddle point subproblems and achieves an optimal, up to log factors, complexity bound. Third, for problems of type (c), we develop a Semi-Proximal Mirror-Prox algorithm by leveraging the saddle point representation and linear minimization over the problem's domain, attaining optimality both in the number of calls to the first-order oracle representing the objective and in the number of calls to the linear minimization oracle representing the problem's domain. Lastly, for problem (d), we show that the composite Mirror Prox applied to the saddle point reformulation circumvents the difficulty with non-Lipschitz…
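For readers unfamiliar with Mirror Prox: the sketch below runs the basic (non-composite) extragradient scheme, which the composite variants above build on, on a bilinear saddle point min_x max_y xᵀAy over two probability simplexes (a matrix game). The matrix, step size, and iteration count are arbitrary demo choices, not from the dissertation.

```python
import numpy as np

# Hedged toy: plain Mirror Prox with entropy prox steps on a matrix game.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))

def entropy_step(p, g, eta):
    """Entropy-prox step on the simplex: multiplicative update, renormalize."""
    q = p * np.exp(-eta * g)
    return q / q.sum()

m, n = A.shape
x, y = np.full(m, 1 / m), np.full(n, 1 / n)
x_sum, y_sum = np.zeros(m), np.zeros(n)
eta, T = 0.1, 3000
for _ in range(T):
    # 1) extrapolation step from the current point
    xh = entropy_step(x, A @ y, eta)       # x minimizes: follows +gradient
    yh = entropy_step(y, -A.T @ x, eta)    # y maximizes: follows -gradient
    # 2) actual update uses the operator evaluated at the extrapolated point
    x = entropy_step(x, A @ yh, eta)
    y = entropy_step(y, -A.T @ xh, eta)
    x_sum += xh                            # average the extrapolated iterates
    y_sum += yh
x_bar, y_bar = x_sum / T, y_sum / T

# Duality gap of the averaged iterates; Mirror Prox drives this to 0 at O(1/T).
gap = float(np.max(x_bar @ A) - np.min(A @ y_bar))
```

The "evaluate at the extrapolated point" trick is what buys the O(1/T) rate for smooth saddle-point problems, versus O(1/√T) for plain mirror descent.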
Advisors/Committee Members: Nemirovski, Arkadi (advisor), Shapiro, Alexander (committee member), Ahmed, Shabbir (committee member), Balcan, Maria-Florina (committee member), Kleywegt, Anton (committee member).
Subjects/Keywords: Nonsmooth optimization; Composite minimization; First order methods; Stochastic optimization; Mirror prox
4. Li, Yaxian. Lower bounds for integer programming problems.
Degree: PhD, Industrial and Systems Engineering, 2013, Georgia Tech
URL: http://hdl.handle.net/1853/48959
Solving real world problems with mixed integer programming (MIP) involves efforts in modeling and efficient algorithms. To solve a minimization MIP problem, a lower bound is needed in a branch-and-bound algorithm to evaluate the quality of a feasible solution and to improve the efficiency of the algorithm. This thesis develops a new MIP model and studies algorithms for obtaining lower bounds for MIP.
The first part of the thesis is dedicated to a new production planning model with pricing decisions. To increase profit, a company can use pricing to influence its demand to increase revenue, decrease cost, or both. We present a model that uses pricing discounts to increase production and delivery flexibility, which helps to decrease costs. Although the revenue can be hurt by introducing pricing discounts, the total profit can be increased by properly choosing the discounts and production and delivery decisions. We further explore the idea with variations of the model and present the advantages of using flexibility to increase profit.
The second part of the thesis focuses on solving integer programming (IP) problems by improving lower bounds. Specifically, we consider obtaining lower bounds for the multi-dimensional knapsack problem (MKP). Because MKP lacks special structure, it allows us to consider general methods for obtaining lower bounds for IP, including various relaxation algorithms. A problem relaxation is achieved by either enlarging the feasible region or decreasing the value of the objective function on the feasible region. In addition, dual algorithms, which work directly on the dual problems, can also be used to obtain lower bounds.
We first present some characteristics of the value function of MKP and extend some properties from the knapsack problem to MKP. The properties of MKP allow some large-scale problems to be reduced to smaller ones. In addition, the quality of corner relaxation bounds of MKP is considered. We explore conditions under which the corner relaxation is tight for MKP, such that relaxing some of the constraints does not affect the quality of the lower bounds. To evaluate the overall tightness of the corner relaxation, we also show the worst-case gap of the corner relaxation for MKP.
To identify parameters that contribute the most to the hardness of MKP and further evaluate the quality of lower bounds obtained from various algorithms, we analyze the characteristics that impact the hardness of MKP with a series of computational tests and establish a testbed of instances for computational experiments in the thesis.
Next, we examine the lower bounds obtained from various relaxation algorithms computationally. We study methods of choosing constraints for relaxations that produce high-quality lower bounds. We use information obtained from linear relaxations to choose constraints to relax. However, for many hard instances, choosing the right constraints can be challenging, due to the inaccuracy of the LP information. We thus develop a dual heuristic algorithm that…
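The basic relaxation fact used throughout: dropping integrality enlarges the feasible region, so the LP relaxation's optimum is a valid lower bound for a minimization IP. The tiny covering instance below is made up for illustration, not taken from the thesis.

```python
import numpy as np
from itertools import product
from scipy.optimize import linprog

c = np.array([3.0, 5.0, 4.0])            # minimize 3x1 + 5x2 + 4x3
A_ub = np.array([[-2.0, -3.0, -1.0]])    # encodes 2x1 + 3x2 + x3 >= 4
b_ub = np.array([-4.0])

# LP relaxation: x in [0,1]^3 instead of {0,1}^3 (a larger feasible region).
lp = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * 3, method="highs")
lp_bound = lp.fun                        # = 19/3, attained at x = (1, 2/3, 0)

# Brute-force the integer optimum for comparison (3 binaries = 8 points).
ip_opt = min(
    float(c @ np.array(z))
    for z in product([0, 1], repeat=3)
    if (A_ub @ np.array(z) <= b_ub + 1e-9).all()
)
```

Here the LP bound is 19/3 ≈ 6.33 while the integer optimum is 8 at x = (1, 1, 0); the gap between the two is exactly what branch-and-bound and the stronger relaxations studied in the thesis work to close.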
Advisors/Committee Members: Nemhauser, George L. (advisor), Ergun, Ozlem (advisor), Savelsbergh, Martin (committee member), Johnson, Ellis (committee member), Balcan, Maria-Florina (committee member).
Subjects/Keywords: Lower bounds; Integer programming problems; Multi-dimensional knapsack problem; Algorithms; Knapsack problem (Mathematics); Integer programming
5. Mehta, Nishant A. On sparse representations and new meta-learning paradigms for representation learning.
Degree: PhD, Computer Science, 2013, Georgia Tech
URL: http://hdl.handle.net/1853/52159
Given the "right" representation, learning is easy. This thesis studies representation learning and meta-learning, with a special focus on sparse representations. Meta-learning is fundamental to machine learning, and it translates to learning to learn itself. The presentation unfolds in two parts. In the first part, we establish learning theoretic results for learning sparse representations. The second part introduces new multi-task and meta-learning paradigms for representation learning.
On the sparse representations front, our main pursuits are generalization error bounds to support a supervised dictionary learning model for Lasso-style sparse coding. Such predictive sparse coding algorithms have been applied with much success in the literature; even more common have been applications of unsupervised sparse coding followed by supervised linear hypothesis learning. We present two generalization error bounds for predictive sparse coding, handling the overcomplete setting (more original dimensions than learned features) and the infinite-dimensional setting. Our analysis led to a fundamental stability result for the Lasso that shows the stability of the solution vector to design matrix perturbations. We also introduce and analyze new multi-task models for (unsupervised) sparse coding and predictive sparse coding, allowing for one dictionary per task but with sharing between the tasks' dictionaries.
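The Lasso-style sparse coding step at the heart of these bounds is: given a fixed dictionary D, the code for a signal x minimizes ½‖x − Dz‖² + λ‖z‖₁. The sketch below solves it with plain iterative soft-thresholding (ISTA) on a random (not learned) dictionary; all sizes and constants are invented for the example.

```python
import numpy as np

# Minimal sparse coding sketch: recover a 3-sparse code from an
# overcomplete random dictionary via ISTA. Illustrative only.
rng = np.random.default_rng(0)
d, k = 30, 50                             # overcomplete: more atoms than dims
D = rng.normal(size=(d, k)) / np.sqrt(d)  # columns ("atoms") have norm ~ 1
z_true = np.zeros(k)
z_true[[3, 17, 41]] = [1.5, -2.0, 1.0]    # the planted 3-sparse code
x = D @ z_true + 0.01 * rng.normal(size=d)

def soft(v, t):
    """Elementwise soft-thresholding, the prox of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

lam = 0.02
step = 1.0 / np.linalg.norm(D, 2) ** 2    # 1 / Lipschitz constant of the gradient
z = np.zeros(k)
for _ in range(3000):
    z = soft(z - step * D.T @ (D @ z - x), step * lam)

support = np.flatnonzero(np.abs(z) > 1e-3)
```

With an incoherent dictionary and low noise, the Lasso solution concentrates on the planted atoms, which is the setting in which stability and generalization results of the kind described above apply.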
The second part introduces new meta-learning paradigms to realize unprecedented types of learning guarantees for meta-learning. Specifically sought are guarantees on a meta-learner's performance on new tasks encountered in an environment of tasks. Nearly all previous work produced bounds on the expected risk, whereas we produce tail bounds on the risk, thereby providing performance guarantees on the risk for a single new task drawn from the environment. The new paradigms include minimax multi-task learning (minimax MTL) and sample variance penalized meta-learning (SVP-ML). Regarding minimax MTL, we provide a high probability learning guarantee on its performance on individual tasks encountered in the future, the first of its kind. We also present two continua of meta-learning formulations, each interpolating between classical multi-task learning and minimax multi-task learning. The idea of SVP-ML is to minimize the task average of the training tasks' empirical risks plus a penalty on their sample variance. Controlling this sample variance can potentially yield a faster rate of decrease for upper bounds on the expected risk of new tasks, while also yielding high probability guarantees on the meta-learner's average performance over a draw of new test tasks. An algorithm is presented for SVP-ML with feature selection representations, as well as a quite natural convex relaxation of the SVP-ML objective.
Advisors/Committee Members: Isbell, Charles L. (advisor), Gray, Alexander G. (committee member), Lebanon, Guy (committee member), Balcan, Maria-Florina (committee member), Zhang, Tong (committee member).
Subjects/Keywords: Learning theory; Data-dependent complexity; Luckiness; Dictionary learning; Sparse coding; Lasso; Multi-task learning; Meta-learning; Learning to learn
6. Berlind, Christopher. New insights on the power of active learning.
Degree: PhD, Computer Science, 2015, Georgia Tech
URL: http://hdl.handle.net/1853/53948
Traditional supervised machine learning algorithms are expected to have access to a large corpus of labeled examples, but the massive amount of data available in the modern world has made unlabeled data much easier to acquire than accompanying labels. Active learning is an extension of the classical paradigm intended to lessen the expense of the labeling process by allowing the learning algorithm to intelligently choose which examples should be labeled.
In this dissertation, we demonstrate that the power to make adaptive label queries has benefits beyond reducing labeling effort over passive learning. We develop and explore several novel methods for active learning that exemplify these new capabilities. Some of these methods use active learning for a non-standard purpose, such as computational speedup, structure discovery, and domain adaptation. Others successfully apply active learning in situations where prior results have given evidence of its ineffectiveness.
Specifically, we first give an active algorithm for learning disjunctions that is able to overcome a computational intractability present in the semi-supervised version of the same problem. This is the first known example of the computational advantages of active learning. Next, we investigate using active learning to determine structural properties (margins) of the data-generating distribution that can further improve learning rates. This is in contrast to most active learning algorithms which either assume or ignore structure rather than seeking to identify and exploit it. We then give an active nearest neighbors algorithm for domain adaptation, the task of learning a predictor for some target domain using mostly examples from a different source domain. This is the first formal analysis of the generalization and query behavior of an active domain adaptation algorithm. Finally, we show a situation where active learning can outperform passive learning on very noisy data, circumventing prior results that active learning cannot have a significant advantage over passive learning in high-noise regimes.
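The classic textbook illustration of the label savings discussed above (not one of this thesis's algorithms): to learn a 1-D threshold classifier over a pool of n points, passive learning needs labels throughout the pool, while active binary search locates the decision boundary with O(log n) queries. The pool, threshold, and oracle below are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
pool = np.sort(rng.uniform(size=1000))    # unlabeled pool, sorted
theta = 0.62                              # unknown true threshold

queries = 0
def oracle(x):                            # each call = one costly label request
    global queries
    queries += 1
    return x >= theta

# Binary search: invariant is that everything left of lo is labeled negative
# and pool[hi] is known (or guaranteed) to be positive.
lo, hi = 0, len(pool) - 1
while lo < hi:
    mid = (lo + hi) // 2
    if oracle(pool[mid]):
        hi = mid                          # boundary is at mid or earlier
    else:
        lo = mid + 1                      # boundary is strictly later
boundary = float(pool[lo])                # first positive pool point
```

Ten or so adaptive queries suffice where passive learning would need on the order of a thousand labels to pin down the same boundary.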
Advisors/Committee Members: Balcan, Maria-Florina (advisor), Song, Le (advisor), Vempala, Santosh (committee member), Blum, Avrim (committee member), Isbell, Charles L. (committee member).
Subjects/Keywords: Machine learning; Learning theory; Active learning; Semi-supervised learning; Domain adaptation; Large margin learning
7. Gui, Luyi. Managing and optimizing decentralized networks with resource sharing.
Degree: PhD, Industrial and Systems Engineering, 2013, Georgia Tech
URL: http://hdl.handle.net/1853/47707
Resource sharing is a common collaborative strategy used in practice. It has the potential to create synergistic value and lead to higher system efficiency. However, realizing this synergistic value can be challenging given the prevalence of decentralization in practice, where individual operators manage resources based on their own benefits. Hence, optimizing a decentralized system requires understanding not only the optimal operational strategy in terms of overall system efficiency, but also the implementation of that strategy through proper management of individual incentives. However, traditional network optimization approaches typically assume a centralized perspective. The classic game theory framework, on the other hand, addresses the incentive issues of decentralized decision makers, but mainly takes a high-level, economic perspective that does not fully capture the operational complexity involved in optimizing systems with resource sharing.
The purpose of this thesis is to bridge this gap between practice and theory by studying the design of tools to manage and optimize the operations in decentralized systems with resource sharing using approaches that combine optimization and game theory. In particular, we focus on decentralized network systems and analyze two research streams in two application domains: (i) implementation of environmental legislation, and (ii) managing collaborative transportation systems. These applications are characterized by their decentralized multi-stakeholder nature where the conflicts and tension between the heterogeneous individual perspectives make system management very challenging. The main methodology used in this thesis is to adopt game theory models where individual decisions are endogenized as the solutions to network optimization problems that reflect their incentives. Such an approach allows us to capture the connection between the operational features of the system (e.g., capacity configuration, network structure, synergy level from resource sharing) and the individual incentives thus the effectiveness of the management tools, which is a main research contribution of this thesis.
In the first research stream, we consider designing effective, efficient and practical implementations of electronic waste take-back legislation based on the widely adopted Extended Producer Responsibility (EPR) concept, which mandates producers' financial responsibility for the post-use treatment of their products. Typical implementations of EPR are collective, and allocate the resulting operating cost to the producers involved. In this thesis, we demonstrate the complexity of collective EPR implementation due to the tension among different stakeholder perspectives, based on a case analysis of the Washington implementation. We then perform analytical studies of the two prominent challenges identified in current implementations: (i) developing cost allocation mechanisms that induce the voluntary participation of all producers in a collective system, thus promoting implementation efficiency; and (ii)…
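As a generic illustration of cost allocation in a cooperative game (the thesis designs mechanisms specific to collective EPR systems; this standard rule and the numbers below are only an example of the concept), here is the Shapley value for a tiny three-producer cost game:

```python
from itertools import permutations

# Hypothetical cost function: cost[S] = cost of coalition S operating jointly.
cost = {
    frozenset(): 0.0,
    frozenset({"A"}): 6.0, frozenset({"B"}): 7.0, frozenset({"C"}): 9.0,
    frozenset({"A", "B"}): 10.0, frozenset({"A", "C"}): 12.0,
    frozenset({"B", "C"}): 13.0, frozenset({"A", "B", "C"}): 15.0,
}
players = ["A", "B", "C"]

def shapley(cost, players):
    """Average each player's marginal cost over every join order."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            phi[p] += cost[coalition | {p}] - cost[coalition]
            coalition = coalition | {p}
    return {p: v / len(orders) for p, v in phi.items()}

alloc = shapley(cost, players)   # sums to the grand-coalition cost of 15
```

Because joint operation is cheaper than stand-alone operation here, each producer's allocated share is below its stand-alone cost, which is the kind of property that makes voluntary participation in a collective system individually rational.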
Advisors/Committee Members: Ergun, Ozlem (Committee Chair), Atasu, Atalay (Committee Co-Chair), Toktay, L. Beril (Committee Co-Chair), Balcan, Maria-Florina (Committee Member), Nemhauser, George (Committee Member).
Subjects/Keywords: Legislation; Mechanism design; Resource sharing; Decentralized networks; Optimization; EPR; Game theory; Cooperative games (Mathematics); Self-interest; Mathematical optimization
8. Minsker, Stanislav. Non-asymptotic bounds for prediction problems and density estimation.
Degree: PhD, Mathematics, 2012, Georgia Tech
URL: http://hdl.handle.net/1853/44808
This dissertation investigates the learning scenarios where a high-dimensional parameter has to be estimated from a given sample of fixed size, often smaller than the dimension of the problem. The first part answers some open questions for the binary classification problem in the framework of active learning.
Given a random couple (X,Y) with unknown distribution P, the goal of binary classification is to predict a label Y based on the observation X. A prediction rule is constructed from a sequence of observations sampled from P. The concept of active learning can be informally characterized as follows: on every iteration, the algorithm is allowed to request a label Y for any instance X which it considers to be the most informative. The contribution of this work consists of two parts: first, we provide minimax lower bounds for the performance of active learning methods. Second, we propose an active learning algorithm which attains nearly optimal rates over a broad class of underlying distributions and is adaptive with respect to the unknown parameters of the problem.
The second part of this thesis is related to sparse recovery in the framework of dictionary learning. Let (X,Y) be a random couple with unknown distribution P. Given a collection of functions H, the goal of dictionary learning is to construct a prediction rule for Y given by a linear combination of the elements of H. The problem is sparse if there exists a good prediction rule that depends on a small number of functions from H. We propose an estimator of the unknown optimal prediction rule based on a penalized empirical risk minimization algorithm. We show that the proposed estimator is able to take advantage of the possible sparse structure of the problem by providing probabilistic bounds for its performance.
Advisors/Committee Members: Koltchinskii, Vladimir (Committee Chair), Bakhtin, Yuri (Committee Member), Balcan, Maria-Florina (Committee Member), Houdre, Christian (Committee Member), Romberg, Justin (Committee Member).
Subjects/Keywords: Active learning; Sparse recovery; Oracle inequality; Confidence bands; Infinite dictionary; Estimation theory; Asymptotic theory; Distribution (Probability theory); Prediction theory; Algorithms; Mathematical optimization; Chebyshev approximation
APA (6th Edition):
Minsker, S. (2012). Non-asymptotic bounds for prediction problems and density estimation. (Doctoral Dissertation). Georgia Tech. Retrieved from http://hdl.handle.net/1853/44808
9.
Dudebout, Nicolas.
Empirical-evidence equilibria in stochastic games.
Degree: PhD, Electrical and Computer Engineering, 2014, Georgia Tech
URL: http://hdl.handle.net/1853/52205
The objective of this research is to develop the framework of empirical-evidence equilibria (EEEs) in stochastic games. This framework was developed while attempting to design decentralized controllers using learning in stochastic games. The overarching goal is to enable a set of agents to control a dynamical system in a decentralized fashion. To do so, the agents play a stochastic game crafted such that its equilibria are decentralized controllers for the dynamical system. Unfortunately, there exists no algorithm to compute equilibria in stochastic games. One explanation for this lack of results is the full-rationality requirement of game theory. In the case of stochastic games, full rationality imposes two requirements at equilibrium. First, each agent has a perfect model of the game and of its opponents' strategies. Second, each agent plays an optimal strategy for the POMDP induced by its opponents' strategies. Both requirements are unrealistic. An agent cannot know the strategies of its opponents; it can only observe the combined effect of its own strategy interacting with theirs. Furthermore, POMDPs are intractable; an agent cannot compute an optimal strategy in a reasonable time. In addition to these two requirements, engineered agents cannot carry out perfect analytical reasoning and have limited memory; they naturally exhibit bounded rationality. In this research, bounded rationality is not seen as a limitation and is instead used to relax the two requirements. In the EEE framework, agents formulate low-order empirical models of observed quantities called mockups. Mockups have unmodeled states and dynamic effects, but they are statistically consistent; the empirical evidence observed by an agent does not contradict its mockup. Each agent uses its mockup to derive an optimal strategy. Since agents are interconnected through the system, these mockups are sensitive to the specific strategies employed by other agents.
In an EEE, the two requirements are weakened. First, each agent has a consistent mockup of the game and the strategies of its opponents. Second, each agent plays an optimal strategy for the MDP induced by its mockup. The main contribution of this dissertation is the use of modeling to study stochastic games. This approach, while common in engineering, had not previously been applied to stochastic games.
Advisors/Committee Members: Shamma, Jeff (advisor), Feron, Eric (committee member), Zhang, Fumin (committee member), Reveliotis, Spiridon (committee member), Balcan, Maria-Florina (committee member), Wardi, Yorai (committee member).
Subjects/Keywords: Equilibrium; Stochastic games; Bounded rationality
10.
Xiao, Ying.
New tools for unsupervised learning.
Degree: PhD, Computer Science, 2014, Georgia Tech
URL: http://hdl.handle.net/1853/52995
In an unsupervised learning problem, one is given an unlabelled dataset and hopes to find some hidden structure; the prototypical example is clustering similar data. Such problems often arise in machine learning and statistics, but also in signal processing, theoretical computer science, and any number of quantitative scientific fields. The distinguishing feature of unsupervised learning is that there are no privileged variables or labels which are particularly informative, and thus the greatest challenge is often to differentiate between what is relevant or irrelevant in any particular dataset or problem.
In the course of this thesis, we study a number of problems which span the breadth of unsupervised learning. We make progress in Gaussian mixtures, independent component analysis (where we solve the open problem of underdetermined ICA), and we formulate and solve a feature selection/dimension reduction model. Throughout, our goal is to give finite sample complexity bounds for our algorithms – these are essentially the strongest type of quantitative bound that one can prove for such algorithms. Some of our algorithmic techniques turn out to be very efficient in practice as well.
Our major technical tool is tensor spectral decomposition: tensors are generalisations of matrices, and often allow access to the "fine structure" of data. Thus, they are often the right tools for unravelling the hidden structure in an unsupervised learning setting. However, naive generalisations of matrix algorithms to tensors run into NP-hardness results almost immediately, and thus to solve our problems, we are obliged to develop two new tensor decompositions (with robust analyses) from scratch. Both of these decompositions are polynomial time, and can be viewed as efficient generalisations of PCA extended to tensors.
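The tensor power method underlying such spectral decompositions can be sketched in a few lines for the easiest case: an exactly orthogonal, noise-free symmetric 3-way tensor. This toy (made-up weights and vectors) only illustrates the basic iteration; the thesis's contribution is precisely the harder robust and underdetermined regimes that this version does not handle.

```python
def t_apply(T, u):
    """Contract a symmetric 3-way tensor with a vector twice: T(I, u, u)."""
    n = len(u)
    return [sum(T[i][j][k] * u[j] * u[k] for j in range(n) for k in range(n))
            for i in range(n)]

def normalize(v):
    nrm = sum(x * x for x in v) ** 0.5
    return [x / nrm for x in v]

def tensor_power_iteration(T, u0, iters=30):
    """Repeatedly map u -> T(I, u, u)/||T(I, u, u)|| to find a robust eigenvector."""
    u = normalize(u0)
    for _ in range(iters):
        u = normalize(t_apply(T, u))
    return u

# Build T = 2 * v1(x)v1(x)v1 + 1 * v2(x)v2(x)v2 for orthonormal v1, v2 in R^3.
n = 3
v1, v2 = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]
T = [[[2 * v1[i] * v1[j] * v1[k] + v2[i] * v2[j] * v2[k]
       for k in range(n)] for j in range(n)] for i in range(n)]
u = tensor_power_iteration(T, [0.8, 0.5, 0.3])
# u converges to v1, the component favoured by the starting point.
```

Unlike the matrix case, naive extensions of this iteration break down for non-orthogonal or noisy tensors (and general tensor problems are NP-hard), which is why new decompositions with robust analyses are needed.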
Advisors/Committee Members: Vempala, Santosh S. (advisor), Song, Le (committee member), Balcan, Maria-Florina (committee member), Romberg, Justin K. (committee member), Nemirovski, Arkadi S. (committee member).
Subjects/Keywords: Tensor; Spectral decomposition; Unsupervised learning; Independent component analysis; Fourier transform; Gaussian mixture model; Feature selection
11.
Ram, Parikshit.
New paradigms for approximate nearest-neighbor search.
Degree: PhD, Computational Science and Engineering, 2013, Georgia Tech
URL: http://hdl.handle.net/1853/49112
Nearest-neighbor search is a natural and universal problem in computer science. Often, the problem size necessitates approximation. In this thesis, I present new paradigms for nearest-neighbor search (along with new algorithms and theory in these paradigms) that make nearest-neighbor search more usable and accurate. First, I consider a new notion of search error, the rank error, for an approximate neighbor candidate. Rank error corresponds to the number of possible candidates which are better than the approximate neighbor candidate. I motivate this notion of error and present new efficient algorithms that return approximate neighbors with rank error no more than a user-specified amount. Then I focus on approximate search in a scenario where the user does not specify the tolerable search error (error constraint); instead, the user specifies the amount of time available for search (time constraint). After differentiating between these two scenarios, I present some simple algorithms for time-constrained search with provable performance guarantees. I use this theory to motivate a new space-partitioning data structure, the max-margin tree, for improved search performance in the time-constrained setting. Finally, I consider the scenario where we do not require our objects to have an explicit fixed-length representation (vector data). This allows us to search with a large class of objects which includes images, documents, graphs, strings, time series, and natural language. For nearest-neighbor search in this general setting, I present a provably fast novel exact search algorithm. I also discuss the empirical performance of all the presented algorithms on real data.
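The rank-error notion can be stated directly in code: it counts how many dataset points beat the returned candidate. The brute-force check below (toy data, Euclidean distance) is only the definition; the point of the thesis is algorithms that guarantee small rank error without this O(n) scan.

```python
def rank_error(points, query, candidate):
    """Rank error of an approximate neighbor: the number of points in the
    dataset that are strictly closer to the query than the candidate."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    d_cand = dist(query, candidate)
    return sum(1 for p in points if dist(query, p) < d_cand)

points = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (5.0, 5.0)]
query = (0.9, 0.0)
print(rank_error(points, query, (2.0, 0.0)))   # 2: (1,0) and (0,0) are closer
print(rank_error(points, query, (1.0, 0.0)))   # 0: the true nearest neighbor
```

A distance-based error bound says nothing about how many better candidates were skipped, which is why rank error can be the more meaningful user-facing guarantee.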
Advisors/Committee Members: Balcan, Maria-Florina (advisor), Gray, Alexander G. (advisor), Lebanon, Guy (committee member), Clarkson, Kenneth L. (committee member), Vempala, Santosh S. (committee member).
Subjects/Keywords: Similarity search; Nearest-neighbor search; Computational geometry; Algorithms and analysis; Nearest neighbor analysis (Statistics); Approximation algorithms; Search theory
12.
Mac Dermed, Liam Charles.
Value methods for efficiently solving stochastic games of complete and incomplete information.
Degree: PhD, Computer Science, 2013, Georgia Tech
URL: http://hdl.handle.net/1853/50270
Multi-agent reinforcement learning (MARL) poses the same planning problem as traditional reinforcement learning (RL): what actions should an agent take over time in order to maximize its rewards? MARL tackles a challenging set of problems that can be better understood by modeling them as having a relatively simple environment but with complex dynamics attributed to the presence of other agents who are also attempting to maximize their rewards. A great wealth of research has developed around specific subsets of this problem, most notably when the rewards for each agent are either the same or directly opposite each other. However, there has been relatively little progress made on the general problem. This thesis addresses this gap.
Our goal is to tackle the most general, least restrictive class of MARL problems. These are general-sum, non-deterministic, infinite horizon, multi-agent sequential decision problems of complete and incomplete information. Towards this goal, we engage in two complementary endeavors: the creation of tractable models and the construction of efficient algorithms to solve these models. We tackle three well known models: stochastic games, decentralized partially observable Markov decision problems, and partially observable stochastic games. We also present a new fourth model, Markov games of incomplete information, to help solve the partially observable models.
For stochastic games and decentralized partially observable Markov decision problems, we develop novel and efficient value iteration algorithms to solve for game theoretic solutions. We empirically evaluate these algorithms on a range of problems, including well known benchmarks and show that our value iteration algorithms perform better than current policy iteration algorithms. Finally, we argue that our approach is easily extendable to new models and solution concepts, thus providing a foundation for a new class of multi-agent value iteration algorithms.
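As background for the multi-agent value-iteration algorithms above, the single-agent Bellman update they generalize can be sketched as follows. The two-state chain is a made-up example, not one of the thesis's benchmarks.

```python
def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-8):
    """Single-agent value iteration: V(s) = max_a [ R(s,a) + gamma * E V(s') ].
    P[s][a] is a list of (prob, next_state) pairs; R[s][a] is the reward."""
    V = {s: 0.0 for s in states}
    while True:
        V_new = {s: max(R[s][a] + gamma * sum(p * V[t] for p, t in P[s][a])
                        for a in actions)
                 for s in states}
        if max(abs(V_new[s] - V[s]) for s in states) < tol:
            return V_new
        V = V_new

# Two-state chain: 'go' pays 1 and switches state, 'stay' pays 0.
states, actions = ['A', 'B'], ['stay', 'go']
P = {'A': {'stay': [(1.0, 'A')], 'go': [(1.0, 'B')]},
     'B': {'stay': [(1.0, 'B')], 'go': [(1.0, 'A')]}}
R = {'A': {'stay': 0.0, 'go': 1.0}, 'B': {'stay': 0.0, 'go': 1.0}}
V = value_iteration(states, actions, P, R)
# Always choosing 'go' yields V(s) = 1/(1 - 0.9) = 10 in both states.
```

In the multi-agent setting, the max over actions is replaced by an equilibrium computation over joint actions, which is where the technical difficulty (and this thesis's contribution) lies.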
Advisors/Committee Members: Isbell, Charles L. (advisor), Gordon, Geoff (committee member), Balcan, Maria-Florina (committee member), Weiss, Lora (committee member), Liu, C. Karen (committee member).
Subjects/Keywords: Multi-agent planning; Game theory; Reinforcement learning; Multiagent systems

13.
Chen, Shang-Tse.
AI-infused security: Robust defense by bridging theory and practice.
Degree: PhD, Computational Science and Engineering, 2019, Georgia Tech
URL: http://hdl.handle.net/1853/62296
While Artificial Intelligence (AI) has tremendous potential as a defense against real-world cybersecurity threats, understanding the capabilities and robustness of AI remains a fundamental challenge. This dissertation tackles problems essential to successful deployment of AI in security settings and comprises the following three interrelated research thrusts. (1) Adversarial Attack and Defense of Deep Neural Networks: We discover vulnerabilities of deep neural networks in real-world settings and countermeasures to mitigate the threat. We develop ShapeShifter, the first targeted physical adversarial attack that fools state-of-the-art object detectors. For defenses, we develop SHIELD, an efficient defense leveraging stochastic image compression, and UnMask, a knowledge-based adversarial detection and defense framework. (2) Theoretically Principled Defense via Game Theory and ML: We develop new theories that guide the allocation of defense resources to guard against unexpected attacks and catastrophic events, using a novel online decision-making framework that compels players to employ "diversified" mixed strategies. Furthermore, by leveraging the deep connection between game theory and boosting, we develop a communication-efficient distributed boosting algorithm with strong theoretical guarantees in the agnostic learning setting. (3) Using AI to Protect Enterprise and Society: We show how AI can be used in a real enterprise environment with a novel framework called Virtual Product that predicts potential enterprise cyber threats. Beyond cybersecurity, we also develop the Firebird framework to help municipal fire departments prioritize fire inspections. Our work has made multiple important contributions to both theory and practice: our distributed boosting algorithm solved an open problem of distributed learning; ShapeShifter motivated a new DARPA program (GARD); Virtual Product led to two patents; and Firebird was highlighted by the National Fire Protection Association as a best practice for using data to inform fire inspections.
Advisors/Committee Members: Chau, Duen Horng (advisor), Balcan, Maria-Florina (advisor), Lee, Wenke (committee member), Song, Le (committee member), Roundy, Kevin A. (committee member), Cornelius, Cory (committee member).
Subjects/Keywords: Security; Cybersecurity; Machine learning; Artificial Intelligence; Adversarial machine learning; Game theory; Boosting; Fire risk

14.
Liang, Yingyu.
Modern aspects of unsupervised learning.
Degree: PhD, Computer Science, 2014, Georgia Tech
URL: http://hdl.handle.net/1853/52282
Unsupervised learning has become more and more important due to the recent explosion of data. Clustering, a key topic in unsupervised learning, is a well-studied task arising in many applications ranging from computer vision to computational biology to the social sciences. This thesis is a collection of work exploring two modern aspects of clustering: stability and scalability.
In the first part, we study clustering under a stability property called perturbation resilience. As an alternative approach to worst case analysis, this novel theoretical framework aims at understanding the complexity of clustering instances that satisfy natural stability assumptions. In particular, we show how to correctly cluster instances whose optimal solutions are resilient to small multiplicative perturbations on the distances between data points, significantly improving existing guarantees. We further propose a generalized property that allows small changes in the optimal solutions after perturbations, and provide the first known positive results in this more challenging setting.
In the second part, we study the problem of clustering large-scale data distributed across nodes which communicate over the edges of a connected graph. We provide algorithms with small communication cost and provable guarantees on the clustering quality. We also propose algorithms for distributed principal component analysis, which can be used to reduce the communication cost of clustering high-dimensional data while only mildly compromising the clustering quality.
In the third part, we study community detection, the modern extension of clustering to network data. We propose a theoretical model of communities that are stable in the presence of noisy nodes in the network, and design an algorithm that provably detects all such communities. We also provide a local algorithm for large scale networks, whose running time depends on the sizes of the output communities but not that of the entire network.
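The communication-saving idea behind distributed PCA in the second part can be sketched as follows: each node sends a d x d moment matrix, whose size is independent of its number of points, and a coordinator extracts the top component. This illustrative version (made-up data, plain power iteration) omits the approximation and communication guarantees that are the actual subject of the thesis.

```python
import random

def second_moment(X):
    """d x d second-moment matrix of one node's data: O(d^2) to communicate,
    regardless of how many local points the node holds."""
    d = len(X[0])
    return [[sum(x[i] * x[j] for x in X) for j in range(d)] for i in range(d)]

def top_component(M, iters=100):
    """Power iteration for the leading eigenvector of a symmetric matrix."""
    d = len(M)
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(d)) for i in range(d)]
        nrm = sum(x * x for x in w) ** 0.5
        v = [x / nrm for x in w]
    return v

random.seed(0)
# Two nodes hold data stretched along the first coordinate.
nodes = [[(random.gauss(0, 3), random.gauss(0, 1)) for _ in range(500)]
         for _ in range(2)]
moments = [second_moment(X) for X in nodes]        # the only communication
total = [[sum(M[i][j] for M in moments) for j in range(2)] for i in range(2)]
v = top_component(total)
# v aligns with the high-variance direction, close to (+-1, 0).
```

Projecting each node's points onto the few components found this way lets distributed clustering run on low-dimensional data with far less communication.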
Advisors/Committee Members: Balcan, Maria-Florina (advisor), Blum, Avrim (committee member), Fortnow, Lance (committee member), Isbell, Charles L. (committee member), Randall, Dana (committee member), Song, Le (committee member).
Subjects/Keywords: Unsupervised learning; Clustering; Perturbation resilience; Distributed clustering; Community detection
15.
Balavoine, Aurele.
Mathematical analysis of a dynamical system for sparse recovery.
Degree: PhD, Electrical and Computer Engineering, 2014, Georgia Tech
URL: http://hdl.handle.net/1853/51882
This thesis presents the mathematical analysis of a continuous-time system for sparse signal recovery. Sparse recovery arises in Compressed Sensing (CS), where signals of large dimension must be recovered from a small number of linear measurements, and can be accomplished by solving a complex optimization program. While many solvers have been proposed and analyzed to solve such programs digitally, their high complexity currently prevents their use in real-time applications. On the contrary, a continuous-time neural network implemented in analog VLSI could lead to significant gains in both time and power consumption. The contributions of this thesis are threefold. First, convergence results are presented for neural networks that solve a large class of nonsmooth optimization programs. These results extend previous analysis by allowing the interconnection matrix to be singular and the activation function to have many constant regions and grow unbounded. The exponential convergence rate of the networks is demonstrated and an analytic expression for the convergence speed is given. Second, these results are specialized to the L1-minimization problem, which is the most famous approach to solving the sparse recovery problem. The analysis relies on standard techniques in CS and proves that the network takes an efficient path toward the solution for parameters that match results obtained for digital solvers. Third, the convergence rate and accuracy of both the continuous-time system and its discrete-time equivalent are derived in the case where the underlying sparse signal is time-varying and the measurements are streaming. Such a study is of great interest for practical applications that need to operate in real time, when the data are streaming at high rates or the computational resources are limited.
In conclusion, while existing analysis concentrated on discrete-time algorithms for the recovery of static signals, this thesis provides convergence rate and accuracy results for the recovery of static signals using a continuous-time solver, and for the recovery of time-varying signals with both a discrete-time and a continuous-time solver.
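The discrete-time solver named in the keywords, ISTA, is short enough to state in full: a gradient step on the quadratic term followed by soft-thresholding. The 2x3 problem below is made up purely for illustration; the analog LCA network analyzed in the thesis is the continuous-time limit of this kind of dynamics.

```python
def soft(v, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return [max(abs(x) - t, 0.0) * (1.0 if x > 0 else -1.0) for x in v]

def ista(A, y, lam, step, iters=2000):
    """ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
    step should be at most 1 / lambda_max(A^T A) for guaranteed convergence."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]  # Ax - y
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]         # A^T r
        x = soft([x[j] - step * g[j] for j in range(n)], lam * step)
    return x

# 2 linear measurements of a 3-dimensional signal x* = (2, 0, 0).
A = [[1.0, 0.0, 0.5],
     [0.0, 1.0, 0.5]]
y = [2.0, 0.0]
x = ista(A, y, lam=0.01, step=0.5)
# x approaches a sparse vector near (2, 0, 0); the small l1 penalty
# shrinks the active coefficient slightly (toward 2 - lam).
```

Streaming the measurements y over time, as in the third contribution, turns this into a tracking problem where only a few iterations are available per new measurement.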
Advisors/Committee Members: Romberg, Justin K. (advisor), Rozell, Christopher J. (committee member), Hasler, Jennifer O. (committee member), Yezzi, Anthony J. (committee member), Balcan, Maria-Florina (committee member), Davenport, Mark A. (committee member).
Subjects/Keywords: Sparse recovery; Neural network; L1-minimization; Nonsmooth optimization; Compressed sensing; Tracking; ISTA; LCA; Sparse matrices; Signal processing – Digital techniques; Mathematical optimization

16.
Karande, Chinmay.
Algorithms and mechanism design for multi-agent systems.
Degree: PhD, Computing, 2010, Georgia Tech
URL: http://hdl.handle.net/1853/37229
A scenario where multiple entities interact with a common environment to achieve individual and common goals, either cooperatively or competitively, can be classified as a multi-agent system. In this thesis, we concentrate on situations where the agents exhibit selfish, competitive, and strategic behaviour, giving rise to interesting game-theoretic and optimization problems. From a computational point of view, the presence of multiple agents introduces strategic and temporal issues, apart from enhancing the difficulty of optimization.
We study the following natural mathematical models of such multi-agent problems faced in practice: a) combinatorial optimization problems with multi-agent submodular cost functions, b) combinatorial auctions with partially public valuations, and c) online vertex-weighted bipartite matching and single-bid budgeted allocations. We provide approximation algorithms, online algorithms, and hardness-of-approximation results for these problems.
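For the online vertex-weighted bipartite matching problem in c), the natural greedy baseline is easy to state: match each arriving vertex to its heaviest free neighbor. The tiny instance below is invented for illustration; greedy is only 1/2-competitive, and the point of this line of work is to do better.

```python
def online_greedy_matching(weights, neighbors, arrivals):
    """Match each arriving online vertex to its heaviest unmatched
    offline neighbor (greedy, 1/2-competitive for vertex weights)."""
    matched = {}                       # offline vertex -> online vertex
    total = 0.0
    for v in arrivals:                 # online vertices arrive one by one
        free = [u for u in neighbors[v] if u not in matched]
        if free:
            u = max(free, key=lambda w: weights[w])
            matched[u] = v
            total += weights[u]
    return matched, total

weights = {'a': 3.0, 'b': 2.0, 'c': 1.0}          # offline vertex weights
neighbors = {'x': ['a', 'b'], 'y': ['a'], 'z': ['b', 'c']}
matched, total = online_greedy_matching(weights, neighbors, ['x', 'y', 'z'])
# Greedy takes x->a and z->b for total 5.0, while the offline optimum
# (x->b, y->a, z->c) collects 6.0.
```

Randomizing the greedy choice (for example, by perturbing the weights) is the standard route to better competitive ratios in this literature.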
Advisors/Committee Members: Vazirani, Vijay (Committee Chair), Balcan, Maria-Florina (Committee Member), Cook, William (Committee Member), Thomas, Robin (Committee Member), Vigoda, Eric (Committee Member).
Subjects/Keywords: Multi-agent systems; Online auction; Algorithms; Mechanism design; Submodular functions; Matroids